### Component(s)
exporter/datadog
### What happened?

**Description**
The datadog exporter configuration option `hostname` does not appear to affect the hostname displayed as the preferred hostname in the Datadog UI infrastructure views (Host Map and Host Inventory). It continues to use the internal cloud provider ID for the host; the configured hostname is assigned as a host alias instead. This is not great: AWS host IDs look like `i-xxxxxxxxxxx` and GCP host IDs are UUIDs like `0b797c2d-36cc-4bd4-bdbb-f33d7a0fcc2b`. Many Datadog dashboards don't support filtering by host aliases, only by the "main" hostname, so this hurts dashboard usability too.
This is the case whether `host_metadata.hostname_source` is set to `first_resource` or `config_or_system`.
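For clarity, both variants were tried with otherwise identical config; a sketch (only the `hostname_source` value differs):

```yaml
exporters:
  datadog:
    hostname: ${env:K8S_NODE_NAME}
    host_metadata:
      enabled: true
      # Tried both of these values; neither changes the displayed hostname:
      hostname_source: config_or_system
      # hostname_source: first_resource
```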
### Steps to Reproduce
Configure an OpenTelemetry-based Datadog agent using the default otel node image `otel/opentelemetry-collector-contrib:0.90.1`.
Use the recommended Datadog configuration with the `k8sattributes` processor, the `resourcedetection` processor configured for your cloud environment, etc.
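A minimal sketch of those processors (the detector list varies by cloud, and this is illustrative, not my exact config):

```yaml
processors:
  batch: {}
  k8sattributes:
    passthrough: false
  resourcedetection:
    # Pick the detectors for your cloud; e.g. on GKE:
    detectors: [env, gcp, system]
    timeout: 2s
    override: false
```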
Add the following to your `DaemonSet`'s `env` stanza:
```yaml
- name: K8S_NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
```
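For context, this sits under the collector container in the DaemonSet manifest, roughly as follows (the container name is hypothetical; everything else follows the snippet above):

```yaml
spec:
  template:
    spec:
      containers:
        - name: otel-collector   # hypothetical container name
          image: otel/opentelemetry-collector-contrib:0.90.1
          env:
            - name: K8S_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
```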
Configure the datadog exporter with:
```yaml
exporters:
  datadog:
    hostname: ${env:K8S_NODE_NAME}
    host_metadata:
      enabled: true
      hostname_source: config_or_system
```
and add your API key config etc.
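For completeness, the exporter is then wired into the pipelines; a sketch assuming the OTLP receiver and the processor ordering from the linked Datadog example, not my exact pipeline:

```yaml
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [resourcedetection, k8sattributes, batch]
      exporters: [datadog]
    traces:
      receivers: [otlp]
      processors: [resourcedetection, k8sattributes, batch]
      exporters: [datadog]
```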
Visit the linked DD account. Note that the new node shows up in the "Infrastructure -> Host Map" view under its internal cloud provider ID (the `host.id` detected by the processors), not the configured hostname.
Try changing the `hostname_source` to `first_resource`. Repeat. You will still see the internal `host.id` as the hostname.
### Expected Result
I expect to see the value of the `host.name` or `k8s.node.name` provided to the datadog exporter, not the internal `host.id`. This is the behaviour seen with the DD agent.
If the reported preferred hostname changes after initial node registration, the DD UI should reflect the preferred hostname being sent.
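To illustrate, the resources reaching the exporter carry attributes along these lines (values are made-up examples, not captured output); with `first_resource` I'd expect the `host.name` / `k8s.node.name` values to win over `host.id`:

```yaml
# Illustrative resource attributes on incoming telemetry:
host.id: 0b797c2d-36cc-4bd4-bdbb-f33d7a0fcc2b   # what the UI currently shows
host.name: gke-prod-pool-1-abcd                 # hypothetical k8s node name
k8s.node.name: gke-prod-pool-1-abcd
cloud.provider: gcp
```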
### Actual Result
No matter what I do, my nodes show up with the `host.id` as their primary display hostname.

The real hostname (the k8s node name) is sometimes shown in the "aliases", sometimes not; I've yet to determine why.
The real hostname is not shown in the overviews, nor is it usable for selecting hosts in dashboards.
I have verified, via `kubectl debug`-based inspection of the otel collector process's `/proc/$pid/environ`, that the `K8S_NODE_NAME` env var is present and set to the kube node name.
### Collector version
0.90.1
### Environment information

**Environment**
k8s on Azure AKS, Google Cloud GKE and AWS EKS.
### OpenTelemetry Collector configuration
```yaml
# My real config is long.
# Use the example from https://docs.datadoghq.com/opentelemetry/otel_collector_datadog_exporter/#2-configure-the-datadog-exporter
# and add the config:
exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}
      site: ${env:DD_SITE}
    hostname: ${env:K8S_NODE_NAME}
    host_metadata:
      enabled: true
      hostname_source: config_or_system
```
### Log output
N/A, nothing relevant is logged.
### Additional context
Related issue requesting auto-discovery of host metadata tags: #29700