
Datadog: datadog.hostname setting does not appear to affect host inventory display name #29866

Closed
ringerc opened this issue Dec 13, 2023 · 15 comments · Fixed by #31702
Labels
enhancement (New feature or request) · exporter/datadog (Datadog components) · priority:p2 (Medium)

Comments

@ringerc

ringerc commented Dec 13, 2023

Component(s)

exporter/datadog

What happened?

Description

The datadog exporter configuration option hostname does not appear to affect the hostname displayed as the preferred hostname in the datadog UI infrastructure view host map or host inventory.

It continues to use the internal cloud provider id for the host; the configured hostname is assigned as a host alias instead. This is not great: AWS host IDs look like i-xxxxxxxxxxx and GCP host IDs are UUIDs like 0b797c2d-36cc-4bd4-bdbb-f33d7a0fcc2b. Many Datadog dashboards don't support filtering by host aliases, only by the "main" hostname, so this impacts dashboard usability too.

This is the case whether host_metadata.hostname_source is set to first_resource or config_or_system.

Steps to Reproduce

Configure an OpenTelemetry-based Datadog agent using the default otel collector image otel/opentelemetry-collector-contrib:0.90.1.

Use the recommended Datadog configuration with the k8sattributes processor, the resourcedetection processor configured for your cloud environment, etc.

Add the following to your DaemonSet's env stanza:

          - name: K8S_NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName

Configure the datadog exporter with:

exporters:
  datadog:
    hostname: ${env:K8S_NODE_NAME}
    host_metadata:
      enabled: true
      hostname_source: config_or_system

and add your API key config etc.

Visit the linked DD account. Note that the new node shows up in the "Infrastructure -> Host Map" view under its internal cloud provider id (the host.id detected by the processors) not the configured hostname.

Try changing the hostname_source to first_resource. Repeat. You will still see the internal host.id as the hostname.

Expected Result

I expect to see the value of the host.name or k8s.node.name provided to the datadog exporter, not the internal host.id. This is the behaviour seen with the DD agent.

If the reported preferred hostname changes after initial node registration, the DD UI should reflect the preferred hostname being sent.

Actual Result

No matter what I do, my nodes show up with the host.id for their primary display hostname.

[screenshot]

The real hostname (k8s node name) is sometimes shown in the "aliases", sometimes not. I've yet to determine why.

The real hostname is not shown in the overviews:

[screenshot]

[screenshot]

or usable for selecting hosts in dashboards:

[screenshot]

I have verified, via kubectl debug based inspection of the otel collector process's /proc/$pid/environ, that the K8S_NODE_NAME env-var is present and set to the kube node name.

Collector version

0.90.1

Environment information

Environment

k8s on Azure AKS, Google Cloud GKE and AWS EKS.

OpenTelemetry Collector configuration

# My real config is long.
# Use the example from https://docs.datadoghq.com/opentelemetry/otel_collector_datadog_exporter/#2-configure-the-datadog-exporter
# and add the config

exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}
      site: ${env:DD_SITE}
    hostname: ${env:K8S_NODE_NAME}
    host_metadata:
      enabled: true
      hostname_source: config_or_system

Log output

N/A, nothing relevant is logged.

Additional context

Related issue requesting auto-discovery of host metadata tags: #29700

@ringerc ringerc added bug Something isn't working needs triage New item requiring triage labels Dec 13, 2023
@github-actions github-actions bot added the exporter/datadog Datadog components label Dec 13, 2023

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@ringerc
Author

ringerc commented Dec 18, 2023

I did more digging into this.

The datadog.hostname setting does affect the reported hostname... somewhat. It doesn't seem to override the preferred hostname, but it seems to add an "alias" to the displayed hostname in the infrastructure list if configured.

Without explicit datadog.hostname, hostname_source=config_or_system

This log event will be emitted:

{"level":"debug","ts":1702596764.453713,"caller":"provider/provider.go:55","msg":"Unavailable source provider","kind":"exporter","data_type":"metrics","name":"datadog/datadog","provider":"config","error":"empty configuration hostname"}

then cloud-provider-specific ones from the IMDS-based discovery are emitted, like:

{"level":"info","ts":1702596134.5370073,"caller":"provider/provider.go:59","msg":"Resolved source","kind":"exporter","data_type":"metrics","name":"datadog/datadog","provider":"azure","source":{"Kind":"host","Identifier":"c2c7b383-3aa8-4f3b-bb33-d332908e36af"}}
{"level":"info","ts":1702595688.6152918,"caller":"provider/provider.go:59","msg":"Resolved source","kind":"exporter","data_type":"metrics","name":"datadog/datadog","provider":"ec2","source":{"Kind":"host","Identifier":"i-0ad96d927c1802a8e"}}
{"level":"info","ts":1702596764.5491843,"caller":"provider/provider.go:59","msg":"Resolved source","kind":"exporter","data_type":"metrics","name":"datadog/datadog","provider":"gcp","source":{"Kind":"host","Identifier":"gke-dp-vcpete01clh2q-e2-standard-4-a0-cb95a6fc-vbvs.development-data-381310"}}

These point to the internal cloud provider host identifiers, which are picked as the display names.

Azure IMDS for the node I'm using reports compute.osProfile.computerName = aks-f4sv2a0-24785920-vmss000000, but the discovery plugin seems to be picking the value from compute.vmId = c2c7b383-3aa8-4f3b-bb33-d332908e36af instead.

An AWS IMDS query on the node I'm using to http://169.254.169.254/latest/meta-data/hostname returns ip-10-0-11-132.eu-west-1.compute.internal. I have no idea how DD is picking i-0ad96d927c1802a8e instead; presumably it picks some other metadata source.

For GCP it seems to pick the actual IMDS-reported hostname in discovery.

For Azure and AWS it's IMO not picking the best hostname source. The internal IDs should be aliases, and it should prefer the Azure compute.osProfile.computerName or AWS /latest/meta-data/hostname.

With explicit hostname, hostname_source=first_resource

If you set datadog.hostname to (say) the kube downward-api for the node name and use hostname_source=first_resource, it gets more confusing.

Some reading of the source code suggests that the exporter may log the "Resolved source" message for the exporter's internal discovery even if it actually picked the host name from resource attribute based discovery. It kicks off the config and cloud provider probe based discovery asynchronously, but will then ignore its results if the attribute based discovery matched something it considered satisfactory.

AFAICS there is no log message to indicate whether or not resource attribute based discovery picked the hostname.

So while it will log messages with the k8s node names like

{"level":"info","ts":1702597850.33392,"caller":"provider/provider.go:59","msg":"Resolved source","kind":"exporter","data_type":"metrics","name":"datadog/datadog","provider":"config","source":{"Kind":"host","Identifier":"ip-10-0-11-132.eu-west-1.compute.internal"}}
{"level":"info","ts":1702597755.9463413,"caller":"provider/provider.go:59","msg":"Resolved source","kind":"exporter","data_type":"metrics","name":"datadog/datadog","provider":"config","source":{"Kind":"host","Identifier":"gke-dp-vcpete01clh2q-e2-standard-4-a0-cb95a6fc-vbvs"}}
{"level":"info","ts":1702598329.3088248,"caller":"provider/provider.go:59","msg":"Resolved source","kind":"exporter","data_type":"metrics","name":"datadog/datadog","provider":"config","source":{"Kind":"host","Identifier":"aks-d4sv4c0-42735447-vmss000000"}}

... it seems to actually ignore those and pick the values it picked from resource attributes instead.

And its attribute based discovery in https://github.com/DataDog/opentelemetry-mapping-go/ is very inconsistent from provider to provider. It seems to prefer to use the node.id instead of node.name for Azure and AWS, but prefers node.name for GCP. It also seems to ignore most of the other available, useful metadata tags from the cloud provider semantic conventions etc. Then there's another layer tacked on top in the datadog exporter itself for some AWS-specific attribute based discovery.

The logic bounces back and forth between opentelemetry-mapping-go and opentelemetry-collector-contrib exporter/datadogexporter/internal/hostmetadata and is tricky to clearly unpick. It really needs a proper review to ensure that it does something sensible and consistent for both attribute based discovery and IMDS based discovery.

@mx-psi
Member

mx-psi commented Jan 12, 2024

Hey, thanks for filing this issue. There's a lot to unpack here, so let me reply below to individual points so that I can hopefully clear up some confusion and get your setup to a satisfying state.

Questions about the hostname in the infrastructure list

The datadog exporter configuration option hostname does not appear to affect the hostname displayed as the preferred hostname in the datadog UI infrastructure view host map or host inventory.

The hostname option is the fallback hostname for any telemetry that does not have a hostname attribute already. If your telemetry already has a hostname (e.g. through an OpenTelemetry semantic convention such as host.name, host.id or a similar one that in your case are set by the processors you have set up), then that one will be honored.

If you are familiar with the Datadog Agent, the situation there is similar: if you tag something with a host tag, the Agent will use that one; if not, it will put one in for you. The situation is a bit more complex here in that there are multiple host tags (host.name, host.id, k8s.node.name...), but the resulting behavior is the same.

This is the case whether host_metadata.hostname_source is set to first_resource or config_or_system. [...]
Try changing the hostname_source to first_resource. Repeat. You will still see the internal host.id as the hostname.

host_metadata::hostname_source controls what hostname is attached to the host metadata (e.g. things like CPU cores and other system information). It will not change the hostname on your telemetry at all, as you have discovered. If you want to change the hostname on your telemetry you need to use a processor for that.

I expect to see the value of the host.name or k8s.node.name provided to the datadog exporter, not the internal host.id.
This is the behaviour seen with the DD agent.

Can you elaborate on this? The Datadog Agent does not have any host_metadata configuration to my knowledge, nor does it have the resource detection processor or k8sattributes processor, so I am not sure what the equivalence is here.

Some reading of the source code suggests that the exporter may log the "Resolved source" message for the exporter's internal discovery even if it actually picked the host name from resource attribute based discovery.
AFAICS there is no log message to indicate whether or not resource attribute based discovery picked the hostname.

That is true, and I agree it is confusing. The first part is not very easy to change today (the second part is easy), but generally making this simpler is on our roadmap. In general, using host_metadata::hostname_source: first_resource is only useful in a handful of exotic infrastructures and edge cases, and may break your setup in unexpected ways in other setups.

Comments about the hostname picked up by default in different cloud providers

For Azure and AWS it's IMO not picking the best hostname source. The internal IDs should be aliases, and it should prefer the Azure compute.osProfile.computerName or AWS /latest/meta-data/hostname.

The current implementation focuses on providing compatibility with the Datadog Agent and other Datadog products hostnames by default. OpenTelemetry provides a lot of flexibility to customize your hostname as you see fit, but the default will remain this because otherwise it wouldn't match that chosen by other Datadog products.

And its attribute based discovery in https://github.com/DataDog/opentelemetry-mapping-go/ is very inconsistent from provider to provider.

There is one exception in AWS, but otherwise it is the same. Here is a table with all the code linked and a brief summary of what it does:

| Provider | Attribute based discovery | Resource detection processor |
| --- | --- | --- |
| AWS EC2 | Instance id, if the OS hostname is the default (host.id) | Sets host.id to the AWS EC2 instance id |
| Azure VM | VM id (host.id) | Sets host.id to the Azure VM id |
| GCE | Part of hostname + project ID (part of host.name + cloud.account.id) | Sets host.name to the hostname and cloud.account.id to the project ID |

It seems to prefer to use the node.id instead of node.name for Azure and AWS, but prefers node.name for GCP.

I don't think this is the case. In both the attribute-based discovery and the built-in provider cases, all cloud providers are checked before the Kubernetes node name.

It also seems to ignore most of the other available, useful metadata tags from the cloud provider semantic conventions etc.

Not sure what you mean by this, could you elaborate?

Then there's another layer tacked on top in the datadog exporter itself for some AWS-specific attribute based discovery.

Yup, this is there for legacy reasons indeed. Other Datadog products expect this so we fetch it.

The logic bounces back and forth between opentelemetry-mapping-go and opentelemetry-collector-contrib exporter/datadogexporter/internal/hostmetadata and is tricky to clearly unpick. It really needs a proper review to ensure that it does something sensible and consistent for both attribute based discovery and IMDS based discovery.

If you have concrete suggestions about this I am happy to tackle this. The logic itself is pretty unwieldy to be able to support a wide range of cases, so it is fundamentally complex, but I am happy to take feedback as to what to simplify while preserving behavior.

@mx-psi mx-psi added waiting for author priority:p2 Medium and removed needs triage New item requiring triage labels Jan 12, 2024
@ringerc
Author

ringerc commented Jan 18, 2024

Thanks very much @mx-psi for your comments.

I definitely do not expect datadog.hostname to change attributes on telemetry (metrics, logs). I am aware I would need processors for that. The k8sattributes processor just does the right thing by default anyway.

I did expect it to change the hostname shown in the DD infrastructure view UI. It does not appear to do so.

The use of the cloud provider's low level node id (usually exposed in OTLP as node.id) instead of the node name (OTLP node.name) when using either attribute discovery or IMDS discovery is surprising and confusing. I expected that when I provide a host name, DD would use that as the host name. It's useful and important to send the host.id, but surely the user-facing display name should be the host.name?

It's particularly painful because many DD dashboards only support filtering by host using this low-level host-id, not by host name aliases or tags. "OpenTelemetry Host Metrics Dashboard" is one such example.

[screenshot]

it's ... not great UX.

Putting the table you prepared above into the DD exporter docs, along with the configuration guidance you gave, would be an immense improvement, since at least it would be clearly documented that DD expects and requires the low-level cloud provider node names and that you cannot get it to use a user-friendly node name instead.

Clarifying the behaviour and recommended use of hostname_source in:

## - 'first_resource' picks the host metadata hostname from the resource attributes on the first OTLP payload that gets to the exporter.
per your notes would help too.

In the end though, I don't think it's unreasonable to want to display nodes with the names the user knows those nodes by, not:

[screenshot]

Individual comments:

I expect to see the value of the host.name or k8s.node.name provided to the datadog exporter, not the internal host.id.
This is the behaviour seen with the DD agent.

Can you elaborate on this? The Datadog Agent does not have any host_metadata configuration to my knowledge, nor does it have the resource detection processor or k8sattributes processor, so I am not sure what the equivalence is here.

I was unclear. I meant that when hostname_source is set to first_resource, I would expect the provided host name like host.name to be preferred over the low level host identifier like host.id.

The hostname option is the fallback hostname for any telemetry that does not have a hostname attribute

So it's not also supposed to set the host name used in the DD infrastructure node list and infrastructure map views? The docs say "Source for the hostname of host metadata", so I would have expected it to use the configured host name.

See e.g.
[screenshot]

The k8s node name injected via downward api and set as hostname: ${env:K8S_NODE_NAME} is completely ignored, and not even shown as an alias

[screenshot]

possibly because I currently have hostname_source: first_resource set - but even when config_or_system is used, DD doesn't respect the configured host name and still seems to prefer the node id discovered via IMDS.

It also seems to ignore most of the other available, useful metadata tags from the cloud provider semantic conventions etc.

Not sure what you mean by this, could you elaborate?

Nodes in the infrastructure view have "tags" sections for "datadog", "user", and, if explicit tags are configured in the otel collector, "otel". In its host metadata requests, the DD exporter for the otel collector does not set basic tags that the DD UI appears to expect to be defined by default, like availability-zone, even for AWS nodes.

[screenshot]

so the default view groups everything under "no availability-zone", even AWS nodes that have an AZ:

[screenshot]

(Ignore the missing cpu data, that's a known issue in my current config, and not relevant to this issue)

Similarly, there are no tags for cloud provider, region, etc., even though suitable tags are provided on metrics by the resourcedetection processor and are available from the same IMDS endpoints the DD exporter is already using, as seen below:

[screenshot]

except for a couple of limited ones for GCP nodes.

@mx-psi
Member

mx-psi commented Jan 18, 2024

Thanks again for the detailed reply :)

So it's not also supposed to set the host name used in the DD infrastructure node list and infrastructure map views? The docs say "Source for the hostname of host metadata", so I would have expected it to use the configured host name.

The explanation is technically correct, but confusing.

The infrastructure list will show all hostnames associated with the telemetry you send. It will also show additional data sent on a separate, dedicated payload called 'host metadata'. If present, the hostname option is indeed the hostname used for host metadata, but this does not mean that it will prevent other hosts from showing up in your infrastructure list, if some metrics/traces/logs have other hostnames set.

In any case that detail is not very relevant for end-users, so we should change the documentation to improve this.


I believe these are the things you would like to see:

  • Clearer explanation of the hostname setting and its implications for infrastructure monitoring.
  • Clearer explanation of the host_metadata::hostname_source setting and its implications for infrastructure monitoring.
  • Explain what hostname is used by default in different cloud provider environments
  • Have 'prettier' hostnames as the default hostname and add the current hostnames as host aliases by default in cloud provider environments.
  • Pick up more of the OpenTelemetry semantic conventions as host tags (especially availability zone).

Am I missing anything? A couple of these are already in our backlog, I will add the rest and link this issue to them so you are kept updated with progress.

@ringerc
Author

ringerc commented Jan 23, 2024

@mx-psi Right, except for "Add 'prettier' hostnames as host aliases by default in cloud provider environments."

I want to see the hostname as the primary display name. If there's some low level cloud provider host ID it's useful to have that in the aliases. Either that, or some other means to tell DD's UI what the preferred user-facing host name is. Because 508f05a4-3994-4a14-8161-0fd1de1f5842 is just not great UX for picking hosts in dashboards, infrastructure list overviews etc.

If that's just not possible with DD's platform right now, then clear docs to explain that you have to live with the low-level opaque hostnames as the primary display name, why, and that the real hostname is in the aliases.

It'd help a lot if dashboards allowed target hosts to be picked via a quicksearch that accepted aliases though. See e.g. the DD opentelemetry hosts dashboard mentioned earlier. Many other places display the meaningless-to-the-user host IDs prominently, but at least most other places allow searching by host alias.

And for

Pick up more of the OpenTelemetry semantic conventions as host tags (especially availability zone).

I suggest that cloud.region, cloud.platform, cloud.provider and cloud.availability_zone are probably the minimum appropriate set to promote to host tags. Since you recommend not using first_resource based discovery, the equivalents would need to be obtained from the DD exporter plugin's provider-specific IMDS based discovery when config_or_system is used.

The k8s.cluster.name is the other main thing I'd want to see promoted to a host tag automatically.

Thanks again for engaging on this.

@ringerc
Author

ringerc commented Jan 23, 2024

Example of user experience issues, and why

  • Add 'prettier' hostnames as host aliases by default in cloud provider environments.

is more than a minor cosmetic issue:

In kube, I have an otel collector pod called xx-sj2jr from DaemonSet xx running on node aks-d2sv3c0-34429652-vmss000000:

➜  dp-vcthirdpmlt3KIFC-westeurope-1 git:(main) ✗ kubectl get pod -n xx -o wide
NAME                                       READY   STATUS    RESTARTS   AGE     IP             NODE                              NOMINATED NODE   READINESS GATES
xx-sj2jr   1/1     Running   0          8m39s   10.240.0.128   aks-d2sv3c0-34429652-vmss000000   <none>           <none>

(This could be any "real" workload on the node that I want to look up in DD too, I'm just using the collector as an example).

Because the exporter doesn't even set the alias to the configured node name for Azure nodes, I can't find the node in DD's infrastructure view:

[screenshot]

Instead I have to find the CSP-specific internal identifier for the node in the k8s Node resource, which for Azure is in .status.nodeInfo

➜  dp-vcthirdpmlt3KIFC-westeurope-1 git:(main) ✗ kubectl get node aks-d2sv3c0-34429652-vmss000000 -o yaml | yq '.status.nodeInfo.systemUUID'
508f05a4-3994-4a14-8161-0fd1de1f5842

(For AWS it's in an annotation on the node, for GCP it's somewhere else...)

Then I can look this up in DD.

[screenshot]

[screenshot]


Or I could search some metrics tags in DD to figure out the association between the k8s.node.name and host.id and re-map it that way, but this is cumbersome and difficult due to the lack of a query language for DD metrics. Even if I filter by one tag in DD, it doesn't filter the other tags to only those that match the filter target tag.

Here I add a filter by tag for the k8s node name:

[screenshot]

but the metric summary still shows all values for node.id, even though only one value 508f05a4-3994-4a14-8161-0fd1de1f5842 corresponds to the filter.

[screenshot]

This means it's not even practical to use the metrics sent to DD to map the k8s.node.name to the node.id used by the infrastructure view.


The config has

      datadog/datadog:
        hostname: ${env:K8S_NODE_NAME}
        host_metadata:
          enabled: true
          hostname_source: config_or_system

and the k8s node name is correctly injected from a downward-api provided env-var on the DaemonSet.


At least with the DD agent, the aliases are more consistently sent (I think) so the user has some hope of finding their node.

If the dd exporter for the otel collector consistently set node aliases, that would make some small usability improvement here.

I might be able to work around it by sending an otel tag with the k8s node name, but then the tag gets added to every datapoint sent by the exporter, not just the node's entry in the infrastructure view. That includes logs, so it's going to increase log data sizes...

@ringerc
Author

ringerc commented Jan 28, 2024

@mx-psi For comparison, see here that the proprietary DD agent shows the Azure node's host name as the primary display name, and the Azure VMID (low level system UUID) as an alias. This is the expected, and desired, behaviour for the OpenTelemetry Collector Datadog Exporter too.

[screenshot]

Compare to the same view of another node that's reporting via the otel collector

[screenshot]

@ringerc
Author

ringerc commented Jan 28, 2024

See also #11033

@mx-psi mx-psi added enhancement New feature or request and removed waiting for author bug Something isn't working labels Jan 31, 2024
@mx-psi
Member

mx-psi commented Jan 31, 2024

@ringerc Thanks again for clarifying. I updated the wording of the list in #29866 (comment) regarding the hostnames and host aliases. I will keep this issue linked with the related tickets, and you will get an update when any changes related to these items happen.

@mx-psi
Member

mx-psi commented Mar 12, 2024

Hi, circling back on this: I am going to close this issue with #31702, since once that PR is merged I believe we will have made sufficient progress on all the items listed in #29866 (comment). To summarize:

mx-psi added a commit that referenced this issue Mar 12, 2024
…_metadata::hostname_source settings (#31702)

**Description:**

Improves documentation on the `hostname` and
`host_metadata::hostname_source` settings based on feedback from #29866

**Link to tracking Issue:** Fixes #29866
@jmcarp
Contributor

jmcarp commented Jun 5, 2024

Thanks for the writeup. My team is running into this issue as well, and even after reading over the updated docs, I don't see a way to prevent datadog from using the ec2 instance id as the host name in the infrastructure list, and creating an alias for the provided host tag, if any (e.g. datadog.host.name).

The only way we've found to prevent datadog from creating this alias is to disable host metadata entirely—and even this only works for new hosts that datadog hasn't seen before. Once a host alias is created, it seems to persist indefinitely (at least for a few days), such that metrics still have their host tag automatically changed by datadog to the instance id server-side even if we stop sending host metadata.

Is there really no way to prevent otelcol from using the ec2 instance id as the canonical host name?

@mx-psi
Member

mx-psi commented Jun 5, 2024

I don't see a way to prevent datadog from using the ec2 instance id as the host name in the infrastructure list

The exporter reports information about:

  1. The host that the Datadog exporter is running on
  2. Any host that comes in the resource attributes if you have opted-in as described in https://docs.datadoghq.com/opentelemetry/schema_semantics/host_metadata/

The hostname field will modify the name in the infrastructure list for the host that the Datadog exporter is running on to an arbitrary name. The fields in https://docs.datadoghq.com/opentelemetry/schema_semantics/hostname/?tab=datadogexporter will modify the name in the infrastructure list for the other hosts.

and creating an alias for the provided host tag

Arbitrary aliases are not currently supported indeed

Once a host alias is created, it seems to persist indefinitely (at least for a few days),

I believe it may take up to two weeks for the host to be entirely removed from our systems if it stops reporting, yes.

@jmcarp
Contributor

jmcarp commented Jun 5, 2024

Thanks @mx-psi. I think I'm still not understanding a few points, though.

Arbitrary aliases are not currently supported indeed

My goal here is not to create an alias at all. As far as I can tell, datadog creates the alias, and uses the ec2 instance id as the primary host name, regardless of the value of host, datadog.host.name, host.id, etc. Can you explain what causes datadog to create the alias in the first place? Is it the request to /intake? I haven't been able to find any public docs about this so far.

The hostname field will modify the name in the infrastructure list for the host that the Datadog exporter is running on to an arbitrary name.

Sorry if I'm misunderstanding, but this doesn't seem to be the case. If we send host metadata, the infrastructure list always uses the ec2 instance id, not the hostname field, to name the instance. Even though we set datadog.host.name to the ec2 hostname for all our metrics, datadog still seems to change the hostname to the ec2 instance id. Is there a way to prevent this from happening?

@ringerc
Author

ringerc commented Jun 12, 2024

I don't see a way to prevent datadog from using the ec2 instance id as the host name in the infrastructure list

From my conversations with a DD product manager and elsewhere, this is apparently a "feature" and will not be changed.
