Use `k8s.pod.ip` to record resource IP instead of just `ip` #183
Conversation
processor/k8sprocessor/processor.go
Outdated
// Jaeger client libs tag the process with the process/resource IP and
// jaeger to OC translator maps jaeger process to OC node.
// TODO: Should jaeger translator map jaeger process to OC resource instead?
if podIP == "" && td.SourceFormat == sourceFormatJaeger {
	if td.Node != nil {
-		podIP = td.Node.Attributes[ipLabelName]
+		podIP = td.Node.Attributes[clientIPLabelName]
Is it possible that `k8sIPLabelName` exists in `td.Node.Attributes` at this point? Do we need to check for it as well?
Unlikely, but possible. I think it can happen if the Otel agent detects and adds the IP to the resource but then exports to the Otel collector in Jaeger or SAPM instead of OTLP/OC. I don't see any downsides to covering this case. Will update.
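A minimal sketch of what that extra check could look like. The label constants mirror the names `k8sIPLabelName` and `clientIPLabelName` seen in the diff; the helper name and the plain `map[string]string` standing in for `td.Node.Attributes` are illustrative assumptions, not the processor's actual code:

```go
package main

import "fmt"

// Hypothetical label keys, mirroring the constants referenced in the diff.
const (
	k8sIPLabelName    = "k8s.pod.ip"
	clientIPLabelName = "ip"
)

// podIPFromAttributes looks for a pod IP already present on the node/resource
// attributes, preferring the k8s-prefixed label over the plain "ip" label set
// by Jaeger client libraries.
func podIPFromAttributes(attrs map[string]string) string {
	if ip, ok := attrs[k8sIPLabelName]; ok && ip != "" {
		return ip
	}
	if ip, ok := attrs[clientIPLabelName]; ok && ip != "" {
		return ip
	}
	return ""
}

func main() {
	attrs := map[string]string{"ip": "10.0.0.12"}
	fmt.Println(podIPFromAttributes(attrs)) // prints 10.0.0.12
}
```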
Since the k8s processor prefixes all labels with `k8s.*.`, this adds the same prefix to the IP label. We'll still continue to look for the `ip` label on the resource/node when we can't find the IP by other means, but will only write the IP back to `k8s.pod.ip`.
LGTM
Thanks @owais!
Description: Use `k8s.pod.ip` to record resource IP instead of just `ip`.

Since the k8s processor prefixes all labels with `k8s.*.`, this adds the same prefix to the IP label. We'll still continue to look for the `ip` label on the resource/node when we can't find the IP by other means, but will only write the IP back to `k8s.pod.ip`.

Testing: Tested locally and updated unit tests.
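As a rough illustration of the behaviour described above, here is a sketch of reading the IP from either label but writing it back only under the k8s-prefixed key. The function name `tagPodIP` and the plain attribute map are hypothetical stand-ins for the processor's internals:

```go
package main

import "fmt"

const (
	k8sIPLabelName    = "k8s.pod.ip" // k8s-prefixed key the processor writes
	clientIPLabelName = "ip"         // plain key Jaeger clients may set
)

// tagPodIP falls back to an IP already present on the resource/node when one
// was not discovered by other means, but writes it back only under the
// k8s-prefixed key.
func tagPodIP(attrs map[string]string, discoveredIP string) {
	ip := discoveredIP
	if ip == "" {
		if v := attrs[k8sIPLabelName]; v != "" {
			ip = v
		} else if v := attrs[clientIPLabelName]; v != "" {
			ip = v
		}
	}
	if ip != "" {
		attrs[k8sIPLabelName] = ip // never written back to the bare "ip" key
	}
}

func main() {
	attrs := map[string]string{"ip": "10.1.2.3"}
	tagPodIP(attrs, "")
	fmt.Println(attrs[k8sIPLabelName]) // prints 10.1.2.3
}
```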