
Routing Processor not working properly with Azuremonitoring #29495

Closed
scavassa-yld opened this issue Nov 24, 2023 · 9 comments

@scavassa-yld

Component(s)

connector/routing, exporter/azuremonitor

What happened?

Description

When the routing processor is configured with the Azure Monitor exporter, traces are not sent to the correct exporter; instead, they are routed to the first exporter registered when the docker-compose up command runs.

Steps to Reproduce

  • Set up Azure with three Application Insights instances.
  • Configure the OpenTelemetry Collector Contrib routing processor with a routing statement.
  • Use the OTLP receiver with include_metadata set to true.

The full configuration can be found in the OpenTelemetry Collector configuration section below.

After running docker-compose up, we can see that the exporters are registered in a random order; the order in which they are defined in the YAML file does not affect the registration order shown in the logs.

docker-otel-collector-1  | 2023-11-24T13:10:59.487Z     info    service@v0.89.0/telemetry.go:85 Setting up own telemetry...
docker-otel-collector-1  | 2023-11-24T13:10:59.487Z     info    service@v0.89.0/telemetry.go:202        Serving Prometheus metrics      {"address": ":8888", "level": "Basic"}
docker-otel-collector-1  | 2023-11-24T13:10:59.487Z     debug   extension@v0.89.0/extension.go:162      Beta component. May change in the future.       {"kind": "extension", "name": "health_check"}
docker-otel-collector-1  | 2023-11-24T13:10:59.487Z     debug   exporter@v0.89.0/exporter.go:273        Beta component. May change in the future.       {"kind": "exporter", "data_type": "traces", "name": "azuremonitor/second"}
docker-otel-collector-1  | 2023-11-24T13:10:59.487Z     debug   exporter@v0.89.0/exporter.go:273        Beta component. May change in the future.       {"kind": "exporter", "data_type": "traces", "name": "azuremonitor/first"}
docker-otel-collector-1  | 2023-11-24T13:10:59.487Z     info    exporter@v0.89.0/exporter.go:275        Development component. May change in the future.        {"kind": "exporter", "data_type": "traces", "name": "debug"}
docker-otel-collector-1  | 2023-11-24T13:10:59.488Z     debug   exporter@v0.89.0/exporter.go:273        Beta component. May change in the future.       {"kind": "exporter", "data_type": "traces", "name": "azuremonitor"}
docker-otel-collector-1  | 2023-11-24T13:10:59.488Z     debug   processor@v0.89.0/processor.go:287      Beta component. May change in the future.       {"kind": "processor", "name": "routing", "pipeline": "traces"}

Then, when docker-otel-collector receives input data with a header that matches the routing statement, it uses the first exporter registered, which we can verify in the log above. In this case, all data is routed to azuremonitor/second.

Expected Result

With the header x-code = mycode, I expect the data to be routed to azuremonitor/first.

Actual Result

Here is the log after sending data to the docker-otel-collector endpoint with the header x-code = mycode:

docker-otel-collector-1  | 2023-11-24T13:17:21.573Z     debug   azuremonitorexporter@v0.89.0/factory.go:139     --------- Transmitting 8 items ---------        {"kind": "exporter", "data_type": "traces", "name": "azuremonitor/second"}
docker-otel-collector-1  | 2023-11-24T13:17:22.813Z     debug   azuremonitorexporter@v0.89.0/factory.go:139    <other infos>    {"kind": "exporter", "data_type": "traces", "name": "azuremonitor/second"}
docker-otel-collector-1  | 2023-11-24T13:17:32.816Z     debug   azuremonitorexporter@v0.89.0/factory.go:139     --------- Transmitting 8 items ---------        {"kind": "exporter", "data_type": "traces", "name": "azuremonitor/second"}

Collector version

0.87.0

Environment information

Environment

OS: Docker Linux

OpenTelemetry Collector configuration

receivers:
  otlp:
    protocols:
      http:
        include_metadata: true
        endpoint: 0.0.0.0:4318

exporters:
  azuremonitor:
    instrumentation_key: "some-key-here"
  azuremonitor/first:
    instrumentation_key: "other-key"
  azuremonitor/second:
    instrumentation_key: "any-other-key"
  debug:
    verbosity: detailed
    sampling_initial: 5
    sampling_thereafter: 200

processors:
  batch:
  routing:
    default_exporters:
      - azuremonitor
    table:
      - statement: route() where resource.attributes["x-code"] == "mycode"
        exporters: [azuremonitor/first]

service:
  telemetry:
    metrics:
    logs:
      level: debug
  extensions: [health_check]
  pipelines:
    traces:
      receivers: [otlp]
      processors: [routing]
      exporters: [debug,azuremonitor,azuremonitor/first,azuremonitor/second]


Log output

docker-otel-collector-1  | 2023-11-24T13:10:59.487Z     info    service@v0.89.0/telemetry.go:85 Setting up own telemetry...
docker-otel-collector-1  | 2023-11-24T13:10:59.487Z     info    service@v0.89.0/telemetry.go:202        Serving Prometheus metrics      {"address": ":8888", "level": "Basic"}
docker-otel-collector-1  | 2023-11-24T13:10:59.487Z     debug   extension@v0.89.0/extension.go:162      Beta component. May change in the future.       {"kind": "extension", "name": "health_check"}
docker-otel-collector-1  | 2023-11-24T13:10:59.487Z     debug   exporter@v0.89.0/exporter.go:273        Beta component. May change in the future.       {"kind": "exporter", "data_type": "traces", "name": "azuremonitor/second"}
docker-otel-collector-1  | 2023-11-24T13:10:59.487Z     debug   exporter@v0.89.0/exporter.go:273        Beta component. May change in the future.       {"kind": "exporter", "data_type": "traces", "name": "azuremonitor/first"}
docker-otel-collector-1  | 2023-11-24T13:10:59.487Z     info    exporter@v0.89.0/exporter.go:275        Development component. May change in the future.        {"kind": "exporter", "data_type": "traces", "name": "debug"}
docker-otel-collector-1  | 2023-11-24T13:10:59.488Z     debug   exporter@v0.89.0/exporter.go:273        Beta component. May change in the future.       {"kind": "exporter", "data_type": "traces", "name": "azuremonitor"}
docker-otel-collector-1  | 2023-11-24T13:10:59.488Z     debug   processor@v0.89.0/processor.go:287      Beta component. May change in the future.       {"kind": "processor", "name": "routing", "pipeline": "traces"}

docker-otel-collector-1  | 2023-11-24T13:17:21.573Z     debug   azuremonitorexporter@v0.89.0/factory.go:139     --------- Transmitting 8 items ---------        {"kind": "exporter", "data_type": "traces", "name": "azuremonitor/second"}
docker-otel-collector-1  | 2023-11-24T13:17:22.813Z     debug   azuremonitorexporter@v0.89.0/factory.go:139    <other infos>    {"kind": "exporter", "data_type": "traces", "name": "azuremonitor/second"}
docker-otel-collector-1  | 2023-11-24T13:17:32.816Z     debug   azuremonitorexporter@v0.89.0/factory.go:139     --------- Transmitting 8 items ---------        {"kind": "exporter", "data_type": "traces", "name": "azuremonitor/second"}

Additional context

1 - I have added include_metadata: true to the OTLP receiver section.
2 - The azuremonitor/second exporter is not even referenced in the routing configuration.
3 - If you run docker-compose up multiple times, the registration order of the exporters changes, and factory.go chooses the first exporter registered.
4 - I have tested both approaches mentioned in the routing processor's README; both give the same result (see the sketch after this list).
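
For reference, a minimal sketch of the attribute-based variant we also tried (assuming the from_attribute / attribute_source form described in the routing processor README; the exact values may differ slightly from what we ran):

processors:
  routing:
    # route on the x-code request header captured via include_metadata: true
    from_attribute: x-code
    attribute_source: context
    default_exporters:
      - azuremonitor
    table:
      - value: mycode
        exporters: [azuremonitor/first]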

@scavassa-yld added the bug and needs triage labels on Nov 24, 2023

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@crobert-1 added the processor/routing label and removed the connector/routing label on Nov 27, 2023

Pinging code owners for processor/routing: @jpkrohling. See Adding Labels via Comments if you do not have permissions to add labels yourself.

@crobert-1
Member

Hello @scavassa-yld, I'm not seeing anything obvious as to why this isn't working. Can you confirm that you're not getting any log messages from the routing processor while the collector is running? I'd expect some logs to be showing whether the routing is successful or not, or some confirmation that it's attempting to work.

I'm especially confused that the data is going to an exporter that isn't even registered in the routing processor.

Also, if you share a sample trace that you're trying to route I could test it locally as well.

@crobert-1
Member

While this is under investigation, you could try to use the routing connector instead to see if that works for your use-case. The eventual plan is to deprecate the processor in favor of the connector, so it may be good to move to using the connector sooner rather than later.
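
A rough sketch of what that could look like with your exporters (the pipeline names below are just placeholders, and I haven't verified this against your setup):

connectors:
  routing:
    default_pipelines: [traces/default]
    table:
      - statement: route() where resource.attributes["x-code"] == "mycode"
        pipelines: [traces/first]

service:
  pipelines:
    # the connector is used as an exporter in the input pipeline
    # and as a receiver in the downstream pipelines
    traces/in:
      receivers: [otlp]
      exporters: [routing]
    traces/first:
      receivers: [routing]
      exporters: [azuremonitor/first]
    traces/default:
      receivers: [routing]
      exporters: [azuremonitor]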

@jpkrohling self-assigned this on Nov 28, 2023
@jpkrohling removed the needs triage label on Nov 28, 2023
@simonmercernewday

simonmercernewday commented Dec 4, 2023

Hello @crobert-1, when you say provide a trace, do you just mean the trace that we're posting to the collector? Or are you talking about a specific set of debug logs?

For the routing, we simply added the 'x-code' header with one of the values used for the routing, e.g. 'any-other-key'.

@crobert-1
Member

when you say provide a trace, do you just mean the trace that we're posting to the collector? Or are you talking about a specific set of debug logs?

Yes to both. To the first, an example of a trace that you're posting to the collector that isn't being routed properly would be very helpful. For the second, I was wondering if there were any logs coming from the routing processor while the collector is running. If there were any they may be helpful in seeing where things went awry.

Contributor

github-actions bot commented Feb 6, 2024

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@github-actions bot added the Stale label on Feb 6, 2024
@crobert-1 removed the Stale label on Feb 6, 2024
Contributor

github-actions bot commented Apr 8, 2024

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@jpkrohling
Member

I'm closing this, but feel free to provide the additional information and reopen.
