
[service] add logger provider configuration support #10544

Conversation

codeboten (Contributor)

This allows us to use the otel-go/config package to support configuring external destinations for logs. I'm putting this in draft to gather community feedback on whether this is a desirable feature for the collector.

I used the following configuration with this PR to send data to an OTLP backend:

telemetry:
  logs:
    processors:
      - batch:
          exporter:
            otlp:
              protocol: http/protobuf
              endpoint: https://api.honeycomb.io:443
              headers:
                "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"

This allowed me to see logs in my backend:

(Screenshot, 2024-07-04: logs visible in the backend)
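
For reference, a minimal sketch of what the bridging looks like under the hood: the configured processors yield a LoggerProvider, and zap's core is swapped for the otelzap bridge (as the diff below shows). This sketch builds the provider directly with the SDK rather than through otel-go/config, and the stdoutlog exporter and component name are placeholders:

package main

import (
    "context"

    "go.opentelemetry.io/contrib/bridges/otelzap"
    "go.opentelemetry.io/otel/exporters/stdout/stdoutlog"
    sdklog "go.opentelemetry.io/otel/sdk/log"
    "go.uber.org/zap"
    "go.uber.org/zap/zapcore"
)

func main() {
    // Stand-in exporter; the PR wires up OTLP from the YAML config instead.
    exp, err := stdoutlog.New()
    if err != nil {
        panic(err)
    }

    // A LoggerProvider with a batch processor, mirroring the `- batch:` entry above.
    lp := sdklog.NewLoggerProvider(sdklog.WithProcessor(sdklog.NewBatchProcessor(exp)))
    defer func() { _ = lp.Shutdown(context.Background()) }()

    // Swap the zap core for the otelzap bridge backed by that provider.
    logger, _ := zap.NewProduction()
    logger = logger.WithOptions(zap.WrapCore(func(_ zapcore.Core) zapcore.Core {
        return otelzap.NewCore("example/bridge", otelzap.WithLoggerProvider(lp))
    }))
    logger.Info("exported through the logger provider")
}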

frzifus (Member) commented Jul 20, 2024

I think the ability to directly configure the destination of collector logs in the telemetry section could be useful for small, simple configurations in environments where you cannot simply pick the logs up from system or file logs. But I feel it would be more intuitive if we could specify a dedicated pipeline for exporting the data.

Something like this:

exporter:
  otlp/honeycomb:
    protocol: http/protobuf
    endpoint: https://api.honeycomb.io:443
    headers:
      "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
pipeline:
  telemetry:
    metrics:
      verbosity: detailed
      pipelines: [metrics/collector]
    logs:
      verbosity: detailed
      pipelines: [logs/collector]
  logs/collector:
    receivers:  [telemetry/logs]
    processors: [batch]
    exporters:  [otlp/honeycomb]
  metrics/collector:
    receivers:  [telemetry/metrics]
    processors: [batch]
    exporters:  [otlp/honeycomb]
  traces/collector:
    ...

Compared to:

telemetry:
  logs:
    processors:
      - batch:
          exporter:
            otlp:
              protocol: http/protobuf
              endpoint: https://api.honeycomb.io:443
              headers:
                "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
  metrics:
    processors:
      - batch:
          exporter:
            otlp:
              protocol: http/protobuf
              endpoint: https://api.honeycomb.io:443
              headers:
                "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
  traces:
    ...

wdyt?

codeboten (Contributor Author)

@frzifus there was a decision made some time ago (I'm struggling to find a link with this info) not to pipe the collector's own telemetry through itself, to avoid the scenario of the collector's own telemetry being interrupted by an overloaded collector. This is why I'm proposing supporting the logger provider directly.

Are you thinking of the scenario where you'd like to be able to apply some transformations to the logs?


github-actions bot commented Aug 6, 2024

This PR was marked stale due to lack of activity. It will be closed in 14 days.

@github-actions github-actions bot added the Stale label Aug 6, 2024
@codeboten codeboten removed the Stale label Aug 6, 2024
github-actions bot commented Aug 21, 2024

This PR was marked stale due to lack of activity. It will be closed in 14 days.

@github-actions github-actions bot added the Stale label Aug 21, 2024
@codeboten codeboten removed the Stale label Aug 23, 2024
github-actions bot commented Sep 11, 2024

This PR was marked stale due to lack of activity. It will be closed in 14 days.

@github-actions github-actions bot added the Stale label Sep 11, 2024
@codeboten codeboten force-pushed the codeboten/add-log-record-processor-config branch from d8f5ce0 to d0e3509 Compare September 19, 2024 18:21
@codeboten codeboten marked this pull request as ready for review September 19, 2024 18:30
@codeboten codeboten requested a review from a team as a code owner September 19, 2024 18:30
codecov bot commented Sep 19, 2024

Codecov Report

Attention: Patch coverage is 91.37931% with 5 lines in your changes missing coverage. Please review.

Project coverage is 91.43%. Comparing base (cb19e1d) to head (809edbe).
Report is 1 commit behind head on main.

Files with missing lines            Patch %   Lines
service/service.go                  33.33%    3 Missing and 1 partial ⚠️
service/telemetry/factory_impl.go   50.00%    1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main   #10544      +/-   ##
==========================================
- Coverage   91.43%   91.43%   -0.01%     
==========================================
  Files         435      435              
  Lines       23712    23752      +40     
==========================================
+ Hits        21682    21718      +36     
- Misses       1653     1656       +3     
- Partials      377      378       +1     


@codeboten codeboten force-pushed the codeboten/add-log-record-processor-config branch from 4d01082 to 3ad3a90 Compare September 19, 2024 18:57
@codeboten codeboten removed the Stale label Sep 19, 2024
@codeboten codeboten force-pushed the codeboten/add-log-record-processor-config branch from 44aa182 to 09a60e5 Compare October 3, 2024 20:03
@codeboten
Copy link
Contributor Author

@open-telemetry/collector-approvers this PR should be ready for review

    return nil, err
}
logger = logger.WithOptions(zap.WrapCore(func(_ zapcore.Core) zapcore.Core {
    // replace the default core with the otelzap bridge backed by the configured provider
    return otelzap.NewCore("go.opentelemetry.io/collector/service/telemetry", otelzap.WithLoggerProvider(sdk.LoggerProvider()))
}))
mx-psi (Member) commented Oct 4, 2024

If you reach this point, I believe we are ignoring many of the existing options in the service::telemetry::logs section (output paths for sure; I wonder about some of the other options we already support). I think we should fail if those options are set and force the user to pick between the processors section and the other options.

codeboten (Contributor Author)

I've updated the code to use zap's NewTee function to preserve the existing logs, wdyt?
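
(For readers unfamiliar with it, zapcore.NewTee duplicates every log entry to each wrapped core. A minimal, self-contained illustration, with stdout and stderr standing in for the "local console" and "exported" destinations:)

package main

import (
    "os"

    "go.uber.org/zap"
    "go.uber.org/zap/zapcore"
)

func main() {
    enc := zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig())

    // Two cores standing in for the console and exported destinations.
    consoleCore := zapcore.NewCore(enc, zapcore.Lock(os.Stdout), zapcore.InfoLevel)
    exportCore := zapcore.NewCore(enc, zapcore.Lock(os.Stderr), zapcore.InfoLevel)

    // NewTee fans each entry out to both cores.
    logger := zap.New(zapcore.NewTee(consoleCore, exportCore))
    logger.Info("this entry is written to both destinations")
}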

@codeboten codeboten force-pushed the codeboten/add-log-record-processor-config branch from 766208b to dc5df50 Compare October 10, 2024 19:51
@codeboten codeboten force-pushed the codeboten/add-log-record-processor-config branch from dc5df50 to 7753636 Compare October 21, 2024 22:01
mx-psi (Member) left a comment

Teeing sounds reasonable; it makes sense to have fallback logging so that errors related to exporting the logs remain visible.

Not necessarily in this PR, but I think it would be good to have docs about this.

(Two resolved review threads on service/telemetry/logger.go, now outdated.)
}

logger = logger.WithOptions(zap.WrapCore(func(c zapcore.Core) zapcore.Core {
    // tee the existing core with the otelzap bridge so logs are written locally and exported
    return zapcore.NewTee(
        c,
        otelzap.NewCore("go.opentelemetry.io/collector/service/telemetry", otelzap.WithLoggerProvider(sdk.LoggerProvider())),
    )
}))
(Member)

Using NewTee here means that logs will be written to both cores, right? So logs would be written to stdout and exported? Are we sure we want to do that initially, or is configuring a processor an acknowledgement that logs won't go to stdout? Users who want the collector to stop writing logs to stdout and only use the bridge to export their logs won't have a solution.

codeboten (Contributor Author)

I like the idea of having local logs as a fallback, which makes tee'ing nice in that sense. A user could configure the console log exporter if they wanted both local logs and data exported to another destination, but the output from the console exporter will be significantly different.
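
(For illustration, a console exporter configured through the same processors section might look like the sketch below; the `simple` and `console` keys follow the declarative configuration schema and should be double-checked against the version of otel-go/config in use:)

telemetry:
  logs:
    processors:
      - simple:
          exporter:
            console: {}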

Output from existing logger:

2024-10-22T15:14:38.081-0700    info    service@v0.111.0/service.go:280 Shutdown complete.

Output from console exporter:

{
        "Timestamp": "2024-10-22T15:14:38.081624-07:00",
        "ObservedTimestamp": "2024-10-22T15:14:38.08164-07:00",
        "Severity": 9,
        "SeverityText": "info",
        "Body": {
                "Type": "String",
                "Value": "Shutdown complete."
        },
        "Attributes": [],
        "TraceID": "00000000000000000000000000000000",
        "SpanID": "0000000000000000",
        "TraceFlags": "00",
        "Resource": [
                {
                        "Key": "service.name",
                        "Value": {
                                "Type": "STRING",
                                "Value": "otelcorecol"
                        }
                },
                {
                        "Key": "service.version",
                        "Value": {
                                "Type": "STRING",
                                "Value": "0.111.0-dev"
                        }
                },
                {
                        "Key": "telemetry.sdk.language",
                        "Value": {
                                "Type": "STRING",
                                "Value": "go"
                        }
                },
                {
                        "Key": "telemetry.sdk.name",
                        "Value": {
                                "Type": "STRING",
                                "Value": "opentelemetry"
                        }
                },
                {
                        "Key": "telemetry.sdk.version",
                        "Value": {
                                "Type": "STRING",
                                "Value": "1.31.0"
                        }
                }
        ],
        "Scope": {
                "Name": "go.opentelemetry.io/collector/service/telemetry",
                "Version": "",
                "SchemaURL": ""
        },
        "DroppedAttributes": 0
}

(Member)

One case where I think the fallback is interesting: if OTLP log exporting fails and we have some log from e.g. grpc-go that would be helpful in debugging.

codeboten (Contributor Author)

This PR gives users the option to start producing logs over OTLP without breaking existing functionality. I would suggest we follow up this PR with a feature gate that disables the tee'ing of the logs.

This would give users who don't want to double-emit logs a way to turn off local console logging when a provider is configured. This could become the default in the future if users support the idea.

One thing to note: today's logging configuration supports sampling logs, which the OTel logger provider has no concept of at this time.
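
(For context, the existing zap-based sampling knobs in the collector's telemetry config look roughly like the following; field values are illustrative:)

telemetry:
  logs:
    sampling:
      enabled: true
      tick: 10s
      initial: 10
      thereafter: 100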

(Member)

That works for me. As long as we have some plan for giving users the option to disable console logging if they want, I'm happy.

@codeboten codeboten force-pushed the codeboten/add-log-record-processor-config branch from 8d29d6f to ba20612 Compare October 22, 2024 22:08
@codeboten codeboten merged commit 0daa813 into open-telemetry:main Oct 23, 2024
48 of 49 checks passed
@codeboten codeboten deleted the codeboten/add-log-record-processor-config branch October 23, 2024 17:14
@github-actions github-actions bot added this to the next release milestone Oct 23, 2024