[service] add logger provider configuration support #10544
Conversation
While I think the ability to directly configure the destination of collector logs in the `telemetry` section might be cool for small and simple configurations, in certain environments where you cannot simply pick up your logs you could use something like this:

```yaml
exporter:
  otlp/honeycomb:
    protocol: http/protobuf
    endpoint: https://api.honeycomb.io:443
    headers:
      "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
pipeline:
  telemetry:
    metrics:
      verbosity: detailed
      pipelines: [metrics/collector]
    logs:
      verbosity: detailed
      pipelines: [logs/collector]
  logs/collector:
    receivers: [telemetry/logs]
    processors: [batch]
    exporters: [otlp/honeycomb]
  metrics/collector:
    receivers: [telemetry/metrics]
    processors: [batch]
    exporters: [otlp/honeycomb]
  traces/collector:
    ...
```

Compared to:

```yaml
telemetry:
  logs:
    processors:
      - batch:
          exporter:
            otlp:
              protocol: http/protobuf
              endpoint: https://api.honeycomb.io:443
              headers:
                "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
  metrics:
    processors:
      - batch:
          exporter:
            otlp:
              protocol: http/protobuf
              endpoint: https://api.honeycomb.io:443
              headers:
                "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
  traces:
    ...
```

wdyt?
@frzifus there was a decision made some time ago (I'm struggling to find a link with this info) to not pipe the collector's own telemetry through itself, to avoid the scenario of the collector's own telemetry being interrupted by an overloaded collector. This is why I'm proposing supporting the logging provider directly. Are you thinking of the scenario where you'd like to be able to apply some transformations to the logs?
This PR was marked stale due to lack of activity. It will be closed in 14 days.
Codecov report, coverage diff:

```
@@            Coverage Diff             @@
##             main   #10544      +/-   ##
==========================================
- Coverage   91.43%   91.43%    -0.01%
==========================================
  Files         435      435
  Lines       23712    23752       +40
==========================================
+ Hits        21682    21718       +36
- Misses       1653     1656        +3
- Partials      377      378        +1
```
@open-telemetry/collector-approvers this PR should be ready for review
service/telemetry/logger.go (outdated):

```go
	return nil, err
}
logger = logger.WithOptions(zap.WrapCore(func(_ zapcore.Core) zapcore.Core {
	return otelzap.NewCore("go.opentelemetry.io/collector/service/telemetry", otelzap.WithLoggerProvider(sdk.LoggerProvider()))
}))
```
If you reach this point, I believe we are ignoring many of the existing options in the `service::telemetry::logs` section (e.g. output paths for sure; I wonder about some of the other options we already support). I think we should fail if said options are set and force the user to pick whether they want to use the `processors` section or the other options.
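A guard along those lines might look roughly like the sketch below; the function name, parameters, and error text are hypothetical illustrations, not code from this PR:

```go
package telemetry

import "errors"

// validateLogsConfig is a hypothetical guard (names invented for
// illustration): fail fast when the new 'processors' section is combined
// with the legacy zap output options, rather than silently ignoring one.
func validateLogsConfig(processorCount int, outputPaths []string) error {
	if processorCount > 0 && len(outputPaths) > 0 {
		return errors.New("service::telemetry::logs: 'processors' cannot be combined with 'output_paths'")
	}
	return nil
}
```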
I've updated the code to use zap's `NewTee` function to preserve the existing logs, wdyt?
Teeing sounds reasonable; it makes sense to have fallback logging so we can still see errors related to exporting the logs.
Not necessarily in this PR, but I think it would be good to have docs about this.
} | ||
|
||
logger = logger.WithOptions(zap.WrapCore(func(c zapcore.Core) zapcore.Core { | ||
return zapcore.NewTee( |
Using `NewTee` here means that logs will be written to both cores, right? So logs would be written to stdout and exported? Are we sure we want to do that initially, or is configuring a processor an acknowledgement that logs won't go to stdout? Users that want the collector to stop writing logs to stdout and only use the bridge to export their logs won't have a solution.
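For reference, `zapcore.NewTee` fans every log entry out to all of the wrapped cores; a minimal standalone sketch (the writers and encoder are chosen only for illustration):

```go
package main

import (
	"os"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

func main() {
	enc := zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig())
	// NewTee duplicates each entry to every wrapped core, so this message
	// is written to both stdout and stderr.
	core := zapcore.NewTee(
		zapcore.NewCore(enc, zapcore.Lock(os.Stdout), zapcore.InfoLevel),
		zapcore.NewCore(enc, zapcore.Lock(os.Stderr), zapcore.InfoLevel),
	)
	logger := zap.New(core)
	defer logger.Sync()
	logger.Info("written to both cores")
}
```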
I like the idea of having local logs as a fallback, which makes teeing nice in that sense. A user could configure the console log exporter if they wanted both local logs and data exported to another destination, but the output from the console exporter will be significantly different.
Output from the existing logger:

```
2024-10-22T15:14:38.081-0700	info	service@v0.111.0/service.go:280	Shutdown complete.
```

Output from the console exporter:

```json
{
  "Timestamp": "2024-10-22T15:14:38.081624-07:00",
  "ObservedTimestamp": "2024-10-22T15:14:38.08164-07:00",
  "Severity": 9,
  "SeverityText": "info",
  "Body": {
    "Type": "String",
    "Value": "Shutdown complete."
  },
  "Attributes": [],
  "TraceID": "00000000000000000000000000000000",
  "SpanID": "0000000000000000",
  "TraceFlags": "00",
  "Resource": [
    {
      "Key": "service.name",
      "Value": {
        "Type": "STRING",
        "Value": "otelcorecol"
      }
    },
    {
      "Key": "service.version",
      "Value": {
        "Type": "STRING",
        "Value": "0.111.0-dev"
      }
    },
    {
      "Key": "telemetry.sdk.language",
      "Value": {
        "Type": "STRING",
        "Value": "go"
      }
    },
    {
      "Key": "telemetry.sdk.name",
      "Value": {
        "Type": "STRING",
        "Value": "opentelemetry"
      }
    },
    {
      "Key": "telemetry.sdk.version",
      "Value": {
        "Type": "STRING",
        "Value": "1.31.0"
      }
    }
  ],
  "Scope": {
    "Name": "go.opentelemetry.io/collector/service/telemetry",
    "Version": "",
    "SchemaURL": ""
  },
  "DroppedAttributes": 0
}
```
One case where I think the fallback is interesting is if the OTLP logs exporting fails and we have some log from e.g. grpc-go that would be helpful in debugging.
This PR gives users the option to start producing logs over OTLP without breaking existing functionality. I would suggest we follow up this PR with a feature gate that disables the teeing of the logs.
This would give users that don't want to double-emit logs a way to turn off the local console logging when a provider is configured. This could become the default in the future if users support this idea.
One thing to note is that today's logging configuration supports sampling logs, which the otel logging provider has no concept of at this time.
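A follow-up gate might look roughly like the sketch below, written against the collector's `featuregate` package; the gate ID, description, and helper are hypothetical, not part of this PR:

```go
package telemetry

import "go.opentelemetry.io/collector/featuregate"

// disableLogTeeGate is a hypothetical feature gate (ID and description
// invented for illustration): when enabled, collector logs would go only to
// the configured logger provider instead of also being teed to the console.
var disableLogTeeGate = featuregate.GlobalRegistry().MustRegister(
	"service.telemetry.disableLogTee",
	featuregate.StageAlpha,
	featuregate.WithRegisterDescription(
		"Stops teeing collector logs to the console when a logger provider is configured."),
)

// teeToConsole reports whether the console core should still receive logs.
func teeToConsole() bool {
	return !disableLogTeeGate.IsEnabled()
}
```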
That works for me. As long as we have some plan for how to give users the option to disable the console logging if they want, I'm happy.
This allows us to use the otel-go/config package to support configuring external destinations for logs. I'm putting this in draft to gather community feedback on whether this is a desirable feature for the collector.
I used the following configuration with this PR to send data to an OTLP backend:
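(The configuration block itself was not captured on this page; based on the `service::telemetry::logs::processors` shape shown earlier in the thread, it would have looked roughly like the following, with the endpoint and header taken from the comparison above:)

```yaml
telemetry:
  logs:
    processors:
      - batch:
          exporter:
            otlp:
              protocol: http/protobuf
              endpoint: https://api.honeycomb.io:443
              headers:
                "x-honeycomb-team": "${env:HONEYCOMB_API_KEY}"
```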
This allowed me to see logs in my backend.