
exporting logs is not working #3604

Closed
fhuthmacher opened this issue Dec 28, 2023 · 10 comments
Labels
bug (Something isn't working), logging

Comments

@fhuthmacher

fhuthmacher commented Dec 28, 2023

I am trying to get traces, metrics, and logs working via the AWS Distro for OpenTelemetry Collector (ADOT Collector), using the following Lambda layer: arn:aws:lambda:us-east-1:901920570463:layer:aws-otel-python-amd64-ver-1-20-0:3 and Python 3.8.

My understanding is that this version includes OpenTelemetry Python v1.20.0 and ADOT Collector v0.35.0 and should support traces, metrics, and logs (logs as experimental).
The issue is that no matter how I configure otlphttp, the collector does not export logs at all.

Steps to reproduce

  1. Create a Python 3.8 Lambda function
  2. Add the layer arn:aws:lambda:us-east-1:901920570463:layer:aws-otel-python-amd64-ver-1-20-0:3
  3. Create a collector.yml file with the content below in the root directory:
extensions:
  sigv4auth:
    region: "us-east-1"
    service: "osis"

receivers:
  otlp:
    protocols:
      http:
        endpoint:

exporters:
  logging:
    loglevel: debug
  otlphttp:
    traces_endpoint: "https://XXXX/v1/traces"
    metrics_endpoint: "https://YYYY/v1/metrics"
    logs_endpoint: "https://ZZZZ/v1/logs"
    auth:
      authenticator: sigv4auth
    compression: none
    
service:
  extensions: [sigv4auth]
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp,logging]

    metrics:
      receivers: [otlp]
      exporters: [otlphttp,logging]
    
    logs:
      receivers: [otlp]
      exporters: [otlphttp,logging]
      
  telemetry:
    metrics:
      address: localhost:8888
  4. Set two environment variables:
    AWS_LAMBDA_EXEC_WRAPPER = /opt/otel-instrument
    OPENTELEMETRY_COLLECTOR_CONFIG_FILE = /var/task/collector.yml

  5. Update lambda_function.py with the code below:

import json
import boto3
from opentelemetry import metrics
from opentelemetry import trace
import logging

tracer = trace.get_tracer("appl.tracer")

meter = metrics.get_meter("appl.meter")

dir_counter = meter.create_counter(
    "dir.calls",
    description="The number of directory calls",
)

@tracer.start_as_current_span("do_work")
def lambda_handler(event, context):

    logging.getLogger().error("This is a log message")    

    # This adds 1 to the counter
    result = '0'
    dir_counter.add(1, {"dir.value": result})
        
    client = boto3.client("s3")
    client.list_buckets()
    
    client = boto3.client("ec2")
    list = client.describe_instances()
    
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }

What is the expected behavior?
I would expect to see output from the logging exporter for logs similar to what I get for traces and metrics.

What is the actual behavior?
I don't see any log output, only traces and metrics; see the example below.

Function Logs
0\nScopeSpans SchemaURL: \nInstrumentationScope opentelemetry.instrumentation.botocore 0.41b0\nSpan #0\n    Trace ID       : b348c171927592ab5c0d60183c9352f4\n    Parent ID      : 6cdd27675458cb3f\n    ID             : f7ce677c8611cddd\n    Name           : S3.ListBuckets\n    Kind           : Client\n    Start time     : 2023-12-28 16:59:09.835148036 +0000 UTC\n    End time       : 2023-12-28 16:59:10.177052834 +0000 UTC\n    Status code    : Unset\n    Status message : \nAttributes:\n     -> rpc.system: Str(aws-api)\n     -> rpc.service: Str(S3)\n     -> rpc.method: Str(ListBuckets)\n     -> aws.region: Str(us-east-1)\n     -> aws.request_id: Str(VP5ZCEZERDWAKQ33)\n     -> retry_attempts: Int(0)\n     -> http.status_code: Int(200)\nSpan #1\n    Trace ID       : b348c171927592ab5c0d60183c9352f4\n    Parent ID      : 6cdd27675458cb3f\n    ID             : c9f69080451e23c4\n    Name           : EC2.DescribeInstances\n    Kind           : Client\n    Start time     : 2023-12-28 16:59:11.253592039 +0000 UTC\n    End time       : 2023-12-28 16:59:11.735706355 +0000 UTC\n    Status code    : Unset\n    Status message : \nAttributes:\n     -> rpc.system: Str(aws-api)\n     -> rpc.service: Str(EC2)\n     -> rpc.method: Str(DescribeInstances)\n     -> aws.region: Str(us-east-1)\n     -> aws.request_id: Str(8f664204-f9dd-4e05-9079-1a76caf78c84)\n     -> retry_attempts: Int(0)\n     -> http.status_code: Int(200)\nScopeSpans #1\nScopeSpans SchemaURL: \nInstrumentationScope appl.tracer \nSpan #0\n    Trace ID       : b348c171927592ab5c0d60183c9352f4\n    Parent ID      : 4c77eecbc20517cd\n    ID             : 6cdd27675458cb3f\n    Name           : do_work\n    Kind           : Internal\n    Start time     : 2023-12-28 16:59:08.037059752 +0000 UTC\n    End time       : 2023-12-28 16:59:11.793806031 +0000 UTC\n    Status code    : Unset\n    Status message : \nScopeSpans #2\nScopeSpans SchemaURL: \nInstrumentationScope opentelemetry.instrumentation.aws_lambda 0.41b0\nSpan #0\n    Trace ID       : b348c171927592ab5c0d60183c9352f4\n    Parent ID      : \n    ID             : 4c77eecbc20517cd\n    Name           : lambda_function.lambda_handler\n    Kind           : Server\n    Start time     : 2023-12-28 16:59:07.968461942 +0000 UTC\n    End time       : 2023-12-28 16:59:11.793881653 +0000 UTC\n    Status code    : Unset\n    Status message : \nAttributes:\n     -> faas.id: Str(arn:aws:lambda:us-east-1:026459568683:function:AOS_Trace_Demo38)\n     -> faas.execution: Str(0058b917-9560-4762-9e51-550b33479b0f)\n","**kind":"exporter","data_type":"traces","name":"logging"**}

{"level":"info","ts":1703782753.9983406,"msg":"**MetricsExporter","kind":"exporter","data_type":"metrics"**,"name":"logging","resource metrics":1,"metrics":1,"data points":1}
{"level":"info","ts":1703782753.9984167,"msg":"ResourceMetrics #0\nResource SchemaURL: \nResource attributes:\n     -> telemetry.sdk.language: Str(python)\n     -> telemetry.sdk.name: Str(opentelemetry)\n     -> telemetry.sdk.version: Str(1.20.0)\n     -> cloud.region: Str(us-east-1)\n     -> cloud.provider: Str(aws)\n     -> faas.name: Str(AOS_Trace_Demo38)\n     -> faas.version: Str($LATEST)\n     -> faas.instance: Str(2023/12/28/[$LATEST]c2c54d603ad44f4d8ee20f683340f655)\n     -> service.name: Str(demo-lambda)\n     -> telemetry.auto.version: Str(0.41b0)\nScopeMetrics #0\nScopeMetrics SchemaURL: \nInstrumentationScope appl.meter \nMetric #0\nDescriptor:\n     -> Name: dir.calls\n     -> Description: The number of directory calls\n     -> Unit: \n     -> DataType: Sum\n     -> IsMonotonic: true\n     -> AggregationTemporality: Cumulative\nNumberDataPoints #0\nData point attributes:\n     -> dir.value: Str(0)\nStartTimestamp: 2023-12-28 16:59:08.093701828 +0000 UTC\nTimestamp: 2023-12-28 16:59:13.791589871 +0000 UTC\nValue: 1\n","kind":"exporter","data_type":"metrics","name":"logging"}

END RequestId: 0058b917-9560-4762-9e51-550b33479b0f
REPORT RequestId: 0058b917-9560-4762-9e51-550b33479b0f	Duration: 6073.88 ms	Billed Duration: 6074 ms	Memory Size: 128 MB	Max Memory Used: 128 MB	Init Duration: 1847.31 ms
fhuthmacher added the bug label on Dec 28, 2023
@srikanthccv
Member

If you rely on manual instrumentation, you should add the OTel LoggingHandler to your logging setup; see the example here: https://github.com/open-telemetry/opentelemetry-python/blob/main/docs/examples/logs/example.py. If you rely on auto-instrumentation, you should set OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED to true:

_OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED = (
    "OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED"
)
"""
.. envvar:: OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED
The :envvar:`OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED` environment variable allows users to
enable/disable the logging SDK auto instrumentation.
Default: False
Note: Logs SDK and its related settings are experimental.
"""

@fhuthmacher
Author

Thank you, I tried OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED, but that did not change anything.

And when I add the manual instrumentation, I only get the warning below in the logs:

"Overriding of current LoggerProvider is not allowed"

so no logs either :/
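
(That warning suggests the layer's auto-instrumentation has already registered a LoggerProvider, so a second set_logger_provider() call is rejected. A possible workaround, purely a hedged sketch that assumes the already-registered provider is the SDK LoggerProvider, is to attach a processor and handler to the existing provider instead of replacing it:)

import logging

from opentelemetry._logs import get_logger_provider
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter
from opentelemetry.sdk._logs import LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor

# Reuse the provider that is already set instead of overriding it.
# NOTE: add_log_record_processor only exists on the SDK LoggerProvider,
# so this assumes that is what the layer installed.
logger_provider = get_logger_provider()
logger_provider.add_log_record_processor(
    BatchLogRecordProcessor(OTLPLogExporter(endpoint="http://localhost:4318/v1/logs"))
)
logging.getLogger().addHandler(
    LoggingHandler(level=logging.INFO, logger_provider=logger_provider)
)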

@srikanthccv
Member

I couldn't reproduce this. If you can share a simple repo with reproduction steps, that would be helpful.

@fhuthmacher
Author

fhuthmacher commented Jan 20, 2024

Hi,

Here is what I did:

  1. In my Lambda Dockerfile, I extract the Lambda layer to /opt:
RUN curl $(aws lambda get-layer-version-by-arn --arn arn:aws:lambda:us-east-1:901920570463:layer:aws-otel-python-amd64-ver-1-21-0:1 --query 'Content.Location' --output text) --output layer.zip
RUN unzip layer.zip -d /opt
RUN rm layer.zip
  2. Added manual instrumentation to the Python code:

import logging
import os

from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.http._log_exporter import (
    OTLPLogExporter,
)
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.sdk.extension.aws.resource._lambda import (
    AwsLambdaResourceDetector,
)
from opentelemetry.sdk.resources import get_aggregated_resources

# Build a LoggerProvider carrying the Lambda resource attributes and register it globally
logger_provider = LoggerProvider(
    resource=get_aggregated_resources(
        [
            AwsLambdaResourceDetector(),
        ]
    ),
)
set_logger_provider(logger_provider)

# Export log records over OTLP/HTTP to the in-process collector
exporter = OTLPLogExporter(endpoint='http://0.0.0.0:4318/v1/logs')
logger_provider.add_log_record_processor(BatchLogRecordProcessor(exporter))

# Attach the OTLP handler to the root logger
handler = LoggingHandler(level=logging.INFO, logger_provider=logger_provider)
logging.getLogger().addHandler(handler)

# Create a namespaced logger for the handler code
loggerAgent = logging.getLogger("financeagent.handler")
loggerAgent.setLevel(os.environ['OTEL_LOG_LEVEL'])
### END
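
One detail worth adding here (a hedged sketch, not something from the original code): BatchLogRecordProcessor buffers records, and a Lambda execution environment can be frozen before the batch is sent, so forcing a flush before the handler returns helps ensure the records actually leave the function:

def lambda_handler(event, context):
    loggerAgent.error("This is a log message")
    # Flush buffered log records before the Lambda environment is frozen
    logger_provider.force_flush()
    return {"statusCode": 200}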

I got the above manual instrumentation to work, but I was not able to get the auto-instrumentation to work.
Also, how would you set the log level with the auto-instrumentation? I wonder if that is what is/was missing.

Here is the repo, which includes the Dockerfile and source code if you want to replicate it (don't deploy the CF template as it deploys a bunch of other things as well).

@srikanthccv
Member

Also how would you set the LogLevel with the auto instrumentation? I wonder if that is what is/was missing?

It respects the log level of your application's logging settings. It is just another handler registered with the global root logger that receives the log records and exports them.
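
Concretely (a minimal sketch, assuming OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true so the auto-instrumentation has already attached its handler to the root logger), the level is controlled through the standard logging configuration:

import logging

# The auto-instrumentation handler hangs off the root logger; whatever level the
# application configures with the stdlib logging module decides what gets exported.
logging.basicConfig(level=logging.INFO)

logging.getLogger(__name__).info("exported, because the effective level allows INFO")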

@atimush

atimush commented Feb 12, 2024

Hello,
I have the same problem with logs. Setting OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true did not help.

@goyerl

goyerl commented Feb 12, 2024

I think the logger instrumentation is not present in the Lambda distribution: https://github.com/open-telemetry/opentelemetry-lambda/blob/main/python/src/otel/otel_sdk/requirements-nodeps.txt

@lzchen
Contributor

lzchen commented Feb 16, 2024

@goyerl

You shouldn't need the logger instrumentation. Logging is part of the OpenTelemetry SDK.

@lzchen
Contributor

lzchen commented Jul 2, 2024

@fhuthmacher

Is this still an issue?

lzchen added the logging label on Jul 5, 2024
@lzchen
Contributor

lzchen commented Oct 31, 2024

Closing due to inactivity.

lzchen closed this as completed on Oct 31, 2024