Conversation
purple4reina
left a comment
Question. How does LMI handle cold starts then? Does the customer ever see their invocations waiting on a cold start? Or does it always do something like proactive init?
Question. Will you also be doing this for Python? I assume the universal languages will be handled already, as their cold start spans are created in the extension.
With provisioned concurrency, or when a sandbox is proactively initialized, that is how the span would look on the trace; with this change we avoid that experience in LMI specifically.
Yes, and for those the change has already been made in the extension.
With provisioned concurrency, containers are pre-warmed before invocations, causing module load timestamps to be captured minutes (or hours) before the first invocation. This inflates aws.lambda.load span durations to the full pre-warm gap rather than the actual load time. Skip cold start span creation for provisioned-concurrency, consistent with the existing fix for lambda-managed-instances (PR #717). Fixes #723. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
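The guard described above can be sketched as follows. AWS Lambda documents the `AWS_LAMBDA_INITIALIZATION_TYPE` runtime environment variable, whose value is `provisioned-concurrency` for pre-warmed sandboxes; the function name, the `lambda-managed-instances` value, and the default fallback here are illustrative assumptions, not the actual datadog-lambda-js API.

```typescript
// Initialization types for which the load span should be skipped.
// "provisioned-concurrency" is a documented value of
// AWS_LAMBDA_INITIALIZATION_TYPE; "lambda-managed-instances" is an
// assumed value standing in for the LMI case handled by PR #717.
const SKIPPED_INIT_TYPES = new Set([
  "provisioned-concurrency",
  "lambda-managed-instances",
]);

// Returns true only when the sandbox was initialized on demand, i.e.
// when the module-load timestamps actually reflect the cold start.
export function shouldCreateColdStartSpan(
  env: Record<string, string | undefined>,
): boolean {
  const initType = env.AWS_LAMBDA_INITIALIZATION_TYPE ?? "on-demand";
  // Pre-warmed sandboxes capture load timestamps minutes or hours
  // before the first invocation, so the span would cover the
  // pre-warm gap rather than the real load time.
  return !SKIPPED_INIT_TYPES.has(initType);
}
```

A caller would check `shouldCreateColdStartSpan(process.env)` before building the `aws.lambda.load` span, leaving all other tracing untouched.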
What does this PR do?
Omit creating cold start spans on LMI.
Motivation
We want to provide a better experience when a new sandbox is initialized and the gap between invocation and init is quite long. See SVLS-8482.
Testing Guidelines
Unit tests
Additional Notes
Metrics are not affected, since we still want to know how many sandboxes are spun up.
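That split (metric kept, span skipped) can be sketched as below. The metric and span names, the record shape, and the function are all illustrative assumptions for this sketch, not the library's real API.

```typescript
// Collected output for one sandbox initialization (illustrative shape).
type InitRecord = { metrics: string[]; spans: string[] };

// Emit the cold-start metric unconditionally, but gate span creation
// on the initialization type, as described in the notes above.
export function recordSandboxInit(initType: string, out: InitRecord): void {
  out.metrics.push("cold_start"); // counted for every new sandbox
  if (initType === "on-demand") {
    out.spans.push("aws.lambda.load"); // skipped for pre-warmed sandboxes
  }
}
```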
Types of Changes
Check all that apply