
feat(llmobs): track prompt caching for openai chat completions #13755


Merged: 34 commits from evan.li/openai-prompt-caching into main on Jul 9, 2025

Conversation

@lievan (Contributor) commented Jun 24, 2025

Tracks the number of tokens read from the prompt cache for OpenAI chat completions.

OpenAI applies prompt caching automatically and returns a cached_tokens field under prompt_tokens_details in the response's usage object (see https://platform.openai.com/docs/api-reference/chat/create).
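
For reference, the usage block in such a response looks roughly like this (field names are from the OpenAI API; the values below are made up for illustration):

```python
# Illustrative shape of the usage block returned when part of the prompt is
# served from OpenAI's prompt cache (values are invented for this example).
usage = {
    "prompt_tokens": 2006,
    "completion_tokens": 300,
    "total_tokens": 2306,
    "prompt_tokens_details": {
        "cached_tokens": 1920,  # tokens read from the prompt cache
        "audio_tokens": 0,
    },
}
```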

We rely on two keys in the metrics field for prompt caching:

  • cache_read_input_tokens
  • cache_write_input_tokens

We already support both of these keys because the Bedrock and Anthropic integrations report cache read/write token counts.

OpenAI's cached_tokens maps to cache_read_input_tokens.
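
As a rough sketch (not the integration's actual code), the mapping amounts to reading cached_tokens off prompt_tokens_details and reporting it under cache_read_input_tokens; the helper name below is made up for illustration:

```python
# Minimal sketch, assuming an OpenAI v1 ChatCompletion response object.
# The helper name is illustrative and not part of ddtrace.
def extract_cache_metrics(response):
    usage = getattr(response, "usage", None)
    if usage is None:
        return {}
    metrics = {
        "input_tokens": usage.prompt_tokens,
        "output_tokens": usage.completion_tokens,
        "total_tokens": usage.total_tokens,
    }
    details = getattr(usage, "prompt_tokens_details", None)
    cached = getattr(details, "cached_tokens", 0) if details else 0
    if cached:
        # OpenAI only reports cache reads; there is no write-side counter,
        # so cache_write_input_tokens is not set for this integration.
        metrics["cache_read_input_tokens"] = cached
    return metrics
```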

Checklist

  • PR author has checked that all the criteria below are met
  • The PR description includes an overview of the change
  • The PR description articulates the motivation for the change
  • The change includes tests OR the PR description describes a testing strategy
  • The PR description notes risks associated with the change, if any
  • Newly-added code is easy to change
  • The change follows the library release note guidelines
  • The change includes or references documentation updates if necessary
  • Backport labels are set (if applicable)

Reviewer Checklist

  • Reviewer has checked that all the criteria below are met
  • Title is accurate
  • All changes are related to the pull request's stated goal
  • Avoids breaking API changes
  • Testing strategy adequately addresses listed risks
  • Newly-added code is easy to change
  • Release note makes sense to a user of the library
  • If necessary, author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
  • Backport labels are set in a manner that is consistent with the release branch maintenance policy

github-actions bot (Contributor) commented Jun 24, 2025

CODEOWNERS have been resolved as:

releasenotes/notes/oai-p-cache-78c511f97709a357.yaml                    @DataDog/apm-python
tests/contrib/openai/cassettes/v1/chat_completion_prompt_caching_cache_read.yaml  @DataDog/ml-observability
tests/contrib/openai/cassettes/v1/chat_completion_prompt_caching_cache_write.yaml  @DataDog/ml-observability
tests/contrib/openai/cassettes/v1/chat_completion_stream_prompt_caching_cache_read.yaml  @DataDog/ml-observability
tests/contrib/openai/cassettes/v1/chat_completion_stream_prompt_caching_cache_write.yaml  @DataDog/ml-observability
tests/contrib/openai/cassettes/v1/responses_prompt_caching_cache_read.yaml  @DataDog/ml-observability
tests/contrib/openai/cassettes/v1/responses_prompt_caching_cache_write.yaml  @DataDog/ml-observability
tests/contrib/openai/cassettes/v1/responses_stream_prompt_caching_cache_read.yaml  @DataDog/ml-observability
tests/contrib/openai/cassettes/v1/responses_stream_prompt_caching_cache_write.yaml  @DataDog/ml-observability
ddtrace/llmobs/_integrations/openai.py                                  @DataDog/ml-observability
tests/contrib/openai/test_openai_llmobs.py                              @DataDog/ml-observability

github-actions bot (Contributor) commented Jun 24, 2025

Bootstrap import analysis

Comparison of import times between this PR and base.

Summary

The average import time from this PR is: 275 ± 2 ms.

The average import time from base is: 277 ± 2 ms.

The import time difference between this PR and base is: -1.95 ± 0.08 ms.

Import time breakdown

The following import paths have shrunk:

ddtrace.auto 1.974 ms (0.72%)
ddtrace.bootstrap.sitecustomize 1.299 ms (0.47%)
ddtrace.bootstrap.preload 1.299 ms (0.47%)
ddtrace.internal.remoteconfig.client 0.656 ms (0.24%)
ddtrace 0.675 ms (0.25%)
ddtrace.internal._unpatched 0.032 ms (0.01%)
json 0.032 ms (0.01%)
json.decoder 0.032 ms (0.01%)
re 0.032 ms (0.01%)
enum 0.032 ms (0.01%)
types 0.032 ms (0.01%)

pr-commenter bot commented Jun 24, 2025

Benchmarks

Benchmark execution time: 2025-07-09 15:29:00

Comparing candidate commit 749f06a in PR branch evan.li/openai-prompt-caching with baseline commit 573a530 in branch main.

Found 0 performance improvements and 1 performance regression. Performance is unchanged for 523 metrics; 2 metrics are unstable.

scenario:iastaspectsospath-ospathnormcase_aspect

  • 🟥 execution_time [+398.337ns; +474.077ns] or [+11.400%; +13.567%]

@lievan lievan marked this pull request as ready for review June 24, 2025 19:56
@lievan lievan requested review from a team as code owners June 24, 2025 19:56
@lievan lievan requested review from gnufede and quinna-h June 24, 2025 19:56
@Yun-Kim (Contributor) left a comment

Nicely done! Small nits but otherwise lgtm

@lievan lievan enabled auto-merge (squash) July 3, 2025 19:11
@lievan lievan merged commit 4e913c9 into main Jul 9, 2025
462 checks passed
@lievan lievan deleted the evan.li/openai-prompt-caching branch July 9, 2025 15:37
Labels: none
Projects: none
Participants: 2