
Conversation

@verdie-g commented on Dec 2, 2025

Fixes #1959

Changes

This PR adds two new GenAI attributes to represent provider-level prompt caching:

  • gen_ai.usage.cache_read_input_tokens
  • gen_ai.usage.cache_creation_input_tokens

It also updates the description of gen_ai.usage.input_tokens to state that it must include cached tokens. OpenAI and Vertex AI already count cached tokens in input_tokens, while Anthropic excludes them. Without clarification, instrumentation would produce incompatible values. Requiring input_tokens to always include cached tokens ensures a consistent, cross-provider definition of total input tokens, with the new attributes exposing the cached breakdown.
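For illustration only (not part of the PR), here is a minimal sketch, using the OpenTelemetry Python API, of how an instrumentation could map an Anthropic-style usage payload onto these attributes under the clarified definition. The helper name, span name, and token counts are hypothetical, and it assumes both cache-read and cache-creation tokens count toward total input tokens, as described above.

```python
from opentelemetry import trace

tracer = trace.get_tracer("example.genai.instrumentation")


def record_anthropic_usage(span, usage: dict) -> None:
    """Hypothetical helper: map an Anthropic-style usage block onto the GenAI attributes.

    Anthropic reports cached tokens outside of input_tokens, so they are added
    back so that gen_ai.usage.input_tokens always includes cached tokens.
    """
    cache_read = usage.get("cache_read_input_tokens", 0)
    cache_creation = usage.get("cache_creation_input_tokens", 0)
    # Total input tokens = non-cached input + cached reads + cache-creation writes.
    span.set_attribute(
        "gen_ai.usage.input_tokens",
        usage["input_tokens"] + cache_read + cache_creation,
    )
    span.set_attribute("gen_ai.usage.cache_read_input_tokens", cache_read)
    span.set_attribute("gen_ai.usage.cache_creation_input_tokens", cache_creation)
    span.set_attribute("gen_ai.usage.output_tokens", usage["output_tokens"])


# Example: 100 non-cached input tokens plus 400 tokens served from the prompt
# cache are reported as gen_ai.usage.input_tokens = 500.
with tracer.start_as_current_span("chat") as span:
    record_anthropic_usage(
        span,
        {
            "input_tokens": 100,
            "cache_read_input_tokens": 400,
            "cache_creation_input_tokens": 0,
            "output_tokens": 42,
        },
    )
```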

Merge requirement checklist

  • CONTRIBUTING.md guidelines followed.
  • Change log entry added, according to the guidelines in When to add a changelog entry.
    • If your PR does not need a change log, start the PR title with [chore]
  • Links to the prototypes or existing instrumentations (when adding or changing conventions)

- ref: gen_ai.usage.cache_read_input_tokens
  requirement_level: recommended
- ref: gen_ai.usage.cache_creation_input_tokens
  requirement_level: opt_in
Contributor

I think opt_in refers to the user opting in to instrumenting this attribute, which shouldn't be necessary. I don't think it's about whether the user has enabled the relevant feature.


Labels

area:gen-ai · enhancement (New feature or request)

Projects

Status: Awaiting codeowners approval

Development

Successfully merging this pull request may close these issues.

More detailed token usage span attributes and metrics
