feat: updates documentation with cache token metric #125

Open · wants to merge 1 commit into main
@@ -222,6 +222,9 @@ Convert the event(s) returned by your model to the Strands Agents [StreamEvent](
"inputTokens": 234, # Number of tokens sent in the request to the model..
"outputTokens": 234, # Number of tokens that the model generated for the request.
"totalTokens": 468 # Total number of tokens (input + output).
"cacheWriteInputTokens": 234 # Optional: Number of input tokens written to cache.
"cacheReadInputTokens": 0 # Optional: Number of input tokens read from cache.

}
}
```
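For orientation, here is a minimal sketch of how a custom model provider might populate the usage block above from its own response object. The `response.usage` attribute names (`input_tokens`, `cache_write_tokens`, and so on) are hypothetical placeholders; substitute whatever your provider actually returns.

```python
def usage_from_response(response):
    """Sketch: map a hypothetical provider response to the usage block shown above."""
    input_tokens = response.usage.input_tokens
    output_tokens = response.usage.output_tokens
    usage = {
        "inputTokens": input_tokens,
        "outputTokens": output_tokens,
        "totalTokens": input_tokens + output_tokens,
    }
    # The cache fields are optional; include them only when the provider reports them.
    cache_write = getattr(response.usage, "cache_write_tokens", None)
    cache_read = getattr(response.usage, "cache_read_tokens", None)
    if cache_write is not None:
        usage["cacheWriteInputTokens"] = cache_write
    if cache_read is not None:
        usage["cacheReadInputTokens"] = cache_read
    return usage
```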
docs/user-guide/observability-evaluation/metrics.md (2 changes: 1 addition, 1 deletion)
@@ -6,7 +6,7 @@ Metrics are essential for understanding agent performance, optimizing behavior,

The Strands Agents SDK automatically tracks key metrics during agent execution:

- **Token usage**: Input tokens, output tokens, and total tokens consumed
- **Token usage**: Input tokens, output tokens, total tokens, and cache tokens consumed
- **Performance metrics**: Latency and execution time measurements
- **Tool usage**: Call counts, success rates, and execution times for each tool
- **Event loop cycles**: Number of reasoning cycles and their durations
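As a hedged illustration of the updated token-usage bullet, cumulative usage (including the cache counters, when the model provider reports them) can be read back after a call. The `result.metrics.accumulated_usage` attribute below is assumed from the SDK's metrics API and should be checked against the current documentation.

```python
from strands import Agent

agent = Agent()
result = agent("Summarize today's weather report.")

# Assumed shape: accumulated_usage mirrors the usage keys shown earlier.
usage = result.metrics.accumulated_usage
print("input:", usage.get("inputTokens"), "output:", usage.get("outputTokens"))
print("total:", usage.get("totalTokens"))
print("cache write:", usage.get("cacheWriteInputTokens", 0),
      "cache read:", usage.get("cacheReadInputTokens", 0))
```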