
Conversation

@fpacifici
Contributor

We do not have a metric that records the execution time of an EVALSHA call.
Adding one will also help us understand whether the logger is behaving properly.

The last_log_time reset logic was also wrong; this fixes it.

Removed a redundant test: the same logic was already covered by the previous one.

@fpacifici fpacifici requested review from a team as code owners January 19, 2026 23:44
@github-actions github-actions bot added the Scope: Backend Automatically applied to PRs that change backend components label Jan 19, 2026
"""
Record a single EVALSHA operation and periodically log the top offenders.
"""
metrics.timing("spans.buffer.evalsha_latency", latency_ms)
Member

@untitaker untitaker Jan 20, 2026


I think this will generate too many metric calls. We already aggregate metrics per batch, not per subtree (see the redirect count metrics). You can do this, but it will impact performance.
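The per-batch aggregation the reviewer suggests could look roughly like the sketch below: accumulate EVALSHA latencies in memory and emit a single aggregate metric when the batch is flushed, instead of one `metrics.timing` call per operation. The `BatchLatencyAggregator` class, the `emit` callback, and the derived metric names (`.avg`, `.max`) are illustrative assumptions, not the actual Sentry implementation.

```python
class BatchLatencyAggregator:
    """Accumulates EVALSHA latencies and flushes one aggregate metric per batch.

    `emit` stands in for a metrics backend call such as metrics.timing
    (hypothetical wiring -- adapt to the real metrics client).
    """

    def __init__(self, emit):
        self.emit = emit
        self.total_ms = 0.0
        self.count = 0
        self.max_ms = 0.0

    def record(self, latency_ms):
        # Cheap in-memory accumulation; no metric emission per EVALSHA.
        self.total_ms += latency_ms
        self.count += 1
        self.max_ms = max(self.max_ms, latency_ms)

    def flush(self):
        # One metric emission per batch instead of one per operation.
        if self.count:
            self.emit("spans.buffer.evalsha_latency.avg", self.total_ms / self.count)
            self.emit("spans.buffer.evalsha_latency.max", self.max_ms)
        self.total_ms = 0.0
        self.count = 0
        self.max_ms = 0.0


emitted = []
agg = BatchLatencyAggregator(lambda name, value: emitted.append((name, value)))
for ms in (2.0, 4.0, 9.0):
    agg.record(ms)
agg.flush()
print(emitted)
```

Flushing at batch boundaries keeps the metric call count proportional to the number of batches rather than the number of Redis operations, which is the performance concern raised above.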

3 participants