chore(telemetry): benchmark to track impact of metrics #12559
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@            Coverage Diff            @@
##             main   #12559     +/-  ##
=========================================
- Coverage    3.69%    3.69%   -0.01%
=========================================
  Files        1359     1359
  Lines      133666   133666
=========================================
- Hits         4940     4938       -2
- Misses     128726   128728       +2
```

☔ View full report in Codecov by Sentry.
If you want to also run this as part of the microbenchmarks CI job, you need to add it to the list in `.gitlab/benchmarks/microbenchmarks.yml`.
Datadog Report
Branch report: ✅ 0 Failed, 43 Passed, 215 Skipped, 51.63s total duration (5m 13.11s time saved)
The check regressions job is failing because this new job doesn't exist on `main` yet.
Related to the efforts in #12430. I split the benchmark out into a separate PR since it doesn't need to be tied to that change.
The goal of this PR is to add a benchmark that measures the overhead of creating instrumentation telemetry metrics.
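To make the measurement concrete, here is a minimal sketch of what such a microbenchmark can look like, assuming a pyperf-based harness like the repo's existing microbenchmarks use. `record_count_metric` is a hypothetical stand-in for the internal telemetry metric-creation call, not an actual dd-trace-py API, and the scenario name is illustrative.

```python
# A minimal sketch, assuming a pyperf harness; this is not the code in
# this PR. `record_count_metric` is a hypothetical placeholder for the
# telemetry metric-creation path being measured.
import pyperf


def record_count_metric(name, value):
    # Placeholder for the instrumented code path under measurement.
    pass


def bench_create_metrics(loops):
    # pyperf passes in the loop count and expects the elapsed time back,
    # which keeps the harness overhead out of the measurement itself.
    t0 = pyperf.perf_counter()
    for _ in range(loops):
        record_count_metric("benchmark.metric", 1)
    return pyperf.perf_counter() - t0


if __name__ == "__main__":
    runner = pyperf.Runner()
    runner.bench_time_func("telemetry-metrics-create", bench_create_metrics)
```

Running a script like this directly prints per-iteration timings; wiring it into the microbenchmarks CI job (see the note about the job list above) then tracks those numbers across commits so regressions show up automatically.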
Credits: @erikayasuda and @brettlangdon for pairing with me on this and checking out the results!
Checklist
Reviewer Checklist