chore(telemetry): benchmark to track impact of metrics #12559

Merged

brettlangdon merged 3 commits into main from wantsui/add-benchmark-for-adding-metrics on Feb 28, 2025

Conversation

@wantsui (Collaborator) commented Feb 27, 2025

Related to the efforts in #12430. I split the benchmark out into a separate PR since it doesn't need to be tied to that work.

The goal of this PR is to add a benchmark that measures the impact of creating instrumentation telemetry metrics.

Credits: @erikayasuda and @brettlangdon for pairing with me on this and checking out the results!
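
For context, the scenario code itself isn't quoted in this thread. dd-trace-py microbenchmarks pair a config.yaml with a scenario.py that subclasses bm.Scenario and yields a function the harness calls with a loop count. The sketch below is illustrative only: the class name, the num_metrics parameter, and the telemetry_writer call are assumptions, not this PR's actual code, and the telemetry API may differ between ddtrace versions.

```python
# Hypothetical sketch of benchmarks/telemetry_add_metric/scenario.py.
# dd-trace-py microbenchmarks subclass bm.Scenario and yield a function
# that performs the measured operation `loops` times. All names and the
# exact telemetry_writer signature here are assumptions.
import bm


class TelemetryAddMetric(bm.Scenario):
    num_metrics: int  # filled in per-variant from config.yaml

    def run(self):
        # Import inside run() so module import cost is not measured.
        from ddtrace.internal.telemetry import telemetry_writer

        def _(loops):
            for _ in range(loops):
                for i in range(self.num_metrics):
                    # The operation under measurement: queueing a count
                    # metric on the instrumentation telemetry writer.
                    telemetry_writer.add_count_metric(
                        "tracers", f"benchmark.metric.{i}", 1
                    )

        yield _
```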

Checklist

  • PR author has checked that all the criteria below are met
  • The PR description includes an overview of the change
  • The PR description articulates the motivation for the change
  • The change includes tests OR the PR description describes a testing strategy
  • The PR description notes risks associated with the change, if any
  • Newly-added code is easy to change
  • The change follows the library release note guidelines
  • The change includes or references documentation updates if necessary
  • Backport labels are set (if applicable)

Reviewer Checklist

  • Reviewer has checked that all the criteria below are met
  • Title is accurate
  • All changes are related to the pull request's stated goal
  • Avoids breaking API changes
  • Testing strategy adequately addresses listed risks
  • Newly-added code is easy to change
  • Release note makes sense to a user of the library
  • If necessary, author has acknowledged and discussed the performance implications of this PR as reported in the benchmarks PR comment
  • Backport labels are set in a manner that is consistent with the release branch maintenance policy

@wantsui added the changelog/no-changelog label (A changelog entry is not required for this PR.) on Feb 27, 2025
@wantsui requested a review from a team as a code owner February 27, 2025 21:25
@wantsui requested a review from juanjux February 27, 2025 21:25
github-actions bot (Contributor) commented Feb 27, 2025

CODEOWNERS have been resolved as:

benchmarks/telemetry_add_metric/config.yaml                             @DataDog/apm-core-python
benchmarks/telemetry_add_metric/scenario.py                             @DataDog/apm-core-python
.gitlab/benchmarks/microbenchmarks.yml                                  @DataDog/python-guild @DataDog/apm-core-python
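
For reference, benchmark config.yaml files in this repo define named variants whose keys become attributes on the scenario class. A hypothetical sketch follows; the actual variant names and values in this PR are not shown in this thread.

```yaml
# Hypothetical sketch of benchmarks/telemetry_add_metric/config.yaml.
# Each top-level key is a variant; its settings become attributes on the
# bm.Scenario subclass. The names and values below are illustrative.
small:
  num_metrics: 10
large:
  num_metrics: 1000
```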

@codecov-commenter commented
Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 3.69%. Comparing base (8c4d62d) to head (7ff2789).
Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main   #12559      +/-   ##
==========================================
- Coverage    3.69%    3.69%   -0.01%     
==========================================
  Files        1359     1359              
  Lines      133666   133666              
==========================================
- Hits         4940     4938       -2     
- Misses     128726   128728       +2     


@brettlangdon (Member) left a comment

If you want to also run this as part of the microbenchmarks CI job, you need to add it to the list in .gitlab/benchmarks/microbenchmarks.yml.
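
The exact contents of that file aren't shown in this thread; schematically, registering the scenario means adding its directory name to the CI job's scenario list, something like the following. The job name, structure, and existing entries are assumptions, not the file's actual contents.

```yaml
# Hypothetical sketch -- not the actual .gitlab/benchmarks/microbenchmarks.yml.
# The new benchmark is opted into CI by listing its scenario name alongside
# the existing ones.
microbenchmarks:
  parallel:
    matrix:
      - SCENARIO:
          - "span"                  # existing scenario (illustrative)
          - "telemetry_add_metric"  # new benchmark from this PR
```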

@wantsui requested a review from a team as a code owner February 27, 2025 21:44
@wantsui changed the title from chore(telemetry-metrics): benchmark to track impact of metrics to chore(telemetry): benchmark to track impact of metrics on Feb 27, 2025
@datadog-dd-trace-py-rkomorn commented

Datadog Report

Branch report: wantsui/add-benchmark-for-adding-metrics
Commit report: dc15d86
Test service: dd-trace-py

✅ 0 Failed, 43 Passed, 215 Skipped, 51.63s Total duration (5m 13.11s time saved)

@brettlangdon enabled auto-merge (squash) on February 28, 2025 17:00
@brettlangdon (Member) commented

The check regressions job is failing because this new job doesn't exist on main yet, so I'm merging manually rather than removing the job, merging, and then re-adding it in a new PR.

@brettlangdon merged commit 542f62a into main on Feb 28, 2025
804 of 808 checks passed
@brettlangdon deleted the wantsui/add-benchmark-for-adding-metrics branch on February 28, 2025 20:12