[APMSP-1280] pkg/trace: refactor peer tags config and add to info endpoint #27603
Conversation
Looks good overall, maybe remove the code that is commented out?
Yes, for sure. There's some cleanup to do, and the commented tests are there for now so I can go back and create their equivalents in the config code, so we cover those cases where that logic now lives.
Regression Detector Results

Run ID: 091982f6-df4e-4e46-9d30-49121b848cdf
Metrics dashboard
Target profiles
Baseline: ae49421

Performance changes are noted in the perf column of each table:

No significant changes in experiment optimization goals

Confidence level: 90.00%

There were no significant changes in experiment optimization goals at this confidence level and effect size tolerance.
perf | experiment | goal | Δ mean % | Δ mean % CI | links |
---|---|---|---|---|---|
➖ | tcp_syslog_to_blackhole | ingress throughput | +0.80 | [-12.06, +13.65] | Logs |
➖ | otel_to_otel_logs | ingress throughput | +0.61 | [-0.20, +1.43] | Logs |
➖ | file_tree | memory utilization | +0.14 | [+0.08, +0.21] | Logs |
➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.01, +0.01] | Logs |
➖ | uds_dogstatsd_to_api | ingress throughput | +0.00 | [-0.00, +0.00] | Logs |
➖ | idle | memory utilization | -0.10 | [-0.13, -0.06] | Logs |
➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | -0.14 | [-1.01, +0.74] | Logs |
➖ | pycheck_1000_100byte_tags | % cpu utilization | -0.18 | [-4.88, +4.52] | Logs |
➖ | basic_py_check | % cpu utilization | -0.48 | [-3.07, +2.11] | Logs |
Explanation
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
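To make the criteria concrete, here is a small illustrative sketch in Go (not the Regression Detector's actual code); `isRegression` and its parameters are hypothetical names:

```go
package main

import "fmt"

// isRegression applies the three criteria described above to one experiment.
// Illustrative helper only; not the Regression Detector's implementation.
func isRegression(deltaMeanPct, ciLow, ciHigh float64, erratic bool) bool {
	bigEnough := deltaMeanPct >= 5.0 || deltaMeanPct <= -5.0 // estimated |Δ mean %| ≥ 5.00%
	ciExcludesZero := ciLow > 0 || ciHigh < 0                // 90.00% CI does not contain zero
	return bigEnough && ciExcludesZero && !erratic
}

func main() {
	// tcp_syslog_to_blackhole from the table above: +0.80% with CI [-12.06, +13.65]
	fmt.Println(isRegression(0.80, -12.06, 13.65, false)) // false: change is small and the CI spans zero
}
```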
As part of moving towards enabling client-side-stats by default, we want to add some parametric tests. To provide some peer_tags to test with, this change starts returning the peer_tags as expected by the datadog-agent (PR for that change here: DataDog/datadog-agent#27603).
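For context, a minimal sketch of what carrying peer_tags in a client stats payload could look like; the `groupedStats` struct and its field values are hypothetical stand-ins, not the agent's real protobuf types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// groupedStats is a hypothetical stand-in for one client stats bucket entry
// that carries peer tags alongside the usual aggregation keys.
type groupedStats struct {
	Service  string   `json:"service"`
	Name     string   `json:"name"`
	PeerTags []string `json:"peer_tags"` // example values only
	Hits     uint64   `json:"hits"`
}

func main() {
	s := groupedStats{
		Service:  "checkout",
		Name:     "db.query",
		PeerTags: []string{"peer.service:postgres"},
		Hits:     42,
	}
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}
```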
LGTM
Test changes on VM

Use this command from test-infra-definitions to manually test this PR's changes on a VM: `inv create-vm --pipeline-id=40684404 --os-family=ubuntu`

Note: This applies to commit 047b943
/merge
🚂 MergeQueue: pull request added to the queue
What does this PR do?
This PR refactors the configuration of peer tags so that it is consistent across the various pieces of code that use them, and exposes the configured peer tags through the agent's info endpoint.
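A minimal sketch of the general idea, assuming a hypothetical `agentConfig` type and `/info` handler rather than the agent's actual code: the trace-agent keeps one canonical peer-tag list in its config and serializes it into the info response.

```go
package main

import (
	"encoding/json"
	"net/http"
)

// agentConfig is a hypothetical stand-in for the trace-agent configuration,
// holding one canonical list of peer tag keys.
type agentConfig struct {
	PeerTags []string
}

// infoHandler serializes the configured peer tags into an /info-style
// response so clients can discover which keys to aggregate stats by.
func infoHandler(cfg *agentConfig) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(map[string][]string{
			"peer_tags": cfg.PeerTags,
		})
	}
}

func main() {
	cfg := &agentConfig{PeerTags: []string{"peer.service"}} // example value only
	http.Handle("/info", infoHandler(cfg))
	http.ListenAndServe("127.0.0.1:8126", nil)
}
```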
Motivation
Clients will need peer tags in order to calculate trace stats. We want to make sure the same set of peer tags is used everywhere.
Additional Notes
Possible Drawbacks / Trade-offs
Describe how to test/QA your changes
Write an e2e test that configures peer tagging and ensures we get the expected tags in stats.
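One possible shape for such a test, sketched with a hypothetical `fetchStatsPeerTags` helper standing in for the real e2e harness:

```go
package peertags_test

import (
	"slices"
	"testing"
)

// fetchStatsPeerTags is a hypothetical helper standing in for the real e2e
// harness: it would start an agent with peer tagging enabled, send traces,
// and return the peer tag keys observed in the computed stats.
func fetchStatsPeerTags(t *testing.T) []string {
	t.Helper()
	return []string{"peer.service"} // placeholder result for the sketch
}

func TestPeerTagsInStats(t *testing.T) {
	got := fetchStatsPeerTags(t)
	if !slices.Contains(got, "peer.service") { // example expected key only
		t.Fatalf("expected peer.service in stats peer tags, got %v", got)
	}
}
```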