[EBPF] telemetry: clean maps before stopping the manager #32522
Conversation
Regression Detector Results
Metrics dashboard
Baseline: 7d64f0b
Optimization Goals: ✅ No significant changes detected
perf | experiment | goal | Δ mean % | Δ mean % CI | trials | links |
---|---|---|---|---|---|---|
➖ | quality_gate_idle | memory utilization | +0.25 | [+0.21, +0.28] | 1 | Logs bounds checks dashboard |
➖ | quality_gate_idle_all_features | memory utilization | +0.19 | [+0.11, +0.27] | 1 | Logs bounds checks dashboard |
➖ | file_to_blackhole_500ms_latency | egress throughput | +0.07 | [-0.71, +0.84] | 1 | Logs |
➖ | file_to_blackhole_1000ms_latency | egress throughput | +0.06 | [-0.71, +0.83] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency_http2 | egress throughput | +0.06 | [-0.81, +0.92] | 1 | Logs |
➖ | file_tree | memory utilization | +0.04 | [-0.09, +0.18] | 1 | Logs |
➖ | file_to_blackhole_100ms_latency | egress throughput | +0.04 | [-0.66, +0.74] | 1 | Logs |
➖ | file_to_blackhole_300ms_latency | egress throughput | +0.02 | [-0.62, +0.67] | 1 | Logs |
➖ | quality_gate_logs | % cpu utilization | +0.00 | [-3.24, +3.25] | 1 | Logs |
➖ | tcp_dd_logs_filter_exclude | ingress throughput | +0.00 | [-0.01, +0.01] | 1 | Logs |
➖ | uds_dogstatsd_to_api | ingress throughput | -0.01 | [-0.11, +0.09] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency_http1 | egress throughput | -0.03 | [-0.89, +0.83] | 1 | Logs |
➖ | file_to_blackhole_0ms_latency | egress throughput | -0.05 | [-0.90, +0.80] | 1 | Logs |
➖ | tcp_syslog_to_blackhole | ingress throughput | -0.18 | [-0.24, -0.12] | 1 | Logs |
➖ | file_to_blackhole_1000ms_latency_linear_load | egress throughput | -0.21 | [-0.67, +0.25] | 1 | Logs |
➖ | uds_dogstatsd_to_api_cpu | % cpu utilization | -0.95 | [-1.63, -0.27] | 1 | Logs |
Bounds Checks: ✅ Passed
perf | experiment | bounds_check_name | replicates_passed | links |
---|---|---|---|---|
✅ | file_to_blackhole_0ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http1 | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http1 | memory_usage | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http2 | lost_bytes | 10/10 | |
✅ | file_to_blackhole_0ms_latency_http2 | memory_usage | 10/10 | |
✅ | file_to_blackhole_1000ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_1000ms_latency_linear_load | memory_usage | 10/10 | |
✅ | file_to_blackhole_100ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_100ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_300ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_300ms_latency | memory_usage | 10/10 | |
✅ | file_to_blackhole_500ms_latency | lost_bytes | 10/10 | |
✅ | file_to_blackhole_500ms_latency | memory_usage | 10/10 | |
✅ | quality_gate_idle | memory_usage | 10/10 | bounds checks dashboard |
✅ | quality_gate_idle_all_features | memory_usage | 10/10 | bounds checks dashboard |
✅ | quality_gate_logs | lost_bytes | 10/10 | |
✅ | quality_gate_logs | memory_usage | 10/10 | |
Explanation
Confidence level: 90.00%
Effect size tolerance: |Δ mean %| ≥ 5.00%
Performance changes are noted in the perf column of each table:
- ✅ = significantly better comparison variant performance
- ❌ = significantly worse comparison variant performance
- ➖ = no significant change in performance
A regression test is an A/B test of target performance in a repeatable rig, where "performance" is measured as "comparison variant minus baseline variant" for an optimization goal (e.g., ingress throughput). Due to intrinsic variability in measuring that goal, we can only estimate its mean value for each experiment; we report uncertainty in that value as a 90.00% confidence interval denoted "Δ mean % CI".
For each experiment, we decide whether a change in performance is a "regression" -- a change worth investigating further -- if all of the following criteria are true:
- Its estimated |Δ mean %| ≥ 5.00%, indicating the change is big enough to merit a closer look.
- Its 90.00% confidence interval "Δ mean % CI" does not contain zero, indicating that if our statistical model is accurate, there is at least a 90.00% chance there is a difference in performance between baseline and comparison variants.
- Its configuration does not mark it "erratic".
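To make the criteria concrete, here is a small illustrative helper (not part of the Regression Detector's actual code) that applies all three checks to a row from the table above:

```go
package main

import "fmt"

// isRegression applies the three criteria above: |Δ mean %| must be at least
// 5.00%, the 90.00% confidence interval must exclude zero, and the experiment
// must not be configured as "erratic".
func isRegression(deltaMeanPct, ciLow, ciHigh float64, erratic bool) bool {
	bigEnough := deltaMeanPct >= 5.0 || deltaMeanPct <= -5.0
	ciExcludesZero := ciLow > 0 || ciHigh < 0
	return bigEnough && ciExcludesZero && !erratic
}

func main() {
	// quality_gate_idle above: +0.25 [+0.21, +0.28]. The CI excludes zero,
	// but the effect size is well under 5.00%, so it is not a regression.
	fmt.Println(isRegression(0.25, 0.21, 0.28, false)) // false
}
```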
CI Pass/Fail Decision
✅ Passed. All Quality Gates passed.
- quality_gate_logs, bounds check lost_bytes: 10/10 replicas passed. Gate passed.
- quality_gate_logs, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle_all_features, bounds check memory_usage: 10/10 replicas passed. Gate passed.
- quality_gate_idle, bounds check memory_usage: 10/10 replicas passed. Gate passed.
// Cleanup mapKeys (initialized in initializeMapErrTelemetryMap)
h := keyHash()
for _, mapName := range maps {
    delete(e.mapKeys, mapTelemetryKey(mapName, mn))
}
Why not just create a new empty map instead of deleting each element?

e.mapKeys = make(map[telemetryKey]uint64)
There might be more keys in the map for other modules; the `ebpfTelemetry` object is a singleton for all managers.
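A minimal sketch of the point being made (the key type and module names here are simplified stand-ins, not the agent's actual types): rebuilding the map with `make` would wipe every module's keys, while per-key deletion only removes the stopping module's entries.

```go
package main

import "fmt"

// telemetryKey is a simplified stand-in for the real key type: one entry per
// {module, map} pair, all stored in a single map on the telemetry singleton.
type telemetryKey struct {
	module  string
	mapName string
}

func main() {
	mapKeys := map[telemetryKey]uint64{
		{module: "npm", mapName: "connections"}: 1,
		{module: "usm", mapName: "connections"}: 2,
		{module: "gpu", mapName: "gpu_events"}:  3,
	}

	// Clean up only the "gpu" module. Deleting during range is safe in Go.
	for k := range mapKeys {
		if k.module == "gpu" {
			delete(mapKeys, k)
		}
	}

	fmt.Println(len(mapKeys)) // 2: the npm and usm entries survive
}
```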
How does this handle a map that is shared between two modules? (I think the connections map is shared between NPM and USM.) It is really a more general question of how the telemetry is counted in that case: IIUC we will have two entries in the telemetry map, since the key is a tuple of {module, map}.
Are we going to show the errors on that map for each module separately?
cc @usamasaqib
This is a good question. Looking at the code, it seems we may be handling shared maps incorrectly. For example, in the case of the connections map we would just be dropping error telemetry from USM code and only recording it for NPM code.
I'll try to write a test to verify whether this is happening.
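A rough sketch of the kind of test suggested here, with throwaway stand-in types (the agent's real telemetry API differs): register the same map under two modules and assert that cleaning up one module keeps the other's entry.

```go
package telemetrytest

import "testing"

// key mirrors the {module, map} tuple described above.
type key struct{ module, mapName string }

// fakeTelemetry is a stand-in for the telemetry singleton.
type fakeTelemetry struct{ entries map[key]struct{} }

func (f *fakeTelemetry) register(module, mapName string) {
	f.entries[key{module, mapName}] = struct{}{}
}

func (f *fakeTelemetry) cleanupModule(module string) {
	for k := range f.entries {
		if k.module == module {
			delete(f.entries, k)
		}
	}
}

func TestSharedMapTelemetryIsPerModule(t *testing.T) {
	tel := &fakeTelemetry{entries: map[key]struct{}{}}
	tel.register("npm", "connections")
	tel.register("usm", "connections")

	tel.cleanupModule("usm")

	if _, ok := tel.entries[key{"npm", "connections"}]; !ok {
		t.Fatal("cleaning up usm must not drop npm telemetry for the shared connections map")
	}
}
```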
pkg/ebpf/telemetry/modifier.go (Outdated)
if perfCollector != nil {
    perfCollector.unregisterTelemetry(m)
}
Do we really want to do this in the modifier's `BeforeStop`? That feels very wrong design-wise. This should be handled by the `ebpfErrorsTelemetry` itself and not by the modifier.
`perfCollector` is not even part of errors telemetry. This should definitely not be done here.
You're right; I thought it was part of the same module since I saw the `UnregisterTelemetry` calls. Removed.
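The design this thread converges on, as a hedged sketch (interfaces and signatures simplified, not the agent's actual ones): the modifier's `BeforeStop` hook only delegates, and the errors-telemetry object owns its own cleanup.

```go
package telemetry

// errorsTelemetry is a simplified stand-in for the ebpfErrorsTelemetry
// interface; only the cleanup responsibility matters for this sketch.
type errorsTelemetry interface {
	cleanupModule(module string) error
}

// ErrorsTelemetryModifier delegates to the telemetry object and, per the
// review, does not touch perfCollector, which lives outside errors telemetry.
type ErrorsTelemetryModifier struct {
	tel errorsTelemetry
}

// BeforeStop is the manager-modifier hook that runs before the eBPF manager
// stops; all the actual cleanup logic stays inside the telemetry object.
func (m *ErrorsTelemetryModifier) BeforeStop(module string) error {
	return m.tel.cleanupModule(module)
}
```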
@@ -60,6 +60,7 @@ func (k *telemetryKey) String() string {
 type ebpfErrorsTelemetry interface {
 	sync.Locker
 	fill([]names.MapName, names.ModuleName, *maps.GenericMap[uint64, mapErrTelemetry], *maps.GenericMap[uint64, helperErrTelemetry]) error
+	cleanup([]names.MapName, names.ModuleName, *maps.GenericMap[uint64, mapErrTelemetry], *maps.GenericMap[uint64, helperErrTelemetry]) error
nit: it would be useful to document the functions and their parameters.
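For the nit, one possible shape for those doc comments on the interface quoted above (the comment wording is illustrative, not taken from the PR):

```go
type ebpfErrorsTelemetry interface {
	sync.Locker
	// fill seeds the per-{module, map} telemetry entries for the given maps
	// in the mapErrTelemetry and helperErrTelemetry eBPF maps.
	fill([]names.MapName, names.ModuleName, *maps.GenericMap[uint64, mapErrTelemetry], *maps.GenericMap[uint64, helperErrTelemetry]) error
	// cleanup removes the entries fill created, so a restarted manager does
	// not collide with leftover keys in those maps.
	cleanup([]names.MapName, names.ModuleName, *maps.GenericMap[uint64, mapErrTelemetry], *maps.GenericMap[uint64, helperErrTelemetry]) error
}
```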
LGTM for USM-owned files.
Uncompressed package size comparison
Comparison with ancestor
Diff per package
Decision: ✅ Passed
Test changes on VM
Use this command from test-infra-definitions to manually test this PR's changes on a VM:
inv aws.create-vm --pipeline-id=51999561 --os-family=ubuntu
Note: This applies to commit 6f874b3
/merge
Devflow running:
What does this PR do?
This PR changes how the telemetry cleanup is done. Instead of relying on an extra call, we use the manager modifier `BeforeStop` method introduced in #32453 to clean up automatically, without explicit calls (see the sketch below). Cleanup for other structures the telemetry modifier created has also been added to that hook.
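As a rough, simplified illustration of the mechanism (minimal stand-in types, not the agent's real manager API): the manager runs every registered modifier's `BeforeStop` hook when it stops, so telemetry cleanup no longer needs an explicit call at each call site.

```go
package main

import "fmt"

// modifier is a minimal stand-in for the manager-modifier interface from
// #32453; the real interface in the agent has more hooks and richer signatures.
type modifier interface {
	BeforeStop() error
}

// mgr is a toy manager that runs modifier hooks before stopping.
type mgr struct{ modifiers []modifier }

func (m *mgr) Stop() {
	// Hooks run first, so telemetry state is cleaned before teardown and a
	// later restart does not trip over leftover map entries.
	for _, mod := range m.modifiers {
		if err := mod.BeforeStop(); err != nil {
			fmt.Println("BeforeStop error:", err)
		}
	}
	fmt.Println("manager stopped")
}

type telemetryModifier struct{}

func (telemetryModifier) BeforeStop() error {
	fmt.Println("telemetry maps cleaned")
	return nil
}

func main() {
	m := &mgr{modifiers: []modifier{telemetryModifier{}}}
	m.Stop() // cleanup happens automatically, no explicit telemetry call
}
```

Motivation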
Fixes an error where the GPU and USM modules could not start at the same time: the shared-libraries program would not restart because of leftover data in the telemetry maps.
Describe how you validated your changes
Unit tests included.
Possible Drawbacks / Trade-offs
Additional Notes