Cache instruments so repeatedly creating identical instruments doesn't leak memory #4820
Conversation
Codecov Report

@@           Coverage Diff           @@
##            main    #4820    +/-  ##
=======================================
  Coverage   82.3%    82.3%
=======================================
  Files        226      226
  Lines      18481    18557    +76
=======================================
+ Hits       15222    15286    +64
- Misses      2973     2983    +10
- Partials     286      288     +2
I think this looks like a decent proposal. Looking back through the record, we had originally tried to do this.
Ah, cool. I'd definitely like feedback from @MadVikingGod, then.
That work was pulled out because of added complexity, trying to get something working, and concerns over memory freeing. At first glance, this looks to add caching in a simple way. I would suggest adding a benchmark to ensure that it performs better when instruments are reused and doesn't affect single-use too much.
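A benchmark along those lines might look like the following. This is only a sketch of the idea, not a benchmark from this PR; the package, benchmark, and instrument names are made up for illustration.

```go
package metric_test

import (
	"context"
	"testing"

	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

// BenchmarkRepeatedInstrumentCreation requests the same instrument from one
// Meter on every iteration, so after the first iteration each creation call
// should return the cached instrument rather than allocate a new one.
func BenchmarkRepeatedInstrumentCreation(b *testing.B) {
	ctx := context.Background()
	meter := sdkmetric.NewMeterProvider().Meter("bench")

	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		ctr, err := meter.Int64Counter("requests")
		if err != nil {
			b.Fatal(err)
		}
		ctr.Add(ctx, 1)
	}
}
```

Moving the MeterProvider and Meter construction inside the loop would exercise the single-use path instead of the cached one.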
I think this is still going to be needed to ensure that view-modified instruments which result in the same aggregators actually get the same aggregators. But I haven't tested it.
This is a bit worrisome. Solving the issue by returning the same instrument instance may lead users into thinking they can just make the same calls repeatedly. Should we document that only the callbacks from the first call are used?
Looks like Python logs a warning and ignores callbacks after the first: https://github.com/open-telemetry/opentelemetry-python/blob/975733c71473cddddd0859c6fcbd2b02405f7e12/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/__init__.py#L171
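To make the situation being discussed concrete, here is a rough sketch of what a duplicate creation with callbacks looks like; under the behavior proposed here, only the callback from the first creation would be used. The instrument and package names are illustrative, not taken from this PR.

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel/metric"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
	meter := sdkmetric.NewMeterProvider().Meter("example")

	// First creation: this callback is the one that gets registered.
	_, _ = meter.Int64ObservableCounter("queue.depth",
		metric.WithInt64Callback(func(_ context.Context, o metric.Int64Observer) error {
			o.Observe(1)
			return nil
		}),
	)

	// Identical duplicate creation: returns the cached instrument; under the
	// proposed behavior this second callback is ignored (as Python does).
	_, _ = meter.Int64ObservableCounter("queue.depth",
		metric.WithInt64Callback(func(_ context.Context, o metric.Int64Observer) error {
			o.Observe(2)
			return nil
		}),
	)
}
```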
This should resolve #3527
+1 to this behavior being implemented here.
This now ignores callbacks after the first.

For benchmarking, if I run the existing benchmarks:

From main: […]
This PR: […]

If I move NewMeterProvider and Meter() inside the loop (so that it isn't using the cached instrument), I get:

From main: […]
This PR: […]

That is slightly slower, but partially because we are doing some extra instantiation of the caches in the MeterProvider.
Revert "Cache instruments so repeatedly creating identical instruments doesn't leak memory (open-telemetry#4820)". This reverts commit 1978044.
…ith callbacks (#5606): In #4820, I only added a comment describing the behavior to `Int64ObservableCounter`, but forgot other instruments. This adds the comment to all observable instruments. Fixes #5561
Co-authored-by: Robert Pająk <pellared@hotmail.com>
Co-authored-by: Tyler Yahn <MrAlias@users.noreply.github.com>
Resolve #3527
Fixes open-telemetry/opentelemetry-go-contrib#4226.
This is one potential solution to the problem that creating multiple instances of an instrumentation library leaks memory. Alternatively, we could require users to cache the instrumentation library instance itself, as was done in googleapis/google-api-go-client#2329.
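For illustration only, here is a minimal sketch of the caching idea, not the implementation in this PR: instruments are looked up in a map keyed by their identifying fields, so creating an identical instrument again returns the already-built one instead of allocating a new one. All names below (instID, cache, Lookup) are hypothetical.

```go
package instcache

import "sync"

// instID is a hypothetical identity key: two instruments with the same
// name, description, unit, and kind are considered identical.
type instID struct {
	name        string
	description string
	unit        string
	kind        string
}

// cache returns a previously created value for an identical instrument
// instead of building (and leaking) a new one on every creation call.
type cache[T any] struct {
	sync.Mutex
	data map[instID]T
}

// Lookup returns the cached value for key, calling create only on a miss.
func (c *cache[T]) Lookup(key instID, create func() T) T {
	c.Lock()
	defer c.Unlock()
	if c.data == nil {
		c.data = make(map[instID]T)
	}
	if v, ok := c.data[key]; ok {
		return v
	}
	v := create()
	c.data[key] = v
	return v
}
```

A Meter holding something like this per instrument kind would make repeated, identical creation calls return the same instance after the first.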
This works well for synchronous instruments, as making observations on the instrument from multiple instances of a library correctly aggregates across instances. But for asynchronous instruments, this is a bit more confusing: for example, what should happen if I instantiate two instances of the runtime metrics library? Presumably the last observation should win?
This also only solves the problem of caching the instruments themselves. The pattern of making multiple instances of an instrumentation library would still leak memory if each instance created and registered a new callback. If RegisterCallback is used, the callback can later be removed with the returned Registration's Unregister(), but if the callback is passed as an option to the instrument, it will exist forever, which would also leak memory.