
Add functions to count objects in use and objects in pool #29

Merged

nickrobinson251 merged 8 commits into main from npr-pool-size on Nov 1, 2023

Conversation

nickrobinson251
Member

  • useful for tracking purposes (a usage sketch follows below)
  • doesn't change any current behaviour (though I have misgivings about how Pool behaves: Rethink Pool design #28)
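
For the tracking use-case, here's a minimal sketch of how the new counts might be polled for metrics. The `log_pool_stats` helper, the import path, and the 10-second interval are illustrative assumptions, not part of this PR; the function names follow the final merged API:

```julia
using ConcurrentUtilities: Pools

# Hypothetical helper: periodically report the pool counters for tracking.
function log_pool_stats(pool; interval=10)
    while true
        @info "pool stats" limit=Pools.limit(pool) in_use=Pools.in_use(pool) in_pool=Pools.in_pool(pool)
        sleep(interval)
    end
end
```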


codecov bot commented Oct 3, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (c5cb574) 89.63% compared to head (ad2f4b0) 89.86%.

❗ Current head ad2f4b0 differs from pull request most recent head adf4fa0. Consider uploading reports for the commit adf4fa0 to get more accurate results

Additional details and impacted files
@@            Coverage Diff             @@
##             main      #29      +/-   ##
==========================================
+ Coverage   89.63%   89.86%   +0.22%     
==========================================
  Files           8        8              
  Lines         357      365       +8     
==========================================
+ Hits          320      328       +8     
  Misses         37       37              
Files Coverage Δ
src/pools.jl 95.08% <100.00%> (+0.74%) ⬆️


@nickrobinson251 nickrobinson251 marked this pull request as ready for review October 10, 2023 11:31
src/pools.jl Outdated
Comment on lines 60 to 66
"""
Pools.max(pool::Pool) -> Int

Return the maximum number of objects permitted to be in use at the same time.
See `Pools.permits(pool)` for the number of objects currently in use.
"""
max(pool::Pool) = Base.@lock pool.lock pool.max
Member

I think capacity might be a good name for this? Or max_capacity?

Or maybe max_allowed_in_use? I actually think I like this best.

Member Author

I didn't go for "in use" ("usage", "reusable", etc.) because, although that's a good description for the HTTP use-case, this Pool functionality is pretty generic (and we already have punny names like drain!, and the docs talk about "permits").

But I also didn't like capacity, because that suggested to me a limit on the number of objects "in the pool" rather than "in use".

Happy to change to "use"-type names if you think that's better.

@NHDaly NHDaly (Member) left a comment

LGTM based on what I read here. Would be good to rename those functions though, yeah.

Comment on lines +68 to +83
limit(pool::Pool) = Base.@lock pool.lock pool.limit

"""
Pools.in_use(pool::Pool) -> Int

Return the number of objects currently in use. Less than or equal to `Pools.limit(pool)`.
"""
in_use(pool::Pool) = Base.@lock pool.lock pool.cur

"""
Pools.in_pool(pool::Pool) -> Int

Return the number of objects in the pool available for reuse.
"""
in_pool(pool::Pool) = Base.@lock pool.lock mapreduce(length, +, values(pool.keyedvalues); init=0)
in_pool(pool::Pool{Nothing}) = Base.@lock pool.lock length(pool.values)
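
A small usage sketch of the final names. The pool construction and the `acquire`/`release` signatures below are paraphrased from the package's existing API and may differ slightly; the counter values in the comments assume this exact sequence:

```julia
using ConcurrentUtilities: Pools
using ConcurrentUtilities.Pools: Pool, acquire, release

pool = Pool{Vector{UInt8}}(2)                   # at most 2 objects in use at once
buf = acquire(() -> zeros(UInt8, 1024), pool)   # create a new object on a pool miss

Pools.limit(pool)    # 2
Pools.in_use(pool)   # 1  (buf is checked out)
Pools.in_pool(pool)  # 0  (nothing waiting for reuse yet)

release(pool, buf)
Pools.in_use(pool)   # 0
Pools.in_pool(pool)  # 1  (buf is back in the pool, available for reuse)
```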
Member

Do we need the locking here? IIUC, this trades one race condition (do we read a field of a struct before someone changes it) for another (do we grab the lock first); I'm not sure this is worth a potential context switch.

Member Author

I don't think we want to risk values being mutated whilst computing the length, right?

Member

IIUC, this simply reads the count field on the Dict instance; seems safe to me.

Member

... and for arrays I thought checking their length is racy but safe

Member Author

Seems unnecessary to rely on implementation details when we could guarantee safety. What would be the benefit of not locking, given we expect the computation to take nanoseconds?

Member

Trying to grab the lock could cause the calling task to wait if the lock is already held by someone else; that would be somewhat unexpected, I'd say, for just trying to see what the limit is.

Member Author

Oh, for limit, yeah, I can see that (there's no public API to change limit; in fact, we should probably make it const).
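
If limit really is fixed after construction, Julia 1.8+ `const` fields would make that explicit and make the unlocked read unambiguously safe. A sketch only, with simplified field names rather than the actual Pool definition:

```julia
# Sketch: Julia 1.8+ allows `const` fields inside mutable structs.
mutable struct PoolSketch{T}
    const limit::Int          # fixed at construction; safe to read without the lock
    cur::Int                  # mutated under `lock`, so reads should hold the lock
    const lock::ReentrantLock
    const values::Vector{T}
end

limit(p::PoolSketch) = p.limit  # no lock needed: this field can never be reassigned
```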

Member

Reading from a mutable field in a struct from multiple threads should be locked.

There shouldn't be any risk of deadlock, as long as there's only one lock involved, so I don't see how this introduces a new kind of race condition.

If you're worried about increased contention, you could change these fields to be atomics instead. In practice usually a single global lock on a struct is fine until you can show otherwise with a contention profile. But it could make sense to switch to atomics here if you're worried about that. (The tradeoff is that the atomics add some minor overhead, and if you are mostly mutating/reading these fields when you would have had the lock anyway, then the single lock would've been more performant.)

Finally, I think these fields are mostly only used from our metrics logging in our server, so it doesn't matter if you block on a lock here?
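
For reference, a sketch of the atomics alternative mentioned above, using Julia's `@atomic` field annotations (Julia 1.7+). This is a hypothetical `AtomicCounterSketch`, not what the PR actually does:

```julia
mutable struct AtomicCounterSketch
    @atomic cur::Int   # number of objects currently in use
end

# Readers don't need a lock; each access is an atomic load.
in_use(c::AtomicCounterSketch) = @atomic c.cur

# Writers update atomically, e.g. on acquire/release.
incr!(c::AtomicCounterSketch) = @atomic c.cur += 1
decr!(c::AtomicCounterSketch) = @atomic c.cur -= 1
```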

Member

> Reading from a mutable field in a struct from multiple threads should be locked.

Well, I'm asking which bad things could happen if you don't lock. Like, could the array length value only be partially updated? That would be a good reason to lock.

> I don't see how this introduces a new kind of race condition

I meant that grabbing the lock is "racy" in the sense that sometimes it will be the reader who wins the race and sometimes the mutator (so, assuming no "partial" values can be read, it won't be a big improvement).

> it doesn't matter if you block on a lock here

I think avoiding work that is not needed is a good principle to follow.

@nickrobinson251 nickrobinson251 merged commit 4bf9b3e into main Nov 1, 2023
4 checks passed
@nickrobinson251 nickrobinson251 deleted the npr-pool-size branch November 1, 2023 14:43
Comment on lines -29 to +32
-    max::Int
+    limit::Int


HTTP.jl does a field access of this and now errors in tests (probably the access is only in tests though). Can this rename be reverted? It is very annoying to have some combinations of HTTP and ConcurrentUtils not working together...
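
One hypothetical way to smooth over the rename without reverting it (not something this PR adds, and whether it's worth doing is up to the maintainers) would be a `getproperty` shim that forwards the old field name; the cleaner fix is for HTTP.jl to call the public `Pools.limit(pool)` accessor instead of reaching into the struct:

```julia
# Hypothetical compatibility shim (not part of this PR): keep `pool.max`
# working by forwarding the old field name to the new `limit` field.
function Base.getproperty(pool::Pool, name::Symbol)
    name === :max && return getfield(pool, :limit)
    return getfield(pool, name)
end
```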
