`--disable-gil` builds use critical sections more and more. In some cases a critical section is held and will be re-entered immediately by the calling thread, and this can be hard to avoid (e.g. when we go from C code into Python code and back into related C code). Examples of this are flushing I/O and calculating the MRO on a metaclass.

When this happens we treat it as re-entrancy into the critical section and send it off to the parking lot, where the lock is released and then re-acquired. This can greatly slow down an operation that should simply re-acquire the lock.
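For illustration, the shape of the problem looks roughly like this (a hedged sketch: only `Py_BEGIN_CRITICAL_SECTION`/`Py_END_CRITICAL_SECTION` are CPython's public API, the function and the `flush` call are made up):

```c
#include <Python.h>

/* Hypothetical helper: holds self's per-object lock while calling back
 * into Python code. */
static PyObject *
flush_with_callback(PyObject *self)
{
    PyObject *result = NULL;
    Py_BEGIN_CRITICAL_SECTION(self);   /* acquires self's per-object mutex */
    /* Runs arbitrary Python code.  If that code re-enters C code which
     * also locks `self`, the inner critical section sees the mutex as
     * already held and currently gets routed through the parking lot
     * (release and re-acquire) instead of simply proceeding. */
    result = PyObject_CallMethod(self, "flush", NULL);
    Py_END_CRITICAL_SECTION();
    return result;
}
```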
It's not entirely straightforward, since we still need to correctly support detaching and resuming the critical section in the other cases of contention on the lock.

As part of this it would also be good to be able to collect stats on how often this happens, since we generally want to avoid it and prefer lock-free paths where possible.
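A stats hook could be as small as an atomic counter bumped whenever the re-entrant slow path is taken. A self-contained sketch with made-up names, using plain C11 atomics rather than CPython's `Py_STATS` machinery:

```c
#include <stdatomic.h>
#include <stdio.h>

/* Hypothetical counter: incremented whenever a critical section is
 * re-entered by the thread that already holds its mutex. */
static _Atomic unsigned long reentrant_cs_count;

static void
note_reentrant_critical_section(void)
{
    atomic_fetch_add_explicit(&reentrant_cs_count, 1, memory_order_relaxed);
}

static void
dump_reentrant_cs_stats(FILE *out)
{
    fprintf(out, "re-entrant critical sections: %lu\n",
            atomic_load_explicit(&reentrant_cs_count, memory_order_relaxed));
}
```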
Add a fast path to (single-mutex) critical section locking _iff_ the mutex
is already held by the currently active, top-most critical section of this
thread. This can matter a lot for indirectly recursive critical sections
without intervening critical sections.
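Roughly, the condition being checked is the following (a simplified, self-contained sketch; the struct and field names are stand-ins for the internal critical section data structures, not the real ones):

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for the internal structures. */
typedef struct mutex { int v; } mutex_t;

typedef struct critical_section {
    struct critical_section *prev;  /* previously active section, if any */
    mutex_t *mutex;                 /* the single mutex this section holds */
    bool active;                    /* false once detached/suspended */
} critical_section_t;

typedef struct thread_state {
    critical_section_t *top_cs;     /* top-most critical section, or NULL */
} thread_state_t;

/* The fast path applies iff the thread's top-most critical section is
 * still active and already guards exactly the mutex we are about to lock. */
static bool
fast_path_applies(const thread_state_t *ts, const mutex_t *m)
{
    const critical_section_t *top = ts->top_cs;
    return top != NULL && top->active && top->mutex == m;
}
```

The "(single-mutex)" qualifier above is why the sketch only compares one mutex pointer; the two-mutex variant would need its own handling.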
The optimisation is in, but without tracking or statistics on how often it happens. (The solution was more straightforward than anticipated because we can simply skip entering the critical section altogether, leaving the original one in place.) Avoiding the recursion wasn't easily possible in the cases I looked at (e.g. a functools.lru_cache object wrapping methods, which means the cache is accessed per class), so I'm not sure tracking this specific situation is particularly useful; it would probably fall into the general "detect contention" bucket of problems.

(Leaving this issue open for Dino to decide if we need to do more here.)
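To make the "skip it entirely" approach concrete, here is one way the begin/end pairing can stay balanced, continuing the simplified types from the sketch above (the return flag is just this sketch's choice, not necessarily how the actual change is written):

```c
/* Returns true if a new critical section was actually entered, so the
 * matching end call knows whether there is anything to pop. */
static bool
begin_critical_section(thread_state_t *ts, critical_section_t *cs, mutex_t *m)
{
    if (fast_path_applies(ts, m)) {
        return false;               /* lock already held by the active section */
    }
    /* Normal path: acquire m (possibly via the parking lot) and push cs.
     * The actual lock acquisition is elided in this sketch. */
    cs->prev = ts->top_cs;
    cs->mutex = m;
    cs->active = true;
    ts->top_cs = cs;
    return true;
}

static void
end_critical_section(thread_state_t *ts, bool entered)
{
    if (!entered) {
        return;                     /* fast path: the outer section still owns m */
    }
    /* Normal path: release the mutex (elided) and pop the section. */
    critical_section_t *top = ts->top_cs;
    ts->top_cs = top->prev;
    top->active = false;
}
```

In this sketch the fast path never touches the parking lot and never changes `ts->top_cs`, so detaching and resuming the outer section keeps working unchanged.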