
Commit 4fdcc11
ReentrantLock: wakeup a single task on unlock and add a short spin (JuliaLang#56814) (#200)
I propose a change in the implementation of `ReentrantLock` to improve its overall throughput for short critical sections and to fix the quadratic wake-up behavior where each unlock schedules **all** waiting tasks on the lock's wait queue. This implementation follows the same principles as the `Mutex` in the [parking_lot](https://github.com/Amanieu/parking_lot/tree/master) Rust crate, which is based on the WebKit [WTF::ParkingLot](https://webkit.org/blog/6161/locking-in-webkit/) class. Only the basic working principle is implemented here; further improvements such as eventual fairness will be proposed separately.

The gist of the change is that we add one extra state to the lock, essentially going from:

```
0x0 => The lock is not locked
0x1 => The lock is locked by exactly one task. No other task is waiting for it.
0x2 => The lock is locked and some other task tried to lock but failed (conflict)
```

to:

```
PARKED_BIT | LOCKED_BIT | Description
    0      |     0      | The lock is not locked, nor is anyone waiting for it.
    0      |     1      | The lock is locked by exactly one task. No other task is waiting for it.
    1      |     0      | The lock is not locked. One or more tasks are parked.
    1      |     1      | The lock is locked by exactly one task. One or more tasks are parked
           |            | waiting for the lock to become available.
```

In the current implementation we must schedule all tasks to cause a conflict (state 0x2), because on unlock we only notify any task if the lock is in the conflict state. This means that under high contention with a short critical section, the tasks are effectively spinning in the scheduler queue. With the extra state, the proposed implementation has enough information to know whether there are other tasks to be notified, which means we can always notify one task at a time while preserving the optimized path of not notifying when no tasks are waiting. To improve throughput for short critical sections, we also introduce a bounded amount of spinning before attempting to park.
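As an illustration of the new encoding, here is a minimal sketch (not the actual Base implementation; `ToyLock` and `try_acquire!` are hypothetical names) of how the two bits are used in the lock's fast path: the acquirer preserves `PARKED_BIT` and only flips `LOCKED_BIT` on, so the parked-waiters information is never lost by a successful lock.

```julia
# Simplified model of the two-bit lock state (illustration only; mirrors the
# constants introduced in the diff below).
const LOCKED_BIT = 0b01  # set while some task holds the lock
const PARKED_BIT = 0b10  # set while at least one task is parked waiting

mutable struct ToyLock
    @atomic state::UInt8
    ToyLock() = new(0x00)
end

function try_acquire!(l::ToyLock)
    # Keep PARKED_BIT as-is; only attempt to flip LOCKED_BIT on. This fails
    # (returns false) whenever the lock is already held.
    s = (@atomic :monotonic l.state) & PARKED_BIT
    return (@atomicreplace :acquire l.state s => (s | LOCKED_BIT)).success
end
```

On unlock, the symmetric CAS `LOCKED_BIT => 0x00` succeeds only when no task is parked; otherwise the slow path wakes exactly one waiter, which is the core of the fix.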
Not spinning in the scheduler queue greatly reduces the CPU utilization of the following example:

```julia
function example()
    lock = ReentrantLock()
    @sync begin
        for i in 1:10000
            Threads.@spawn begin
                @lock lock begin
                    sleep(0.001)
                end
            end
        end
    end
end
@time example()
```

Current:

```
28.890623 seconds (101.65 k allocations: 7.646 MiB, 0.25% compilation time)
```

![image](https://github.com/user-attachments/assets/dbd6ce57-c760-4f5a-b68a-27df6a97a46e)

Proposed:

```
22.806669 seconds (101.65 k allocations: 7.814 MiB, 0.35% compilation time)
```

![image](https://github.com/user-attachments/assets/b0254180-658d-4493-86d3-dea4c500b5ac)

In a micro-benchmark where 8 threads contend for a single lock with a very short critical section we see a ~2x improvement.

Current:

```
8-element Vector{Int64}:
 6258688
 5373952
 6651904
 6389760
 6586368
 3899392
 5177344
 5505024
Total iterations: 45842432
```

Proposed:

```
8-element Vector{Int64}:
 12320768
 12976128
 10354688
 12845056
  7503872
 13598720
 13860864
 11993088
Total iterations: 95453184
```

~~In the uncontended scenario the extra bookkeeping causes a 10% throughput reduction:~~ EDIT: I reverted `_trylock` to the simple case to recover the uncontended throughput, and now both implementations are in the same ballpark (without hurting the numbers above). In the uncontended scenario:

Current:

```
Total iterations: 236748800
```

Proposed:

```
Total iterations: 237699072
```

Closes JuliaLang#56182

Co-authored-by: André Guedes <andre.guedes@relational.ai>
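The contended micro-benchmark harness is not included in the commit message; a minimal sketch of one way to reproduce those per-thread iteration counts (the exact harness behind the numbers above is an assumption) is:

```julia
using Base.Threads

# Each thread repeatedly takes the lock around a very short critical section
# for a fixed duration; higher per-thread counts mean higher lock throughput.
function contended_benchmark(duration_s = 1.0)
    lk = ReentrantLock()
    counts = zeros(Int, nthreads())
    deadline = time() + duration_s
    @threads for t in 1:nthreads()
        n = 0
        while time() < deadline
            @lock lk begin
                n += 1  # very short critical section
            end
        end
        counts[t] = n
    end
    println(counts)
    println("Total iterations: ", sum(counts))
end

contended_benchmark()
```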
1 parent e36910c commit 4fdcc11

File tree

2 files changed: +106 −23 lines changed

base/lock.jl

Lines changed: 104 additions & 22 deletions
```diff
@@ -2,6 +2,14 @@
 
 const ThreadSynchronizer = GenericCondition{Threads.SpinLock}
 
+# This bit is set in the `havelock` of a `ReentrantLock` when that lock is locked by some task.
+const LOCKED_BIT = 0b01
+# This bit is set in the `havelock` of a `ReentrantLock` just before parking a task. A task is being
+# parked if it wants to lock the lock, but it is currently being held by some other task.
+const PARKED_BIT = 0b10
+
+const MAX_SPIN_ITERS = 40
+
 # Advisory reentrant lock
 """
     ReentrantLock()
```
```diff
@@ -36,7 +44,28 @@ mutable struct ReentrantLock <: AbstractLock
     # offset32 = 20, offset64 = 24
     reentrancy_cnt::UInt32
     # offset32 = 24, offset64 = 28
-    @atomic havelock::UInt8 # 0x0 = none, 0x1 = lock, 0x2 = conflict
+    #
+    # This atomic integer holds the current state of the lock instance. Only the two lowest bits
+    # are used. See `LOCKED_BIT` and `PARKED_BIT` for the bitmask for these bits.
+    #
+    # # State table:
+    #
+    # PARKED_BIT | LOCKED_BIT | Description
+    #     0      |     0      | The lock is not locked, nor is anyone waiting for it.
+    # -----------+------------+------------------------------------------------------------------
+    #     0      |     1      | The lock is locked by exactly one task. No other task is
+    #            |            | waiting for it.
+    # -----------+------------+------------------------------------------------------------------
+    #     1      |     0      | The lock is not locked. One or more tasks are parked.
+    # -----------+------------+------------------------------------------------------------------
+    #     1      |     1      | The lock is locked by exactly one task. One or more tasks are
+    #            |            | parked waiting for the lock to become available.
+    #            |            | In this state, PARKED_BIT is only ever cleared when the cond_wait lock
+    #            |            | is held (i.e. on unlock). This ensures that
+    #            |            | we never end up in a situation where there are parked tasks but
+    #            |            | PARKED_BIT is not set (which would result in those tasks
+    #            |            | potentially never getting woken up).
+    @atomic havelock::UInt8
     # offset32 = 28, offset64 = 32
     cond_wait::ThreadSynchronizer # 2 words
     # offset32 = 36, offset64 = 48
```
```diff
@@ -91,7 +120,7 @@ function islocked end
 # `ReentrantLock`.
 
 function islocked(rl::ReentrantLock)
-    return (@atomic :monotonic rl.havelock) != 0
+    return (@atomic :monotonic rl.havelock) & LOCKED_BIT != 0
 end
 
 """
```
```diff
@@ -115,17 +144,15 @@ function trylock end
 @inline function trylock(rl::ReentrantLock)
     ct = current_task()
     if rl.locked_by === ct
-        #@assert rl.havelock !== 0x00
         rl.reentrancy_cnt += 0x0000_0001
         return true
     end
     return _trylock(rl, ct)
 end
 @noinline function _trylock(rl::ReentrantLock, ct::Task)
     GC.disable_finalizers()
-    if (@atomicreplace :acquire rl.havelock 0x00 => 0x01).success
-        #@assert rl.locked_by === nothing
-        #@assert rl.reentrancy_cnt === 0
+    state = (@atomic :monotonic rl.havelock) & PARKED_BIT
+    if (@atomicreplace :acquire rl.havelock state => (state | LOCKED_BIT)).success
         rl.reentrancy_cnt = 0x0000_0001
         @atomic :release rl.locked_by = ct
         return true
```
```diff
@@ -146,23 +173,69 @@ Each `lock` must be matched by an [`unlock`](@ref).
 @inline function lock(rl::ReentrantLock)
     trylock(rl) || (@noinline function slowlock(rl::ReentrantLock)
         c = rl.cond_wait
-        lock(c.lock)
-        try
-            while true
-                if (@atomicreplace rl.havelock 0x01 => 0x02).old == 0x00 # :sequentially_consistent ? # now either 0x00 or 0x02
-                    # it was unlocked, so try to lock it ourself
-                    _trylock(rl, current_task()) && break
-                else # it was locked, so now wait for the release to notify us
-                    wait(c)
+        ct = current_task()
+        iteration = 1
+        while true
+            state = @atomic :monotonic rl.havelock
+            # Grab the lock if it isn't locked, even if there is a queue on it
+            if state & LOCKED_BIT == 0
+                GC.disable_finalizers()
+                result = (@atomicreplace :acquire :monotonic rl.havelock state => (state | LOCKED_BIT))
+                if result.success
+                    rl.reentrancy_cnt = 0x0000_0001
+                    @atomic :release rl.locked_by = ct
+                    return
                 end
+                GC.enable_finalizers()
+                continue
             end
-        finally
-            unlock(c.lock)
+
+            if state & PARKED_BIT == 0
+                # If there is no queue, try spinning a few times
+                if iteration <= MAX_SPIN_ITERS
+                    Base.yield()
+                    iteration += 1
+                    continue
+                end
+
+                # If still not locked, try setting the parked bit
+                @atomicreplace :monotonic :monotonic rl.havelock state => (state | PARKED_BIT)
+            end
+
+            # lock the `cond_wait`
+            lock(c.lock)
+
+            # Last check before we wait to make sure `unlock` did not win the race
+            # to the `cond_wait` lock and cleared the parked bit
+            state = @atomic :acquire rl.havelock
+            if state != LOCKED_BIT | PARKED_BIT
+                unlock(c.lock)
+                continue
+            end
+
+            # It was locked, so now wait for the unlock to notify us
+            wait_no_relock(c)
+
+            # Loop back and try locking again
+            iteration = 1
         end
     end)(rl)
     return
 end
 
+function wait_no_relock(c::GenericCondition)
+    ct = current_task()
+    _wait2(c, ct)
+    token = unlockall(c.lock)
+    try
+        return wait()
+    catch
+        ct.queue === nothing || list_deletefirst!(ct.queue, ct)
+        rethrow()
+    end
+end
+
+
 """
     unlock(lock)
 
```
```diff
@@ -179,18 +252,27 @@ internal counter and return immediately.
         rl.reentrancy_cnt = n
         if n == 0x0000_00000
             @atomic :monotonic rl.locked_by = nothing
-            if (@atomicswap :release rl.havelock = 0x00) == 0x02
+            result = (@atomicreplace :release :monotonic rl.havelock LOCKED_BIT => 0x00)
+            if result.success
+                return true
+            else
                 (@noinline function notifywaiters(rl)
                     cond_wait = rl.cond_wait
                     lock(cond_wait)
-                    try
-                        notify(cond_wait)
-                    finally
-                        unlock(cond_wait)
+
+                    notify(cond_wait, all=false)
+                    if !isempty(cond_wait.waitq)
+                        @atomic :release rl.havelock = PARKED_BIT
+                    else
+                        # We may have won the race to the `cond_wait` lock as a task was about to park
+                        # but we unlock anyway as any parking task will retry
+                        @atomic :release rl.havelock = 0x00
                     end
+
+                    unlock(cond_wait)
                 end)(rl)
+                return true
             end
-            return true
         end
         return false
     end)(rl) && GC.enable_finalizers()
```
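None of this changes the public `ReentrantLock` API; for reference, typical calling code that benefits from the single-task wakeup looks like the sketch below (the `parallel_count` function is illustrative, not part of the change):

```julia
# Many short critical sections contending on one lock: with this commit, each
# unlock wakes at most one parked task instead of rescheduling all of them.
function parallel_count(n)
    lk = ReentrantLock()
    total = Ref(0)
    @sync for _ in 1:n
        Threads.@spawn begin
            lock(lk)        # fast path: uncontended CAS; slow path: spin, then park
            try
                total[] += 1
            finally
                unlock(lk)  # wakes at most one parked task
            end
        end
    end
    return total[]
end
```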

test/threads.jl

Lines changed: 2 additions & 1 deletion
```diff
@@ -16,7 +16,8 @@ let lk = ReentrantLock()
     t2 = @async (notify(c2); trylock(lk))
     wait(c1)
     wait(c2)
-    @test t1.queue === lk.cond_wait.waitq
+    # wait for the task to park in the queue (it may be spinning)
+    @test timedwait(() -> t1.queue === lk.cond_wait.waitq, 1.0) == :ok
     @test t2.queue !== lk.cond_wait.waitq
     @test istaskdone(t2)
     @test !fetch(t2)
```
