
Commit e59d3c6

soheilhy authored and torvalds committed
epoll: eliminate unnecessary lock for zero timeout
We call ep_events_available() under lock when timeout is 0, and then call it
without locks in the loop for the other cases.

Instead, call ep_events_available() without lock for all cases.  For non-zero
timeouts, we will recheck after adding the thread to the wait queue.  For zero
timeout cases, by definition, user is opportunistically polling and will have
to call epoll_wait again in the future.

Note that this lock was kept in c5a282e because the whole loop was
historically under lock.

This patch results in a 1% CPU/RPC reduction in RPC benchmarks.

Link: https://lkml.kernel.org/r/20201106231635.3528496-9-soheil.kdev@gmail.com
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Suggested-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: Khazhismel Kumykov <khazhy@google.com>
Cc: Guantao Liu <guantaol@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
1 parent: 00b2763 · commit: e59d3c6
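The zero-timeout case the message refers to is an ordinary non-blocking
epoll_wait() call from userspace.  The following is a minimal sketch (not
part of the patch; registering stdin is purely illustrative) of such an
opportunistic poll, where the caller expects an immediate answer and simply
polls again later, so occasionally missing a just-queued event is harmless:

/*
 * Minimal userspace sketch: epoll_wait() with a zero timeout is an
 * opportunistic, non-blocking poll.  The fd registered here (stdin) is
 * only an example.
 */
#include <stdio.h>
#include <sys/epoll.h>
#include <unistd.h>

int main(void)
{
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = STDIN_FILENO };
        struct epoll_event ready[16];

        int epfd = epoll_create1(0);
        if (epfd < 0 || epoll_ctl(epfd, EPOLL_CTL_ADD, STDIN_FILENO, &ev) < 0) {
                perror("epoll setup");
                return 1;
        }

        /* timeout == 0: return immediately whether or not anything is ready */
        int n = epoll_wait(epfd, ready, 16, 0);
        if (n < 0) {
                perror("epoll_wait");
                return 1;
        }
        printf("%d fd(s) ready right now\n", n);

        /* Nothing ready (or a just-missed event)?  The caller polls again later. */
        close(epfd);
        return 0;
}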

File tree: 1 file changed, +12 −13 lines

fs/eventpoll.c

@@ -1743,7 +1743,7 @@ static inline struct timespec64 ep_set_mstimeout(long ms)
 static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
                    int maxevents, long timeout)
 {
-        int res, eavail = 0, timed_out = 0;
+        int res, eavail, timed_out = 0;
         u64 slack = 0;
         wait_queue_entry_t wait;
         ktime_t expires, *to = NULL;
@@ -1759,18 +1759,21 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
         } else if (timeout == 0) {
                 /*
                  * Avoid the unnecessary trip to the wait queue loop, if the
-                 * caller specified a non blocking operation. We still need
-                 * lock because we could race and not see an epi being added
-                 * to the ready list while in irq callback. Thus incorrectly
-                 * returning 0 back to userspace.
+                 * caller specified a non blocking operation.
                  */
                 timed_out = 1;
-
-                write_lock_irq(&ep->lock);
-                eavail = ep_events_available(ep);
-                write_unlock_irq(&ep->lock);
         }
 
+        /*
+         * This call is racy: We may or may not see events that are being added
+         * to the ready list under the lock (e.g., in IRQ callbacks). For, cases
+         * with a non-zero timeout, this thread will check the ready list under
+         * lock and will added to the wait queue. For, cases with a zero
+         * timeout, the user by definition should not care and will have to
+         * recheck again.
+         */
+        eavail = ep_events_available(ep);
+
         while (1) {
                 if (eavail) {
                         /*
@@ -1786,10 +1789,6 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
                 if (timed_out)
                         return 0;
 
-                eavail = ep_events_available(ep);
-                if (eavail)
-                        continue;
-
                 eavail = ep_busy_loop(ep, timed_out);
                 if (eavail)
                         continue;
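For the non-zero-timeout path, the lockless first check is safe because the
thread rechecks readiness under the lock after it has been added to the wait
queue, so a wakeup that races with the first check cannot be lost.  Below is
a userspace pthread analogue of that check / enqueue / recheck ordering; the
names (wait_for_event, post_event, ready) are hypothetical and this is not
the kernel code:

#include <pthread.h>
#include <stdbool.h>

/* Hypothetical state standing in for the ready list and ep->lock. */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wq   = PTHREAD_COND_INITIALIZER;
static bool ready;              /* "events available" flag */

/* Consumer: mirrors ep_poll()'s check, enqueue, then recheck ordering. */
void wait_for_event(void)
{
        /* 1. Racy, lockless peek (like the relocated ep_events_available()). */
        if (ready)
                return;

        /*
         * 2. "Join the wait queue" by taking the lock, then recheck under
         *    the lock before sleeping.  Any event published before this
         *    point is seen here; any event published after it wakes us.
         */
        pthread_mutex_lock(&lock);
        while (!ready)
                pthread_cond_wait(&wq, &lock);
        pthread_mutex_unlock(&lock);
}

/* Producer: like the IRQ callback queueing an event and waking waiters. */
void post_event(void)
{
        pthread_mutex_lock(&lock);
        ready = true;
        pthread_cond_signal(&wq);
        pthread_mutex_unlock(&lock);
}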
