Commit f80fe66

Merge branches 'doc.2021.11.30c', 'exp.2021.12.07a', 'fastnohz.2021.11.30c', 'fixes.2021.11.30c', 'nocb.2021.12.09a', 'nolibc.2021.11.30c', 'tasks.2021.12.09a', 'torture.2021.12.07a' and 'torturescript.2021.11.30c' into HEAD

doc.2021.11.30c: Documentation updates.
exp.2021.12.07a: Expedited-grace-period fixes.
fastnohz.2021.11.30c: Remove CONFIG_RCU_FAST_NO_HZ.
fixes.2021.11.30c: Miscellaneous fixes.
nocb.2021.12.09a: No-CB CPU updates.
nolibc.2021.11.30c: Tiny in-kernel library updates.
tasks.2021.12.09a: RCU-tasks updates, including update-side scalability.
torture.2021.12.07a: Torture-test in-kernel module updates.
torturescript.2021.11.30c: Torture-test scripting updates.
paulmckrcu committed Dec 9, 2021
9 parents 5861dad + 81f6d49 + bc849e9 + 1f8da40 + 10d4703 + b0fe9de + fd796e4 + 53b541f + 90b21bc commit f80fe66
Showing 51 changed files with 1,089 additions and 719 deletions.
11 changes: 0 additions & 11 deletions Documentation/RCU/stallwarn.rst
@@ -254,17 +254,6 @@ period (in this case 2603), the grace-period sequence number (7075), and
an estimate of the total number of RCU callbacks queued across all CPUs
(625 in this case).

In kernels with CONFIG_RCU_FAST_NO_HZ, more information is printed
for each CPU::

0: (64628 ticks this GP) idle=dd5/3fffffffffffffff/0 softirq=82/543 last_accelerate: a345/d342 dyntick_enabled: 1

The "last_accelerate:" prints the low-order 16 bits (in hex) of the
jiffies counter when this CPU last invoked rcu_try_advance_all_cbs()
from rcu_needs_cpu() or last invoked rcu_accelerate_cbs() from
rcu_prepare_for_idle(). "dyntick_enabled: 1" indicates that dyntick-idle
processing is enabled.

If the grace period ends just as the stall warning starts printing,
there will be a spurious stall-warning message, which will include
the following::
70 changes: 52 additions & 18 deletions Documentation/admin-guide/kernel-parameters.txt
@@ -4343,19 +4343,30 @@
Disable the Correctable Errors Collector,
see CONFIG_RAS_CEC help text.

rcu_nocbs= [KNL]
The argument is a cpu list, as described above.

In kernels built with CONFIG_RCU_NOCB_CPU=y, set
the specified list of CPUs to be no-callback CPUs.
Invocation of these CPUs' RCU callbacks will be
offloaded to "rcuox/N" kthreads created for that
purpose, where "x" is "p" for RCU-preempt, and
"s" for RCU-sched, and "N" is the CPU number.
This reduces OS jitter on the offloaded CPUs,
which can be useful for HPC and real-time
workloads. It can also improve energy efficiency
for asymmetric multiprocessors.
rcu_nocbs[=cpu-list]
[KNL] The optional argument is a cpu list,
as described above.

In kernels built with CONFIG_RCU_NOCB_CPU=y,
enable the no-callback CPU mode, which prevents
such CPUs' callbacks from being invoked in
softirq context. Invocation of such CPUs' RCU
callbacks will instead be offloaded to "rcuox/N"
kthreads created for that purpose, where "x" is
"p" for RCU-preempt, "s" for RCU-sched, and "g"
for the kthreads that mediate grace periods; and
"N" is the CPU number. This reduces OS jitter on
the offloaded CPUs, which can be useful for HPC
and real-time workloads. It can also improve
energy efficiency for asymmetric multiprocessors.

If a cpulist is passed as an argument, the specified
list of CPUs is set to no-callback mode from boot.

Otherwise, if the '=' sign and the cpulist
arguments are omitted, no CPU will be set to
no-callback mode from boot but the mode may be
toggled at runtime via cpusets.

rcu_nocb_poll [KNL]
Rather than requiring that offloaded CPUs
@@ -4489,10 +4500,6 @@
on rcutree.qhimark at boot time and to zero to
disable more aggressive help enlistment.

rcutree.rcu_idle_gp_delay= [KNL]
Set wakeup interval for idle CPUs that have
RCU callbacks (RCU_FAST_NO_HZ=y).

rcutree.rcu_kick_kthreads= [KNL]
Cause the grace-period kthread to get an extra
wake_up() if it sleeps three times longer than
@@ -4603,8 +4610,12 @@
in seconds.

rcutorture.fwd_progress= [KNL]
Enable RCU grace-period forward-progress testing
Specifies the number of kthreads to be used
for RCU grace-period forward-progress testing
for the types of RCU supporting this notion.
Defaults to 1 kthread; values less than zero or
greater than the number of CPUs cause the number
of CPUs to be used.

rcutorture.fwd_progress_div= [KNL]
Specify the fraction of a CPU-stall-warning
@@ -4805,6 +4816,29 @@
period to instead use normal non-expedited
grace-period processing.

rcupdate.rcu_task_collapse_lim= [KNL]
Set the maximum number of callbacks present
at the beginning of a grace period that allows
the RCU Tasks flavors to collapse back to using
a single callback queue. This switching only
occurs when rcupdate.rcu_task_enqueue_lim is
set to the default value of -1.

rcupdate.rcu_task_contend_lim= [KNL]
Set the minimum number of callback-queuing-time
lock-contention events per jiffy required to
cause the RCU Tasks flavors to switch to per-CPU
callback queuing. This switching only occurs
when rcupdate.rcu_task_enqueue_lim is set to
the default value of -1.

rcupdate.rcu_task_enqueue_lim= [KNL]
Set the number of callback queues to use for the
RCU Tasks family of RCU flavors. The default
of -1 allows this to be automatically (and
dynamically) adjusted. This parameter is intended
for use in testing.

rcupdate.rcu_task_ipi_delay= [KNL]
Set time in jiffies during which RCU tasks will
avoid sending IPIs, starting with the beginning
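For orientation only (this example is not part of the patch), the parameters documented above could be combined on a kernel command line roughly as follows; the CPU list and limit values are arbitrary illustrations, and rcutorture.fwd_progress applies only when rcutorture is in use:

        rcu_nocbs=1,3-5 rcu_nocb_poll rcupdate.rcu_task_enqueue_lim=4 rcutorture.fwd_progress=2

Per the new text, passing a bare "rcu_nocbs" with no cpulist would instead leave every CPU in normal callback mode at boot while still allowing the mode to be toggled at runtime via cpusets.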
10 changes: 3 additions & 7 deletions Documentation/timers/no_hz.rst
@@ -184,16 +184,12 @@ There are situations in which idle CPUs cannot be permitted to
enter either dyntick-idle mode or adaptive-tick mode, the most
common being when that CPU has RCU callbacks pending.

The CONFIG_RCU_FAST_NO_HZ=y Kconfig option may be used to cause such CPUs
to enter dyntick-idle mode or adaptive-tick mode anyway. In this case,
a timer will awaken these CPUs every four jiffies in order to ensure
that the RCU callbacks are processed in a timely fashion.

Another approach is to offload RCU callback processing to "rcuo" kthreads
Avoid this by offloading RCU callback processing to "rcuo" kthreads
using the CONFIG_RCU_NOCB_CPU=y Kconfig option. The specific CPUs to
offload may be selected using the "rcu_nocbs=" kernel boot parameter,
which takes a comma-separated list of CPUs and CPU ranges, for example,
"1,3-5" selects CPUs 1, 3, 4, and 5.
"1,3-5" selects CPUs 1, 3, 4, and 5. Note that CPUs specified by
the "nohz_full" kernel boot parameter are also offloaded.

The offloaded CPUs will never queue RCU callbacks, and therefore RCU
never prevents offloaded CPUs from entering either dyntick-idle mode
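As a hedged illustration (not part of this change), combining adaptive ticks with callback offloading on the same CPUs might look like this; given the note above that nohz_full CPUs are also offloaded, the explicit rcu_nocbs list is redundant and shown only for clarity:

        nohz_full=1,3-5 rcu_nocbs=1,3-5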
51 changes: 37 additions & 14 deletions include/linux/rcu_segcblist.h
@@ -69,15 +69,15 @@ struct rcu_cblist {
*
*
* ----------------------------------------------------------------------------
* | SEGCBLIST_SOFTIRQ_ONLY |
* | SEGCBLIST_RCU_CORE |
* | |
* | Callbacks processed by rcu_core() from softirqs or local |
* | rcuc kthread, without holding nocb_lock. |
* ----------------------------------------------------------------------------
* |
* v
* ----------------------------------------------------------------------------
* | SEGCBLIST_OFFLOADED |
* | SEGCBLIST_RCU_CORE | SEGCBLIST_LOCKING | SEGCBLIST_OFFLOADED |
* | |
* | Callbacks processed by rcu_core() from softirqs or local |
* | rcuc kthread, while holding nocb_lock. Waking up CB and GP kthreads, |
@@ -89,7 +89,9 @@ struct rcu_cblist {
* | |
* v v
* --------------------------------------- ----------------------------------|
* | SEGCBLIST_OFFLOADED | | | SEGCBLIST_OFFLOADED | |
* | SEGCBLIST_RCU_CORE | | | SEGCBLIST_RCU_CORE | |
* | SEGCBLIST_LOCKING | | | SEGCBLIST_LOCKING | |
* | SEGCBLIST_OFFLOADED | | | SEGCBLIST_OFFLOADED | |
* | SEGCBLIST_KTHREAD_CB | | SEGCBLIST_KTHREAD_GP |
* | | | |
* | | | |
@@ -104,9 +106,10 @@ struct rcu_cblist {
* |
* v
* |--------------------------------------------------------------------------|
* | SEGCBLIST_OFFLOADED | |
* | SEGCBLIST_KTHREAD_CB | |
* | SEGCBLIST_KTHREAD_GP |
* | SEGCBLIST_LOCKING | |
* | SEGCBLIST_OFFLOADED | |
* | SEGCBLIST_KTHREAD_GP | |
* | SEGCBLIST_KTHREAD_CB |
* | |
* | Kthreads handle callbacks holding nocb_lock, local rcu_core() stops |
* | handling callbacks. Enable bypass queueing. |
@@ -120,7 +123,8 @@ struct rcu_cblist {
*
*
* |--------------------------------------------------------------------------|
* | SEGCBLIST_OFFLOADED | |
* | SEGCBLIST_LOCKING | |
* | SEGCBLIST_OFFLOADED | |
* | SEGCBLIST_KTHREAD_CB | |
* | SEGCBLIST_KTHREAD_GP |
* | |
@@ -130,6 +134,22 @@ struct rcu_cblist {
* |
* v
* |--------------------------------------------------------------------------|
* | SEGCBLIST_RCU_CORE | |
* | SEGCBLIST_LOCKING | |
* | SEGCBLIST_OFFLOADED | |
* | SEGCBLIST_KTHREAD_CB | |
* | SEGCBLIST_KTHREAD_GP |
* | |
* | CB/GP kthreads handle callbacks holding nocb_lock, local rcu_core() |
* | handles callbacks concurrently. Bypass enqueue is enabled. |
* | Invoke RCU core so we make sure not to preempt it in the middle, |
* | leaving some urgent work unattended within a jiffy. |
* ----------------------------------------------------------------------------
* |
* v
* |--------------------------------------------------------------------------|
* | SEGCBLIST_RCU_CORE | |
* | SEGCBLIST_LOCKING | |
* | SEGCBLIST_KTHREAD_CB | |
* | SEGCBLIST_KTHREAD_GP |
* | |
@@ -143,7 +163,9 @@ struct rcu_cblist {
* | |
* v v
* ---------------------------------------------------------------------------|
* | |
* | | |
* | SEGCBLIST_RCU_CORE | | SEGCBLIST_RCU_CORE | |
* | SEGCBLIST_LOCKING | | SEGCBLIST_LOCKING | |
* | SEGCBLIST_KTHREAD_CB | SEGCBLIST_KTHREAD_GP |
* | | |
* | GP kthread woke up and | CB kthread woke up and |
@@ -159,7 +181,7 @@ struct rcu_cblist {
* |
* v
* ----------------------------------------------------------------------------
* | 0 |
* | SEGCBLIST_RCU_CORE | SEGCBLIST_LOCKING |
* | |
* | Callbacks processed by rcu_core() from softirqs or local |
* | rcuc kthread, while holding nocb_lock. Forbid nocb_timer to be armed. |
@@ -168,17 +190,18 @@ struct rcu_cblist {
* |
* v
* ----------------------------------------------------------------------------
* | SEGCBLIST_SOFTIRQ_ONLY |
* | SEGCBLIST_RCU_CORE |
* | |
* | Callbacks processed by rcu_core() from softirqs or local |
* | rcuc kthread, without holding nocb_lock. |
* ----------------------------------------------------------------------------
*/
#define SEGCBLIST_ENABLED BIT(0)
#define SEGCBLIST_SOFTIRQ_ONLY BIT(1)
#define SEGCBLIST_KTHREAD_CB BIT(2)
#define SEGCBLIST_KTHREAD_GP BIT(3)
#define SEGCBLIST_OFFLOADED BIT(4)
#define SEGCBLIST_RCU_CORE BIT(1)
#define SEGCBLIST_LOCKING BIT(2)
#define SEGCBLIST_KTHREAD_CB BIT(3)
#define SEGCBLIST_KTHREAD_GP BIT(4)
#define SEGCBLIST_OFFLOADED BIT(5)

struct rcu_segcblist {
struct rcu_head *head;
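To make the new flag layout concrete, here is a small stand-alone C sketch (not kernel code; the helper name is hypothetical) showing how the fully offloaded state in the diagram above is now expressed as a combination of bits, with SEGCBLIST_RCU_CORE cleared once rcu_core() stops handling callbacks:

#include <stdio.h>

#define BIT(n) (1UL << (n))

#define SEGCBLIST_ENABLED       BIT(0)
#define SEGCBLIST_RCU_CORE      BIT(1)
#define SEGCBLIST_LOCKING       BIT(2)
#define SEGCBLIST_KTHREAD_CB    BIT(3)
#define SEGCBLIST_KTHREAD_GP    BIT(4)
#define SEGCBLIST_OFFLOADED     BIT(5)

/* Hypothetical helper: true when the CB/GP kthreads own the callbacks and
 * rcu_core() no longer touches them, per the offloaded state in the diagram. */
static int fully_offloaded_sketch(unsigned long flags)
{
        unsigned long need = SEGCBLIST_LOCKING | SEGCBLIST_OFFLOADED |
                             SEGCBLIST_KTHREAD_CB | SEGCBLIST_KTHREAD_GP;

        return (flags & need) == need && !(flags & SEGCBLIST_RCU_CORE);
}

int main(void)
{
        unsigned long flags = SEGCBLIST_ENABLED | SEGCBLIST_LOCKING |
                              SEGCBLIST_OFFLOADED | SEGCBLIST_KTHREAD_CB |
                              SEGCBLIST_KTHREAD_GP;

        printf("fully offloaded: %d\n", fully_offloaded_sketch(flags)); /* 1 */
        return 0;
}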
50 changes: 28 additions & 22 deletions include/linux/rcupdate.h
@@ -364,46 +364,48 @@ static inline void rcu_preempt_sleep_check(void) { }
#define rcu_check_sparse(p, space)
#endif /* #else #ifdef __CHECKER__ */

#define __unrcu_pointer(p, local) \
({ \
typeof(*p) *local = (typeof(*p) *__force)(p); \
rcu_check_sparse(p, __rcu); \
((typeof(*p) __force __kernel *)(local)); \
})
/**
* unrcu_pointer - mark a pointer as not being RCU protected
* @p: pointer needing to lose its __rcu property
*
* Converts @p from an __rcu pointer to a __kernel pointer.
* This allows an __rcu pointer to be used with xchg() and friends.
*/
#define unrcu_pointer(p) \
({ \
typeof(*p) *_________p1 = (typeof(*p) *__force)(p); \
rcu_check_sparse(p, __rcu); \
((typeof(*p) __force __kernel *)(_________p1)); \
})
#define unrcu_pointer(p) __unrcu_pointer(p, __UNIQUE_ID(rcu))

#define __rcu_access_pointer(p, space) \
#define __rcu_access_pointer(p, local, space) \
({ \
typeof(*p) *_________p1 = (typeof(*p) *__force)READ_ONCE(p); \
typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
rcu_check_sparse(p, space); \
((typeof(*p) __force __kernel *)(_________p1)); \
((typeof(*p) __force __kernel *)(local)); \
})
#define __rcu_dereference_check(p, c, space) \
#define __rcu_dereference_check(p, local, c, space) \
({ \
/* Dependency order vs. p above. */ \
typeof(*p) *________p1 = (typeof(*p) *__force)READ_ONCE(p); \
typeof(*p) *local = (typeof(*p) *__force)READ_ONCE(p); \
RCU_LOCKDEP_WARN(!(c), "suspicious rcu_dereference_check() usage"); \
rcu_check_sparse(p, space); \
((typeof(*p) __force __kernel *)(________p1)); \
((typeof(*p) __force __kernel *)(local)); \
})
#define __rcu_dereference_protected(p, c, space) \
#define __rcu_dereference_protected(p, local, c, space) \
({ \
RCU_LOCKDEP_WARN(!(c), "suspicious rcu_dereference_protected() usage"); \
rcu_check_sparse(p, space); \
((typeof(*p) __force __kernel *)(p)); \
})
#define rcu_dereference_raw(p) \
#define __rcu_dereference_raw(p, local) \
({ \
/* Dependency order vs. p above. */ \
typeof(p) ________p1 = READ_ONCE(p); \
((typeof(*p) __force __kernel *)(________p1)); \
typeof(p) local = READ_ONCE(p); \
((typeof(*p) __force __kernel *)(local)); \
})
#define rcu_dereference_raw(p) __rcu_dereference_raw(p, __UNIQUE_ID(rcu))

/**
* RCU_INITIALIZER() - statically initialize an RCU-protected global variable
@@ -490,7 +492,7 @@ do { \
* when tearing down multi-linked structures after a grace period
* has elapsed.
*/
#define rcu_access_pointer(p) __rcu_access_pointer((p), __rcu)
#define rcu_access_pointer(p) __rcu_access_pointer((p), __UNIQUE_ID(rcu), __rcu)

/**
* rcu_dereference_check() - rcu_dereference with debug checking
@@ -526,7 +528,8 @@ do { \
* annotated as __rcu.
*/
#define rcu_dereference_check(p, c) \
__rcu_dereference_check((p), (c) || rcu_read_lock_held(), __rcu)
__rcu_dereference_check((p), __UNIQUE_ID(rcu), \
(c) || rcu_read_lock_held(), __rcu)

/**
* rcu_dereference_bh_check() - rcu_dereference_bh with debug checking
@@ -541,7 +544,8 @@ do { \
* rcu_read_lock() but also rcu_read_lock_bh() into account.
*/
#define rcu_dereference_bh_check(p, c) \
__rcu_dereference_check((p), (c) || rcu_read_lock_bh_held(), __rcu)
__rcu_dereference_check((p), __UNIQUE_ID(rcu), \
(c) || rcu_read_lock_bh_held(), __rcu)

/**
* rcu_dereference_sched_check() - rcu_dereference_sched with debug checking
@@ -556,7 +560,8 @@ do { \
* only rcu_read_lock() but also rcu_read_lock_sched() into account.
*/
#define rcu_dereference_sched_check(p, c) \
__rcu_dereference_check((p), (c) || rcu_read_lock_sched_held(), \
__rcu_dereference_check((p), __UNIQUE_ID(rcu), \
(c) || rcu_read_lock_sched_held(), \
__rcu)

/*
@@ -566,7 +571,8 @@ do { \
* The no-tracing version of rcu_dereference_raw() must not call
* rcu_read_lock_held().
*/
#define rcu_dereference_raw_check(p) __rcu_dereference_check((p), 1, __rcu)
#define rcu_dereference_raw_check(p) \
__rcu_dereference_check((p), __UNIQUE_ID(rcu), 1, __rcu)

/**
* rcu_dereference_protected() - fetch RCU pointer when updates prevented
@@ -585,7 +591,7 @@ do { \
* but very ugly failures.
*/
#define rcu_dereference_protected(p, c) \
__rcu_dereference_protected((p), (c), __rcu)
__rcu_dereference_protected((p), __UNIQUE_ID(rcu), (c), __rcu)


/**
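The point of threading __UNIQUE_ID(rcu) through these macros is that the temporary used inside one expansion can no longer shadow an identically named temporary from a nested expansion, or a caller variable that happens to share the name. A minimal user-space sketch of the idea, using hypothetical macro names rather than the kernel's:

/* Requires GNU C extensions (typeof, statement expressions), as the kernel
 * macros do. DEREF_FIXED/DEREF_UNIQUE are illustrative, not kernel APIs. */
#include <stdio.h>

/* Old pattern: a fixed temporary name baked into the macro body. Nesting
 * the macro, or a caller variable named ________p1, shadows the temporary
 * and can turn the initializer into a read of the inner, uninitialized
 * variable. */
#define DEREF_FIXED(p) \
        ({ typeof(p) ________p1 = (p); ________p1; })

/* New pattern: generate a fresh temporary name per expansion and pass it
 * in as a macro argument, in the spirit of __UNIQUE_ID(rcu). */
#define __DEREF(p, local) \
        ({ typeof(p) local = (p); local; })
#define PASTE2(a, b) a##b
#define PASTE(a, b) PASTE2(a, b)
#define DEREF_UNIQUE(p) __DEREF((p), PASTE(__utmp, __COUNTER__))

int main(void)
{
        int x = 42;
        int *________p1 = &x;   /* unluckily named caller variable */

        /* DEREF_FIXED(________p1) would self-initialize the shadowing copy. */
        printf("%d\n", *DEREF_UNIQUE(________p1));      /* prints 42 */
        return 0;
}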
2 changes: 1 addition & 1 deletion include/linux/rcutiny.h
@@ -85,7 +85,7 @@ static inline void rcu_irq_enter_irqson(void) { }
static inline void rcu_irq_exit(void) { }
static inline void rcu_irq_exit_check_preempt(void) { }
#define rcu_is_idle_cpu(cpu) \
(is_idle_task(current) && !in_nmi() && !in_irq() && !in_serving_softirq())
(is_idle_task(current) && !in_nmi() && !in_hardirq() && !in_serving_softirq())
static inline void exit_rcu(void) { }
static inline bool rcu_preempt_need_deferred_qs(struct task_struct *t)
{
3 changes: 2 additions & 1 deletion include/linux/srcu.h
@@ -117,7 +117,8 @@ static inline int srcu_read_lock_held(const struct srcu_struct *ssp)
* lockdep_is_held() calls.
*/
#define srcu_dereference_check(p, ssp, c) \
__rcu_dereference_check((p), (c) || srcu_read_lock_held(ssp), __rcu)
__rcu_dereference_check((p), __UNIQUE_ID(rcu), \
(c) || srcu_read_lock_held(ssp), __rcu)

/**
* srcu_dereference - fetch SRCU-protected pointer for later dereferencing
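For context, a hedged sketch of the reader-side pattern that srcu_dereference_check() (via srcu_dereference()) backs; the srcu_struct and data structure here are hypothetical and not part of this commit:

#include <linux/srcu.h>

struct my_data {
        int val;
};

DEFINE_STATIC_SRCU(my_srcu);            /* hypothetical SRCU domain */
static struct my_data __rcu *my_ptr;    /* hypothetical protected pointer */

static int read_val(void)
{
        struct my_data *p;
        int idx, ret = -1;

        idx = srcu_read_lock(&my_srcu);
        /* Lockdep-checked fetch: legal because my_srcu is read-held here. */
        p = srcu_dereference(my_ptr, &my_srcu);
        if (p)
                ret = p->val;
        srcu_read_unlock(&my_srcu, idx);
        return ret;
}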
