Conversation

@kernel-patches-daemon-bpf

Pull request for series with
subject: rqspinlock: Annotate rqspinlock lock acquiring functions with __must_check
version: 2
url: https://patchwork.kernel.org/project/netdevbpf/list/?series=1026504

@kernel-patches-daemon-bpf
Author

Upstream branch: 792f258
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=1026504
version: 2

@kernel-patches-review-bot

commit 4ba76c7812846cfb67d2eeece9bd8e2868a5aa12
Author: Amery Hung <ameryhung@gmail.com>

rqspinlock: Annotate rqspinlock lock acquiring functions with __must_check

Locking a resilient queued spinlock can fail on deadlock or timeout.
This patch adds __must_check annotations to ensure callers handle the
returned errors. The macro-to-function conversions enable these
annotations.
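
As an illustration (not part of the patch itself): with __must_check in
place, a call site that silently discards the return value of
raw_res_spin_lock() now draws a compiler warning, nudging callers toward
a pattern like the hypothetical helper below.

static int update_counter_locked(rqspinlock_t *lock, u64 *counter)
{
	int ret;

	ret = raw_res_spin_lock(lock);
	if (ret)
		return ret;	/* -EDEADLK or -ETIMEDOUT, lock not taken */

	(*counter)++;
	raw_res_spin_unlock(lock);
	return 0;
}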

> diff --git a/include/asm-generic/rqspinlock.h b/include/asm-generic/rqspinlock.h
> index 6d4244d64..855c09435 100644
> --- a/include/asm-generic/rqspinlock.h
> +++ b/include/asm-generic/rqspinlock.h

[ ... ]

> @@ -171,7 +171,7 @@ static __always_inline void release_held_lock_entry(void)
>   * * -EDEADLK	- Lock acquisition failed because of AA/ABBA deadlock.
>   * * -ETIMEDOUT - Lock acquisition failed because of timeout.
>   */
> -static __always_inline int res_spin_lock(rqspinlock_t *lock)
> +static __always_inline __must_check int res_spin_lock(rqspinlock_t *lock)
>  {
>  	int val = 0;
>
> @@ -223,27 +223,36 @@ static __always_inline void res_spin_unlock(rqspinlock_t *lock)
>  #define raw_res_spin_lock_init(lock) ({ *(lock) = (rqspinlock_t){0}; })
>  #endif
>
> -#define raw_res_spin_lock(lock)                    \
> -	({                                         \
> -		int __ret;                         \
> -		preempt_disable();                 \
> -		__ret = res_spin_lock(lock);	   \
> -		if (__ret)                         \
> -			preempt_enable();          \
> -		__ret;                             \
> -	})
> +static __always_inline __must_check int raw_res_spin_lock(rqspinlock_t *lock)
> +{
> +	int ret;
> +
> +	preempt_disable();
> +	ret = res_spin_lock(lock);
> +	if (ret)
> +		preempt_enable();
> +
> +	return ret;
> +}
>
>  #define raw_res_spin_unlock(lock) ({ res_spin_unlock(lock); preempt_enable(); })
>
> -#define raw_res_spin_lock_irqsave(lock, flags)    \
> -	({                                        \
> -		int __ret;                        \
> -		local_irq_save(flags);            \
> -		__ret = raw_res_spin_lock(lock);  \
> -		if (__ret)                        \
> -			local_irq_restore(flags); \
> -		__ret;                            \
> -	})
> +static __always_inline __must_check int
> +__raw_res_spin_lock_irqsave(rqspinlock_t *lock, unsigned long *flags)
> +{
> +	unsigned long __flags;
> +	int ret;
> +
> +	local_irq_save(__flags);
> +	ret = raw_res_spin_lock(lock);
> +	if (ret)
> +		local_irq_restore(__flags);
> +
> +	*flags = __flags;
> +	return ret;
> +}
> +
> +#define raw_res_spin_lock_irqsave(lock, flags) __raw_res_spin_lock_irqsave(lock, &flags)
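
Note that __raw_res_spin_lock_irqsave() now takes a pointer to the flags
word, while the raw_res_spin_lock_irqsave() wrapper macro keeps the
familiar call-by-value flags syntax, so existing call sites need no
change. A hypothetical caller (the helper name is invented, and the
raw_res_spin_unlock_irqrestore() unlock path is assumed from the same
header rather than shown in this diff) could look like:

static int update_stats_irqsafe(rqspinlock_t *lock, u64 *stat)
{
	unsigned long flags;
	int ret;

	ret = raw_res_spin_lock_irqsave(lock, flags);
	if (ret)
		return ret;	/* IRQ state already restored on failure */

	(*stat)++;
	raw_res_spin_unlock_irqrestore(lock, flags);
	return 0;
}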

The __must_check annotations will cause compiler warnings in
kernel/locking/locktorture.c where torture_raw_res_spin_write_lock()
and torture_raw_res_spin_write_lock_irq() ignore lock acquisition
failures.

When raw_res_spin_lock() returns an error (deadlock or timeout), the
torture test functions still return 0 to indicate success. The torture
framework then calls the unlock functions, but the lock was never
acquired. In res_spin_unlock(), this causes an out-of-bounds array
access when computing locks[cnt - 1] with cnt=0, potentially corrupting
per-CPU memory.

Should the torture test handle lock acquisition failures, or use
different lock types that cannot fail?



AI reviewed your patch. Please fix the bug, or reply by email explaining why it is not a bug.
See: https://github.com/kernel-patches/vmtest/blob/master/ci/claude/README.md

In-Reply-To-Subject: rqspinlock: Annotate rqspinlock lock acquiring functions with __must_check
CI run summary: https://github.com/kernel-patches/bpf/actions/runs/19583558278

@kernel-patches-daemon-bpf
Author

Upstream branch: 878ee3c
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=1026504
version: 2

@kernel-patches-daemon-bpf
Author

Upstream branch: ae24fc8
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=1026504
version: 2

@kernel-patches-daemon-bpf
Author

Upstream branch: 4dd3a48
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=1026504
version: 2

@kernel-patches-daemon-bpf
Author

Upstream branch: 8f7cf30
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=1026504
version: 2

…check

Locking a resilient queued spinlock can fail on deadlock or timeout.
Mark the lock acquiring functions with __must_check to make sure
callers always handle the returned error.

Suggested-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Signed-off-by: Amery Hung <ameryhung@gmail.com>

Return errors from raw_res_spin_lock{_irqsave}() to writelock(). This is
simply to silence the unused result warning. lock_torture_writer()
currently does not handle errors returned from writelock(). This aligns
with the existing torture test for ww_mutex.

Signed-off-by: Amery Hung <ameryhung@gmail.com>
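
A minimal sketch of that change, using hypothetical lock and flags
variable names (the actual identifiers in kernel/locking/locktorture.c
may differ): the writelock callbacks named in the review above simply
pass the result of the lock call back to the torture framework instead
of returning 0 unconditionally, which is enough to satisfy __must_check.

static rqspinlock_t torture_raw_res_spinlock;		/* hypothetical name */
static unsigned long torture_raw_res_spin_flags;	/* hypothetical name */

static int torture_raw_res_spin_write_lock(int tid __maybe_unused)
{
	/* Propagate -EDEADLK/-ETIMEDOUT instead of always returning 0. */
	return raw_res_spin_lock(&torture_raw_res_spinlock);
}

static int torture_raw_res_spin_write_lock_irq(int tid __maybe_unused)
{
	return raw_res_spin_lock_irqsave(&torture_raw_res_spinlock,
					 torture_raw_res_spin_flags);
}

Whether lock_torture_writer() then acts on the error is a separate
question; as the commit message notes, it currently does not, matching
the existing ww_mutex torture test.
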
@kernel-patches-daemon-bpf
Author

Upstream branch: c427320
series: https://patchwork.kernel.org/project/netdevbpf/list/?series=1026504
version: 2
