
Commit 838a10b

kkdwivedi authored and Alexei Starovoitov committed
bpf: Augment raw_tp arguments with PTR_MAYBE_NULL
Arguments to a raw tracepoint are tagged as trusted, which carries the
semantics that the pointer will be non-NULL. However, in certain cases,
a raw tracepoint argument may end up being NULL. More context about this
issue is available in [0].

Thus, there is a discrepancy between the reality, that raw_tp arguments
can actually be NULL, and the verifier's knowledge, that they are never
NULL, causing the explicit NULL check branch to be dead code eliminated.

A previous attempt [1], i.e. the second commit under the Fixes tags, was
made to simulate symbolic execution as if, in most accesses, the
argument is a non-NULL raw_tp, except for conditional jumps. This tried
to suppress branch prediction while preserving compatibility, but
surfaced issues with production programs that were difficult to solve
without increasing verifier complexity. A more complete discussion of
issues and fixes is available at [2].

Fix this by maintaining an explicit list of tracepoints where the
arguments are known to be NULL, and mark the positional arguments as
PTR_MAYBE_NULL. Additionally, capture the tracepoints where arguments
are known to be ERR_PTR, and mark these arguments as scalar values to
prevent potential dereference.

Each hex digit is used to encode NULL-ness (0x1) or ERR_PTR-ness (0x2),
shifted left by the zero-indexed argument number multiplied by 4. This
can be represented as follows:

1st arg: 0x1
2nd arg: 0x10
3rd arg: 0x100
... and so on (likewise for the ERR_PTR case).

In the future, an automated pass will be used to produce such a list, or
insert __nullable annotations automatically for tracepoints. Each
compilation unit will be analyzed and results will be collated to find
whether a tracepoint pointer is definitely not null, maybe null, or in
an unknown state where the verifier conservatively marks it
PTR_MAYBE_NULL. A proof of concept of this tool from Eduard is available
at [3].

Note that in case we don't find a specification in the raw_tp_null_args
array and the tracepoint belongs to a kernel module, we will
conservatively mark the arguments as PTR_MAYBE_NULL. This is because,
unlike for in-tree modules, out-of-tree module tracepoints may pass NULL
freely to the tracepoint. We don't protect against such tracepoints
passing ERR_PTR (which is uncommon anyway), lest we mark all such
arguments as SCALAR_VALUE.

While we are at it, let's adjust the test raw_tp_null to not dereference
skb->mark, as that won't be allowed anymore, and make it more robust by
using inline assembly to test the dead code elimination behavior, which
should still stay the same.

  [0]: https://lore.kernel.org/bpf/ZrCZS6nisraEqehw@jlelli-thinkpadt14gen4.remote.csb
  [1]: https://lore.kernel.org/all/20241104171959.2938862-1-memxor@gmail.com
  [2]: https://lore.kernel.org/bpf/20241206161053.809580-1-memxor@gmail.com
  [3]: https://github.com/eddyz87/llvm-project/tree/nullness-for-tracepoint-params

Reported-by: Juri Lelli <juri.lelli@redhat.com> # original bug
Reported-by: Manu Bretelle <chantra@meta.com> # bugs in masking fix
Fixes: 3f00c52 ("bpf: Allow trusted pointers to be passed to KF_TRUSTED_ARGS kfuncs")
Fixes: cb4158c ("bpf: Mark raw_tp arguments with PTR_MAYBE_NULL")
Reviewed-by: Eduard Zingerman <eddyz87@gmail.com>
Co-developed-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20241213221929.3495062-3-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
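For illustration only (not part of the commit): a minimal user-space sketch of how such a mask is intended to be decoded, one hex digit per argument, using the cachefiles_lookup entry (0x1 | 0x200) from the table below. The helper names are made up for this sketch.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* One hex digit per argument: bit 0x1 = may be NULL, bit 0x2 = may be ERR_PTR. */
static bool arg_maybe_null(uint64_t mask, unsigned int arg)
{
	return mask & (0x1ULL << (arg * 4));
}

static bool arg_maybe_err_ptr(uint64_t mask, unsigned int arg)
{
	return mask & (0x2ULL << (arg * 4));
}

int main(void)
{
	uint64_t mask = 0x1 | 0x200;	/* cachefiles_lookup */

	printf("arg0 may be NULL: %d\n", arg_maybe_null(mask, 0));	/* prints 1 */
	printf("arg2 may be ERR_PTR: %d\n", arg_maybe_err_ptr(mask, 2));	/* prints 1 */
	return 0;
}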
1 parent c00d738 commit 838a10b

Showing 2 changed files with 147 additions and 10 deletions.

kernel/bpf/btf.c

+138
@@ -6439,6 +6439,101 @@ int btf_ctx_arg_offset(const struct btf *btf, const struct btf_type *func_proto,
 	return off;
 }
 
+struct bpf_raw_tp_null_args {
+	const char *func;
+	u64 mask;
+};
+
+static const struct bpf_raw_tp_null_args raw_tp_null_args[] = {
+	/* sched */
+	{ "sched_pi_setprio", 0x10 },
+	/* ... from sched_numa_pair_template event class */
+	{ "sched_stick_numa", 0x100 },
+	{ "sched_swap_numa", 0x100 },
+	/* afs */
+	{ "afs_make_fs_call", 0x10 },
+	{ "afs_make_fs_calli", 0x10 },
+	{ "afs_make_fs_call1", 0x10 },
+	{ "afs_make_fs_call2", 0x10 },
+	{ "afs_protocol_error", 0x1 },
+	{ "afs_flock_ev", 0x10 },
+	/* cachefiles */
+	{ "cachefiles_lookup", 0x1 | 0x200 },
+	{ "cachefiles_unlink", 0x1 },
+	{ "cachefiles_rename", 0x1 },
+	{ "cachefiles_prep_read", 0x1 },
+	{ "cachefiles_mark_active", 0x1 },
+	{ "cachefiles_mark_failed", 0x1 },
+	{ "cachefiles_mark_inactive", 0x1 },
+	{ "cachefiles_vfs_error", 0x1 },
+	{ "cachefiles_io_error", 0x1 },
+	{ "cachefiles_ondemand_open", 0x1 },
+	{ "cachefiles_ondemand_copen", 0x1 },
+	{ "cachefiles_ondemand_close", 0x1 },
+	{ "cachefiles_ondemand_read", 0x1 },
+	{ "cachefiles_ondemand_cread", 0x1 },
+	{ "cachefiles_ondemand_fd_write", 0x1 },
+	{ "cachefiles_ondemand_fd_release", 0x1 },
+	/* ext4, from ext4__mballoc event class */
+	{ "ext4_mballoc_discard", 0x10 },
+	{ "ext4_mballoc_free", 0x10 },
+	/* fib */
+	{ "fib_table_lookup", 0x100 },
+	/* filelock */
+	/* ... from filelock_lock event class */
+	{ "posix_lock_inode", 0x10 },
+	{ "fcntl_setlk", 0x10 },
+	{ "locks_remove_posix", 0x10 },
+	{ "flock_lock_inode", 0x10 },
+	/* ... from filelock_lease event class */
+	{ "break_lease_noblock", 0x10 },
+	{ "break_lease_block", 0x10 },
+	{ "break_lease_unblock", 0x10 },
+	{ "generic_delete_lease", 0x10 },
+	{ "time_out_leases", 0x10 },
+	/* host1x */
+	{ "host1x_cdma_push_gather", 0x10000 },
+	/* huge_memory */
+	{ "mm_khugepaged_scan_pmd", 0x10 },
+	{ "mm_collapse_huge_page_isolate", 0x1 },
+	{ "mm_khugepaged_scan_file", 0x10 },
+	{ "mm_khugepaged_collapse_file", 0x10 },
+	/* kmem */
+	{ "mm_page_alloc", 0x1 },
+	{ "mm_page_pcpu_drain", 0x1 },
+	/* .. from mm_page event class */
+	{ "mm_page_alloc_zone_locked", 0x1 },
+	/* netfs */
+	{ "netfs_failure", 0x10 },
+	/* power */
+	{ "device_pm_callback_start", 0x10 },
+	/* qdisc */
+	{ "qdisc_dequeue", 0x1000 },
+	/* rxrpc */
+	{ "rxrpc_recvdata", 0x1 },
+	{ "rxrpc_resend", 0x10 },
+	/* sunrpc */
+	{ "xs_stream_read_data", 0x1 },
+	/* ... from xprt_cong_event event class */
+	{ "xprt_reserve_cong", 0x10 },
+	{ "xprt_release_cong", 0x10 },
+	{ "xprt_get_cong", 0x10 },
+	{ "xprt_put_cong", 0x10 },
+	/* tcp */
+	{ "tcp_send_reset", 0x11 },
+	/* tegra_apb_dma */
+	{ "tegra_dma_tx_status", 0x100 },
+	/* timer_migration */
+	{ "tmigr_update_events", 0x1 },
+	/* writeback, from writeback_folio_template event class */
+	{ "writeback_dirty_folio", 0x10 },
+	{ "folio_wait_writeback", 0x10 },
+	/* rdma */
+	{ "mr_integ_alloc", 0x2000 },
+	/* bpf_testmod */
+	{ "bpf_testmod_test_read", 0x0 },
+};
+
 bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 		    const struct bpf_prog *prog,
 		    struct bpf_insn_access_aux *info)
@@ -6449,6 +6544,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 	const char *tname = prog->aux->attach_func_name;
 	struct bpf_verifier_log *log = info->log;
 	const struct btf_param *args;
+	bool ptr_err_raw_tp = false;
 	const char *tag_value;
 	u32 nr_args, arg;
 	int i, ret;
@@ -6597,6 +6693,39 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 	if (btf_param_match_suffix(btf, &args[arg], "__nullable"))
 		info->reg_type |= PTR_MAYBE_NULL;
 
+	if (prog->expected_attach_type == BPF_TRACE_RAW_TP) {
+		struct btf *btf = prog->aux->attach_btf;
+		const struct btf_type *t;
+		const char *tname;
+
+		/* BTF lookups cannot fail, return false on error */
+		t = btf_type_by_id(btf, prog->aux->attach_btf_id);
+		if (!t)
+			return false;
+		tname = btf_name_by_offset(btf, t->name_off);
+		if (!tname)
+			return false;
+		/* Checked by bpf_check_attach_target */
+		tname += sizeof("btf_trace_") - 1;
+		for (i = 0; i < ARRAY_SIZE(raw_tp_null_args); i++) {
+			/* Is this a func with potential NULL args? */
+			if (strcmp(tname, raw_tp_null_args[i].func))
+				continue;
+			if (raw_tp_null_args[i].mask & (0x1 << (arg * 4)))
+				info->reg_type |= PTR_MAYBE_NULL;
+			/* Is the current arg IS_ERR? */
+			if (raw_tp_null_args[i].mask & (0x2 << (arg * 4)))
+				ptr_err_raw_tp = true;
+			break;
+		}
+		/* If we don't know NULL-ness specification and the tracepoint
+		 * is coming from a loadable module, be conservative and mark
+		 * argument as PTR_MAYBE_NULL.
+		 */
+		if (i == ARRAY_SIZE(raw_tp_null_args) && btf_is_module(btf))
+			info->reg_type |= PTR_MAYBE_NULL;
+	}
+
 	if (tgt_prog) {
 		enum bpf_prog_type tgt_type;
 
@@ -6641,6 +6770,15 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
 	bpf_log(log, "func '%s' arg%d has btf_id %d type %s '%s'\n",
 		tname, arg, info->btf_id, btf_type_str(t),
 		__btf_name_by_offset(btf, t->name_off));
+
+	/* Perform all checks on the validity of type for this argument, but if
+	 * we know it can be IS_ERR at runtime, scrub pointer type and mark as
+	 * scalar.
+	 */
+	if (ptr_err_raw_tp) {
+		bpf_log(log, "marking pointer arg%d as scalar as it may encode error", arg);
+		info->reg_type = SCALAR_VALUE;
+	}
 	return true;
 }
 EXPORT_SYMBOL_GPL(btf_ctx_access);
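For context, a minimal sketch (not part of this commit) of a raw_tp program affected by this change, assuming the sched_pi_setprio tracepoint prototype (struct task_struct *tsk, struct task_struct *pi_task): its mask 0x10 marks the second argument PTR_MAYBE_NULL, so the program must NULL-check it before dereferencing, and the check is no longer dead code eliminated.

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

SEC("tp_btf/sched_pi_setprio")
int BPF_PROG(on_pi_setprio, struct task_struct *tsk, struct task_struct *pi_task)
{
	/* pi_task is marked PTR_MAYBE_NULL by this change; dereferencing it
	 * without an explicit NULL check is rejected by the verifier.
	 */
	if (!pi_task)
		return 0;

	bpf_printk("pid %d boosted to prio of pid %d", tsk->pid, pi_task->pid);
	return 0;
}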

tools/testing/selftests/bpf/progs/raw_tp_null.c

+9 −10
@@ -3,6 +3,7 @@
 
 #include <vmlinux.h>
 #include <bpf/bpf_tracing.h>
+#include "bpf_misc.h"
 
 char _license[] SEC("license") = "GPL";
 
@@ -17,16 +18,14 @@ int BPF_PROG(test_raw_tp_null, struct sk_buff *skb)
 	if (task->pid != tid)
 		return 0;
 
-	i = i + skb->mark + 1;
-	/* The compiler may move the NULL check before this deref, which causes
-	 * the load to fail as deref of scalar. Prevent that by using a barrier.
+	/* If dead code elimination kicks in, the increment +=2 will be
+	 * removed. For raw_tp programs attaching to tracepoints in kernel
+	 * modules, we mark input arguments as PTR_MAYBE_NULL, so branch
+	 * prediction should never kick in.
 	 */
-	barrier();
-	/* If dead code elimination kicks in, the increment below will
-	 * be removed. For raw_tp programs, we mark input arguments as
-	 * PTR_MAYBE_NULL, so branch prediction should never kick in.
-	 */
-	if (!skb)
-		i += 2;
+	asm volatile ("%[i] += 1; if %[ctx] != 0 goto +1; %[i] += 2;"
+		      : [i]"+r"(i)
+		      : [ctx]"r"(skb)
+		      : "memory");
 	return 0;
 }
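For readers less familiar with BPF inline assembly, the asm block above behaves roughly like the following C sketch (illustrative only; the test uses asm precisely so the compiler cannot reorder or fold these operations away):

/* Rough C equivalent of "%[i] += 1; if %[ctx] != 0 goto +1; %[i] += 2;" */
static inline int dce_probe(const void *ctx, int i)
{
	i += 1;		/* "%[i] += 1;" */
	if (!ctx)	/* "if %[ctx] != 0 goto +1;" skips the next statement */
		i += 2;	/* "%[i] += 2;" - eliminated only if ctx is provably non-NULL */
	return i;
}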
