Commit 5d34a30

Ming Lei authored and gregkh committed
ublk: fix handling recovery & reissue in ublk_abort_queue()
[ Upstream commit 6ee6bd5 ]

Commit 8284066 ("ublk: grab request reference when the request is handled
by userspace") doesn't grab a request reference in the recovery reissue
case. The request can then be requeued, re-dispatched and failed while the
uring command is being canceled.

If it is a zero-copy (zc) request, the request can be freed before io_uring
returns the zc buffer, causing a kernel panic:

[ 126.773061] BUG: kernel NULL pointer dereference, address: 00000000000000c8
[ 126.773657] #PF: supervisor read access in kernel mode
[ 126.774052] #PF: error_code(0x0000) - not-present page
[ 126.774455] PGD 0 P4D 0
[ 126.774698] Oops: Oops: 0000 [#1] SMP NOPTI
[ 126.775034] CPU: 13 UID: 0 PID: 1612 Comm: kworker/u64:55 Not tainted 6.14.0_blk+ #182 PREEMPT(full)
[ 126.775676] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-1.fc39 04/01/2014
[ 126.776275] Workqueue: iou_exit io_ring_exit_work
[ 126.776651] RIP: 0010:ublk_io_release+0x14/0x130 [ublk_drv]

Fix it by always grabbing a request reference when aborting the request.

Reported-by: Caleb Sander Mateos <csander@purestorage.com>
Closes: https://lore.kernel.org/linux-block/CADUfDZodKfOGUeWrnAxcZiLT+puaZX8jDHoj_sfHZCOZwhzz6A@mail.gmail.com/
Fixes: 8284066 ("ublk: grab request reference when the request is handled by userspace")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20250409011444.2142010-2-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Sasha Levin <sashal@kernel.org>
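The fix works because kref_put() only invokes its release callback, here ublk_fail_rq_fn() in the diff below, once the last holder of the request reference drops it; while io_uring still owns a zero-copy buffer backed by the request, the requeue/complete decision is deferred rather than racing with the buffer release. As a rough illustration of that pattern only, here is a standalone userspace sketch: it uses a plain C11 atomic counter in place of struct kref, and the names fake_req, req_get, req_put and req_release are made up for the example, not taken from the driver.

/*
 * Illustrative userspace sketch only: fake_req, req_get, req_put and
 * req_release do not exist in ublk_drv.c; a C11 atomic stands in for
 * the driver's struct kref.
 */
#include <stdatomic.h>
#include <stdio.h>

struct fake_req {
	atomic_int ref;			/* plays the role of data->ref */
};

static void req_release(struct fake_req *r)
{
	(void)r;
	/* runs exactly once, after the final reference is dropped,
	 * like ublk_fail_rq_fn() deciding to requeue or complete */
	printf("last reference dropped: safe to requeue or complete\n");
}

static void req_get(struct fake_req *r)
{
	atomic_fetch_add(&r->ref, 1);	/* like the driver's ublk_get_req_ref() */
}

static void req_put(struct fake_req *r)
{
	/* like kref_put(): call the release only when the count hits zero */
	if (atomic_fetch_sub(&r->ref, 1) == 1)
		req_release(r);
}

int main(void)
{
	struct fake_req r = { .ref = 1 };	/* initial dispatch reference */

	req_get(&r);	/* e.g. io_uring still holds a zero-copy buffer */
	req_put(&r);	/* abort path: release is deferred */
	req_put(&r);	/* buffer returned: release runs now */
	return 0;
}

With the extra reference taken by req_get(), the first req_put() (standing in for the abort path) frees nothing; only the second req_put(), from the buffer owner, triggers req_release(). That mirrors why __ublk_fail_req() now goes through kref_put() when the queue uses per-request references (ublk_need_req_ref()) instead of requeueing or completing the request directly.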
1 parent 2a3d2e3 commit 5d34a30

1 file changed: +26 -4 lines changed

drivers/block/ublk_drv.c

+26 -4
@@ -1094,6 +1094,25 @@ static void ublk_complete_rq(struct kref *ref)
 	__ublk_complete_rq(req);
 }
 
+static void ublk_do_fail_rq(struct request *req)
+{
+	struct ublk_queue *ubq = req->mq_hctx->driver_data;
+
+	if (ublk_nosrv_should_reissue_outstanding(ubq->dev))
+		blk_mq_requeue_request(req, false);
+	else
+		__ublk_complete_rq(req);
+}
+
+static void ublk_fail_rq_fn(struct kref *ref)
+{
+	struct ublk_rq_data *data = container_of(ref, struct ublk_rq_data,
+			ref);
+	struct request *req = blk_mq_rq_from_pdu(data);
+
+	ublk_do_fail_rq(req);
+}
+
 /*
  * Since __ublk_rq_task_work always fails requests immediately during
  * exiting, __ublk_fail_req() is only called from abort context during
@@ -1107,10 +1126,13 @@ static void __ublk_fail_req(struct ublk_queue *ubq, struct ublk_io *io,
 {
 	WARN_ON_ONCE(io->flags & UBLK_IO_FLAG_ACTIVE);
 
-	if (ublk_nosrv_should_reissue_outstanding(ubq->dev))
-		blk_mq_requeue_request(req, false);
-	else
-		ublk_put_req_ref(ubq, req);
+	if (ublk_need_req_ref(ubq)) {
+		struct ublk_rq_data *data = blk_mq_rq_to_pdu(req);
+
+		kref_put(&data->ref, ublk_fail_rq_fn);
+	} else {
+		ublk_do_fail_rq(req);
+	}
 }
 
 static void ubq_complete_io_cmd(struct ublk_io *io, int res,
