syzbot


possible deadlock in __bpf_ringbuf_reserve

Status: upstream: reported C repro on 2025/07/04 03:51
Bug presence: origin:lts-only
Reported-by: syzbot+33bfc46927a4ecf45b6d@syzkaller.appspotmail.com
First crash: 23d, last: 1d11h
Bug presence (2)
Date        Name               Commit        Repro  Result
2025/07/07  linux-6.6.y (ToT)  a5df3a702b2c  C      possible deadlock in __bpf_ringbuf_reserve
2025/07/07  upstream (ToT)     d7b8f8e20813  C      Didn't crash
Similar bugs (2)
Kernel     Title                                        Labels                            Rank  Repro  Cause bisect  Fix bisect  Count  Last  Reported  Patched  Status
linux-6.1  possible deadlock in __bpf_ringbuf_reserve   origin:upstream missing-backport  4     C      inconclusive  -           69     62d   505d      0/3      upstream: reported C repro on 2024/03/08 23:13
upstream   possible deadlock in __bpf_ringbuf_reserve   bpf                               4     C      error         -           2490   96d   501d      28/29    fixed on 2025/06/10 16:19

Sample crash report:
============================================
WARNING: possible recursive locking detected
6.6.100-syzkaller #0 Not tainted
--------------------------------------------
syz-executor/5800 is trying to acquire lock:
ffffc9000de390d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x1c8/0x5a0 kernel/bpf/ringbuf.c:423

but task is already holding lock:
ffffc9000dd310d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x1c8/0x5a0 kernel/bpf/ringbuf.c:423

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&rb->spinlock);
  lock(&rb->spinlock);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

4 locks held by syz-executor/5800:
 #0: ffff88802f54c868 (&pipe->mutex/1){+.+.}-{3:3}, at: __pipe_lock fs/pipe.c:103 [inline]
 #0: ffff88802f54c868 (&pipe->mutex/1){+.+.}-{3:3}, at: pipe_read+0x12d/0x12a0 fs/pipe.c:244
 #1: ffffffff8cd2fba0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
 #1: ffffffff8cd2fba0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
 #1: ffffffff8cd2fba0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2321 [inline]
 #1: ffffffff8cd2fba0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0xde/0x3c0 kernel/trace/bpf_trace.c:2361
 #2: ffffc9000dd310d8 (&rb->spinlock){-.-.}-{2:2}, at: __bpf_ringbuf_reserve+0x1c8/0x5a0 kernel/bpf/ringbuf.c:423
 #3: ffffffff8cd2fba0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
 #3: ffffffff8cd2fba0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
 #3: ffffffff8cd2fba0 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2321 [inline]
 #3: ffffffff8cd2fba0 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0xde/0x3c0 kernel/trace/bpf_trace.c:2361

stack backtrace:
CPU: 1 PID: 5800 Comm: syz-executor Not tainted 6.6.100-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
 check_deadlock kernel/locking/lockdep.c:3062 [inline]
 validate_chain kernel/locking/lockdep.c:3856 [inline]
 __lock_acquire+0x5d40/0x7c80 kernel/locking/lockdep.c:5137
 lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0xa8/0xf0 kernel/locking/spinlock.c:162
 __bpf_ringbuf_reserve+0x1c8/0x5a0 kernel/bpf/ringbuf.c:423
 ____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:474 [inline]
 bpf_ringbuf_reserve+0x5c/0x70 kernel/bpf/ringbuf.c:466
 bpf_prog_fe0ed97373b08409+0x2d/0x4a
 bpf_dispatcher_nop_func include/linux/bpf.h:1213 [inline]
 __bpf_prog_run include/linux/filter.h:612 [inline]
 bpf_prog_run include/linux/filter.h:619 [inline]
 __bpf_trace_run kernel/trace/bpf_trace.c:2322 [inline]
 bpf_trace_run2+0x1d1/0x3c0 kernel/trace/bpf_trace.c:2361
 __bpf_trace_contention_end+0xdd/0x130 include/trace/events/lock.h:122
 __traceiter_contention_end+0x78/0xb0 include/trace/events/lock.h:122
 trace_contention_end+0xe6/0x110 include/trace/events/lock.h:122
 __pv_queued_spin_lock_slowpath+0x7ec/0x9d0 kernel/locking/qspinlock.c:560
 pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:586 [inline]
 queued_spin_lock_slowpath arch/x86/include/asm/qspinlock.h:51 [inline]
 queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
 do_raw_spin_lock+0x24e/0x2c0 kernel/locking/spinlock_debug.c:115
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
 _raw_spin_lock_irqsave+0xb4/0xf0 kernel/locking/spinlock.c:162
 __bpf_ringbuf_reserve+0x1c8/0x5a0 kernel/bpf/ringbuf.c:423
 ____bpf_ringbuf_reserve kernel/bpf/ringbuf.c:474 [inline]
 bpf_ringbuf_reserve+0x5c/0x70 kernel/bpf/ringbuf.c:466
 bpf_prog_fe0ed97373b08409+0x2d/0x4a
 bpf_dispatcher_nop_func include/linux/bpf.h:1213 [inline]
 __bpf_prog_run include/linux/filter.h:612 [inline]
 bpf_prog_run include/linux/filter.h:619 [inline]
 __bpf_trace_run kernel/trace/bpf_trace.c:2322 [inline]
 bpf_trace_run2+0x1d1/0x3c0 kernel/trace/bpf_trace.c:2361
 __bpf_trace_contention_end+0xdd/0x130 include/trace/events/lock.h:122
 __traceiter_contention_end+0x78/0xb0 include/trace/events/lock.h:122
 trace_contention_end+0xc5/0xe0 include/trace/events/lock.h:122
 __mutex_lock_common kernel/locking/mutex.c:612 [inline]
 __mutex_lock+0x2fa/0xcc0 kernel/locking/mutex.c:747
 __pipe_lock fs/pipe.c:103 [inline]
 pipe_read+0x12d/0x12a0 fs/pipe.c:244
 call_read_iter include/linux/fs.h:2012 [inline]
 new_sync_read fs/read_write.c:389 [inline]
 vfs_read+0x431/0x920 fs/read_write.c:470
 ksys_read+0x147/0x250 fs/read_write.c:613
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f46ead8d37d
Code: a8 ff ff ff f7 d8 64 89 02 b8 ff ff ff ff eb b5 e8 a8 48 00 00 0f 1f 84 00 00 00 00 00 80 3d c1 a1 1f 00 00 74 17 31 c0 0f 05 <48> 3d 00 f0 ff ff 77 5b c3 66 2e 0f 1f 84 00 00 00 00 00 48 83 ec
RSP: 002b:00007fffdf522e68 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
RAX: ffffffffffffffda RBX: 000055557e97b4a0 RCX: 00007f46ead8d37d
RDX: 0000000000000004 RSI: 00007fffdf522ed0 RDI: 0000000000000012
RBP: 00007fffdf523160 R08: 0000000000000144 R09: 000055557e976840
R10: 000000a85fd99661 R11: 0000000000000246 R12: 00007fffdf522ed0
R13: 000055557e976838 R14: 00007fffdf522ee0 R15: 000055557e9787b8
 </TASK>
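
The call trace reads bottom-up as: pipe_read() contends on pipe->mutex, the mutex slowpath fires the contention_end tracepoint, and the attached BPF program calls bpf_ringbuf_reserve(), taking rb->spinlock in __bpf_ringbuf_reserve(). That spinlock acquisition itself goes through the contended qspinlock slowpath, which fires contention_end again; the program re-enters and tries to take a second ring buffer spinlock (note the two different lock addresses) while the first is still held, and lockdep flags the same-class acquisition as possible recursive locking. Below is a minimal libbpf-style sketch of a program with this shape, given only as an illustration of the pattern: the attach point, single-map layout, and all names are assumptions, not the syzkaller reproducer.

/* Sketch: a BPF program attached to the contention_end tracepoint that
 * reserves a ring buffer record on every event. The ring buffer's own
 * spinlock slowpath also emits contention_end, so the program can
 * re-enter bpf_ringbuf_reserve(), matching the lockdep report above. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

struct {
    __uint(type, BPF_MAP_TYPE_RINGBUF);
    __uint(max_entries, 4096);
} rb SEC(".maps");

SEC("tp_btf/contention_end")
int BPF_PROG(on_contention_end, void *lock, int ret)
{
    __u64 *rec = bpf_ringbuf_reserve(&rb, sizeof(*rec), 0);
    if (!rec)
        return 0;
    *rec = (__u64)(unsigned long)lock;   /* record which lock finished contending */
    bpf_ringbuf_submit(rec, 0);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";

Per the bug presence table, the same C reproducer does not crash the upstream tip (d7b8f8e20813), which is consistent with the origin:lts-only label on this report.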

Crashes (6):
Time              Kernel       Commit        Syzkaller  Repro   Manager                   Title
2025/07/25 20:15  linux-6.6.y  dbcb8d8e4163  fb8f743d   syz, C  ci2-linux-6-6-kasan-perf  possible deadlock in __bpf_ringbuf_reserve
2025/07/07 09:05  linux-6.6.y  a5df3a702b2c  4f67c4ae   syz, C  ci2-linux-6-6-kasan       possible deadlock in __bpf_ringbuf_reserve
2025/07/10 21:25  linux-6.6.y  59a2de10b81a  3cda49cf   -       ci2-linux-6-6-kasan       possible deadlock in __bpf_ringbuf_reserve
2025/07/06 15:19  linux-6.6.y  a5df3a702b2c  4f67c4ae   -       ci2-linux-6-6-kasan       possible deadlock in __bpf_ringbuf_reserve
2025/07/04 18:38  linux-6.6.y  3f5b4c104b7d  d869b261   -       ci2-linux-6-6-kasan       possible deadlock in __bpf_ringbuf_reserve
2025/07/04 03:51  linux-6.6.y  3f5b4c104b7d  76ad128c   -       ci2-linux-6-6-kasan       possible deadlock in __bpf_ringbuf_reserve
* Struck through repros no longer work on HEAD.