syzbot


possible deadlock in queue_stack_map_push_elem

Status: upstream: reported C repro on 2024/04/11 18:39
Bug presence: origin:upstream
Reported-by: syzbot+11cdb632b3437d02da7d@syzkaller.appspotmail.com
First crash: 66d, last: 3d18h
Bug presence (1)
Date       | Name           | Commit       | Repro | Result
2024/04/25 | upstream (ToT) | c942a0cd3603 | C     | [report] possible deadlock in queue_stack_map_push_elem
Similar bugs (1)
Kernel   | Title                                              | Repro | Cause bisect | Fix bisect | Count | Last   | Reported | Patched | Status
upstream | possible deadlock in queue_stack_map_push_elem bpf | C     | error        |            | 38    | 11h54m | 60d      | 0/27    | upstream: reported C repro on 2024/04/17 21:25
Fix bisection attempts (1)
Created          | Duration | User   | Patch | Repo        | Result
2024/06/13 08:24 | 1h29m    | bisect |       | linux-6.1.y | job log (0) log

Sample crash report:
============================================
WARNING: possible recursive locking detected
6.1.90-syzkaller #0 Not tainted
--------------------------------------------
syz-executor134/3725 is trying to acquire lock:
ffff88807a4be218 (&qs->lock){-.-.}-{2:2}, at: queue_stack_map_push_elem+0x1ac/0x650 kernel/bpf/queue_stack_maps.c:214

but task is already holding lock:
ffff8880737b5218 (&qs->lock){-.-.}-{2:2}, at: queue_stack_map_push_elem+0x1ac/0x650 kernel/bpf/queue_stack_maps.c:214

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&qs->lock);
  lock(&qs->lock);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

4 locks held by syz-executor134/3725:
 #0: ffff88807aab13f8 (&tsk->futex_exit_mutex){+.+.}-{3:3}, at: futex_cleanup_begin kernel/futex/core.c:1076 [inline]
 #0: ffff88807aab13f8 (&tsk->futex_exit_mutex){+.+.}-{3:3}, at: futex_exit_release+0x30/0x1e0 kernel/futex/core.c:1128
 #1: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 #1: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
 #1: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2272 [inline]
 #1: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x110/0x410 kernel/trace/bpf_trace.c:2312
 #2: ffff8880737b5218 (&qs->lock){-.-.}-{2:2}, at: queue_stack_map_push_elem+0x1ac/0x650 kernel/bpf/queue_stack_maps.c:214
 #3: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 #3: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
 #3: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2272 [inline]
 #3: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x110/0x410 kernel/trace/bpf_trace.c:2312

stack backtrace:
CPU: 1 PID: 3725 Comm: syz-executor134 Not tainted 6.1.90-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 print_deadlock_bug kernel/locking/lockdep.c:2983 [inline]
 check_deadlock kernel/locking/lockdep.c:3026 [inline]
 validate_chain+0x4711/0x5950 kernel/locking/lockdep.c:3812
 __lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
 lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
 queue_stack_map_push_elem+0x1ac/0x650 kernel/bpf/queue_stack_maps.c:214
 bpf_prog_216c997a1f42e404+0x37/0x3b
 bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
 __bpf_prog_run include/linux/filter.h:603 [inline]
 bpf_prog_run include/linux/filter.h:610 [inline]
 __bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
 bpf_trace_run2+0x1fd/0x410 kernel/trace/bpf_trace.c:2312
 __traceiter_contention_end+0x74/0xa0 include/trace/events/lock.h:122
 trace_contention_end+0x14c/0x190 include/trace/events/lock.h:122
 __pv_queued_spin_lock_slowpath+0x935/0xc50 kernel/locking/qspinlock.c:560
 pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:591 [inline]
 queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
 queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
 do_raw_spin_lock+0x269/0x370 kernel/locking/spinlock_debug.c:115
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
 _raw_spin_lock_irqsave+0xdd/0x120 kernel/locking/spinlock.c:162
 queue_stack_map_push_elem+0x1ac/0x650 kernel/bpf/queue_stack_maps.c:214
 bpf_prog_216c997a1f42e404+0x37/0x3b
 bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
 __bpf_prog_run include/linux/filter.h:603 [inline]
 bpf_prog_run include/linux/filter.h:610 [inline]
 __bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
 bpf_trace_run2+0x1fd/0x410 kernel/trace/bpf_trace.c:2312
 __traceiter_contention_end+0x74/0xa0 include/trace/events/lock.h:122
 trace_contention_end+0x12f/0x170 include/trace/events/lock.h:122
 __mutex_lock_common kernel/locking/mutex.c:612 [inline]
 __mutex_lock+0x2ed/0xd80 kernel/locking/mutex.c:747
 futex_cleanup_begin kernel/futex/core.c:1076 [inline]
 futex_exit_release+0x30/0x1e0 kernel/futex/core.c:1128
 exit_mm_release+0x16/0x30 kernel/fork.c:1505
 exit_mm+0xa9/0x300 kernel/exit.c:535
 do_exit+0x9f6/0x26a0 kernel/exit.c:856
 do_group_exit+0x202/0x2b0 kernel/exit.c:1019
 __do_sys_exit_group kernel/exit.c:1030 [inline]
 __se_sys_exit_group kernel/exit.c:1028 [inline]
 __x64_sys_exit_group+0x3b/0x40 kernel/exit.c:1028
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fc9d8adf0b9
Code: 90 49 c7 c0 b8 ff ff ff be e7 00 00 00 ba 3c 00 00 00 eb 12 0f 1f 44 00 00 89 d0 0f 05 48 3d 00 f0 ff ff 77 1c f4 89 f0 0f 05 <48> 3d 00 f0 ff ff 76 e7 f7 d8 64 41 89 00 eb df 0f 1f 80 00 00 00
RSP: 002b:00007ffd5edf0198 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fc9d8adf0b9
RDX: 000000000000003c RSI: 00000000000000e7 RDI: 0000000000000000
RBP: 00007fc9d8b5a2b0 R08: ffffffffffffffb8 R09: 00000000000000a0
R10: 00000000000000a0 R11: 0000000000000246 R12: 00007fc9d8b5a2b0
R13: 0000000000000000 R14: 00007fc9d8b5ad20 R15: 00007fc9d8ab0260
 </TASK>

Crashes (8):
Time             | Kernel      | Commit       | Syzkaller | Config  | Log         | Report | Syz repro | C repro | VM info | Assets                                | Manager                  | Title
2024/05/08 23:01 | linux-6.1.y | 909ba1f1b414 | 20bf80e1  | .config | console log | report | syz       | C       |         | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan-perf | possible deadlock in queue_stack_map_push_elem
2024/04/26 15:18 | linux-6.1.y | 6741e066ec76 | 059e9963  | .config | console log | report | syz       | C       |         | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan-perf | possible deadlock in queue_stack_map_push_elem
2024/04/11 18:39 | linux-6.1.y | bf1e3b1cb1e0 | 95ed9ece  | .config | console log | report | syz       | C       |         | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan-perf | possible deadlock in queue_stack_map_push_elem
2024/05/14 01:26 | linux-6.1.y | 909ba1f1b414 | fdb4c10c  | .config | console log | report |           |         | info    | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan      | possible deadlock in queue_stack_map_push_elem
2024/05/14 01:26 | linux-6.1.y | 909ba1f1b414 | fdb4c10c  | .config | console log | report |           |         | info    | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan      | possible deadlock in queue_stack_map_push_elem
2024/05/07 19:24 | linux-6.1.y | 909ba1f1b414 | cb2dcc0e  | .config | console log | report |           |         | info    | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan-perf | possible deadlock in queue_stack_map_push_elem
2024/05/07 05:56 | linux-6.1.y | 909ba1f1b414 | fa7a5cf0  | .config | console log | report |           |         | info    | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan-perf | possible deadlock in queue_stack_map_push_elem
2024/05/02 18:59 | linux-6.1.y | 909ba1f1b414 | 3ba885bc  | .config | console log | report |           |         | info    | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan-perf | possible deadlock in queue_stack_map_push_elem
* Struck through repros no longer work on HEAD.