syzbot


possible deadlock in __stack_map_get

Status: upstream: reported C repro on 2024/04/12 22:39
Bug presence: origin:upstream
Reported-by: syzbot+56936a2c1630efed0af2@syzkaller.appspotmail.com
First crash: 175d, last: 15d
Bug presence (1)
Date | Name | Commit | Repro | Result
2024/04/29 | upstream (ToT) | e67572cd2204 | C | [report] possible deadlock in __stack_map_get
Similar bugs (1)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | possible deadlock in __stack_map_get bpf | C | error | | 47 | 12h21m | 169d | 0/28 | upstream: reported C repro on 2024/04/18 20:00
Fix bisection attempts (1)
Created | Duration | User | Patch | Repo | Result
2024/08/04 07:19 | 1h22m | bisect fix | | linux-6.1.y | OK (0) job log log

Sample crash report:
============================================
WARNING: possible recursive locking detected
6.1.90-syzkaller #0 Not tainted
--------------------------------------------
syz-executor180/3559 is trying to acquire lock:
ffff88807b4ca218 (&qs->lock){-...}-{2:2}, at: __stack_map_get+0x147/0x4a0 kernel/bpf/queue_stack_maps.c:144

but task is already holding lock:
ffff88807ad45218 (&qs->lock){-...}-{2:2}, at: __stack_map_get+0x147/0x4a0 kernel/bpf/queue_stack_maps.c:144

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&qs->lock);
  lock(&qs->lock);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

4 locks held by syz-executor180/3559:
 #0: ffff8880235b93f8 (&tsk->futex_exit_mutex){+.+.}-{3:3}, at: futex_cleanup_begin kernel/futex/core.c:1076 [inline]
 #0: ffff8880235b93f8 (&tsk->futex_exit_mutex){+.+.}-{3:3}, at: futex_exit_release+0x30/0x1e0 kernel/futex/core.c:1128
 #1: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 #1: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
 #1: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2272 [inline]
 #1: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x110/0x410 kernel/trace/bpf_trace.c:2312
 #2: ffff88807ad45218 (&qs->lock){-...}-{2:2}, at: __stack_map_get+0x147/0x4a0 kernel/bpf/queue_stack_maps.c:144
 #3: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 #3: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
 #3: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2272 [inline]
 #3: ffffffff8d12ac80 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run2+0x110/0x410 kernel/trace/bpf_trace.c:2312

stack backtrace:
CPU: 0 PID: 3559 Comm: syz-executor180 Not tainted 6.1.90-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 print_deadlock_bug kernel/locking/lockdep.c:2983 [inline]
 check_deadlock kernel/locking/lockdep.c:3026 [inline]
 validate_chain+0x4711/0x5950 kernel/locking/lockdep.c:3812
 __lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
 lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
 __stack_map_get+0x147/0x4a0 kernel/bpf/queue_stack_maps.c:144
 bpf_prog_00798911c748094f+0x3a/0x3e
 bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
 __bpf_prog_run include/linux/filter.h:603 [inline]
 bpf_prog_run include/linux/filter.h:610 [inline]
 __bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
 bpf_trace_run2+0x1fd/0x410 kernel/trace/bpf_trace.c:2312
 __traceiter_contention_end+0x74/0xa0 include/trace/events/lock.h:122
 trace_contention_end+0x14c/0x190 include/trace/events/lock.h:122
 __pv_queued_spin_lock_slowpath+0x935/0xc50 kernel/locking/qspinlock.c:560
 pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:591 [inline]
 queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
 queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
 do_raw_spin_lock+0x269/0x370 kernel/locking/spinlock_debug.c:115
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
 _raw_spin_lock_irqsave+0xdd/0x120 kernel/locking/spinlock.c:162
 __stack_map_get+0x147/0x4a0 kernel/bpf/queue_stack_maps.c:144
 bpf_prog_00798911c748094f+0x3a/0x3e
 bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
 __bpf_prog_run include/linux/filter.h:603 [inline]
 bpf_prog_run include/linux/filter.h:610 [inline]
 __bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
 bpf_trace_run2+0x1fd/0x410 kernel/trace/bpf_trace.c:2312
 __traceiter_contention_end+0x74/0xa0 include/trace/events/lock.h:122
 trace_contention_end+0x12f/0x170 include/trace/events/lock.h:122
 __mutex_lock_common kernel/locking/mutex.c:612 [inline]
 __mutex_lock+0x2ed/0xd80 kernel/locking/mutex.c:747
 futex_cleanup_begin kernel/futex/core.c:1076 [inline]
 futex_exit_release+0x30/0x1e0 kernel/futex/core.c:1128
 exit_mm_release+0x16/0x30 kernel/fork.c:1505
 exit_mm+0xa9/0x300 kernel/exit.c:535
 do_exit+0x9f6/0x26a0 kernel/exit.c:856
 do_group_exit+0x202/0x2b0 kernel/exit.c:1019
 __do_sys_exit_group kernel/exit.c:1030 [inline]
 __se_sys_exit_group kernel/exit.c:1028 [inline]
 __x64_sys_exit_group+0x3b/0x40 kernel/exit.c:1028
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f96c267c0b9
Code: 90 49 c7 c0 b8 ff ff ff be e7 00 00 00 ba 3c 00 00 00 eb 12 0f 1f 44 00 00 89 d0 0f 05 48 3d 00 f0 ff ff 77 1c f4 89 f0 0f 05 <48> 3d 00 f0 ff ff 76 e7 f7 d8 64 41 89 00 eb df 0f 1f 80 00 00 00
RSP: 002b:00007fffe8e70798 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f96c267c0b9
RDX: 000000000000003c RSI: 00000000000000e7 RDI: 0000000000000000
RBP: 00007f96c26f72b0 R08: ffffffffffffffb8 R09: 00000000000000a0
R10: 00000000000000a0 R11: 0000000000000246 R12: 00007f96c26f72b0
R13: 0000000000000000 R14: 00007f96c26f7d20 R15: 00007f96c264d260
 </TASK>

Crashes (10):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets (help?) | Manager | Title
2024/05/09 03:40 | linux-6.1.y | 909ba1f1b414 | 20bf80e1 | .config | console log | report | syz | C | | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan-perf | possible deadlock in __stack_map_get
2024/04/12 22:38 | linux-6.1.y | bf1e3b1cb1e0 | c8349e48 | .config | console log | report | syz | C | | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan-perf | possible deadlock in __stack_map_get
2024/09/20 10:33 | linux-6.1.y | e526b12bf916 | 6f888b75 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan-perf | possible deadlock in __stack_map_get
2024/08/21 08:37 | linux-6.1.y | ee5e09825b81 | 9f0ab3fb | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan-perf | possible deadlock in __stack_map_get
2024/06/24 14:59 | linux-6.1.y | eb44d83053d6 | edc5149a | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan-perf | possible deadlock in __stack_map_get
2024/06/06 14:37 | linux-6.1.y | 88690811da69 | 121701b6 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan-perf | possible deadlock in __stack_map_get
2024/06/06 14:37 | linux-6.1.y | 88690811da69 | 121701b6 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan-perf | possible deadlock in __stack_map_get
2024/05/26 15:58 | linux-6.1.y | 88690811da69 | a10a183e | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan-perf | possible deadlock in __stack_map_get
2024/05/20 02:07 | linux-6.1.y | 4078fa637fcd | c0f1611a | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan-perf | possible deadlock in __stack_map_get
2024/04/23 11:02 | linux-6.1.y | 6741e066ec76 | 21339d7b | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan-perf | possible deadlock in __stack_map_get
* Struck through repros no longer work on HEAD.