syzbot


possible deadlock in __stack_depot_save (2)

Status: upstream: reported on 2025/07/08 19:17
Reported-by: syzbot+17db7085bdcb3d565081@syzkaller.appspotmail.com
First crash: 29d, last: 22d
Similar bugs (1)
  Kernel:   linux-6.1
  Title:    possible deadlock in __stack_depot_save
  Rank:     4
  Count:    1
  Last:     136d
  Reported: 136d
  Patched:  0/3
  Status:   auto-obsoleted due to no activity on 2025/07/02 05:52

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.1.145-syzkaller #0 Not tainted
------------------------------------------------------
syz.3.78/4674 is trying to acquire lock:
ffffffff8d1cbde8 (depot_lock){-.-.}-{2:2}, at: __stack_depot_save+0x1e4/0x460 lib/stackdepot.c:479

but task is already holding lock:
ffff8880308c8a38 (&trie->lock){-.-.}-{2:2}, at: trie_update_elem+0xc4/0xe90 kernel/bpf/lpm_trie.c:335

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&trie->lock){-.-.}-{2:2}:
       __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
       _raw_spin_lock_irqsave+0xa4/0xf0 kernel/locking/spinlock.c:162
       trie_delete_elem+0x90/0x690 kernel/bpf/lpm_trie.c:467
       0xffffffffa00009aa
       bpf_dispatcher_nop_func include/linux/bpf.h:1001 [inline]
       __bpf_prog_run include/linux/filter.h:603 [inline]
       bpf_prog_run include/linux/filter.h:610 [inline]
       __bpf_trace_run kernel/trace/bpf_trace.c:2285 [inline]
       bpf_trace_run2+0x1cd/0x3b0 kernel/trace/bpf_trace.c:2324
       trace_contention_end+0x13f/0x190 include/trace/events/lock.h:122
       __pv_queued_spin_lock_slowpath+0x7e8/0x9c0 kernel/locking/qspinlock.c:560
       pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:591 [inline]
       queued_spin_lock_slowpath+0x43/0x50 arch/x86/include/asm/qspinlock.h:51
       queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
       do_raw_spin_lock+0x217/0x280 kernel/locking/spinlock_debug.c:115
       __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
       _raw_spin_lock_irqsave+0xb0/0xf0 kernel/locking/spinlock.c:162
       __stack_depot_save+0x1e4/0x460 lib/stackdepot.c:479
       kasan_save_stack mm/kasan/common.c:46 [inline]
       kasan_set_track+0x60/0x70 mm/kasan/common.c:52
       __kasan_slab_alloc+0x6b/0x80 mm/kasan/common.c:328
       kasan_slab_alloc include/linux/kasan.h:201 [inline]
       slab_post_alloc_hook+0x4b/0x480 mm/slab.h:737
       slab_alloc_node mm/slub.c:3398 [inline]
       slab_alloc mm/slub.c:3406 [inline]
       __kmem_cache_alloc_lru mm/slub.c:3413 [inline]
       kmem_cache_alloc+0x123/0x2f0 mm/slub.c:3422
       kmem_cache_zalloc include/linux/slab.h:689 [inline]
       fill_pool lib/debugobjects.c:169 [inline]
       debug_objects_fill_pool+0x30c/0x650 lib/debugobjects.c:607
       __debug_object_init+0x29/0x420 lib/debugobjects.c:617
       init_cgroup_housekeeping+0x67d/0x790 kernel/cgroup/cgroup.c:2041
       cgroup_create kernel/cgroup/cgroup.c:5617 [inline]
       cgroup_mkdir+0x50d/0xeb0 kernel/cgroup/cgroup.c:5745
       kernfs_iop_mkdir+0x24c/0x3d0 fs/kernfs/dir.c:1221
       vfs_mkdir+0x387/0x570 fs/namei.c:4106
       do_mkdirat+0x1d0/0x430 fs/namei.c:4131
       __do_sys_mkdirat fs/namei.c:4146 [inline]
       __se_sys_mkdirat fs/namei.c:4144 [inline]
       __x64_sys_mkdirat+0x85/0x90 fs/namei.c:4144
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #0 (depot_lock){-.-.}-{2:2}:
       check_prev_add kernel/locking/lockdep.c:3090 [inline]
       check_prevs_add kernel/locking/lockdep.c:3209 [inline]
       validate_chain kernel/locking/lockdep.c:3825 [inline]
       __lock_acquire+0x2cf8/0x7c50 kernel/locking/lockdep.c:5049
       lock_acquire+0x1b4/0x490 kernel/locking/lockdep.c:5662
       __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
       _raw_spin_lock_irqsave+0xa4/0xf0 kernel/locking/spinlock.c:162
       __stack_depot_save+0x1e4/0x460 lib/stackdepot.c:479
       save_stack+0x101/0x1e0 mm/page_owner.c:128
       __set_page_owner+0x19/0x60 mm/page_owner.c:190
       set_page_owner include/linux/page_owner.h:31 [inline]
       post_alloc_hook+0x173/0x1a0 mm/page_alloc.c:2532
       prep_new_page mm/page_alloc.c:2539 [inline]
       get_page_from_freelist+0x1a26/0x1ac0 mm/page_alloc.c:4328
       __alloc_pages+0x1df/0x4e0 mm/page_alloc.c:5614
       __alloc_pages_node include/linux/gfp.h:237 [inline]
       alloc_pages_node include/linux/gfp.h:260 [inline]
       __kmalloc_large_node+0x8c/0x1e0 mm/slab_common.c:1077
       __do_kmalloc_node mm/slab_common.c:924 [inline]
       __kmalloc_node+0x10e/0x240 mm/slab_common.c:943
       kmalloc_node include/linux/slab.h:589 [inline]
       bpf_map_kmalloc_node+0xb8/0x1a0 kernel/bpf/syscall.c:452
       lpm_trie_node_alloc kernel/bpf/lpm_trie.c:291 [inline]
       trie_update_elem+0x160/0xe90 kernel/bpf/lpm_trie.c:338
       bpf_map_update_value+0x5a0/0x670 kernel/bpf/syscall.c:226
       map_update_elem+0x4d7/0x680 kernel/bpf/syscall.c:1466
       __sys_bpf+0x454/0x6d0 kernel/bpf/syscall.c:5008
       __do_sys_bpf kernel/bpf/syscall.c:5124 [inline]
       __se_sys_bpf kernel/bpf/syscall.c:5122 [inline]
       __x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:5122
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&trie->lock);
                               lock(depot_lock);
                               lock(&trie->lock);
  lock(depot_lock);

 *** DEADLOCK ***

2 locks held by syz.3.78/4674:
 #0: ffffffff8cb2ae20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 #0: ffffffff8cb2ae20 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
 #0: ffffffff8cb2ae20 (rcu_read_lock){....}-{1:2}, at: bpf_map_update_value+0x379/0x670 kernel/bpf/syscall.c:225
 #1: ffff8880308c8a38 (&trie->lock){-.-.}-{2:2}, at: trie_update_elem+0xc4/0xe90 kernel/bpf/lpm_trie.c:335

stack backtrace:
CPU: 1 PID: 4674 Comm: syz.3.78 Not tainted 6.1.145-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x168/0x22e lib/dump_stack.c:106
 check_noncircular+0x274/0x310 kernel/locking/lockdep.c:2170
 check_prev_add kernel/locking/lockdep.c:3090 [inline]
 check_prevs_add kernel/locking/lockdep.c:3209 [inline]
 validate_chain kernel/locking/lockdep.c:3825 [inline]
 __lock_acquire+0x2cf8/0x7c50 kernel/locking/lockdep.c:5049
 lock_acquire+0x1b4/0x490 kernel/locking/lockdep.c:5662
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0xa4/0xf0 kernel/locking/spinlock.c:162
 __stack_depot_save+0x1e4/0x460 lib/stackdepot.c:479
 save_stack+0x101/0x1e0 mm/page_owner.c:128
 __set_page_owner+0x19/0x60 mm/page_owner.c:190
 set_page_owner include/linux/page_owner.h:31 [inline]
 post_alloc_hook+0x173/0x1a0 mm/page_alloc.c:2532
 prep_new_page mm/page_alloc.c:2539 [inline]
 get_page_from_freelist+0x1a26/0x1ac0 mm/page_alloc.c:4328
 __alloc_pages+0x1df/0x4e0 mm/page_alloc.c:5614
 __alloc_pages_node include/linux/gfp.h:237 [inline]
 alloc_pages_node include/linux/gfp.h:260 [inline]
 __kmalloc_large_node+0x8c/0x1e0 mm/slab_common.c:1077
 __do_kmalloc_node mm/slab_common.c:924 [inline]
 __kmalloc_node+0x10e/0x240 mm/slab_common.c:943
 kmalloc_node include/linux/slab.h:589 [inline]
 bpf_map_kmalloc_node+0xb8/0x1a0 kernel/bpf/syscall.c:452
 lpm_trie_node_alloc kernel/bpf/lpm_trie.c:291 [inline]
 trie_update_elem+0x160/0xe90 kernel/bpf/lpm_trie.c:338
 bpf_map_update_value+0x5a0/0x670 kernel/bpf/syscall.c:226
 map_update_elem+0x4d7/0x680 kernel/bpf/syscall.c:1466
 __sys_bpf+0x454/0x6d0 kernel/bpf/syscall.c:5008
 __do_sys_bpf kernel/bpf/syscall.c:5124 [inline]
 __se_sys_bpf kernel/bpf/syscall.c:5122 [inline]
 __x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:5122
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f2f68f8e929
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f2f69d1f038 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00007f2f691b5fa0 RCX: 00007f2f68f8e929
RDX: 0000000000000020 RSI: 0000200000000080 RDI: 0000000000000002
RBP: 00007f2f69010b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f2f691b5fa0 R15: 00007ffe166c5538
 </TASK>
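
The #1 chain above is produced by a BPF program attached to the lock:contention_end tracepoint (the bpf_trace_run2 frame) that deletes from a BPF_MAP_TYPE_LPM_TRIE map: the tracepoint fires from the spinlock slow path while depot_lock is being taken in __stack_depot_save(), and trie_delete_elem() then acquires &trie->lock under it. Below is a minimal, hypothetical sketch of such a program, not the actual syzkaller reproducer; the map name, key layout, and handler name are illustrative.

/* Hypothetical BPF program recreating the depot_lock -> &trie->lock ordering
 * of chain #1: attached to the lock:contention_end tracepoint, it deletes from
 * an LPM trie, which takes &trie->lock inside trie_delete_elem(). */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct lpm_key {
    __u32 prefixlen;  /* LPM-trie keys must start with the prefix length */
    __u32 data;       /* illustrative 4-byte key payload */
};

struct {
    __uint(type, BPF_MAP_TYPE_LPM_TRIE);
    __uint(map_flags, BPF_F_NO_PREALLOC);  /* required for LPM tries */
    __uint(max_entries, 16);
    __type(key, struct lpm_key);
    __type(value, __u32);
} trie SEC(".maps");

SEC("raw_tp/contention_end")
int on_contention_end(void *ctx)
{
    struct lpm_key key = { .prefixlen = 32, .data = 0 };

    /* trie_delete_elem() takes &trie->lock; when this tracepoint fires from
     * the slow path of depot_lock, lockdep records the inverted ordering. */
    bpf_map_delete_elem(&trie, &key);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";

The attachment mechanism is secondary; what matters is that a program run from trace_contention_end executes while the just-contended spinlock (here depot_lock) is already held, so any lock it takes gains a dependency on that spinlock.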

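The #0 side of the cycle is ordinary update traffic against the same map: BPF_MAP_UPDATE_ELEM reaches trie_update_elem(), which takes &trie->lock and then allocates the new node; with page_owner enabled, that allocation records its stack via __stack_depot_save(), taking depot_lock while &trie->lock is held. A hedged userspace sketch of that update, assuming a libbpf environment and an already-created map_fd for the trie (both assumptions, not shown in the report):

/* Hypothetical userspace half of the scenario: a plain update of the LPM-trie
 * map. The in-kernel path is the one in the report:
 *   map_update_elem() -> bpf_map_update_value() -> trie_update_elem()
 * which holds &trie->lock across the node allocation. */
#include <linux/types.h>
#include <bpf/bpf.h>

struct lpm_key {
    __u32 prefixlen;  /* LPM-trie keys must start with the prefix length */
    __u32 data;
};

int update_trie(int map_fd)  /* map_fd: fd of the LPM-trie map (assumed) */
{
    struct lpm_key key = { .prefixlen = 32, .data = 0 };
    __u32 value = 1;

    return bpf_map_update_elem(map_fd, &key, &value, BPF_ANY);
}

Run concurrently with a program like the one sketched above, this gives exactly the CPU0/CPU1 interleaving shown in the "Possible unsafe locking scenario" section.
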
Crashes (4):
Time              Kernel       Commit        Syzkaller  Manager                   Title
2025/07/16 11:53  linux-6.1.y  f2198ea7eb3e  124ec9cc   ci2-linux-6-1-kasan       possible deadlock in __stack_depot_save
2025/07/16 11:52  linux-6.1.y  f2198ea7eb3e  124ec9cc   ci2-linux-6-1-kasan       possible deadlock in __stack_depot_save
2025/07/10 04:14  linux-6.1.y  04d1ccaa9c28  956bd956   ci2-linux-6-1-kasan-perf  possible deadlock in __stack_depot_save
2025/07/08 19:16  linux-6.1.y  04d1ccaa9c28  4d9fdfa4   ci2-linux-6-1-kasan-perf  possible deadlock in __stack_depot_save