syzbot


possible deadlock in bch2_btree_cache_scan

Status: upstream: reported on 2024/10/24 17:03
Subsystems: bcachefs
Reported-by: syzbot+3d89e46a004eafb88bc6@syzkaller.appspotmail.com
First crash: 46d, last: 5d18h
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [bcachefs?] possible deadlock in bch2_btree_cache_scan | 0 (1) | 2024/10/24 17:03

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.12.0-syzkaller-11677-g2ba9f676d0a2 #0 Not tainted
------------------------------------------------------
kswapd0/80 is trying to acquire lock:
ffff888053301c50 (&bc->lock){+.+.}-{4:4}, at: bch2_btree_cache_scan+0x184/0xec0 fs/bcachefs/btree_cache.c:480

but task is already holding lock:
ffffffff8ea3f680 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6864 [inline]
ffffffff8ea3f680 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xbf1/0x36f0 mm/vmscan.c:7246

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (fs_reclaim){+.+.}-{0:0}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __fs_reclaim_acquire mm/page_alloc.c:3851 [inline]
       fs_reclaim_acquire+0x88/0x130 mm/page_alloc.c:3865
       might_alloc include/linux/sched/mm.h:318 [inline]
       slab_pre_alloc_hook mm/slub.c:4055 [inline]
       slab_alloc_node mm/slub.c:4133 [inline]
       __do_kmalloc_node mm/slub.c:4282 [inline]
       __kmalloc_noprof+0xae/0x4c0 mm/slub.c:4295
       kmalloc_noprof include/linux/slab.h:905 [inline]
       kzalloc_noprof include/linux/slab.h:1037 [inline]
       pcpu_mem_zalloc mm/percpu.c:510 [inline]
       pcpu_alloc_chunk mm/percpu.c:1443 [inline]
       pcpu_create_chunk+0x57/0xbc0 mm/percpu-vm.c:338
       pcpu_balance_populated mm/percpu.c:2076 [inline]
       pcpu_balance_workfn+0xc4d/0xd40 mm/percpu.c:2213
       process_one_work kernel/workqueue.c:3229 [inline]
       process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3310
       worker_thread+0x870/0xd30 kernel/workqueue.c:3391
       kthread+0x2f0/0x390 kernel/kthread.c:389
       ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

-> #1 (pcpu_alloc_mutex){+.+.}-{4:4}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __mutex_lock_common kernel/locking/mutex.c:585 [inline]
       __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
       pcpu_alloc_noprof+0x293/0x1760 mm/percpu.c:1795
       __six_lock_init+0x104/0x150 fs/bcachefs/six.c:869
       bch2_btree_lock_init+0x38/0x100 fs/bcachefs/btree_locking.c:12
       bch2_btree_node_mem_alloc+0x565/0x16f0 fs/bcachefs/btree_cache.c:805
       __bch2_btree_node_alloc fs/bcachefs/btree_update_interior.c:321 [inline]
       bch2_btree_reserve_get+0x2df/0x1890 fs/bcachefs/btree_update_interior.c:549
       bch2_btree_update_start+0xe56/0x14e0 fs/bcachefs/btree_update_interior.c:1247
       bch2_btree_split_leaf+0x123/0x840 fs/bcachefs/btree_update_interior.c:1856
       bch2_trans_commit_error+0x212/0x1380 fs/bcachefs/btree_trans_commit.c:942
       __bch2_trans_commit+0x7ead/0x93c0 fs/bcachefs/btree_trans_commit.c:1140
       wb_flush_one fs/bcachefs/btree_write_buffer.c:183 [inline]
       bch2_btree_write_buffer_flush_locked+0x2af9/0x5a00 fs/bcachefs/btree_write_buffer.c:379
       btree_write_buffer_flush_seq+0x1b23/0x1cc0 fs/bcachefs/btree_write_buffer.c:517
       bch2_btree_write_buffer_journal_flush+0xc7/0x150 fs/bcachefs/btree_write_buffer.c:533
       journal_flush_pins+0x5f7/0xb20 fs/bcachefs/journal_reclaim.c:565
       journal_flush_done+0x8e/0x260 fs/bcachefs/journal_reclaim.c:819
       bch2_journal_flush_pins+0x225/0x3a0 fs/bcachefs/journal_reclaim.c:852
       bch2_journal_flush_all_pins fs/bcachefs/journal_reclaim.h:76 [inline]
       bch2_journal_replay+0x270f/0x2a40 fs/bcachefs/recovery.c:383
       bch2_run_recovery_pass+0xf0/0x1e0 fs/bcachefs/recovery_passes.c:191
       bch2_run_recovery_passes+0x3a7/0x880 fs/bcachefs/recovery_passes.c:244
       bch2_fs_recovery+0x25cc/0x39d0 fs/bcachefs/recovery.c:861
       bch2_fs_start+0x356/0x5b0 fs/bcachefs/super.c:1037
       bch2_fs_get_tree+0xd68/0x1710 fs/bcachefs/fs.c:2170
       vfs_get_tree+0x90/0x2b0 fs/super.c:1814
       do_new_mount+0x2be/0xb40 fs/namespace.c:3507
       do_mount fs/namespace.c:3847 [inline]
       __do_sys_mount fs/namespace.c:4057 [inline]
       __se_sys_mount+0x2d6/0x3c0 fs/namespace.c:4034
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&bc->lock){+.+.}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3161 [inline]
       check_prevs_add kernel/locking/lockdep.c:3280 [inline]
       validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
       __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __mutex_lock_common kernel/locking/mutex.c:585 [inline]
       __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
       bch2_btree_cache_scan+0x184/0xec0 fs/bcachefs/btree_cache.c:480
       do_shrink_slab+0x72d/0x1160 mm/shrinker.c:437
       shrink_slab+0x1093/0x14d0 mm/shrinker.c:664
       shrink_one+0x43b/0x850 mm/vmscan.c:4836
       shrink_many mm/vmscan.c:4897 [inline]
       lru_gen_shrink_node mm/vmscan.c:4975 [inline]
       shrink_node+0x37c5/0x3e50 mm/vmscan.c:5956
       kswapd_shrink_node mm/vmscan.c:6785 [inline]
       balance_pgdat mm/vmscan.c:6977 [inline]
       kswapd+0x1ca9/0x36f0 mm/vmscan.c:7246
       kthread+0x2f0/0x390 kernel/kthread.c:389
       ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

other info that might help us debug this:

Chain exists of:
  &bc->lock --> pcpu_alloc_mutex --> fs_reclaim

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(pcpu_alloc_mutex);
                               lock(fs_reclaim);
  lock(&bc->lock);

 *** DEADLOCK ***

1 lock held by kswapd0/80:
 #0: ffffffff8ea3f680 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6864 [inline]
 #0: ffffffff8ea3f680 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xbf1/0x36f0 mm/vmscan.c:7246

stack backtrace:
CPU: 0 UID: 0 PID: 80 Comm: kswapd0 Not tainted 6.12.0-syzkaller-11677-g2ba9f676d0a2 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_circular_bug+0x13a/0x1b0 kernel/locking/lockdep.c:2074
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2206
 check_prev_add kernel/locking/lockdep.c:3161 [inline]
 check_prevs_add kernel/locking/lockdep.c:3280 [inline]
 validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
 __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
 __mutex_lock_common kernel/locking/mutex.c:585 [inline]
 __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
 bch2_btree_cache_scan+0x184/0xec0 fs/bcachefs/btree_cache.c:480
 do_shrink_slab+0x72d/0x1160 mm/shrinker.c:437
 shrink_slab+0x1093/0x14d0 mm/shrinker.c:664
 shrink_one+0x43b/0x850 mm/vmscan.c:4836
 shrink_many mm/vmscan.c:4897 [inline]
 lru_gen_shrink_node mm/vmscan.c:4975 [inline]
 shrink_node+0x37c5/0x3e50 mm/vmscan.c:5956
 kswapd_shrink_node mm/vmscan.c:6785 [inline]
 balance_pgdat mm/vmscan.c:6977 [inline]
 kswapd+0x1ca9/0x36f0 mm/vmscan.c:7246
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
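
The "Possible unsafe locking scenario" above is a lock-order inversion: the reclaim path (kswapd) takes fs_reclaim and then wants &bc->lock in bch2_btree_cache_scan, while the btree-node allocation path holds &bc->lock and, via bch2_btree_lock_init -> __six_lock_init -> pcpu_alloc, performs an allocation that can itself enter reclaim. Below is a minimal user-space sketch of that inversion, not bcachefs code: both mutexes are stand-ins (in the kernel, fs_reclaim is a lockdep-only annotation acquired by fs_reclaim_acquire() around allocations that may reclaim), and pcpu_alloc_mutex, which links the two in the report, is folded away for brevity.

/*
 * Illustrative sketch of the inversion in the report above; not bcachefs code.
 * "fs_reclaim" is modeled as an ordinary mutex so the cycle is visible.
 */
#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t fs_reclaim = PTHREAD_MUTEX_INITIALIZER; /* stand-in for the fs_reclaim lockdep map */
static pthread_mutex_t bc_lock    = PTHREAD_MUTEX_INITIALIZER; /* stand-in for &bc->lock (btree cache lock) */

/* CPU0 in the report: kswapd is in reclaim, then the shrinker wants the btree cache lock. */
static void *reclaim_path(void *unused)
{
	(void)unused;
	pthread_mutex_lock(&fs_reclaim);    /* balance_pgdat / kswapd */
	pthread_mutex_lock(&bc_lock);       /* bch2_btree_cache_scan */
	pthread_mutex_unlock(&bc_lock);
	pthread_mutex_unlock(&fs_reclaim);
	return NULL;
}

/*
 * CPU1 in the report: the btree cache lock is held while a per-cpu/kmalloc
 * allocation (six-lock init) may itself recurse into reclaim.
 */
static void *alloc_path(void *unused)
{
	(void)unused;
	pthread_mutex_lock(&bc_lock);       /* bch2_btree_node_mem_alloc */
	pthread_mutex_lock(&fs_reclaim);    /* pcpu_alloc -> kmalloc -> fs_reclaim_acquire */
	pthread_mutex_unlock(&fs_reclaim);
	pthread_mutex_unlock(&bc_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	pthread_create(&a, NULL, reclaim_path, NULL);
	pthread_create(&b, NULL, alloc_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Lockdep reports the cycle as soon as it has observed both acquisition orders, even if the two paths never actually block each other in a given run.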

Crashes (19):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2024/11/30 08:44 | upstream | 2ba9f676d0a2 | 68914665 | .config | console log | report | - | - | - | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in bch2_btree_cache_scan
2024/11/24 04:09 | upstream | 9f16d5e6f220 | 68da6d95 | .config | console log | report | - | - | - | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in bch2_btree_cache_scan
2024/11/18 05:52 | upstream | f66d6acccbc0 | cfe3a04a | .config | console log | report | - | - | - | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in bch2_btree_cache_scan
2024/11/17 17:38 | upstream | 4a5df3796467 | cfe3a04a | .config | console log | report | - | - | - | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in bch2_btree_cache_scan
2024/11/16 12:16 | upstream | f868cd251776 | cfe3a04a | .config | console log | report | - | - | - | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in bch2_btree_cache_scan
2024/11/16 04:15 | upstream | f868cd251776 | cfe3a04a | .config | console log | report | - | - | - | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in bch2_btree_cache_scan
2024/11/13 05:17 | upstream | 3022e9d00ebe | 62026c85 | .config | console log | report | - | - | - | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in bch2_btree_cache_scan
2024/11/12 04:39 | upstream | 2d5404caa8c7 | 75bb1b32 | .config | console log | report | - | - | - | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in bch2_btree_cache_scan
2024/11/10 20:48 | upstream | a9cda7c0ffed | 6b856513 | .config | console log | report | - | - | - | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in bch2_btree_cache_scan
2024/11/05 10:38 | upstream | 557329bcecc2 | 509da429 | .config | console log | report | - | - | - | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in bch2_btree_cache_scan
2024/11/05 10:38 | upstream | 557329bcecc2 | 509da429 | .config | console log | report | - | - | - | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in bch2_btree_cache_scan
2024/11/04 15:00 | upstream | 59b723cd2adb | 0754ea12 | .config | console log | report | - | - | - | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in bch2_btree_cache_scan
2024/10/30 15:14 | upstream | c1e939a21eb1 | f3a00767 | .config | console log | report | - | - | - | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in bch2_btree_cache_scan
2024/10/30 15:14 | upstream | c1e939a21eb1 | f3a00767 | .config | console log | report | - | - | - | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in bch2_btree_cache_scan
2024/10/29 03:42 | upstream | e42b1a9a2557 | 66aeb999 | .config | console log | report | - | - | - | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in bch2_btree_cache_scan
2024/10/27 16:59 | upstream | 850925a8133c | 65e8686b | .config | console log | report | - | - | - | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in bch2_btree_cache_scan
2024/10/27 16:59 | upstream | 850925a8133c | 65e8686b | .config | console log | report | - | - | - | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in bch2_btree_cache_scan
2024/10/20 19:34 | upstream | 715ca9dd687f | cd6fc0a3 | .config | console log | report | - | - | - | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in bch2_btree_cache_scan
2024/10/20 16:51 | upstream | 715ca9dd687f | cd6fc0a3 | .config | console log | report | - | - | - | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in bch2_btree_cache_scan