Title | Replies (including bot) | Last reply |
---|---|---|
[syzbot] [bcachefs?] possible deadlock in bch2_clear_folio_bits | 0 (1) | 2025/10/03 21:38 |
```
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
kswapd1/78 is trying to acquire lock:
ffff88805299d558 (&inode->ei_quota_lock){+.+.}-{4:4}, at: bch2_i_sectors_acct fs/bcachefs/fs-io.h:137 [inline]
ffff88805299d558 (&inode->ei_quota_lock){+.+.}-{4:4}, at: bch2_clear_folio_bits+0x506/0x830 fs/bcachefs/fs-io-pagecache.c:513

but task is already holding lock:
ffffffff8e4419a0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:7012 [inline]
ffffffff8e4419a0 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0x951/0x2830 mm/vmscan.c:7386

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}-{0:0}:
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       __fs_reclaim_acquire mm/page_alloc.c:4234 [inline]
       fs_reclaim_acquire+0x72/0x100 mm/page_alloc.c:4248
       might_alloc include/linux/sched/mm.h:318 [inline]
       slab_pre_alloc_hook mm/slub.c:4142 [inline]
       slab_alloc_node mm/slub.c:4220 [inline]
       __kmalloc_cache_noprof+0x41/0x3d0 mm/slub.c:4402
       kmalloc_noprof include/linux/slab.h:905 [inline]
       kzalloc_noprof include/linux/slab.h:1039 [inline]
       genradix_alloc_node include/linux/generic-radix-tree.h:101 [inline]
       __genradix_ptr_alloc+0x199/0x4a0 lib/generic-radix-tree.c:44
       bch2_quota_transfer+0x300/0xa50 fs/bcachefs/quota.c:343
       bch2_fs_quota_transfer+0x27d/0x4f0 fs/bcachefs/fs.c:183
       bch2_set_projid fs/bcachefs/fs.h:166 [inline]
       bch2_fileattr_set+0x515/0x6f0 fs/bcachefs/fs.c:1728
       vfs_fileattr_set+0x92f/0xb90 fs/file_attr.c:298
       ioctl_fssetxattr+0x1ed/0x270 fs/file_attr.c:372
       do_vfs_ioctl+0x81d/0x1430 fs/ioctl.c:567
       __do_sys_ioctl fs/ioctl.c:596 [inline]
       __se_sys_ioctl+0x82/0x170 fs/ioctl.c:584
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&inode->ei_quota_lock){+.+.}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3165 [inline]
       check_prevs_add kernel/locking/lockdep.c:3284 [inline]
       validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3908
       __lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5237
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       __mutex_lock_common kernel/locking/mutex.c:598 [inline]
       __mutex_lock+0x187/0x1350 kernel/locking/mutex.c:760
       bch2_i_sectors_acct fs/bcachefs/fs-io.h:137 [inline]
       bch2_clear_folio_bits+0x506/0x830 fs/bcachefs/fs-io-pagecache.c:513
       bch2_release_folio+0xf7/0x150 fs/bcachefs/fs-io-pagecache.c:672
       shrink_folio_list+0x20ac/0x4cd0 mm/vmscan.c:1518
       evict_folios+0x471e/0x57c0 mm/vmscan.c:4744
       try_to_shrink_lruvec+0x8a3/0xb50 mm/vmscan.c:4907
       shrink_one+0x21b/0x7c0 mm/vmscan.c:4952
       shrink_many mm/vmscan.c:5015 [inline]
       lru_gen_shrink_node mm/vmscan.c:5093 [inline]
       shrink_node+0x314e/0x3760 mm/vmscan.c:6078
       kswapd_shrink_node mm/vmscan.c:6938 [inline]
       balance_pgdat mm/vmscan.c:7121 [inline]
       kswapd+0x147c/0x2830 mm/vmscan.c:7386
       kthread+0x711/0x8a0 kernel/kthread.c:463
       ret_from_fork+0x439/0x7d0 arch/x86/kernel/process.c:148
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&inode->ei_quota_lock);
                               lock(fs_reclaim);
  lock(&inode->ei_quota_lock);

 *** DEADLOCK ***

1 lock held by kswapd1/78:
 #0: ffffffff8e4419a0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:7012 [inline]
 #0: ffffffff8e4419a0 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0x951/0x2830 mm/vmscan.c:7386

stack backtrace:
CPU: 0 UID: 0 PID: 78 Comm: kswapd1 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 print_circular_bug+0x2ee/0x310 kernel/locking/lockdep.c:2043
 check_noncircular+0x134/0x160 kernel/locking/lockdep.c:2175
 check_prev_add kernel/locking/lockdep.c:3165 [inline]
 check_prevs_add kernel/locking/lockdep.c:3284 [inline]
 validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3908
 __lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5237
 lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
 __mutex_lock_common kernel/locking/mutex.c:598 [inline]
 __mutex_lock+0x187/0x1350 kernel/locking/mutex.c:760
 bch2_i_sectors_acct fs/bcachefs/fs-io.h:137 [inline]
 bch2_clear_folio_bits+0x506/0x830 fs/bcachefs/fs-io-pagecache.c:513
 bch2_release_folio+0xf7/0x150 fs/bcachefs/fs-io-pagecache.c:672
 shrink_folio_list+0x20ac/0x4cd0 mm/vmscan.c:1518
 evict_folios+0x471e/0x57c0 mm/vmscan.c:4744
 try_to_shrink_lruvec+0x8a3/0xb50 mm/vmscan.c:4907
 shrink_one+0x21b/0x7c0 mm/vmscan.c:4952
 shrink_many mm/vmscan.c:5015 [inline]
 lru_gen_shrink_node mm/vmscan.c:5093 [inline]
 shrink_node+0x314e/0x3760 mm/vmscan.c:6078
 kswapd_shrink_node mm/vmscan.c:6938 [inline]
 balance_pgdat mm/vmscan.c:7121 [inline]
 kswapd+0x147c/0x2830 mm/vmscan.c:7386
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x439/0x7d0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
```
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets (help?) | Manager | Title |
---|---|---|---|---|---|---|---|---|---|---|---|---|
2025/09/29 21:29 | upstream | e5f0a698b34e | 86341da6 | .config | console log | report | | | | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in bch2_clear_folio_bits |