Title | Replies (including bot) | Last reply |
---|---|---|
[syzbot] [ocfs2?] possible deadlock in ocfs2_nfs_sync_lock (2) | 0 (1) | 2025/10/08 19:52 |
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
kswapd0/77 is trying to acquire lock:
ffff888032ee4bd0 (&osb->nfs_sync_rwlock){.+.+}-{4:4}, at: ocfs2_nfs_sync_lock+0x106/0x250 fs/ocfs2/dlmglue.c:2875

but task is already holding lock:
ffffffff8de44f40 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:7015 [inline]
ffffffff8de44f40 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0x951/0x2800 mm/vmscan.c:7389

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #3 (fs_reclaim){+.+.}-{0:0}:
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       __fs_reclaim_acquire mm/page_alloc.c:4269 [inline]
       fs_reclaim_acquire+0x72/0x100 mm/page_alloc.c:4283
       might_alloc include/linux/sched/mm.h:318 [inline]
       slab_pre_alloc_hook mm/slub.c:4897 [inline]
       slab_alloc_node mm/slub.c:5221 [inline]
       __kmalloc_cache_noprof+0x40/0x6f0 mm/slub.c:5719
       kmalloc_noprof include/linux/slab.h:957 [inline]
       kzalloc_noprof include/linux/slab.h:1094 [inline]
       ocfs2_reserve_new_metadata_blocks+0x113/0x940 fs/ocfs2/suballoc.c:968
       ocfs2_mknod+0xe08/0x2050 fs/ocfs2/namei.c:350
       ocfs2_mkdir+0x191/0x440 fs/ocfs2/namei.c:659
       vfs_mkdir+0x303/0x510 fs/namei.c:4453
       do_mkdirat+0x247/0x590 fs/namei.c:4486
       __do_sys_mkdirat fs/namei.c:4503 [inline]
       __se_sys_mkdirat fs/namei.c:4501 [inline]
       __x64_sys_mkdirat+0x87/0xa0 fs/namei.c:4501
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #2 (&ocfs2_sysfile_lock_key[INODE_ALLOC_SYSTEM_INODE]){+.+.}-{4:4}:
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       down_write+0x96/0x1f0 kernel/locking/rwsem.c:1590
       inode_lock include/linux/fs.h:979 [inline]
       ocfs2_remove_inode fs/ocfs2/inode.c:731 [inline]
       ocfs2_wipe_inode fs/ocfs2/inode.c:894 [inline]
       ocfs2_delete_inode fs/ocfs2/inode.c:1155 [inline]
       ocfs2_evict_inode+0x153d/0x4100 fs/ocfs2/inode.c:1295
       evict+0x504/0x9c0 fs/inode.c:810
       ocfs2_dentry_iput+0x247/0x370 fs/ocfs2/dcache.c:407
       __dentry_kill+0x209/0x660 fs/dcache.c:669
       dput+0x19f/0x2b0 fs/dcache.c:911
       ovl_check_rename_whiteout fs/overlayfs/super.c:619 [inline]
       ovl_make_workdir fs/overlayfs/super.c:714 [inline]
       ovl_get_workdir+0xac3/0x17b0 fs/overlayfs/super.c:837
       ovl_fill_super+0x154a/0x3da0 fs/overlayfs/super.c:1468
       vfs_get_super fs/super.c:1324 [inline]
       get_tree_nodev+0xb8/0x150 fs/super.c:1343
       vfs_get_tree+0x8f/0x2b0 fs/super.c:1751
       fc_mount fs/namespace.c:1208 [inline]
       do_new_mount_fc fs/namespace.c:3651 [inline]
       do_new_mount+0x302/0xa10 fs/namespace.c:3727
       do_mount fs/namespace.c:4050 [inline]
       __do_sys_mount fs/namespace.c:4238 [inline]
       __se_sys_mount+0x313/0x410 fs/namespace.c:4215
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&ocfs2_sysfile_lock_key[ORPHAN_DIR_SYSTEM_INODE]){+.+.}-{4:4}:
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       down_write+0x96/0x1f0 kernel/locking/rwsem.c:1590
       inode_lock include/linux/fs.h:979 [inline]
       ocfs2_wipe_inode fs/ocfs2/inode.c:852 [inline]
       ocfs2_delete_inode fs/ocfs2/inode.c:1155 [inline]
       ocfs2_evict_inode+0xf86/0x4100 fs/ocfs2/inode.c:1295
       evict+0x504/0x9c0 fs/inode.c:810
       ocfs2_dentry_iput+0x247/0x370 fs/ocfs2/dcache.c:407
       __dentry_kill+0x209/0x660 fs/dcache.c:669
       dput+0x19f/0x2b0 fs/dcache.c:911
       ovl_check_rename_whiteout fs/overlayfs/super.c:619 [inline]
       ovl_make_workdir fs/overlayfs/super.c:714 [inline]
       ovl_get_workdir+0xac3/0x17b0 fs/overlayfs/super.c:837
       ovl_fill_super+0x154a/0x3da0 fs/overlayfs/super.c:1468
       vfs_get_super fs/super.c:1324 [inline]
       get_tree_nodev+0xb8/0x150 fs/super.c:1343
       vfs_get_tree+0x8f/0x2b0 fs/super.c:1751
       fc_mount fs/namespace.c:1208 [inline]
       do_new_mount_fc fs/namespace.c:3651 [inline]
       do_new_mount+0x302/0xa10 fs/namespace.c:3727
       do_mount fs/namespace.c:4050 [inline]
       __do_sys_mount fs/namespace.c:4238 [inline]
       __se_sys_mount+0x313/0x410 fs/namespace.c:4215
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&osb->nfs_sync_rwlock){.+.+}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3165 [inline]
       check_prevs_add kernel/locking/lockdep.c:3284 [inline]
       validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3908
       __lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5237
       lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
       down_read+0x46/0x2e0 kernel/locking/rwsem.c:1537
       ocfs2_nfs_sync_lock+0x106/0x250 fs/ocfs2/dlmglue.c:2875
       ocfs2_delete_inode fs/ocfs2/inode.c:1106 [inline]
       ocfs2_evict_inode+0x390/0x4100 fs/ocfs2/inode.c:1295
       evict+0x504/0x9c0 fs/inode.c:810
       ocfs2_dentry_iput+0x247/0x370 fs/ocfs2/dcache.c:407
       __dentry_kill+0x209/0x660 fs/dcache.c:669
       shrink_kill+0xa9/0x2c0 fs/dcache.c:1114
       shrink_dentry_list+0x2e0/0x5e0 fs/dcache.c:1141
       prune_dcache_sb+0x10e/0x180 fs/dcache.c:1222
       super_cache_scan+0x369/0x4b0 fs/super.c:222
       do_shrink_slab+0x6ef/0x1110 mm/shrinker.c:437
       shrink_slab_memcg mm/shrinker.c:550 [inline]
       shrink_slab+0x7ef/0x10d0 mm/shrinker.c:628
       shrink_one+0x28a/0x7c0 mm/vmscan.c:4955
       shrink_many mm/vmscan.c:5016 [inline]
       lru_gen_shrink_node mm/vmscan.c:5094 [inline]
       shrink_node+0x315d/0x3780 mm/vmscan.c:6081
       kswapd_shrink_node mm/vmscan.c:6941 [inline]
       balance_pgdat mm/vmscan.c:7124 [inline]
       kswapd+0x147c/0x2800 mm/vmscan.c:7389
       kthread+0x70e/0x8a0 kernel/kthread.c:463
       ret_from_fork+0x436/0x7d0 arch/x86/kernel/process.c:148
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

other info that might help us debug this:

Chain exists of:
  &osb->nfs_sync_rwlock --> &ocfs2_sysfile_lock_key[INODE_ALLOC_SYSTEM_INODE] --> fs_reclaim

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&ocfs2_sysfile_lock_key[INODE_ALLOC_SYSTEM_INODE]);
                               lock(fs_reclaim);
  rlock(&osb->nfs_sync_rwlock);

 *** DEADLOCK ***

2 locks held by kswapd0/77:
 #0: ffffffff8de44f40 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:7015 [inline]
 #0: ffffffff8de44f40 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0x951/0x2800 mm/vmscan.c:7389
 #1: ffff88803b1de0e0 (&type->s_umount_key#51){.+.+}-{4:4}, at: super_trylock_shared fs/super.c:562 [inline]
 #1: ffff88803b1de0e0 (&type->s_umount_key#51){.+.+}-{4:4}, at: super_cache_scan+0x91/0x4b0 fs/super.c:197

stack backtrace:
CPU: 0 UID: 0 PID: 77 Comm: kswapd0 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 print_circular_bug+0x2ee/0x310 kernel/locking/lockdep.c:2043
 check_noncircular+0x134/0x160 kernel/locking/lockdep.c:2175
 check_prev_add kernel/locking/lockdep.c:3165 [inline]
 check_prevs_add kernel/locking/lockdep.c:3284 [inline]
 validate_chain+0xb9b/0x2140 kernel/locking/lockdep.c:3908
 __lock_acquire+0xab9/0xd20 kernel/locking/lockdep.c:5237
 lock_acquire+0x120/0x360 kernel/locking/lockdep.c:5868
 down_read+0x46/0x2e0 kernel/locking/rwsem.c:1537
 ocfs2_nfs_sync_lock+0x106/0x250 fs/ocfs2/dlmglue.c:2875
 ocfs2_delete_inode fs/ocfs2/inode.c:1106 [inline]
 ocfs2_evict_inode+0x390/0x4100 fs/ocfs2/inode.c:1295
 evict+0x504/0x9c0 fs/inode.c:810
 ocfs2_dentry_iput+0x247/0x370 fs/ocfs2/dcache.c:407
 __dentry_kill+0x209/0x660 fs/dcache.c:669
 shrink_kill+0xa9/0x2c0 fs/dcache.c:1114
 shrink_dentry_list+0x2e0/0x5e0 fs/dcache.c:1141
 prune_dcache_sb+0x10e/0x180 fs/dcache.c:1222
 super_cache_scan+0x369/0x4b0 fs/super.c:222
 do_shrink_slab+0x6ef/0x1110 mm/shrinker.c:437
 shrink_slab_memcg mm/shrinker.c:550 [inline]
 shrink_slab+0x7ef/0x10d0 mm/shrinker.c:628
 shrink_one+0x28a/0x7c0 mm/vmscan.c:4955
 shrink_many mm/vmscan.c:5016 [inline]
 lru_gen_shrink_node mm/vmscan.c:5094 [inline]
 shrink_node+0x315d/0x3780 mm/vmscan.c:6081
 kswapd_shrink_node mm/vmscan.c:6941 [inline]
 balance_pgdat mm/vmscan.c:7124 [inline]
 kswapd+0x147c/0x2800 mm/vmscan.c:7389
 kthread+0x70e/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x436/0x7d0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
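For readers less familiar with lockdep's circular-dependency reports, the sketch below reproduces the essential shape of the problem in userspace: two threads that take the same pair of locks in opposite orders. The pthread mutexes, the function names, and the two-lock simplification are illustrative stand-ins only (the mutexes play the roles of fs_reclaim and osb->nfs_sync_rwlock; the real kernel chain additionally runs through the ocfs2 system-file inode locks). This is a minimal sketch of the ABBA pattern lockdep is warning about, not the kernel code involved.

```c
/*
 * Userspace illustration of the lock inversion reported above.
 * "reclaim" stands in for fs_reclaim, "nfs_sync" for osb->nfs_sync_rwlock.
 * Build with: cc -pthread abba.c -o abba
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t reclaim  = PTHREAD_MUTEX_INITIALIZER; /* ~ fs_reclaim */
static pthread_mutex_t nfs_sync = PTHREAD_MUTEX_INITIALIZER; /* ~ nfs_sync_rwlock */

/* "kswapd" side: inside reclaim, dentry shrinking evicts an ocfs2 inode,
 * which then wants the NFS sync lock. */
static void *reclaim_path(void *arg)
{
	pthread_mutex_lock(&reclaim);
	sleep(1);                       /* widen the race window */
	pthread_mutex_lock(&nfs_sync);  /* blocks while the other thread holds it */
	puts("reclaim path acquired both locks");
	pthread_mutex_unlock(&nfs_sync);
	pthread_mutex_unlock(&reclaim);
	return NULL;
}

/* "evict/mkdir" side: already inside the ocfs2 locking, an allocation may
 * enter direct reclaim, i.e. the locks are taken in the opposite order. */
static void *evict_path(void *arg)
{
	pthread_mutex_lock(&nfs_sync);
	sleep(1);
	pthread_mutex_lock(&reclaim);   /* opposite order: classic ABBA deadlock */
	puts("evict path acquired both locks");
	pthread_mutex_unlock(&reclaim);
	pthread_mutex_unlock(&nfs_sync);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, reclaim_path, NULL);
	pthread_create(&b, NULL, evict_path, NULL);
	pthread_join(a, NULL);          /* once both sleeps overlap, neither join returns */
	pthread_join(b, NULL);
	return 0;
}
```

Run as written, the two threads usually stall on their second lock acquisition and the program hangs, which is exactly the condition lockdep predicts would occur in the kernel if the kswapd reclaim path raced the ocfs2 inode-eviction path.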
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title |
---|---|---|---|---|---|---|---|---|---|---|---|---|
2025/10/04 19:49 | upstream | cbf33b8e0b36 | 49379ee0 | .config | console log | report | | | | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-snapshot-upstream-root | possible deadlock in ocfs2_nfs_sync_lock |