======================================================
WARNING: possible circular locking dependency detected
6.9.0-rc4-syzkaller-00274-g3b68086599f8 #0 Not tainted
------------------------------------------------------
kswapd0/87 is trying to acquire lock:
ffff888060445658 (&xfs_dir_ilock_class){++++}-{3:3}, at: xfs_reclaim_inode fs/xfs/xfs_icache.c:945 [inline]
ffff888060445658 (&xfs_dir_ilock_class){++++}-{3:3}, at: xfs_icwalk_process_inode fs/xfs/xfs_icache.c:1631 [inline]
ffff888060445658 (&xfs_dir_ilock_class){++++}-{3:3}, at: xfs_icwalk_ag+0x120e/0x1ad0 fs/xfs/xfs_icache.c:1713

but task is already holding lock:
ffffffff8e428e80 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6782 [inline]
ffffffff8e428e80 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xb20/0x30c0 mm/vmscan.c:7164

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}-{0:0}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
       __fs_reclaim_acquire mm/page_alloc.c:3698 [inline]
       fs_reclaim_acquire+0x88/0x140 mm/page_alloc.c:3712
       might_alloc include/linux/sched/mm.h:312 [inline]
       prepare_alloc_pages+0x147/0x5d0 mm/page_alloc.c:4346
       __alloc_pages+0x166/0x6c0 mm/page_alloc.c:4564
       alloc_pages_mpol+0x3e8/0x680 mm/mempolicy.c:2264
       stack_depot_save_flags+0x666/0x830 lib/stackdepot.c:635
       kasan_save_stack mm/kasan/common.c:48 [inline]
       kasan_save_track+0x51/0x80 mm/kasan/common.c:68
       poison_kmalloc_redzone mm/kasan/common.c:370 [inline]
       __kasan_kmalloc+0x98/0xb0 mm/kasan/common.c:387
       kasan_kmalloc include/linux/kasan.h:211 [inline]
       __do_kmalloc_node mm/slub.c:3966 [inline]
       __kmalloc+0x233/0x4a0 mm/slub.c:3979
       kmalloc include/linux/slab.h:632 [inline]
       kzalloc include/linux/slab.h:749 [inline]
       xfs_dabuf_map+0x18b/0xc50 fs/xfs/libxfs/xfs_da_btree.c:2547
       xfs_da_read_buf+0x19b/0x470 fs/xfs/libxfs/xfs_da_btree.c:2670
       xfs_dir3_block_read+0x92/0x1a0 fs/xfs/libxfs/xfs_dir2_block.c:145
       xfs_dir2_block_lookup_int+0x109/0x7d0 fs/xfs/libxfs/xfs_dir2_block.c:700
       xfs_dir2_block_lookup+0x19a/0x630 fs/xfs/libxfs/xfs_dir2_block.c:650
       xfs_dir_lookup+0x633/0xaf0 fs/xfs/libxfs/xfs_dir2.c:399
       xfs_lookup+0x298/0x550 fs/xfs/xfs_inode.c:640
       xfs_vn_lookup+0x192/0x290 fs/xfs/xfs_iops.c:303
       lookup_open fs/namei.c:3475 [inline]
       open_last_lookups fs/namei.c:3566 [inline]
       path_openat+0x1033/0x3240 fs/namei.c:3796
       do_filp_open+0x235/0x490 fs/namei.c:3826
       do_sys_openat2+0x13e/0x1d0 fs/open.c:1406
       do_sys_open fs/open.c:1421 [inline]
       __do_sys_openat fs/open.c:1437 [inline]
       __se_sys_openat fs/open.c:1432 [inline]
       __x64_sys_openat+0x247/0x2a0 fs/open.c:1432
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&xfs_dir_ilock_class){++++}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain+0x18cb/0x58e0 kernel/locking/lockdep.c:3869
       __lock_acquire+0x1346/0x1fd0 kernel/locking/lockdep.c:5137
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
       down_write_nested+0x3d/0x50 kernel/locking/rwsem.c:1695
       xfs_reclaim_inode fs/xfs/xfs_icache.c:945 [inline]
       xfs_icwalk_process_inode fs/xfs/xfs_icache.c:1631 [inline]
       xfs_icwalk_ag+0x120e/0x1ad0 fs/xfs/xfs_icache.c:1713
       xfs_icwalk fs/xfs/xfs_icache.c:1762 [inline]
       xfs_reclaim_inodes_nr+0x257/0x360 fs/xfs/xfs_icache.c:1011
       super_cache_scan+0x40f/0x4b0 fs/super.c:227
       do_shrink_slab+0x705/0x1160 mm/shrinker.c:435
       shrink_slab+0x1092/0x14d0 mm/shrinker.c:662
       shrink_node_memcgs mm/vmscan.c:5875 [inline]
       shrink_node+0x11f5/0x2d60 mm/vmscan.c:5908
       kswapd_shrink_node mm/vmscan.c:6704 [inline]
       balance_pgdat mm/vmscan.c:6895 [inline]
       kswapd+0x1a25/0x30c0 mm/vmscan.c:7164
       kthread+0x2f0/0x390 kernel/kthread.c:388
       ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&xfs_dir_ilock_class);
                               lock(fs_reclaim);
  lock(&xfs_dir_ilock_class);

 *** DEADLOCK ***

2 locks held by kswapd0/87:
 #0: ffffffff8e428e80 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6782 [inline]
 #0: ffffffff8e428e80 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xb20/0x30c0 mm/vmscan.c:7164
 #1: ffff8881afec00e0 (&type->s_umount_key#68){++++}-{3:3}, at: super_trylock_shared fs/super.c:561 [inline]
 #1: ffff8881afec00e0 (&type->s_umount_key#68){++++}-{3:3}, at: super_cache_scan+0x94/0x4b0 fs/super.c:196

stack backtrace:
CPU: 1 PID: 87 Comm: kswapd0 Not tainted 6.9.0-rc4-syzkaller-00274-g3b68086599f8 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain+0x18cb/0x58e0 kernel/locking/lockdep.c:3869
 __lock_acquire+0x1346/0x1fd0 kernel/locking/lockdep.c:5137
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
 down_write_nested+0x3d/0x50 kernel/locking/rwsem.c:1695
 xfs_reclaim_inode fs/xfs/xfs_icache.c:945 [inline]
 xfs_icwalk_process_inode fs/xfs/xfs_icache.c:1631 [inline]
 xfs_icwalk_ag+0x120e/0x1ad0 fs/xfs/xfs_icache.c:1713
 xfs_icwalk fs/xfs/xfs_icache.c:1762 [inline]
 xfs_reclaim_inodes_nr+0x257/0x360 fs/xfs/xfs_icache.c:1011
 super_cache_scan+0x40f/0x4b0 fs/super.c:227
 do_shrink_slab+0x705/0x1160 mm/shrinker.c:435
 shrink_slab+0x1092/0x14d0 mm/shrinker.c:662
 shrink_node_memcgs mm/vmscan.c:5875 [inline]
 shrink_node+0x11f5/0x2d60 mm/vmscan.c:5908
 kswapd_shrink_node mm/vmscan.c:6704 [inline]
 balance_pgdat mm/vmscan.c:6895 [inline]
 kswapd+0x1a25/0x30c0 mm/vmscan.c:7164
 kthread+0x2f0/0x390 kernel/kthread.c:388
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244