======================================================
WARNING: possible circular locking dependency detected
6.11.0-syzkaller-07341-gbaeb9a7d8b60 #0 Not tainted
------------------------------------------------------
kswapd0/79 is trying to acquire lock:
ffff888000b4e930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
ffff888000b4e930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578

but task is already holding lock:
ffffffff8ea36960 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6821 [inline]
ffffffff8ea36960 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xbf1/0x3720 mm/vmscan.c:7203

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}-{0:0}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5822
       __fs_reclaim_acquire mm/page_alloc.c:3825 [inline]
       fs_reclaim_acquire+0x88/0x140 mm/page_alloc.c:3839
       might_alloc include/linux/sched/mm.h:334 [inline]
       slab_pre_alloc_hook mm/slub.c:4037 [inline]
       slab_alloc_node mm/slub.c:4115 [inline]
       kmem_cache_alloc_noprof+0x3d/0x2a0 mm/slub.c:4142
       inotify_new_watch fs/notify/inotify/inotify_user.c:599 [inline]
       inotify_update_watch fs/notify/inotify/inotify_user.c:647 [inline]
       __do_sys_inotify_add_watch fs/notify/inotify/inotify_user.c:786 [inline]
       __se_sys_inotify_add_watch+0x728/0x1060 fs/notify/inotify/inotify_user.c:729
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&group->mark_mutex){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3158 [inline]
       check_prevs_add kernel/locking/lockdep.c:3277 [inline]
       validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3901
       __lock_acquire+0x1384/0x2050 kernel/locking/lockdep.c:5199
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5822
       __mutex_lock_common kernel/locking/mutex.c:608 [inline]
       __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
       fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
       fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578
       fsnotify_destroy_marks+0x14a/0x660 fs/notify/mark.c:934
       fsnotify_inoderemove include/linux/fsnotify.h:264 [inline]
       dentry_unlink_inode+0x2e0/0x430 fs/dcache.c:408
       __dentry_kill+0x20d/0x630 fs/dcache.c:615
       shrink_kill+0xa9/0x2c0 fs/dcache.c:1060
       shrink_dentry_list+0x2c0/0x5b0 fs/dcache.c:1087
       prune_dcache_sb+0x10f/0x180 fs/dcache.c:1168
       super_cache_scan+0x34f/0x4b0 fs/super.c:221
       do_shrink_slab+0x701/0x1160 mm/shrinker.c:435
       shrink_slab+0x1093/0x14d0 mm/shrinker.c:662
       shrink_one+0x43b/0x850 mm/vmscan.c:4795
       shrink_many mm/vmscan.c:4856 [inline]
       lru_gen_shrink_node mm/vmscan.c:4934 [inline]
       shrink_node+0x3799/0x3de0 mm/vmscan.c:5914
       kswapd_shrink_node mm/vmscan.c:6742 [inline]
       balance_pgdat mm/vmscan.c:6934 [inline]
       kswapd+0x1cbc/0x3720 mm/vmscan.c:7203
       kthread+0x2f0/0x390 kernel/kthread.c:389
       ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&group->mark_mutex);
                               lock(fs_reclaim);
  lock(&group->mark_mutex);

 *** DEADLOCK ***

2 locks held by kswapd0/79:
 #0: ffffffff8ea36960 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6821 [inline]
 #0: ffffffff8ea36960 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xbf1/0x3720 mm/vmscan.c:7203
 #1: ffff88804b33c0e0 (&type->s_umount_key#47){.+.+}-{3:3}, at: super_trylock_shared fs/super.c:562 [inline]
 #1: ffff88804b33c0e0 (&type->s_umount_key#47){.+.+}-{3:3}, at: super_cache_scan+0x94/0x4b0 fs/super.c:196

stack backtrace:
CPU: 0 UID: 0 PID: 79 Comm: kswapd0 Not tainted 6.11.0-syzkaller-07341-gbaeb9a7d8b60 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
 __dump_stack lib/dump_stack.c:93 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:119
 print_circular_bug+0x13a/0x1b0 kernel/locking/lockdep.c:2074
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2203
 check_prev_add kernel/locking/lockdep.c:3158 [inline]
 check_prevs_add kernel/locking/lockdep.c:3277 [inline]
 validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3901
 __lock_acquire+0x1384/0x2050 kernel/locking/lockdep.c:5199
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5822
 __mutex_lock_common kernel/locking/mutex.c:608 [inline]
 __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
 fsnotify_group_lock include/linux/fsnotify_backend.h:270 [inline]
 fsnotify_destroy_mark+0x38/0x3c0 fs/notify/mark.c:578
 fsnotify_destroy_marks+0x14a/0x660 fs/notify/mark.c:934
 fsnotify_inoderemove include/linux/fsnotify.h:264 [inline]
 dentry_unlink_inode+0x2e0/0x430 fs/dcache.c:408
 __dentry_kill+0x20d/0x630 fs/dcache.c:615
 shrink_kill+0xa9/0x2c0 fs/dcache.c:1060
 shrink_dentry_list+0x2c0/0x5b0 fs/dcache.c:1087
 prune_dcache_sb+0x10f/0x180 fs/dcache.c:1168
 super_cache_scan+0x34f/0x4b0 fs/super.c:221
 do_shrink_slab+0x701/0x1160 mm/shrinker.c:435
 shrink_slab+0x1093/0x14d0 mm/shrinker.c:662
 shrink_one+0x43b/0x850 mm/vmscan.c:4795
 shrink_many mm/vmscan.c:4856 [inline]
 lru_gen_shrink_node mm/vmscan.c:4934 [inline]
 shrink_node+0x3799/0x3de0 mm/vmscan.c:5914
 kswapd_shrink_node mm/vmscan.c:6742 [inline]
 balance_pgdat mm/vmscan.c:6934 [inline]
 kswapd+0x1cbc/0x3720 mm/vmscan.c:7203
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
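Reading the two chains together: the inotify_add_watch() path (chain #1) holds group->mark_mutex and then performs a GFP_KERNEL slab allocation, which may enter fs_reclaim; kswapd (chain #0) holds fs_reclaim and, while evicting inodes through the superblock shrinker, calls fsnotify_destroy_mark(), which takes group->mark_mutex. Breaking the cycle means ensuring that no allocation made under mark_mutex can recurse into filesystem reclaim. Below is a minimal sketch of that class of mitigation, not the actual upstream fix: memalloc_nofs_save()/memalloc_nofs_restore() are the real scoped-allocation helpers from include/linux/sched/mm.h, while alloc_watch_locked() is a hypothetical wrapper standing in for the allocation done in inotify_new_watch().

/*
 * Hypothetical sketch: perform the mark allocation under group->mark_mutex
 * inside a NOFS scope, so any reclaim it triggers skips filesystem
 * shrinkers and cannot re-enter fsnotify_destroy_mark() -> mark_mutex.
 */
#include <linux/fsnotify_backend.h>
#include <linux/sched/mm.h>
#include <linux/slab.h>

static void *alloc_watch_locked(struct fsnotify_group *group,
				struct kmem_cache *cachep)
{
	unsigned int nofs_flags;
	void *mark;

	/* Caller must hold the mutex that also sits in the reclaim path. */
	lockdep_assert_held(&group->mark_mutex);

	/* Every allocation in this scope behaves as if GFP_NOFS were passed. */
	nofs_flags = memalloc_nofs_save();
	mark = kmem_cache_alloc(cachep, GFP_KERNEL);
	memalloc_nofs_restore(nofs_flags);

	return mark;
}

An equivalent point fix is to pass GFP_NOFS to the single allocation, but the scoped API is generally preferred since it also covers allocations made by callees. Note that fsnotify_group_lock() itself (the very frame in the trace, include/linux/fsnotify_backend.h:270) can enter such a NOFS scope for groups flagged for it, so a fix in this class amounts to giving the inotify group that protection.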