syzbot


possible deadlock in fsnotify_destroy_marks

Status: auto-obsoleted due to no activity on 2023/07/21 05:20
Subsystems: fs
Reported-by: syzbot+3923629753e45a250a8c@syzkaller.appspotmail.com
First crash: 367d, last: 367d
Similar bugs (1)
Kernel: upstream | Title: possible deadlock in fsnotify_destroy_marks (2) [fs] | Count: 2 | Last: 51d | Reported: 57d | Patched: 0/26 | Status: moderation: reported on 2024/02/26 07:19

Sample crash report:
WARNING: possible circular locking dependency detected
6.3.0-rc7-syzkaller-00181-g8e41e0a57566 #0 Not tainted
------------------------------------------------------
kswapd0/109 is trying to acquire lock:
ffff888029196930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_group_lock include/linux/fsnotify_backend.h:266 [inline]
ffff888029196930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_destroy_mark fs/notify/mark.c:496 [inline]
ffff888029196930 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_destroy_marks+0x16a/0x4b0 fs/notify/mark.c:854

but task is already holding lock:
ffffffff8c8e11c0 (fs_reclaim){+.+.}-{0:0}, at: set_task_reclaim_state mm/vmscan.c:200 [inline]
ffffffff8c8e11c0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x170/0x1ac0 mm/vmscan.c:7338

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}-{0:0}:
       __fs_reclaim_acquire mm/page_alloc.c:4717 [inline]
       fs_reclaim_acquire+0x11d/0x160 mm/page_alloc.c:4731
       might_alloc include/linux/sched/mm.h:271 [inline]
       slab_pre_alloc_hook mm/slab.h:728 [inline]
       slab_alloc_node mm/slab.c:3241 [inline]
       slab_alloc mm/slab.c:3266 [inline]
       __kmem_cache_alloc_lru mm/slab.c:3443 [inline]
       kmem_cache_alloc+0x3d/0x3f0 mm/slab.c:3452
       inotify_new_watch fs/notify/inotify/inotify_user.c:600 [inline]
       inotify_update_watch+0x530/0xc30 fs/notify/inotify/inotify_user.c:648
       __do_sys_inotify_add_watch fs/notify/inotify/inotify_user.c:787 [inline]
       __se_sys_inotify_add_watch fs/notify/inotify/inotify_user.c:730 [inline]
       __x64_sys_inotify_add_watch+0x2bf/0x350 fs/notify/inotify/inotify_user.c:730
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&group->mark_mutex){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3098 [inline]
       check_prevs_add kernel/locking/lockdep.c:3217 [inline]
       validate_chain kernel/locking/lockdep.c:3832 [inline]
       __lock_acquire+0x2ec7/0x5d40 kernel/locking/lockdep.c:5056
       lock_acquire kernel/locking/lockdep.c:5669 [inline]
       lock_acquire+0x1af/0x520 kernel/locking/lockdep.c:5634
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x12f/0x1350 kernel/locking/mutex.c:747
       fsnotify_group_lock include/linux/fsnotify_backend.h:266 [inline]
       fsnotify_destroy_mark fs/notify/mark.c:496 [inline]
       fsnotify_destroy_marks+0x16a/0x4b0 fs/notify/mark.c:854
       fsnotify_inoderemove include/linux/fsnotify.h:193 [inline]
       dentry_unlink_inode+0x3aa/0x460 fs/dcache.c:397
       __dentry_kill+0x3c0/0x640 fs/dcache.c:607
       shrink_dentry_list+0x12c/0x4f0 fs/dcache.c:1201
       prune_dcache_sb+0xeb/0x150 fs/dcache.c:1282
       super_cache_scan+0x33a/0x590 fs/super.c:104
       do_shrink_slab+0x428/0xaa0 mm/vmscan.c:853
       shrink_slab_memcg mm/vmscan.c:922 [inline]
       shrink_slab+0x388/0x660 mm/vmscan.c:1001
       shrink_one+0x502/0x810 mm/vmscan.c:5343
       shrink_many mm/vmscan.c:5394 [inline]
       lru_gen_shrink_node mm/vmscan.c:5511 [inline]
       shrink_node+0x2064/0x35f0 mm/vmscan.c:6459
       kswapd_shrink_node mm/vmscan.c:7262 [inline]
       balance_pgdat+0xa02/0x1ac0 mm/vmscan.c:7452
       kswapd+0x677/0xd60 mm/vmscan.c:7712
       kthread+0x2e8/0x3a0 kernel/kthread.c:376
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&group->mark_mutex);
                               lock(fs_reclaim);
  lock(&group->mark_mutex);

 *** DEADLOCK ***
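
Read bottom to top, chain #1 records the ordering &group->mark_mutex -> fs_reclaim (inotify_add_watch() performs a reclaim-capable slab allocation while holding the group lock), and chain #0 records the reverse ordering fs_reclaim -> &group->mark_mutex (kswapd, already inside reclaim, prunes dentries and tears down the fsnotify marks on an evicted inode). The following is a minimal userspace model of that AB-BA inversion, offered only as an illustration: the two pthread mutexes stand in for fs_reclaim and &group->mark_mutex, nothing here is kernel code, and with unlucky timing the program hangs in exactly the way the scenario above predicts.

/* Illustrative model of the reported inversion (not kernel code).
 * Build with: cc -pthread model.c
 * thread A mimics kswapd (fs_reclaim -> mark_mutex); thread B mimics
 * inotify_add_watch() allocating under mark_mutex (mark_mutex -> fs_reclaim).
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t fs_reclaim = PTHREAD_MUTEX_INITIALIZER; /* stands in for fs_reclaim */
static pthread_mutex_t mark_mutex = PTHREAD_MUTEX_INITIALIZER; /* stands in for &group->mark_mutex */

static void *kswapd_like(void *arg)
{
	pthread_mutex_lock(&fs_reclaim);   /* balance_pgdat(): enter reclaim */
	usleep(1000);                      /* widen the race window */
	pthread_mutex_lock(&mark_mutex);   /* fsnotify_destroy_marks() on an evicted inode */
	pthread_mutex_unlock(&mark_mutex);
	pthread_mutex_unlock(&fs_reclaim);
	return NULL;
}

static void *inotify_like(void *arg)
{
	pthread_mutex_lock(&mark_mutex);   /* fsnotify_group_lock() in inotify_update_watch() */
	usleep(1000);
	pthread_mutex_lock(&fs_reclaim);   /* allocation recursing into direct reclaim */
	pthread_mutex_unlock(&fs_reclaim);
	pthread_mutex_unlock(&mark_mutex);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, kswapd_like, NULL);
	pthread_create(&b, NULL, inotify_like, NULL);
	pthread_join(a, NULL);             /* never returns if the cycle closes */
	pthread_join(b, NULL);
	puts("no deadlock this run");
	return 0;
}

Note that lockdep reports the circular dependency as soon as both orderings have been observed, even if the actual hang never materializes, which is why the report below shows a warning rather than a stuck machine.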

3 locks held by kswapd0/109:
 #0: ffffffff8c8e11c0 (fs_reclaim){+.+.}-{0:0}, at: set_task_reclaim_state mm/vmscan.c:200 [inline]
 #0: ffffffff8c8e11c0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x170/0x1ac0 mm/vmscan.c:7338
 #1: ffffffff8c897f30 (shrinker_rwsem){++++}-{3:3}, at: shrink_slab_memcg mm/vmscan.c:895 [inline]
 #1: ffffffff8c897f30 (shrinker_rwsem){++++}-{3:3}, at: shrink_slab+0x2a0/0x660 mm/vmscan.c:1001
 #2: ffff8880429940e0 (&type->s_umount_key#50){++++}-{3:3}, at: trylock_super fs/super.c:414 [inline]
 #2: ffff8880429940e0 (&type->s_umount_key#50){++++}-{3:3}, at: super_cache_scan+0x70/0x590 fs/super.c:79

stack backtrace:
CPU: 1 PID: 109 Comm: kswapd0 Not tainted 6.3.0-rc7-syzkaller-00181-g8e41e0a57566 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.14.0-2 04/01/2014
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd9/0x150 lib/dump_stack.c:106
 check_noncircular+0x25f/0x2e0 kernel/locking/lockdep.c:2178
 check_prev_add kernel/locking/lockdep.c:3098 [inline]
 check_prevs_add kernel/locking/lockdep.c:3217 [inline]
 validate_chain kernel/locking/lockdep.c:3832 [inline]
 __lock_acquire+0x2ec7/0x5d40 kernel/locking/lockdep.c:5056
 lock_acquire kernel/locking/lockdep.c:5669 [inline]
 lock_acquire+0x1af/0x520 kernel/locking/lockdep.c:5634
 __mutex_lock_common kernel/locking/mutex.c:603 [inline]
 __mutex_lock+0x12f/0x1350 kernel/locking/mutex.c:747
 fsnotify_group_lock include/linux/fsnotify_backend.h:266 [inline]
 fsnotify_destroy_mark fs/notify/mark.c:496 [inline]
 fsnotify_destroy_marks+0x16a/0x4b0 fs/notify/mark.c:854
 fsnotify_inoderemove include/linux/fsnotify.h:193 [inline]
 dentry_unlink_inode+0x3aa/0x460 fs/dcache.c:397
 __dentry_kill+0x3c0/0x640 fs/dcache.c:607
 shrink_dentry_list+0x12c/0x4f0 fs/dcache.c:1201
 prune_dcache_sb+0xeb/0x150 fs/dcache.c:1282
 super_cache_scan+0x33a/0x590 fs/super.c:104
 do_shrink_slab+0x428/0xaa0 mm/vmscan.c:853
 shrink_slab_memcg mm/vmscan.c:922 [inline]
 shrink_slab+0x388/0x660 mm/vmscan.c:1001
 shrink_one+0x502/0x810 mm/vmscan.c:5343
 shrink_many mm/vmscan.c:5394 [inline]
 lru_gen_shrink_node mm/vmscan.c:5511 [inline]
 shrink_node+0x2064/0x35f0 mm/vmscan.c:6459
 kswapd_shrink_node mm/vmscan.c:7262 [inline]
 balance_pgdat+0xa02/0x1ac0 mm/vmscan.c:7452
 kswapd+0x677/0xd60 mm/vmscan.c:7712
 kthread+0x2e8/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 </TASK>
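
No syzkaller or C reproducer was recorded for this crash; it was seen once, triggered by kswapd under memory pressure rather than by a specific syscall sequence. For orientation only, the sketch below shows the two ordinary user-visible operations whose kernel paths appear in the chains: adding an inotify watch (chain #1, mark allocation under &group->mark_mutex) and dropping the watched inode (chain #0, fsnotify_destroy_marks() during inode teardown; in the report this happened from the dentry shrinker, not from unlink()). The file path is a placeholder and this is not a reproducer.

/* Illustration of the operations behind the two lock chains; NOT a
 * reproducer. The report was triggered by kswapd reclaiming dentries,
 * not by this sequence alone. The path below is a placeholder.
 */
#include <sys/inotify.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/tmp/fsnotify-demo";   /* placeholder path */
	int fd, ifd, wd;

	fd = open(path, O_CREAT | O_RDWR, 0600);
	if (fd < 0) { perror("open"); return 1; }
	close(fd);

	ifd = inotify_init1(IN_CLOEXEC);
	if (ifd < 0) { perror("inotify_init1"); return 1; }

	/* Kernel side: inotify_update_watch() takes group->mark_mutex and
	 * allocates the mark with a reclaim-capable GFP mask (chain #1). */
	wd = inotify_add_watch(ifd, path, IN_ATTRIB | IN_DELETE_SELF);
	if (wd < 0) { perror("inotify_add_watch"); return 1; }

	/* Kernel side: once the last reference to the inode goes away,
	 * fsnotify_inoderemove() -> fsnotify_destroy_marks() takes
	 * group->mark_mutex again; in the report this ran under reclaim
	 * (chain #0). */
	unlink(path);

	close(ifd);
	return 0;
}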

Crashes (1):
Time: 2023/04/22 05:19 | Kernel: upstream | Commit: 8e41e0a57566 | Syzkaller: 2b32bd34 | Manager: ci-qemu-upstream | Title: possible deadlock in fsnotify_destroy_marks