syzbot


possible deadlock in fsnotify_destroy_marks (2)

Status: moderation: reported on 2024/02/26 07:19
Subsystems: fs
Reported-by: syzbot+1db1c99d9f675fcae3f2@syzkaller.appspotmail.com
First crash: 65d, last: 54d
Similar bugs (1)
Kernel   | Title                                            | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | possible deadlock in fsnotify_destroy_marks (fs) |       |              |            | 1     | 371d | 367d     | 0/26    | auto-obsoleted due to no activity on 2023/07/21 05:20

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.8.0-rc6-syzkaller-00250-g04b8076df253 #0 Not tainted
------------------------------------------------------
kswapd0/109 is trying to acquire lock:
ffff88801e216130 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_group_lock include/linux/fsnotify_backend.h:266 [inline]
ffff88801e216130 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_destroy_mark fs/notify/mark.c:496 [inline]
ffff88801e216130 (&group->mark_mutex){+.+.}-{3:3}, at: fsnotify_destroy_marks+0x149/0x4a0 fs/notify/mark.c:818

but task is already holding lock:
ffffffff8d720440 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x160/0x1a90 mm/vmscan.c:6771

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}-{0:0}:
       __fs_reclaim_acquire mm/page_alloc.c:3692 [inline]
       fs_reclaim_acquire+0x104/0x150 mm/page_alloc.c:3706
       might_alloc include/linux/sched/mm.h:303 [inline]
       slab_pre_alloc_hook mm/slub.c:3761 [inline]
       slab_alloc_node mm/slub.c:3842 [inline]
       kmem_cache_alloc+0x4f/0x320 mm/slub.c:3867
       inotify_new_watch fs/notify/inotify/inotify_user.c:599 [inline]
       inotify_update_watch+0x527/0xc10 fs/notify/inotify/inotify_user.c:647
       __do_sys_inotify_add_watch fs/notify/inotify/inotify_user.c:786 [inline]
       __se_sys_inotify_add_watch fs/notify/inotify/inotify_user.c:729 [inline]
       __x64_sys_inotify_add_watch+0x2e9/0x380 fs/notify/inotify/inotify_user.c:729
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xd5/0x270 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x6f/0x77

-> #0 (&group->mark_mutex){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain kernel/locking/lockdep.c:3869 [inline]
       __lock_acquire+0x244f/0x3b40 kernel/locking/lockdep.c:5137
       lock_acquire kernel/locking/lockdep.c:5754 [inline]
       lock_acquire+0x1ae/0x520 kernel/locking/lockdep.c:5719
       __mutex_lock_common kernel/locking/mutex.c:608 [inline]
       __mutex_lock+0x175/0x9d0 kernel/locking/mutex.c:752
       fsnotify_group_lock include/linux/fsnotify_backend.h:266 [inline]
       fsnotify_destroy_mark fs/notify/mark.c:496 [inline]
       fsnotify_destroy_marks+0x149/0x4a0 fs/notify/mark.c:818
       fsnotify_inoderemove include/linux/fsnotify.h:233 [inline]
       dentry_unlink_inode+0x38f/0x440 fs/dcache.c:396
       __dentry_kill+0x1d0/0x600 fs/dcache.c:603
       shrink_kill fs/dcache.c:1048 [inline]
       shrink_dentry_list+0x140/0x5d0 fs/dcache.c:1075
       prune_dcache_sb+0xeb/0x150 fs/dcache.c:1156
       super_cache_scan+0x32a/0x550 fs/super.c:221
       do_shrink_slab+0x426/0x1120 mm/shrinker.c:435
       shrink_slab_memcg mm/shrinker.c:548 [inline]
       shrink_slab+0xa87/0x1310 mm/shrinker.c:626
       shrink_one+0x493/0x7b0 mm/vmscan.c:4767
       shrink_many mm/vmscan.c:4828 [inline]
       lru_gen_shrink_node mm/vmscan.c:4929 [inline]
       shrink_node+0x21d0/0x3790 mm/vmscan.c:5888
       kswapd_shrink_node mm/vmscan.c:6693 [inline]
       balance_pgdat+0x9d2/0x1a90 mm/vmscan.c:6883
       kswapd+0x5be/0xc00 mm/vmscan.c:7143
       kthread+0x2c6/0x3b0 kernel/kthread.c:388
       ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1b/0x30 arch/x86/entry/entry_64.S:243

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&group->mark_mutex);
                               lock(fs_reclaim);
  lock(&group->mark_mutex);

 *** DEADLOCK ***

2 locks held by kswapd0/109:
 #0: ffffffff8d720440 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x160/0x1a90 mm/vmscan.c:6771
 #1: ffff88801c7f20e0 (&type->s_umount_key#64){++++}-{3:3}, at: super_trylock_shared fs/super.c:561 [inline]
 #1: ffff88801c7f20e0 (&type->s_umount_key#64){++++}-{3:3}, at: super_cache_scan+0x96/0x550 fs/super.c:196

stack backtrace:
CPU: 2 PID: 109 Comm: kswapd0 Not tainted 6.8.0-rc6-syzkaller-00250-g04b8076df253 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd9/0x1b0 lib/dump_stack.c:106
 check_noncircular+0x31b/0x400 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain kernel/locking/lockdep.c:3869 [inline]
 __lock_acquire+0x244f/0x3b40 kernel/locking/lockdep.c:5137
 lock_acquire kernel/locking/lockdep.c:5754 [inline]
 lock_acquire+0x1ae/0x520 kernel/locking/lockdep.c:5719
 __mutex_lock_common kernel/locking/mutex.c:608 [inline]
 __mutex_lock+0x175/0x9d0 kernel/locking/mutex.c:752
 fsnotify_group_lock include/linux/fsnotify_backend.h:266 [inline]
 fsnotify_destroy_mark fs/notify/mark.c:496 [inline]
 fsnotify_destroy_marks+0x149/0x4a0 fs/notify/mark.c:818
 fsnotify_inoderemove include/linux/fsnotify.h:233 [inline]
 dentry_unlink_inode+0x38f/0x440 fs/dcache.c:396
 __dentry_kill+0x1d0/0x600 fs/dcache.c:603
 shrink_kill fs/dcache.c:1048 [inline]
 shrink_dentry_list+0x140/0x5d0 fs/dcache.c:1075
 prune_dcache_sb+0xeb/0x150 fs/dcache.c:1156
 super_cache_scan+0x32a/0x550 fs/super.c:221
 do_shrink_slab+0x426/0x1120 mm/shrinker.c:435
 shrink_slab_memcg mm/shrinker.c:548 [inline]
 shrink_slab+0xa87/0x1310 mm/shrinker.c:626
 shrink_one+0x493/0x7b0 mm/vmscan.c:4767
 shrink_many mm/vmscan.c:4828 [inline]
 lru_gen_shrink_node mm/vmscan.c:4929 [inline]
 shrink_node+0x21d0/0x3790 mm/vmscan.c:5888
 kswapd_shrink_node mm/vmscan.c:6693 [inline]
 balance_pgdat+0x9d2/0x1a90 mm/vmscan.c:6883
 kswapd+0x5be/0xc00 mm/vmscan.c:7143
 kthread+0x2c6/0x3b0 kernel/kthread.c:388
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1b/0x30 arch/x86/entry/entry_64.S:243
 </TASK>
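
The scenario box above summarizes the cycle: the inotify_add_watch() path (stack #1) allocates with GFP_KERNEL while holding group->mark_mutex, and that allocation may enter filesystem reclaim, i.e. acquire the fs_reclaim pseudo-lock; kswapd (stack #0) is already inside fs_reclaim when dentry pruning reaches fsnotify_destroy_marks(), which wants the same mark_mutex. Stripped of the kernel machinery this is a plain lock-order inversion. The sketch below is a user-space analogue of that shape only, not a reproducer for this report; the thread and lock names are invented, and the comments map each lock back to the report.

/*
 * Minimal user-space analogue of the inversion lockdep flags above.
 * Illustration only: reclaim_lock stands in for the fs_reclaim
 * pseudo-lock and mark_lock for &group->mark_mutex.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t reclaim_lock = PTHREAD_MUTEX_INITIALIZER; /* fs_reclaim */
static pthread_mutex_t mark_lock = PTHREAD_MUTEX_INITIALIZER;    /* &group->mark_mutex */

static void *add_watch_path(void *arg)       /* CPU1 in the scenario box */
{
	(void)arg;
	pthread_mutex_lock(&mark_lock);      /* fsnotify_group_lock() */
	usleep(1000);                        /* widen the race window */
	pthread_mutex_lock(&reclaim_lock);   /* GFP_KERNEL allocation enters reclaim */
	pthread_mutex_unlock(&reclaim_lock);
	pthread_mutex_unlock(&mark_lock);
	return NULL;
}

static void *kswapd_path(void *arg)          /* CPU0 in the scenario box */
{
	(void)arg;
	pthread_mutex_lock(&reclaim_lock);   /* balance_pgdat() under fs_reclaim */
	usleep(1000);
	pthread_mutex_lock(&mark_lock);      /* fsnotify_destroy_marks() */
	pthread_mutex_unlock(&mark_lock);
	pthread_mutex_unlock(&reclaim_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, add_watch_path, NULL);
	pthread_create(&b, NULL, kswapd_path, NULL);
	pthread_join(a, NULL);               /* with the sleeps above, these joins */
	pthread_join(b, NULL);               /* typically never return: deadlock   */
	puts("no deadlock this interleaving");
	return 0;
}

Built with gcc -pthread, this usually hangs on the joins, which is exactly the cycle the report warns about before it can happen for real.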

Crashes (2):
Time             | Kernel   | Commit       | Syzkaller | Config  | Log         | Report | Syz repro | C repro | VM info | Assets                                           | Manager              | Title
2024/03/03 13:16 | upstream | 04b8076df253 | 25905f5d  | .config | console log | report |           |         | info    | disk image (non-bootable), vmlinux, kernel image | ci-qemu-upstream-386 | possible deadlock in fsnotify_destroy_marks
2024/02/22 07:11 | upstream | 39133352cbed | 345111b5  | .config | console log | report |           |         | info    | disk image (non-bootable), vmlinux, kernel image | ci-qemu-upstream-386 | possible deadlock in fsnotify_destroy_marks
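
The usual kernel-side way to break this class of cycle is to keep allocations made under a lock that reclaim also takes from recursing into filesystem reclaim, either by passing GFP_NOFS or by wrapping the critical section in memalloc_nofs_save()/memalloc_nofs_restore(). The sketch below shows that scoped-NOFS pattern in isolation; it is an illustration under assumptions, not the actual fsnotify code or the fix for this report, and my_group, my_mark and add_mark_nofs() are invented names.

/*
 * Sketch of the scoped-NOFS pattern; my_group and my_mark are made-up
 * stand-ins, not fsnotify structures.
 */
#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/sched/mm.h>
#include <linux/slab.h>

struct my_group {                       /* hypothetical stand-in for a notification group */
	struct mutex mark_mutex;
};

struct my_mark {                        /* hypothetical stand-in for a mark object */
	struct my_group *group;
};

static int add_mark_nofs(struct my_group *group, struct kmem_cache *cache)
{
	struct my_mark *mark;
	unsigned int nofs;

	mutex_lock(&group->mark_mutex);

	/*
	 * Allocations in this scope behave as if they were GFP_NOFS, so
	 * reclaim triggered here skips filesystem shrinkers and cannot
	 * walk back into code that needs mark_mutex.
	 */
	nofs = memalloc_nofs_save();
	mark = kmem_cache_alloc(cache, GFP_KERNEL);
	memalloc_nofs_restore(nofs);

	if (!mark) {
		mutex_unlock(&group->mark_mutex);
		return -ENOMEM;
	}

	mark->group = group;
	/* ... attach the mark here ... */

	mutex_unlock(&group->mark_mutex);
	return 0;
}

Within such a NOFS scope, reclaim entered by the allocation skips filesystem shrinkers (super_cache_scan() bails out when __GFP_FS is not set), so the fs_reclaim -> mark_mutex edge from the report cannot be taken while mark_mutex is already held. Passing GFP_NOFS directly to the allocation has the same effect; the scope API just ties the constraint to the locking rule rather than to each call site.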