syzbot


possible deadlock in fs_reclaim_acquire (4)

Status: auto-obsoleted due to no activity on 2023/06/19 22:26
Subsystems: reiserfs
Reported-by: syzbot+9b9bbfea68487857fb9c@syzkaller.appspotmail.com
First crash: 472d, last: 393d
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] possible deadlock in fs_reclaim_acquire (4) | 0 (1) | 2022/12/06 16:13
Similar bugs (3)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | possible deadlock in fs_reclaim_acquire (3) [kernel] | C | error | - | 10538 | 959d | 989d | 0/26 | closed as dup on 2021/07/07 11:37
upstream | possible deadlock in fs_reclaim_acquire [perf] | - | - | - | 1 | 1899d | 1898d | 0/26 | auto-closed as invalid on 2019/07/04 21:33
upstream | possible deadlock in fs_reclaim_acquire (2) [ext4] | - | - | - | 3 | 1116d | 1132d | 0/26 | auto-closed as invalid on 2021/06/26 23:23

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.2.0-rc6-syzkaller-00050-g9f266ccaa2f5 #0 Not tainted
------------------------------------------------------
kthreadd/2 is trying to acquire lock:
ffff8880253fd090 (&sbi->lock){+.+.}-{3:3}, at: reiserfs_write_lock+0x79/0x100 fs/reiserfs/lock.c:27

but task is already holding lock:
ffffffff8c8cef00 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4747 [inline]
ffffffff8c8cef00 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4772 [inline]
ffffffff8c8cef00 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x808/0x23d0 mm/page_alloc.c:5178

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}-{0:0}:
       __fs_reclaim_acquire mm/page_alloc.c:4674 [inline]
       fs_reclaim_acquire+0x11d/0x160 mm/page_alloc.c:4688
       might_alloc include/linux/sched/mm.h:271 [inline]
       slab_pre_alloc_hook mm/slab.h:720 [inline]
       slab_alloc_node mm/slab.c:3247 [inline]
       slab_alloc mm/slab.c:3272 [inline]
       __kmem_cache_alloc_lru mm/slab.c:3449 [inline]
       kmem_cache_alloc_lru+0x3e/0x7b0 mm/slab.c:3465
       __d_alloc+0x32/0x980 fs/dcache.c:1769
       d_alloc_anon fs/dcache.c:1868 [inline]
       d_make_root+0x49/0x110 fs/dcache.c:2069
       reiserfs_fill_super+0x12eb/0x2e90 fs/reiserfs/super.c:2083
       mount_bdev+0x351/0x410 fs/super.c:1359
       legacy_get_tree+0x109/0x220 fs/fs_context.c:610
       vfs_get_tree+0x8d/0x2f0 fs/super.c:1489
       do_new_mount fs/namespace.c:3145 [inline]
       path_mount+0x132a/0x1e20 fs/namespace.c:3475
       do_mount fs/namespace.c:3488 [inline]
       __do_sys_mount fs/namespace.c:3697 [inline]
       __se_sys_mount fs/namespace.c:3674 [inline]
       __x64_sys_mount+0x283/0x300 fs/namespace.c:3674
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd
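
[Editor's note] fs_reclaim is not a real lock but a lockdep pseudo-lock: might_alloc() briefly "acquires" and releases it for every allocation whose gfp mask allows direct reclaim to recurse into filesystems. In the chain above, reiserfs_fill_super() holds &sbi->lock (taken via reiserfs_write_lock()) across the GFP_KERNEL slab allocation in d_make_root(), so lockdep records the &sbi->lock -> fs_reclaim dependency here. For reference, might_alloc() is roughly the following, paraphrased from include/linux/sched/mm.h as cited in the trace (the exact body may differ between kernel versions):

static inline void might_alloc(gfp_t gfp_mask)
{
	fs_reclaim_acquire(gfp_mask);	/* records "each held lock -> fs_reclaim" */
	fs_reclaim_release(gfp_mask);	/* dropped immediately; only the edge matters */
	might_sleep_if(gfpflags_allow_blocking(gfp_mask));
}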

-> #0 (&sbi->lock){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3097 [inline]
       check_prevs_add kernel/locking/lockdep.c:3216 [inline]
       validate_chain kernel/locking/lockdep.c:3831 [inline]
       __lock_acquire+0x2a43/0x56d0 kernel/locking/lockdep.c:5055
       lock_acquire kernel/locking/lockdep.c:5668 [inline]
       lock_acquire+0x1e3/0x630 kernel/locking/lockdep.c:5633
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x12f/0x1360 kernel/locking/mutex.c:747
       reiserfs_write_lock+0x79/0x100 fs/reiserfs/lock.c:27
       reiserfs_evict_inode+0x30b/0x540 fs/reiserfs/inode.c:55
       evict+0x2ed/0x6b0 fs/inode.c:664
       iput_final fs/inode.c:1747 [inline]
       iput.part.0+0x59b/0x880 fs/inode.c:1773
       iput+0x5c/0x80 fs/inode.c:1763
       dentry_unlink_inode+0x2b1/0x460 fs/dcache.c:401
       __dentry_kill+0x3c0/0x640 fs/dcache.c:607
       shrink_dentry_list+0x240/0x800 fs/dcache.c:1201
       prune_dcache_sb+0xeb/0x150 fs/dcache.c:1282
       super_cache_scan+0x33a/0x590 fs/super.c:104
       do_shrink_slab+0x464/0xce0 mm/vmscan.c:843
       shrink_slab_memcg mm/vmscan.c:912 [inline]
       shrink_slab+0x388/0x660 mm/vmscan.c:991
       shrink_node_memcgs mm/vmscan.c:6140 [inline]
       shrink_node+0x95d/0x1f40 mm/vmscan.c:6169
       shrink_zones mm/vmscan.c:6407 [inline]
       do_try_to_free_pages+0x3b4/0x17a0 mm/vmscan.c:6469
       try_to_free_pages+0x2e5/0x960 mm/vmscan.c:6704
       __perform_reclaim mm/page_alloc.c:4750 [inline]
       __alloc_pages_direct_reclaim mm/page_alloc.c:4772 [inline]
       __alloc_pages_slowpath.constprop.0+0x8b6/0x23d0 mm/page_alloc.c:5178
       __alloc_pages+0x4aa/0x5b0 mm/page_alloc.c:5562
       __alloc_pages_node include/linux/gfp.h:237 [inline]
       kmem_getpages mm/slab.c:1363 [inline]
       cache_grow_begin+0x94/0x390 mm/slab.c:2576
       fallback_alloc+0x1e2/0x2d0 mm/slab.c:3119
       __do_cache_alloc mm/slab.c:3223 [inline]
       slab_alloc_node mm/slab.c:3256 [inline]
       kmem_cache_alloc_node+0x181/0x590 mm/slab.c:3534
       alloc_task_struct_node kernel/fork.c:171 [inline]
       dup_task_struct kernel/fork.c:979 [inline]
       copy_process+0x3aa/0x7520 kernel/fork.c:2097
       kernel_clone+0xeb/0x990 kernel/fork.c:2681
       kernel_thread+0xb9/0xf0 kernel/fork.c:2741
       create_kthread kernel/kthread.c:399 [inline]
       kthreadd+0x4f2/0x750 kernel/kthread.c:746
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
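
[Editor's note] This side closes the cycle: kthreadd's GFP_KERNEL allocation of a task_struct entered direct reclaim while holding the fs_reclaim pseudo-lock, the superblock shrinker began evicting reiserfs dentries and inodes, and reiserfs_evict_inode() then needed &sbi->lock, giving the inverse fs_reclaim -> &sbi->lock edge. The conventional way for a filesystem to avoid contributing the mount-side edge is to make allocations issued under its own locks behave as GFP_NOFS via the documented memalloc_nofs_save()/memalloc_nofs_restore() scoped-gfp API, so that reclaim cannot re-enter the filesystem. A minimal sketch follows; the helper is hypothetical and is not the actual reiserfs fix:

#include <linux/dcache.h>
#include <linux/sched/mm.h>

/*
 * Hypothetical helper: allocate the root dentry while a filesystem
 * lock (here reiserfs's sbi->lock) is assumed held by the caller.
 * Inside the NOFS section, fs_reclaim_acquire() sees __GFP_FS
 * stripped from the mask, so no sbi->lock -> fs_reclaim edge is
 * recorded and direct reclaim will not call back into filesystems.
 */
static struct dentry *d_make_root_nofs(struct inode *root_inode)
{
	unsigned int nofs_flags;
	struct dentry *root;

	nofs_flags = memalloc_nofs_save();	/* allocations now imply GFP_NOFS */
	root = d_make_root(root_inode);
	memalloc_nofs_restore(nofs_flags);
	return root;
}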

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&sbi->lock);
                               lock(fs_reclaim);
  lock(&sbi->lock);

 *** DEADLOCK ***
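
[Editor's note] The scenario above is the classic AB-BA lock inversion, with fs_reclaim as A and &sbi->lock as B. A self-contained userspace C analogue (illustrative only, not kernel code; the thread names mirror the two paths in this report) deadlocks the same way when built with -pthread:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;	/* stands in for fs_reclaim */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;	/* stands in for &sbi->lock */

static void *reclaim_path(void *arg)	/* CPU0: kthreadd entering direct reclaim */
{
	pthread_mutex_lock(&lock_a);	/* lock(fs_reclaim) */
	sleep(1);			/* widen the race window */
	pthread_mutex_lock(&lock_b);	/* evict -> reiserfs_write_lock(): blocks */
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

static void *mount_path(void *arg)	/* CPU1: reiserfs_fill_super() */
{
	pthread_mutex_lock(&lock_b);	/* lock(&sbi->lock) */
	sleep(1);
	pthread_mutex_lock(&lock_a);	/* GFP_KERNEL allocation -> reclaim: blocks */
	pthread_mutex_unlock(&lock_a);
	pthread_mutex_unlock(&lock_b);
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, reclaim_path, NULL);
	pthread_create(&t1, NULL, mount_path, NULL);
	pthread_join(t0, NULL);		/* never returns once the cycle closes */
	pthread_join(t1, NULL);
	puts("no deadlock this run");
	return 0;
}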

3 locks held by kthreadd/2:
 #0: ffffffff8c8cef00 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4747 [inline]
 #0: ffffffff8c8cef00 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4772 [inline]
 #0: ffffffff8c8cef00 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x808/0x23d0 mm/page_alloc.c:5178
 #1: ffffffff8c88caf0 (shrinker_rwsem){++++}-{3:3}, at: shrink_slab_memcg mm/vmscan.c:885 [inline]
 #1: ffffffff8c88caf0 (shrinker_rwsem){++++}-{3:3}, at: shrink_slab+0x2a0/0x660 mm/vmscan.c:991
 #2: ffff88801ade80e0 (&type->s_umount_key#68){.+.+}-{3:3}, at: trylock_super fs/super.c:415 [inline]
 #2: ffff88801ade80e0 (&type->s_umount_key#68){.+.+}-{3:3}, at: super_cache_scan+0x70/0x590 fs/super.c:79

stack backtrace:
CPU: 1 PID: 2 Comm: kthreadd Not tainted 6.2.0-rc6-syzkaller-00050-g9f266ccaa2f5 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.14.0-2 04/01/2014
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd1/0x138 lib/dump_stack.c:106
 check_noncircular+0x25f/0x2e0 kernel/locking/lockdep.c:2177
 check_prev_add kernel/locking/lockdep.c:3097 [inline]
 check_prevs_add kernel/locking/lockdep.c:3216 [inline]
 validate_chain kernel/locking/lockdep.c:3831 [inline]
 __lock_acquire+0x2a43/0x56d0 kernel/locking/lockdep.c:5055
 lock_acquire kernel/locking/lockdep.c:5668 [inline]
 lock_acquire+0x1e3/0x630 kernel/locking/lockdep.c:5633
 __mutex_lock_common kernel/locking/mutex.c:603 [inline]
 __mutex_lock+0x12f/0x1360 kernel/locking/mutex.c:747
 reiserfs_write_lock+0x79/0x100 fs/reiserfs/lock.c:27
 reiserfs_evict_inode+0x30b/0x540 fs/reiserfs/inode.c:55
 evict+0x2ed/0x6b0 fs/inode.c:664
 iput_final fs/inode.c:1747 [inline]
 iput.part.0+0x59b/0x880 fs/inode.c:1773
 iput+0x5c/0x80 fs/inode.c:1763
 dentry_unlink_inode+0x2b1/0x460 fs/dcache.c:401
 __dentry_kill+0x3c0/0x640 fs/dcache.c:607
 shrink_dentry_list+0x240/0x800 fs/dcache.c:1201
 prune_dcache_sb+0xeb/0x150 fs/dcache.c:1282
 super_cache_scan+0x33a/0x590 fs/super.c:104
 do_shrink_slab+0x464/0xce0 mm/vmscan.c:843
 shrink_slab_memcg mm/vmscan.c:912 [inline]
 shrink_slab+0x388/0x660 mm/vmscan.c:991
 shrink_node_memcgs mm/vmscan.c:6140 [inline]
 shrink_node+0x95d/0x1f40 mm/vmscan.c:6169
 shrink_zones mm/vmscan.c:6407 [inline]
 do_try_to_free_pages+0x3b4/0x17a0 mm/vmscan.c:6469
 try_to_free_pages+0x2e5/0x960 mm/vmscan.c:6704
 __perform_reclaim mm/page_alloc.c:4750 [inline]
 __alloc_pages_direct_reclaim mm/page_alloc.c:4772 [inline]
 __alloc_pages_slowpath.constprop.0+0x8b6/0x23d0 mm/page_alloc.c:5178
 __alloc_pages+0x4aa/0x5b0 mm/page_alloc.c:5562
 __alloc_pages_node include/linux/gfp.h:237 [inline]
 kmem_getpages mm/slab.c:1363 [inline]
 cache_grow_begin+0x94/0x390 mm/slab.c:2576
 fallback_alloc+0x1e2/0x2d0 mm/slab.c:3119
 __do_cache_alloc mm/slab.c:3223 [inline]
 slab_alloc_node mm/slab.c:3256 [inline]
 kmem_cache_alloc_node+0x181/0x590 mm/slab.c:3534
 alloc_task_struct_node kernel/fork.c:171 [inline]
 dup_task_struct kernel/fork.c:979 [inline]
 copy_process+0x3aa/0x7520 kernel/fork.c:2097
 kernel_clone+0xeb/0x990 kernel/fork.c:2681
 kernel_thread+0xb9/0xf0 kernel/fork.c:2741
 create_kthread kernel/kthread.c:399 [inline]
 kthreadd+0x4f2/0x750 kernel/kthread.c:746
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 </TASK>

Crashes (3):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2023/02/02 07:12 | upstream | 9f266ccaa2f5 | 7374c4e5 | .config | console log | report | - | - | info | - | ci-qemu-upstream | possible deadlock in fs_reclaim_acquire
2022/12/02 16:06 | upstream | a4412fdd49dc | e080de16 | .config | console log | report | - | - | info | - | ci-qemu-upstream | possible deadlock in fs_reclaim_acquire
2023/02/19 22:26 | upstream | 925cf0457d7e | bcdf85f8 | .config | console log | report | - | - | info | - | ci-qemu-upstream-386 | possible deadlock in fs_reclaim_acquire
* Struck through repros no longer work on HEAD.