syzbot


possible deadlock in nilfs_dirty_inode (3)

Status: upstream: reported on 2024/04/29 12:03
Subsystems: nilfs
Reported-by: syzbot+ca73f5a22aec76875d85@syzkaller.appspotmail.com
First crash: 26d, last: 16h45m
Discussions (4)
Title | Replies (including bot) | Last reply
[syzbot] [nilfs?] possible deadlock in nilfs_evict_inode (2) | 1 (2) | 2024/05/18 19:20
[syzbot] [nilfs?] possible deadlock in nilfs_transaction_begin | 1 (2) | 2024/05/18 19:16
[syzbot] Monthly nilfs report (May 2024) | 0 (1) | 2024/05/06 13:18
[syzbot] [nilfs?] possible deadlock in nilfs_dirty_inode (3) | 0 (1) | 2024/04/29 12:03
Similar bugs (2)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | possible deadlock in nilfs_dirty_inode (2) [nilfs] | - | - | - | 1 | 153d | 151d | 0/26 | auto-obsoleted due to no activity on 2024/03/29 20:19
upstream | possible deadlock in nilfs_dirty_inode [nilfs] | - | - | - | 1 | 334d | 330d | 0/26 | auto-obsoleted due to no activity on 2023/09/30 14:44

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.9.0-syzkaller-09868-g6e51b4b5bbc0 #0 Not tainted
------------------------------------------------------
kswapd0/111 is trying to acquire lock:
ffff88801cc78610 (sb_internal#5){.+.+}-{0:0}, at: nilfs_dirty_inode+0x1a4/0x270 fs/nilfs2/inode.c:1153

but task is already holding lock:
ffffffff8dd3a860 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x166/0x1970 mm/vmscan.c:6798

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (fs_reclaim){+.+.}-{0:0}:
       __fs_reclaim_acquire mm/page_alloc.c:3783 [inline]
       fs_reclaim_acquire+0x102/0x160 mm/page_alloc.c:3797
       might_alloc include/linux/sched/mm.h:334 [inline]
       prepare_alloc_pages.constprop.0+0x155/0x560 mm/page_alloc.c:4431
       __alloc_pages_noprof+0x194/0x2460 mm/page_alloc.c:4649
       alloc_pages_mpol_noprof+0x275/0x610 mm/mempolicy.c:2265
       folio_alloc_noprof+0x1e/0xc0 mm/mempolicy.c:2343
       filemap_alloc_folio_noprof+0x3ba/0x490 mm/filemap.c:1008
       __filemap_get_folio+0x51e/0xa80 mm/filemap.c:1950
       pagecache_get_page+0x2c/0x250 mm/folio-compat.c:87
       block_write_begin+0x38/0x4a0 fs/buffer.c:2232
       nilfs_write_begin+0x9f/0x1a0 fs/nilfs2/inode.c:262
       page_symlink+0x356/0x450 fs/namei.c:5236
       nilfs_symlink+0x23c/0x3c0 fs/nilfs2/namei.c:153
       vfs_symlink fs/namei.c:4489 [inline]
       vfs_symlink+0x3e8/0x630 fs/namei.c:4473
       do_symlinkat+0x263/0x310 fs/namei.c:4515
       __do_sys_symlinkat fs/namei.c:4531 [inline]
       __se_sys_symlinkat fs/namei.c:4528 [inline]
       __ia32_sys_symlinkat+0x97/0xc0 fs/namei.c:4528
       do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
       __do_fast_syscall_32+0x75/0x120 arch/x86/entry/common.c:386
       do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
       entry_SYSENTER_compat_after_hwframe+0x84/0x8e

-> #1 (&nilfs->ns_segctor_sem){++++}-{3:3}:
       down_read+0x9a/0x330 kernel/locking/rwsem.c:1526
       nilfs_transaction_begin+0x326/0xa40 fs/nilfs2/segment.c:223
       nilfs_mkdir+0xb5/0x380 fs/nilfs2/namei.c:212
       vfs_mkdir+0x57d/0x820 fs/namei.c:4131
       do_mkdirat+0x301/0x3a0 fs/namei.c:4154
       __do_sys_mkdirat fs/namei.c:4169 [inline]
       __se_sys_mkdirat fs/namei.c:4167 [inline]
       __ia32_sys_mkdirat+0x84/0xb0 fs/namei.c:4167
       do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
       __do_fast_syscall_32+0x75/0x120 arch/x86/entry/common.c:386
       do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
       entry_SYSENTER_compat_after_hwframe+0x84/0x8e

-> #0 (sb_internal#5){.+.+}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain kernel/locking/lockdep.c:3869 [inline]
       __lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
       lock_acquire kernel/locking/lockdep.c:5754 [inline]
       lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
       percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
       __sb_start_write include/linux/fs.h:1661 [inline]
       sb_start_intwrite include/linux/fs.h:1844 [inline]
       nilfs_transaction_begin+0x21b/0xa40 fs/nilfs2/segment.c:220
       nilfs_dirty_inode+0x1a4/0x270 fs/nilfs2/inode.c:1153
       __mark_inode_dirty+0x1f0/0xe70 fs/fs-writeback.c:2486
       mark_inode_dirty_sync include/linux/fs.h:2426 [inline]
       iput.part.0+0x5b/0x7f0 fs/inode.c:1764
       iput+0x5c/0x80 fs/inode.c:1757
       dentry_unlink_inode+0x295/0x440 fs/dcache.c:400
       __dentry_kill+0x1d0/0x600 fs/dcache.c:603
       shrink_kill fs/dcache.c:1048 [inline]
       shrink_dentry_list+0x140/0x5d0 fs/dcache.c:1075
       prune_dcache_sb+0xeb/0x150 fs/dcache.c:1156
       super_cache_scan+0x32a/0x550 fs/super.c:221
       do_shrink_slab+0x44f/0x11c0 mm/shrinker.c:435
       shrink_slab_memcg mm/shrinker.c:548 [inline]
       shrink_slab+0xa87/0x1310 mm/shrinker.c:626
       shrink_one+0x493/0x7c0 mm/vmscan.c:4790
       shrink_many mm/vmscan.c:4851 [inline]
       lru_gen_shrink_node+0x89f/0x1750 mm/vmscan.c:4951
       shrink_node mm/vmscan.c:5910 [inline]
       kswapd_shrink_node mm/vmscan.c:6720 [inline]
       balance_pgdat+0x1105/0x1970 mm/vmscan.c:6911
       kswapd+0x5ea/0xbf0 mm/vmscan.c:7180
       kthread+0x2c1/0x3a0 kernel/kthread.c:389
       ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

other info that might help us debug this:

Chain exists of:
  sb_internal#5 --> &nilfs->ns_segctor_sem --> fs_reclaim

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&nilfs->ns_segctor_sem);
                               lock(fs_reclaim);
  rlock(sb_internal#5);

 *** DEADLOCK ***

2 locks held by kswapd0/111:
 #0: ffffffff8dd3a860 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x166/0x1970 mm/vmscan.c:6798
 #1: ffff88801cc780e0 (&type->s_umount_key#84){++++}-{3:3}, at: super_trylock_shared fs/super.c:561 [inline]
 #1: ffff88801cc780e0 (&type->s_umount_key#84){++++}-{3:3}, at: super_cache_scan+0x96/0x550 fs/super.c:196

stack backtrace:
CPU: 3 PID: 111 Comm: kswapd0 Not tainted 6.9.0-syzkaller-09868-g6e51b4b5bbc0 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:114
 check_noncircular+0x31a/0x400 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain kernel/locking/lockdep.c:3869 [inline]
 __lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
 lock_acquire kernel/locking/lockdep.c:5754 [inline]
 lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
 percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
 __sb_start_write include/linux/fs.h:1661 [inline]
 sb_start_intwrite include/linux/fs.h:1844 [inline]
 nilfs_transaction_begin+0x21b/0xa40 fs/nilfs2/segment.c:220
 nilfs_dirty_inode+0x1a4/0x270 fs/nilfs2/inode.c:1153
 __mark_inode_dirty+0x1f0/0xe70 fs/fs-writeback.c:2486
 mark_inode_dirty_sync include/linux/fs.h:2426 [inline]
 iput.part.0+0x5b/0x7f0 fs/inode.c:1764
 iput+0x5c/0x80 fs/inode.c:1757
 dentry_unlink_inode+0x295/0x440 fs/dcache.c:400
 __dentry_kill+0x1d0/0x600 fs/dcache.c:603
 shrink_kill fs/dcache.c:1048 [inline]
 shrink_dentry_list+0x140/0x5d0 fs/dcache.c:1075
 prune_dcache_sb+0xeb/0x150 fs/dcache.c:1156
 super_cache_scan+0x32a/0x550 fs/super.c:221
 do_shrink_slab+0x44f/0x11c0 mm/shrinker.c:435
 shrink_slab_memcg mm/shrinker.c:548 [inline]
 shrink_slab+0xa87/0x1310 mm/shrinker.c:626
 shrink_one+0x493/0x7c0 mm/vmscan.c:4790
 shrink_many mm/vmscan.c:4851 [inline]
 lru_gen_shrink_node+0x89f/0x1750 mm/vmscan.c:4951
 shrink_node mm/vmscan.c:5910 [inline]
 kswapd_shrink_node mm/vmscan.c:6720 [inline]
 balance_pgdat+0x1105/0x1970 mm/vmscan.c:6911
 kswapd+0x5ea/0xbf0 mm/vmscan.c:7180
 kthread+0x2c1/0x3a0 kernel/kthread.c:389
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>

Crashes (4):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2024/05/21 04:47 | upstream | 6e51b4b5bbc0 | c2e07261 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in nilfs_dirty_inode
2024/05/16 12:58 | upstream | 3c999d1ae3c7 | ef5d53ed | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in nilfs_dirty_inode
2024/05/09 01:44 | upstream | 6d7ddd805123 | 20bf80e1 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in nilfs_dirty_inode
2024/04/25 12:01 | upstream | e88c4cfcb7b8 | 8bdc0f22 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in nilfs_dirty_inode