syzbot

possible deadlock in nilfs_dirty_inode (3)

Status: closed as dup on 2024/07/04 16:38
Subsystems: nilfs
Reported-by: syzbot+ca73f5a22aec76875d85@syzkaller.appspotmail.com
First crash: 211d, last: 155d
Duplicate of
Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported
possible deadlock in nilfs_transaction_begin [nilfs] | - | - | - | 16 | 154d | 188d
Discussions (5)
Title | Replies (including bot) | Last reply
[syzbot] [nilfs?] possible deadlock in nilfs_evict_inode (2) | 2 (3) | 2024/07/04 16:40
[syzbot] [nilfs?] possible deadlock in nilfs_dirty_inode (3) | 1 (2) | 2024/07/04 16:37
[syzbot] Monthly nilfs report (Jun 2024) | 0 (1) | 2024/06/10 20:56
[syzbot] [nilfs?] possible deadlock in nilfs_transaction_begin | 1 (2) | 2024/05/18 19:16
[syzbot] Monthly nilfs report (May 2024) | 0 (1) | 2024/05/06 13:18
Similar bugs (3)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | possible deadlock in nilfs_dirty_inode (2) [nilfs] | - | - | - | 1 | 338d | 337d | 0/28 | auto-obsoleted due to no activity on 2024/03/29 20:19
upstream | possible deadlock in nilfs_dirty_inode [nilfs] | - | - | - | 1 | 519d | 515d | 0/28 | auto-obsoleted due to no activity on 2023/09/30 14:44
upstream | possible deadlock in nilfs_dirty_inode (4) [nilfs] | C | - | - | 2 | 24d | 34d | 28/28 | fixed on 2024/11/14 10:09

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.10.0-rc3-syzkaller-00044-g2ccbdf43d5e7 #0 Not tainted
------------------------------------------------------
kswapd0/111 is trying to acquire lock:
ffff888071844610 (sb_internal#5){.+.+}-{0:0}, at: nilfs_dirty_inode+0x1a4/0x270 fs/nilfs2/inode.c:1153

but task is already holding lock:
ffffffff8dd3aac0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x166/0x1970 mm/vmscan.c:6798

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (fs_reclaim){+.+.}-{0:0}:
       __fs_reclaim_acquire mm/page_alloc.c:3801 [inline]
       fs_reclaim_acquire+0x102/0x160 mm/page_alloc.c:3815
       might_alloc include/linux/sched/mm.h:334 [inline]
       prepare_alloc_pages.constprop.0+0x155/0x560 mm/page_alloc.c:4449
       __alloc_pages_noprof+0x194/0x2460 mm/page_alloc.c:4667
       alloc_pages_mpol_noprof+0x275/0x610 mm/mempolicy.c:2265
       folio_alloc_noprof+0x1e/0xc0 mm/mempolicy.c:2343
       filemap_alloc_folio_noprof+0x3ba/0x490 mm/filemap.c:1008
       __filemap_get_folio+0x51e/0xa80 mm/filemap.c:1950
       pagecache_get_page+0x2c/0x250 mm/folio-compat.c:87
       block_write_begin+0x38/0x4a0 fs/buffer.c:2232
       nilfs_write_begin+0x9f/0x1a0 fs/nilfs2/inode.c:262
       page_symlink+0x356/0x450 fs/namei.c:5236
       nilfs_symlink+0x23c/0x3c0 fs/nilfs2/namei.c:153
       vfs_symlink fs/namei.c:4489 [inline]
       vfs_symlink+0x3e8/0x660 fs/namei.c:4473
       do_symlinkat+0x263/0x310 fs/namei.c:4515
       __do_sys_symlinkat fs/namei.c:4531 [inline]
       __se_sys_symlinkat fs/namei.c:4528 [inline]
       __ia32_sys_symlinkat+0x97/0xc0 fs/namei.c:4528
       do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
       __do_fast_syscall_32+0x73/0x120 arch/x86/entry/common.c:386
       do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
       entry_SYSENTER_compat_after_hwframe+0x84/0x8e

-> #1 (&nilfs->ns_segctor_sem){++++}-{3:3}:
       down_read+0x9a/0x330 kernel/locking/rwsem.c:1526
       nilfs_transaction_begin+0x326/0xa40 fs/nilfs2/segment.c:223
       nilfs_symlink+0x114/0x3c0 fs/nilfs2/namei.c:140
       vfs_symlink fs/namei.c:4489 [inline]
       vfs_symlink+0x3e8/0x660 fs/namei.c:4473
       do_symlinkat+0x263/0x310 fs/namei.c:4515
       __do_sys_symlinkat fs/namei.c:4531 [inline]
       __se_sys_symlinkat fs/namei.c:4528 [inline]
       __ia32_sys_symlinkat+0x97/0xc0 fs/namei.c:4528
       do_syscall_32_irqs_on arch/x86/entry/common.c:165 [inline]
       __do_fast_syscall_32+0x73/0x120 arch/x86/entry/common.c:386
       do_fast_syscall_32+0x32/0x80 arch/x86/entry/common.c:411
       entry_SYSENTER_compat_after_hwframe+0x84/0x8e

-> #0 (sb_internal#5){.+.+}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain kernel/locking/lockdep.c:3869 [inline]
       __lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
       lock_acquire kernel/locking/lockdep.c:5754 [inline]
       lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
       percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
       __sb_start_write include/linux/fs.h:1655 [inline]
       sb_start_intwrite include/linux/fs.h:1838 [inline]
       nilfs_transaction_begin+0x21b/0xa40 fs/nilfs2/segment.c:220
       nilfs_dirty_inode+0x1a4/0x270 fs/nilfs2/inode.c:1153
       __mark_inode_dirty+0x1f0/0xe70 fs/fs-writeback.c:2486
       mark_inode_dirty_sync include/linux/fs.h:2409 [inline]
       iput.part.0+0x5b/0x7f0 fs/inode.c:1764
       iput+0x5c/0x80 fs/inode.c:1757
       dentry_unlink_inode+0x295/0x480 fs/dcache.c:400
       __dentry_kill+0x1d0/0x600 fs/dcache.c:603
       shrink_kill fs/dcache.c:1048 [inline]
       shrink_dentry_list+0x140/0x5d0 fs/dcache.c:1075
       prune_dcache_sb+0xeb/0x150 fs/dcache.c:1156
       super_cache_scan+0x32a/0x550 fs/super.c:221
       do_shrink_slab+0x44f/0x11c0 mm/shrinker.c:435
       shrink_slab_memcg mm/shrinker.c:548 [inline]
       shrink_slab+0xa87/0x1310 mm/shrinker.c:626
       shrink_one+0x493/0x7c0 mm/vmscan.c:4790
       shrink_many mm/vmscan.c:4851 [inline]
       lru_gen_shrink_node+0x89f/0x1750 mm/vmscan.c:4951
       shrink_node mm/vmscan.c:5910 [inline]
       kswapd_shrink_node mm/vmscan.c:6720 [inline]
       balance_pgdat+0x1105/0x1970 mm/vmscan.c:6911
       kswapd+0x5ea/0xbf0 mm/vmscan.c:7180
       kthread+0x2c1/0x3a0 kernel/kthread.c:389
       ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

other info that might help us debug this:

Chain exists of:
  sb_internal#5 --> &nilfs->ns_segctor_sem --> fs_reclaim

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&nilfs->ns_segctor_sem);
                               lock(fs_reclaim);
  rlock(sb_internal#5);

 *** DEADLOCK ***

2 locks held by kswapd0/111:
 #0: ffffffff8dd3aac0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x166/0x1970 mm/vmscan.c:6798
 #1: ffff8880718440e0 (&type->s_umount_key#77){++++}-{3:3}, at: super_trylock_shared fs/super.c:562 [inline]
 #1: ffff8880718440e0 (&type->s_umount_key#77){++++}-{3:3}, at: super_cache_scan+0x96/0x550 fs/super.c:196

stack backtrace:
CPU: 3 PID: 111 Comm: kswapd0 Not tainted 6.10.0-rc3-syzkaller-00044-g2ccbdf43d5e7 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:114
 check_noncircular+0x31a/0x400 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain kernel/locking/lockdep.c:3869 [inline]
 __lock_acquire+0x2478/0x3b30 kernel/locking/lockdep.c:5137
 lock_acquire kernel/locking/lockdep.c:5754 [inline]
 lock_acquire+0x1b1/0x560 kernel/locking/lockdep.c:5719
 percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
 __sb_start_write include/linux/fs.h:1655 [inline]
 sb_start_intwrite include/linux/fs.h:1838 [inline]
 nilfs_transaction_begin+0x21b/0xa40 fs/nilfs2/segment.c:220
 nilfs_dirty_inode+0x1a4/0x270 fs/nilfs2/inode.c:1153
 __mark_inode_dirty+0x1f0/0xe70 fs/fs-writeback.c:2486
 mark_inode_dirty_sync include/linux/fs.h:2409 [inline]
 iput.part.0+0x5b/0x7f0 fs/inode.c:1764
 iput+0x5c/0x80 fs/inode.c:1757
 dentry_unlink_inode+0x295/0x480 fs/dcache.c:400
 __dentry_kill+0x1d0/0x600 fs/dcache.c:603
 shrink_kill fs/dcache.c:1048 [inline]
 shrink_dentry_list+0x140/0x5d0 fs/dcache.c:1075
 prune_dcache_sb+0xeb/0x150 fs/dcache.c:1156
 super_cache_scan+0x32a/0x550 fs/super.c:221
 do_shrink_slab+0x44f/0x11c0 mm/shrinker.c:435
 shrink_slab_memcg mm/shrinker.c:548 [inline]
 shrink_slab+0xa87/0x1310 mm/shrinker.c:626
 shrink_one+0x493/0x7c0 mm/vmscan.c:4790
 shrink_many mm/vmscan.c:4851 [inline]
 lru_gen_shrink_node+0x89f/0x1750 mm/vmscan.c:4951
 shrink_node mm/vmscan.c:5910 [inline]
 kswapd_shrink_node mm/vmscan.c:6720 [inline]
 balance_pgdat+0x1105/0x1970 mm/vmscan.c:6911
 kswapd+0x5ea/0xbf0 mm/vmscan.c:7180
 kthread+0x2c1/0x3a0 kernel/kthread.c:389
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>

Crashes (16):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2024/06/20 06:30 | upstream | 2ccbdf43d5e7 | c2e07261 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in nilfs_dirty_inode
2024/06/16 11:41 | upstream | 2ccbdf43d5e7 | c2e07261 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in nilfs_dirty_inode
2024/06/10 21:14 | upstream | 83a7eefedc9b | c2e07261 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in nilfs_dirty_inode
2024/06/06 23:55 | upstream | d30d0e49da71 | c2e07261 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in nilfs_dirty_inode
2024/06/06 23:55 | upstream | d30d0e49da71 | c2e07261 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in nilfs_dirty_inode
2024/05/27 19:42 | upstream | 2bfcfd584ff5 | c2e07261 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in nilfs_dirty_inode
2024/05/27 04:02 | upstream | 1613e604df0c | c2e07261 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in nilfs_dirty_inode
2024/05/27 04:02 | upstream | 1613e604df0c | c2e07261 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in nilfs_dirty_inode
2024/05/27 02:52 | upstream | 1613e604df0c | c2e07261 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in nilfs_dirty_inode
2024/05/25 17:52 | upstream | 56fb6f92854f | c2e07261 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in nilfs_dirty_inode
2024/05/25 08:55 | upstream | 56fb6f92854f | c2e07261 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in nilfs_dirty_inode
2024/05/25 08:55 | upstream | 56fb6f92854f | c2e07261 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in nilfs_dirty_inode
2024/05/21 04:47 | upstream | 6e51b4b5bbc0 | c2e07261 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in nilfs_dirty_inode
2024/05/16 12:58 | upstream | 3c999d1ae3c7 | ef5d53ed | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in nilfs_dirty_inode
2024/05/09 01:44 | upstream | 6d7ddd805123 | 20bf80e1 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in nilfs_dirty_inode
2024/04/25 12:01 | upstream | e88c4cfcb7b8 | 8bdc0f22 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in nilfs_dirty_inode