possible deadlock in jffs2_do_clear_inode

Status: upstream: reported on 2024/04/10 05:53
Subsystems: jffs2
Reported-by: syzbot+88a60d3f927e2460d4ac@syzkaller.appspotmail.com
First crash: 24d, last: 16d
Discussions (1):
Title: [syzbot] [jffs2?] possible deadlock in jffs2_do_clear_inode
Replies (including bot): 0 (1)
Last reply: 2024/04/10 05:53

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.8.0-syzkaller-08951-gfe46a7dd189e #0 Not tainted
------------------------------------------------------
kswapd0/87 is trying to acquire lock:
ffff8880777c91f0 (&f->sem){+.+.}-{3:3}, at: jffs2_do_clear_inode+0x64/0x3b0 fs/jffs2/readinode.c:1419

but task is already holding lock:
ffffffff8e21dda0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6774 [inline]
ffffffff8e21dda0 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xb39/0x2f50 mm/vmscan.c:7146

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}-{0:0}:
       lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
       __fs_reclaim_acquire mm/page_alloc.c:3692 [inline]
       fs_reclaim_acquire+0x88/0x130 mm/page_alloc.c:3706
       might_alloc include/linux/sched/mm.h:303 [inline]
       slab_pre_alloc_hook mm/slub.c:3746 [inline]
       slab_alloc_node mm/slub.c:3827 [inline]
       kmem_cache_alloc+0x48/0x340 mm/slub.c:3852
       jffs2_do_read_inode+0x37e/0x700 fs/jffs2/readinode.c:1372
       jffs2_iget+0x277/0x1130 fs/jffs2/fs.c:277
       jffs2_do_fill_super+0x57a/0xb60 fs/jffs2/fs.c:577
       mtd_get_sb+0x191/0x3c0 drivers/mtd/mtdsuper.c:57
       get_tree_mtd+0x659/0x820 drivers/mtd/mtdsuper.c:141
       vfs_get_tree+0x90/0x2a0 fs/super.c:1779
       do_new_mount+0x2be/0xb40 fs/namespace.c:3352
       do_mount fs/namespace.c:3692 [inline]
       __do_sys_mount fs/namespace.c:3898 [inline]
       __se_sys_mount+0x2d9/0x3c0 fs/namespace.c:3875
       do_syscall_64+0xfb/0x240
       entry_SYSCALL_64_after_hwframe+0x6d/0x75

-> #0 (&f->sem){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain+0x18cb/0x58e0 kernel/locking/lockdep.c:3869
       __lock_acquire+0x1346/0x1fd0 kernel/locking/lockdep.c:5137
       lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
       __mutex_lock_common kernel/locking/mutex.c:608 [inline]
       __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
       jffs2_do_clear_inode+0x64/0x3b0 fs/jffs2/readinode.c:1419
       evict+0x2a8/0x630 fs/inode.c:667
       dispose_list fs/inode.c:700 [inline]
       prune_icache_sb+0x239/0x2f0 fs/inode.c:885
       super_cache_scan+0x38c/0x4b0 fs/super.c:223
       do_shrink_slab+0x6d0/0x1140 mm/shrinker.c:435
       shrink_slab_memcg mm/shrinker.c:548 [inline]
       shrink_slab+0x883/0x14d0 mm/shrinker.c:626
       shrink_node_memcgs mm/vmscan.c:5869 [inline]
       shrink_node+0x1208/0x2960 mm/vmscan.c:5902
       kswapd_shrink_node mm/vmscan.c:6696 [inline]
       balance_pgdat mm/vmscan.c:6886 [inline]
       kswapd+0x1aac/0x2f50 mm/vmscan.c:7146
       kthread+0x2f0/0x390 kernel/kthread.c:388
       ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&f->sem);
                               lock(fs_reclaim);
  lock(&f->sem);

 *** DEADLOCK ***

2 locks held by kswapd0/87:
 #0: ffffffff8e21dda0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6774 [inline]
 #0: ffffffff8e21dda0 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xb39/0x2f50 mm/vmscan.c:7146
 #1: ffff888078bd60e0 (&type->s_umount_key#60){++++}-{3:3}, at: super_trylock_shared fs/super.c:561 [inline]
 #1: ffff888078bd60e0 (&type->s_umount_key#60){++++}-{3:3}, at: super_cache_scan+0x94/0x4b0 fs/super.c:196

stack backtrace:
CPU: 1 PID: 87 Comm: kswapd0 Not tainted 6.8.0-syzkaller-08951-gfe46a7dd189e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain+0x18cb/0x58e0 kernel/locking/lockdep.c:3869
 __lock_acquire+0x1346/0x1fd0 kernel/locking/lockdep.c:5137
 lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
 __mutex_lock_common kernel/locking/mutex.c:608 [inline]
 __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
 jffs2_do_clear_inode+0x64/0x3b0 fs/jffs2/readinode.c:1419
 evict+0x2a8/0x630 fs/inode.c:667
 dispose_list fs/inode.c:700 [inline]
 prune_icache_sb+0x239/0x2f0 fs/inode.c:885
 super_cache_scan+0x38c/0x4b0 fs/super.c:223
 do_shrink_slab+0x6d0/0x1140 mm/shrinker.c:435
 shrink_slab_memcg mm/shrinker.c:548 [inline]
 shrink_slab+0x883/0x14d0 mm/shrinker.c:626
 shrink_node_memcgs mm/vmscan.c:5869 [inline]
 shrink_node+0x1208/0x2960 mm/vmscan.c:5902
 kswapd_shrink_node mm/vmscan.c:6696 [inline]
 balance_pgdat mm/vmscan.c:6886 [inline]
 kswapd+0x1aac/0x2f50 mm/vmscan.c:7146
 kthread+0x2f0/0x390 kernel/kthread.c:388
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243
 </TASK>
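
Analysis: the report is a classic AB/BA lock inversion. Chain #1 records that jffs2_iget() acquires f->sem and then jffs2_do_read_inode() performs a reclaim-capable allocation, which takes fs_reclaim (the f->sem -> fs_reclaim edge). Chain #0 records kswapd holding fs_reclaim while the superblock shrinker evicts an inode and jffs2_do_clear_inode() tries to take f->sem (the fs_reclaim -> f->sem edge). The userspace sketch below reproduces the same inversion with two pthread mutexes standing in for fs_reclaim and f->sem; it illustrates the pattern only and is not JFFS2 code.

/*
 * Userspace analogue of the reported inversion (illustrative only):
 * two threads take the same pair of locks in opposite orders.
 * Build: cc -pthread demo.c
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t fs_reclaim = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t f_sem      = PTHREAD_MUTEX_INITIALIZER;

/* Mirrors chain #1: jffs2_iget() holds f->sem, then an allocation in
 * jffs2_do_read_inode() enters direct reclaim (fs_reclaim). */
static void *read_inode_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&f_sem);
	pthread_mutex_lock(&fs_reclaim);   /* f->sem -> fs_reclaim */
	pthread_mutex_unlock(&fs_reclaim);
	pthread_mutex_unlock(&f_sem);
	return NULL;
}

/* Mirrors chain #0: kswapd holds fs_reclaim, then the superblock
 * shrinker evicts an inode and jffs2_do_clear_inode() takes f->sem. */
static void *reclaim_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&fs_reclaim);
	pthread_mutex_lock(&f_sem);        /* fs_reclaim -> f->sem */
	pthread_mutex_unlock(&f_sem);
	pthread_mutex_unlock(&fs_reclaim);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, read_inode_path, NULL);
	pthread_create(&b, NULL, reclaim_path, NULL);
	pthread_join(a, NULL);             /* hangs when the two orders interleave */
	pthread_join(b, NULL);
	puts("no deadlock this run; the race is timing-dependent");
	return 0;
}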

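A generic way filesystems break this kind of inversion is to make allocations performed under the filesystem lock scoped-NOFS, so reclaim entered from those allocations cannot call back into filesystem shrinkers. The sketch below uses the kernel's real memalloc_nofs_save()/memalloc_nofs_restore() scope API (include/linux/sched/mm.h); whether this is how this JFFS2 report was ultimately resolved is not stated here, and read_inode_locked()/do_allocating_work() are hypothetical stand-ins for the jffs2_do_read_inode() call site.

/*
 * Sketch of one generic mitigation, NOT necessarily the fix adopted
 * for this report: mark the allocation scope NOFS while f->sem is
 * held. read_inode_locked() and do_allocating_work() are hypothetical.
 */
#include <linux/mutex.h>
#include <linux/sched/mm.h>

int do_allocating_work(void);              /* hypothetical: may allocate memory */

static int read_inode_locked(struct mutex *f_sem)
{
	unsigned int nofs;
	int ret;

	mutex_lock(f_sem);
	nofs = memalloc_nofs_save();       /* allocations below behave as GFP_NOFS */
	ret = do_allocating_work();
	memalloc_nofs_restore(nofs);
	mutex_unlock(f_sem);
	return ret;
}

Inside the NOFS scope, __GFP_FS is effectively stripped from allocations, so reclaim entered from them does not re-enter filesystem shrinkers and lockdep no longer records an f->sem -> fs_reclaim dependency at that site, which breaks the cycle.
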
Crashes (3):
Time              Kernel    Commit        Syzkaller  Manager               Title
2024/04/13 21:35  upstream  fe46a7dd189e  c8349e48   ci2-upstream-fs       possible deadlock in jffs2_do_clear_inode
2024/04/13 14:32  upstream  fe46a7dd189e  c8349e48   ci2-upstream-fs       possible deadlock in jffs2_do_clear_inode
2024/04/06 05:43  upstream  e8b0ccb2a787  ca620dd8   ci-qemu-upstream-386  possible deadlock in jffs2_do_clear_inode