syzbot


possible deadlock in ext4_evict_inode (3)

Status: upstream: reported on 2024/02/15 21:18
Subsystems: ext4
Reported-by: syzbot+295234f4b13c00852ba4@syzkaller.appspotmail.com
First crash: 283d, last: 85d
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [ext4?] possible deadlock in ext4_evict_inode (3) | 0 (1) | 2024/02/15 21:18
Similar bugs (7)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | possible deadlock in ext4_evict_inode (2) ext4 | - | - | - | 18 | 615d | 791d | 0/28 | auto-obsoleted due to no activity on 2023/07/14 13:24
linux-6.1 | possible deadlock in ext4_evict_inode (3) | - | - | - | 1 | 231d | 231d | 0/3 | auto-obsoleted due to no activity on 2024/07/12 10:45
linux-5.15 | possible deadlock in ext4_evict_inode (2) | - | - | - | 2 | 253d | 258d | 0/3 | auto-obsoleted due to no activity on 2024/06/20 13:51
upstream | possible deadlock in ext4_evict_inode ext4 | syz | error | error | 38 | 2228d | 2267d | 15/28 | fixed on 2020/05/22 17:31
linux-6.1 | possible deadlock in ext4_evict_inode | - | - | - | 2 | 498d | 608d | 0/3 | auto-obsoleted due to no activity on 2023/10/19 10:40
linux-6.1 | possible deadlock in ext4_evict_inode (2) | - | - | - | 1 | 351d | 351d | 0/3 | auto-obsoleted due to no activity on 2024/03/15 02:58
linux-5.15 | possible deadlock in ext4_evict_inode | - | - | - | 2 | 600d | 604d | 0/3 | auto-obsoleted due to no activity on 2023/07/30 00:10

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.11.0-rc5-syzkaller-00015-g3e9bff3bbe13 #0 Not tainted
------------------------------------------------------
kswapd0/91 is trying to acquire lock:
ffff888069318610 (sb_internal){++++}-{0:0}, at: __sb_start_write include/linux/fs.h:1675 [inline]
ffff888069318610 (sb_internal){++++}-{0:0}, at: sb_start_intwrite include/linux/fs.h:1858 [inline]
ffff888069318610 (sb_internal){++++}-{0:0}, at: ext4_evict_inode+0x2f4/0xf50 fs/ext4/inode.c:212

but task is already holding lock:
ffffffff8e82e420 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6841 [inline]
ffffffff8e82e420 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xb06/0x2e80 mm/vmscan.c:7223

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (fs_reclaim){+.+.}-{0:0}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
       __fs_reclaim_acquire mm/page_alloc.c:3818 [inline]
       fs_reclaim_acquire+0x88/0x140 mm/page_alloc.c:3832
       might_alloc include/linux/sched/mm.h:334 [inline]
       slab_pre_alloc_hook mm/slub.c:3939 [inline]
       slab_alloc_node mm/slub.c:4017 [inline]
       __kmalloc_cache_noprof+0x3d/0x2c0 mm/slub.c:4184
       kmalloc_noprof include/linux/slab.h:681 [inline]
       kzalloc_noprof include/linux/slab.h:807 [inline]
       assoc_array_insert+0xfe/0x33e0 lib/assoc_array.c:980
       __key_link_begin+0xe5/0x1f0 security/keys/keyring.c:1314
       __key_create_or_update+0x570/0xc70 security/keys/key.c:874
       key_create_or_update+0x42/0x60 security/keys/key.c:1018
       x509_load_certificate_list+0x149/0x270 crypto/asymmetric_keys/x509_loader.c:31
       do_one_initcall+0x248/0x880 init/main.c:1267
       do_initcall_level+0x157/0x210 init/main.c:1329
       do_initcalls+0x3f/0x80 init/main.c:1345
       kernel_init_freeable+0x435/0x5d0 init/main.c:1578
       kernel_init+0x1d/0x2b0 init/main.c:1467
       ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

-> #1 (&type->lock_class){+.+.}-{3:3}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
       down_write+0x99/0x220 kernel/locking/rwsem.c:1579
       keyring_clear+0xb2/0x350 security/keys/keyring.c:1655
       fscrypt_put_master_key+0xc8/0x190 fs/crypto/keyring.c:79
       put_crypt_info+0x275/0x320 fs/crypto/keysetup.c:548
       fscrypt_put_encryption_info+0x40/0x60 fs/crypto/keysetup.c:753
       ext4_clear_inode+0x15b/0x1c0 fs/ext4/super.c:1524
       ext4_free_inode+0x392/0xfc0 fs/ext4/ialloc.c:278
       ext4_evict_inode+0xbef/0xf50 fs/ext4/inode.c:303
       evict+0x532/0x950 fs/inode.c:704
       d_delete_notify include/linux/fsnotify.h:332 [inline]
       vfs_rmdir+0x3d7/0x510 fs/namei.c:4306
       do_rmdir+0x3b5/0x580 fs/namei.c:4352
       __do_sys_unlinkat fs/namei.c:4528 [inline]
       __se_sys_unlinkat fs/namei.c:4522 [inline]
       __x64_sys_unlinkat+0xde/0xf0 fs/namei.c:4522
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (sb_internal){++++}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3133 [inline]
       check_prevs_add kernel/locking/lockdep.c:3252 [inline]
       validate_chain+0x18e0/0x5900 kernel/locking/lockdep.c:3868
       __lock_acquire+0x137a/0x2040 kernel/locking/lockdep.c:5142
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
       percpu_down_read+0x44/0x1b0 include/linux/percpu-rwsem.h:51
       __sb_start_write include/linux/fs.h:1675 [inline]
       sb_start_intwrite include/linux/fs.h:1858 [inline]
       ext4_evict_inode+0x2f4/0xf50 fs/ext4/inode.c:212
       evict+0x532/0x950 fs/inode.c:704
       __dentry_kill+0x20d/0x630 fs/dcache.c:610
       shrink_kill+0xa9/0x2c0 fs/dcache.c:1055
       shrink_dentry_list+0x2c0/0x5b0 fs/dcache.c:1082
       prune_dcache_sb+0x10f/0x180 fs/dcache.c:1163
       super_cache_scan+0x34f/0x4b0 fs/super.c:221
       do_shrink_slab+0x701/0x1160 mm/shrinker.c:435
       shrink_slab_memcg mm/shrinker.c:548 [inline]
       shrink_slab+0x878/0x14d0 mm/shrinker.c:626
       shrink_node_memcgs mm/vmscan.c:5910 [inline]
       shrink_node+0x130f/0x2df0 mm/vmscan.c:5948
       kswapd_shrink_node mm/vmscan.c:6762 [inline]
       balance_pgdat mm/vmscan.c:6954 [inline]
       kswapd+0x191b/0x2e80 mm/vmscan.c:7223
       kthread+0x2f0/0x390 kernel/kthread.c:389
       ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

other info that might help us debug this:

Chain exists of:
  sb_internal --> &type->lock_class --> fs_reclaim

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&type->lock_class);
                               lock(fs_reclaim);
  rlock(sb_internal);

 *** DEADLOCK ***

2 locks held by kswapd0/91:
 #0: ffffffff8e82e420 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6841 [inline]
 #0: ffffffff8e82e420 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xb06/0x2e80 mm/vmscan.c:7223
 #1: ffff8880693180e0 (&type->s_umount_key#32){++++}-{3:3}, at: super_trylock_shared fs/super.c:562 [inline]
 #1: ffff8880693180e0 (&type->s_umount_key#32){++++}-{3:3}, at: super_cache_scan+0x94/0x4b0 fs/super.c:196

stack backtrace:
CPU: 0 UID: 0 PID: 91 Comm: kswapd0 Not tainted 6.11.0-rc5-syzkaller-00015-g3e9bff3bbe13 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/06/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:93 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:119
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2186
 check_prev_add kernel/locking/lockdep.c:3133 [inline]
 check_prevs_add kernel/locking/lockdep.c:3252 [inline]
 validate_chain+0x18e0/0x5900 kernel/locking/lockdep.c:3868
 __lock_acquire+0x137a/0x2040 kernel/locking/lockdep.c:5142
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
 percpu_down_read+0x44/0x1b0 include/linux/percpu-rwsem.h:51
 __sb_start_write include/linux/fs.h:1675 [inline]
 sb_start_intwrite include/linux/fs.h:1858 [inline]
 ext4_evict_inode+0x2f4/0xf50 fs/ext4/inode.c:212
 evict+0x532/0x950 fs/inode.c:704
 __dentry_kill+0x20d/0x630 fs/dcache.c:610
 shrink_kill+0xa9/0x2c0 fs/dcache.c:1055
 shrink_dentry_list+0x2c0/0x5b0 fs/dcache.c:1082
 prune_dcache_sb+0x10f/0x180 fs/dcache.c:1163
 super_cache_scan+0x34f/0x4b0 fs/super.c:221
 do_shrink_slab+0x701/0x1160 mm/shrinker.c:435
 shrink_slab_memcg mm/shrinker.c:548 [inline]
 shrink_slab+0x878/0x14d0 mm/shrinker.c:626
 shrink_node_memcgs mm/vmscan.c:5910 [inline]
 shrink_node+0x130f/0x2df0 mm/vmscan.c:5948
 kswapd_shrink_node mm/vmscan.c:6762 [inline]
 balance_pgdat mm/vmscan.c:6954 [inline]
 kswapd+0x191b/0x2e80 mm/vmscan.c:7223
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
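
The lockdep output above reduces to a circular lock dependency: the rmdir/unlink path ends up holding sb_internal, then the fscrypt keyring lock, and can enter fs_reclaim through a GFP_KERNEL allocation under that lock, while kswapd already holds fs_reclaim when it evicts the inode and tries to take sb_internal. The standalone C program below is a deliberately simplified two-lock analogy of that cycle (the actual report involves three locks); every identifier in it (reclaim_lock, sb_lock, reclaim_path, writer_path) is hypothetical and only mirrors the shape of the chain, not the kernel code. Build with `cc -pthread`; the program hangs, which is the point.

/*
 * Userspace analogy of the reported cycle, reduced to two locks.
 * Thread A mimics kswapd: "fs_reclaim" first, then "sb_internal".
 * Thread B mimics the rmdir path: "sb_internal" first, then an
 * allocation that may enter "fs_reclaim".
 * All identifiers here are made up for illustration only.
 */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t reclaim_lock = PTHREAD_MUTEX_INITIALIZER; /* stands in for fs_reclaim */
static pthread_mutex_t sb_lock = PTHREAD_MUTEX_INITIALIZER;      /* stands in for sb_internal */

static void *reclaim_path(void *arg)       /* "kswapd": reclaim, then evict an inode */
{
	(void)arg;
	pthread_mutex_lock(&reclaim_lock);
	sleep(1);                          /* widen the race window */
	pthread_mutex_lock(&sb_lock);      /* blocks forever once writer_path holds sb_lock */
	pthread_mutex_unlock(&sb_lock);
	pthread_mutex_unlock(&reclaim_lock);
	return NULL;
}

static void *writer_path(void *arg)        /* "rmdir": fs write, then an allocation that reclaims */
{
	(void)arg;
	pthread_mutex_lock(&sb_lock);
	sleep(1);
	pthread_mutex_lock(&reclaim_lock); /* blocks forever once reclaim_path holds reclaim_lock */
	pthread_mutex_unlock(&reclaim_lock);
	pthread_mutex_unlock(&sb_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	pthread_create(&a, NULL, reclaim_path, NULL);
	pthread_create(&b, NULL, writer_path, NULL);
	pthread_join(a, NULL);             /* never returns: both threads are blocked on each other */
	pthread_join(b, NULL);
	return 0;
}

In the kernel, cycles of this shape are usually broken at the fs_reclaim edge, for example by doing the allocation that sits under the inner lock with GFP_NOFS or inside a memalloc_nofs_save()/memalloc_nofs_restore() section, or by deferring the key teardown out of the eviction path; whether any of these applies here is a question for the ext4/fscrypt maintainers, not something this report establishes.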

Crashes (5):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2024/08/27 07:54 | upstream | 3e9bff3bbe13 | 9aee4e0b | .config | console log | report | - | - | info | disk image, vmlinux, kernel image | ci2-upstream-fs | possible deadlock in ext4_evict_inode
2024/06/04 00:25 | upstream | f06ce441457d | 0aba2352 | .config | console log | report | - | - | info | disk image, vmlinux, kernel image | ci-upstream-kasan-gce-root | possible deadlock in ext4_evict_inode
2024/03/26 04:15 | upstream | fe46a7dd189e | 0ea90952 | .config | console log | report | - | - | info | disk image, vmlinux, kernel image | ci-upstream-kasan-gce-root | possible deadlock in ext4_evict_inode
2024/06/03 12:06 | linux-next | 861a3cb5a2a8 | 0aba2352 | .config | console log | report | - | - | info | disk image, vmlinux, kernel image | ci-upstream-linux-next-kasan-gce-root | possible deadlock in ext4_evict_inode
2024/02/11 21:13 | linux-next | 445a555e0623 | 77b23aa1 | .config | console log | report | - | - | info | disk image, vmlinux, kernel image | ci-upstream-linux-next-kasan-gce-root | possible deadlock in ext4_evict_inode