syzbot

possible deadlock in ext4_evict_inode (3)

Status: upstream: reported on 2024/02/15 21:18
Subsystems: ext4
Reported-by: syzbot+295234f4b13c00852ba4@syzkaller.appspotmail.com
First crash: 75d, last: 32d
Discussions (1)
Title                                                      | Replies (including bot) | Last reply
[syzbot] [ext4?] possible deadlock in ext4_evict_inode (3) | 0 (1)                   | 2024/02/15 21:18
Similar bugs (7)
Kernel     | Title                                          | Repro | Cause bisect | Fix bisect | Count | Last  | Reported | Patched | Status
upstream   | possible deadlock in ext4_evict_inode (2) ext4 |       |              |            | 18    | 407d  | 584d     | 0/26    | auto-obsoleted due to no activity on 2023/07/14 13:24
linux-6.1  | possible deadlock in ext4_evict_inode (3)      |       |              |            | 1     | 23d   | 23d      | 0/3     | upstream: reported on 2024/04/03 10:45
linux-5.15 | possible deadlock in ext4_evict_inode (2)      |       |              |            | 2     | 45d   | 50d      | 0/3     | upstream: reported on 2024/03/07 16:36
upstream   | possible deadlock in ext4_evict_inode ext4     | syz   | error        | error      | 38    | 2021d | 2059d    | 15/26   | fixed on 2020/05/22 17:31
linux-6.1  | possible deadlock in ext4_evict_inode          |       |              |            | 2     | 290d  | 400d     | 0/3     | auto-obsoleted due to no activity on 2023/10/19 10:40
linux-6.1  | possible deadlock in ext4_evict_inode (2)      |       |              |            | 1     | 143d  | 143d     | 0/3     | auto-obsoleted due to no activity on 2024/03/15 02:58
linux-5.15 | possible deadlock in ext4_evict_inode          |       |              |            | 2     | 392d  | 396d     | 0/3     | auto-obsoleted due to no activity on 2023/07/30 00:10

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.8.0-syzkaller-08951-gfe46a7dd189e #0 Not tainted
------------------------------------------------------
kswapd0/88 is trying to acquire lock:
ffff888029518610 (sb_internal){.+.+}-{0:0}, at: __sb_start_write include/linux/fs.h:1662 [inline]
ffff888029518610 (sb_internal){.+.+}-{0:0}, at: sb_start_intwrite include/linux/fs.h:1845 [inline]
ffff888029518610 (sb_internal){.+.+}-{0:0}, at: ext4_evict_inode+0x2e4/0xf30 fs/ext4/inode.c:212

but task is already holding lock:
ffffffff8e21f720 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6774 [inline]
ffffffff8e21f720 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xb3f/0x36e0 mm/vmscan.c:7146

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #3 (fs_reclaim){+.+.}-{0:0}:
       lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
       __fs_reclaim_acquire mm/page_alloc.c:3692 [inline]
       fs_reclaim_acquire+0x88/0x130 mm/page_alloc.c:3706
       might_alloc include/linux/sched/mm.h:303 [inline]
       slab_pre_alloc_hook mm/slub.c:3746 [inline]
       slab_alloc_node mm/slub.c:3827 [inline]
       __do_kmalloc_node mm/slub.c:3965 [inline]
       __kmalloc_node+0xbf/0x4e0 mm/slub.c:3973
       kmalloc_node include/linux/slab.h:648 [inline]
       kvmalloc_node+0x72/0x190 mm/util.c:634
       kvmalloc include/linux/slab.h:766 [inline]
       ext4_xattr_inode_cache_find fs/ext4/xattr.c:1535 [inline]
       ext4_xattr_inode_lookup_create fs/ext4/xattr.c:1577 [inline]
       ext4_xattr_set_entry+0x200e/0x3fd0 fs/ext4/xattr.c:1719
       ext4_xattr_block_set+0xb15/0x35e0 fs/ext4/xattr.c:2039
       ext4_xattr_move_to_block fs/ext4/xattr.c:2667 [inline]
       ext4_xattr_make_inode_space fs/ext4/xattr.c:2742 [inline]
       ext4_expand_extra_isize_ea+0x12d7/0x1cf0 fs/ext4/xattr.c:2834
       __ext4_expand_extra_isize+0x2fb/0x3e0 fs/ext4/inode.c:5789
       ext4_try_to_expand_extra_isize fs/ext4/inode.c:5832 [inline]
       __ext4_mark_inode_dirty+0x53e/0x870 fs/ext4/inode.c:5910
       ext4_delete_inline_entry+0x49a/0x620 fs/ext4/inline.c:1753
       ext4_delete_entry+0x13f/0x5c0 fs/ext4/namei.c:2719
       __ext4_unlink+0x565/0xb30 fs/ext4/namei.c:3273
       ext4_unlink+0x1af/0x560 fs/ext4/namei.c:3321
       vfs_unlink+0x367/0x600 fs/namei.c:4338
       do_unlinkat+0x4ae/0x830 fs/namei.c:4402
       __do_sys_unlinkat fs/namei.c:4445 [inline]
       __se_sys_unlinkat fs/namei.c:4438 [inline]
       __x64_sys_unlinkat+0xce/0xf0 fs/namei.c:4438
       do_syscall_64+0xfd/0x240
       entry_SYSCALL_64_after_hwframe+0x6d/0x75

-> #2 (&ei->xattr_sem){++++}-{3:3}:
       lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
       down_write+0x3a/0x50 kernel/locking/rwsem.c:1579
       ext4_write_lock_xattr fs/ext4/xattr.h:155 [inline]
       ext4_xattr_set_handle+0x26b/0x1780 fs/ext4/xattr.c:2371
       ext4_xattr_set+0x241/0x3d0 fs/ext4/xattr.c:2558
       __vfs_setxattr+0x46a/0x4a0 fs/xattr.c:200
       __vfs_setxattr_noperm+0x12e/0x5e0 fs/xattr.c:234
       vfs_setxattr+0x221/0x430 fs/xattr.c:321
       do_setxattr fs/xattr.c:629 [inline]
       setxattr+0x25d/0x2f0 fs/xattr.c:652
       __do_sys_fsetxattr fs/xattr.c:708 [inline]
       __se_sys_fsetxattr+0x19e/0x220 fs/xattr.c:697
       do_syscall_64+0xfd/0x240
       entry_SYSCALL_64_after_hwframe+0x6d/0x75

-> #1 (jbd2_handle){++++}-{0:0}:
       lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
       start_this_handle+0x1fc7/0x2200 fs/jbd2/transaction.c:463
       jbd2__journal_start+0x306/0x620 fs/jbd2/transaction.c:520
       __ext4_journal_start_sb+0x215/0x5b0 fs/ext4/ext4_jbd2.c:112
       ext4_sample_last_mounted fs/ext4/file.c:837 [inline]
       ext4_file_open+0x53e/0x760 fs/ext4/file.c:866
       do_dentry_open+0x909/0x15a0 fs/open.c:955
       do_open fs/namei.c:3642 [inline]
       path_openat+0x2860/0x3240 fs/namei.c:3799
       do_filp_open+0x235/0x490 fs/namei.c:3826
       do_sys_openat2+0x13e/0x1d0 fs/open.c:1406
       do_sys_open fs/open.c:1421 [inline]
       __do_sys_openat fs/open.c:1437 [inline]
       __se_sys_openat fs/open.c:1432 [inline]
       __x64_sys_openat+0x247/0x2a0 fs/open.c:1432
       do_syscall_64+0xfd/0x240
       entry_SYSCALL_64_after_hwframe+0x6d/0x75

-> #0 (sb_internal){.+.+}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain+0x18cb/0x58e0 kernel/locking/lockdep.c:3869
       __lock_acquire+0x1346/0x1fd0 kernel/locking/lockdep.c:5137
       lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
       percpu_down_read+0x44/0x1b0 include/linux/percpu-rwsem.h:51
       __sb_start_write include/linux/fs.h:1662 [inline]
       sb_start_intwrite include/linux/fs.h:1845 [inline]
       ext4_evict_inode+0x2e4/0xf30 fs/ext4/inode.c:212
       evict+0x2aa/0x630 fs/inode.c:667
       __dentry_kill+0x20d/0x630 fs/dcache.c:603
       shrink_kill+0xa9/0x2c0 fs/dcache.c:1048
       shrink_dentry_list+0x2c0/0x5b0 fs/dcache.c:1075
       prune_dcache_sb+0x10f/0x180 fs/dcache.c:1156
       super_cache_scan+0x34f/0x4b0 fs/super.c:221
       do_shrink_slab+0x6d2/0x1140 mm/shrinker.c:435
       shrink_slab_memcg mm/shrinker.c:548 [inline]
       shrink_slab+0x883/0x14d0 mm/shrinker.c:626
       shrink_one+0x423/0x7f0 mm/vmscan.c:4767
       shrink_many mm/vmscan.c:4828 [inline]
       lru_gen_shrink_node mm/vmscan.c:4929 [inline]
       shrink_node+0x37b8/0x3e70 mm/vmscan.c:5888
       kswapd_shrink_node mm/vmscan.c:6696 [inline]
       balance_pgdat mm/vmscan.c:6886 [inline]
       kswapd+0x17d1/0x36e0 mm/vmscan.c:7146
       kthread+0x2f2/0x390 kernel/kthread.c:388
       ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243

other info that might help us debug this:

Chain exists of:
  sb_internal --> &ei->xattr_sem --> fs_reclaim

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&ei->xattr_sem);
                               lock(fs_reclaim);
  rlock(sb_internal);

 *** DEADLOCK ***

2 locks held by kswapd0/88:
 #0: ffffffff8e21f720 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6774 [inline]
 #0: ffffffff8e21f720 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xb3f/0x36e0 mm/vmscan.c:7146
 #1: ffff8880295180e0 (&type->s_umount_key#33){++++}-{3:3}, at: super_trylock_shared fs/super.c:561 [inline]
 #1: ffff8880295180e0 (&type->s_umount_key#33){++++}-{3:3}, at: super_cache_scan+0x94/0x4b0 fs/super.c:196

stack backtrace:
CPU: 0 PID: 88 Comm: kswapd0 Not tainted 6.8.0-syzkaller-08951-gfe46a7dd189e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/29/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain+0x18cb/0x58e0 kernel/locking/lockdep.c:3869
 __lock_acquire+0x1346/0x1fd0 kernel/locking/lockdep.c:5137
 lock_acquire+0x1e4/0x530 kernel/locking/lockdep.c:5754
 percpu_down_read+0x44/0x1b0 include/linux/percpu-rwsem.h:51
 __sb_start_write include/linux/fs.h:1662 [inline]
 sb_start_intwrite include/linux/fs.h:1845 [inline]
 ext4_evict_inode+0x2e4/0xf30 fs/ext4/inode.c:212
 evict+0x2aa/0x630 fs/inode.c:667
 __dentry_kill+0x20d/0x630 fs/dcache.c:603
 shrink_kill+0xa9/0x2c0 fs/dcache.c:1048
 shrink_dentry_list+0x2c0/0x5b0 fs/dcache.c:1075
 prune_dcache_sb+0x10f/0x180 fs/dcache.c:1156
 super_cache_scan+0x34f/0x4b0 fs/super.c:221
 do_shrink_slab+0x6d2/0x1140 mm/shrinker.c:435
 shrink_slab_memcg mm/shrinker.c:548 [inline]
 shrink_slab+0x883/0x14d0 mm/shrinker.c:626
 shrink_one+0x423/0x7f0 mm/vmscan.c:4767
 shrink_many mm/vmscan.c:4828 [inline]
 lru_gen_shrink_node mm/vmscan.c:4929 [inline]
 shrink_node+0x37b8/0x3e70 mm/vmscan.c:5888
 kswapd_shrink_node mm/vmscan.c:6696 [inline]
 balance_pgdat mm/vmscan.c:6886 [inline]
 kswapd+0x17d1/0x36e0 mm/vmscan.c:7146
 kthread+0x2f2/0x390 kernel/kthread.c:388
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243
 </TASK>
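
How to read the lockdep report above: kswapd takes fs_reclaim and then, while evicting an ext4 inode under memory pressure, tries to take sb_internal (sb_start_intwrite() in ext4_evict_inode()). The earlier links in the chain record the opposite ordering: a journalled xattr update already holds sb_internal and &ei->xattr_sem when it performs a kvmalloc(), which may enter reclaim and thus take fs_reclaim. The two orderings close a cycle, which is what lockdep flags before an actual hang occurs. Below is a minimal userspace sketch of that inversion, reduced to two pthread mutexes and two threads; the variable names mirror the kernel lock classes, but everything here is illustrative and hypothetical, not kernel code.

/* Schematic reduction of the reported cycle to two locks and two
 * threads. The real chain runs through four lock classes:
 * fs_reclaim -> sb_internal on the kswapd side, and
 * sb_internal -> jbd2_handle -> &ei->xattr_sem -> fs_reclaim
 * on the xattr side. All names here are illustrative. */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t fs_reclaim  = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t sb_internal = PTHREAD_MUTEX_INITIALIZER;

/* kswapd-like path: reclaim first, then evict an inode. */
static void *reclaim_path(void *arg)
{
	pthread_mutex_lock(&fs_reclaim);	/* balance_pgdat() */
	sleep(1);				/* widen the race window */
	pthread_mutex_lock(&sb_internal);	/* ext4_evict_inode() */
	pthread_mutex_unlock(&sb_internal);
	pthread_mutex_unlock(&fs_reclaim);
	return NULL;
}

/* xattr-like path: transaction in flight, then an allocation
 * that can enter reclaim. */
static void *xattr_path(void *arg)
{
	pthread_mutex_lock(&sb_internal);	/* __ext4_journal_start_sb() */
	sleep(1);
	pthread_mutex_lock(&fs_reclaim);	/* kvmalloc(GFP_KERNEL) */
	pthread_mutex_unlock(&fs_reclaim);
	pthread_mutex_unlock(&sb_internal);
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	pthread_create(&a, NULL, reclaim_path, NULL);
	pthread_create(&b, NULL, xattr_path, NULL);
	pthread_join(a, NULL);	/* never returns: ABBA deadlock */
	pthread_join(b, NULL);
	return 0;
}

Built with gcc -pthread, the sketch hangs in pthread_join(): each thread holds the lock the other is waiting for. Lockdep reports the kernel equivalent as soon as both orderings have been observed, without the deadlock having to actually happen.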

Crashes (2):
Time             | Kernel     | Commit       | Syzkaller | Config  | Log         | Report | Syz repro | C repro | VM info | Assets                                | Manager                               | Title
2024/03/26 04:15 | upstream   | fe46a7dd189e | 0ea90952  | .config | console log | report |           |         | info    | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-root            | possible deadlock in ext4_evict_inode
2024/02/11 21:13 | linux-next | 445a555e0623 | 77b23aa1  | .config | console log | report |           |         | info    | [disk image] [vmlinux] [kernel image] | ci-upstream-linux-next-kasan-gce-root | possible deadlock in ext4_evict_inode