syzbot


possible deadlock in start_this_handle (2)

Status: auto-closed as invalid on 2021/07/13 16:11
Subsystems: ext4
Reported-by: syzbot+bfdded10ab7dcd7507ae@syzkaller.appspotmail.com
First crash: 1166d, last: 1129d
Discussions (3)
Title | Replies (including bot) | Last reply
possible deadlock in start_this_handle (2) | 29 (30) | 2021/03/20 10:02
Re: possible deadlock in fs_reclaim_acquire (2) | 4 (4) | 2021/02/11 12:43
Re: possible deadlock in fs_reclaim_acquire (2) | 1 (1) | 2021/02/11 12:12
Similar bugs (3)
Kernel | Title | Count | Last | Reported | Patched | Status
upstream | possible deadlock in start_this_handle (3) [ext4] | 8 | 456d | 644d | 22/26 | fixed on 2023/02/24 13:50
upstream | possible deadlock in start_this_handle (4) [fscrypt ext4] | 23 | 35d | 414d | 0/26 | upstream: reported on 2023/03/01 00:02
upstream | possible deadlock in start_this_handle [ext4] | 8 | 2011d | 2050d | 0/26 | auto-closed as invalid on 2019/04/13 16:27

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
5.12.0-rc1-syzkaller #0 Not tainted
------------------------------------------------------
kswapd0/2198 is trying to acquire lock:
ffff88801f13e8e0 (jbd2_handle){++++}-{0:0}, at: start_this_handle+0xf86/0x1380 fs/jbd2/transaction.c:444

but task is already holding lock:
ffffffff8c08bf40 (fs_reclaim){+.+.}-{0:0}, at: __fs_reclaim_acquire+0x0/0x30 mm/page_alloc.c:5198

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (fs_reclaim){+.+.}-{0:0}:
       __fs_reclaim_acquire mm/page_alloc.c:4327 [inline]
       fs_reclaim_acquire+0x117/0x150 mm/page_alloc.c:4341
       might_alloc include/linux/sched/mm.h:193 [inline]
       slab_pre_alloc_hook mm/slab.h:497 [inline]
       slab_alloc_node mm/slab.c:3221 [inline]
       kmem_cache_alloc_node_trace+0x48/0x550 mm/slab.c:3612
       __do_kmalloc_node mm/slab.c:3634 [inline]
       __kmalloc_node+0x38/0x60 mm/slab.c:3642
       kmalloc_node include/linux/slab.h:577 [inline]
       kvmalloc_node+0xb4/0xf0 mm/util.c:587
       kvmalloc include/linux/mm.h:785 [inline]
       ext4_xattr_inode_cache_find fs/ext4/xattr.c:1465 [inline]
       ext4_xattr_inode_lookup_create fs/ext4/xattr.c:1508 [inline]
       ext4_xattr_set_entry+0x1ce6/0x3750 fs/ext4/xattr.c:1649
       ext4_xattr_ibody_set+0x78/0x2b0 fs/ext4/xattr.c:2224
       ext4_xattr_set_handle+0x8f4/0x13e0 fs/ext4/xattr.c:2380
       ext4_xattr_set+0x13a/0x340 fs/ext4/xattr.c:2493
       __vfs_setxattr+0x115/0x180 fs/xattr.c:180
       __vfs_setxattr_noperm+0x125/0x4c0 fs/xattr.c:214
       __vfs_setxattr_locked+0x1cf/0x260 fs/xattr.c:274
       vfs_setxattr+0x13f/0x330 fs/xattr.c:300
       setxattr+0x218/0x2b0 fs/xattr.c:573
       path_setxattr+0x197/0x1c0 fs/xattr.c:593
       __do_sys_setxattr fs/xattr.c:609 [inline]
       __se_sys_setxattr fs/xattr.c:605 [inline]
       __x64_sys_setxattr+0xc0/0x160 fs/xattr.c:605
       do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
       entry_SYSCALL_64_after_hwframe+0x44/0xae

-> #1 (&ei->xattr_sem){++++}-{3:3}:
       down_write+0x92/0x150 kernel/locking/rwsem.c:1406
       ext4_write_lock_xattr fs/ext4/xattr.h:142 [inline]
       ext4_xattr_set_handle+0x15c/0x13e0 fs/ext4/xattr.c:2308
       ext4_initxattrs+0xb5/0x120 fs/ext4/xattr_security.c:44
       security_inode_init_security+0x1c4/0x370 security/security.c:1054
       __ext4_new_inode+0x396a/0x5570 fs/ext4/ialloc.c:1318
       ext4_create+0x2d6/0x4d0 fs/ext4/namei.c:2622
       lookup_open.isra.0+0xfef/0x13d0 fs/namei.c:3219
       open_last_lookups fs/namei.c:3289 [inline]
       path_openat+0x9b4/0x27e0 fs/namei.c:3495
       do_filp_open+0x17e/0x3c0 fs/namei.c:3525
       do_sys_openat2+0x16d/0x420 fs/open.c:1187
       do_sys_open fs/open.c:1203 [inline]
       __do_sys_open fs/open.c:1211 [inline]
       __se_sys_open fs/open.c:1207 [inline]
       __x64_sys_open+0x119/0x1c0 fs/open.c:1207
       do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
       entry_SYSCALL_64_after_hwframe+0x44/0xae

-> #0 (jbd2_handle){++++}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:2936 [inline]
       check_prevs_add kernel/locking/lockdep.c:3059 [inline]
       validate_chain kernel/locking/lockdep.c:3674 [inline]
       __lock_acquire+0x2b14/0x54c0 kernel/locking/lockdep.c:4900
       lock_acquire kernel/locking/lockdep.c:5510 [inline]
       lock_acquire+0x1ab/0x730 kernel/locking/lockdep.c:5475
       start_this_handle+0xfb9/0x1380 fs/jbd2/transaction.c:446
       jbd2__journal_start+0x399/0x930 fs/jbd2/transaction.c:503
       __ext4_journal_start_sb+0x227/0x4a0 fs/ext4/ext4_jbd2.c:105
       __ext4_journal_start fs/ext4/ext4_jbd2.h:320 [inline]
       ext4_dirty_inode+0x9d/0x110 fs/ext4/inode.c:5944
       __mark_inode_dirty+0x6e3/0x10f0 fs/fs-writeback.c:2274
       mark_inode_dirty_sync include/linux/fs.h:2272 [inline]
       iput.part.0+0x57/0x810 fs/inode.c:1677
       iput+0x58/0x70 fs/inode.c:1670
       dentry_unlink_inode+0x2b1/0x3d0 fs/dcache.c:374
       __dentry_kill+0x3c0/0x640 fs/dcache.c:580
       shrink_dentry_list+0x144/0x480 fs/dcache.c:1174
       prune_dcache_sb+0xe7/0x140 fs/dcache.c:1255
       super_cache_scan+0x336/0x590 fs/super.c:105
       do_shrink_slab+0x3e4/0x9f0 mm/vmscan.c:512
       shrink_slab+0x16f/0x5d0 mm/vmscan.c:673
       shrink_node_memcgs mm/vmscan.c:2655 [inline]
       shrink_node+0x8d1/0x1de0 mm/vmscan.c:2770
       kswapd_shrink_node mm/vmscan.c:3513 [inline]
       balance_pgdat+0x745/0x1270 mm/vmscan.c:3671
       kswapd+0x5b6/0xdb0 mm/vmscan.c:3928
       kthread+0x3b1/0x4a0 kernel/kthread.c:292
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294

other info that might help us debug this:

Chain exists of:
  jbd2_handle --> &ei->xattr_sem --> fs_reclaim

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&ei->xattr_sem);
                               lock(fs_reclaim);
  lock(jbd2_handle);

 *** DEADLOCK ***

3 locks held by kswapd0/2198:
 #0: ffffffff8c08bf40 (fs_reclaim){+.+.}-{0:0}, at: __fs_reclaim_acquire+0x0/0x30 mm/page_alloc.c:5198
 #1: ffffffff8c0531b0 (shrinker_rwsem){++++}-{3:3}, at: shrink_slab+0xc7/0x5d0 mm/vmscan.c:663
 #2: ffff88801451a0e0 (&type->s_umount_key#51){++++}-{3:3}, at: trylock_super fs/super.c:418 [inline]
 #2: ffff88801451a0e0 (&type->s_umount_key#51){++++}-{3:3}, at: super_cache_scan+0x6c/0x590 fs/super.c:80

stack backtrace:
CPU: 1 PID: 2198 Comm: kswapd0 Not tainted 5.12.0-rc1-syzkaller #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.14.0-2 04/01/2014
Call Trace:
 __dump_stack lib/dump_stack.c:79 [inline]
 dump_stack+0xfa/0x151 lib/dump_stack.c:120
 check_noncircular+0x25f/0x2e0 kernel/locking/lockdep.c:2127
 check_prev_add kernel/locking/lockdep.c:2936 [inline]
 check_prevs_add kernel/locking/lockdep.c:3059 [inline]
 validate_chain kernel/locking/lockdep.c:3674 [inline]
 __lock_acquire+0x2b14/0x54c0 kernel/locking/lockdep.c:4900
 lock_acquire kernel/locking/lockdep.c:5510 [inline]
 lock_acquire+0x1ab/0x730 kernel/locking/lockdep.c:5475
 start_this_handle+0xfb9/0x1380 fs/jbd2/transaction.c:446
 jbd2__journal_start+0x399/0x930 fs/jbd2/transaction.c:503
 __ext4_journal_start_sb+0x227/0x4a0 fs/ext4/ext4_jbd2.c:105
 __ext4_journal_start fs/ext4/ext4_jbd2.h:320 [inline]
 ext4_dirty_inode+0x9d/0x110 fs/ext4/inode.c:5944
 __mark_inode_dirty+0x6e3/0x10f0 fs/fs-writeback.c:2274
 mark_inode_dirty_sync include/linux/fs.h:2272 [inline]
 iput.part.0+0x57/0x810 fs/inode.c:1677
 iput+0x58/0x70 fs/inode.c:1670
 dentry_unlink_inode+0x2b1/0x3d0 fs/dcache.c:374
 __dentry_kill+0x3c0/0x640 fs/dcache.c:580
 shrink_dentry_list+0x144/0x480 fs/dcache.c:1174
 prune_dcache_sb+0xe7/0x140 fs/dcache.c:1255
 super_cache_scan+0x336/0x590 fs/super.c:105
 do_shrink_slab+0x3e4/0x9f0 mm/vmscan.c:512
 shrink_slab+0x16f/0x5d0 mm/vmscan.c:673
 shrink_node_memcgs mm/vmscan.c:2655 [inline]
 shrink_node+0x8d1/0x1de0 mm/vmscan.c:2770
 kswapd_shrink_node mm/vmscan.c:3513 [inline]
 balance_pgdat+0x745/0x1270 mm/vmscan.c:3671
 kswapd+0x5b6/0xdb0 mm/vmscan.c:3928
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294

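The trace above is the classic lockdep circular-dependency pattern: earlier, benign acquisitions recorded the edges jbd2_handle -> &ei->xattr_sem (setting an xattr under a journal handle) and &ei->xattr_sem -> fs_reclaim (a GFP_KERNEL allocation while holding xattr_sem), so when kswapd tries to start a jbd2 handle while already inside fs_reclaim, the new edge closes a cycle. A minimal sketch of that cycle check, as plain DFS reachability in Python with hypothetical helper names (this models the idea, not the kernel's actual lockdep implementation):

```python
# Hypothetical model of lockdep's dependency-graph check.
# Lock names mirror the report; the algorithm is ordinary DFS
# reachability over recorded "inner taken while outer held" edges.

def reaches(deps, src, dst, seen=None):
    """True if dst is reachable from src via recorded dependency edges."""
    if seen is None:
        seen = set()
    if src == dst:
        return True
    seen.add(src)
    return any(reaches(deps, nxt, dst, seen)
               for nxt in deps.get(src, ()) if nxt not in seen)

# Edges recorded by the earlier acquisitions shown in the report
# (outer lock -> inner lock):
deps = {
    "jbd2_handle":    ["&ei->xattr_sem"],  # ext4_xattr_set_handle under a handle
    "&ei->xattr_sem": ["fs_reclaim"],      # GFP_KERNEL allocation under xattr_sem
}

def would_deadlock(deps, held, wanted):
    # Acquiring `wanted` while holding `held` adds the edge held -> wanted.
    # If `held` is already reachable from `wanted`, that edge closes a cycle.
    return reaches(deps, wanted, held)

# kswapd holds fs_reclaim and tries to start a jbd2 handle:
print(would_deadlock(deps, "fs_reclaim", "jbd2_handle"))  # True: circular dependency
```

Note that the reverse order is fine: taking fs_reclaim while holding jbd2_handle merely follows the already-recorded chain, and no path leads from fs_reclaim back to jbd2_handle, so the check stays quiet.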
Crashes (8):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | VM info | Manager | Title
2021/03/10 18:14 | upstream | 280d542f6ffa | 764067f3 | .config | console log | report | info | ci-qemu-upstream | possible deadlock in start_this_handle
2021/03/07 13:07 | upstream | 280d542f6ffa | c599ed12 | .config | console log | report | info | ci-qemu-upstream | possible deadlock in start_this_handle
2021/02/20 23:50 | upstream | f40ddce88593 | 3e5ed8b4 | .config | console log | report | info | ci-qemu-upstream | possible deadlock in start_this_handle
2021/02/19 18:57 | upstream | f40ddce88593 | f689d40a | .config | console log | report | info | ci-qemu-upstream | possible deadlock in start_this_handle
2021/03/15 16:11 | upstream | 280d542f6ffa | fdb2bb2c | .config | console log | report | info | ci-qemu-upstream-386 | possible deadlock in start_this_handle
2021/02/21 18:58 | upstream | 55f62bc87347 | a659b3f1 | .config | console log | report | info | ci-qemu-upstream-386 | possible deadlock in start_this_handle
2021/02/19 20:00 | upstream | f40ddce88593 | f689d40a | .config | console log | report | info | ci-qemu-upstream-386 | possible deadlock in start_this_handle
2021/02/06 13:29 | upstream | 1e0d27fce010 | 23a562df | .config | console log | report | info | ci-qemu-upstream-386 | possible deadlock in start_this_handle