syzbot


possible deadlock in vfs_rename

Status: upstream: reported on 2024/08/12 16:18
Reported-by: syzbot+c9126040cb95ff6a324e@syzkaller.appspotmail.com
First crash: 130d, last: 95d
Similar bugs (3)
Kernel     | Title                                        | Count | Last | Reported | Patched | Status
-----------|----------------------------------------------|-------|------|----------|---------|---------------------------------------
linux-5.15 | possible deadlock in vfs_rename              | 7     | 56d  | 227d     | 0/3     | upstream: reported on 2024/05/08 00:59
upstream   | possible deadlock in vfs_rename [kernel]     | 1     | 682d | 682d     | 0/28    | closed as invalid on 2023/02/08 16:28
upstream   | possible deadlock in vfs_rename (2) [ntfs3]  | 2     | 25d  | 21d      | 0/28    | upstream: reported on 2024/11/30 00:07

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.1.106-syzkaller #0 Not tainted
------------------------------------------------------
syz.4.103/4131 is trying to acquire lock:
ffff8880595e1020 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: inode_lock include/linux/fs.h:758 [inline]
ffff8880595e1020 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: vfs_rename+0x814/0x10f0 fs/namei.c:4841

but task is already holding lock:
ffff8880595e1d60 (&type->i_mutex_dir_key#8/2){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:793 [inline]
ffff8880595e1d60 (&type->i_mutex_dir_key#8/2){+.+.}-{3:3}, at: vfs_rename+0x7a2/0x10f0 fs/namei.c:4839

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&type->i_mutex_dir_key#8/2){+.+.}-{3:3}:
       lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
       down_write_nested+0x39/0x60 kernel/locking/rwsem.c:1689
       inode_lock_nested include/linux/fs.h:793 [inline]
       xattr_rmdir fs/reiserfs/xattr.c:106 [inline]
       delete_one_xattr+0x102/0x2f0 fs/reiserfs/xattr.c:338
       reiserfs_for_each_xattr+0x9b2/0xb40 fs/reiserfs/xattr.c:311
       reiserfs_delete_xattrs+0x1b/0x80 fs/reiserfs/xattr.c:364
       reiserfs_evict_inode+0x20c/0x460 fs/reiserfs/inode.c:53
       evict+0x2a4/0x620 fs/inode.c:666
       d_delete_notify include/linux/fsnotify.h:267 [inline]
       vfs_rmdir+0x381/0x4b0 fs/namei.c:4206
       do_rmdir+0x3a2/0x590 fs/namei.c:4254
       __do_sys_unlinkat fs/namei.c:4434 [inline]
       __se_sys_unlinkat fs/namei.c:4428 [inline]
       __x64_sys_unlinkat+0xdc/0xf0 fs/namei.c:4428
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #1 (&type->i_mutex_dir_key#8/3){+.+.}-{3:3}:
       lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
       down_write_nested+0x39/0x60 kernel/locking/rwsem.c:1689
       inode_lock_nested include/linux/fs.h:793 [inline]
       open_xa_root fs/reiserfs/xattr.c:127 [inline]
       open_xa_dir+0x132/0x610 fs/reiserfs/xattr.c:152
       xattr_lookup+0x24/0x280 fs/reiserfs/xattr.c:395
       reiserfs_xattr_set_handle+0xf8/0xdc0 fs/reiserfs/xattr.c:533
       reiserfs_xattr_set+0x44e/0x570 fs/reiserfs/xattr.c:633
       __vfs_setxattr+0x3e7/0x420 fs/xattr.c:182
       __vfs_setxattr_noperm+0x12a/0x5e0 fs/xattr.c:216
       vfs_setxattr+0x21d/0x420 fs/xattr.c:309
       ovl_do_setxattr fs/overlayfs/overlayfs.h:252 [inline]
       ovl_setxattr fs/overlayfs/overlayfs.h:264 [inline]
       ovl_make_workdir fs/overlayfs/super.c:1435 [inline]
       ovl_get_workdir+0xdfe/0x17b0 fs/overlayfs/super.c:1539
       ovl_fill_super+0x1b85/0x2a20 fs/overlayfs/super.c:2095
       mount_nodev+0x52/0xe0 fs/super.c:1489
       legacy_get_tree+0xeb/0x180 fs/fs_context.c:632
       vfs_get_tree+0x88/0x270 fs/super.c:1573
       do_new_mount+0x2ba/0xb40 fs/namespace.c:3051
       do_mount fs/namespace.c:3394 [inline]
       __do_sys_mount fs/namespace.c:3602 [inline]
       __se_sys_mount+0x2d5/0x3c0 fs/namespace.c:3579
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #0 (&type->i_mutex_dir_key#8){++++}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3090 [inline]
       check_prevs_add kernel/locking/lockdep.c:3209 [inline]
       validate_chain+0x1661/0x5950 kernel/locking/lockdep.c:3825
       __lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
       lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
       down_write+0x36/0x60 kernel/locking/rwsem.c:1573
       inode_lock include/linux/fs.h:758 [inline]
       vfs_rename+0x814/0x10f0 fs/namei.c:4841
       do_renameat2+0xde0/0x1440 fs/namei.c:5029
       __do_sys_renameat2 fs/namei.c:5062 [inline]
       __se_sys_renameat2 fs/namei.c:5059 [inline]
       __x64_sys_renameat2+0xce/0xe0 fs/namei.c:5059
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

other info that might help us debug this:

Chain exists of:
  &type->i_mutex_dir_key#8 --> &type->i_mutex_dir_key#8/3 --> &type->i_mutex_dir_key#8/2

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&type->i_mutex_dir_key#8/2);
                               lock(&type->i_mutex_dir_key#8/3);
                               lock(&type->i_mutex_dir_key#8/2);
  lock(&type->i_mutex_dir_key#8);

 *** DEADLOCK ***

5 locks held by syz.4.103/4131:
 #0: ffff8880211e8460 (sb_writers#13){.+.+}-{0:0}, at: mnt_want_write+0x3b/0x80 fs/namespace.c:393
 #1: ffff8880211e8748 (&type->s_vfs_rename_key#2){+.+.}-{3:3}, at: lock_rename fs/namei.c:3040 [inline]
 #1: ffff8880211e8748 (&type->s_vfs_rename_key#2){+.+.}-{3:3}, at: do_renameat2+0x5a0/0x1440 fs/namei.c:4968
 #2: ffff8880727eb7e0 (&type->i_mutex_dir_key#8/1){+.+.}-{3:3}, at: lock_rename fs/namei.c:3041 [inline]
 #2: ffff8880727eb7e0 (&type->i_mutex_dir_key#8/1){+.+.}-{3:3}, at: do_renameat2+0x61e/0x1440 fs/namei.c:4968
 #3: ffff8880595e16c0 (&type->i_mutex_dir_key#8/5){+.+.}-{3:3}, at: do_renameat2+0x65a/0x1440 fs/namei.c:4968
 #4: ffff8880595e1d60 (&type->i_mutex_dir_key#8/2){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:793 [inline]
 #4: ffff8880595e1d60 (&type->i_mutex_dir_key#8/2){+.+.}-{3:3}, at: vfs_rename+0x7a2/0x10f0 fs/namei.c:4839

stack backtrace:
CPU: 1 PID: 4131 Comm: syz.4.103 Not tainted 6.1.106-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/06/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 check_noncircular+0x2fa/0x3b0 kernel/locking/lockdep.c:2170
 check_prev_add kernel/locking/lockdep.c:3090 [inline]
 check_prevs_add kernel/locking/lockdep.c:3209 [inline]
 validate_chain+0x1661/0x5950 kernel/locking/lockdep.c:3825
 __lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
 lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
 down_write+0x36/0x60 kernel/locking/rwsem.c:1573
 inode_lock include/linux/fs.h:758 [inline]
 vfs_rename+0x814/0x10f0 fs/namei.c:4841
 do_renameat2+0xde0/0x1440 fs/namei.c:5029
 __do_sys_renameat2 fs/namei.c:5062 [inline]
 __se_sys_renameat2 fs/namei.c:5059 [inline]
 __x64_sys_renameat2+0xce/0xe0 fs/namei.c:5059
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7efdc7d79e79
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007efdc8ab0038 EFLAGS: 00000246 ORIG_RAX: 000000000000013c
RAX: ffffffffffffffda RBX: 00007efdc7f16058 RCX: 00007efdc7d79e79
RDX: 0000000000000005 RSI: 00000000200001c0 RDI: 0000000000000005
RBP: 00007efdc7de793e R08: 0000000000000000 R09: 0000000000000000
R10: 0000000020000140 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007efdc7f16058 R15: 00007ffe0e82fab8
 </TASK>
REISERFS warning (device loop4): vs-13060 reiserfs_update_sd_size: stat data of object [1 2 0x0 SD] (nlink == 1) not found (pos 2)

Crashes (8):
Time             | Kernel      | Commit       | Syzkaller | Config  | Log         | Report | VM info | Assets                            | Manager                   | Title
-----------------|-------------|--------------|-----------|---------|-------------|--------|---------|-----------------------------------|---------------------------|--------------------------------
2024/08/23 02:28 | linux-6.1.y | ee5e09825b81 | ce8a9099  | .config | console log | report | info    | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan       | possible deadlock in vfs_rename
2024/08/19 12:47 | linux-6.1.y | ee5e09825b81 | 9f0ab3fb  | .config | console log | report | info    | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan       | possible deadlock in vfs_rename
2024/08/17 03:17 | linux-6.1.y | 117ac406ba90 | dbc93b08  | .config | console log | report | info    | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan       | possible deadlock in vfs_rename
2024/08/15 06:10 | linux-6.1.y | 117ac406ba90 | e4bacdaf  | .config | console log | report | info    | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan       | possible deadlock in vfs_rename
2024/08/12 16:17 | linux-6.1.y | 36790ef5e00b | 842184b3  | .config | console log | report | info    | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan       | possible deadlock in vfs_rename
2024/09/16 19:28 | linux-6.1.y | 5f55cad62cc9 | c673ca06  | .config | console log | report | info    | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan-arm64 | possible deadlock in vfs_rename
2024/09/11 08:53 | linux-6.1.y | 5ca5b389fddf | 8ab55d0e  | .config | console log | report | info    | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan-arm64 | possible deadlock in vfs_rename
2024/09/11 08:52 | linux-6.1.y | 5ca5b389fddf | 8ab55d0e  | .config | console log | report | info    | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan-arm64 | possible deadlock in vfs_rename