syzbot


possible deadlock in vfs_rename

Status: upstream: reported on 2025/11/16 07:36
Reported-by: syzbot+23fd3f1f406ce63ed76e@syzkaller.appspotmail.com
First crash: 2d14h, last: 2d14h
Similar bugs (6)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-5.15 | possible deadlock in vfs_rename | 4 | - | - | - | 13 | 246d | 559d | 0/3 | auto-obsoleted due to no activity on 2025/06/25 12:29
upstream | possible deadlock in vfs_rename kernel | 4 | - | - | - | 1 | 1014d | 1014d | 0/29 | closed as invalid on 2023/02/08 16:28
linux-6.1 | possible deadlock in vfs_rename (2) origin:lts-only | 4 | C | inconclusive | - | 11 | 2d07h | 322d | 0/3 | upstream: reported C repro on 2024/12/31 00:31
linux-5.15 | possible deadlock in vfs_rename (2) | 4 | - | - | - | 1 | 105d | 105d | 0/3 | auto-obsoleted due to no activity on 2025/11/13 14:59
linux-6.1 | possible deadlock in vfs_rename | 4 | - | - | - | 8 | 428d | 463d | 0/3 | auto-obsoleted due to no activity on 2024/12/25 19:29
upstream | possible deadlock in vfs_rename (2) ntfs3 | 4 | - | - | - | 3 | 305d | 353d | 0/29 | auto-obsoleted due to no activity on 2025/04/27 15:12

Sample crash report:
REISERFS (device loop3): Using r5 hash to sort names
REISERFS (device loop3): Created .reiserfs_priv - reserved for xattr storage.
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.3.417/6967 is trying to acquire lock:
ffff888054a22410 (&type->i_mutex_dir_key#13){++++}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
ffff888054a22410 (&type->i_mutex_dir_key#13){++++}-{3:3}, at: vfs_rename+0x6d3/0xec0 fs/namei.c:4845

but task is already holding lock:
ffff888054a21d70 (&type->i_mutex_dir_key#13/2){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:839 [inline]
ffff888054a21d70 (&type->i_mutex_dir_key#13/2){+.+.}-{3:3}, at: vfs_rename+0x652/0xec0 fs/namei.c:4843

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&type->i_mutex_dir_key#13/2){+.+.}-{3:3}:
       down_write_nested+0x9e/0x1f0 kernel/locking/rwsem.c:1689
       inode_lock_nested include/linux/fs.h:839 [inline]
       xattr_rmdir fs/reiserfs/xattr.c:107 [inline]
       delete_one_xattr+0xfa/0x300 fs/reiserfs/xattr.c:339
       reiserfs_for_each_xattr+0x800/0x960 fs/reiserfs/xattr.c:312
       reiserfs_delete_xattrs+0x20/0x90 fs/reiserfs/xattr.c:365
       reiserfs_evict_inode+0x232/0x490 fs/reiserfs/inode.c:53
       evict+0x486/0x870 fs/inode.c:705
       d_delete_notify include/linux/fsnotify.h:269 [inline]
       vfs_rmdir+0x39b/0x4d0 fs/namei.c:4217
       do_rmdir+0x29e/0x5c0 fs/namei.c:4263
       __do_sys_unlinkat fs/namei.c:4441 [inline]
       __se_sys_unlinkat fs/namei.c:4435 [inline]
       __x64_sys_unlinkat+0xc4/0xe0 fs/namei.c:4435
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #1 (&type->i_mutex_dir_key#13/3){+.+.}-{3:3}:
       down_write_nested+0x9e/0x1f0 kernel/locking/rwsem.c:1689
       inode_lock_nested include/linux/fs.h:839 [inline]
       open_xa_root fs/reiserfs/xattr.c:128 [inline]
       open_xa_dir+0x122/0x6f0 fs/reiserfs/xattr.c:153
       xattr_lookup+0x22/0x2a0 fs/reiserfs/xattr.c:396
       reiserfs_xattr_set_handle+0xf9/0xd40 fs/reiserfs/xattr.c:535
       reiserfs_xattr_set+0x439/0x550 fs/reiserfs/xattr.c:635
       __vfs_setxattr+0x431/0x470 fs/xattr.c:201
       __vfs_setxattr_noperm+0x12d/0x5e0 fs/xattr.c:235
       vfs_setxattr+0x16c/0x2f0 fs/xattr.c:322
       do_setxattr fs/xattr.c:630 [inline]
       path_setxattr+0x362/0x550 fs/xattr.c:659
       __do_sys_setxattr fs/xattr.c:677 [inline]
       __se_sys_setxattr fs/xattr.c:673 [inline]
       __x64_sys_setxattr+0xbb/0xd0 fs/xattr.c:673
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #0 (&type->i_mutex_dir_key#13){++++}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain kernel/locking/lockdep.c:3869 [inline]
       __lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
       lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
       down_write+0x97/0x1f0 kernel/locking/rwsem.c:1573
       inode_lock include/linux/fs.h:804 [inline]
       vfs_rename+0x6d3/0xec0 fs/namei.c:4845
       do_renameat2+0x8a1/0xc70 fs/namei.c:5033
       __do_sys_renameat2 fs/namei.c:5066 [inline]
       __se_sys_renameat2 fs/namei.c:5063 [inline]
       __x64_sys_renameat2+0xd2/0xe0 fs/namei.c:5063
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

other info that might help us debug this:

Chain exists of:
  &type->i_mutex_dir_key#13 --> &type->i_mutex_dir_key#13/3 --> &type->i_mutex_dir_key#13/2

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&type->i_mutex_dir_key#13/2);
                               lock(&type->i_mutex_dir_key#13/3);
                               lock(&type->i_mutex_dir_key#13/2);
  lock(&type->i_mutex_dir_key#13);

 *** DEADLOCK ***

5 locks held by syz.3.417/6967:
 #0: ffff88807a9e4418 (sb_writers#25){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:412
 #1: ffff88807a9e4700 (&type->s_vfs_rename_key#3){+.+.}-{3:3}, at: lock_rename fs/namei.c:3050 [inline]
 #1: ffff88807a9e4700 (&type->s_vfs_rename_key#3){+.+.}-{3:3}, at: do_renameat2+0x35f/0xc70 fs/namei.c:4972
 #2: ffff8880549b6cf0 (&type->i_mutex_dir_key#13/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:839 [inline]
 #2: ffff8880549b6cf0 (&type->i_mutex_dir_key#13/1){+.+.}-{3:3}, at: lock_two_directories fs/namei.c:-1 [inline]
 #2: ffff8880549b6cf0 (&type->i_mutex_dir_key#13/1){+.+.}-{3:3}, at: lock_rename fs/namei.c:3051 [inline]
 #2: ffff8880549b6cf0 (&type->i_mutex_dir_key#13/1){+.+.}-{3:3}, at: do_renameat2+0x3f1/0xc70 fs/namei.c:4972
 #3: ffff888054a216d0 (&type->i_mutex_dir_key#13/5){+.+.}-{3:3}, at: lock_rename include/linux/fs.h:-1 [inline]
 #3: ffff888054a216d0 (&type->i_mutex_dir_key#13/5){+.+.}-{3:3}, at: do_renameat2+0x427/0xc70 fs/namei.c:4972
 #4: ffff888054a21d70 (&type->i_mutex_dir_key#13/2){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:839 [inline]
 #4: ffff888054a21d70 (&type->i_mutex_dir_key#13/2){+.+.}-{3:3}, at: vfs_rename+0x652/0xec0 fs/namei.c:4843

stack backtrace:
CPU: 1 PID: 6967 Comm: syz.3.417 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
 check_noncircular+0x2bd/0x3c0 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain kernel/locking/lockdep.c:3869 [inline]
 __lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
 lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
 down_write+0x97/0x1f0 kernel/locking/rwsem.c:1573
 inode_lock include/linux/fs.h:804 [inline]
 vfs_rename+0x6d3/0xec0 fs/namei.c:4845
 do_renameat2+0x8a1/0xc70 fs/namei.c:5033
 __do_sys_renameat2 fs/namei.c:5066 [inline]
 __se_sys_renameat2 fs/namei.c:5063 [inline]
 __x64_sys_renameat2+0xd2/0xe0 fs/namei.c:5063
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f054118f6c9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f0542037038 EFLAGS: 00000246 ORIG_RAX: 000000000000013c
RAX: ffffffffffffffda RBX: 00007f05413e5fa0 RCX: 00007f054118f6c9
RDX: ffffffffffffff9c RSI: 0000200000001100 RDI: ffffffffffffff9c
RBP: 00007f0541211f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000200000000600 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f05413e6038 R14: 00007f05413e5fa0 R15: 00007fff06d9fc08
 </TASK>
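
Editor's note: the lockdep chain above describes a classic lock-order inversion. The rename path locks one directory inode with the nested subclass i_mutex_dir_key#13/2 (inode_lock_nested at vfs_rename+0x652) and then wants a plain inode_lock on the other (#13 at vfs_rename+0x6d3), while the reiserfs xattr paths (open_xa_root during setxattr, xattr_rmdir during inode eviction) have already established the opposite ordering #13 --> #13/3 --> #13/2. The user-space sketch below illustrates only that generic ABBA pattern with pthread mutexes; it is not kernel code and not a reproducer, every identifier in it is invented for the illustration, and the intermediate #13/3 class is collapsed into a direct edge for brevity.

/*
 * Minimal user-space illustration of the lock-order inversion reported by
 * lockdep above.  dir_lock stands in for the i_mutex_dir_key#13 class and
 * xattr_dir_lock for i_mutex_dir_key#13/2; both names are invented here.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t dir_lock       = PTHREAD_MUTEX_INITIALIZER; /* ~ i_mutex_dir_key#13   */
static pthread_mutex_t xattr_dir_lock = PTHREAD_MUTEX_INITIALIZER; /* ~ i_mutex_dir_key#13/2 */

/* Models the rename path in the report: holds the nested lock, then wants the plain one. */
static void *rename_like_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&xattr_dir_lock);
    usleep(100 * 1000);                     /* widen the race window */
    printf("rename path: holding #13/2, acquiring #13...\n");
    pthread_mutex_lock(&dir_lock);          /* blocks while the other thread holds dir_lock */
    pthread_mutex_unlock(&dir_lock);
    pthread_mutex_unlock(&xattr_dir_lock);
    return NULL;
}

/* Models the xattr chain (#13 --> #13/3 --> #13/2), collapsed to one edge: plain lock first, nested lock second. */
static void *xattr_like_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&dir_lock);
    usleep(100 * 1000);
    printf("xattr path: holding #13, acquiring #13/2...\n");
    if (pthread_mutex_trylock(&xattr_dir_lock) != 0) {
        /* Both orderings are now in flight: the cycle lockdep warns about. */
        printf("xattr path: #13/2 already held by the rename side -> would deadlock here\n");
    } else {
        pthread_mutex_unlock(&xattr_dir_lock);
    }
    pthread_mutex_unlock(&dir_lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, rename_like_path, NULL);
    pthread_create(&b, NULL, xattr_like_path, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}

Built with cc -pthread, the sketch reports the inversion and exits (the second thread uses pthread_mutex_trylock) rather than actually hanging, which is all that is needed to see the cycle lockdep flags.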

Crashes (1):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/11/16 07:35 | linux-6.6.y | 0a805b6ea8cd | f7988ea4 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | possible deadlock in vfs_rename