syzbot


possible deadlock in vfs_rename

Status: closed as invalid on 2023/02/08 16:28
Subsystems: kernel
First crash: 598d, last: 598d
Similar bugs (2)
Kernel      Title                            Repro  Cause bisect  Fix bisect  Count  Last  Reported  Patched  Status
linux-5.15  possible deadlock in vfs_rename  -      -             -           3      50d   143d      0/3      upstream: reported on 2024/05/08 00:59
linux-6.1   possible deadlock in vfs_rename  -      -             -           8      11d   46d       0/3      upstream: reported on 2024/08/12 16:18

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.2.0-rc7-syzkaller-00013-g513c1a3d3f19 #0 Not tainted
------------------------------------------------------
syz-executor.0/27531 is trying to acquire lock:
ffff888031e54520 (&type->i_mutex_dir_key#8){++++}-{3:3}, at: vfs_rename+0x797/0x1190

but task is already holding lock:
ffff888031e537e0 (&type->i_mutex_dir_key#8/2){+.+.}-{3:3}, at: lock_rename+0x172/0x1a0

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&type->i_mutex_dir_key#8/2){+.+.}-{3:3}:
       lock_acquire+0x20b/0x600
       down_write_nested+0x3d/0x60
       delete_one_xattr+0x106/0x2f0
       reiserfs_for_each_xattr+0x9b0/0xb50
       reiserfs_delete_xattrs+0x1f/0x90
       reiserfs_evict_inode+0x207/0x470
       evict+0x2a4/0x620
       __dentry_kill+0x436/0x650
       dentry_kill+0xbb/0x290
       dput+0x1d8/0x3f0
       ovl_create_or_link+0x10aa/0x1480
       ovl_create_object+0x246/0x370
       vfs_mkdir+0x3ba/0x590
       do_mkdirat+0x237/0x4f0
       __x64_sys_mkdir+0x6e/0x80
       do_syscall_64+0x41/0xc0
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #1 (&type->i_mutex_dir_key#8/3){+.+.}-{3:3}:
       lock_acquire+0x20b/0x600
       down_write_nested+0x3d/0x60
       open_xa_dir+0x122/0x650
       xattr_lookup+0x24/0x280
       reiserfs_xattr_set_handle+0xfd/0xdc0
       reiserfs_xattr_set+0x428/0x550
       __vfs_setxattr+0x460/0x4a0
       __vfs_setxattr_noperm+0x12e/0x5e0
       vfs_setxattr+0x221/0x420
       ovl_get_workdir+0xcf6/0x16c0
       ovl_fill_super+0x1b8a/0x29c0
       mount_nodev+0x56/0xe0
       legacy_get_tree+0xef/0x190
       vfs_get_tree+0x8c/0x270
       do_new_mount+0x28f/0xae0
       __se_sys_mount+0x2c9/0x3b0
       do_syscall_64+0x41/0xc0
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&type->i_mutex_dir_key#8){++++}-{3:3}:
       validate_chain+0x166b/0x5860
       __lock_acquire+0x125b/0x1f80
       lock_acquire+0x20b/0x600
       down_write+0x3a/0x60
       vfs_rename+0x797/0x1190
       do_renameat2+0xa70/0x1250
       __x64_sys_rename+0x86/0x90
       do_syscall_64+0x41/0xc0
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

other info that might help us debug this:

Chain exists of:
  &type->i_mutex_dir_key#8 --> &type->i_mutex_dir_key#8/3 --> &type->i_mutex_dir_key#8/2

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&type->i_mutex_dir_key#8/2);
                               lock(&type->i_mutex_dir_key#8/3);
                               lock(&type->i_mutex_dir_key#8/2);
  lock(&type->i_mutex_dir_key#8);

 *** DEADLOCK ***

4 locks held by syz-executor.0/27531:
 #0: ffff888076010460 (sb_writers#14){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90
 #1: ffff888076010748 (&type->s_vfs_rename_key#2){+.+.}-{3:3}, at: lock_rename+0x58/0x1a0
 #2: ffff888031e57a20 (&type->i_mutex_dir_key#8/1){+.+.}-{3:3}, at: lock_rename+0xa4/0x1a0
 #3: ffff888031e537e0 (&type->i_mutex_dir_key#8/2){+.+.}-{3:3}, at: lock_rename+0x172/0x1a0

stack backtrace:
CPU: 1 PID: 27531 Comm: syz-executor.0 Not tainted 6.2.0-rc7-syzkaller-00013-g513c1a3d3f19 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/12/2023
Call Trace:
 <TASK>
 dump_stack_lvl+0x1b5/0x2a0
 check_noncircular+0x2d1/0x390
 validate_chain+0x166b/0x5860
 __lock_acquire+0x125b/0x1f80
 lock_acquire+0x20b/0x600
 down_write+0x3a/0x60
 vfs_rename+0x797/0x1190
 do_renameat2+0xa70/0x1250
 __x64_sys_rename+0x86/0x90
 do_syscall_64+0x41/0xc0
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f79b8a8c0f9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f79b9895168 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
RAX: ffffffffffffffda RBX: 00007f79b8bac120 RCX: 00007f79b8a8c0f9
RDX: 0000000000000000 RSI: 00000000200001c0 RDI: 0000000020000140
RBP: 00007f79b8ae7ae9 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffc65f77eaf R14: 00007f79b9895300 R15: 0000000000022000
 </TASK>
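
No reproducer is listed for this crash (the Syz repro and C repro columns below are empty), so the following is only a hypothetical sketch of the syscall sequence visible in the three stack traces above: an overlayfs mount with a reiserfs upper layer (trace #1), a mkdir on the overlay (trace #2), and a cross-directory rename (trace #0). Every path and the lower/upper/work layout are assumptions.

/*
 * Hypothetical sketch only -- not the syzbot reproducer (none exists for
 * this bug). It merely strings together the syscalls seen in the traces.
 */
#include <stdio.h>
#include <sys/mount.h>
#include <sys/stat.h>
#include <sys/types.h>

int main(void)
{
    /* Assumes a reiserfs filesystem is already mounted at /mnt/reiser and
     * contains lower/, upper/ and work/ directories. The mount corresponds
     * to trace #1: ovl_fill_super -> ovl_get_workdir -> vfs_setxattr on the
     * reiserfs upper filesystem. */
    mkdir("/mnt/overlay", 0755);
    if (mount("overlay", "/mnt/overlay", "overlay", 0,
              "lowerdir=/mnt/reiser/lower,upperdir=/mnt/reiser/upper,"
              "workdir=/mnt/reiser/work")) {
        perror("mount overlay");
        return 1;
    }

    /* Trace #2: mkdir on the overlay (vfs_mkdir -> ovl_create_object), where
     * a later dput evicts a reiserfs inode and walks its xattr directory
     * under nested i_mutex_dir_key locks. */
    mkdir("/mnt/overlay/a", 0755);
    mkdir("/mnt/overlay/b", 0755);

    /* Trace #0: cross-directory rename. lock_rename() takes both parent
     * directory locks, then vfs_rename() takes the source/target inode lock,
     * the acquisition lockdep flags as closing the cycle. */
    rename("/mnt/overlay/a", "/mnt/overlay/b/a");
    return 0;
}

Note that lockdep reports the inverted ordering as soon as it has observed both lock orders once; the rename does not have to actually deadlock for this splat to appear.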

Crashes (1):
Time              Kernel    Commit        Syzkaller  Config   Log          Report  Syz repro  C repro  VM info  Assets                                 Manager          Title
2023/02/08 03:21  upstream  513c1a3d3f19  15c3d445   .config  console log  report  -          -        info     [disk image] [vmlinux] [kernel image]  ci2-upstream-fs  possible deadlock in vfs_rename
* Struck through repros no longer work on HEAD.