syzbot

possible deadlock in path_openat (4)

Status: upstream: reported on 2025/12/29 11:10
Subsystems: fs
Reported-by: syzbot+2a72778c820449646330@syzkaller.appspotmail.com
First crash: 13d, last: 13d
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [fs?] possible deadlock in path_openat (4) | 0 (1) | 2025/12/29 11:10
Similar bugs (8)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | possible deadlock in path_openat [fs] | 4 | C | done | unreliable | 349 | 2097d | 2652d | 0/29 | auto-obsoleted due to no activity on 2022/09/16 21:43
upstream | possible deadlock in path_openat (3) [fs] | 4 | | | | 1 | 120d | 120d | 29/29 | fixed on 2025/10/29 21:02
android-49 | possible deadlock in path_openat | 4 | | | | 5 | 2226d | 2242d | 0/3 | auto-closed as invalid on 2020/03/24 08:43
linux-6.1 | possible deadlock in path_openat [origin:upstream missing-backport] | 4 | C | done | | 186 | 31d | 1022d | 0/3 | upstream: reported C repro on 2023/03/13 16:18
upstream | possible deadlock in path_openat (2) [fs] | 4 | C | error | done | 305 | 699d | 1175d | 0/29 | auto-obsoleted due to no activity on 2024/06/02 07:09
linux-4.14 | possible deadlock in path_openat [reiserfs] | 4 | C | error | | 327 | 1033d | 2441d | 0/1 | upstream: reported C repro on 2019/04/24 01:40
linux-5.15 | possible deadlock in path_openat [missing-backport origin:upstream] | 4 | C | done | | 181 | 37d | 1018d | 0/3 | upstream: reported C repro on 2023/03/17 13:15
linux-4.19 | possible deadlock in path_openat | 4 | C | error | | 859 | 1029d | 2389d | 0/1 | upstream: reported C repro on 2019/06/15 07:08

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Tainted: G             L     
------------------------------------------------------
syz.2.849/9100 is trying to acquire lock:
ffff888037026420 (sb_writers#3){.+.+}-{0:0}, at: open_last_lookups fs/namei.c:4529 [inline]
ffff888037026420 (sb_writers#3){.+.+}-{0:0}, at: path_openat+0x183a/0x3140 fs/namei.c:4784

but task is already holding lock:
ffff88802e7520a8 (&ctx->uring_lock){+.+.}-{4:4}, at: __do_sys_io_uring_enter+0xd60/0x1630 io_uring/io_uring.c:3279

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&ctx->uring_lock){+.+.}-{4:4}:
       __mutex_lock_common kernel/locking/mutex.c:614 [inline]
       __mutex_lock+0x1aa/0x1ca0 kernel/locking/mutex.c:776
       io_uring_del_tctx_node+0x109/0x350 io_uring/tctx.c:179
       io_uring_clean_tctx+0xc2/0x190 io_uring/tctx.c:195
       io_uring_cancel_generic+0x69c/0x9a0 io_uring/cancel.c:646
       io_uring_task_cancel include/linux/io_uring.h:24 [inline]
       begin_new_exec+0xd1a/0x3770 fs/exec.c:1131
       load_elf_binary+0x8e7/0x4fe0 fs/binfmt_elf.c:1010
       search_binary_handler fs/exec.c:1669 [inline]
       exec_binprm fs/exec.c:1701 [inline]
       bprm_execve fs/exec.c:1753 [inline]
       bprm_execve+0x8c2/0x1620 fs/exec.c:1729
       do_execveat_common.isra.0+0x4a5/0x610 fs/exec.c:1859
       do_execveat fs/exec.c:1944 [inline]
       __do_sys_execveat fs/exec.c:2018 [inline]
       __se_sys_execveat fs/exec.c:2012 [inline]
       __x64_sys_execveat+0xda/0x120 fs/exec.c:2012
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xcd/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&sig->cred_guard_mutex){+.+.}-{4:4}:
       __mutex_lock_common kernel/locking/mutex.c:614 [inline]
       __mutex_lock+0x1aa/0x1ca0 kernel/locking/mutex.c:776
       proc_pid_attr_write+0x291/0x790 fs/proc/base.c:2837
       vfs_write+0x2a0/0x11d0 fs/read_write.c:684
       ksys_write+0x12a/0x250 fs/read_write.c:738
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xcd/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (sb_writers#3){.+.+}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3165 [inline]
       check_prevs_add kernel/locking/lockdep.c:3284 [inline]
       validate_chain kernel/locking/lockdep.c:3908 [inline]
       __lock_acquire+0x1669/0x2890 kernel/locking/lockdep.c:5237
       lock_acquire kernel/locking/lockdep.c:5868 [inline]
       lock_acquire+0x179/0x330 kernel/locking/lockdep.c:5825
       percpu_down_read_internal include/linux/percpu-rwsem.h:53 [inline]
       percpu_down_read_freezable include/linux/percpu-rwsem.h:83 [inline]
       __sb_start_write include/linux/fs/super.h:19 [inline]
       sb_start_write include/linux/fs/super.h:125 [inline]
       mnt_want_write+0x6f/0x450 fs/namespace.c:499
       open_last_lookups fs/namei.c:4529 [inline]
       path_openat+0x183a/0x3140 fs/namei.c:4784
       do_filp_open+0x20b/0x470 fs/namei.c:4814
       io_openat2+0x206/0x850 io_uring/openclose.c:143
       __io_issue_sqe+0xe8/0x7c0 io_uring/io_uring.c:1792
       io_issue_sqe+0x85/0x1410 io_uring/io_uring.c:1815
       io_queue_sqe io_uring/io_uring.c:2042 [inline]
       io_submit_sqe io_uring/io_uring.c:2320 [inline]
       io_submit_sqes+0xb24/0x28e0 io_uring/io_uring.c:2434
       __do_sys_io_uring_enter+0xd6b/0x1630 io_uring/io_uring.c:3280
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xcd/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  sb_writers#3 --> &sig->cred_guard_mutex --> &ctx->uring_lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&ctx->uring_lock);
                               lock(&sig->cred_guard_mutex);
                               lock(&ctx->uring_lock);
  rlock(sb_writers#3);

 *** DEADLOCK ***

1 lock held by syz.2.849/9100:
 #0: ffff88802e7520a8 (&ctx->uring_lock){+.+.}-{4:4}, at: __do_sys_io_uring_enter+0xd60/0x1630 io_uring/io_uring.c:3279

stack backtrace:
CPU: 1 UID: 0 PID: 9100 Comm: syz.2.849 Tainted: G             L      syzkaller #0 PREEMPT(full) 
Tainted: [L]=SOFTLOCKUP
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 print_circular_bug+0x275/0x340 kernel/locking/lockdep.c:2043
 check_noncircular+0x146/0x160 kernel/locking/lockdep.c:2175
 check_prev_add kernel/locking/lockdep.c:3165 [inline]
 check_prevs_add kernel/locking/lockdep.c:3284 [inline]
 validate_chain kernel/locking/lockdep.c:3908 [inline]
 __lock_acquire+0x1669/0x2890 kernel/locking/lockdep.c:5237
 lock_acquire kernel/locking/lockdep.c:5868 [inline]
 lock_acquire+0x179/0x330 kernel/locking/lockdep.c:5825
 percpu_down_read_internal include/linux/percpu-rwsem.h:53 [inline]
 percpu_down_read_freezable include/linux/percpu-rwsem.h:83 [inline]
 __sb_start_write include/linux/fs/super.h:19 [inline]
 sb_start_write include/linux/fs/super.h:125 [inline]
 mnt_want_write+0x6f/0x450 fs/namespace.c:499
 open_last_lookups fs/namei.c:4529 [inline]
 path_openat+0x183a/0x3140 fs/namei.c:4784
 do_filp_open+0x20b/0x470 fs/namei.c:4814
 io_openat2+0x206/0x850 io_uring/openclose.c:143
 __io_issue_sqe+0xe8/0x7c0 io_uring/io_uring.c:1792
 io_issue_sqe+0x85/0x1410 io_uring/io_uring.c:1815
 io_queue_sqe io_uring/io_uring.c:2042 [inline]
 io_submit_sqe io_uring/io_uring.c:2320 [inline]
 io_submit_sqes+0xb24/0x28e0 io_uring/io_uring.c:2434
 __do_sys_io_uring_enter+0xd6b/0x1630 io_uring/io_uring.c:3280
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f39e378f7c9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f39e46bb038 EFLAGS: 00000246 ORIG_RAX: 00000000000001aa
RAX: ffffffffffffffda RBX: 00007f39e39e5fa0 RCX: 00007f39e378f7c9
RDX: 0000000000000000 RSI: 0000000000003516 RDI: 0000000000000006
RBP: 00007f39e3813f91 R08: 0000000000000000 R09: 00000000fffffdcf
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f39e39e6038 R14: 00007f39e39e5fa0 R15: 00007ffed0e79908
 </TASK>

Crashes (1):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/12/16 17:56 | upstream | 40fbbd64bba6 | d1b870e1 | .config | console log | report | | | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream | possible deadlock in path_openat