syzbot


possible deadlock in seq_read_iter

Status: upstream: reported syz repro on 2025/07/16 19:15
Bug presence: origin:upstream
Reported-by: syzbot+591f347698f2c6cff528@syzkaller.appspotmail.com
First crash: 41d, last: 29d
Bug presence (1)
Date | Name | Commit | Repro | Result
2025/07/25 | upstream (ToT) | 327579671a9b | syz | [report] possible deadlock in kernfs_seq_start
Similar bugs (5)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | possible deadlock in seq_read_iter (4) overlayfs autofs | 4 | | | | 69 | 1d10h | 456d | 0/29 | closed as dup on 2024/05/27 09:33
linux-5.15 | possible deadlock in seq_read_iter | 4 | | | | 64 | 12d | 485d | 0/3 | upstream: reported on 2024/04/28 04:21
upstream | possible deadlock in seq_read_iter (2) overlayfs | 4 | C | done | done | 14 | 609d | 769d | 25/29 | fixed on 2024/02/02 10:05
upstream | possible deadlock in seq_read_iter fs | 4 | | | | 2 | 1481d | 1487d | 0/29 | auto-closed as invalid on 2021/12/05 03:01
upstream | possible deadlock in seq_read_iter (3) overlayfs | 4 | | | | 148 | 472d | 561d | 25/29 | fixed on 2024/05/23 00:16

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.6.99-syzkaller #0 Not tainted
------------------------------------------------------
syz.0.16/5942 is trying to acquire lock:
ffff8880252878b8 (&p->lock){+.+.}-{3:3}, at: seq_read_iter+0xb1/0xd50 fs/seq_file.c:182

but task is already holding lock:
ffff88802e796068 (&pipe->mutex/1){+.+.}-{3:3}, at: splice_file_to_pipe+0x2a/0x110 fs/splice.c:1230

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #3 (&pipe->mutex/1){+.+.}-{3:3}:
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x129/0xcc0 kernel/locking/mutex.c:747
       __pipe_lock fs/pipe.c:103 [inline]
       pipe_write+0x1c7/0x1af0 fs/pipe.c:444
       __kernel_write_iter+0x274/0x670 fs/read_write.c:517
       __kernel_write+0xf0/0x140 fs/read_write.c:537
       autofs_write fs/autofs/waitq.c:57 [inline]
       autofs_notify_daemon+0x6ff/0xdc0 fs/autofs/waitq.c:164
       autofs_wait+0x1021/0x1a40 fs/autofs/waitq.c:426
       autofs_mount_wait+0x16b/0x320 fs/autofs/root.c:255
       autofs_d_automount+0x392/0x710 fs/autofs/root.c:401
       follow_automount fs/namei.c:1370 [inline]
       __traverse_mounts+0x2fa/0x5a0 fs/namei.c:1415
       traverse_mounts fs/namei.c:1444 [inline]
       handle_mounts fs/namei.c:1547 [inline]
       step_into+0x526/0xf10 fs/namei.c:1840
       lookup_last fs/namei.c:2459 [inline]
       path_lookupat+0x169/0x440 fs/namei.c:2483
       filename_lookup+0x1f4/0x510 fs/namei.c:2512
       kern_path+0x35/0x50 fs/namei.c:2610
       lookup_bdev+0xc1/0x280 block/bdev.c:976
       resume_store+0x16a/0x450 kernel/power/hibernate.c:1187
       kernfs_fop_write_iter+0x37d/0x4d0 fs/kernfs/file.c:334
       call_write_iter include/linux/fs.h:2018 [inline]
       new_sync_write fs/read_write.c:491 [inline]
       vfs_write+0x43b/0x940 fs/read_write.c:584
       ksys_write+0x147/0x250 fs/read_write.c:637
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #2 (&sbi->pipe_mutex){+.+.}-{3:3}:
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x129/0xcc0 kernel/locking/mutex.c:747
       autofs_write fs/autofs/waitq.c:55 [inline]
       autofs_notify_daemon+0x6ec/0xdc0 fs/autofs/waitq.c:164
       autofs_wait+0x1021/0x1a40 fs/autofs/waitq.c:426
       autofs_mount_wait+0x16b/0x320 fs/autofs/root.c:255
       autofs_d_automount+0x392/0x710 fs/autofs/root.c:401
       follow_automount fs/namei.c:1370 [inline]
       __traverse_mounts+0x2fa/0x5a0 fs/namei.c:1415
       traverse_mounts fs/namei.c:1444 [inline]
       handle_mounts fs/namei.c:1547 [inline]
       step_into+0x526/0xf10 fs/namei.c:1840
       lookup_last fs/namei.c:2459 [inline]
       path_lookupat+0x169/0x440 fs/namei.c:2483
       filename_lookup+0x1f4/0x510 fs/namei.c:2512
       kern_path+0x35/0x50 fs/namei.c:2610
       lookup_bdev+0xc1/0x280 block/bdev.c:976
       resume_store+0x16a/0x450 kernel/power/hibernate.c:1187
       kernfs_fop_write_iter+0x37d/0x4d0 fs/kernfs/file.c:334
       call_write_iter include/linux/fs.h:2018 [inline]
       new_sync_write fs/read_write.c:491 [inline]
       vfs_write+0x43b/0x940 fs/read_write.c:584
       ksys_write+0x147/0x250 fs/read_write.c:637
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #1 (&of->mutex){+.+.}-{3:3}:
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x129/0xcc0 kernel/locking/mutex.c:747
       kernfs_seq_start+0x55/0x3b0 fs/kernfs/file.c:154
       seq_read_iter+0x3c8/0xd50 fs/seq_file.c:225
       call_read_iter include/linux/fs.h:2012 [inline]
       new_sync_read fs/read_write.c:389 [inline]
       vfs_read+0x431/0x920 fs/read_write.c:470
       ksys_read+0x147/0x250 fs/read_write.c:613
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #0 (&p->lock){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain kernel/locking/lockdep.c:3869 [inline]
       __lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
       lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x129/0xcc0 kernel/locking/mutex.c:747
       seq_read_iter+0xb1/0xd50 fs/seq_file.c:182
       call_read_iter include/linux/fs.h:2012 [inline]
       copy_splice_read+0x3c8/0x860 fs/splice.c:364
       splice_file_to_pipe+0x6e/0x110 fs/splice.c:1233
       do_sendfile+0x572/0xf70 fs/read_write.c:1261
       __do_sys_sendfile64 fs/read_write.c:1322 [inline]
       __se_sys_sendfile64+0x13f/0x190 fs/read_write.c:1308
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

other info that might help us debug this:

Chain exists of:
  &p->lock --> &sbi->pipe_mutex --> &pipe->mutex/1

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&pipe->mutex/1);
                               lock(&sbi->pipe_mutex);
                               lock(&pipe->mutex/1);
  lock(&p->lock);

 *** DEADLOCK ***

1 lock held by syz.0.16/5942:
 #0: ffff88802e796068 (&pipe->mutex/1){+.+.}-{3:3}, at: splice_file_to_pipe+0x2a/0x110 fs/splice.c:1230

stack backtrace:
CPU: 0 PID: 5942 Comm: syz.0.16 Not tainted 6.6.99-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
 check_noncircular+0x2bd/0x3c0 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain kernel/locking/lockdep.c:3869 [inline]
 __lock_acquire+0x2ddb/0x7c80 kernel/locking/lockdep.c:5137
 lock_acquire+0x197/0x410 kernel/locking/lockdep.c:5754
 __mutex_lock_common kernel/locking/mutex.c:603 [inline]
 __mutex_lock+0x129/0xcc0 kernel/locking/mutex.c:747
 seq_read_iter+0xb1/0xd50 fs/seq_file.c:182
 call_read_iter include/linux/fs.h:2012 [inline]
 copy_splice_read+0x3c8/0x860 fs/splice.c:364
 splice_file_to_pipe+0x6e/0x110 fs/splice.c:1233
 do_sendfile+0x572/0xf70 fs/read_write.c:1261
 __do_sys_sendfile64 fs/read_write.c:1322 [inline]
 __se_sys_sendfile64+0x13f/0x190 fs/read_write.c:1308
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f4e04d8e9a9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f4e05b68038 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 00007f4e04fb6080 RCX: 00007f4e04d8e9a9
RDX: 0000000000000000 RSI: 0000000000000004 RDI: 0000000000000000
RBP: 00007f4e04e10d69 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000004 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f4e04fb6080 R15: 00007fff7478f678
 </TASK>
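
The lockdep chain above amounts to two user-space call paths racing each other: a write to /sys/power/resume whose device path crosses an autofs mount (kernfs &of->mutex -> autofs &sbi->pipe_mutex -> the daemon pipe's &pipe->mutex), and a sendfile() from a kernfs-backed seq_file into a pipe (&pipe->mutex -> seq_file &p->lock -> &of->mutex). The sketch below only illustrates that shape; the paths, the autofs setup, and the choice of /sys/power/state are assumptions, not the actual syzkaller reproducer, and running it is not guaranteed to reproduce the deadlock.

/*
 * Hedged sketch of the two racing paths from the report above.
 * Assumes /mnt/autofs is an autofs mount whose daemon is stalled, so the
 * automount in thread A blocks while holding the autofs pipe locks.
 */
#include <fcntl.h>
#include <pthread.h>
#include <string.h>
#include <sys/sendfile.h>
#include <unistd.h>

/* Thread A: resume_store() -> lookup_bdev() -> autofs automount.
 * Takes &of->mutex, then &sbi->pipe_mutex, then the daemon pipe's &pipe->mutex. */
static void *resume_writer(void *arg)
{
    const char *dev = "/mnt/autofs/trigger/loop0";   /* hypothetical path */
    int fd = open("/sys/power/resume", O_WRONLY);
    if (fd >= 0) {
        write(fd, dev, strlen(dev));
        close(fd);
    }
    return NULL;
}

/* Thread B: do_sendfile() -> splice_file_to_pipe() -> copy_splice_read().
 * Takes &pipe->mutex, then seq_file &p->lock, then kernfs &of->mutex. */
static void *sysfs_splicer(void *arg)
{
    int pfd[2];
    int fd = open("/sys/power/state", O_RDONLY);     /* any kernfs seq_file */
    if (fd < 0 || pipe(pfd) < 0)
        return NULL;
    sendfile(pfd[1], fd, NULL, 4096);
    close(pfd[0]);
    close(pfd[1]);
    close(fd);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, resume_writer, NULL);
    pthread_create(&b, NULL, sysfs_splicer, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}

Compile with -pthread; root privileges and a pre-configured, stalled autofs mount would be needed for the resume_store() path to actually block in autofs_wait().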

Crashes (7):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/07/20 22:15 | linux-6.6.y | d96eb99e2f0e | 7117feec | .config | console log | report | syz / log | | | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | possible deadlock in seq_read_iter
2025/07/20 18:43 | linux-6.6.y | d96eb99e2f0e | 7117feec | .config | console log | report | syz / log | | | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | possible deadlock in seq_read_iter
2025/07/20 15:36 | linux-6.6.y | d96eb99e2f0e | 7117feec | .config | console log | report | syz / log | | | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | possible deadlock in seq_read_iter
2025/07/20 12:24 | linux-6.6.y | d96eb99e2f0e | 7117feec | .config | console log | report | syz / log | | | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | possible deadlock in seq_read_iter
2025/07/20 09:19 | linux-6.6.y | d96eb99e2f0e | 7117feec | .config | console log | report | syz / log | | | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | possible deadlock in seq_read_iter
2025/07/28 19:41 | linux-6.6.y | dbcb8d8e4163 | 6654ea9c | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | possible deadlock in seq_read_iter
2025/07/16 19:15 | linux-6.6.y | 9247f4e6573a | 124ec9cc | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan | possible deadlock in seq_read_iter
* Struck through repros no longer work on HEAD.