syzbot


possible deadlock in iter_file_splice_write (3)

Status: upstream: reported on 2025/08/27 15:45
Reported-by: syzbot+b4a92a5e742bab76d34d@syzkaller.appspotmail.com
First crash: 23h01m, last: 23h01m
Similar bugs (6)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | possible deadlock in iter_file_splice_write (3) overlayfs | 4 | - | - | - | 24 | 490d | 532d | 0/29 | closed as dup on 2024/03/14 09:27
upstream | possible deadlock in iter_file_splice_write overlayfs | 4 | - | - | - | 1 | 1821d | 1817d | 0/29 | auto-closed as invalid on 2020/12/30 21:45
upstream | possible deadlock in iter_file_splice_write (4) overlayfs | 4 | - | - | - | 35 | 21d | 449d | 0/29 | closed as dup on 2024/06/05 11:05
upstream | possible deadlock in iter_file_splice_write (2) overlayfs | 4 | C | done | - | 2 | 1495d | 1492d | 0/29 | closed as dup on 2021/07/28 14:57
linux-5.15 | possible deadlock in iter_file_splice_write (2) | 4 | - | - | - | 1 | 135d | 135d | 0/3 | auto-obsoleted due to no activity on 2025/07/24 12:07
linux-5.15 | possible deadlock in iter_file_splice_write | 4 | - | - | - | 1 | 307d | 307d | 0/3 | auto-obsoleted due to no activity on 2025/02/01 14:50

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
5.15.189-syzkaller #0 Not tainted
------------------------------------------------------
syz.4.6162/748 is trying to acquire lock:
ffff888024333c68 (&pipe->mutex/1){+.+.}-{3:3}, at: iter_file_splice_write+0x195/0xc40 fs/splice.c:635

but task is already holding lock:
ffff8880169e6460 (sb_writers#3){.+.+}-{0:0}, at: __do_splice fs/splice.c:1144 [inline]
ffff8880169e6460 (sb_writers#3){.+.+}-{0:0}, at: __do_sys_splice fs/splice.c:1350 [inline]
ffff8880169e6460 (sb_writers#3){.+.+}-{0:0}, at: __se_sys_splice+0x327/0x410 fs/splice.c:1332

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #9 (sb_writers#3){.+.+}-{0:0}:
       percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
       __sb_start_write include/linux/fs.h:1811 [inline]
       sb_start_write include/linux/fs.h:1881 [inline]
       file_start_write include/linux/fs.h:3042 [inline]
       lo_write_bvec+0x193/0x770 drivers/block/loop.c:315
       lo_write_simple drivers/block/loop.c:338 [inline]
       do_req_filebacked drivers/block/loop.c:656 [inline]
       loop_handle_cmd drivers/block/loop.c:2235 [inline]
       loop_process_work+0x1d62/0x2480 drivers/block/loop.c:2275
       process_one_work+0x863/0x1000 kernel/workqueue.c:2310
       worker_thread+0xaa8/0x12a0 kernel/workqueue.c:2457
       kthread+0x436/0x520 kernel/kthread.c:334
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287

-> #8 ((work_completion)(&worker->work)){+.+.}-{0:0}:
       process_one_work+0x7bf/0x1000 kernel/workqueue.c:2286
       worker_thread+0xaa8/0x12a0 kernel/workqueue.c:2457
       kthread+0x436/0x520 kernel/kthread.c:334
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287

-> #7 ((wq_completion)loop3){+.+.}-{0:0}:
       flush_workqueue+0x142/0x1380 kernel/workqueue.c:2830
       drain_workqueue+0xcf/0x380 kernel/workqueue.c:2995
       destroy_workqueue+0x7b/0xb20 kernel/workqueue.c:4439
       __loop_clr_fd+0x234/0xb90 drivers/block/loop.c:1384
       blkdev_put_whole block/bdev.c:692 [inline]
       blkdev_put+0x53f/0x7d0 block/bdev.c:957
       btrfs_close_bdev fs/btrfs/volumes.c:1189 [inline]
       btrfs_close_one_device fs/btrfs/volumes.c:1210 [inline]
       close_fs_devices+0x48a/0x930 fs/btrfs/volumes.c:1253
       btrfs_close_devices+0xc2/0x500 fs/btrfs/volumes.c:1268
       close_ctree+0x75f/0x8c0 fs/btrfs/disk-io.c:4538
       generic_shutdown_super+0x130/0x300 fs/super.c:475
       kill_anon_super+0x36/0x70 fs/super.c:1089
       btrfs_kill_super+0x3d/0x50 fs/btrfs/super.c:2390
       deactivate_locked_super+0x93/0xf0 fs/super.c:335
       cleanup_mnt+0x418/0x4d0 fs/namespace.c:1139
       task_work_run+0x125/0x1a0 kernel/task_work.c:188
       tracehook_notify_resume include/linux/tracehook.h:189 [inline]
       exit_to_user_mode_loop+0x10f/0x130 kernel/entry/common.c:181
       exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:214
       __syscall_exit_to_user_mode_work kernel/entry/common.c:296 [inline]
       syscall_exit_to_user_mode+0x16/0x40 kernel/entry/common.c:307
       do_syscall_64+0x58/0xa0 arch/x86/entry/common.c:86
       entry_SYSCALL_64_after_hwframe+0x66/0xd0

-> #6 (&lo->lo_mutex){+.+.}-{3:3}:
       __mutex_lock_common+0x1eb/0x2390 kernel/locking/mutex.c:596
       __mutex_lock kernel/locking/mutex.c:729 [inline]
       mutex_lock_killable_nested+0x17/0x20 kernel/locking/mutex.c:758
       lo_open+0x6a/0x100 drivers/block/loop.c:2056
       blkdev_get_whole+0x90/0x390 block/bdev.c:669
       blkdev_get_by_dev+0x2d0/0xa60 block/bdev.c:827
       blkdev_open+0x12d/0x2c0 block/fops.c:466
       do_dentry_open+0x7ff/0xf80 fs/open.c:826
       do_open fs/namei.c:3608 [inline]
       path_openat+0x2682/0x2f30 fs/namei.c:3742
       do_filp_open+0x1b3/0x3e0 fs/namei.c:3769
       do_sys_openat2+0x142/0x4a0 fs/open.c:1253
       do_sys_open fs/open.c:1269 [inline]
       __do_sys_openat fs/open.c:1285 [inline]
       __se_sys_openat fs/open.c:1280 [inline]
       __x64_sys_openat+0x135/0x160 fs/open.c:1280
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x66/0xd0

-> #5 (&disk->open_mutex){+.+.}-{3:3}:
       __mutex_lock_common+0x1eb/0x2390 kernel/locking/mutex.c:596
       __mutex_lock kernel/locking/mutex.c:729 [inline]
       mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
       bd_register_pending_holders+0x33/0x310 block/holder.c:161
       device_add_disk+0x5a7/0xd40 block/genhd.c:486
       add_disk include/linux/genhd.h:200 [inline]
       md_alloc+0x809/0xc00 drivers/md/md.c:5782
       blk_probe_dev block/genhd.c:685 [inline]
       blk_request_module+0x26e/0x290 block/genhd.c:-1
       blkdev_get_no_open+0x38/0x1d0 block/bdev.c:740
       blkdev_get_by_dev+0x77/0xa60 block/bdev.c:807
       blkdev_open+0x12d/0x2c0 block/fops.c:466
       do_dentry_open+0x7ff/0xf80 fs/open.c:826
       do_open fs/namei.c:3608 [inline]
       path_openat+0x2682/0x2f30 fs/namei.c:3742
       do_filp_open+0x1b3/0x3e0 fs/namei.c:3769
       do_sys_openat2+0x142/0x4a0 fs/open.c:1253
       do_sys_open fs/open.c:1269 [inline]
       __do_sys_openat fs/open.c:1285 [inline]
       __se_sys_openat fs/open.c:1280 [inline]
       __x64_sys_openat+0x135/0x160 fs/open.c:1280
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x66/0xd0

-> #4 (disks_mutex){+.+.}-{3:3}:
       __mutex_lock_common+0x1eb/0x2390 kernel/locking/mutex.c:596
       __mutex_lock kernel/locking/mutex.c:729 [inline]
       mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
       md_alloc+0x48/0xc00 drivers/md/md.c:5723
       blk_probe_dev block/genhd.c:685 [inline]
       blk_request_module+0x26e/0x290 block/genhd.c:-1
       blkdev_get_no_open+0x38/0x1d0 block/bdev.c:740
       blkdev_get_by_dev+0x77/0xa60 block/bdev.c:807
       blkdev_open+0x12d/0x2c0 block/fops.c:466
       do_dentry_open+0x7ff/0xf80 fs/open.c:826
       do_open fs/namei.c:3608 [inline]
       path_openat+0x2682/0x2f30 fs/namei.c:3742
       do_filp_open+0x1b3/0x3e0 fs/namei.c:3769
       do_sys_openat2+0x142/0x4a0 fs/open.c:1253
       do_sys_open fs/open.c:1269 [inline]
       __do_sys_openat fs/open.c:1285 [inline]
       __se_sys_openat fs/open.c:1280 [inline]
       __x64_sys_openat+0x135/0x160 fs/open.c:1280
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x66/0xd0

-> #3 (major_names_lock){+.+.}-{3:3}:
       __mutex_lock_common+0x1eb/0x2390 kernel/locking/mutex.c:596
       __mutex_lock kernel/locking/mutex.c:729 [inline]
       mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
       blk_probe_dev block/genhd.c:682 [inline]
       blk_request_module+0x31/0x290 block/genhd.c:698
       blkdev_get_no_open+0x38/0x1d0 block/bdev.c:740
       blkdev_get_by_dev+0x77/0xa60 block/bdev.c:807
       swsusp_check+0x9b/0x2a0 kernel/power/swap.c:1526
       software_resume+0xc6/0x3b0 kernel/power/hibernate.c:982
       resume_store+0xe4/0x130 kernel/power/hibernate.c:1184
       kernfs_fop_write_iter+0x379/0x4c0 fs/kernfs/file.c:296
       call_write_iter include/linux/fs.h:2172 [inline]
       new_sync_write fs/read_write.c:507 [inline]
       vfs_write+0x712/0xd00 fs/read_write.c:594
       ksys_write+0x14d/0x250 fs/read_write.c:647
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x66/0xd0

-> #2 (system_transition_mutex/1){+.+.}-{3:3}:
       __mutex_lock_common+0x1eb/0x2390 kernel/locking/mutex.c:596
       __mutex_lock kernel/locking/mutex.c:729 [inline]
       mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
       software_resume+0x7c/0x3b0 kernel/power/hibernate.c:937
       resume_store+0xe4/0x130 kernel/power/hibernate.c:1184
       kernfs_fop_write_iter+0x379/0x4c0 fs/kernfs/file.c:296
       call_write_iter include/linux/fs.h:2172 [inline]
       new_sync_write fs/read_write.c:507 [inline]
       vfs_write+0x712/0xd00 fs/read_write.c:594
       ksys_write+0x14d/0x250 fs/read_write.c:647
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x66/0xd0

-> #1 (&of->mutex){+.+.}-{3:3}:
       __mutex_lock_common+0x1eb/0x2390 kernel/locking/mutex.c:596
       __mutex_lock kernel/locking/mutex.c:729 [inline]
       mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
       kernfs_fop_write_iter+0x1e5/0x4c0 fs/kernfs/file.c:287
       do_iter_readv_writev+0x497/0x600 fs/read_write.c:-1
       do_iter_write+0x205/0x7b0 fs/read_write.c:855
       iter_file_splice_write+0x65f/0xc40 fs/splice.c:689
       do_splice_from fs/splice.c:767 [inline]
       do_splice+0xe65/0x1640 fs/splice.c:1079
       __do_splice fs/splice.c:1144 [inline]
       __do_sys_splice fs/splice.c:1350 [inline]
       __se_sys_splice+0x327/0x410 fs/splice.c:1332
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x66/0xd0

-> #0 (&pipe->mutex/1){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3053 [inline]
       check_prevs_add kernel/locking/lockdep.c:3172 [inline]
       validate_chain kernel/locking/lockdep.c:3788 [inline]
       __lock_acquire+0x2c33/0x7c60 kernel/locking/lockdep.c:5012
       lock_acquire+0x197/0x3f0 kernel/locking/lockdep.c:5623
       __mutex_lock_common+0x1eb/0x2390 kernel/locking/mutex.c:596
       __mutex_lock kernel/locking/mutex.c:729 [inline]
       mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
       iter_file_splice_write+0x195/0xc40 fs/splice.c:635
       do_splice_from fs/splice.c:767 [inline]
       do_splice+0xe65/0x1640 fs/splice.c:1079
       __do_splice fs/splice.c:1144 [inline]
       __do_sys_splice fs/splice.c:1350 [inline]
       __se_sys_splice+0x327/0x410 fs/splice.c:1332
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x66/0xd0

other info that might help us debug this:

Chain exists of:
  &pipe->mutex/1 --> (work_completion)(&worker->work) --> sb_writers#3

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(sb_writers#3);
                               lock((work_completion)(&worker->work));
                               lock(sb_writers#3);
  lock(&pipe->mutex/1);

 *** DEADLOCK ***
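
[Editor's note] The scenario above is lockdep's usual two-CPU rendering of a lock-order inversion: the splice path takes sb_writers#3 and then &pipe->mutex/1, while the dependency chain built through the loop worker (links #1-#8 above) makes &pipe->mutex/1 eventually depend on sb_writers#3. Below is a minimal userspace analogy, not kernel code and not a reproducer, with the intermediate work-completion links collapsed into a single direct dependency; it hangs the same way the report describes.

/*
 * Userspace analogy of the reported inversion (compile with gcc -pthread).
 * Thread A mimics the splice path: sb_writers#3, then &pipe->mutex/1.
 * Thread B mimics the collapsed worker chain, which ends up needing
 * sb_writers#3 while already inside the pipe-side dependency.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t sb_writers = PTHREAD_MUTEX_INITIALIZER;  /* stands for sb_writers#3 */
static pthread_mutex_t pipe_mutex = PTHREAD_MUTEX_INITIALIZER;  /* stands for &pipe->mutex/1 */

static void *splice_side(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&sb_writers);   /* __do_splice: file_start_write() on the out file */
	sleep(1);                          /* widen the race window */
	pthread_mutex_lock(&pipe_mutex);   /* iter_file_splice_write: pipe_lock() */
	pthread_mutex_unlock(&pipe_mutex);
	pthread_mutex_unlock(&sb_writers);
	return NULL;
}

static void *worker_side(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&pipe_mutex);   /* chain links #1..#8, collapsed for illustration */
	sleep(1);
	pthread_mutex_lock(&sb_writers);   /* lo_write_bvec: file_start_write() on the backing file */
	pthread_mutex_unlock(&sb_writers);
	pthread_mutex_unlock(&pipe_mutex);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, splice_side, NULL);
	pthread_create(&b, NULL, worker_side, NULL);
	pthread_join(a, NULL);   /* with each thread parked on the other's mutex, this never returns */
	pthread_join(b, NULL);
	puts("no deadlock this run");
	return 0;
}

Each thread blocks waiting for the mutex the other already holds, which is exactly the cycle lockdep is warning about in the kernel report.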

1 lock held by syz.4.6162/748:
 #0: ffff8880169e6460 (sb_writers#3){.+.+}-{0:0}, at: __do_splice fs/splice.c:1144 [inline]
 #0: ffff8880169e6460 (sb_writers#3){.+.+}-{0:0}, at: __do_sys_splice fs/splice.c:1350 [inline]
 #0: ffff8880169e6460 (sb_writers#3){.+.+}-{0:0}, at: __se_sys_splice+0x327/0x410 fs/splice.c:1332

stack backtrace:
CPU: 0 PID: 748 Comm: syz.4.6162 Not tainted 5.15.189-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x168/0x230 lib/dump_stack.c:106
 check_noncircular+0x274/0x310 kernel/locking/lockdep.c:2133
 check_prev_add kernel/locking/lockdep.c:3053 [inline]
 check_prevs_add kernel/locking/lockdep.c:3172 [inline]
 validate_chain kernel/locking/lockdep.c:3788 [inline]
 __lock_acquire+0x2c33/0x7c60 kernel/locking/lockdep.c:5012
 lock_acquire+0x197/0x3f0 kernel/locking/lockdep.c:5623
 __mutex_lock_common+0x1eb/0x2390 kernel/locking/mutex.c:596
 __mutex_lock kernel/locking/mutex.c:729 [inline]
 mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:743
 iter_file_splice_write+0x195/0xc40 fs/splice.c:635
 do_splice_from fs/splice.c:767 [inline]
 do_splice+0xe65/0x1640 fs/splice.c:1079
 __do_splice fs/splice.c:1144 [inline]
 __do_sys_splice fs/splice.c:1350 [inline]
 __se_sys_splice+0x327/0x410 fs/splice.c:1332
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7f4f50c6dbe9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f4f4eed5038 EFLAGS: 00000246 ORIG_RAX: 0000000000000113
RAX: ffffffffffffffda RBX: 00007f4f50e94fa0 RCX: 00007f4f50c6dbe9
RDX: 0000000000000008 RSI: 0000000000000000 RDI: 0000000000000003
RBP: 00007f4f50cf0e19 R08: 0000000000200002 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f4f50e95038 R14: 00007f4f50e94fa0 R15: 00007ffc4e6fb018
 </TASK>
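
[Editor's note] For reference, ORIG_RAX 0x113 in the register dump is __NR_splice (275) on x86-64, and reading the arguments out of RDI, RSI, RDX, R10, R8, R9 gives roughly the call sketched below. This is only a decoding of the dump, not a reproducer: file descriptors 3 and 8 are whatever the fuzzer happened to have open at the time (presumably the pipe and a file on the loop-backed filesystem, respectively).

#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
	/* splice(fd_in=3, off_in=NULL, fd_out=8, off_out=NULL,
	 *        len=0x200002, flags=0), as decoded from the register dump */
	syscall(SYS_splice, 3, NULL, 8, NULL, 0x200002UL, 0);
	return 0;
}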

Crashes (1):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/08/27 15:45 | linux-5.15.y | c79648372d02 | e12e5ba4 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-linux-5-15-kasan | possible deadlock in iter_file_splice_write