syzbot


possible deadlock in pipe_lock (6)

Status: upstream: reported on 2024/11/08 04:47
Subsystems: overlayfs
Reported-by: syzbot+603e6f91a1f6c5af8c02@syzkaller.appspotmail.com
First crash: 144d, last: 1d08h
Discussions (2)
Title | Replies (including bot) | Last reply
[syzbot] Monthly overlayfs report (Nov 2024) | 0 (1) | 2024/11/20 13:31
[syzbot] [overlayfs?] possible deadlock in pipe_lock (6) | 0 (1) | 2024/11/08 04:47
Similar bugs (9)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | possible deadlock in pipe_lock fs | - | - | - | 50806 | 2656d | 2704d | 0/28 | closed as invalid on 2018/02/14 14:20
upstream | possible deadlock in pipe_lock (4) overlayfs | - | - | - | 1 | 1423d | 1419d | 0/28 | auto-closed as invalid on 2021/07/03 13:02
upstream | possible deadlock in pipe_lock (2) overlayfs | - | - | - | 3 | 2167d | 2237d | 0/28 | auto-closed as invalid on 2019/10/18 15:02
linux-4.19 | possible deadlock in pipe_lock (2) | C | error | - | 155 | 752d | 1918d | 0/1 | upstream: reported C repro on 2019/12/26 07:17
upstream | possible deadlock in pipe_lock (5) overlayfs | C | done | - | 5 | 1342d | 1342d | 20/28 | fixed on 2021/11/10 00:50
android-49 | possible deadlock in pipe_lock | - | - | - | 5 | 1981d | 2174d | 0/3 | auto-closed as invalid on 2020/02/21 12:40
linux-4.19 | possible deadlock in pipe_lock | C | done | - | 2 | 2160d | 2164d | 1/1 | fixed on 2019/11/29 10:34
android-44 | possible deadlock in pipe_lock | C | - | - | 82 | 1941d | 2174d | 0/2 | public: reported C repro on 2019/04/14 08:51
upstream | possible deadlock in pipe_lock (3) overlayfs | C | inconclusive | done | 4 | 1881d | 1918d | 15/28 | fixed on 2020/08/18 22:40

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.14.0-syzkaller-01103-g2df0c02dab82 #0 Not tainted
------------------------------------------------------
syz.7.1625/11879 is trying to acquire lock:
ffff8880802e2c68 (&pipe->mutex){+.+.}-{4:4}, at: pipe_lock fs/pipe.c:92 [inline]
ffff8880802e2c68 (&pipe->mutex){+.+.}-{4:4}, at: pipe_lock+0x64/0x80 fs/pipe.c:89

but task is already holding lock:
ffff88801c68a420 (sb_writers#6){.+.+}-{0:0}, at: __do_splice+0x32a/0x360 fs/splice.c:1430

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #3 (sb_writers#6){.+.+}-{0:0}:
       percpu_down_read include/linux/percpu-rwsem.h:52 [inline]
       __sb_start_write include/linux/fs.h:1775 [inline]
       sb_start_write include/linux/fs.h:1911 [inline]
       mnt_want_write+0x6f/0x450 fs/namespace.c:556
       ovl_create_object+0x12c/0x300 fs/overlayfs/dir.c:628
       lookup_open.isra.0+0x11d0/0x1580 fs/namei.c:3666
       open_last_lookups fs/namei.c:3765 [inline]
       path_openat+0x905/0x2d40 fs/namei.c:4001
       do_filp_open+0x20b/0x470 fs/namei.c:4031
       do_sys_openat2+0x11b/0x1d0 fs/open.c:1429
       do_sys_open fs/open.c:1444 [inline]
       __do_sys_open fs/open.c:1452 [inline]
       __se_sys_open fs/open.c:1448 [inline]
       __x64_sys_open+0x153/0x1e0 fs/open.c:1448
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xcd/0x260 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #2 (&ovl_i_mutex_dir_key[depth]){++++}-{4:4}:
       down_read+0x9b/0x480 kernel/locking/rwsem.c:1524
       inode_lock_shared include/linux/fs.h:877 [inline]
       lookup_slow fs/namei.c:1823 [inline]
       walk_component+0x345/0x5b0 fs/namei.c:2128
       lookup_last fs/namei.c:2626 [inline]
       path_lookupat+0x17e/0x780 fs/namei.c:2650
       filename_lookup+0x224/0x5f0 fs/namei.c:2679
       kern_path+0x35/0x50 fs/namei.c:2787
       lookup_bdev+0xd8/0x280 block/bdev.c:1167
       resume_store+0x1d6/0x460 kernel/power/hibernate.c:1247
       kobj_attr_store+0x55/0x80 lib/kobject.c:840
       sysfs_kf_write+0x117/0x170 fs/sysfs/file.c:139
       kernfs_fop_write_iter+0x349/0x510 fs/kernfs/file.c:334
       new_sync_write fs/read_write.c:591 [inline]
       vfs_write+0x5ba/0x1180 fs/read_write.c:684
       ksys_write+0x12a/0x240 fs/read_write.c:736
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xcd/0x260 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&of->mutex){+.+.}-{4:4}:
       __mutex_lock_common kernel/locking/mutex.c:587 [inline]
       __mutex_lock+0x19a/0xb00 kernel/locking/mutex.c:732
       kernfs_fop_write_iter+0x287/0x510 fs/kernfs/file.c:325
       iter_file_splice_write+0x91c/0x1150 fs/splice.c:738
       do_splice_from fs/splice.c:935 [inline]
       do_splice+0x1475/0x1fc0 fs/splice.c:1348
       __do_splice+0x32a/0x360 fs/splice.c:1430
       __do_sys_splice fs/splice.c:1633 [inline]
       __se_sys_splice fs/splice.c:1615 [inline]
       __x64_sys_splice+0x187/0x250 fs/splice.c:1615
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xcd/0x260 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&pipe->mutex){+.+.}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3166 [inline]
       check_prevs_add kernel/locking/lockdep.c:3285 [inline]
       validate_chain kernel/locking/lockdep.c:3909 [inline]
       __lock_acquire+0x1173/0x1ba0 kernel/locking/lockdep.c:5235
       lock_acquire kernel/locking/lockdep.c:5866 [inline]
       lock_acquire+0x179/0x350 kernel/locking/lockdep.c:5823
       __mutex_lock_common kernel/locking/mutex.c:587 [inline]
       __mutex_lock+0x19a/0xb00 kernel/locking/mutex.c:732
       pipe_lock fs/pipe.c:92 [inline]
       pipe_lock+0x64/0x80 fs/pipe.c:89
       iter_file_splice_write+0x1ea/0x1150 fs/splice.c:683
       do_splice_from fs/splice.c:935 [inline]
       do_splice+0x1475/0x1fc0 fs/splice.c:1348
       __do_splice+0x32a/0x360 fs/splice.c:1430
       __do_sys_splice fs/splice.c:1633 [inline]
       __se_sys_splice fs/splice.c:1615 [inline]
       __x64_sys_splice+0x187/0x250 fs/splice.c:1615
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xcd/0x260 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  &pipe->mutex --> &ovl_i_mutex_dir_key[depth] --> sb_writers#6

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  rlock(sb_writers#6);
                               lock(&ovl_i_mutex_dir_key[depth]);
                               lock(sb_writers#6);
  lock(&pipe->mutex);

 *** DEADLOCK ***

1 lock held by syz.7.1625/11879:
 #0: ffff88801c68a420 (sb_writers#6){.+.+}-{0:0}, at: __do_splice+0x32a/0x360 fs/splice.c:1430

stack backtrace:
CPU: 1 UID: 0 PID: 11879 Comm: syz.7.1625 Not tainted 6.14.0-syzkaller-01103-g2df0c02dab82 #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 print_circular_bug+0x275/0x350 kernel/locking/lockdep.c:2079
 check_noncircular+0x14c/0x170 kernel/locking/lockdep.c:2211
 check_prev_add kernel/locking/lockdep.c:3166 [inline]
 check_prevs_add kernel/locking/lockdep.c:3285 [inline]
 validate_chain kernel/locking/lockdep.c:3909 [inline]
 __lock_acquire+0x1173/0x1ba0 kernel/locking/lockdep.c:5235
 lock_acquire kernel/locking/lockdep.c:5866 [inline]
 lock_acquire+0x179/0x350 kernel/locking/lockdep.c:5823
 __mutex_lock_common kernel/locking/mutex.c:587 [inline]
 __mutex_lock+0x19a/0xb00 kernel/locking/mutex.c:732
 pipe_lock fs/pipe.c:92 [inline]
 pipe_lock+0x64/0x80 fs/pipe.c:89
 iter_file_splice_write+0x1ea/0x1150 fs/splice.c:683
 do_splice_from fs/splice.c:935 [inline]
 do_splice+0x1475/0x1fc0 fs/splice.c:1348
 __do_splice+0x32a/0x360 fs/splice.c:1430
 __do_sys_splice fs/splice.c:1633 [inline]
 __se_sys_splice fs/splice.c:1615 [inline]
 __x64_sys_splice+0x187/0x250 fs/splice.c:1615
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0x260 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f320778d169
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f32085cf038 EFLAGS: 00000246 ORIG_RAX: 0000000000000113
RAX: ffffffffffffffda RBX: 00007f32079a6080 RCX: 00007f320778d169
RDX: 0000000000000005 RSI: 0000000000000000 RDI: 0000000000000003
RBP: 00007f320780e2a0 R08: 000000000004ffe6 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007f32079a6080 R15: 00007ffd361df648
 </TASK>

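The cycle, read from the bottom of the chain up: splicing from a pipe into a regular file takes sb_writers before pipe->mutex (#0); splicing from a pipe into a sysfs file takes of->mutex while pipe->mutex is held (#1); a write to /sys/power/resume holds of->mutex while resume_store() resolves the written path via lookup_bdev(), taking an overlayfs directory inode lock during the walk (#2); and creating a file on overlayfs takes the upper filesystem's sb_writers while that directory lock is held (#3). Lockdep only needs to observe each edge once, in any task, to report the cycle. The sketch below shows one sequence of syscalls that would record all four edges; syzbot lists no reproducer for this bug, so the paths, the overlayfs mount at /mnt/ovl, and the assumption that its upperdir shares a backing filesystem with /mnt/backing are illustrative, not the actual workload (running it would also require root and a hibernation-capable kernel).

/* Illustrative sketch only, not the syzbot reproducer: exercises the four
 * lock-order edges from the lockdep chain above. Assumes an overlayfs
 * mount at /mnt/ovl whose upperdir lives on the same backing filesystem
 * as /mnt/backing. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int pfd[2], fd, resume, out;

    if (pipe(pfd))
        return 1;
    write(pfd[1], "xx", 2);        /* data for the two splice calls */

    /* Edge #3: O_CREAT on overlayfs holds the parent directory's inode
     * lock (ovl_i_mutex_dir_key) while ovl_create_object() takes the
     * upper filesystem's sb_writers via mnt_want_write(). */
    fd = open("/mnt/ovl/newfile", O_CREAT | O_RDWR, 0600);
    if (fd >= 0)
        close(fd);

    /* Edge #2: writing a path into /sys/power/resume holds the kernfs
     * of->mutex while resume_store() -> lookup_bdev() walks the path,
     * taking the overlayfs directory inode lock. */
    resume = open("/sys/power/resume", O_WRONLY);
    if (resume >= 0)
        write(resume, "/mnt/ovl/x", 10);

    /* Edge #1: splicing from the pipe into the sysfs file reaches
     * kernfs_fop_write_iter() (of->mutex) from iter_file_splice_write(),
     * which already holds pipe->mutex. */
    if (resume >= 0)
        splice(pfd[0], NULL, resume, NULL, 1, 0);

    /* Edge #0 (the acquisition reported above): splicing from the pipe
     * into a regular file takes sb_writers in __do_splice(), then
     * pipe->mutex in iter_file_splice_write(), closing the cycle. */
    out = open("/mnt/backing/out", O_CREAT | O_WRONLY, 0600);
    if (out >= 0)
        splice(pfd[0], NULL, out, NULL, 1, 0);
    return 0;
}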
Crashes (14):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/03/26 21:56 | upstream | 2df0c02dab82 | 89d30d73 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-selinux-root | possible deadlock in pipe_lock
2025/03/03 06:07 | upstream | 7eb172143d55 | c3901742 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-selinux-root | possible deadlock in pipe_lock
2025/01/21 05:10 | upstream | ffd294d346d1 | 6e87cfa2 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-selinux-root | possible deadlock in pipe_lock
2024/12/25 06:50 | upstream | 9b2ffa6148b1 | 444551c4 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-selinux-root | possible deadlock in pipe_lock
2025/03/08 05:49 | upstream | 21e4543a2e2f | 7e3bd60d | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream | possible deadlock in pipe_lock
2025/02/03 08:04 | upstream | 69e858e0b8b2 | 568559e4 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream | possible deadlock in pipe_lock
2025/02/02 18:25 | upstream | 69e858e0b8b2 | 568559e4 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream | possible deadlock in pipe_lock
2024/11/29 20:00 | upstream | 7af08b57bcb9 | b5d2be89 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream | possible deadlock in pipe_lock
2024/11/27 12:36 | upstream | aaf20f870da0 | 5df23865 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream | possible deadlock in pipe_lock
2024/11/24 15:45 | upstream | 9f16d5e6f220 | 68da6d95 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream | possible deadlock in pipe_lock
2024/11/08 06:26 | upstream | 906bd684e4b1 | 179b040e | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream | possible deadlock in pipe_lock
2024/11/04 04:35 | upstream | a33ab3f94f51 | f00eed24 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream | possible deadlock in pipe_lock
2025/01/18 10:00 | upstream | ad26fc09dabf | f2cb035c | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in pipe_lock
2025/01/17 02:52 | upstream | ce69b4019001 | f9e07a6e | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in pipe_lock