syzbot


possible deadlock in kernfs_seq_start

Status: upstream: reported C repro on 2024/04/27 11:09
Subsystems: kernfs
Reported-by: syzbot+4c493dcd5a68168a94b2@syzkaller.appspotmail.com
First crash: 231d, last: 41d
Cause bisection: introduced by (bisect log):
commit 1e8c813b083c4122dfeaa5c3b11028331026e85d
Author: Christoph Hellwig <hch@lst.de>
Date: Wed May 31 12:55:32 2023 +0000

  PM: hibernate: don't use early_lookup_bdev in resume_store

Crash: possible deadlock in seq_read_iter (log)
Repro: C syz .config
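
The bisected commit replaced early_lookup_bdev() with lookup_bdev() in resume_store(), turning the device-path resolution into a full VFS path walk. Because sysfs writes reach resume_store() via kernfs_fop_write_iter() with of->mutex held, the walk's directory inode locks are now taken under of->mutex; this is edge #1 in the sample crash report below. A minimal sketch of that kernel path, reconstructed from the call trace (simplified, not the exact upstream source; error handling and major:minor parsing omitted):

/*
 * Hedged sketch of resume_store() after commit 1e8c813b083c,
 * reconstructed from the trace below; not the upstream source.
 */
static ssize_t resume_store(struct kobject *kobj, struct kobj_attribute *attr,
                            const char *buf, size_t n)
{
        dev_t dev;
        int error;

        /*
         * The caller, kernfs_fop_write_iter(), already holds of->mutex.
         * lookup_bdev() walks the path (kern_path -> lookup_slow ->
         * inode_lock_shared), so directory inode locks are acquired
         * while of->mutex is held -- the #1 dependency edge below.
         */
        error = lookup_bdev(buf, &dev);
        if (error)
                return error;

        swsusp_resume_device = dev;
        return n;
}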
  
Duplicate bugs (2)
Title Repro Cause bisect Fix bisect Count Last Reported Patched Status
possible deadlock in seq_read_iter (4) overlayfs 36 12d 197d 0/28 closed as dup on 2024/05/27 09:33
possible deadlock in iter_file_splice_write (4) overlayfs 22 15d 189d 0/28 closed as dup on 2024/06/05 11:05
Discussions (2)
Title Replies (including bot) Last reply
[syzbot] [kernfs?] possible deadlock in kernfs_seq_start 8 (13) 2024/11/08 11:30
[syzbot] Monthly kernfs report (May 2024) 0 (1) 2024/05/08 08:25
Last patch testing requests (4)
Created Duration User Patch Repo Result
2024/11/08 11:10 11m hdanton@sina.com patch https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git dccb07f2914c report log
2024/07/31 06:06 12m retest repro upstream report log
2024/05/22 05:36 21m retest repro upstream report log
2024/05/10 10:27 19m hdanton@sina.com patch https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git dccb07f2914c OK log
Fix bisection attempts (3)
Created Duration User Patch Repo Result
2024/10/30 19:19 2h34m bisect fix upstream OK (0) job log log
2024/09/30 01:03 1h29m bisect fix upstream OK (0) job log log
2024/08/22 11:38 58m bisect fix upstream OK (0) job log log

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.9.0-rc7-syzkaller-00012-gdccb07f2914c #0 Not tainted
------------------------------------------------------
syz-executor149/5078 is trying to acquire lock:
ffff88802a978888 (&of->mutex){+.+.}-{3:3}, at: kernfs_seq_start+0x53/0x3b0 fs/kernfs/file.c:154

but task is already holding lock:
ffff88802d80b540 (&p->lock){+.+.}-{3:3}, at: seq_read_iter+0xb7/0xd60 fs/seq_file.c:182

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #4 (&p->lock){+.+.}-{3:3}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
       __mutex_lock_common kernel/locking/mutex.c:608 [inline]
       __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
       seq_read_iter+0xb7/0xd60 fs/seq_file.c:182
       call_read_iter include/linux/fs.h:2104 [inline]
       copy_splice_read+0x662/0xb60 fs/splice.c:365
       do_splice_read fs/splice.c:985 [inline]
       splice_file_to_pipe+0x299/0x500 fs/splice.c:1295
       do_sendfile+0x515/0xdc0 fs/read_write.c:1301
       __do_sys_sendfile64 fs/read_write.c:1362 [inline]
       __se_sys_sendfile64+0x17c/0x1e0 fs/read_write.c:1348
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #3 (&pipe->mutex){+.+.}-{3:3}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
       __mutex_lock_common kernel/locking/mutex.c:608 [inline]
       __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
       iter_file_splice_write+0x335/0x14e0 fs/splice.c:687
       backing_file_splice_write+0x2bc/0x4c0 fs/backing-file.c:289
       ovl_splice_write+0x3cf/0x500 fs/overlayfs/file.c:379
       do_splice_from fs/splice.c:941 [inline]
       do_splice+0xd77/0x1880 fs/splice.c:1354
       __do_splice fs/splice.c:1436 [inline]
       __do_sys_splice fs/splice.c:1652 [inline]
       __se_sys_splice+0x331/0x4a0 fs/splice.c:1634
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #2 (sb_writers#4){.+.+}-{0:0}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
       percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
       __sb_start_write include/linux/fs.h:1664 [inline]
       sb_start_write+0x4d/0x1c0 include/linux/fs.h:1800
       mnt_want_write+0x3f/0x90 fs/namespace.c:409
       ovl_create_object+0x13b/0x370 fs/overlayfs/dir.c:629
       lookup_open fs/namei.c:3497 [inline]
       open_last_lookups fs/namei.c:3566 [inline]
       path_openat+0x1425/0x3240 fs/namei.c:3796
       do_filp_open+0x235/0x490 fs/namei.c:3826
       do_sys_openat2+0x13e/0x1d0 fs/open.c:1406
       do_sys_open fs/open.c:1421 [inline]
       __do_sys_open fs/open.c:1429 [inline]
       __se_sys_open fs/open.c:1425 [inline]
       __x64_sys_open+0x225/0x270 fs/open.c:1425
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&ovl_i_mutex_dir_key[depth]){++++}-{3:3}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
       down_read+0xb1/0xa40 kernel/locking/rwsem.c:1526
       inode_lock_shared include/linux/fs.h:805 [inline]
       lookup_slow+0x45/0x70 fs/namei.c:1708
       walk_component+0x2e1/0x410 fs/namei.c:2004
       lookup_last fs/namei.c:2461 [inline]
       path_lookupat+0x16f/0x450 fs/namei.c:2485
       filename_lookup+0x256/0x610 fs/namei.c:2514
       kern_path+0x35/0x50 fs/namei.c:2622
       lookup_bdev+0xc5/0x290 block/bdev.c:1136
       resume_store+0x1a0/0x710 kernel/power/hibernate.c:1235
       kernfs_fop_write_iter+0x3a1/0x500 fs/kernfs/file.c:334
       call_write_iter include/linux/fs.h:2110 [inline]
       new_sync_write fs/read_write.c:497 [inline]
       vfs_write+0xa84/0xcb0 fs/read_write.c:590
       ksys_write+0x1a0/0x2c0 fs/read_write.c:643
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&of->mutex){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain+0x18cb/0x58e0 kernel/locking/lockdep.c:3869
       __lock_acquire+0x1346/0x1fd0 kernel/locking/lockdep.c:5137
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
       __mutex_lock_common kernel/locking/mutex.c:608 [inline]
       __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
       kernfs_seq_start+0x53/0x3b0 fs/kernfs/file.c:154
       traverse+0x14f/0x550 fs/seq_file.c:106
       seq_read_iter+0xc5e/0xd60 fs/seq_file.c:195
       call_read_iter include/linux/fs.h:2104 [inline]
       copy_splice_read+0x662/0xb60 fs/splice.c:365
       do_splice_read fs/splice.c:985 [inline]
       splice_file_to_pipe+0x299/0x500 fs/splice.c:1295
       do_sendfile+0x515/0xdc0 fs/read_write.c:1301
       __do_sys_sendfile64 fs/read_write.c:1362 [inline]
       __se_sys_sendfile64+0x17c/0x1e0 fs/read_write.c:1348
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  &of->mutex --> &pipe->mutex --> &p->lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&p->lock);
                               lock(&pipe->mutex);
                               lock(&p->lock);
  lock(&of->mutex);

 *** DEADLOCK ***

2 locks held by syz-executor149/5078:
 #0: ffff88807de89868 (&pipe->mutex){+.+.}-{3:3}, at: splice_file_to_pipe+0x2e/0x500 fs/splice.c:1292
 #1: ffff88802d80b540 (&p->lock){+.+.}-{3:3}, at: seq_read_iter+0xb7/0xd60 fs/seq_file.c:182

stack backtrace:
CPU: 0 PID: 5078 Comm: syz-executor149 Not tainted 6.9.0-rc7-syzkaller-00012-gdccb07f2914c #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain+0x18cb/0x58e0 kernel/locking/lockdep.c:3869
 __lock_acquire+0x1346/0x1fd0 kernel/locking/lockdep.c:5137
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
 __mutex_lock_common kernel/locking/mutex.c:608 [inline]
 __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
 kernfs_seq_start+0x53/0x3b0 fs/kernfs/file.c:154
 traverse+0x14f/0x550 fs/seq_file.c:106
 seq_read_iter+0xc5e/0xd60 fs/seq_file.c:195
 call_read_iter include/linux/fs.h:2104 [inline]
 copy_splice_read+0x662/0xb60 fs/splice.c:365
 do_splice_read fs/splice.c:985 [inline]
 splice_file_to_pipe+0x299/0x500 fs/splice.c:1295
 do_sendfile+0x515/0xdc0 fs/read_write.c:1301
 __do_sys_sendfile64 fs/read_write.c:1362 [inline]
 __se_sys_sendfile64+0x17c/0x1e0 fs/read_write.c:1348
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f7d8d33f8e9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 51 18 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f7d8d2b8218 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 00007f7d8d3c9428 RCX: 00007f7d8d33f8e9
RDX: 0000000000000000 RSI: 0000000000000007 RDI: 0000000000000006
RBP: 00007f7d8d3c9420 R08: 00007ffdc82d7c57 R09: 0000000000000000
R10: 0000000000000004 R11: 0000000000000246 R12: 00007f7d8d39606c
R13: 0030656c69662f2e R14: 0079616c7265766f R15: 00007f7d8d39603b
 </TASK>
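
The chain closes as of->mutex -> ovl_i_mutex_dir_key -> sb_writers -> pipe->mutex -> p->lock -> of->mutex. In user-space terms, the final inversion comes from one task writing a device path to /sys/power/resume while another sendfile()s a kernfs seq file into a pipe. A hedged, compilable sketch of those two legs (this is not the syzkaller reproducer; the file paths are assumptions, and the #2/#3 edges additionally require overlayfs create/splice traffic, omitted here):

/* Two-thread sketch of the colliding lock orders; illustrative only. */
#include <fcntl.h>
#include <pthread.h>
#include <stddef.h>
#include <sys/sendfile.h>
#include <unistd.h>

static void *seq_reader(void *arg)
{
        int pfd[2];
        /* Assumption: /sys/power/resume used for both legs; any
         * kernfs-backed seq file would do for the read side. */
        int fd = open("/sys/power/resume", O_RDONLY);

        if (fd < 0 || pipe(pfd) < 0)
                return NULL;
        for (;;) {
                /* do_sendfile -> splice_file_to_pipe -> seq_read_iter ->
                 * kernfs_seq_start: pipe->mutex, then p->lock, then
                 * of->mutex. */
                sendfile(pfd[1], fd, NULL, 4096);
                lseek(fd, 0, SEEK_SET);
        }
        return NULL;
}

static void *resume_writer(void *arg)
{
        int fd = open("/sys/power/resume", O_WRONLY);

        if (fd < 0)
                return NULL;
        for (;;)
                /* kernfs_fop_write_iter -> resume_store -> lookup_bdev:
                 * of->mutex, then directory inode locks. */
                write(fd, "/dev/loop0", 10);
        return NULL;
}

int main(void)
{
        pthread_t a, b;

        pthread_create(&a, NULL, seq_reader, NULL);
        pthread_create(&b, NULL, resume_writer, NULL);
        pthread_join(a, NULL);
        return 0;
}

Build with cc -pthread. On its own this exercises only the #0 and #1 edges; lockdep reports the cycle once the overlayfs activity from edges #2 and #3 has also been observed.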

Crashes (14):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2024/05/08 05:35 upstream dccb07f2914c 4cf3f9b3 .config strace log report syz C [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in kernfs_seq_start
2024/07/15 15:47 upstream 0c3836482481 c605e6a2 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in kernfs_seq_start
2024/07/12 23:20 upstream e091caf99f3a eaeb5c15 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root possible deadlock in kernfs_seq_start
2024/06/17 18:29 upstream 2ccbdf43d5e7 1f11cfd7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in kernfs_seq_start
2024/06/17 02:31 upstream 2ccbdf43d5e7 f429ab00 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in kernfs_seq_start
2024/06/10 18:30 upstream 83a7eefedc9b 048c640a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in kernfs_seq_start
2024/06/07 13:07 upstream 8a92980606e3 121701b6 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in kernfs_seq_start
2024/06/03 04:18 upstream c3f38fa61af7 3113787f .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in kernfs_seq_start
2024/05/27 22:24 upstream 2bfcfd584ff5 761766e6 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in kernfs_seq_start
2024/05/27 21:15 upstream 2bfcfd584ff5 761766e6 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in kernfs_seq_start
2024/05/06 17:20 upstream dd5a440a31fa d884b519 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in kernfs_seq_start
2024/04/23 11:07 upstream 71b1543c83d6 21339d7b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in kernfs_seq_start
2024/04/23 11:07 upstream 71b1543c83d6 21339d7b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in kernfs_seq_start
2024/04/23 11:06 upstream 71b1543c83d6 21339d7b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in kernfs_seq_start
* Struck-through repros no longer work on HEAD.