syzbot


possible deadlock in seq_read_iter (2)

Status: fixed on 2024/02/02 10:05
Subsystems: overlayfs
Reported-by: syzbot+da4f9f61f96525c62cc7@syzkaller.appspotmail.com
Fix commit: da40448ce4eb fs: move file_start_write() into direct_splice_actor()
First crash: 516d, last: 353d
Cause bisection: introduced by (bisect log):
commit 1e8c813b083c4122dfeaa5c3b11028331026e85d
Author: Christoph Hellwig <hch@lst.de>
Date: Wed May 31 12:55:32 2023 +0000

  PM: hibernate: don't use early_lookup_bdev in resume_store

  (This change has resume_store resolve the resume device with a regular
  path lookup, lookup_bdev(); that lookup running under a kernfs write is
  the of->mutex -> inode-lock link recorded as dependency #2 in the trace
  below.)

Crash: possible deadlock in seq_read_iter (log)
Repro: C syz .config
  
Fix bisection: fixed by (bisect log):
commit da40448ce4eb4de18eb7b0db61dddece32677939
Author: Amir Goldstein <amir73il@gmail.com>
Date: Thu Nov 30 14:16:23 2023 +0000

  fs: move file_start_write() into direct_splice_actor()
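
The cycle closed because do_sendfile() took the destination's sb_writers (file_start_write()) before splicing, so the seq_file mutex (&p->lock) acquired in copy_splice_read() nested inside sb_writers, the reverse of the order established by the kernfs/overlayfs chain in the trace below. A minimal sketch of the shape of the fix, inferred from the commit subject rather than quoted from the diff: take the freeze protection inside the splice actor, around the write alone.

  /* Sketch, not the verbatim patch: the caller no longer holds sb_writers
   * across the whole do_splice_direct(); the actor takes it only around
   * the write to the destination file. */
  static int direct_splice_actor(struct pipe_inode_info *pipe,
                                 struct splice_desc *sd)
  {
          struct file *file = sd->u.file;
          long ret;

          file_start_write(file);         /* sb_writers: write side only */
          ret = do_splice_from(pipe, file, sd->opos, sd->total_len,
                               sd->flags);
          file_end_write(file);
          return ret;
  }

With that, reading the seq_file source no longer happens under sb_writers, which removes the #0 edge of the lockdep cycle in the sample report.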

  
Discussions (3)
Title | Replies (including bot) | Last reply
[syzbot] [overlayfs?] possible deadlock in seq_read_iter (2) | 6 (13) | 2024/01/29 05:07
[syzbot] Monthly overlayfs report (Aug 2023) | 0 (1) | 2023/08/24 07:15
[syzbot] Monthly overlayfs report (Jul 2023) | 0 (1) | 2023/07/24 08:31
Similar bugs (4)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | possible deadlock in seq_read_iter (4) overlayfs | | | | 36 | 15d | 200d | 0/28 | closed as dup on 2024/05/27 09:33
linux-5.15 | possible deadlock in seq_read_iter | | | | 36 | 2d12h | 229d | 0/3 | upstream: reported on 2024/04/28 04:21
upstream | possible deadlock in seq_read_iter fs | | | | 2 | 1224d | 1230d | 0/28 | auto-closed as invalid on 2021/12/05 03:01
upstream | possible deadlock in seq_read_iter (3) overlayfs | | | | 148 | 215d | 305d | 25/28 | fixed on 2024/05/23 00:16
Last patch testing requests (4)
Created | Duration | User | Patch | Repo | Result
2024/01/28 07:54 | 20m | hdanton@sina.com | | https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master | OK log
2024/01/27 11:46 | 1m | hdanton@sina.com | patch | https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 2cf4f94d8e86 | error
2024/01/09 03:56 | 26m | retest repro | | upstream | OK log
2023/12/20 03:53 | 18m | amir73il@gmail.com | | https://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git vfs.rw | OK log

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.7.0-rc6-syzkaller-00010-g2cf4f94d8e86 #0 Not tainted
------------------------------------------------------
syz-executor424/7758 is trying to acquire lock:
ffff88801f1ef9e0 (&p->lock){+.+.}-{3:3}, at: seq_read_iter+0xb2/0xd10 fs/seq_file.c:182

but task is already holding lock:
ffff88814cd7a418 (sb_writers#4){.+.+}-{0:0}, at: do_sendfile+0x607/0x1000 fs/read_write.c:1253

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #3 (sb_writers#4){.+.+}-{0:0}:
       lock_acquire+0x1e3/0x530 kernel/locking/lockdep.c:5754
       percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
       __sb_start_write include/linux/fs.h:1635 [inline]
       sb_start_write+0x4d/0x1c0 include/linux/fs.h:1710
       mnt_want_write+0x3f/0x90 fs/namespace.c:404
       ovl_create_object+0x13b/0x360 fs/overlayfs/dir.c:629
       lookup_open fs/namei.c:3477 [inline]
       open_last_lookups fs/namei.c:3546 [inline]
       path_openat+0x13fa/0x3290 fs/namei.c:3776
       do_filp_open+0x234/0x490 fs/namei.c:3809
       do_sys_openat2+0x13e/0x1d0 fs/open.c:1437
       do_sys_open fs/open.c:1452 [inline]
       __do_sys_open fs/open.c:1460 [inline]
       __se_sys_open fs/open.c:1456 [inline]
       __x64_sys_open+0x225/0x270 fs/open.c:1456
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0x45/0x110 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x63/0x6b

-> #2 (&ovl_i_mutex_dir_key[depth]){++++}-{3:3}:
       lock_acquire+0x1e3/0x530 kernel/locking/lockdep.c:5754
       down_read+0xb1/0xa40 kernel/locking/rwsem.c:1526
       inode_lock_shared include/linux/fs.h:812 [inline]
       lookup_slow+0x45/0x70 fs/namei.c:1710
       walk_component+0x2d0/0x400 fs/namei.c:2002
       lookup_last fs/namei.c:2459 [inline]
       path_lookupat+0x16f/0x450 fs/namei.c:2483
       filename_lookup+0x255/0x610 fs/namei.c:2512
       kern_path+0x35/0x50 fs/namei.c:2610
       lookup_bdev+0xc5/0x290 block/bdev.c:979
       resume_store+0x1a0/0x710 kernel/power/hibernate.c:1177
       kernfs_fop_write_iter+0x3b3/0x510 fs/kernfs/file.c:334
       do_iter_readv_writev+0x330/0x4a0
       do_iter_write+0x1f6/0x8d0 fs/read_write.c:860
       iter_file_splice_write+0x86d/0x1010 fs/splice.c:736
       do_splice_from fs/splice.c:933 [inline]
       direct_splice_actor+0xea/0x1c0 fs/splice.c:1142
       splice_direct_to_actor+0x376/0x9e0 fs/splice.c:1088
       do_splice_direct+0x2ac/0x3f0 fs/splice.c:1194
       do_sendfile+0x62c/0x1000 fs/read_write.c:1254
       __do_sys_sendfile64 fs/read_write.c:1322 [inline]
       __se_sys_sendfile64+0x17c/0x1e0 fs/read_write.c:1308
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0x45/0x110 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x63/0x6b

-> #1 (&of->mutex){+.+.}-{3:3}:
       lock_acquire+0x1e3/0x530 kernel/locking/lockdep.c:5754
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x136/0xd60 kernel/locking/mutex.c:747
       kernfs_seq_start+0x53/0x3a0 fs/kernfs/file.c:154
       seq_read_iter+0x3d4/0xd10 fs/seq_file.c:225
       call_read_iter include/linux/fs.h:2014 [inline]
       new_sync_read fs/read_write.c:389 [inline]
       vfs_read+0x78b/0xb00 fs/read_write.c:470
       ksys_read+0x1a0/0x2c0 fs/read_write.c:613
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0x45/0x110 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x63/0x6b

-> #0 (&p->lock){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain+0x1909/0x5ab0 kernel/locking/lockdep.c:3869
       __lock_acquire+0x1345/0x1fd0 kernel/locking/lockdep.c:5137
       lock_acquire+0x1e3/0x530 kernel/locking/lockdep.c:5754
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x136/0xd60 kernel/locking/mutex.c:747
       seq_read_iter+0xb2/0xd10 fs/seq_file.c:182
       call_read_iter include/linux/fs.h:2014 [inline]
       copy_splice_read+0x4c9/0x9c0 fs/splice.c:364
       splice_direct_to_actor+0x2c4/0x9e0 fs/splice.c:1069
       do_splice_direct+0x2ac/0x3f0 fs/splice.c:1194
       do_sendfile+0x62c/0x1000 fs/read_write.c:1254
       __do_sys_sendfile64 fs/read_write.c:1322 [inline]
       __se_sys_sendfile64+0x17c/0x1e0 fs/read_write.c:1308
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0x45/0x110 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x63/0x6b

other info that might help us debug this:

Chain exists of:
  &p->lock --> &ovl_i_mutex_dir_key[depth] --> sb_writers#4

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  rlock(sb_writers#4);
                               lock(&ovl_i_mutex_dir_key[depth]);
                               lock(sb_writers#4);
  lock(&p->lock);

 *** DEADLOCK ***

1 lock held by syz-executor424/7758:
 #0: ffff88814cd7a418 (sb_writers#4){.+.+}-{0:0}, at: do_sendfile+0x607/0x1000 fs/read_write.c:1253

stack backtrace:
CPU: 0 PID: 7758 Comm: syz-executor424 Not tainted 6.7.0-rc6-syzkaller-00010-g2cf4f94d8e86 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/17/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
 check_noncircular+0x366/0x490 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain+0x1909/0x5ab0 kernel/locking/lockdep.c:3869
 __lock_acquire+0x1345/0x1fd0 kernel/locking/lockdep.c:5137
 lock_acquire+0x1e3/0x530 kernel/locking/lockdep.c:5754
 __mutex_lock_common kernel/locking/mutex.c:603 [inline]
 __mutex_lock+0x136/0xd60 kernel/locking/mutex.c:747
 seq_read_iter+0xb2/0xd10 fs/seq_file.c:182
 call_read_iter include/linux/fs.h:2014 [inline]
 copy_splice_read+0x4c9/0x9c0 fs/splice.c:364
 splice_direct_to_actor+0x2c4/0x9e0 fs/splice.c:1069
 do_splice_direct+0x2ac/0x3f0 fs/splice.c:1194
 do_sendfile+0x62c/0x1000 fs/read_write.c:1254
 __do_sys_sendfile64 fs/read_write.c:1322 [inline]
 __se_sys_sendfile64+0x17c/0x1e0 fs/read_write.c:1308
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0x45/0x110 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x63/0x6b
RIP: 0033:0x7fef41211d49
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 81 18 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fef411d2218 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 00007fef4129c3e8 RCX: 00007fef41211d49
RDX: 0000000000000000 RSI: 0000000000000004 RDI: 0000000000000003
RBP: 00007fef4129c3e0 R08: 0000000000000000 R09: 0000000000000000
R10: 0001000000201007 R11: 0000000000000246 R12: 00007fef41269060
R13: 0030656c69662f2e R14: 6e6f3d6f6e69782c R15: 0079616c7265766f
 </TASK>
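
Per the #0 stack, the reproducing call shape is a sendfile() whose source is a kernfs-backed seq_file (read through seq_read_iter(), taking &p->lock) and whose destination is a file on the overlayfs mount (the sb_writers#4 instance above). Below is an illustrative userspace sketch of that shape, with hypothetical paths; it is not the actual syzkaller reproducer, and hitting the report also requires dependencies #1-#3 (a kernfs read, a splice write into /sys/power/resume, and an overlayfs O_CREAT open) to have been recorded first.

  /* Illustrative only: mirrors the #0 call shape, not the syzkaller repro.
   * Paths are placeholders. */
  #include <fcntl.h>
  #include <sys/sendfile.h>
  #include <unistd.h>

  int main(void)
  {
          /* Source: a sysfs (kernfs) file, read via seq_read_iter(). */
          int in = open("/sys/power/disk", O_RDONLY);
          /* Destination: a file on an overlayfs mount; do_sendfile()
           * takes sb_writers on it before splicing. */
          int out = open("/mnt/overlay/file0", O_WRONLY | O_CREAT, 0644);

          if (in >= 0 && out >= 0)
                  /* do_sendfile -> do_splice_direct -> copy_splice_read
                   * -> seq_read_iter: &p->lock taken under sb_writers. */
                  sendfile(out, in, NULL, 4096);

          close(in);
          close(out);
          return 0;
  }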

Crashes (14):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2023/12/19 19:42 | upstream | 2cf4f94d8e86 | 3ad490ea | .config | strace log | report | syz | C | | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in seq_read_iter
2023/12/26 03:56 | upstream | fbafc3e621c3 | fb427a07 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in seq_read_iter
2023/12/24 09:21 | upstream | 3f82f1c3a036 | fb427a07 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in seq_read_iter
2023/12/08 05:24 | upstream | 9ace34a8e446 | 28b24332 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in seq_read_iter
2023/11/20 03:29 | upstream | eb3479bc23fa | cb976f63 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in seq_read_iter
2023/11/08 02:54 | upstream | 13d88ac54ddd | 83211397 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in seq_read_iter
2023/10/16 08:06 | upstream | fbe1bf1e5ff1 | f757a323 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in seq_read_iter
2023/10/04 23:21 | upstream | cbf3a2cb156a | b7d7ff54 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in seq_read_iter
2023/08/23 21:23 | upstream | 89bf6209cad6 | b81ca3f6 | .config | console log | report | | | info | | ci2-upstream-fs | possible deadlock in seq_read_iter
2023/07/17 13:10 | upstream | fdf0eaf11452 | e5f10889 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-smack-root | possible deadlock in seq_read_iter
2023/07/16 22:21 | upstream | 20edcec23f92 | 35d9ecc5 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-smack-root | possible deadlock in seq_read_iter
2023/07/16 21:40 | upstream | 20edcec23f92 | 35d9ecc5 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in seq_read_iter
2023/07/16 19:33 | upstream | 831fe284d827 | 35d9ecc5 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-root | possible deadlock in seq_read_iter
2023/07/15 13:09 | upstream | b6e6cc1f78c7 | 35d9ecc5 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in seq_read_iter