syzbot


possible deadlock in mnt_want_write (4)

Status: upstream: reported on 2024/09/11 14:21
Subsystems: kernfs
Reported-by: syzbot+8dcad7af57014dff2591@syzkaller.appspotmail.com
First crash: 74d, last: 74d
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [kernfs?] possible deadlock in mnt_want_write (4) | 0 (1) | 2024/09/11 14:21
Similar bugs (8)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-5.15 | possible deadlock in mnt_want_write missing-backport | C | done | - | 117 | 358d | 610d | 0/3 | auto-obsoleted due to no activity on 2024/10/15 12:27
upstream | possible deadlock in mnt_want_write fs | C | done | done | 662 | 1504d | 2312d | 15/28 | fixed on 2020/11/16 12:12
linux-6.1 | possible deadlock in mnt_want_write origin:upstream missing-backport | C | done | - | 56 | 359d | 603d | 0/3 | upstream: reported C repro on 2023/03/28 13:05
linux-4.19 | possible deadlock in mnt_want_write romfs | C | - | - | 730 | 624d | 2042d | 0/1 | upstream: reported C repro on 2019/04/19 16:54
android-49 | possible deadlock in mnt_want_write | - | - | - | 1 | 2295d | 2295d | 0/3 | auto-closed as invalid on 2019/02/22 14:57
upstream | possible deadlock in mnt_want_write (2) integrity overlayfs | C | done | - | 867 | 386d | 1254d | 25/28 | fixed on 2023/12/21 01:43
upstream | possible deadlock in mnt_want_write (3) kernfs | - | - | - | 9 | 221d | 324d | 0/28 | auto-obsoleted due to no activity on 2024/07/22 10:45
linux-4.14 | possible deadlock in mnt_want_write ubifs | C | - | - | 10467 | 625d | 2037d | 0/1 | upstream: reported C repro on 2019/04/25 05:09

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.11.0-rc6-syzkaller-00308-gb31c44928842 #0 Not tainted
------------------------------------------------------
syz.0.571/10510 is trying to acquire lock:
ffff888065736420 (sb_writers#5){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:515

but task is already holding lock:
ffff88805de73b38 (&ovl_i_mutex_dir_key[depth]){++++}-{3:3}, at: inode_lock include/linux/fs.h:800 [inline]
ffff88805de73b38 (&ovl_i_mutex_dir_key[depth]){++++}-{3:3}, at: open_last_lookups fs/namei.c:3644 [inline]
ffff88805de73b38 (&ovl_i_mutex_dir_key[depth]){++++}-{3:3}, at: path_openat+0x7fb/0x3470 fs/namei.c:3883

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #4 (&ovl_i_mutex_dir_key[depth]){++++}-{3:3}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
       down_read+0xb1/0xa40 kernel/locking/rwsem.c:1526
       inode_lock_shared include/linux/fs.h:810 [inline]
       lookup_slow+0x45/0x70 fs/namei.c:1734
       walk_component+0x2e1/0x410 fs/namei.c:2039
       lookup_last fs/namei.c:2542 [inline]
       path_lookupat+0x16f/0x450 fs/namei.c:2566
       filename_lookup+0x256/0x610 fs/namei.c:2595
       kern_path+0x35/0x50 fs/namei.c:2703
       lookup_bdev+0xc5/0x290 block/bdev.c:1157
       resume_store+0x1a0/0x710 kernel/power/hibernate.c:1235
       kernfs_fop_write_iter+0x3a1/0x500 fs/kernfs/file.c:334
       new_sync_write fs/read_write.c:497 [inline]
       vfs_write+0xa72/0xc90 fs/read_write.c:590
       ksys_write+0x1a0/0x2c0 fs/read_write.c:643
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f
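
The #4 edge is the root of the inversion: resume_store() interprets the written buffer as a block device path and calls lookup_bdev(), so an ordinary write(2) to this kernfs attribute ends up doing a full path walk (taking directory inode rwsems such as the overlay's) while kernfs_fop_write_iter() still holds of->mutex. A minimal userspace sketch of just this edge; the written path "/dev/loop0" is an illustrative assumption, not taken from the report:

/* Hedged sketch: writing a device path to /sys/power/resume makes
 * resume_store() -> lookup_bdev() -> kern_path() walk the path and
 * take inode locks while kernfs's of->mutex is held. */
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/sys/power/resume", O_WRONLY);
	if (fd < 0)
		return 1;
	(void)write(fd, "/dev/loop0", 10); /* path walk happens here */
	close(fd);
	return 0;
}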

-> #3 (&of->mutex){+.+.}-{3:3}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
       __mutex_lock_common kernel/locking/mutex.c:608 [inline]
       __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
       kernfs_seq_start+0x53/0x3b0 fs/kernfs/file.c:154
       traverse+0x14f/0x550 fs/seq_file.c:106
       seq_read_iter+0xc5e/0xd60 fs/seq_file.c:195
       copy_splice_read+0x662/0xb60 fs/splice.c:365
       do_splice_read fs/splice.c:985 [inline]
       splice_file_to_pipe+0x299/0x500 fs/splice.c:1295
       do_sendfile+0x515/0xe20 fs/read_write.c:1301
       __do_sys_sendfile64 fs/read_write.c:1362 [inline]
       __se_sys_sendfile64+0x17c/0x1e0 fs/read_write.c:1348
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #2 (&p->lock){+.+.}-{3:3}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
       __mutex_lock_common kernel/locking/mutex.c:608 [inline]
       __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
       seq_read_iter+0xb7/0xd60 fs/seq_file.c:182
       copy_splice_read+0x662/0xb60 fs/splice.c:365
       do_splice_read fs/splice.c:985 [inline]
       splice_file_to_pipe+0x299/0x500 fs/splice.c:1295
       do_sendfile+0x515/0xe20 fs/read_write.c:1301
       __do_sys_sendfile64 fs/read_write.c:1362 [inline]
       __se_sys_sendfile64+0x17c/0x1e0 fs/read_write.c:1348
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&pipe->mutex){+.+.}-{3:3}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
       __mutex_lock_common kernel/locking/mutex.c:608 [inline]
       __mutex_lock+0x136/0xd70 kernel/locking/mutex.c:752
       iter_file_splice_write+0x335/0x14e0 fs/splice.c:687
       do_splice_from fs/splice.c:941 [inline]
       do_splice+0xd77/0x1900 fs/splice.c:1354
       __do_splice fs/splice.c:1436 [inline]
       __do_sys_splice fs/splice.c:1652 [inline]
       __se_sys_splice+0x331/0x4a0 fs/splice.c:1634
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (sb_writers#5){.+.+}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3133 [inline]
       check_prevs_add kernel/locking/lockdep.c:3252 [inline]
       validate_chain+0x18e0/0x5900 kernel/locking/lockdep.c:3868
       __lock_acquire+0x137a/0x2040 kernel/locking/lockdep.c:5142
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
       percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
       __sb_start_write include/linux/fs.h:1676 [inline]
       sb_start_write+0x4d/0x1c0 include/linux/fs.h:1812
       mnt_want_write+0x3f/0x90 fs/namespace.c:515
       ovl_create_object+0x13b/0x370 fs/overlayfs/dir.c:642
       lookup_open fs/namei.c:3578 [inline]
       open_last_lookups fs/namei.c:3647 [inline]
       path_openat+0x1a9a/0x3470 fs/namei.c:3883
       do_filp_open+0x235/0x490 fs/namei.c:3913
       do_sys_openat2+0x13e/0x1d0 fs/open.c:1416
       do_sys_open fs/open.c:1431 [inline]
       __do_sys_creat fs/open.c:1507 [inline]
       __se_sys_creat fs/open.c:1501 [inline]
       __x64_sys_creat+0x123/0x170 fs/open.c:1501
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  sb_writers#5 --> &of->mutex --> &ovl_i_mutex_dir_key[depth]

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&ovl_i_mutex_dir_key[depth]);
                               lock(&of->mutex);
                               lock(&ovl_i_mutex_dir_key[depth]);
  rlock(sb_writers#5);

 *** DEADLOCK ***
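
Taken together, the cycle is sb_writers -> &pipe->mutex -> &p->lock -> &of->mutex -> &ovl_i_mutex_dir_key -> sb_writers, and each edge is reachable from ordinary syscalls. Below is a hedged sketch of the remaining edges (the resume write for edge #4 is shown above); /mnt/ovl as an overlayfs mount and /sys/power/state as the seq_file-backed sysfs node are illustrative assumptions, which superblock's sb_writers each call binds to depends on the mount layout, and run sequentially in one task these calls only record the lock orderings in lockdep rather than actually deadlocking:

/* Hedged sketch; paths are assumptions, not from the report. Each call
 * records one or more of the dependency edges from the chain above. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <sys/sendfile.h>
#include <unistd.h>

int main(void)
{
	int p[2];
	if (pipe(p) < 0)
		return 1;
	(void)write(p[1], "x", 1); /* give the splice below data to move */

	/* Edge #1: splicing pipe -> file takes sb_writers (file_start_write)
	 * and then pipe->mutex (iter_file_splice_write). */
	int out = open("/mnt/ovl/out.txt", O_WRONLY | O_CREAT, 0644);
	if (out >= 0)
		(void)splice(p[0], NULL, out, NULL, 1, 0);

	/* Edges #2 and #3: sendfile() from a sysfs (kernfs/seq_file) node
	 * into the pipe takes pipe->mutex, then the seq_file's p->lock
	 * (seq_read_iter), then of->mutex (kernfs_seq_start). */
	int seq = open("/sys/power/state", O_RDONLY);
	if (seq >= 0)
		(void)sendfile(p[1], seq, NULL, 64);

	/* Edge #0: creat() on the overlay locks the parent directory inode
	 * (ovl_i_mutex_dir_key), then ovl_create_object() calls
	 * mnt_want_write() -> sb_start_write() on the writable layer. */
	(void)creat("/mnt/ovl/newfile", 0644);
	return 0;
}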

2 locks held by syz.0.571/10510:
 #0: ffff888023f38420 (sb_writers#16){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:515
 #1: ffff88805de73b38 (&ovl_i_mutex_dir_key[depth]){++++}-{3:3}, at: inode_lock include/linux/fs.h:800 [inline]
 #1: ffff88805de73b38 (&ovl_i_mutex_dir_key[depth]){++++}-{3:3}, at: open_last_lookups fs/namei.c:3644 [inline]
 #1: ffff88805de73b38 (&ovl_i_mutex_dir_key[depth]){++++}-{3:3}, at: path_openat+0x7fb/0x3470 fs/namei.c:3883

stack backtrace:
CPU: 0 UID: 0 PID: 10510 Comm: syz.0.571 Not tainted 6.11.0-rc6-syzkaller-00308-gb31c44928842 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/06/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:93 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:119
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2186
 check_prev_add kernel/locking/lockdep.c:3133 [inline]
 check_prevs_add kernel/locking/lockdep.c:3252 [inline]
 validate_chain+0x18e0/0x5900 kernel/locking/lockdep.c:3868
 __lock_acquire+0x137a/0x2040 kernel/locking/lockdep.c:5142
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5759
 percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
 __sb_start_write include/linux/fs.h:1676 [inline]
 sb_start_write+0x4d/0x1c0 include/linux/fs.h:1812
 mnt_want_write+0x3f/0x90 fs/namespace.c:515
 ovl_create_object+0x13b/0x370 fs/overlayfs/dir.c:642
 lookup_open fs/namei.c:3578 [inline]
 open_last_lookups fs/namei.c:3647 [inline]
 path_openat+0x1a9a/0x3470 fs/namei.c:3883
 do_filp_open+0x235/0x490 fs/namei.c:3913
 do_sys_openat2+0x13e/0x1d0 fs/open.c:1416
 do_sys_open fs/open.c:1431 [inline]
 __do_sys_creat fs/open.c:1507 [inline]
 __se_sys_creat fs/open.c:1501 [inline]
 __x64_sys_creat+0x123/0x170 fs/open.c:1501
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fb30f77cef9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fb3104d6038 EFLAGS: 00000246 ORIG_RAX: 0000000000000055
RAX: ffffffffffffffda RBX: 00007fb30f936058 RCX: 00007fb30f77cef9
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000020000100
RBP: 00007fb30f7ef046 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007fb30f936058 R15: 00007fffff90dd88
 </TASK>

Crashes (1):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2024/09/07 14:11 | upstream | b31c44928842 | 9750182a | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | possible deadlock in mnt_want_write