syzbot


possible deadlock in walk_component (2)

Status: auto-obsoleted due to no activity on 2023/12/03 10:44
Subsystems: kernfs
Reported-by: syzbot+39acbe8ff4cab0acdb9d@syzkaller.appspotmail.com
First crash: 288d, last: 247d
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [kernfs?] possible deadlock in walk_component (2) | 0 (1) | 2023/07/19 03:26
Similar bugs (3)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-4.19 | possible deadlock in walk_component | syz | error | - | 126 | 524d | 1392d | 0/1 | upstream: reported syz repro on 2020/07/06 08:23
upstream | possible deadlock in walk_component (3) kernfs | - | - | - | 1 | 25d | 21d | 0/26 | closed as dup on 2024/04/09 14:55
upstream | possible deadlock in walk_component fs | - | - | - | 11 | 1305d | 1518d | 0/26 | auto-closed as invalid on 2021/01/29 13:43

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.5.0-rc1-syzkaller-00201-g2772d7df3c93 #0 Not tainted
------------------------------------------------------
syz-executor.3/28959 is trying to acquire lock:
ffff8880340fa450 (&ovl_i_mutex_dir_key[depth]){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:781 [inline]
ffff8880340fa450 (&ovl_i_mutex_dir_key[depth]){++++}-{3:3}, at: lookup_slow fs/namei.c:1706 [inline]
ffff8880340fa450 (&ovl_i_mutex_dir_key[depth]){++++}-{3:3}, at: walk_component+0x33b/0x5a0 fs/namei.c:1998

but task is already holding lock:
ffff888062425488 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x281/0x610 fs/kernfs/file.c:325

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #3 (&of->mutex){+.+.}-{3:3}:
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x181/0x1340 kernel/locking/mutex.c:747
       kernfs_seq_start+0x4b/0x460 fs/kernfs/file.c:154
       seq_read_iter+0x2ad/0x1280 fs/seq_file.c:225
       kernfs_fop_read_iter+0x4c8/0x680 fs/kernfs/file.c:279
       call_read_iter include/linux/fs.h:1865 [inline]
       new_sync_read fs/read_write.c:389 [inline]
       vfs_read+0x4e0/0x930 fs/read_write.c:470
       ksys_read+0x12f/0x250 fs/read_write.c:613
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #2 (&p->lock){+.+.}-{3:3}:
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x181/0x1340 kernel/locking/mutex.c:747
       seq_read_iter+0xda/0x1280 fs/seq_file.c:182
       proc_reg_read_iter+0x211/0x300 fs/proc/inode.c:305
       call_read_iter include/linux/fs.h:1865 [inline]
       copy_splice_read+0x418/0x8f0 fs/splice.c:367
       vfs_splice_read fs/splice.c:994 [inline]
       vfs_splice_read+0x2c8/0x3b0 fs/splice.c:963
       splice_direct_to_actor+0x2a5/0xa30 fs/splice.c:1070
       do_splice_direct+0x1af/0x280 fs/splice.c:1195
       do_sendfile+0xb88/0x1390 fs/read_write.c:1254
       __do_sys_sendfile64 fs/read_write.c:1322 [inline]
       __se_sys_sendfile64 fs/read_write.c:1308 [inline]
       __x64_sys_sendfile64+0x1d6/0x220 fs/read_write.c:1308
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #1 (sb_writers#4){.+.+}-{0:0}:
       percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
       __sb_start_write include/linux/fs.h:1494 [inline]
       sb_start_write include/linux/fs.h:1569 [inline]
       mnt_want_write+0x6f/0x440 fs/namespace.c:403
       ovl_create_object+0x9e/0x2a0 fs/overlayfs/dir.c:629
       lookup_open.isra.0+0x1049/0x1360 fs/namei.c:3492
       open_last_lookups fs/namei.c:3560 [inline]
       path_openat+0x931/0x29c0 fs/namei.c:3790
       do_filp_open+0x1de/0x430 fs/namei.c:3820
       do_sys_openat2+0x176/0x1e0 fs/open.c:1407
       do_sys_open fs/open.c:1422 [inline]
       __do_sys_openat fs/open.c:1438 [inline]
       __se_sys_openat fs/open.c:1433 [inline]
       __x64_sys_openat+0x175/0x210 fs/open.c:1433
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&ovl_i_mutex_dir_key[depth]){++++}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3142 [inline]
       check_prevs_add kernel/locking/lockdep.c:3261 [inline]
       validate_chain kernel/locking/lockdep.c:3876 [inline]
       __lock_acquire+0x2e3d/0x5de0 kernel/locking/lockdep.c:5144
       lock_acquire kernel/locking/lockdep.c:5761 [inline]
       lock_acquire+0x1ae/0x510 kernel/locking/lockdep.c:5726
       down_read+0x9c/0x470 kernel/locking/rwsem.c:1520
       inode_lock_shared include/linux/fs.h:781 [inline]
       lookup_slow fs/namei.c:1706 [inline]
       walk_component+0x33b/0x5a0 fs/namei.c:1998
       lookup_last fs/namei.c:2455 [inline]
       path_lookupat+0x17f/0x770 fs/namei.c:2479
       filename_lookup+0x1e7/0x5b0 fs/namei.c:2508
       kern_path+0x35/0x50 fs/namei.c:2606
       lookup_bdev+0xd9/0x280 block/bdev.c:943
       resume_store+0x1d4/0x460 kernel/power/hibernate.c:1177
       kobj_attr_store+0x55/0x80 lib/kobject.c:833
       sysfs_kf_write+0x117/0x170 fs/sysfs/file.c:136
       kernfs_fop_write_iter+0x3ff/0x610 fs/kernfs/file.c:334
       call_write_iter include/linux/fs.h:1871 [inline]
       new_sync_write fs/read_write.c:491 [inline]
       vfs_write+0x650/0xe40 fs/read_write.c:584
       ksys_write+0x12f/0x250 fs/read_write.c:637
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

other info that might help us debug this:

Chain exists of:
  &ovl_i_mutex_dir_key[depth] --> &p->lock --> &of->mutex

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&of->mutex);
                               lock(&p->lock);
                               lock(&of->mutex);
  rlock(&ovl_i_mutex_dir_key[depth]);

 *** DEADLOCK ***

4 locks held by syz-executor.3/28959:
 #0: ffff8880426b2d48 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xe3/0x100 fs/file.c:1047
 #1: ffff88801bcea410 (sb_writers#8){.+.+}-{0:0}, at: ksys_write+0x12f/0x250 fs/read_write.c:637
 #2: ffff888062425488 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x281/0x610 fs/kernfs/file.c:325
 #3: ffff88801826ce88 (kn->active#68){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x2a4/0x610 fs/kernfs/file.c:326

stack backtrace:
CPU: 0 PID: 28959 Comm: syz-executor.3 Not tainted 6.5.0-rc1-syzkaller-00201-g2772d7df3c93 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/03/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd9/0x1b0 lib/dump_stack.c:106
 check_noncircular+0x311/0x3f0 kernel/locking/lockdep.c:2195
 check_prev_add kernel/locking/lockdep.c:3142 [inline]
 check_prevs_add kernel/locking/lockdep.c:3261 [inline]
 validate_chain kernel/locking/lockdep.c:3876 [inline]
 __lock_acquire+0x2e3d/0x5de0 kernel/locking/lockdep.c:5144
 lock_acquire kernel/locking/lockdep.c:5761 [inline]
 lock_acquire+0x1ae/0x510 kernel/locking/lockdep.c:5726
 down_read+0x9c/0x470 kernel/locking/rwsem.c:1520
 inode_lock_shared include/linux/fs.h:781 [inline]
 lookup_slow fs/namei.c:1706 [inline]
 walk_component+0x33b/0x5a0 fs/namei.c:1998
 lookup_last fs/namei.c:2455 [inline]
 path_lookupat+0x17f/0x770 fs/namei.c:2479
 filename_lookup+0x1e7/0x5b0 fs/namei.c:2508
 kern_path+0x35/0x50 fs/namei.c:2606
 lookup_bdev+0xd9/0x280 block/bdev.c:943
 resume_store+0x1d4/0x460 kernel/power/hibernate.c:1177
 kobj_attr_store+0x55/0x80 lib/kobject.c:833
 sysfs_kf_write+0x117/0x170 fs/sysfs/file.c:136
 kernfs_fop_write_iter+0x3ff/0x610 fs/kernfs/file.c:334
 call_write_iter include/linux/fs.h:1871 [inline]
 new_sync_write fs/read_write.c:491 [inline]
 vfs_write+0x650/0xe40 fs/read_write.c:584
 ksys_write+0x12f/0x250 fs/read_write.c:637
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f3db0c7cb29
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f3db1a280c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 00007f3db0d9bf80 RCX: 00007f3db0c7cb29
RDX: 0000000000000012 RSI: 0000000020000080 RDI: 0000000000000004
RBP: 00007f3db0cc847a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f3db0d9bf80 R15: 00007ffc4744d038
 </TASK>
PM: Image not found (code -6)
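
To unpack the splat: the task is writing a block-device path to the sysfs file /sys/power/resume, so kernfs_fop_write_iter() takes of->mutex, then resume_store() resolves the written path via lookup_bdev(), and because that path crosses an overlayfs directory, walk_component() takes an ovl_i_mutex_dir_key directory inode lock (#0). Lockdep has already recorded the opposite order: creating a file on overlayfs takes sb_writers while holding the directory inode lock (#1), sendfile with splice-read of a procfs seq_file links sb_writers to p->lock (#2), and reading a kernfs seq file takes of->mutex under p->lock (#3), closing the cycle.

In userspace terms, the "possible unsafe locking scenario" above is the classic ABBA inversion. The following is a minimal illustrative sketch of that pattern, not kernel code and not a reproducer for this bug (no repro exists in the crashes below); lock_a and lock_b are invented stand-ins for &of->mutex and the overlayfs directory inode lock, with the intermediate p->lock and sb_writers links collapsed for brevity:

/* Illustrative ABBA deadlock mirroring the lockdep scenario above.
 * lock_a ~ &of->mutex, lock_b ~ &ovl_i_mutex_dir_key[depth];
 * both names are invented for this sketch. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *cpu0(void *arg)
{
	/* ~ write to /sys/power/resume: of->mutex first ... */
	pthread_mutex_lock(&lock_a);
	sleep(1);			/* widen the race window */
	/* ... then the path lookup takes the ovl dir inode lock */
	pthread_mutex_lock(&lock_b);
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

static void *cpu1(void *arg)
{
	/* ~ the recorded chain: ovl dir inode lock first ... */
	pthread_mutex_lock(&lock_b);
	sleep(1);
	/* ... then (via sb_writers and p->lock) of->mutex */
	pthread_mutex_lock(&lock_a);
	pthread_mutex_unlock(&lock_a);
	pthread_mutex_unlock(&lock_b);
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, cpu0, NULL);
	pthread_create(&t1, NULL, cpu1, NULL);
	pthread_join(t0, NULL);	/* hangs when both sleeps overlap */
	pthread_join(t1, NULL);
	puts("no deadlock this run");
	return 0;
}

Note that lockdep only needs to observe each acquisition order once, on any task, to report the cycle; the deadlock itself need not have occurred during this run.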

Crashes (2):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2023/07/15 03:21 | upstream | 2772d7df3c93 | 35d9ecc5 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-badwrites-root | possible deadlock in walk_component
2023/08/25 10:43 | upstream | f8d6ff449094 | 49be837e | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in walk_component