syzbot


possible deadlock in lookup_slow (2)

Status: auto-closed as invalid on 2020/05/11 09:17
Subsystems: fs
Reported-by: syzbot+4821b50cc2e4bd1d0d10@syzkaller.appspotmail.com
First crash: 1618d, last: 1572d
Discussions (1)
Title | Replies (including bot) | Last reply
possible deadlock in lookup_slow (2) | 0 (1) | 2019/11/29 00:05
Similar bugs (6)
Kernel | Title | Repro | Count | Last | Reported | Patched | Status
upstream | possible deadlock in lookup_slow (3) [kernfs] | | 280 | 1d17h | 288d | 0/26 | upstream: reported on 2023/07/19 13:17
linux-5.15 | possible deadlock in lookup_slow | | 11 | 243d | 415d | 0/3 | auto-obsoleted due to no activity on 2023/12/11 13:10
linux-4.19 | possible deadlock in lookup_slow | | 22 | 1418d | 1844d | 0/1 | auto-closed as invalid on 2020/10/12 22:59
upstream | possible deadlock in lookup_slow [fs] | | 139 | 1804d | 2045d | 0/26 | auto-closed as invalid on 2019/10/25 08:42
linux-5.15 | possible deadlock in lookup_slow (2) [origin:upstream] | C | 14 | 2d19h | 141d | 0/3 | upstream: reported C repro on 2023/12/13 22:35
linux-4.14 | possible deadlock in lookup_slow | C | 2027 | 423d | 1714d | 0/1 | upstream: reported C repro on 2019/08/24 01:53

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
5.5.0-rc5-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.0/11936 is trying to acquire lock:
ffff8880969ce710 (&ovl_i_mutex_dir_key[depth]){++++}, at: inode_lock_shared include/linux/fs.h:801 [inline]
ffff8880969ce710 (&ovl_i_mutex_dir_key[depth]){++++}, at: lookup_slow+0x4a/0x80 fs/namei.c:1681

but task is already holding lock:
ffff888050cf8b10 (&sig->cred_guard_mutex){+.+.}, at: __do_sys_perf_event_open+0xeaa/0x2c70 kernel/events/core.c:11257

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #3 (&sig->cred_guard_mutex){+.+.}:
       __mutex_lock_common kernel/locking/mutex.c:956 [inline]
       __mutex_lock+0x156/0x13c0 kernel/locking/mutex.c:1103
       mutex_lock_killable_nested+0x16/0x20 kernel/locking/mutex.c:1133
       do_io_accounting+0x1f4/0x820 fs/proc/base.c:2773
       proc_tgid_io_accounting+0x23/0x30 fs/proc/base.c:2822
       proc_single_show+0xfd/0x1c0 fs/proc/base.c:756
       seq_read+0x4ca/0x1170 fs/seq_file.c:229
       __vfs_read+0x8a/0x110 fs/read_write.c:425
       vfs_read+0x1f0/0x440 fs/read_write.c:461
       ksys_read+0x14f/0x290 fs/read_write.c:587
       __do_sys_read fs/read_write.c:597 [inline]
       __se_sys_read fs/read_write.c:595 [inline]
       __x64_sys_read+0x73/0xb0 fs/read_write.c:595
       do_syscall_64+0xfa/0x790 arch/x86/entry/common.c:294
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #2 (&p->lock){+.+.}:
       __mutex_lock_common kernel/locking/mutex.c:956 [inline]
       __mutex_lock+0x156/0x13c0 kernel/locking/mutex.c:1103
       mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:1118
       seq_read+0x71/0x1170 fs/seq_file.c:161
       do_loop_readv_writev fs/read_write.c:714 [inline]
       do_loop_readv_writev fs/read_write.c:701 [inline]
       do_iter_read+0x4a4/0x660 fs/read_write.c:935
       vfs_readv+0xf0/0x160 fs/read_write.c:997
       kernel_readv fs/splice.c:365 [inline]
       default_file_splice_read+0x4fb/0xa20 fs/splice.c:422
       do_splice_to+0x127/0x180 fs/splice.c:892
       splice_direct_to_actor+0x320/0xa30 fs/splice.c:971
       do_splice_direct+0x1da/0x2a0 fs/splice.c:1080
       do_sendfile+0x597/0xd00 fs/read_write.c:1464
       __do_sys_sendfile64 fs/read_write.c:1525 [inline]
       __se_sys_sendfile64 fs/read_write.c:1511 [inline]
       __x64_sys_sendfile64+0x1dd/0x220 fs/read_write.c:1511
       do_syscall_64+0xfa/0x790 arch/x86/entry/common.c:294
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #1 (sb_writers#3){.+.+}:
       percpu_down_read include/linux/percpu-rwsem.h:40 [inline]
       __sb_start_write+0x241/0x460 fs/super.c:1674
       sb_start_write include/linux/fs.h:1650 [inline]
       mnt_want_write+0x3f/0xc0 fs/namespace.c:354
       ovl_want_write+0x76/0xa0 fs/overlayfs/util.c:21
       ovl_create_object+0xb3/0x2c0 fs/overlayfs/dir.c:596
       ovl_create+0x28/0x30 fs/overlayfs/dir.c:627
       lookup_open+0x12d5/0x1a90 fs/namei.c:3241
       do_last fs/namei.c:3331 [inline]
       path_openat+0x14a2/0x4500 fs/namei.c:3537
       do_filp_open+0x1a1/0x280 fs/namei.c:3567
       do_sys_open+0x3fe/0x5d0 fs/open.c:1097
       __do_sys_open fs/open.c:1115 [inline]
       __se_sys_open fs/open.c:1110 [inline]
       __x64_sys_open+0x7e/0xc0 fs/open.c:1110
       do_syscall_64+0xfa/0x790 arch/x86/entry/common.c:294
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #0 (&ovl_i_mutex_dir_key[depth]){++++}:
       check_prev_add kernel/locking/lockdep.c:2476 [inline]
       check_prevs_add kernel/locking/lockdep.c:2581 [inline]
       validate_chain kernel/locking/lockdep.c:2971 [inline]
       __lock_acquire+0x2596/0x4a00 kernel/locking/lockdep.c:3955
       lock_acquire+0x190/0x410 kernel/locking/lockdep.c:4485
       down_read+0x95/0x430 kernel/locking/rwsem.c:1495
       inode_lock_shared include/linux/fs.h:801 [inline]
       lookup_slow+0x4a/0x80 fs/namei.c:1681
       walk_component+0x747/0x1df0 fs/namei.c:1802
       lookup_last fs/namei.c:2260 [inline]
       path_lookupat.isra.0+0x1f5/0x8d0 fs/namei.c:2305
       filename_lookup+0x1b0/0x3f0 fs/namei.c:2335
       kern_path+0x36/0x40 fs/namei.c:2421
       create_local_trace_uprobe+0x87/0x4a0 kernel/trace/trace_uprobe.c:1542
       perf_uprobe_init+0x131/0x210 kernel/trace/trace_event_perf.c:323
       perf_uprobe_event_init+0x106/0x1a0 kernel/events/core.c:9162
       perf_try_init_event+0x135/0x590 kernel/events/core.c:10462
       perf_init_event kernel/events/core.c:10514 [inline]
       perf_event_alloc.part.0+0x158f/0x3710 kernel/events/core.c:10794
       perf_event_alloc kernel/events/core.c:10676 [inline]
       __do_sys_perf_event_open+0x6f8/0x2c70 kernel/events/core.c:11277
       __se_sys_perf_event_open kernel/events/core.c:11151 [inline]
       __x64_sys_perf_event_open+0xbe/0x150 kernel/events/core.c:11151
       do_syscall_64+0xfa/0x790 arch/x86/entry/common.c:294
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

other info that might help us debug this:

Chain exists of:
  &ovl_i_mutex_dir_key[depth] --> &p->lock --> &sig->cred_guard_mutex

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&sig->cred_guard_mutex);
                               lock(&p->lock);
                               lock(&sig->cred_guard_mutex);
  lock(&ovl_i_mutex_dir_key[depth]);

 *** DEADLOCK ***

2 locks held by syz-executor.0/11936:
 #0: ffff888050cf8b10 (&sig->cred_guard_mutex){+.+.}, at: __do_sys_perf_event_open+0xeaa/0x2c70 kernel/events/core.c:11257
 #1: ffffffff8b66f028 (&pmus_srcu){....}, at: perf_event_alloc.part.0+0xefc/0x3710 kernel/events/core.c:10790

stack backtrace:
CPU: 0 PID: 11936 Comm: syz-executor.0 Not tainted 5.5.0-rc5-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x197/0x210 lib/dump_stack.c:118
 print_circular_bug.isra.0.cold+0x163/0x172 kernel/locking/lockdep.c:1685
 check_noncircular+0x32e/0x3e0 kernel/locking/lockdep.c:1809
 check_prev_add kernel/locking/lockdep.c:2476 [inline]
 check_prevs_add kernel/locking/lockdep.c:2581 [inline]
 validate_chain kernel/locking/lockdep.c:2971 [inline]
 __lock_acquire+0x2596/0x4a00 kernel/locking/lockdep.c:3955
 lock_acquire+0x190/0x410 kernel/locking/lockdep.c:4485
 down_read+0x95/0x430 kernel/locking/rwsem.c:1495
 inode_lock_shared include/linux/fs.h:801 [inline]
 lookup_slow+0x4a/0x80 fs/namei.c:1681
 walk_component+0x747/0x1df0 fs/namei.c:1802
 lookup_last fs/namei.c:2260 [inline]
 path_lookupat.isra.0+0x1f5/0x8d0 fs/namei.c:2305
 filename_lookup+0x1b0/0x3f0 fs/namei.c:2335
 kern_path+0x36/0x40 fs/namei.c:2421
 create_local_trace_uprobe+0x87/0x4a0 kernel/trace/trace_uprobe.c:1542
 perf_uprobe_init+0x131/0x210 kernel/trace/trace_event_perf.c:323
 perf_uprobe_event_init+0x106/0x1a0 kernel/events/core.c:9162
 perf_try_init_event+0x135/0x590 kernel/events/core.c:10462
 perf_init_event kernel/events/core.c:10514 [inline]
 perf_event_alloc.part.0+0x158f/0x3710 kernel/events/core.c:10794
 perf_event_alloc kernel/events/core.c:10676 [inline]
 __do_sys_perf_event_open+0x6f8/0x2c70 kernel/events/core.c:11277
 __se_sys_perf_event_open kernel/events/core.c:11151 [inline]
 __x64_sys_perf_event_open+0xbe/0x150 kernel/events/core.c:11151
 do_syscall_64+0xfa/0x790 arch/x86/entry/common.c:294
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x45af49
Code: ad b6 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 7b b6 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007f36a3aadc78 EFLAGS: 00000246 ORIG_RAX: 000000000000012a
RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 000000000045af49
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000020000180
RBP: 000000000075bf20 R08: 0000000000000000 R09: 0000000000000000
R10: ffffffffffffffff R11: 0000000000000246 R12: 00007f36a3aae6d4
R13: 00000000004c906d R14: 00000000004e1928 R15: 00000000ffffffff
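
For reference, each of the four numbered stacks above is an independently valid syscall path; lockdep records the lock-order edge from each one and flags the resulting cycle. Below is a minimal userspace sketch of the four paths. This is illustration only, not a reproducer for this report (none exists for this kernel): the "ovl/merged" paths are hypothetical placeholders assuming an overlayfs mount at that location, the uprobe_path/probe_offset union members assume linux/perf_event.h from v4.17 or later, and the cycle can only deadlock when these paths race from concurrent tasks.

/*
 * Sketch of the four syscall paths whose lock dependencies form the
 * cycle reported above. Illustration only; paths are hypothetical.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/sendfile.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/perf_event.h>

int main(void)
{
	char buf[256];

	/* Stack #3: reading /proc/<pid>/io holds the seq_file mutex
	 * (&p->lock) while do_io_accounting() takes
	 * &sig->cred_guard_mutex. */
	int io = open("/proc/1/io", O_RDONLY);
	if (io >= 0)
		read(io, buf, sizeof(buf));

	/* Stack #2: sendfile() to a file on the overlay holds sb_writers
	 * while the splice path reads from a seq_file, taking &p->lock. */
	int src = open("/proc/self/stat", O_RDONLY);
	int dst = open("ovl/merged/out", O_WRONLY | O_CREAT, 0600);
	if (src >= 0 && dst >= 0)
		sendfile(dst, src, NULL, 4096);

	/* Stack #1: creating a file on the overlay takes the overlay dir
	 * inode lock (&ovl_i_mutex_dir_key[depth]) before ovl_want_write()
	 * takes sb_writers on the writable layer. */
	close(open("ovl/merged/new", O_WRONLY | O_CREAT, 0600));

	/* Stack #0, closing the cycle: perf_event_open() of a uprobe holds
	 * cred_guard_mutex across the lookup of the probed path; if that
	 * path is on the overlay, lookup_slow() takes the overlay dir
	 * inode lock. */
	struct perf_event_attr attr;
	int type = 0;
	FILE *f = fopen("/sys/bus/event_source/devices/uprobe/type", "r");
	if (f) {
		fscanf(f, "%d", &type); /* dynamic uprobe PMU type id */
		fclose(f);
	}
	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = type;
	attr.uprobe_path = (__u64)(unsigned long)"ovl/merged/bin"; /* hypothetical */
	attr.probe_offset = 0;
	syscall(SYS_perf_event_open, &attr, getpid(), -1, -1, 0);
	return 0;
}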

Crashes (7):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Manager
2020/01/12 09:16 | upstream | ac61145a725a | 4c04afaa | .config | console log | report | ci-upstream-kasan-gce-root
2020/01/11 06:59 | upstream | e69ec487b2c7 | 4de4e9f0 | .config | console log | report | ci-upstream-kasan-gce-root
2019/12/19 17:43 | upstream | 4a94c4332334 | 36650b4b | .config | console log | report | ci-upstream-kasan-gce-root
2019/12/10 06:55 | upstream | 6794862a16ef | 4b83c8fb | .config | console log | report | ci-upstream-kasan-gce-selinux-root
2019/11/27 08:47 | upstream | 89d57dddd7d3 | 1048481f | .config | console log | report | ci-upstream-kasan-gce-selinux-root
2019/12/27 16:53 | linux-next | 7ddd09fc4b74 | be5c2c81 | .config | console log | report | ci-upstream-linux-next-kasan-gce-root
2019/12/04 15:06 | linux-next | c7c32c43e831 | b2088328 | .config | console log | report | ci-upstream-linux-next-kasan-gce-root