======================================================
WARNING: possible circular locking dependency detected
5.6.0-rc4-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.3/23399 is trying to acquire lock:
ffff888090d2cb80 (&ovl_i_mutex_dir_key[depth]){++++}, at: inode_lock_shared include/linux/fs.h:801 [inline]
ffff888090d2cb80 (&ovl_i_mutex_dir_key[depth]){++++}, at: do_last fs/namei.c:3400 [inline]
ffff888090d2cb80 (&ovl_i_mutex_dir_key[depth]){++++}, at: path_openat+0x1af6/0x32b0 fs/namei.c:3607

but task is already holding lock:
ffff8880424150d0 (&sig->cred_guard_mutex){+.+.}, at: prepare_bprm_creds fs/exec.c:1412 [inline]
ffff8880424150d0 (&sig->cred_guard_mutex){+.+.}, at: __do_execve_file.isra.0+0x376/0x2270 fs/exec.c:1757

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #3 (&sig->cred_guard_mutex){+.+.}:
       __mutex_lock_common kernel/locking/mutex.c:956 [inline]
       __mutex_lock+0x156/0x13c0 kernel/locking/mutex.c:1103
       lock_trace+0x45/0xe0 fs/proc/base.c:408
       proc_pid_syscall+0x83/0x200 fs/proc/base.c:637
       proc_single_show+0xf7/0x1c0 fs/proc/base.c:758
       seq_read+0x4b9/0x1160 fs/seq_file.c:229
       do_loop_readv_writev fs/read_write.c:714 [inline]
       do_loop_readv_writev fs/read_write.c:701 [inline]
       do_iter_read+0x47f/0x650 fs/read_write.c:935
       vfs_readv+0xf0/0x160 fs/read_write.c:1053
       do_preadv+0x1b6/0x270 fs/read_write.c:1145
       do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:294
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #2 (&p->lock){+.+.}:
       __mutex_lock_common kernel/locking/mutex.c:956 [inline]
       __mutex_lock+0x156/0x13c0 kernel/locking/mutex.c:1103
       seq_read+0x6b/0x1160 fs/seq_file.c:161
       proc_reg_read+0x1c1/0x280 fs/proc/inode.c:223
       do_loop_readv_writev fs/read_write.c:714 [inline]
       do_loop_readv_writev fs/read_write.c:701 [inline]
       do_iter_read+0x47f/0x650 fs/read_write.c:935
       vfs_readv+0xf0/0x160 fs/read_write.c:1053
       kernel_readv fs/splice.c:365 [inline]
       default_file_splice_read+0x4fb/0xa20 fs/splice.c:422
       do_splice_to+0x10e/0x160 fs/splice.c:892
       splice_direct_to_actor+0x307/0x980 fs/splice.c:971
       do_splice_direct+0x1a8/0x270 fs/splice.c:1080
       do_sendfile+0x549/0xc40 fs/read_write.c:1520
       __do_sys_sendfile64 fs/read_write.c:1581 [inline]
       __se_sys_sendfile64 fs/read_write.c:1567 [inline]
       __x64_sys_sendfile64+0x1cc/0x210 fs/read_write.c:1567
       do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:294
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #1 (sb_writers#4){.+.+}:
       percpu_down_read include/linux/percpu-rwsem.h:40 [inline]
       __sb_start_write+0x21b/0x430 fs/super.c:1674
       sb_start_write include/linux/fs.h:1649 [inline]
       mnt_want_write+0x3a/0xb0 fs/namespace.c:354
       ovl_create_object+0x96/0x290 fs/overlayfs/dir.c:596
       lookup_open+0x120d/0x1970 fs/namei.c:3309
       do_last fs/namei.c:3401 [inline]
       path_openat+0xe8f/0x32b0 fs/namei.c:3607
       do_filp_open+0x192/0x260 fs/namei.c:3637
       do_sys_openat2+0x54c/0x740 fs/open.c:1149
       do_sys_open+0xc3/0x140 fs/open.c:1165
       do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:294
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #0 (&ovl_i_mutex_dir_key[depth]){++++}:
       check_prev_add kernel/locking/lockdep.c:2475 [inline]
       check_prevs_add kernel/locking/lockdep.c:2580 [inline]
       validate_chain kernel/locking/lockdep.c:2970 [inline]
       __lock_acquire+0x201b/0x3ca0 kernel/locking/lockdep.c:3954
       lock_acquire+0x197/0x420 kernel/locking/lockdep.c:4484
       down_read+0x96/0x420 kernel/locking/rwsem.c:1495
       inode_lock_shared include/linux/fs.h:801 [inline]
       do_last fs/namei.c:3400 [inline]
       path_openat+0x1af6/0x32b0 fs/namei.c:3607
       do_filp_open+0x192/0x260 fs/namei.c:3637
       do_open_execat+0x122/0x600 fs/exec.c:860
       __do_execve_file.isra.0+0x16d5/0x2270 fs/exec.c:1765
       do_execveat_common fs/exec.c:1871 [inline]
       do_execve fs/exec.c:1888 [inline]
       __do_sys_execve fs/exec.c:1964 [inline]
       __se_sys_execve fs/exec.c:1959 [inline]
       __x64_sys_execve+0x8a/0xb0 fs/exec.c:1959
       do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:294
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

other info that might help us debug this:

Chain exists of:
  &ovl_i_mutex_dir_key[depth] --> &p->lock --> &sig->cred_guard_mutex

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&sig->cred_guard_mutex);
                               lock(&p->lock);
                               lock(&sig->cred_guard_mutex);
  lock(&ovl_i_mutex_dir_key[depth]);

 *** DEADLOCK ***

1 lock held by syz-executor.3/23399:
 #0: ffff8880424150d0 (&sig->cred_guard_mutex){+.+.}, at: prepare_bprm_creds fs/exec.c:1412 [inline]
 #0: ffff8880424150d0 (&sig->cred_guard_mutex){+.+.}, at: __do_execve_file.isra.0+0x376/0x2270 fs/exec.c:1757

stack backtrace:
CPU: 1 PID: 23399 Comm: syz-executor.3 Not tainted 5.6.0-rc4-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x188/0x20d lib/dump_stack.c:118
 check_noncircular+0x32e/0x3e0 kernel/locking/lockdep.c:1808
 check_prev_add kernel/locking/lockdep.c:2475 [inline]
 check_prevs_add kernel/locking/lockdep.c:2580 [inline]
 validate_chain kernel/locking/lockdep.c:2970 [inline]
 __lock_acquire+0x201b/0x3ca0 kernel/locking/lockdep.c:3954
 lock_acquire+0x197/0x420 kernel/locking/lockdep.c:4484
 down_read+0x96/0x420 kernel/locking/rwsem.c:1495
 inode_lock_shared include/linux/fs.h:801 [inline]
 do_last fs/namei.c:3400 [inline]
 path_openat+0x1af6/0x32b0 fs/namei.c:3607
 do_filp_open+0x192/0x260 fs/namei.c:3637
 do_open_execat+0x122/0x600 fs/exec.c:860
 __do_execve_file.isra.0+0x16d5/0x2270 fs/exec.c:1765
 do_execveat_common fs/exec.c:1871 [inline]
 do_execve fs/exec.c:1888 [inline]
 __do_sys_execve fs/exec.c:1964 [inline]
 __se_sys_execve fs/exec.c:1959 [inline]
 __x64_sys_execve+0x8a/0xb0 fs/exec.c:1959
 do_syscall_64+0xf6/0x7d0 arch/x86/entry/common.c:294
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x45c4a9
Code: ad b6 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 7b b6 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007f620223ec78 EFLAGS: 00000246 ORIG_RAX: 000000000000003b
RAX: ffffffffffffffda RBX: 00007f620223f6d4 RCX: 000000000045c4a9
RDX: 0000000020000680 RSI: 0000000020000440 RDI: 00000000200001c0
RBP: 000000000076bf20 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000ffffffff
R13: 00000000000000a6 R14: 00000000004c2fe0 R15: 000000000076bf2c
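
The "Possible unsafe locking scenario" above is lockdep's shorthand for a lock-ordering inversion: one task takes cred_guard_mutex and then needs the overlayfs directory inode lock, while the other side of the chain (via &p->lock and sb_writers) establishes the opposite ordering. As a minimal illustration only (user-space pthreads, with placeholder mutex names standing in for the kernel locks; this is not code from the report or the kernel), the same ABBA pattern looks like this:

/*
 * ABBA deadlock sketch: thread 0 holds A and waits for B,
 * thread 1 holds B and waits for A. With an unlucky interleaving
 * neither thread can make progress.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* stands in for &sig->cred_guard_mutex */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* stands in for &ovl_i_mutex_dir_key[depth] */

static void *cpu0(void *arg)
{
	pthread_mutex_lock(&lock_a);   /* holds A ... */
	pthread_mutex_lock(&lock_b);   /* ... then waits for B */
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

static void *cpu1(void *arg)
{
	pthread_mutex_lock(&lock_b);   /* holds B ... */
	pthread_mutex_lock(&lock_a);   /* ... then waits for A: circular wait */
	pthread_mutex_unlock(&lock_a);
	pthread_mutex_unlock(&lock_b);
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, cpu0, NULL);
	pthread_create(&t1, NULL, cpu1, NULL);
	pthread_join(t0, NULL);   /* may never return if the bad interleaving occurs */
	pthread_join(t1, NULL);
	puts("no deadlock on this run");
	return 0;
}

In the real report the cycle runs through four lock classes rather than two, which is why lockdep prints the full #3 -> #0 dependency chain, but the failure mode it is warning about reduces to the same circular wait.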