======================================================
WARNING: possible circular locking dependency detected
4.14.307-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.5/14230 is trying to acquire lock:
 (sb_writers#3){.+.+}, at: [] sb_start_write include/linux/fs.h:1551 [inline]
 (sb_writers#3){.+.+}, at: [] mnt_want_write+0x3a/0xb0 fs/namespace.c:386

but task is already holding lock:
 (&ovl_i_mutex_dir_key[depth]){++++}, at: [] inode_lock include/linux/fs.h:719 [inline]
 (&ovl_i_mutex_dir_key[depth]){++++}, at: [] do_last fs/namei.c:3331 [inline]
 (&ovl_i_mutex_dir_key[depth]){++++}, at: [] path_openat+0xde2/0x2970 fs/namei.c:3571

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&ovl_i_mutex_dir_key[depth]){++++}:
       down_write_killable+0x37/0xb0 kernel/locking/rwsem.c:68
       iterate_dir+0x387/0x5e0 fs/readdir.c:43
       ovl_dir_read fs/overlayfs/readdir.c:306 [inline]
       ovl_dir_read_merged+0x2c5/0x430 fs/overlayfs/readdir.c:365
       ovl_check_empty_dir+0x6e/0x200 fs/overlayfs/readdir.c:870
       ovl_check_empty_and_clear+0x72/0xe0 fs/overlayfs/dir.c:306
       ovl_rename+0x57d/0xe50 fs/overlayfs/dir.c:959
       vfs_rename+0x560/0x1820 fs/namei.c:4498
       SYSC_renameat2 fs/namei.c:4646 [inline]
       SyS_renameat2+0x95b/0xad0 fs/namei.c:4535
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x5e/0xd3

-> #0 (sb_writers#3){.+.+}:
       lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3998
       percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:36 [inline]
       percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
       __sb_start_write+0x64/0x260 fs/super.c:1342
       sb_start_write include/linux/fs.h:1551 [inline]
       mnt_want_write+0x3a/0xb0 fs/namespace.c:386
       ovl_create_object+0x75/0x1d0 fs/overlayfs/dir.c:538
       lookup_open+0x77a/0x1750 fs/namei.c:3241
       do_last fs/namei.c:3334 [inline]
       path_openat+0xe08/0x2970 fs/namei.c:3571
       do_filp_open+0x179/0x3c0 fs/namei.c:3605
       do_sys_open+0x296/0x410 fs/open.c:1081
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x5e/0xd3

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&ovl_i_mutex_dir_key[depth]);
                               lock(sb_writers#3);
                               lock(&ovl_i_mutex_dir_key[depth]);
  lock(sb_writers#3);

 *** DEADLOCK ***

2 locks held by syz-executor.5/14230:
 #0:  (sb_writers#20){.+.+}, at: [] sb_start_write include/linux/fs.h:1551 [inline]
 #0:  (sb_writers#20){.+.+}, at: [] mnt_want_write+0x3a/0xb0 fs/namespace.c:386
 #1:  (&ovl_i_mutex_dir_key[depth]){++++}, at: [] inode_lock include/linux/fs.h:719 [inline]
 #1:  (&ovl_i_mutex_dir_key[depth]){++++}, at: [] do_last fs/namei.c:3331 [inline]
 #1:  (&ovl_i_mutex_dir_key[depth]){++++}, at: [] path_openat+0xde2/0x2970 fs/namei.c:3571

stack backtrace:
CPU: 0 PID: 14230 Comm: syz-executor.5 Not tainted 4.14.307-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/16/2023
Call Trace:
 __dump_stack lib/dump_stack.c:17 [inline]
 dump_stack+0x1b2/0x281 lib/dump_stack.c:58
 print_circular_bug.constprop.0.cold+0x2d7/0x41e kernel/locking/lockdep.c:1258
 check_prev_add kernel/locking/lockdep.c:1905 [inline]
 check_prevs_add kernel/locking/lockdep.c:2022 [inline]
 validate_chain kernel/locking/lockdep.c:2464 [inline]
 __lock_acquire+0x2e0e/0x3f20 kernel/locking/lockdep.c:3491
 lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3998
 percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:36 [inline]
 percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
 __sb_start_write+0x64/0x260 fs/super.c:1342
 sb_start_write include/linux/fs.h:1551 [inline]
 mnt_want_write+0x3a/0xb0 fs/namespace.c:386
 ovl_create_object+0x75/0x1d0 fs/overlayfs/dir.c:538
 lookup_open+0x77a/0x1750 fs/namei.c:3241
 do_last fs/namei.c:3334 [inline]
 path_openat+0xe08/0x2970 fs/namei.c:3571
 do_filp_open+0x179/0x3c0 fs/namei.c:3605
 do_sys_open+0x296/0x410 fs/open.c:1081
 do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
 entry_SYSCALL_64_after_hwframe+0x5e/0xd3
RIP: 0033:0x7f083d40b0f9
RSP: 002b:00007f083b95c168 EFLAGS: 00000246 ORIG_RAX: 0000000000000055
RAX: ffffffffffffffda RBX: 00007f083d52b050 RCX: 00007f083d40b0f9
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000020000280
RBP: 00007f083d466ae9 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffe3d10d59f R14: 00007f083b95c300 R15: 0000000000022000
kvm_hv_get_msr: 22 callbacks suppressed
kvm [14281]: vcpu0, guest rIP: 0x8c Hyper-V unhandled rdmsr: 0x40000086
kvm [14281]: vcpu0, guest rIP: 0x9945 Hyper-V unhandled rdmsr: 0x40000076
kvm [14281]: vcpu0, guest rIP: 0x3545 Hyper-V unhandled rdmsr: 0x40000042
kvm [14281]: vcpu0, guest rIP: 0x3045 Hyper-V unhandled rdmsr: 0x4000000a
kvm [14281]: vcpu0, guest rIP: 0x3045 Hyper-V unhandled rdmsr: 0x40000074
kvm [14281]: vcpu0, guest rIP: 0x3045 Hyper-V unhandled rdmsr: 0x4000002f
kvm [14281]: vcpu0, guest rIP: 0x3045 Hyper-V unhandled rdmsr: 0x40000042
unregister_netdevice: waiting for ip6gre0 to become free. Usage count = -1
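
Note on the "Possible unsafe locking scenario" above: it is a plain AB-BA ordering inversion. One path (rename through ovl_rename -> ovl_check_empty_dir) ends up taking the sb_writers freeze protection and then the overlayfs directory inode lock, while the other path (open(O_CREAT) through path_openat -> ovl_create_object) holds the directory inode lock and then asks for sb_writers. The sketch below is NOT kernel code and does not use the real kernel locks; it is a minimal user-space analogy with pthread mutexes (the names ovl_dir_lock and sb_writers_lock are made up for illustration) showing why lockdep flags this pattern: run it and the two threads can block each other forever.

/*
 * Minimal user-space analogy of the AB-BA inversion reported above.
 * ovl_dir_lock stands in for &ovl_i_mutex_dir_key[depth], sb_writers_lock
 * for sb_writers; both names are illustrative, not the kernel primitives.
 * The program is expected to deadlock by design.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t ovl_dir_lock = PTHREAD_MUTEX_INITIALIZER;    /* ~ dir inode lock */
static pthread_mutex_t sb_writers_lock = PTHREAD_MUTEX_INITIALIZER; /* ~ sb_writers */

/* CPU1 in the scenario table: sb_writers first, then the directory lock
 * (roughly the rename/ovl_check_empty_dir ordering). */
static void *rename_path(void *arg)
{
	pthread_mutex_lock(&sb_writers_lock);
	sleep(1);                        /* widen the race window */
	pthread_mutex_lock(&ovl_dir_lock);
	puts("rename path: got both locks");
	pthread_mutex_unlock(&ovl_dir_lock);
	pthread_mutex_unlock(&sb_writers_lock);
	return NULL;
}

/* CPU0 in the scenario table: directory lock first, then sb_writers
 * (roughly the open(O_CREAT)/ovl_create_object ordering). */
static void *create_path(void *arg)
{
	pthread_mutex_lock(&ovl_dir_lock);
	sleep(1);
	pthread_mutex_lock(&sb_writers_lock);
	puts("create path: got both locks");
	pthread_mutex_unlock(&sb_writers_lock);
	pthread_mutex_unlock(&ovl_dir_lock);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, rename_path, NULL);
	pthread_create(&t2, NULL, create_path, NULL);
	pthread_join(t1, NULL);          /* never returns once both threads hold one lock each */
	pthread_join(t2, NULL);
	return 0;
}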