======================================================
WARNING: possible circular locking dependency detected
4.14.222-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.5/15772 is trying to acquire lock:
 (&p->lock){+.+.}, at: [] seq_read+0xba/0x1120 fs/seq_file.c:165

but task is already holding lock:
 (sb_writers#3){.+.+}, at: [] file_start_write include/linux/fs.h:2712 [inline]
 (sb_writers#3){.+.+}, at: [] do_sendfile+0x84f/0xb30 fs/read_write.c:1440

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #3 (sb_writers#3){.+.+}:
       percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:36 [inline]
       percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
       __sb_start_write+0x64/0x260 fs/super.c:1342
       sb_start_write include/linux/fs.h:1549 [inline]
       mnt_want_write+0x3a/0xb0 fs/namespace.c:386
       ovl_do_remove+0x67/0xb90 fs/overlayfs/dir.c:759
       vfs_rmdir.part.0+0x144/0x390 fs/namei.c:3908
       vfs_rmdir fs/namei.c:3893 [inline]
       do_rmdir+0x334/0x3c0 fs/namei.c:3968
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x46/0xbb

-> #2 (&ovl_i_mutex_dir_key[depth]#2){++++}:
       down_read+0x36/0x80 kernel/locking/rwsem.c:24
       inode_lock_shared include/linux/fs.h:729 [inline]
       do_last fs/namei.c:3333 [inline]
       path_openat+0x149b/0x2970 fs/namei.c:3569
       do_filp_open+0x179/0x3c0 fs/namei.c:3603
       do_open_execat+0xd3/0x450 fs/exec.c:849
       do_execveat_common+0x711/0x1f30 fs/exec.c:1755
       do_execve fs/exec.c:1860 [inline]
       SYSC_execve fs/exec.c:1941 [inline]
       SyS_execve+0x3b/0x50 fs/exec.c:1936
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x46/0xbb

-> #1 (&sig->cred_guard_mutex){+.+.}:
       __mutex_lock_common kernel/locking/mutex.c:756 [inline]
       __mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
       lock_trace fs/proc/base.c:407 [inline]
       proc_pid_syscall+0xa7/0x2a0 fs/proc/base.c:639
       proc_single_show+0xe7/0x150 fs/proc/base.c:761
       seq_read+0x4cf/0x1120 fs/seq_file.c:237
       do_loop_readv_writev fs/read_write.c:695 [inline]
       do_loop_readv_writev fs/read_write.c:682 [inline]
       do_iter_read+0x3eb/0x5b0 fs/read_write.c:919
       vfs_readv+0xc8/0x120 fs/read_write.c:981
       do_preadv fs/read_write.c:1065 [inline]
       SYSC_preadv fs/read_write.c:1115 [inline]
       SyS_preadv+0x15a/0x200 fs/read_write.c:1110
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x46/0xbb

-> #0 (&p->lock){+.+.}:
       lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3998
       __mutex_lock_common kernel/locking/mutex.c:756 [inline]
       __mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
       seq_read+0xba/0x1120 fs/seq_file.c:165
       proc_reg_read+0xee/0x1a0 fs/proc/inode.c:217
       do_loop_readv_writev fs/read_write.c:695 [inline]
       do_loop_readv_writev fs/read_write.c:682 [inline]
       do_iter_read+0x3eb/0x5b0 fs/read_write.c:919
       vfs_readv+0xc8/0x120 fs/read_write.c:981
       kernel_readv fs/splice.c:361 [inline]
       default_file_splice_read+0x418/0x910 fs/splice.c:416
       do_splice_to+0xfb/0x140 fs/splice.c:880
       splice_direct_to_actor+0x207/0x730 fs/splice.c:952
       do_splice_direct+0x164/0x210 fs/splice.c:1061
       do_sendfile+0x47f/0xb30 fs/read_write.c:1441
       SYSC_sendfile64 fs/read_write.c:1502 [inline]
       SyS_sendfile64+0xff/0x110 fs/read_write.c:1488
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x46/0xbb

other info that might help us debug this:

Chain exists of:
  &p->lock --> &ovl_i_mutex_dir_key[depth]#2 --> sb_writers#3

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(sb_writers#3);
                               lock(&ovl_i_mutex_dir_key[depth]#2);
                               lock(sb_writers#3);
  lock(&p->lock);

 *** DEADLOCK ***

1 lock held by syz-executor.5/15772:
 #0:  (sb_writers#3){.+.+}, at: [] file_start_write include/linux/fs.h:2712 [inline]
 #0:  (sb_writers#3){.+.+}, at: [] do_sendfile+0x84f/0xb30 fs/read_write.c:1440

stack backtrace:
CPU: 0 PID: 15772 Comm: syz-executor.5 Not tainted 4.14.222-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:17 [inline]
 dump_stack+0x1b2/0x281 lib/dump_stack.c:58
 print_circular_bug.constprop.0.cold+0x2d7/0x41e kernel/locking/lockdep.c:1258
 check_prev_add kernel/locking/lockdep.c:1905 [inline]
 check_prevs_add kernel/locking/lockdep.c:2022 [inline]
 validate_chain kernel/locking/lockdep.c:2464 [inline]
 __lock_acquire+0x2e0e/0x3f20 kernel/locking/lockdep.c:3491
 lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3998
 __mutex_lock_common kernel/locking/mutex.c:756 [inline]
 __mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
 seq_read+0xba/0x1120 fs/seq_file.c:165
 proc_reg_read+0xee/0x1a0 fs/proc/inode.c:217
 do_loop_readv_writev fs/read_write.c:695 [inline]
 do_loop_readv_writev fs/read_write.c:682 [inline]
 do_iter_read+0x3eb/0x5b0 fs/read_write.c:919
 vfs_readv+0xc8/0x120 fs/read_write.c:981
 kernel_readv fs/splice.c:361 [inline]
 default_file_splice_read+0x418/0x910 fs/splice.c:416
 do_splice_to+0xfb/0x140 fs/splice.c:880
 splice_direct_to_actor+0x207/0x730 fs/splice.c:952
 do_splice_direct+0x164/0x210 fs/splice.c:1061
 do_sendfile+0x47f/0xb30 fs/read_write.c:1441
 SYSC_sendfile64 fs/read_write.c:1502 [inline]
 SyS_sendfile64+0xff/0x110 fs/read_write.c:1488
 do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
 entry_SYSCALL_64_after_hwframe+0x46/0xbb
RIP: 0033:0x465ef9
RSP: 002b:00007fa899613188 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 000000000056bf60 RCX: 0000000000465ef9
RDX: 0000000000000000 RSI: 0000000000000005 RDI: 0000000000000004
RBP: 00000000004bcd1c R08: 0000000000000000 R09: 0000000000000000
R10: 000400000000da7a R11: 0000000000000246 R12: 000000000056bf60
R13: 00007ffdb0ca4d6f R14: 00007fa899613300 R15: 0000000000022000
mmap: syz-executor.5 (15892) uses deprecated remap_file_pages() syscall. See Documentation/vm/remap_file_pages.txt.
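For reference, a minimal userspace sketch of the two syscall patterns behind the #0 and #3 traces above: one thread sendfile()s from a procfs seq_file into a regular file (file_start_write() takes sb_writers, then seq_read() wants &p->lock), while another removes a directory on an overlayfs mount (ovl_do_remove() takes mnt_want_write(), i.e. sb_writers, under the directory inode lock). The "ovl/merged" path is a placeholder for an already-mounted overlay, and this covers only two edges of the four-lock cycle (the execve and /proc/pid/syscall reads from chains #2 and #1 are not exercised), so it is illustrative, not a confirmed reproducer.

/*
 * Illustrative sketch only -- assumes an overlayfs is already mounted at
 * "ovl/merged"; not a confirmed reproducer for the reported lockdep cycle.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <unistd.h>

static void *sendfile_loop(void *arg)
{
	(void)arg;
	for (;;) {
		/* procfs seq_file: seq_read() takes &p->lock */
		int in = open("/proc/self/syscall", O_RDONLY);
		/* placeholder output file on the overlay:
		 * do_sendfile() -> file_start_write() takes sb_writers */
		int out = open("ovl/merged/out", O_WRONLY | O_CREAT | O_TRUNC, 0600);

		if (in >= 0 && out >= 0)
			sendfile(out, in, NULL, 4096);
		if (in >= 0)
			close(in);
		if (out >= 0)
			close(out);
	}
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, sendfile_loop, NULL);
	for (;;) {
		/* rmdir on the overlay: ovl_do_remove() -> mnt_want_write()
		 * acquires sb_writers under the directory inode lock */
		mkdir("ovl/merged/dir", 0700);
		rmdir("ovl/merged/dir");
	}
	return 0;
}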
kauditd_printk_skb: 18 callbacks suppressed
audit: type=1800 audit(1614337061.716:230): pid=15929 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.2" name="bus" dev="sda1" ino=16536 res=0
audit: type=1800 audit(1614337061.856:231): pid=15928 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.5" name="bus" dev="sda1" ino=16527 res=0
audit: type=1800 audit(1614337062.686:232): pid=15969 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.4" name="bus" dev="sda1" ino=16436 res=0
audit: type=1800 audit(1614337062.686:233): pid=15954 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.0" name="bus" dev="sda1" ino=16177 res=0
audit: type=1800 audit(1614337062.686:234): pid=15959 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.2" name="bus" dev="sda1" ino=16445 res=0
audit: type=1800 audit(1614337062.686:235): pid=15960 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.1" name="bus" dev="sda1" ino=16527 res=0
audit: type=1800 audit(1614337062.986:236): pid=15979 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.5" name="bus" dev="sda1" ino=16549 res=0
audit: type=1800 audit(1614337063.506:237): pid=16024 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.4" name="bus" dev="sda1" ino=16543 res=0
audit: type=1800 audit(1614337063.526:238): pid=16025 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.1" name="bus" dev="sda1" ino=16544 res=0
audit: type=1800 audit(1614337063.526:239): pid=16026 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.2" name="bus" dev="sda1" ino=16545 res=0
kvm [16064]: vcpu0, guest rIP: 0x145 Hyper-V unhandled rdmsr: 0x40000004
kvm [16093]: vcpu0, guest rIP: 0x145 Hyper-V unhandled rdmsr: 0x40000004
kvm [16110]: vcpu0, guest rIP: 0x145 Hyper-V unhandled rdmsr: 0x40000004
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
kvm [16118]: vcpu0, guest rIP: 0x145 Hyper-V unhandled rdmsr: 0x40000004
ceph: No mds server is up or the cluster is laggy
syz-executor.4 (16132): drop_caches: 2
overlayfs: failed to resolve './file0': -2
syz-executor.4 (16143): drop_caches: 2
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
ceph: No mds server is up or the cluster is laggy
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
ceph: No mds server is up or the cluster is laggy
syz-executor.4 (16172): drop_caches: 2
ceph: No mds server is up or the cluster is laggy
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
syz-executor.4 (16186): drop_caches: 2
ceph: No mds server is up or the cluster is laggy
libceph: connect [d::]:6789 error -101
ceph: No mds server is up or the cluster is laggy
libceph: mon0 [d::]:6789 connect error
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
syz-executor.4 (16220): drop_caches: 2
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
libceph: connect [d::]:6789 error -101
ceph: No mds server is up or the cluster is laggy
libceph: mon0 [d::]:6789 connect error
syz-executor.0 (16266): drop_caches: 2
syz-executor.0 (16277): drop_caches: 2
syz-executor.0 (16295): drop_caches: 2
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
PF_BRIDGE: br_mdb_parse() with invalid entry
EXT4-fs warning (device sda1): verify_group_input:131: Cannot add at group 625 (only 16 groups)
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
ceph: No mds server is up or the cluster is laggy
libceph: connect [d::]:6789 error -101
EXT4-fs warning (device sda1): verify_group_input:131: Cannot add at group 625 (only 16 groups)
libceph: mon0 [d::]:6789 connect error
ceph: No mds server is up or the cluster is laggy
libceph: connect [d::]:6789 error -101
PF_BRIDGE: br_mdb_parse() with invalid entry
libceph: mon0 [d::]:6789 connect error
EXT4-fs warning (device sda1): verify_group_input:131: Cannot add at group 625 (only 16 groups)
ceph: No mds server is up or the cluster is laggy
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
ceph: No mds server is up or the cluster is laggy
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
ceph: No mds server is up or the cluster is laggy
PF_BRIDGE: br_mdb_parse() with invalid entry
EXT4-fs warning (device sda1): verify_group_input:131: Cannot add at group 625 (only 16 groups)