ubi0: attached mtd0 (name "mtdram test device", size 0 MiB)
ubi0: PEB size: 4096 bytes (4 KiB), LEB size: 3968 bytes
ubi0: min./max. I/O unit sizes: 1/64, sub-page size 1
ubi0: VID header offset: 64 (aligned 64), data offset: 128
ubi0: good PEBs: 32, bad PEBs: 0, corrupted PEBs: 0
ubi0: user volume: 0, internal volumes: 1, max. volumes count: 23
ubi0: max/mean erase counter: 0/0, WL threshold: 4096, image sequence number: 761483420
ubi0: available PEBs: 28, total reserved PEBs: 4, PEBs reserved for bad PEB handling: 0
ubi0: background thread "ubi_bgt0d" started, PID 12043
ubi: mtd0 is already attached to ubi0

======================================================
WARNING: possible circular locking dependency detected
4.14.174-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.2/12041 is trying to acquire lock:
 (&sig->cred_guard_mutex){+.+.}, at: [] lock_trace+0x3f/0xc0 fs/proc/base.c:407

but task is already holding lock:
 (&p->lock){+.+.}, at: [] seq_read+0xba/0x1160 fs/seq_file.c:165

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #3 (&p->lock){+.+.}:
       __mutex_lock_common kernel/locking/mutex.c:756 [inline]
       __mutex_lock+0xe8/0x1470 kernel/locking/mutex.c:893
       seq_read+0xba/0x1160 fs/seq_file.c:165
       proc_reg_read+0xf2/0x160 fs/proc/inode.c:217
       do_loop_readv_writev fs/read_write.c:695 [inline]
       do_loop_readv_writev fs/read_write.c:682 [inline]
       do_iter_read+0x3e3/0x5a0 fs/read_write.c:919
       vfs_readv+0xd3/0x130 fs/read_write.c:981
       kernel_readv fs/splice.c:361 [inline]
       default_file_splice_read+0x41d/0x870 fs/splice.c:416
       do_splice_to+0xfb/0x150 fs/splice.c:880
       splice_direct_to_actor+0x20a/0x730 fs/splice.c:952
       do_splice_direct+0x164/0x210 fs/splice.c:1061
       do_sendfile+0x469/0xaf0 fs/read_write.c:1441
       SYSC_sendfile64 fs/read_write.c:1502 [inline]
       SyS_sendfile64+0xff/0x110 fs/read_write.c:1488
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x42/0xb7

-> #2 (sb_writers#4){.+.+}:
       percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:36 [inline]
       percpu_down_read include/linux/percpu-rwsem.h:59 [inline]
       __sb_start_write+0x1a1/0x2e0 fs/super.c:1363
       sb_start_write include/linux/fs.h:1548 [inline]
       mnt_want_write+0x3a/0xb0 fs/namespace.c:386
       ovl_xattr_set+0x4d/0x270 fs/overlayfs/inode.c:214
       ovl_posix_acl_xattr_set+0x3f9/0x830 fs/overlayfs/super.c:762
       __vfs_setxattr+0xdc/0x130 fs/xattr.c:150
       __vfs_setxattr_noperm+0xfd/0x3c0 fs/xattr.c:181
       vfs_setxattr+0xba/0xe0 fs/xattr.c:224
       setxattr+0x1a9/0x300 fs/xattr.c:453
       path_setxattr+0x118/0x130 fs/xattr.c:472
       SYSC_lsetxattr fs/xattr.c:494 [inline]
       SyS_lsetxattr+0x33/0x40 fs/xattr.c:490
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x42/0xb7

-> #1 (&ovl_i_mutex_dir_key[depth]#2){++++}:
       down_read+0x37/0xa0 kernel/locking/rwsem.c:24
       inode_lock_shared include/linux/fs.h:728 [inline]
       do_last fs/namei.c:3333 [inline]
       path_openat+0x185a/0x3c50 fs/namei.c:3569
       do_filp_open+0x18e/0x250 fs/namei.c:3603
       do_open_execat+0xda/0x430 fs/exec.c:849
       open_exec+0x32/0x60 fs/exec.c:881
       load_script+0x4ce/0x730 fs/binfmt_script.c:140
       search_binary_handler fs/exec.c:1638 [inline]
       search_binary_handler+0x139/0x6c0 fs/exec.c:1616
       exec_binprm fs/exec.c:1680 [inline]
       do_execveat_common.isra.0+0xf32/0x1c70 fs/exec.c:1802
       do_execveat fs/exec.c:1858 [inline]
       SYSC_execveat fs/exec.c:1939 [inline]
       SyS_execveat+0x49/0x60 fs/exec.c:1931
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x42/0xb7

-> #0 (&sig->cred_guard_mutex){+.+.}:
       lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3994
       __mutex_lock_common kernel/locking/mutex.c:756 [inline]
       __mutex_lock+0xe8/0x1470 kernel/locking/mutex.c:893
       lock_trace+0x3f/0xc0 fs/proc/base.c:407
       proc_pid_syscall+0x81/0x1f0 fs/proc/base.c:639
       proc_single_show+0xe7/0x150 fs/proc/base.c:761
       seq_read+0x4d2/0x1160 fs/seq_file.c:237
       do_loop_readv_writev fs/read_write.c:695 [inline]
       do_loop_readv_writev fs/read_write.c:682 [inline]
       do_iter_read+0x3e3/0x5a0 fs/read_write.c:919
       vfs_readv+0xd3/0x130 fs/read_write.c:981
       do_preadv+0x161/0x200 fs/read_write.c:1065
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x42/0xb7

other info that might help us debug this:

Chain exists of:
  &sig->cred_guard_mutex --> sb_writers#4 --> &p->lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&p->lock);
                               lock(sb_writers#4);
                               lock(&p->lock);
  lock(&sig->cred_guard_mutex);

 *** DEADLOCK ***

1 lock held by syz-executor.2/12041:
 #0:  (&p->lock){+.+.}, at: [] seq_read+0xba/0x1160 fs/seq_file.c:165

stack backtrace:
CPU: 0 PID: 12041 Comm: syz-executor.2 Not tainted 4.14.174-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:17 [inline]
 dump_stack+0x13e/0x194 lib/dump_stack.c:58
 print_circular_bug.isra.0.cold+0x1c4/0x282 kernel/locking/lockdep.c:1258
 check_prev_add kernel/locking/lockdep.c:1901 [inline]
 check_prevs_add kernel/locking/lockdep.c:2018 [inline]
 validate_chain kernel/locking/lockdep.c:2460 [inline]
 __lock_acquire+0x2cb3/0x4620 kernel/locking/lockdep.c:3487
 lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3994
 __mutex_lock_common kernel/locking/mutex.c:756 [inline]
 __mutex_lock+0xe8/0x1470 kernel/locking/mutex.c:893
 lock_trace+0x3f/0xc0 fs/proc/base.c:407
 proc_pid_syscall+0x81/0x1f0 fs/proc/base.c:639
 proc_single_show+0xe7/0x150 fs/proc/base.c:761
 seq_read+0x4d2/0x1160 fs/seq_file.c:237
 do_loop_readv_writev fs/read_write.c:695 [inline]
 do_loop_readv_writev fs/read_write.c:682 [inline]
 do_iter_read+0x3e3/0x5a0 fs/read_write.c:919
 vfs_readv+0xd3/0x130 fs/read_write.c:981
 do_preadv+0x161/0x200 fs/read_write.c:1065
 do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
 entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x45c849
RSP: 002b:00007f2e868e2c78 EFLAGS: 00000246 ORIG_RAX: 0000000000000127
RAX: ffffffffffffffda RBX: 00007f2e868e36d4 RCX: 000000000045c849
RDX: 0000000000000375 RSI: 00000000200017c0 RDI: 0000000000000006
RBP: 000000000076c0e0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000ffffffff
R13: 000000000000085a R14: 00000000004cb1ac R15: 000000000076c0ec
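Read bottom-up, the four numbered chain entries correspond to four syscalls that each take two of the locks in the cycle: execveat() of a script on overlayfs (#1: cred_guard_mutex, then the overlay directory inode lock), lsetxattr() of a POSIX ACL on overlayfs (#2: inode lock, then sb_writers), sendfile() from a /proc seq_file into a file on that filesystem (#3: sb_writers, then &p->lock), and preadv() on /proc/<pid>/syscall (#0: &p->lock, then cred_guard_mutex). The following is a minimal C sketch of that pattern, not the actual syzkaller reproducer; the overlayfs mount point /mnt/ovl, the file names, and the single-process layout are illustrative assumptions, and error handling is omitted.

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/sendfile.h>
#include <sys/syscall.h>
#include <sys/uio.h>
#include <sys/xattr.h>
#include <unistd.h>

static void edge0_preadv(pid_t pid)    /* #0: &p->lock -> cred_guard_mutex */
{
	char path[64], buf[128];
	struct iovec iov = { buf, sizeof(buf) };

	snprintf(path, sizeof(path), "/proc/%d/syscall", pid);
	int fd = open(path, O_RDONLY);
	preadv(fd, &iov, 1, 0);        /* seq_read() then lock_trace() */
	close(fd);
}

static void edge1_execveat(void)       /* #1: cred_guard_mutex -> ovl inode lock */
{
	char *argv[] = { "/mnt/ovl/script.sh", NULL };  /* assumed overlayfs path */
	char *envp[] = { NULL };

	/* script exec: load_script() -> open_exec() -> inode_lock_shared() */
	syscall(__NR_execveat, AT_FDCWD, argv[0], argv, envp, 0);
}

static void edge2_lsetxattr(void)      /* #2: ovl inode lock -> sb_writers */
{
	char acl[4] = { 0 };

	/* POSIX ACL xattr: ovl_xattr_set() -> mnt_want_write() */
	lsetxattr("/mnt/ovl/file", "system.posix_acl_access", acl, sizeof(acl), 0);
}

static void edge3_sendfile(pid_t pid)  /* #3: sb_writers -> &p->lock */
{
	char path[64];

	snprintf(path, sizeof(path), "/proc/%d/syscall", pid);
	int in = open(path, O_RDONLY);
	int out = open("/mnt/ovl/out", O_WRONLY | O_CREAT, 0600);
	/* the write side takes sb_writers; splicing from the seq_file
	 * then takes &p->lock inside seq_read() */
	sendfile(out, in, NULL, 4096);
	close(in);
	close(out);
}

int main(void)
{
	pid_t pid = getpid();

	edge0_preadv(pid);
	edge2_lsetxattr();
	edge3_sendfile(pid);
	edge1_execveat();  /* last: would replace the process image on success */
	return 0;
}

A real reproducer would race these calls from separate tasks in a loop; lockdep only needs to observe each edge once, on any task, before the final acquisition completes the cycle and triggers the report above.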
SELinux: unrecognized netlink message: protocol=4 nlmsg_type=44 sclass=netlink_tcpdiag_socket pig=12062 comm=syz-executor.0
SELinux: unrecognized netlink message: protocol=4 nlmsg_type=44 sclass=netlink_tcpdiag_socket pig=12069 comm=syz-executor.0
use of bytesused == 0 is deprecated and will be removed in the future, use the actual size instead.
option changes via remount are deprecated (pid=12134 comm=syz-executor.2)
NOHZ: local_softirq_pending 08
print_req_error: 22 callbacks suppressed
print_req_error: I/O error, dev loop3, sector 2
hfsplus: unable to find HFS+ superblock
device vxlan0 entered promiscuous mode
netlink: 20 bytes leftover after parsing attributes in process `syz-executor.5'.
EXT4-fs (loop0): invalid inodes per group: 16384
print_req_error: I/O error, dev loop11, sector 2
print_req_error: 5 callbacks suppressed