======================================================
WARNING: possible circular locking dependency detected
4.14.295-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.0/7997 is trying to acquire lock:
 (rtnl_mutex){+.+.}, at: [<ffffffff85c2a12e>] unregister_netdevice_notifier+0x5e/0x2b0 net/core/dev.c:1630

but task is already holding lock:
 (&xt[i].mutex){+.+.}, at: [<ffffffff85f1a908>] xt_find_table_lock+0x38/0x3d0 net/netfilter/x_tables.c:1088

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&xt[i].mutex){+.+.}:
       __mutex_lock_common kernel/locking/mutex.c:756 [inline]
       __mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
       target_revfn+0x43/0x210 net/netfilter/x_tables.c:354
       xt_find_revision+0x15e/0x1d0 net/netfilter/x_tables.c:378
       nfnl_compat_get+0x1f7/0x870 net/netfilter/nft_compat.c:678
       nfnetlink_rcv_msg+0x9bb/0xc00 net/netfilter/nfnetlink.c:214
       netlink_rcv_skb+0x125/0x390 net/netlink/af_netlink.c:2454
       nfnetlink_rcv+0x1ab/0x1da0 net/netfilter/nfnetlink.c:515
       netlink_unicast_kernel net/netlink/af_netlink.c:1296 [inline]
       netlink_unicast+0x437/0x610 net/netlink/af_netlink.c:1322
       netlink_sendmsg+0x648/0xbc0 net/netlink/af_netlink.c:1893
       sock_sendmsg_nosec net/socket.c:646 [inline]
       sock_sendmsg+0xb5/0x100 net/socket.c:656
       ___sys_sendmsg+0x6c8/0x800 net/socket.c:2062
       __sys_sendmsg+0xa3/0x120 net/socket.c:2096
       SYSC_sendmsg net/socket.c:2107 [inline]
       SyS_sendmsg+0x27/0x40 net/socket.c:2103
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x46/0xbb

-> #1 (&table[i].mutex){+.+.}:
       __mutex_lock_common kernel/locking/mutex.c:756 [inline]
       __mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
       nf_tables_netdev_event+0x10d/0x4d0 net/netfilter/nf_tables_netdev.c:122
       notifier_call_chain+0x108/0x1a0 kernel/notifier.c:93
       call_netdevice_notifiers_info net/core/dev.c:1667 [inline]
       call_netdevice_notifiers net/core/dev.c:1683 [inline]
       rollback_registered_many+0x765/0xbb0 net/core/dev.c:7211
       rollback_registered+0xca/0x170 net/core/dev.c:7253
       unregister_netdevice_queue+0x1b4/0x360 net/core/dev.c:8274
       unregister_netdevice include/linux/netdevice.h:2444 [inline]
       unregister_netdev+0x18/0x20 net/core/dev.c:8315
       slip_close drivers/net/slip/slip.c:912 [inline]
       slip_hangup+0x153/0x1b0 drivers/net/slip/slip.c:918
       tty_ldisc_hangup+0x155/0x6c0 drivers/tty/tty_ldisc.c:745
       __tty_hangup.part.0+0x31a/0x730 drivers/tty/tty_io.c:622
       __tty_hangup drivers/tty/tty_io.c:2600 [inline]
       tty_vhangup drivers/tty/tty_io.c:695 [inline]
       tty_ioctl+0x730/0x1430 drivers/tty/tty_io.c:2599
       vfs_ioctl fs/ioctl.c:46 [inline]
       file_ioctl fs/ioctl.c:500 [inline]
       do_vfs_ioctl+0x75a/0xff0 fs/ioctl.c:684
       SYSC_ioctl fs/ioctl.c:701 [inline]
       SyS_ioctl+0x7f/0xb0 fs/ioctl.c:692
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x46/0xbb

-> #0 (rtnl_mutex){+.+.}:
       lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3998
       __mutex_lock_common kernel/locking/mutex.c:756 [inline]
       __mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
       unregister_netdevice_notifier+0x5e/0x2b0 net/core/dev.c:1630
       tee_tg_destroy+0x5c/0xb0 net/netfilter/xt_TEE.c:123
       cleanup_entry+0x232/0x310 net/ipv6/netfilter/ip6_tables.c:685
       __do_replace+0x38d/0x580 net/ipv4/netfilter/arp_tables.c:930
       do_replace net/ipv6/netfilter/ip6_tables.c:1162 [inline]
       do_ip6t_set_ctl+0x256/0x3b0 net/ipv6/netfilter/ip6_tables.c:1688
       nf_sockopt net/netfilter/nf_sockopt.c:106 [inline]
       nf_setsockopt+0x5f/0xb0 net/netfilter/nf_sockopt.c:115
       ipv6_setsockopt+0xc0/0x120 net/ipv6/ipv6_sockglue.c:937
       tcp_setsockopt+0x7b/0xc0 net/ipv4/tcp.c:2830
       SYSC_setsockopt net/socket.c:1865 [inline]
       SyS_setsockopt+0x110/0x1e0 net/socket.c:1844
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x46/0xbb

other info that might help us debug this:

Chain exists of:
  rtnl_mutex --> &table[i].mutex --> &xt[i].mutex

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&xt[i].mutex);
                               lock(&table[i].mutex);
                               lock(&xt[i].mutex);
  lock(rtnl_mutex);

 *** DEADLOCK ***

1 lock held by syz-executor.0/7997:
 #0:  (&xt[i].mutex){+.+.}, at: [<ffffffff85f1a908>] xt_find_table_lock+0x38/0x3d0 net/netfilter/x_tables.c:1088

stack backtrace:
CPU: 0 PID: 7997 Comm: syz-executor.0 Not tainted 4.14.295-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/22/2022
Call Trace:
 __dump_stack lib/dump_stack.c:17 [inline]
 dump_stack+0x1b2/0x281 lib/dump_stack.c:58
 print_circular_bug.constprop.0.cold+0x2d7/0x41e kernel/locking/lockdep.c:1258
 check_prev_add kernel/locking/lockdep.c:1905 [inline]
 check_prevs_add kernel/locking/lockdep.c:2022 [inline]
 validate_chain kernel/locking/lockdep.c:2464 [inline]
 __lock_acquire+0x2e0e/0x3f20 kernel/locking/lockdep.c:3491
 lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3998
 __mutex_lock_common kernel/locking/mutex.c:756 [inline]
 __mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
 unregister_netdevice_notifier+0x5e/0x2b0 net/core/dev.c:1630
 tee_tg_destroy+0x5c/0xb0 net/netfilter/xt_TEE.c:123
 cleanup_entry+0x232/0x310 net/ipv6/netfilter/ip6_tables.c:685
 __do_replace+0x38d/0x580 net/ipv4/netfilter/arp_tables.c:930
 do_replace net/ipv6/netfilter/ip6_tables.c:1162 [inline]
 do_ip6t_set_ctl+0x256/0x3b0 net/ipv6/netfilter/ip6_tables.c:1688
 nf_sockopt net/netfilter/nf_sockopt.c:106 [inline]
 nf_setsockopt+0x5f/0xb0 net/netfilter/nf_sockopt.c:115
 ipv6_setsockopt+0xc0/0x120 net/ipv6/ipv6_sockglue.c:937
 tcp_setsockopt+0x7b/0xc0 net/ipv4/tcp.c:2830
 SYSC_setsockopt net/socket.c:1865 [inline]
 SyS_setsockopt+0x110/0x1e0 net/socket.c:1844
 do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
 entry_SYSCALL_64_after_hwframe+0x46/0xbb
RIP: 0033:0x7f83a138fbfa
RSP: 002b:00007ffd042ab408 EFLAGS: 00000206 ORIG_RAX: 0000000000000036
RAX: ffffffffffffffda RBX: 0000000000000029 RCX: 00007f83a138fbfa
RDX: 0000000000000040 RSI: 0000000000000029 RDI: 0000000000000003
RBP: 00007ffd042ab430 R08: 00000000000003b8 R09: ffffffffffff0000
R10: 00007f83a1483bc0 R11: 0000000000000206 R12: 00007ffd042ab490
R13: 0000000000000003 R14: 00007ffd042ab42c R15: 00007f83a1483b60
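For reference, the splat above reduces to a lock-ordering inversion: the ip6tables table-replace path takes &xt[i].mutex and then wants rtnl_mutex (cleanup_entry -> tee_tg_destroy -> unregister_netdevice_notifier), while device teardown holds rtnl_mutex and, through the netdev notifier chain and &table[i].mutex, eventually reaches &xt[i].mutex. Below is a minimal userspace sketch of that cycle, assuming hypothetical pthread mutexes as stand-ins for the kernel locks and collapsing the intermediate &table[i].mutex link into one path; it only illustrates the pattern, it is not kernel code and not a fix. Build with e.g. gcc -pthread; repeated runs may hang when the two threads interleave as lockdep describes.

/*
 * Sketch of the reported lock inversion using hypothetical
 * pthread stand-ins; not the kernel's locks or code.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t xt_mutex   = PTHREAD_MUTEX_INITIALIZER; /* stands in for &xt[i].mutex */
static pthread_mutex_t rtnl_mutex = PTHREAD_MUTEX_INITIALIZER; /* stands in for rtnl_mutex */

/* CPU0 path: table replacement holds xt_mutex, then wants rtnl_mutex
 * (xt_find_table_lock -> ... -> unregister_netdevice_notifier). */
static void *replace_table(void *arg)
{
	pthread_mutex_lock(&xt_mutex);
	pthread_mutex_lock(&rtnl_mutex);   /* blocks if the other thread holds rtnl_mutex */
	pthread_mutex_unlock(&rtnl_mutex);
	pthread_mutex_unlock(&xt_mutex);
	return NULL;
}

/* CPU1 path: device teardown holds rtnl_mutex, then a notifier chain
 * ends up taking xt_mutex (&table[i].mutex link collapsed here). */
static void *unregister_dev(void *arg)
{
	pthread_mutex_lock(&rtnl_mutex);
	pthread_mutex_lock(&xt_mutex);     /* blocks if the other thread holds xt_mutex: deadlock */
	pthread_mutex_unlock(&xt_mutex);
	pthread_mutex_unlock(&rtnl_mutex);
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;
	pthread_create(&t0, NULL, replace_table, NULL);
	pthread_create(&t1, NULL, unregister_dev, NULL);
	pthread_join(t0, NULL);   /* may never return if the two lock orders interleave */
	pthread_join(t1, NULL);
	puts("no deadlock this run");
	return 0;
}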
Unrelated console messages interleaved with and following the report (verbatim repeats collapsed to one instance each):

ip6_tables: ip6tables: counters copy to user failed while replacing table
SQUASHFS error: lzo decompression failed, data probably corrupt
SQUASHFS error: squashfs_read_data failed to read block 0x11f
SQUASHFS error: Unable to read metadata cache entry [11f]
SQUASHFS error: Unable to read directory block [11f:26]
ceph: No mds server is up or the cluster is laggy
overlayfs: missing 'lowerdir'
dlm: non-version read from control device 0
IPVS: ftp: loaded support on port[0] = 21
libceph: mon1 [::6]:6789 socket closed (con state CONNECTING)
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error