L1TF CPU bug present and SMT on, data leak possible. See CVE-2018-3646 and https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/l1tf.html for details.
======================================================
WARNING: possible circular locking dependency detected
4.14.232-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.0/11233 is trying to acquire lock:
 (&table[i].mutex){+.+.}, at: [] nf_tables_netdev_event+0x10d/0x4d0 net/netfilter/nf_tables_netdev.c:122

but task is already holding lock:
 (rtnl_mutex){+.+.}, at: [] ppp_ioctl+0x123b/0x21e0 drivers/net/ppp/ppp_generic.c:620

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (rtnl_mutex){+.+.}:
       __mutex_lock_common kernel/locking/mutex.c:756 [inline]
       __mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
       unregister_netdevice_notifier+0x5e/0x2b0 net/core/dev.c:1630
       tee_tg_destroy+0x5c/0xb0 net/netfilter/xt_TEE.c:123
       cleanup_entry+0x232/0x310 net/ipv6/netfilter/ip6_tables.c:684
       __do_replace+0x38d/0x580 net/ipv6/netfilter/ip6_tables.c:1105
       do_replace net/ipv6/netfilter/ip6_tables.c:1161 [inline]
       do_ip6t_set_ctl+0x256/0x3b0 net/ipv6/netfilter/ip6_tables.c:1687
       nf_sockopt net/netfilter/nf_sockopt.c:106 [inline]
       nf_setsockopt+0x5f/0xb0 net/netfilter/nf_sockopt.c:115
       ipv6_setsockopt+0xc0/0x120 net/ipv6/ipv6_sockglue.c:937
       tcp_setsockopt+0x7b/0xc0 net/ipv4/tcp.c:2828
       SYSC_setsockopt net/socket.c:1865 [inline]
       SyS_setsockopt+0x110/0x1e0 net/socket.c:1844
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x46/0xbb

-> #1 (&xt[i].mutex){+.+.}:
       __mutex_lock_common kernel/locking/mutex.c:756 [inline]
       __mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
       target_revfn+0x43/0x210 net/netfilter/x_tables.c:354
       xt_find_revision+0x15e/0x1d0 net/netfilter/x_tables.c:378
       nfnl_compat_get+0x1f7/0x870 net/netfilter/nft_compat.c:678
       nfnetlink_rcv_msg+0x9bb/0xc00 net/netfilter/nfnetlink.c:214
       netlink_rcv_skb+0x125/0x390 net/netlink/af_netlink.c:2433
       nfnetlink_rcv+0x1ab/0x1da0 net/netfilter/nfnetlink.c:515
       netlink_unicast_kernel net/netlink/af_netlink.c:1287 [inline]
       netlink_unicast+0x437/0x610 net/netlink/af_netlink.c:1313
       netlink_sendmsg+0x62e/0xb80 net/netlink/af_netlink.c:1878
       sock_sendmsg_nosec net/socket.c:646 [inline]
       sock_sendmsg+0xb5/0x100 net/socket.c:656
       ___sys_sendmsg+0x6c8/0x800 net/socket.c:2062
       __sys_sendmsg+0xa3/0x120 net/socket.c:2096
       SYSC_sendmsg net/socket.c:2107 [inline]
       SyS_sendmsg+0x27/0x40 net/socket.c:2103
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x46/0xbb

-> #0 (&table[i].mutex){+.+.}:
       lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3998
       __mutex_lock_common kernel/locking/mutex.c:756 [inline]
       __mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
       nf_tables_netdev_event+0x10d/0x4d0 net/netfilter/nf_tables_netdev.c:122
       notifier_call_chain+0x108/0x1a0 kernel/notifier.c:93
       call_netdevice_notifiers_info net/core/dev.c:1667 [inline]
       call_netdevice_notifiers net/core/dev.c:1683 [inline]
       rollback_registered_many+0x765/0xba0 net/core/dev.c:7203
       rollback_registered+0xca/0x170 net/core/dev.c:7245
       unregister_netdevice_queue+0x1b4/0x360 net/core/dev.c:8266
       unregister_netdevice include/linux/netdevice.h:2443 [inline]
       ppp_ioctl+0x1b25/0x21e0 drivers/net/ppp/ppp_generic.c:622
       vfs_ioctl fs/ioctl.c:46 [inline]
       file_ioctl fs/ioctl.c:500 [inline]
       do_vfs_ioctl+0x75a/0xff0 fs/ioctl.c:684
       SYSC_ioctl fs/ioctl.c:701 [inline]
       SyS_ioctl+0x7f/0xb0 fs/ioctl.c:692
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x46/0xbb

other info that might help us debug this:

Chain exists of:
  &table[i].mutex --> &xt[i].mutex --> rtnl_mutex

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(rtnl_mutex);
                               lock(&xt[i].mutex);
                               lock(rtnl_mutex);
  lock(&table[i].mutex);

 *** DEADLOCK ***

2 locks held by syz-executor.0/11233:
 #0:  (ppp_mutex){+.+.}, at: [] ppp_ioctl+0x84/0x21e0 drivers/net/ppp/ppp_generic.c:596
 #1:  (rtnl_mutex){+.+.}, at: [] ppp_ioctl+0x123b/0x21e0 drivers/net/ppp/ppp_generic.c:620

stack backtrace:
CPU: 1 PID: 11233 Comm: syz-executor.0 Not tainted 4.14.232-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:17 [inline]
 dump_stack+0x1b2/0x281 lib/dump_stack.c:58
 print_circular_bug.constprop.0.cold+0x2d7/0x41e kernel/locking/lockdep.c:1258
 check_prev_add kernel/locking/lockdep.c:1905 [inline]
 check_prevs_add kernel/locking/lockdep.c:2022 [inline]
 validate_chain kernel/locking/lockdep.c:2464 [inline]
 __lock_acquire+0x2e0e/0x3f20 kernel/locking/lockdep.c:3491
 lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3998
 __mutex_lock_common kernel/locking/mutex.c:756 [inline]
 __mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
 nf_tables_netdev_event+0x10d/0x4d0 net/netfilter/nf_tables_netdev.c:122
 notifier_call_chain+0x108/0x1a0 kernel/notifier.c:93
 call_netdevice_notifiers_info net/core/dev.c:1667 [inline]
 call_netdevice_notifiers net/core/dev.c:1683 [inline]
 rollback_registered_many+0x765/0xba0 net/core/dev.c:7203
 rollback_registered+0xca/0x170 net/core/dev.c:7245
 unregister_netdevice_queue+0x1b4/0x360 net/core/dev.c:8266
 unregister_netdevice include/linux/netdevice.h:2443 [inline]
 ppp_ioctl+0x1b25/0x21e0 drivers/net/ppp/ppp_generic.c:622
 vfs_ioctl fs/ioctl.c:46 [inline]
 file_ioctl fs/ioctl.c:500 [inline]
 do_vfs_ioctl+0x75a/0xff0 fs/ioctl.c:684
 SYSC_ioctl fs/ioctl.c:701 [inline]
 SyS_ioctl+0x7f/0xb0 fs/ioctl.c:692
 do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
 entry_SYSCALL_64_after_hwframe+0x46/0xbb
RIP: 0033:0x4665f9
RSP: 002b:00007f8a55952188 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 000000000056bf60 RCX: 00000000004665f9
RDX: 0000000000000000 RSI: 000000004004743c RDI: 0000000000000003
RBP: 00000000004bfce1 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 000000000056bf60
R13: 00007fffe424d4df R14: 00007f8a55952300 R15: 0000000000022000
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
ceph: No mds server is up or the cluster is laggy
sd 0:0:1:0: [sg0] tag#92 FAILED Result: hostbyte=DID_ABORT driverbyte=DRIVER_OK
sd 0:0:1:0: [sg0] tag#92 CDB: opcode=0x94
sd 0:0:1:0: [sg0] tag#92 CDB[00]: 94 e1 ce 17 6e f6 bc ba db 23 b3 ef a0 7b eb a2
sd 0:0:1:0: [sg0] tag#92 CDB[10]: fa 17 7b f5 91 f1 06 1e d6 ea f5 ee 1d 13 bb a5
sd 0:0:1:0: [sg0] tag#92 CDB[20]: 36
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
ceph: No mds server is up or the cluster is laggy
sd 0:0:1:0: [sg0] tag#3456 FAILED Result: hostbyte=DID_ABORT driverbyte=DRIVER_OK
sd 0:0:1:0: [sg0] tag#3456 CDB: opcode=0x94
sd 0:0:1:0: [sg0] tag#3456 CDB[00]: 94 e1 ce 17 6e f6 bc ba db 23 b3 ef a0 7b eb a2
sd 0:0:1:0: [sg0] tag#3456 CDB[10]: fa 17 7b f5 91 f1 06 1e d6 ea f5 ee 1d 13 bb a5
sd 0:0:1:0: [sg0] tag#3456 CDB[20]: 36
out of order segment: rcv_next 4FA3F124 seq 4FA3F267 - 4FA3F268
out of order segment: rcv_next 4FABF376 seq 4FABF4B9 - 4FABF4BA
netem: incorrect gi model size
netem: change failed
out of order segment: rcv_next 50370D3B seq 50370E7E - 50370E7F
mmap: syz-executor.2 (11586) uses deprecated remap_file_pages() syscall. See Documentation/vm/remap_file_pages.txt.
out of order segment: rcv_next 5076C54A seq 5076C68D - 5076C68E
out of order segment: rcv_next 508AFB1C seq 508AFC5F - 508AFC60
F2FS-fs (loop5): Magic Mismatch, valid(0xf2f52010) - read(0x0)
F2FS-fs (loop5): Can't find valid F2FS filesystem in 2th superblock
F2FS-fs (loop5): invalid crc value
F2FS-fs (loop5): Try to recover 2th superblock, ret: 0
F2FS-fs (loop5): Mounted with checkpoint version = 753bd00b
audit: type=1804 audit(1620321789.199:9): pid=11672 uid=0 auid=4294967295 ses=4294967295 op="invalid_pcr" cause="open_writers" comm="syz-executor.3" name="/root/syzkaller-testdir293935719/syzkaller.8pk034/78/file0" dev="sda1" ino=14247 res=1
input: syz1 as /devices/virtual/input/input13
input: syz1 as /devices/virtual/input/input15
input: syz1 as /devices/virtual/input/input16
input: syz1 as /devices/virtual/input/input17
input: syz1 as /devices/virtual/input/input18
input: syz1 as /devices/virtual/input/input19
input: syz1 as /devices/virtual/input/input20
syz-executor.0 uses obsolete (PF_INET,SOCK_PACKET)
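Editor's note: the lockdep splat at the top of this log is a classic ABBA-style lock inversion. The kernel builds a graph of "lock A was held while lock B was taken" edges; the existing edges &table[i].mutex -> &xt[i].mutex and &xt[i].mutex -> rtnl_mutex, plus the new acquisition of &table[i].mutex while rtnl_mutex is held in ppp_ioctl, close a cycle. The sketch below is a toy user-space model of that dependency tracking, written for illustration only; it is not kernel code, and the `LockGraph` class and its methods are invented names.

```python
class LockGraph:
    """Toy model of lockdep's dependency tracking: record 'held -> taken'
    edges and refuse any acquisition that would close a cycle."""

    def __init__(self):
        # lock name -> set of locks that were taken while it was held
        self.edges = {}

    def _reachable(self, src, dst):
        """Can we walk recorded edges from src to dst?"""
        if src == dst:
            return True
        return any(self._reachable(n, dst) for n in self.edges.get(src, ()))

    def acquire(self, held, taken):
        """Return True if taking `taken` while holding `held` is safe;
        False if the new edge would create a circular dependency."""
        if self._reachable(taken, held):
            return False  # taken already (transitively) depends on held
        self.edges.setdefault(held, set()).add(taken)
        return True


g = LockGraph()
# -> #2: ip6_tables replace path: &xt[i].mutex held while taking rtnl_mutex
assert g.acquire("&xt[i].mutex", "rtnl_mutex")
# -> #1: nfnl_compat_get: &table[i].mutex held while taking &xt[i].mutex
assert g.acquire("&table[i].mutex", "&xt[i].mutex")
# -> #0: ppp_ioctl notifier path: rtnl_mutex held while taking &table[i].mutex;
# this closes the cycle table -> xt -> rtnl -> table that lockdep reports
assert not g.acquire("rtnl_mutex", "&table[i].mutex")
```

The real check happens before the deadlock can occur: lockdep flags the inconsistent ordering the first time both orders have been observed, even if the two paths never race on the same run.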