WARNING: possible recursive locking detected
4.14.267-syzkaller #0 Not tainted
--------------------------------------------
syz-executor.5/12961 is trying to acquire lock:
 (&(&bond->stats_lock)->rlock#3/3){+.+.}, at: [] bond_get_stats+0xb7/0x440 drivers/net/bonding/bond_main.c:3457

but task is already holding lock:
 (&(&bond->stats_lock)->rlock#3/3){+.+.}, at: [] bond_get_stats+0xb7/0x440 drivers/net/bonding/bond_main.c:3457

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&(&bond->stats_lock)->rlock#3/3);
  lock(&(&bond->stats_lock)->rlock#3/3);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

3 locks held by syz-executor.5/12961:
 #0:  (rtnl_mutex){+.+.}, at: [] rtnl_lock net/core/rtnetlink.c:72 [inline]
 #0:  (rtnl_mutex){+.+.}, at: [] rtnetlink_rcv_msg+0x31d/0xb10 net/core/rtnetlink.c:4317
 #1:  (&(&bond->stats_lock)->rlock#3/3){+.+.}, at: [] bond_get_stats+0xb7/0x440 drivers/net/bonding/bond_main.c:3457
 #2:  (rcu_read_lock){....}, at: [] bond_get_nest_level drivers/net/bonding/bond_main.c:3446 [inline]
 #2:  (rcu_read_lock){....}, at: [] bond_get_stats+0x9b/0x440 drivers/net/bonding/bond_main.c:3457

stack backtrace:
CPU: 0 PID: 12961 Comm: syz-executor.5 Not tainted 4.14.267-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:17 [inline]
 dump_stack+0x1b2/0x281 lib/dump_stack.c:58
 print_deadlock_bug kernel/locking/lockdep.c:1800 [inline]
 check_deadlock kernel/locking/lockdep.c:1847 [inline]
 validate_chain kernel/locking/lockdep.c:2448 [inline]
 __lock_acquire.cold+0x180/0x97c kernel/locking/lockdep.c:3491
 lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3998
 _raw_spin_lock_nested+0x30/0x40 kernel/locking/spinlock.c:362
 bond_get_stats+0xb7/0x440 drivers/net/bonding/bond_main.c:3457
 dev_get_stats+0xa5/0x280 net/core/dev.c:8019
 bond_get_stats+0x1da/0x440 drivers/net/bonding/bond_main.c:3463
 dev_get_stats+0xa5/0x280 net/core/dev.c:8019
 rtnl_fill_stats+0x48/0xa90 net/core/rtnetlink.c:1079
 rtnl_fill_ifinfo+0xe16/0x3050 net/core/rtnetlink.c:1385
 rtmsg_ifinfo_build_skb+0x8e/0x130 net/core/rtnetlink.c:2915
 rtmsg_ifinfo_event net/core/rtnetlink.c:2945 [inline]
 rtmsg_ifinfo_event net/core/rtnetlink.c:2936 [inline]
 rtnetlink_event+0xee/0x1a0 net/core/rtnetlink.c:4366
 notifier_call_chain+0x108/0x1a0 kernel/notifier.c:93
 call_netdevice_notifiers_info net/core/dev.c:1667 [inline]
 call_netdevice_notifiers net/core/dev.c:1683 [inline]
 netdev_features_change net/core/dev.c:1296 [inline]
 netdev_change_features+0x7e/0xa0 net/core/dev.c:7457
 bond_compute_features+0x444/0x860 drivers/net/bonding/bond_main.c:1122
 bond_slave_netdev_event drivers/net/bonding/bond_main.c:3191 [inline]
 bond_netdev_event+0x664/0xbd0 drivers/net/bonding/bond_main.c:3232
 notifier_call_chain+0x108/0x1a0 kernel/notifier.c:93
 call_netdevice_notifiers_info net/core/dev.c:1667 [inline]
 call_netdevice_notifiers net/core/dev.c:1683 [inline]
 netdev_features_change net/core/dev.c:1296 [inline]
 netdev_change_features+0x7e/0xa0 net/core/dev.c:7457
 bond_compute_features+0x444/0x860 drivers/net/bonding/bond_main.c:1122
 bond_enslave+0x37fb/0x4cf0 drivers/net/bonding/bond_main.c:1757
 do_set_master+0x19e/0x200 net/core/rtnetlink.c:1961
 rtnl_newlink+0x1356/0x1830 net/core/rtnetlink.c:2759
 rtnetlink_rcv_msg+0x3be/0xb10 net/core/rtnetlink.c:4322
 netlink_rcv_skb+0x125/0x390 net/netlink/af_netlink.c:2446
 netlink_unicast_kernel net/netlink/af_netlink.c:1294 [inline]
 netlink_unicast+0x437/0x610 net/netlink/af_netlink.c:1320
 netlink_sendmsg+0x648/0xbc0 net/netlink/af_netlink.c:1891
 sock_sendmsg_nosec net/socket.c:646 [inline]
 sock_sendmsg+0xb5/0x100 net/socket.c:656
 ___sys_sendmsg+0x6c8/0x800 net/socket.c:2062
 __sys_sendmsg+0xa3/0x120 net/socket.c:2096
 SYSC_sendmsg net/socket.c:2107 [inline]
 SyS_sendmsg+0x27/0x40 net/socket.c:2103
 do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
 entry_SYSCALL_64_after_hwframe+0x46/0xbb
RIP: 0033:0x7fab60f53059
RSP: 002b:00007fab5f886168 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007fab61066100 RCX: 00007fab60f53059
RDX: 0000000000000000 RSI: 0000000020000240 RDI: 0000000000000005
RBP: 00007fab60fad08d R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fff03bc028f R14: 00007fab5f886300 R15: 0000000000022000
bond1: making interface vlan2 the new active one
device bridge1 entered promiscuous mode
bond1: Enslaving vlan2 as an active interface with an up link
syz-executor.5 (12961) used greatest stack depth: 23792 bytes left
netlink: 4 bytes leftover after parsing attributes in process `syz-executor.5'.
8021q: adding VLAN 0 to HW filter on device bond2
bond0: Enslaving bond2 as an active interface with an up link
audit: type=1800 audit(5940451758.441:37): pid=13036 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.2" name="bus" dev="loop2" ino=23 res=0
device bridge2 entered promiscuous mode
device bridge2 left promiscuous mode
bond2: making interface vlan3 the new active one
device bridge2 entered promiscuous mode
bond2: Enslaving vlan3 as an active interface with an up link
netlink: 4 bytes leftover after parsing attributes in process `syz-executor.5'.
8021q: adding VLAN 0 to HW filter on device bond3
bond0: Enslaving bond3 as an active interface with an up link
device bridge3 entered promiscuous mode
device bridge3 left promiscuous mode
bond3: making interface vlan4 the new active one
device bridge3 entered promiscuous mode
bond3: Enslaving vlan4 as an active interface with an up link
audit: type=1800 audit(5940451759.561:38): pid=13112 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.2" name="bus" dev="loop2" ino=25 res=0
netlink: 4 bytes leftover after parsing attributes in process `syz-executor.5'.
8021q: adding VLAN 0 to HW filter on device bond4
bond0: Enslaving bond4 as an active interface with an up link
device bridge4 entered promiscuous mode
device bridge4 left promiscuous mode
bond4: making interface vlan5 the new active one
device bridge4 entered promiscuous mode
bond4: Enslaving vlan5 as an active interface with an up link
audit: type=1800 audit(5940451760.221:39): pid=13190 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.5" name="bus" dev="sda1" ino=14340 res=0
audit: type=1800 audit(5940451760.261:40): pid=13187 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.2" name="bus" dev="loop2" ino=27 res=0
audit: type=1800 audit(5940451761.391:41): pid=13204 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.5" name="bus" dev="sda1" ino=14342 res=0
audit: type=1800 audit(5940451761.431:42): pid=13206 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.2" name="bus" dev="loop2" ino=29 res=0
audit: type=1800 audit(5940451762.361:43): pid=13231 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.2" name="bus" dev="loop2" ino=31 res=0
audit: type=1800 audit(5940451762.491:44): pid=13237 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.5" name="bus" dev="sda1" ino=14335 res=0
audit: type=1800 audit(5940451763.161:45): pid=13242 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor.2" name="bus" dev="loop2" ino=33 res=0
netlink: 4 bytes leftover after parsing attributes in process `syz-executor.5'.
8021q: adding VLAN 0 to HW filter on device bond5
bond0: Enslaving bond5 as an active interface with an up link
device bridge5 entered promiscuous mode
device bridge5 left promiscuous mode
bond5: making interface vlan6 the new active one
device bridge5 entered promiscuous mode
bond5: Enslaving vlan6 as an active interface with an up link
netlink: 4 bytes leftover after parsing attributes in process `syz-executor.5'.
8021q: adding VLAN 0 to HW filter on device bond1
bond0: Enslaving bond1 as an active interface with an up link
netlink: 4 bytes leftover after parsing attributes in process `syz-executor.2'.
8021q: adding VLAN 0 to HW filter on device bond6
bond0: Enslaving bond6 as an active interface with an up link
device bridge6 entered promiscuous mode
device bridge6 left promiscuous mode
bond6: making interface vlan7 the new active one
device bridge6 entered promiscuous mode
bond6: Enslaving vlan7 as an active interface with an up link
device bridge5 entered promiscuous mode
device bridge5 left promiscuous mode
bond1: making interface vlan2 the new active one
device bridge5 entered promiscuous mode
bond1: Enslaving vlan2 as an active interface with an up link
netlink: 4 bytes leftover after parsing attributes in process `syz-executor.2'.
netlink: 4 bytes leftover after parsing attributes in process `syz-executor.5'.
8021q: adding VLAN 0 to HW filter on device bond7
bond0: Enslaving bond7 as an active interface with an up link
8021q: adding VLAN 0 to HW filter on device bond2
bond0: Enslaving bond2 as an active interface with an up link
device bridge7 entered promiscuous mode
device bridge7 left promiscuous mode
bond7: making interface vlan8 the new active one
device bridge7 entered promiscuous mode
bond7: Enslaving vlan8 as an active interface with an up link
device bridge6 entered promiscuous mode
device bridge6 left promiscuous mode
bond2: making interface vlan3 the new active one
device bridge6 entered promiscuous mode
bond2: Enslaving vlan3 as an active interface with an up link
netlink: 4 bytes leftover after parsing attributes in process `syz-executor.2'.
8021q: adding VLAN 0 to HW filter on device bond3
bond0: Enslaving bond3 as an active interface with an up link
device bridge7 entered promiscuous mode
device bridge7 left promiscuous mode
bond3: making interface vlan4 the new active one
device bridge7 entered promiscuous mode
bond3: Enslaving vlan4 as an active interface with an up link
gfs2: invalid mount option: pcr=00000000000000000000
gfs2: can't parse mount arguments
libceph: connect [::6]:6789 error -101
libceph: mon1 [::6]:6789 connect error
ceph: No mds server is up or the cluster is laggy
kvm: pic: single mode not supported
kvm: pic: single mode not supported
kvm: pic: level sensitive irq not supported
kvm: pic: single mode not supported
kvm: pic: level sensitive irq not supported
bond0: option active_slave: mode dependency failed, not supported in mode balance-rr(0)
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
kvm: pic: single mode not supported
kvm: pic: level sensitive irq not supported
kvm: pic: single mode not supported
kvm: pic: level sensitive irq not supported
ax25_connect(): syz-executor.0 uses autobind, please contact jreuter@yaina.de
kvm: pic: single mode not supported
kvm: pic: single mode not supported
kvm: pic: level sensitive irq not supported
kvm: pic: level sensitive irq not supported
kvm: pic: single mode not supported
kvm: pic: single mode not supported
kvm: pic: level sensitive irq not supported
kvm: pic: single mode not supported
kvm: pic: level sensitive irq not supported
kvm: pic: level sensitive irq not supported
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
kvm: pic: level sensitive irq not supported
ceph: No mds server is up or the cluster is laggy
ax25_connect(): syz-executor.0 uses autobind, please contact jreuter@yaina.de
libceph: connect [::6]:6789 error -101
libceph: mon1 [::6]:6789 connect error
ceph: No mds server is up or the cluster is laggy
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
ceph: No mds server is up or the cluster is laggy
ceph: No mds server is up or the cluster is laggy
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
ceph: No mds server is up or the cluster is laggy
libceph: connect [::6]:6789 error -101
libceph: mon1 [::6]:6789 connect error
ceph: No mds server is up or the cluster is laggy
libceph: connect [::6]:6789 error -101
libceph: mon1 [::6]:6789 connect error
libceph: connect [::6]:6789 error -101
libceph: mon1 [::6]:6789 connect error
ceph: No mds server is up or the cluster is laggy
ax25_connect(): syz-executor.0 uses autobind, please contact jreuter@yaina.de
ceph: No mds server is up or the cluster is laggy
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
ceph: No mds server is up or the cluster is laggy
libceph: connect [::6]:6789 error -101
libceph: mon1 [::6]:6789 connect error
ceph: No mds server is up or the cluster is laggy
libceph: connect [::6]:6789 error -101
libceph: mon1 [::6]:6789 connect error
ceph: No mds server is up or the cluster is laggy
libceph: connect [::6]:6789 error -101
libceph: mon1 [::6]:6789 connect error
ceph: No mds server is up or the cluster is laggy
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
ceph: No mds server is up or the cluster is laggy
libceph: connect [::6]:6789 error -101
libceph: mon1 [::6]:6789 connect error
ceph: No mds server is up or the cluster is laggy
libceph: connect [::6]:6789 error -101
libceph: mon1 [::6]:6789 connect error
ceph: No mds server is up or the cluster is laggy
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
ceph: No mds server is up or the cluster is laggy
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
ceph: No mds server is up or the cluster is laggy
libceph: connect [d::]:6789 error -101
libceph: mon0 [d::]:6789 connect error
ceph: No mds server is up or the cluster is laggy
libceph: connect [::6]:6789 error -101
libceph: mon1 [::6]:6789 connect error
ceph: No mds server is up or the cluster is laggy
libceph: connect [::6]:6789 error -101
libceph: mon1 [::6]:6789 connect error
ceph: No mds server is up or the cluster is laggy
libceph: connect [::6]:6789 error -101
libceph: mon1 [::6]:6789 connect error
ceph: No mds server is up or the cluster is laggy
libceph: connect [::6]:6789 error -101
libceph: mon1 [::6]:6789 connect error