forked to background, child pid 3186
no interfaces have a carrier
[ 26.488047][ T3187] 8021q: adding VLAN 0 to HW filter on device bond0
[ 26.504523][ T3187] eql: remember to turn off Van-Jacobson compression on your slave devices
Starting sshd: OK
syzkaller
Warning: Permanently added '10.128.1.89' (ECDSA) to the list of known hosts.
syzkaller login:
[ 63.872371][ T3601] cgroup: Unknown subsys name 'net'
[ 63.974835][ T3601] cgroup: Unknown subsys name 'rlimit'
[ 64.175890][ T3603] chnl_net:caif_netlink_parms(): no params data found
[ 64.216517][ T3603] bridge0: port 1(bridge_slave_0) entered blocking state
[ 64.224446][ T3603] bridge0: port 1(bridge_slave_0) entered disabled state
[ 64.232424][ T3603] device bridge_slave_0 entered promiscuous mode
[ 64.241389][ T3603] bridge0: port 2(bridge_slave_1) entered blocking state
[ 64.248485][ T3603] bridge0: port 2(bridge_slave_1) entered disabled state
[ 64.256350][ T3603] device bridge_slave_1 entered promiscuous mode
[ 64.276435][ T3603] bond0: (slave bond_slave_0): Enslaving as an active interface with an up link
[ 64.287074][ T3603] bond0: (slave bond_slave_1): Enslaving as an active interface with an up link
[ 64.309037][ T3603] team0: Port device team_slave_0 added
[ 64.316099][ T3603] team0: Port device team_slave_1 added
[ 64.333457][ T3603] batman_adv: batadv0: Adding interface: batadv_slave_0
[ 64.340542][ T3603] batman_adv: batadv0: The MTU of interface batadv_slave_0 is too small (1500) to handle the transport of batman-adv packets. Packets going over this interface will be fragmented on layer2 which could impact the performance. Setting the MTU to 1560 would solve the problem.
[ 64.366476][ T3603] batman_adv: batadv0: Not using interface batadv_slave_0 (retrying later): interface not active
[ 64.379028][ T3603] batman_adv: batadv0: Adding interface: batadv_slave_1
[ 64.386045][ T3603] batman_adv: batadv0: The MTU of interface batadv_slave_1 is too small (1500) to handle the transport of batman-adv packets. Packets going over this interface will be fragmented on layer2 which could impact the performance. Setting the MTU to 1560 would solve the problem.
[ 64.412174][ T3603] batman_adv: batadv0: Not using interface batadv_slave_1 (retrying later): interface not active
[ 64.437093][ T3603] device hsr_slave_0 entered promiscuous mode
[ 64.444165][ T3603] device hsr_slave_1 entered promiscuous mode
[ 64.518597][ T3603] netdevsim netdevsim0 netdevsim0: renamed from eth0
[ 64.528502][ T3603] netdevsim netdevsim0 netdevsim1: renamed from eth1
[ 64.537915][ T3603] netdevsim netdevsim0 netdevsim2: renamed from eth2
[ 64.546890][ T3603] netdevsim netdevsim0 netdevsim3: renamed from eth3
[ 64.567339][ T3603] bridge0: port 2(bridge_slave_1) entered blocking state
[ 64.574582][ T3603] bridge0: port 2(bridge_slave_1) entered forwarding state
[ 64.582391][ T3603] bridge0: port 1(bridge_slave_0) entered blocking state
[ 64.589529][ T3603] bridge0: port 1(bridge_slave_0) entered forwarding state
[ 64.632571][ T3603] 8021q: adding VLAN 0 to HW filter on device bond0
[ 64.644771][ T3165] IPv6: ADDRCONF(NETDEV_CHANGE): veth0: link becomes ready
[ 64.654783][ T3165] bridge0: port 1(bridge_slave_0) entered disabled state
[ 64.664449][ T3165] bridge0: port 2(bridge_slave_1) entered disabled state
[ 64.672252][ T3165] IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready
[ 64.685569][ T3603] 8021q: adding VLAN 0 to HW filter on device team0
[ 64.696720][ T3165] IPv6: ADDRCONF(NETDEV_CHANGE): bridge_slave_0: link becomes ready
[ 64.705722][ T3165] bridge0: port 1(bridge_slave_0) entered blocking state
[ 64.712859][ T3165] bridge0: port 1(bridge_slave_0) entered forwarding state
[ 64.730519][ T3165] IPv6: ADDRCONF(NETDEV_CHANGE): bridge_slave_1: link becomes ready
[ 64.738884][ T3165] bridge0: port 2(bridge_slave_1) entered blocking state
[ 64.746022][ T3165] bridge0: port 2(bridge_slave_1) entered forwarding state
[ 64.754649][ T3165] IPv6: ADDRCONF(NETDEV_CHANGE): team_slave_0: link becomes ready
[ 64.764414][ T6] IPv6: ADDRCONF(NETDEV_CHANGE): team0: link becomes ready
[ 64.776330][ T3610] IPv6: ADDRCONF(NETDEV_CHANGE): team_slave_1: link becomes ready
[ 64.791055][ T3603] hsr0: Slave A (hsr_slave_0) is not up; please bring it up to get a fully working HSR network
[ 64.802068][ T3603] hsr0: Slave B (hsr_slave_1) is not up; please bring it up to get a fully working HSR network
[ 64.815428][ T3610] IPv6: ADDRCONF(NETDEV_CHANGE): hsr_slave_0: link becomes ready
[ 64.825428][ T3610] IPv6: ADDRCONF(NETDEV_CHANGE): hsr_slave_1: link becomes ready
[ 64.834077][ T3610] IPv6: ADDRCONF(NETDEV_CHANGE): hsr0: link becomes ready
[ 64.850838][ T3165] IPv6: ADDRCONF(NETDEV_CHANGE): vxcan0: link becomes ready
[ 64.858225][ T3165] IPv6: ADDRCONF(NETDEV_CHANGE): vxcan1: link becomes ready
[ 64.867173][ T3603] 8021q: adding VLAN 0 to HW filter on device batadv0
[ 64.979702][ T3165] IPv6: ADDRCONF(NETDEV_CHANGE): veth0_virt_wifi: link becomes ready
[ 64.988378][ T3165] IPv6: ADDRCONF(NETDEV_CHANGE): veth0_vlan: link becomes ready
[ 64.997386][ T3165] IPv6: ADDRCONF(NETDEV_CHANGE): vlan0: link becomes ready
[ 65.005556][ T3165] IPv6: ADDRCONF(NETDEV_CHANGE): vlan1: link becomes ready
[ 65.013711][ T3603] device veth0_vlan entered promiscuous mode
[ 65.025084][ T3603] device veth1_vlan entered promiscuous mode
[ 65.042362][ T26] IPv6: ADDRCONF(NETDEV_CHANGE): macvlan0: link becomes ready
[ 65.052374][ T26] IPv6: ADDRCONF(NETDEV_CHANGE): macvlan1: link becomes ready
[ 65.060894][ T26] IPv6: ADDRCONF(NETDEV_CHANGE): veth0_macvtap: link becomes ready
[ 65.071647][ T3603] device veth0_macvtap entered promiscuous mode
[ 65.081110][ T3603] device veth1_macvtap entered promiscuous mode
[ 65.095668][ T3603] batman_adv: batadv0: Interface activated: batadv_slave_0
[ 65.103276][ T3165] IPv6: ADDRCONF(NETDEV_CHANGE): veth0_to_batadv: link becomes ready
[ 65.112661][ T3165] IPv6: ADDRCONF(NETDEV_CHANGE): macvtap0: link becomes ready
[ 65.123388][ T3603] batman_adv: batadv0: Interface activated: batadv_slave_1
[ 65.131670][ T3610] IPv6: ADDRCONF(NETDEV_CHANGE): veth1_to_batadv: link becomes ready
[ 65.142477][ T3603] netdevsim netdevsim0 netdevsim0: set [1, 0] type 2 family 0 port 6081 - 0
[ 65.152341][ T3603] netdevsim netdevsim0 netdevsim1: set [1, 0] type 2 family 0 port 6081 - 0
[ 65.161258][ T3603] netdevsim netdevsim0 netdevsim2: set [1, 0] type 2 family 0 port 6081 - 0
[ 65.170275][ T3603] netdevsim netdevsim0 netdevsim3: set [1, 0] type 2 family 0 port 6081 - 0
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
[ 71.032591][ T1229] ieee802154 phy0 wpan0: encryption failed: -22
[ 71.039128][ T1229] ieee802154 phy1 wpan1: encryption failed: -22
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
[ 76.152021][ T14] cfg80211: failed to load regulatory.db
executing program
executing program
[ 77.016794][ T34]
[ 77.019158][ T34] ======================================================
[ 77.026179][ T34] WARNING: possible circular locking dependency detected
[ 77.033199][ T34] 6.0.0-rc1-next-20220817-syzkaller #0 Not tainted
[ 77.039704][ T34] ------------------------------------------------------
[ 77.046724][ T34] kworker/u4:2/34 is trying to acquire lock:
[ 77.052705][ T34] ffff8880799640e8 ((work_completion)(&(&cp->cp_send_w)->work)){+.+.}-{0:0}, at: __flush_work+0xdd/0xae0
[ 77.063981][ T34]
[ 77.063981][ T34] but task is already holding lock:
[ 77.071346][ T34] ffff8880746ae730 (k-sk_lock-AF_INET6){+.+.}-{0:0}, at: rds_tcp_reset_callbacks+0x1bf/0x4d0
[ 77.081573][ T34]
[ 77.081573][ T34] which lock already depends on the new lock.
[ 77.081573][ T34]
[ 77.091979][ T34]
[ 77.091979][ T34] the existing dependency chain (in reverse order) is:
[ 77.100992][ T34]
[ 77.100992][ T34] -> #1 (k-sk_lock-AF_INET6){+.+.}-{0:0}:
[ 77.108916][ T34]        lock_sock_nested+0x36/0xf0
[ 77.114129][ T34]        tcp_sock_set_cork+0x16/0x90
[ 77.119426][ T34]        rds_send_xmit+0x386/0x2540
[ 77.124636][ T34]        rds_send_worker+0x92/0x2e0
[ 77.129832][ T34]        process_one_work+0x991/0x1610
[ 77.135300][ T34]        worker_thread+0x665/0x1080
[ 77.140505][ T34]        kthread+0x2e4/0x3a0
[ 77.145097][ T34]        ret_from_fork+0x1f/0x30
[ 77.150046][ T34]
[ 77.150046][ T34] -> #0 ((work_completion)(&(&cp->cp_send_w)->work)){+.+.}-{0:0}:
[ 77.160035][ T34]        __lock_acquire+0x2a43/0x56d0
[ 77.165409][ T34]        lock_acquire+0x1ab/0x570
[ 77.170431][ T34]        __flush_work+0x105/0xae0
[ 77.175463][ T34]        __cancel_work_timer+0x3f9/0x570
[ 77.181102][ T34]        rds_tcp_reset_callbacks+0x1cb/0x4d0
[ 77.187095][ T34]        rds_tcp_accept_one+0x9d5/0xd10
[ 77.192646][ T34]        rds_tcp_accept_worker+0x55/0x80
[ 77.198299][ T34]        process_one_work+0x991/0x1610
[ 77.203765][ T34]        worker_thread+0x665/0x1080
[ 77.208969][ T34]        kthread+0x2e4/0x3a0
[ 77.213566][ T34]        ret_from_fork+0x1f/0x30
[ 77.218512][ T34]
[ 77.218512][ T34] other info that might help us debug this:
[ 77.218512][ T34]
[ 77.228729][ T34]  Possible unsafe locking scenario:
[ 77.228729][ T34]
[ 77.236170][ T34]        CPU0                    CPU1
[ 77.241527][ T34]        ----                    ----
[ 77.246886][ T34]   lock(k-sk_lock-AF_INET6);
[ 77.251559][ T34]                               lock((work_completion)(&(&cp->cp_send_w)->work));
[ 77.260837][ T34]                               lock(k-sk_lock-AF_INET6);
[ 77.268028][ T34]   lock((work_completion)(&(&cp->cp_send_w)->work));
[ 77.274785][ T34]
[ 77.274785][ T34]  *** DEADLOCK ***
[ 77.274785][ T34]
[ 77.282916][ T34] 4 locks held by kworker/u4:2/34:
[ 77.288018][ T34]  #0: ffff888027184938 ((wq_completion)krdsd){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610
[ 77.298304][ T34]  #1: ffffc90000ab7da8 ((work_completion)(&rtn->rds_tcp_accept_w)){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610
[ 77.310409][ T34]  #2: ffff8880254ac088 (&tc->t_conn_path_lock){+.+.}-{3:3}, at: rds_tcp_accept_one+0x892/0xd10
[ 77.320865][ T34]  #3: ffff8880746ae730 (k-sk_lock-AF_INET6){+.+.}-{0:0}, at: rds_tcp_reset_callbacks+0x1bf/0x4d0
[ 77.331494][ T34]
[ 77.331494][ T34] stack backtrace:
[ 77.337373][ T34] CPU: 1 PID: 34 Comm: kworker/u4:2 Not tainted 6.0.0-rc1-next-20220817-syzkaller #0
[ 77.346826][ T34] Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/22/2022
[ 77.356879][ T34] Workqueue: krdsd rds_tcp_accept_worker
[ 77.362524][ T34] Call Trace:
[ 77.365797][ T34]
[ 77.368735][ T34]  dump_stack_lvl+0xcd/0x134
[ 77.373359][ T34]  check_noncircular+0x25f/0x2e0
[ 77.378300][ T34]  ? register_lock_class+0xbe/0x1120
[ 77.383589][ T34]  ? print_circular_bug+0x1e0/0x1e0
[ 77.388809][ T34]  ? usage_match+0x100/0x100
[ 77.393401][ T34]  ? usage_match+0x100/0x100
[ 77.398023][ T34]  __lock_acquire+0x2a43/0x56d0
[ 77.402882][ T34]  ? lockdep_hardirqs_on_prepare+0x400/0x400
[ 77.408874][ T34]  lock_acquire+0x1ab/0x570
[ 77.413380][ T34]  ? __flush_work+0xdd/0xae0
[ 77.417982][ T34]  ? lock_release+0x780/0x780
[ 77.422677][ T34]  ? debug_object_assert_init+0x246/0x2e0
[ 77.428410][ T34]  ? lock_downgrade+0x6e0/0x6e0
[ 77.433262][ T34]  ? lock_chain_count+0x20/0x20
[ 77.438119][ T34]  __flush_work+0x105/0xae0
[ 77.442637][ T34]  ? __flush_work+0xdd/0xae0
[ 77.447240][ T34]  ? lock_chain_count+0x20/0x20
[ 77.452092][ T34]  ? queue_delayed_work_on+0x120/0x120
[ 77.457563][ T34]  ? mark_lock.part.0+0xee/0x1910
[ 77.462591][ T34]  ? del_timer+0xc5/0x110
[ 77.466929][ T34]  ? mark_held_locks+0x9f/0xe0
[ 77.471712][ T34]  ? __cancel_work_timer+0x408/0x570
[ 77.477015][ T34]  __cancel_work_timer+0x3f9/0x570
[ 77.482137][ T34]  ? cancel_delayed_work+0x20/0x20
[ 77.487259][ T34]  ? rds_tcp_reset_callbacks+0x1bf/0x4d0
[ 77.492939][ T34]  ? mark_held_locks+0x9f/0xe0
[ 77.497704][ T34]  ? __local_bh_enable_ip+0xa0/0x120
[ 77.502994][ T34]  ? __local_bh_enable_ip+0xa0/0x120
[ 77.508287][ T34]  rds_tcp_reset_callbacks+0x1cb/0x4d0
[ 77.513759][ T34]  ? mutex_lock_io_nested+0x1190/0x1190
[ 77.519315][ T34]  ? rds_tcp_set_callbacks+0x590/0x590
[ 77.524786][ T34]  ? rds_conn_create+0x40/0x50
[ 77.529560][ T34]  rds_tcp_accept_one+0x9d5/0xd10
[ 77.534596][ T34]  ? rds_tcp_keepalive+0xd0/0xd0
[ 77.539548][ T34]  rds_tcp_accept_worker+0x55/0x80
[ 77.544671][ T34]  process_one_work+0x991/0x1610
[ 77.549802][ T34]  ? pwq_dec_nr_in_flight+0x2a0/0x2a0
[ 77.555288][ T34]  ? rwlock_bug.part.0+0x90/0x90
[ 77.560238][ T34]  ? _raw_spin_lock_irq+0x41/0x50
[ 77.565280][ T34]  worker_thread+0x665/0x1080
[ 77.569972][ T34]  ? process_one_work+0x1610/0x1610
[ 77.575193][ T34]  kthread+0x2e4/0x3a0
[ 77.579269][ T34]  ? kthread_complete_and_exit+0x40/0x40
[ 77.584908][ T34]  ret_from_fork+0x1f/0x30
[ 77.589342][ T34]
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
executing program
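The lockdep report above is an AB-BA lock inversion between the RDS/TCP socket lock and the connection's send work item: chain #1 shows the send worker taking the socket lock (rds_send_worker -> rds_send_xmit -> tcp_sock_set_cork -> lock_sock_nested), while chain #0 shows rds_tcp_reset_callbacks() already holding that socket lock (lock #3 in the held-locks list) when it cancels and flushes the same send work (__cancel_work_timer -> __flush_work). The sketch below is a minimal userspace analogy, not the kernel code: it assumes only POSIX threads, and sk_lock / send_worker are illustrative stand-ins for k-sk_lock-AF_INET6 and the cp_send_w work item.

/* Userspace analogy of the inversion reported above (illustrative only).
 * Build with: cc -pthread inversion.c
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t sk_lock = PTHREAD_MUTEX_INITIALIZER;  /* stand-in for k-sk_lock-AF_INET6 */

/* Stand-in for the send worker path: it needs sk_lock to make progress. */
static void *send_worker(void *arg)
{
    (void)arg;
    if (pthread_mutex_trylock(&sk_lock) != 0) {
        /* In the kernel the worker would block here while the thread that
         * owns sk_lock blocks in __flush_work() waiting for this worker:
         * that is the circular dependency lockdep flags. */
        printf("send_worker: sk_lock is held by the thread flushing me -> would deadlock\n");
        return NULL;
    }
    printf("send_worker: acquired sk_lock, sending\n");
    pthread_mutex_unlock(&sk_lock);
    return NULL;
}

/* Stand-in for the rds_tcp_reset_callbacks() path: it takes the socket lock,
 * then waits for the pending send work to finish (the analogue of cancel/flush). */
int main(void)
{
    pthread_t worker;

    pthread_mutex_lock(&sk_lock);            /* socket lock taken in rds_tcp_reset_callbacks() */
    pthread_create(&worker, NULL, send_worker, NULL);
    pthread_join(worker, NULL);              /* analogue of __cancel_work_timer/__flush_work    */
    pthread_mutex_unlock(&sk_lock);          /* socket lock released afterwards                 */
    return 0;
}

The worker only try-locks, so this program terminates and prints the first message instead of hanging; in the kernel both sides would block unconditionally, which is why lockdep reports the scenario as a possible deadlock rather than an observed one.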