Warning: Permanently added '10.128.0.252' (ED25519) to the list of known hosts.
[ 34.533982][ T6097] chnl_net:caif_netlink_parms(): no params data found
[ 34.564652][ T6097] bridge0: port 1(bridge_slave_0) entered blocking state
[ 34.566716][ T6097] bridge0: port 1(bridge_slave_0) entered disabled state
[ 34.568749][ T6097] bridge_slave_0: entered allmulticast mode
[ 34.570801][ T6097] bridge_slave_0: entered promiscuous mode
[ 34.574428][ T6097] bridge0: port 2(bridge_slave_1) entered blocking state
[ 34.576358][ T6097] bridge0: port 2(bridge_slave_1) entered disabled state
[ 34.578238][ T6097] bridge_slave_1: entered allmulticast mode
[ 34.580305][ T6097] bridge_slave_1: entered promiscuous mode
[ 34.592810][ T6097] bond0: (slave bond_slave_0): Enslaving as an active interface with an up link
[ 34.596771][ T6097] bond0: (slave bond_slave_1): Enslaving as an active interface with an up link
[ 34.608964][ T6097] team0: Port device team_slave_0 added
[ 34.612193][ T6097] team0: Port device team_slave_1 added
[ 34.623252][ T6097] batman_adv: batadv0: Adding interface: batadv_slave_0
[ 34.625254][ T6097] batman_adv: batadv0: The MTU of interface batadv_slave_0 is too small (1500) to handle the transport of batman-adv packets. Packets going over this interface will be fragmented on layer2 which could impact the performance. Setting the MTU to 1560 would solve the problem.
[ 34.631919][ T6097] batman_adv: batadv0: Not using interface batadv_slave_0 (retrying later): interface not active
[ 34.636500][ T6097] batman_adv: batadv0: Adding interface: batadv_slave_1
[ 34.638248][ T6097] batman_adv: batadv0: The MTU of interface batadv_slave_1 is too small (1500) to handle the transport of batman-adv packets. Packets going over this interface will be fragmented on layer2 which could impact the performance. Setting the MTU to 1560 would solve the problem.
[ 34.644890][ T6097] batman_adv: batadv0: Not using interface batadv_slave_1 (retrying later): interface not active
[ 34.715846][ T6097] hsr_slave_0: entered promiscuous mode
[ 34.754254][ T6097] hsr_slave_1: entered promiscuous mode
[ 34.864279][ T6097] netdevsim netdevsim0 netdevsim0: renamed from eth0
[ 34.906427][ T6097] netdevsim netdevsim0 netdevsim1: renamed from eth1
[ 34.956198][ T6097] netdevsim netdevsim0 netdevsim2: renamed from eth2
[ 34.995456][ T6097] netdevsim netdevsim0 netdevsim3: renamed from eth3
[ 35.059007][ T6097] bridge0: port 2(bridge_slave_1) entered blocking state
[ 35.060868][ T6097] bridge0: port 2(bridge_slave_1) entered forwarding state
[ 35.063193][ T6097] bridge0: port 1(bridge_slave_0) entered blocking state
[ 35.065238][ T6097] bridge0: port 1(bridge_slave_0) entered forwarding state
[ 35.092226][ T6097] 8021q: adding VLAN 0 to HW filter on device bond0
[ 35.111401][ T6104] bridge0: port 1(bridge_slave_0) entered disabled state
[ 35.126533][ T6104] bridge0: port 2(bridge_slave_1) entered disabled state
[ 35.133854][ T6097] 8021q: adding VLAN 0 to HW filter on device team0
[ 35.139298][ T23] bridge0: port 1(bridge_slave_0) entered blocking state
[ 35.141072][ T23] bridge0: port 1(bridge_slave_0) entered forwarding state
[ 35.147147][ T23] bridge0: port 2(bridge_slave_1) entered blocking state
[ 35.149069][ T23] bridge0: port 2(bridge_slave_1) entered forwarding state
[ 35.163499][ T6097] hsr0: Slave A (hsr_slave_0) is not up; please bring it up to get a fully working HSR network
[ 35.166696][ T6097] hsr0: Slave B (hsr_slave_1) is not up; please bring it up to get a fully working HSR network
[ 35.188104][ T6097] 8021q: adding VLAN 0 to HW filter on device batadv0
[ 35.211191][ T6097] veth0_vlan: entered promiscuous mode
[ 35.216967][ T6097] veth1_vlan: entered promiscuous mode
[ 35.228746][ T6097] veth0_macvtap: entered promiscuous mode
[ 35.232232][ T6097] veth1_macvtap: entered promiscuous mode
[ 35.243822][ T6097] batman_adv: batadv0: Interface activated: batadv_slave_0
[ 35.249064][ T6097] batman_adv: batadv0: Interface activated: batadv_slave_1
[ 35.252294][ T6097] netdevsim netdevsim0 netdevsim0: set [1, 0] type 2 family 0 port 6081 - 0
[ 35.254680][ T6097] netdevsim netdevsim0 netdevsim1: set [1, 0] type 2 family 0 port 6081 - 0
[ 35.256908][ T6097] netdevsim netdevsim0 netdevsim2: set [1, 0] type 2 family 0 port 6081 - 0
[ 35.259092][ T6097] netdevsim netdevsim0 netdevsim3: set [1, 0] type 2 family 0 port 6081 - 0
executing program
[ 35.711982][ C0]
[ 35.712669][ C0] ======================================================
[ 35.714464][ C0] WARNING: possible circular locking dependency detected
[ 35.716326][ C0] 6.7.0-rc5-syzkaller-gd5b235ec8eab #0 Not tainted
[ 35.717872][ C0] ------------------------------------------------------
[ 35.719675][ C0] kworker/0:0/8 is trying to acquire lock:
[ 35.721095][ C0] ffff0000d4695088 (&priv->active_session_list_lock){+.-.}-{2:2}, at: j1939_session_activate+0x60/0x378
[ 35.723998][ C0]
[ 35.723998][ C0] but task is already holding lock:
[ 35.725919][ C0] ffff0000d462c5c8 (&jsk->sk_session_queue_lock){+.-.}-{2:2}, at: j1939_sk_queue_activate_next+0x60/0x3b4
[ 35.728721][ C0]
[ 35.728721][ C0] which lock already depends on the new lock.
[ 35.728721][ C0]
[ 35.731340][ C0]
[ 35.731340][ C0] the existing dependency chain (in reverse order) is:
[ 35.733599][ C0]
[ 35.733599][ C0] -> #2 (&jsk->sk_session_queue_lock){+.-.}-{2:2}:
[ 35.735798][ C0]        _raw_spin_lock_bh+0x48/0x60
[ 35.737331][ C0]        j1939_sk_queue_drop_all+0x4c/0x200
[ 35.738858][ C0]        j1939_sk_netdev_event_netdown+0xe0/0x144
[ 35.740446][ C0]        j1939_netdev_notify+0xf0/0x144
[ 35.741896][ C0]        notifier_call_chain+0x1a4/0x510
[ 35.743348][ C0]        raw_notifier_call_chain+0x3c/0x50
[ 35.744809][ C0]        __dev_notify_flags+0x2c4/0x550
[ 35.746265][ C0]        dev_change_flags+0xd0/0x15c
[ 35.747624][ C0]        do_setlink+0xc64/0x366c
[ 35.748812][ C0]        rtnl_newlink+0x14b4/0x1bc0
[ 35.750032][ C0]        rtnetlink_rcv_msg+0x748/0xdbc
[ 35.751448][ C0]        netlink_rcv_skb+0x214/0x3c4
[ 35.752714][ C0]        rtnetlink_rcv+0x28/0x38
[ 35.753891][ C0]        netlink_unicast+0x65c/0x898
[ 35.755143][ C0]        netlink_sendmsg+0x83c/0xb20
[ 35.756434][ C0]        ____sys_sendmsg+0x56c/0x840
[ 35.757789][ C0]        __sys_sendmsg+0x26c/0x33c
[ 35.759063][ C0]        __arm64_sys_sendmsg+0x80/0x94
[ 35.760503][ C0]        invoke_syscall+0x98/0x2b8
[ 35.761776][ C0]        el0_svc_common+0x130/0x23c
[ 35.763113][ C0]        do_el0_svc+0x48/0x58
[ 35.764237][ C0]        el0_svc+0x54/0x158
[ 35.765408][ C0]        el0t_64_sync_handler+0x84/0xfc
[ 35.766827][ C0]        el0t_64_sync+0x190/0x194
[ 35.768068][ C0]
[ 35.768068][ C0] -> #1 (&priv->j1939_socks_lock){+.-.}-{2:2}:
[ 35.770296][ C0]        _raw_spin_lock_bh+0x48/0x60
[ 35.771701][ C0]        j1939_sk_errqueue+0x90/0x144
[ 35.772743][ C0]        j1939_session_put+0xf0/0x4b4
[ 35.773719][ C0]        j1939_cancel_active_session+0x2ec/0x414
[ 35.774843][ C0]        j1939_netdev_notify+0xe8/0x144
[ 35.775826][ C0]        notifier_call_chain+0x1a4/0x510
[ 35.776835][ C0]        raw_notifier_call_chain+0x3c/0x50
[ 35.778284][ C0]        __dev_notify_flags+0x2c4/0x550
[ 35.779636][ C0]        dev_change_flags+0xd0/0x15c
[ 35.780856][ C0]        do_setlink+0xc64/0x366c
[ 35.782092][ C0]        rtnl_newlink+0x14b4/0x1bc0
[ 35.783377][ C0]        rtnetlink_rcv_msg+0x748/0xdbc
[ 35.784691][ C0]        netlink_rcv_skb+0x214/0x3c4
[ 35.786133][ C0]        rtnetlink_rcv+0x28/0x38
[ 35.787350][ C0]        netlink_unicast+0x65c/0x898
[ 35.788687][ C0]        netlink_sendmsg+0x83c/0xb20
[ 35.790014][ C0]        ____sys_sendmsg+0x56c/0x840
[ 35.791372][ C0]        __sys_sendmsg+0x26c/0x33c
[ 35.792581][ C0]        __arm64_sys_sendmsg+0x80/0x94
[ 35.793883][ C0]        invoke_syscall+0x98/0x2b8
[ 35.795134][ C0]        el0_svc_common+0x130/0x23c
[ 35.796554][ C0]        do_el0_svc+0x48/0x58
[ 35.797657][ C0]        el0_svc+0x54/0x158
[ 35.798745][ C0]        el0t_64_sync_handler+0x84/0xfc
[ 35.800208][ C0]        el0t_64_sync+0x190/0x194
[ 35.801416][ C0]
[ 35.801416][ C0] -> #0 (&priv->active_session_list_lock){+.-.}-{2:2}:
[ 35.803725][ C0]        __lock_acquire+0x3384/0x763c
[ 35.805148][ C0]        lock_acquire+0x23c/0x71c
[ 35.806415][ C0]        _raw_spin_lock_bh+0x48/0x60
[ 35.807807][ C0]        j1939_session_activate+0x60/0x378
[ 35.809299][ C0]        j1939_sk_queue_activate_next+0x230/0x3b4
[ 35.810914][ C0]        j1939_xtp_rx_eoma+0x2c0/0x4c0
[ 35.812267][ C0]        j1939_tp_recv+0x714/0xe14
[ 35.813551][ C0]        j1939_can_recv+0x5bc/0x930
[ 35.814898][ C0]        can_rcv_filter+0x308/0x714
[ 35.816146][ C0]        can_receive+0x33c/0x49c
[ 35.817424][ C0]        can_rcv+0x128/0x23c
[ 35.818599][ C0]        __netif_receive_skb+0x18c/0x400
[ 35.819962][ C0]        process_backlog+0x3c0/0x70c
[ 35.821385][ C0]        __napi_poll+0xb4/0x650
[ 35.822662][ C0]        net_rx_action+0x5e4/0xdc4
[ 35.824007][ C0]        __do_softirq+0x2d8/0xce4
[ 35.825296][ C0]        ____do_softirq+0x14/0x20
[ 35.826531][ C0]        call_on_irq_stack+0x24/0x4c
[ 35.827942][ C0]        do_softirq_own_stack+0x20/0x2c
[ 35.829282][ C0]        do_softirq+0x90/0xf8
[ 35.830449][ C0]        __local_bh_enable_ip+0x288/0x44c
[ 35.831865][ C0]        _raw_read_unlock_bh+0x3c/0x4c
[ 35.833323][ C0]        inet6_fill_ifla6_attrs+0xf18/0x1e88
[ 35.834880][ C0]        inet6_fill_link_af+0xac/0x144
[ 35.836165][ C0]        rtnl_fill_link_af+0x17c/0x3f4
[ 35.837459][ C0]        rtnl_fill_ifinfo+0x1a0c/0x1d2c
[ 35.838878][ C0]        rtmsg_ifinfo_build_skb+0x180/0x260
[ 35.840350][ C0]        rtmsg_ifinfo+0xa0/0x188
[ 35.841643][ C0]        netdev_state_change+0x1a8/0x238
[ 35.843095][ C0]        linkwatch_do_dev+0x108/0x1a8
[ 35.844410][ C0]        __linkwatch_run_queue+0x3a0/0x700
[ 35.845950][ C0]        linkwatch_event+0x58/0x68
[ 35.847305][ C0]        process_one_work+0x694/0x1204
[ 35.848711][ C0]        worker_thread+0x938/0xef4
[ 35.850090][ C0]        kthread+0x288/0x310
[ 35.851348][ C0]        ret_from_fork+0x10/0x20
[ 35.852625][ C0]
[ 35.852625][ C0] other info that might help us debug this:
[ 35.852625][ C0]
[ 35.855306][ C0] Chain exists of:
[ 35.855306][ C0]   &priv->active_session_list_lock --> &priv->j1939_socks_lock --> &jsk->sk_session_queue_lock
[ 35.855306][ C0]
[ 35.859481][ C0]  Possible unsafe locking scenario:
[ 35.859481][ C0]
[ 35.861397][ C0]        CPU0                    CPU1
[ 35.862752][ C0]        ----                    ----
[ 35.864118][ C0]   lock(&jsk->sk_session_queue_lock);
[ 35.865518][ C0]                                lock(&priv->j1939_socks_lock);
[ 35.867413][ C0]                                lock(&jsk->sk_session_queue_lock);
[ 35.869444][ C0]   lock(&priv->active_session_list_lock);
[ 35.870604][ C0]
[ 35.870604][ C0]  *** DEADLOCK ***
[ 35.870604][ C0]
[ 35.872704][ C0] 7 locks held by kworker/0:0/8:
[ 35.873973][ C0]  #0: ffff0000c0020d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x560/0x1204
[ 35.876583][ C0]  #1: ffff800092f77c20 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x5a0/0x1204
[ 35.879125][ C0]  #2: ffff800090f71068 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c
[ 35.881343][ C0]  #3: ffff80008e6c4e00 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x10/0x4c
[ 35.883905][ C0]  #4: ffff80008e6c4e00 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x18/0x54
[ 35.886379][ C0]  #5: ffff80008e6c4e00 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x10/0x4c
[ 35.888713][ C0]  #6: ffff0000d462c5c8 (&jsk->sk_session_queue_lock){+.-.}-{2:2}, at: j1939_sk_queue_activate_next+0x60/0x3b4
[ 35.891700][ C0]
[ 35.891700][ C0] stack backtrace:
[ 35.893319][ C0] CPU: 0 PID: 8 Comm: kworker/0:0 Not tainted 6.7.0-rc5-syzkaller-gd5b235ec8eab #0
[ 35.895702][ C0] Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/10/2023
[ 35.898162][ C0] Workqueue: events linkwatch_event
[ 35.899587][ C0] Call trace:
[ 35.900417][ C0]  dump_backtrace+0x1b8/0x1e4
[ 35.901364][ C0]  show_stack+0x2c/0x3c
[ 35.902298][ C0]  dump_stack_lvl+0xd0/0x124
[ 35.903437][ C0]  dump_stack+0x1c/0x28
[ 35.904433][ C0]  print_circular_bug+0x150/0x1b8
[ 35.905702][ C0]  check_noncircular+0x310/0x404
[ 35.906921][ C0]  __lock_acquire+0x3384/0x763c
[ 35.908045][ C0]  lock_acquire+0x23c/0x71c
[ 35.909176][ C0]  _raw_spin_lock_bh+0x48/0x60
[ 35.910452][ C0]  j1939_session_activate+0x60/0x378
[ 35.911673][ C0]  j1939_sk_queue_activate_next+0x230/0x3b4
[ 35.913221][ C0]  j1939_xtp_rx_eoma+0x2c0/0x4c0
[ 35.914422][ C0]  j1939_tp_recv+0x714/0xe14
[ 35.915609][ C0]  j1939_can_recv+0x5bc/0x930
[ 35.916781][ C0]  can_rcv_filter+0x308/0x714
[ 35.917969][ C0]  can_receive+0x33c/0x49c
[ 35.919072][ C0]  can_rcv+0x128/0x23c
[ 35.920086][ C0]  __netif_receive_skb+0x18c/0x400
[ 35.921451][ C0]  process_backlog+0x3c0/0x70c
[ 35.922668][ C0]  __napi_poll+0xb4/0x650
[ 35.923828][ C0]  net_rx_action+0x5e4/0xdc4
[ 35.924994][ C0]  __do_softirq+0x2d8/0xce4
[ 35.926049][ C0]  ____do_softirq+0x14/0x20
[ 35.927197][ C0]  call_on_irq_stack+0x24/0x4c
[ 35.928470][ C0]  do_softirq_own_stack+0x20/0x2c
[ 35.929732][ C0]  do_softirq+0x90/0xf8
[ 35.930666][ C0]  __local_bh_enable_ip+0x288/0x44c
[ 35.932021][ C0]  _raw_read_unlock_bh+0x3c/0x4c
[ 35.933365][ C0]  inet6_fill_ifla6_attrs+0xf18/0x1e88
[ 35.934774][ C0]  inet6_fill_link_af+0xac/0x144
[ 35.936050][ C0]  rtnl_fill_link_af+0x17c/0x3f4
[ 35.937285][ C0]  rtnl_fill_ifinfo+0x1a0c/0x1d2c
[ 35.938686][ C0]  rtmsg_ifinfo_build_skb+0x180/0x260
[ 35.940051][ C0]  rtmsg_ifinfo+0xa0/0x188
[ 35.941152][ C0]  netdev_state_change+0x1a8/0x238
[ 35.942471][ C0]  linkwatch_do_dev+0x108/0x1a8
[ 35.943736][ C0]  __linkwatch_run_queue+0x3a0/0x700
[ 35.944948][ C0]  linkwatch_event+0x58/0x68
[ 35.946147][ C0]  process_one_work+0x694/0x1204
[ 35.947417][ C0]  worker_thread+0x938/0xef4
[ 35.948615][ C0]  kthread+0x288/0x310
[ 35.949601][ C0]  ret_from_fork+0x10/0x20
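
The lockdep report above describes a lock-order inversion in the CAN J1939 stack: the netdev-notifier path (chains #2 and #1) establishes the order &priv->active_session_list_lock --> &priv->j1939_socks_lock --> &jsk->sk_session_queue_lock, while the receive path (#0, j1939_sk_queue_activate_next() calling j1939_session_activate()) takes &jsk->sk_session_queue_lock first and then &priv->active_session_list_lock, closing the cycle. Below is a minimal user-space sketch of that two-lock inversion pattern, not the kernel code itself: the thread names and pthread mutexes are illustrative stand-ins for the kernel paths and spinlocks named in the report.

/*
 * Illustrative sketch only -- NOT kernel code.  It reproduces, with
 * pthread mutexes, the two acquisition orders that lockdep flags above.
 * The lock names mirror the report; in the kernel these are spinlocks
 * taken in process and softirq context.
 */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t active_session_list_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t sk_session_queue_lock    = PTHREAD_MUTEX_INITIALIZER;

/* Order used (transitively) by the netdev-notifier chain (#2/#1):
 * active session list lock first, per-socket session queue lock second. */
static void *netdev_down_order(void *arg)
{
    pthread_mutex_lock(&active_session_list_lock);
    sleep(1);                                   /* widen the race window */
    if (pthread_mutex_trylock(&sk_session_queue_lock) == EBUSY)
        puts("netdev_down_order: would block on sk_session_queue_lock");
    else
        pthread_mutex_unlock(&sk_session_queue_lock);
    pthread_mutex_unlock(&active_session_list_lock);
    return NULL;
}

/* Order used by the RX path (#0): session queue lock held in
 * j1939_sk_queue_activate_next(), then j1939_session_activate()
 * tries to take the active session list lock. */
static void *rx_path_order(void *arg)
{
    pthread_mutex_lock(&sk_session_queue_lock);
    sleep(1);
    if (pthread_mutex_trylock(&active_session_list_lock) == EBUSY)
        puts("rx_path_order: would block on active_session_list_lock");
    else
        pthread_mutex_unlock(&active_session_list_lock);
    pthread_mutex_unlock(&sk_session_queue_lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    /* With blocking lock() calls instead of trylock(), the two threads
     * would hang exactly as in the "Possible unsafe locking scenario"
     * table above. */
    pthread_create(&a, NULL, netdev_down_order, NULL);
    pthread_create(&b, NULL, rx_path_order, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}

With both threads holding their first lock, each trylock of the other lock fails, which is the state lockdep is warning about; the general remedy for such reports is to make all paths agree on a single acquisition order, or to drop one lock before taking the other.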