INFO: task kworker/0:1:9 blocked for more than 143 seconds.
      Not tainted 6.12.0-rc1-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:1 state:D stack:19440 pid:9 tgid:9 ppid:2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:5315 [inline]
 __schedule+0x1895/0x4b30 kernel/sched/core.c:6675
 __schedule_loop kernel/sched/core.c:6752 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6767
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6824
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 ovs_lock net/openvswitch/datapath.c:108 [inline]
 ovs_dp_masks_rebalance+0x2f/0xe0 net/openvswitch/datapath.c:2527
 process_one_work kernel/workqueue.c:3229 [inline]
 process_scheduled_works+0xa63/0x1850 kernel/workqueue.c:3310
 worker_thread+0x870/0xd30 kernel/workqueue.c:3391
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/0:3:5331 blocked for more than 143 seconds.
      Not tainted 6.12.0-rc1-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:3 state:D stack:19544 pid:5331 tgid:5331 ppid:2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:5315 [inline]
 __schedule+0x1895/0x4b30 kernel/sched/core.c:6675
 __schedule_loop kernel/sched/core.c:6752 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6767
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6824
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 ovs_lock net/openvswitch/datapath.c:108 [inline]
 ovs_dp_masks_rebalance+0x2f/0xe0 net/openvswitch/datapath.c:2527
 process_one_work kernel/workqueue.c:3229 [inline]
 process_scheduled_works+0xa63/0x1850 kernel/workqueue.c:3310
 worker_thread+0x870/0xd30 kernel/workqueue.c:3391
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/0:5:5642 blocked for more than 144 seconds.
      Not tainted 6.12.0-rc1-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:5 state:D stack:20160 pid:5642 tgid:5642 ppid:2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:5315 [inline]
 __schedule+0x1895/0x4b30 kernel/sched/core.c:6675
 __schedule_loop kernel/sched/core.c:6752 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6767
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6824
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 ovs_lock net/openvswitch/datapath.c:108 [inline]
 ovs_dp_masks_rebalance+0x2f/0xe0 net/openvswitch/datapath.c:2527
 process_one_work kernel/workqueue.c:3229 [inline]
 process_scheduled_works+0xa63/0x1850 kernel/workqueue.c:3310
 worker_thread+0x870/0xd30 kernel/workqueue.c:3391
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/0:6:5758 blocked for more than 144 seconds.
      Not tainted 6.12.0-rc1-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:6 state:D stack:18848 pid:5758 tgid:5758 ppid:2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:5315 [inline]
 __schedule+0x1895/0x4b30 kernel/sched/core.c:6675
 __schedule_loop kernel/sched/core.c:6752 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6767
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6824
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 ovs_lock net/openvswitch/datapath.c:108 [inline]
 ovs_dp_masks_rebalance+0x2f/0xe0 net/openvswitch/datapath.c:2527
 process_one_work kernel/workqueue.c:3229 [inline]
 process_scheduled_works+0xa63/0x1850 kernel/workqueue.c:3310
 worker_thread+0x870/0xd30 kernel/workqueue.c:3391
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/0:2:22857 blocked for more than 145 seconds.
      Not tainted 6.12.0-rc1-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:2 state:D stack:19792 pid:22857 tgid:22857 ppid:2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:5315 [inline]
 __schedule+0x1895/0x4b30 kernel/sched/core.c:6675
 __schedule_loop kernel/sched/core.c:6752 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6767
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6824
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 ovs_lock net/openvswitch/datapath.c:108 [inline]
 ovs_dp_masks_rebalance+0x2f/0xe0 net/openvswitch/datapath.c:2527
 process_one_work kernel/workqueue.c:3229 [inline]
 process_scheduled_works+0xa63/0x1850 kernel/workqueue.c:3310
 worker_thread+0x870/0xd30 kernel/workqueue.c:3391
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/0:4:24502 blocked for more than 146 seconds.
      Not tainted 6.12.0-rc1-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:4 state:D stack:27792 pid:24502 tgid:24502 ppid:2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:5315 [inline]
 __schedule+0x1895/0x4b30 kernel/sched/core.c:6675
 __schedule_loop kernel/sched/core.c:6752 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6767
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6824
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 ovs_lock net/openvswitch/datapath.c:108 [inline]
 ovs_dp_masks_rebalance+0x2f/0xe0 net/openvswitch/datapath.c:2527
 process_one_work kernel/workqueue.c:3229 [inline]
 process_scheduled_works+0xa63/0x1850 kernel/workqueue.c:3310
 worker_thread+0x870/0xd30 kernel/workqueue.c:3391
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

Showing all locks held in the system:
3 locks held by kworker/0:1/9:
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc900000e7d00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc900000e7d00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff90004f68 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:108 [inline]
 #2: ffffffff90004f68 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2f/0xe0 net/openvswitch/datapath.c:2527
1 lock held by khungtaskd/30:
 #0: ffffffff8e937de0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8e937de0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8e937de0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6720
5 locks held by kworker/1:1/46:
3 locks held by kworker/u8:4/62:
 #0: ffff88814bbb8148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88814bbb8148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc900015d7d00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc900015d7d00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8fcd1708 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x19/0x30 net/ipv6/addrconf.c:4736
2 locks held by getty/4987:
 #0: ffff88802daf50a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002f062f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6a6/0x1e00 drivers/tty/n_tty.c:2211
3 locks held by kworker/0:3/5331:
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc90003e0fd00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc90003e0fd00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff90004f68 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:108 [inline]
 #2: ffffffff90004f68 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2f/0xe0 net/openvswitch/datapath.c:2527
3 locks held by kworker/0:5/5642:
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc90004a07d00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc90004a07d00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff90004f68 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:108 [inline]
 #2: ffffffff90004f68 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2f/0xe0 net/openvswitch/datapath.c:2527
3 locks held by kworker/0:6/5758:
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc90003dafd00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc90003dafd00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff90004f68 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:108 [inline]
 #2: ffffffff90004f68 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2f/0xe0 net/openvswitch/datapath.c:2527
3 locks held by kworker/1:4/5759:
3 locks held by kworker/u8:11/12139:
 #0: ffff88801baed948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801baed948 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc900039d7d00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc900039d7d00 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8fcc4c10 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x16a/0xcc0 net/core/net_namespace.c:580
3 locks held by kworker/0:2/22857:
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc90003d5fd00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc90003d5fd00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff90004f68 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:108 [inline]
 #2: ffffffff90004f68 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2f/0xe0 net/openvswitch/datapath.c:2527
1 lock held by syz-executor/24268:
 #0: ffffffff8fcd1708 (rtnl_mutex){+.+.}-{3:3}, at: tun_detach drivers/net/tun.c:698 [inline]
 #0: ffffffff8fcd1708 (rtnl_mutex){+.+.}-{3:3}, at: tun_chr_close+0x3b/0x1b0 drivers/net/tun.c:3517
3 locks held by syz-executor/24491:
 #0: ffffffff8fcc4c10 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcd1708 (rtnl_mutex){+.+.}-{3:3}, at: setup_net+0x602/0x9e0 net/core/net_namespace.c:378
 #2: ffffffff8e7d1dd0 (cpu_hotplug_lock){++++}-{0:0}, at: flush_all_backlogs net/core/dev.c:6021 [inline]
 #2: ffffffff8e7d1dd0 (cpu_hotplug_lock){++++}-{0:0}, at: unregister_netdevice_many_notify+0x5ea/0x1da0 net/core/dev.c:11380
1 lock held by syz-executor/24497:
 #0: ffffffff8fcd1708 (rtnl_mutex){+.+.}-{3:3}, at: tun_detach drivers/net/tun.c:698 [inline]
 #0: ffffffff8fcd1708 (rtnl_mutex){+.+.}-{3:3}, at: tun_chr_close+0x3b/0x1b0 drivers/net/tun.c:3517
2 locks held by syz-executor/24499:
 #0: ffffffff8fcc4c10 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcd1708 (rtnl_mutex){+.+.}-{3:3}, at: ppp_exit_net+0xe3/0x3d0 drivers/net/ppp/ppp_generic.c:1146
2 locks held by syz-executor/24501:
 #0: ffffffff8fcc4c10 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcd1708 (rtnl_mutex){+.+.}-{3:3}, at: setup_net+0x602/0x9e0 net/core/net_namespace.c:378
3 locks held by kworker/0:4/24502:
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc9000346fd00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc9000346fd00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff90004f68 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:108 [inline]
 #2: ffffffff90004f68 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2f/0xe0 net/openvswitch/datapath.c:2527
3 locks held by kworker/0:8/24503:
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc900049f7d00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc900049f7d00 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff90004f68 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:108 [inline]
 #2: ffffffff90004f68 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x2f/0xe0 net/openvswitch/datapath.c:2527
2 locks held by syz-executor/24512:
 #0: ffffffff8fcc4c10 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcd1708 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_init_net+0x20e/0x720 net/ipv4/ip_tunnel.c:1159
2 locks held by syz-executor/24521:
 #0: ffffffff8fcc4c10 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcd1708 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_init_net+0x20e/0x720 net/ipv4/ip_tunnel.c:1159
2 locks held by syz-executor/24524:
 #0: ffffffff8fcc4c10 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcd1708 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_init_net+0x20e/0x720 net/ipv4/ip_tunnel.c:1159
2 locks held by syz-executor/24525:
 #0: ffffffff8fcc4c10 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcd1708 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_init_net+0x20e/0x720 net/ipv4/ip_tunnel.c:1159
2 locks held by syz-executor/24527:
 #0: ffffffff8fcc4c10 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcd1708 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_init_net+0x20e/0x720 net/ipv4/ip_tunnel.c:1159
2 locks held by syz-executor/24536:
 #0: ffffffff8fcc4c10 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcd1708 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_init_net+0x20e/0x720 net/ipv4/ip_tunnel.c:1159
2 locks held by syz-executor/24545:
 #0: ffffffff8fcc4c10 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcd1708 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_init_net+0x20e/0x720 net/ipv4/ip_tunnel.c:1159
2 locks held by syz-executor/24547:
 #0: ffffffff8fcc4c10 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcd1708 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_init_net+0x20e/0x720 net/ipv4/ip_tunnel.c:1159
2 locks held by syz-executor/24549:
 #0: ffffffff8fcc4c10 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcd1708 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_init_net+0x20e/0x720 net/ipv4/ip_tunnel.c:1159
2 locks held by syz-executor/24551:
 #0: ffffffff8fcc4c10 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcd1708 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_init_net+0x20e/0x720 net/ipv4/ip_tunnel.c:1159
2 locks held by syz-executor/24562:
 #0: ffffffff8fcc4c10 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcd1708 (rtnl_mutex){+.+.}-{3:3}, at: register_nexthop_notifier+0x84/0x290 net/ipv4/nexthop.c:3885

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 30 Comm: khungtaskd Not tainted 6.12.0-rc1-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:223 [inline]
 watchdog+0xff4/0x1040 kernel/hung_task.c:379
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 46 Comm: kworker/1:1 Not tainted 6.12.0-rc1-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Workqueue: events nsim_dev_trap_report_work
RIP: 0010:hlock_class kernel/locking/lockdep.c:228 [inline]
RIP: 0010:check_wait_context kernel/locking/lockdep.c:4827 [inline]
RIP: 0010:__lock_acquire+0x5e7/0x2050 kernel/locking/lockdep.c:5152
Code: 41 8b 1f 81 e3 ff 1f 00 00 48 89 d8 48 c1 e8 06 48 8d 3c c5 00 58 2c 94 be 08 00 00 00 e8 d1 3e 8e 00 48 0f a3 1d 69 07 bc 12 <73> 26 48 69 c3 c8 00 00 00 48 8d 98 c0 d6 c3 93 48 ba 00 00 00 00
RSP: 0018:ffffc90000a18890 EFLAGS: 00000057
RAX: 0000000000000001 RBX: 0000000000000811 RCX: ffffffff8170508f
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff942c5900
RBP: 0000000000000002 R08: ffffffff942c5907 R09: 1ffffffff2858b20
R10: dffffc0000000000 R11: fffffbfff2858b21 R12: ffff888020e80000
R13: 0000000000000811 R14: 1ffff110041d0174 R15: ffff888020e80ba0
FS: 0000000000000000(0000) GS:ffff8880b8700000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b2d21dff8 CR3: 000000000e734000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5825
 local_lock_acquire include/linux/local_lock_internal.h:29 [inline]
 process_backlog+0x824/0x15b0 net/core/dev.c:6114
 __napi_poll+0xcb/0x490 net/core/dev.c:6771
 napi_poll net/core/dev.c:6840 [inline]
 net_rx_action+0x89b/0x1240 net/core/dev.c:6962
 handle_softirqs+0x2c5/0x980 kernel/softirq.c:554
 do_softirq+0x11b/0x1e0 kernel/softirq.c:455
 __local_bh_enable_ip+0x1bb/0x200 kernel/softirq.c:382
 spin_unlock_bh include/linux/spinlock.h:396 [inline]
 nsim_dev_trap_report drivers/net/netdevsim/dev.c:820 [inline]
 nsim_dev_trap_report_work+0x75d/0xaa0 drivers/net/netdevsim/dev.c:850
 process_one_work kernel/workqueue.c:3229 [inline]
 process_scheduled_works+0xa63/0x1850 kernel/workqueue.c:3310
 worker_thread+0x870/0xd30 kernel/workqueue.c:3391
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244