INFO: task kworker/R-mm_pe:13 blocked for more than 143 seconds.
Not tainted 6.11.0-syzkaller-07462-g1868f9d0260e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-mm_pe state:D stack:29328 pid:13 tgid:13 ppid:2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:5264 [inline]
 __schedule+0x1893/0x4b50 kernel/sched/core.c:6607
 __schedule_loop kernel/sched/core.c:6684 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6699
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6756
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3471
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-wg-cr:10144 blocked for more than 144 seconds.
Not tainted 6.11.0-syzkaller-07462-g1868f9d0260e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:29328 pid:10144 tgid:10144 ppid:2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:5264 [inline]
 __schedule+0x1893/0x4b50 kernel/sched/core.c:6607
 __schedule_loop kernel/sched/core.c:6684 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6699
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6756
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3471
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-wg-cr:10152 blocked for more than 144 seconds.
Not tainted 6.11.0-syzkaller-07462-g1868f9d0260e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:29328 pid:10152 tgid:10152 ppid:2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:5264 [inline]
 __schedule+0x1893/0x4b50 kernel/sched/core.c:6607
 __schedule_loop kernel/sched/core.c:6684 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6699
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6756
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3471
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-wg-cr:10251 blocked for more than 145 seconds.
Not tainted 6.11.0-syzkaller-07462-g1868f9d0260e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:29088 pid:10251 tgid:10251 ppid:2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:5264 [inline]
 __schedule+0x1893/0x4b50 kernel/sched/core.c:6607
 __schedule_loop kernel/sched/core.c:6684 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6699
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6756
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3471
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-wg-cr:10254 blocked for more than 145 seconds.
Not tainted 6.11.0-syzkaller-07462-g1868f9d0260e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:29328 pid:10254 tgid:10254 ppid:2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:5264 [inline]
 __schedule+0x1893/0x4b50 kernel/sched/core.c:6607
 __schedule_loop kernel/sched/core.c:6684 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6699
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6756
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3471
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-wg-cr:10314 blocked for more than 146 seconds.
Not tainted 6.11.0-syzkaller-07462-g1868f9d0260e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:29264 pid:10314 tgid:10314 ppid:2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:5264 [inline]
 __schedule+0x1893/0x4b50 kernel/sched/core.c:6607
 __schedule_loop kernel/sched/core.c:6684 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6699
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6756
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3471
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-bond0:10798 blocked for more than 146 seconds.
Not tainted 6.11.0-syzkaller-07462-g1868f9d0260e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-bond0 state:D stack:29392 pid:10798 tgid:10798 ppid:2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:5264 [inline]
 __schedule+0x1893/0x4b50 kernel/sched/core.c:6607
 __schedule_loop kernel/sched/core.c:6684 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6699
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6756
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 set_pf_worker kernel/workqueue.c:3316 [inline]
 rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-bond0:10799 blocked for more than 147 seconds.
Not tainted 6.11.0-syzkaller-07462-g1868f9d0260e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-bond0 state:D stack:29392 pid:10799 tgid:10799 ppid:2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:5264 [inline]
 __schedule+0x1893/0x4b50 kernel/sched/core.c:6607
 __schedule_loop kernel/sched/core.c:6684 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6699
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6756
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 set_pf_worker kernel/workqueue.c:3316 [inline]
 rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-bond0:10801 blocked for more than 147 seconds.
Not tainted 6.11.0-syzkaller-07462-g1868f9d0260e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-bond0 state:D stack:29392 pid:10801 tgid:10801 ppid:2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:5264 [inline]
 __schedule+0x1893/0x4b50 kernel/sched/core.c:6607
 __schedule_loop kernel/sched/core.c:6684 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6699
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6756
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 set_pf_worker kernel/workqueue.c:3316 [inline]
 rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

Showing all locks held in the system:
1 lock held by kworker/R-mm_pe/13:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
6 locks held by kworker/1:0/25:
1 lock held by khungtaskd/30:
 #0: ffffffff8e9389e0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8e9389e0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8e9389e0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6701
2 locks held by dhcpcd/4891:
 #0: ffff88807a4106c8 (nlk_cb_mutex-ROUTE){+.+.}-{3:3}, at: netlink_dump+0xcb/0xd80 net/netlink/af_netlink.c:2271
 #1: ffffffff8fcc9888 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #1: ffffffff8fcc9888 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_dumpit+0x99/0x200 net/core/rtnetlink.c:6505
2 locks held by getty/4985:
 #0: ffff88803299a0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc900031232f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6a6/0x1e00 drivers/tty/n_tty.c:2211
2 locks held by kworker/1:2/5236:
3 locks held by kworker/1:6/5342:
1 lock held by kworker/R-bond0/10105:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xfbf/0x10a0 kernel/workqueue.c:3535
1 lock held by kworker/R-wg-cr/10144:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/10146:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/10152:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/10251:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/10254:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/10265:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/10270:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/10276:
1 lock held by kworker/R-wg-cr/10313:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/10314:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
3 locks held by kworker/u8:10/10771:
 #0: ffff88801baed948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801baed948 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc900095b7d00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc900095b7d00 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8fcbcd90 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x16a/0xcc0 net/core/net_namespace.c:580
1 lock held by kworker/1:5/10777:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_thread+0x5c/0xd30 kernel/workqueue.c:3342
1 lock held by kworker/R-bond0/10798:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
1 lock held by kworker/R-bond0/10799:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
1 lock held by kworker/R-bond0/10801:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
1 lock held by kworker/R-bond0/10806:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
1 lock held by kworker/R-bond0/10807:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
1 lock held by kworker/R-wg-cr/10814:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
1 lock held by kworker/R-wg-cr/10815:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
1 lock held by kworker/R-wg-cr/10816:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
1 lock held by kworker/R-wg-cr/10817:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
1 lock held by kworker/R-wg-cr/10818:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
1 lock held by kworker/R-wg-cr/10819:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
1 lock held by kworker/R-wg-cr/10820:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
1 lock held by kworker/R-wg-cr/10821:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
3 locks held by syz-executor/10824:
 #0: ffffffff8fcbcd90 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcc9888 (rtnl_mutex){+.+.}-{3:3}, at: setup_net+0x602/0x9e0 net/core/net_namespace.c:378
 #2: ffffffff8e7d28d0 (cpu_hotplug_lock){++++}-{0:0}, at: flush_all_backlogs net/core/dev.c:6021 [inline]
 #2: ffffffff8e7d28d0 (cpu_hotplug_lock){++++}-{0:0}, at: unregister_netdevice_many_notify+0x5ea/0x1da0 net/core/dev.c:11380
2 locks held by syz-executor/10827:
 #0: ffffffff8fcbcd90 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcc9888 (rtnl_mutex){+.+.}-{3:3}, at: setup_net+0x602/0x9e0 net/core/net_namespace.c:378
1 lock held by kworker/R-wg-cr/10830:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
1 lock held by kworker/R-wg-cr/10832:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
1 lock held by kworker/R-wg-cr/10836:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
1 lock held by kworker/R-wg-cr/10840:
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2fe8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
2 locks held by syz-executor/10845:
 #0: ffffffff8fcbcd90 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcc9888 (rtnl_mutex){+.+.}-{3:3}, at: wg_netns_pre_exit+0x1f/0x1e0 drivers/net/wireguard/device.c:414
2 locks held by syz-executor/10852:
 #0: ffffffff8fcbcd90 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcc9888 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_init_net+0x20e/0x720 net/ipv4/ip_tunnel.c:1159
2 locks held by syz-executor/10854:
 #0: ffffffff8fcbcd90 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcc9888 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_init_net+0x20e/0x720 net/ipv4/ip_tunnel.c:1159
2 locks held by syz-executor/10862:
 #0: ffffffff8fcbcd90 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcc9888 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_init_net+0x20e/0x720 net/ipv4/ip_tunnel.c:1159
2 locks held by syz-executor/10865:
 #0: ffffffff8fcbcd90 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcc9888 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_init_net+0x20e/0x720 net/ipv4/ip_tunnel.c:1159
2 locks held by syz-executor/10867:
 #0: ffffffff8fcbcd90 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcc9888 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_init_net+0x20e/0x720 net/ipv4/ip_tunnel.c:1159
2 locks held by syz-executor/10869:
 #0: ffffffff8fcbcd90 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcc9888 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_init_net+0x20e/0x720 net/ipv4/ip_tunnel.c:1159
2 locks held by syz-executor/10871:
 #0: ffffffff8fcbcd90 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcc9888 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_init_net+0x20e/0x720 net/ipv4/ip_tunnel.c:1159

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 30 Comm: khungtaskd Not tainted 6.11.0-syzkaller-07462-g1868f9d0260e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/06/2024
Call Trace:
 __dump_stack lib/dump_stack.c:93 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:119
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:223 [inline]
 watchdog+0xff4/0x1040 kernel/hung_task.c:379
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 25 Comm: kworker/1:0 Not tainted 6.11.0-syzkaller-07462-g1868f9d0260e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/06/2024
Workqueue: wg-crypt-wg0 wg_packet_tx_worker
RIP: 0010:skb_end_pointer include/linux/skbuff.h:1669 [inline]
RIP: 0010:__dev_queue_xmit+0x24b/0x3e80 net/core/dev.c:4343
Code: 7a 06 01 0f 85 71 1f 00 00 e8 41 99 08 f8 80 3d 44 05 7a 06 01 0f 85 91 1f 00 00 e8 2f 99 08 f8 48 b8 00 00 00 00 00 fc ff df <48> 8b 4c 24 58 80 3c 01 00 74 0a 48 8b 7c 24 20 e8 d0 28 70 f8 48
RSP: 0018:ffffc90000a17b00 EFLAGS: 00000246
RAX: dffffc0000000000 RBX: 000000000000004a RCX: ffff88801d681e00
RDX: 0000000000000100 RSI: 000000000000004a RDI: 0000000000000000
RBP: ffffc90000a17df0 R08: ffffffff898c25c2 R09: dd860aaaaaaaaaaa
R10: 0aaaaaaaaaaa0000 R11: dd860aaaaaaaaaaa R12: 1ffff11007a2eb42
R13: 0000000000000132 R14: dffffc0000000000 R15: ffff88803d175a10
FS:  0000000000000000(0000) GS:ffff8880b8900000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020b63fe4 CR3: 000000000e734000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 dev_queue_xmit include/linux/netdevice.h:3094 [inline]
 neigh_hh_output include/net/neighbour.h:526 [inline]
 neigh_output include/net/neighbour.h:540 [inline]
 ip6_finish_output2+0xfc9/0x1730 net/ipv6/ip6_output.c:141
 ip6_finish_output+0x41e/0x810 net/ipv6/ip6_output.c:226
 synproxy_send_tcp_ipv6+0x568/0x7c0 net/netfilter/nf_synproxy_core.c:851
 synproxy_send_client_synack_ipv6+0x7d0/0xc30 net/netfilter/nf_synproxy_core.c:897
 nft_synproxy_eval_v6 net/netfilter/nft_synproxy.c:90 [inline]
 nft_synproxy_do_eval+0x739/0xa60 net/netfilter/nft_synproxy.c:145
 expr_call_ops_eval net/netfilter/nf_tables_core.c:240 [inline]
 nft_do_chain+0x4ad/0x1da0 net/netfilter/nf_tables_core.c:288
 nft_do_chain_inet+0x418/0x6b0 net/netfilter/nft_chain_filter.c:161
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_slow+0xc3/0x220 net/netfilter/core.c:626
 nf_hook include/linux/netfilter.h:269 [inline]
 NF_HOOK+0x29e/0x450 include/linux/netfilter.h:312
 NF_HOOK+0x3a4/0x450 include/linux/netfilter.h:314
 __netif_receive_skb_one_core net/core/dev.c:5662 [inline]
 __netif_receive_skb+0x1ea/0x650 net/core/dev.c:5775
 process_backlog+0x662/0x15b0 net/core/dev.c:6107
 __napi_poll+0xcb/0x490 net/core/dev.c:6771
 napi_poll net/core/dev.c:6840 [inline]
 net_rx_action+0x89b/0x1240 net/core/dev.c:6962
 handle_softirqs+0x2c5/0x980 kernel/softirq.c:554
 do_softirq+0x11b/0x1e0 kernel/softirq.c:455
 __local_bh_enable_ip+0x1bb/0x200 kernel/softirq.c:382
 wg_socket_send_skb_to_peer+0x176/0x1d0 drivers/net/wireguard/socket.c:184
 wg_packet_create_data_done drivers/net/wireguard/send.c:251 [inline]
 wg_packet_tx_worker+0x1bf/0x810 drivers/net/wireguard/send.c:276
 process_one_work kernel/workqueue.c:3229 [inline]
 process_scheduled_works+0xa63/0x1850 kernel/workqueue.c:3310
 worker_thread+0x870/0xd30 kernel/workqueue.c:3391
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244