INFO: task kworker/u8:2:36 blocked for more than 143 seconds.
      Not tainted 6.16.0-rc3-syzkaller-00057-g92ca6c498a5e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:2 state:D stack:23240 pid:36 tgid:36 ppid:2 task_flags:0x4208160 flags:0x00004000
Workqueue: netns cleanup_net
Call Trace:
 context_switch kernel/sched/core.c:5396 [inline]
 __schedule+0x116a/0x5de0 kernel/sched/core.c:6785
 __schedule_loop kernel/sched/core.c:6863 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6878
 schedule_timeout+0x257/0x290 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:95 [inline]
 __wait_for_common+0x2ff/0x4e0 kernel/sched/completion.c:116
 __flush_workqueue+0x3e2/0x1230 kernel/workqueue.c:4002
 rds_tcp_listen_stop+0x104/0x150 net/rds/tcp_listen.c:351
 rds_tcp_kill_sock net/rds/tcp.c:611 [inline]
 rds_tcp_exit_net+0xcb/0x810 net/rds/tcp.c:634
 ops_exit_list net/core/net_namespace.c:200 [inline]
 ops_undo_list+0x2eb/0xab0 net/core/net_namespace.c:253
 cleanup_net+0x408/0x890 net/core/net_namespace.c:686
 process_one_work+0x9cc/0x1b70 kernel/workqueue.c:3238
 process_scheduled_works kernel/workqueue.c:3321 [inline]
 worker_thread+0x6c8/0xf10 kernel/workqueue.c:3402
 kthread+0x3c5/0x780 kernel/kthread.c:464
 ret_from_fork+0x5d4/0x6f0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

Showing all locks held in the system:
1 lock held by khungtaskd/31:
 #0: ffffffff8e5c47c0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e5c47c0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8e5c47c0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x36/0x1c0 kernel/locking/lockdep.c:6770
3 locks held by kworker/u8:2/36:
 #0: ffff88801c6fe148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
 #1: ffffc90000ac7d10 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
 #2: ffffffff90338250 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xad/0x890 net/core/net_namespace.c:662
3 locks held by kworker/u11:1/11043:
 #0: ffff888029d4b948 ((wq_completion)hci0){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
 #1: ffffc900040ffd10 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
 #2: ffff8880620d8d80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_cmd_sync_work+0x175/0x430 net/bluetooth/hci_sync.c:331
1 lock held by syz.1.1031/11456:
 #0: ffffffff90338250 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x286/0x5f0 net/core/net_namespace.c:570
5 locks held by kworker/u10:5/11553:
 #0: ffff8880b843a418 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested kernel/sched/core.c:614 [inline]
 #0: ffff8880b843a418 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x7e/0x130 kernel/sched/core.c:599
 #1: ffff88805809e018 (&pid_list->lock){-.-.}-{2:2}, at: trace_pid_list_is_set+0x4c/0x150 kernel/trace/pid_list.c:141
 #2: ffff8880b8425b18 (&base->lock){-.-.}-{2:2}, at: lock_timer_base+0x127/0x1d0 kernel/time/timer.c:1004
 #3: ffffffff9afe5a68 (&obj_hash[i].lock){-.-.}-{2:2}, at: debug_object_activate+0x14c/0x4c0 lib/debugobjects.c:818
 #4: ffffffff8e482d88 (text_mutex){+.+.}-{4:4}, at: arch_jump_label_transform_apply+0x17/0x30 arch/x86/kernel/jump_label.c:145
1 lock held by syz.4.1214/12522:
 #0: ffffffff90338250 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x286/0x5f0 net/core/net_namespace.c:570
2 locks held by getty/12679:
 #0: ffff8880360020a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc900035bb2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x41b/0x14f0 drivers/tty/n_tty.c:2222
1 lock held by syz.2.1300/13022:
 #0: ffffffff90338250 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x286/0x5f0 net/core/net_namespace.c:570
1 lock held by syz.3.1313/13040:
 #0: ffffffff8e5cfdb8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x1a3/0x3c0 kernel/rcu/tree_exp.h:336

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.16.0-rc3-syzkaller-00057-g92ca6c498a5e #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x27b/0x390 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x29c/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:307 [inline]
 watchdog+0xf70/0x12c0 kernel/hung_task.c:470
 kthread+0x3c5/0x780 kernel/kthread.c:464
 ret_from_fork+0x5d4/0x6f0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 10 Comm: kworker/0:1 Not tainted 6.16.0-rc3-syzkaller-00057-g92ca6c498a5e #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Workqueue: events drain_vmap_area_work
RIP: 0010:check_wait_context kernel/locking/lockdep.c:4873 [inline]
RIP: 0010:__lock_acquire+0x2a4/0x1c90 kernel/locking/lockdep.c:5190
Code: 01 0f 88 db 0d 00 00 49 63 c6 48 8d 04 80 49 8d 04 c4 eb 12 41 83 ee 01 48 83 e8 28 41 83 fe ff 0f 84 8f 04 00 00 0f b6 50 21 <31> ca 83 e2 60 74 e3 41 83 c6 01 65 8b 05 42 5b 34 12 85 c0 0f 84
RSP: 0018:ffffc900000f74f0 EFLAGS: 00000013
RAX: ffff88801e6a2918 RBX: 0000000000000004 RCX: 0000000000000000
RDX: 000000000000000a RSI: 0000000000000004 RDI: ffff88801e6a2990
RBP: ffff88801e6a1e00 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000000000a0 R11: 0000000000000001 R12: ffff88801e6a28f0
R13: ffff88801e6a2990 R14: 0000000000000001 R15: 0000000000000001
FS:  0000000000000000(0000) GS:ffff888124760000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000005c3000 CR3: 000000000e382000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 lock_acquire kernel/locking/lockdep.c:5871 [inline]
 lock_acquire+0x179/0x350 kernel/locking/lockdep.c:5828
 rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 rcu_read_lock include/linux/rcupdate.h:841 [inline]
 class_rcu_constructor include/linux/rcupdate.h:1155 [inline]
 unwind_next_frame+0xd1/0x20a0 arch/x86/kernel/unwind_orc.c:479
 arch_stack_walk+0x94/0x100 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x8e/0xc0 kernel/stacktrace.c:122
 save_stack+0x160/0x1f0 mm/page_owner.c:156
 __reset_page_owner+0x84/0x1a0 mm/page_owner.c:308
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1248 [inline]
 __free_frozen_pages+0x7fe/0x1180 mm/page_alloc.c:2706
 kasan_depopulate_vmalloc_pte+0x5f/0x80 mm/kasan/shadow.c:472
 apply_to_pte_range mm/memory.c:3032 [inline]
 apply_to_pmd_range mm/memory.c:3076 [inline]
 apply_to_pud_range mm/memory.c:3112 [inline]
 apply_to_p4d_range mm/memory.c:3148 [inline]
 __apply_to_page_range+0xa8f/0x1350 mm/memory.c:3184
 kasan_release_vmalloc+0xd1/0xe0 mm/kasan/shadow.c:593
 kasan_release_vmalloc_node mm/vmalloc.c:2241 [inline]
 purge_vmap_node+0x1c4/0xa30 mm/vmalloc.c:2258
 __purge_vmap_area_lazy+0xa06/0xc60 mm/vmalloc.c:2348
 drain_vmap_area_work+0x27/0x40 mm/vmalloc.c:2382
 process_one_work+0x9cc/0x1b70 kernel/workqueue.c:3238
 process_scheduled_works kernel/workqueue.c:3321 [inline]
 worker_thread+0x6c8/0xf10 kernel/workqueue.c:3402
 kthread+0x3c5/0x780 kernel/kthread.c:464
 ret_from_fork+0x5d4/0x6f0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
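For readers unfamiliar with the shape of this hang: the blocked worker is cleanup_net(), which runs pernet ->exit() handlers while holding pernet_ops_rwsem; the RDS-TCP exit path (rds_tcp_exit_net -> rds_tcp_listen_stop) then waits in __flush_workqueue(), and every other task queued on pernet_ops_rwsem (the copy_net_ns() callers in the lock dump) stalls behind it. The sketch below is a minimal, hypothetical module showing that same structure, not the actual net/rds code; demo_wq, demo_net_init/demo_net_exit, and demo_pernet_ops are invented names used only for illustration.

/*
 * Minimal sketch: a pernet subsystem whose ->exit() flushes a workqueue.
 * cleanup_net() calls this ->exit() with pernet_ops_rwsem held, so if any
 * work item on the queue never completes, the netns cleanup worker blocks
 * the way kworker/u8:2 does in the report above. (Illustrative only.)
 */
#include <linux/module.h>
#include <linux/workqueue.h>
#include <net/net_namespace.h>

static struct workqueue_struct *demo_wq;	/* hypothetical workqueue */

static __net_init int demo_net_init(struct net *net)
{
	return 0;
}

static __net_exit void demo_net_exit(struct net *net)
{
	/*
	 * Runs from cleanup_net() under pernet_ops_rwsem; this is the call
	 * that would show up as __flush_workqueue() in a hung-task trace.
	 */
	flush_workqueue(demo_wq);
}

static struct pernet_operations demo_pernet_ops = {
	.init = demo_net_init,
	.exit = demo_net_exit,
};

static int __init demo_init(void)
{
	demo_wq = alloc_workqueue("demo_wq", 0, 0);
	if (!demo_wq)
		return -ENOMEM;
	return register_pernet_subsys(&demo_pernet_ops);
}

static void __exit demo_exit(void)
{
	unregister_pernet_subsys(&demo_pernet_ops);
	destroy_workqueue(demo_wq);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

If a work item queued on such a workqueue never finishes (for example, because it is itself waiting on state that only the blocked netns cleanup can tear down), the flush never returns and khungtaskd eventually emits a report like the one above.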