INFO: task kworker/u10:5:158 blocked for more than 430 seconds.
      Not tainted 6.14.0-rc1-syzkaller-g245aece3750d #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u10:5 state:D stack:0 pid:158 tgid:158 ppid:2 task_flags:0x4208060 flags:0x00000000
Workqueue: events_unbound bpf_map_free_deferred
Call Trace:
[] context_switch kernel/sched/core.c:5377 [inline]
[] __schedule+0xe4c/0x3d70 kernel/sched/core.c:6764
[] __schedule_loop kernel/sched/core.c:6841 [inline]
[] schedule+0xc4/0x324 kernel/sched/core.c:6856
[] schedule_timeout+0x1c6/0x28a kernel/time/sleep_timeout.c:75
[] do_wait_for_common kernel/sched/completion.c:95 [inline]
[] __wait_for_common+0x1ca/0x4b6 kernel/sched/completion.c:116
[] wait_for_common kernel/sched/completion.c:127 [inline]
[] wait_for_completion+0x1a/0x22 kernel/sched/completion.c:148
[] rcu_barrier+0x2dc/0x6cc kernel/rcu/tree.c:3809
[] dev_map_free+0x11c/0x6bc kernel/bpf/devmap.c:214
[] bpf_map_free kernel/bpf/syscall.c:841 [inline]
[] bpf_map_free_deferred+0x226/0x47a kernel/bpf/syscall.c:867
[] process_one_work+0x96a/0x1f3a kernel/workqueue.c:3236
[] process_scheduled_works kernel/workqueue.c:3317 [inline]
[] worker_thread+0x5be/0xdc6 kernel/workqueue.c:3398
[] kthread+0x37e/0x7b6 kernel/kthread.c:464
[] ret_from_fork+0xe/0x18 arch/riscv/kernel/entry.S:327

Showing all locks held in the system:
1 lock held by khungtaskd/39:
 #0: ffffffff883d8200 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x68/0x288 kernel/locking/lockdep.c:6742
3 locks held by kworker/u10:5/158:
 #0: ffffaf8011a89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x848/0x1f3a kernel/workqueue.c:3211
 #1: ffff8f8000567bd0 ((work_completion)(&map->work)){+.+.}-{0:0}, at: process_one_work+0x870/0x1f3a kernel/workqueue.c:3211
 #2: ffffffff883e8380 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x56/0x6cc kernel/rcu/tree.c:3741
2 locks held by getty/3139:
 #0: ffffaf8018d2e0a0 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x3a/0x46 drivers/tty/tty_ldsem.c:340
 #1: ffff8f800008b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0xd7c/0x129a drivers/tty/n_tty.c:2211
1 lock held by syz-executor/3168:
3 locks held by kworker/0:3/3814:
3 locks held by kworker/1:4/3860:
2 locks held by kworker/0:4/3905:
2 locks held by kworker/0:0/5075:
2 locks held by kworker/u10:1/5914:
2 locks held by kworker/0:7/6586:
2 locks held by kworker/0:9/6617:
1 lock held by syz.0.974/6718:
 #0: ffffffff883e8380 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x56/0x6cc kernel/rcu/tree.c:3741
3 locks held by kworker/1:5/6726:
2 locks held by kworker/0:14/6739:
2 locks held by kworker/u10:8/6803:
 #0: ffffaf8011a89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x848/0x1f3a kernel/workqueue.c:3211
 #1: ffff8f8002bc7bd0 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work+0x870/0x1f3a kernel/workqueue.c:3211

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 39 Comm: khungtaskd Not tainted 6.14.0-rc1-syzkaller-g245aece3750d #0
Hardware name: riscv-virtio,qemu (DT)
Call Trace:
[] dump_backtrace+0x2e/0x3c arch/riscv/kernel/stacktrace.c:132
[] show_stack+0x30/0x3c arch/riscv/kernel/stacktrace.c:138
[] __dump_stack lib/dump_stack.c:94 [inline]
[] dump_stack_lvl+0x12e/0x1a6 lib/dump_stack.c:120
[] dump_stack+0x1c/0x24 lib/dump_stack.c:129
[] nmi_cpu_backtrace+0x3b0/0x3b2 lib/nmi_backtrace.c:113
[] nmi_trigger_cpumask_backtrace+0x2b6/0x458 lib/nmi_backtrace.c:62
[] arch_trigger_cpumask_backtrace+0x2c/0x3e arch/riscv/kernel/smp.c:348
[] trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
[] check_hung_uninterruptible_tasks kernel/hung_task.c:236 [inline]
[] watchdog+0xcf2/0x11de kernel/hung_task.c:399
[] kthread+0x37e/0x7b6 kernel/kthread.c:464
[] ret_from_fork+0xe/0x18 arch/riscv/kernel/entry.S:327
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 4559 Comm: kworker/0:2 Not tainted 6.14.0-rc1-syzkaller-g245aece3750d #0
Hardware name: riscv-virtio,qemu (DT)
Workqueue: wg-crypt-wg1 wg_packet_decrypt_worker
epc : stack_depot_save_flags+0x32/0x956 lib/stackdepot.c:609
 ra : stack_depot_save_flags+0x32/0x956 lib/stackdepot.c:609
epc : ffffffff81936206 ra : ffffffff81936206 sp : ffff8f8000005f90
 gp : ffffffff89c1d3c0 tp : ffffaf8012b13480 t0 : ffff8f80000060e0
 t1 : ffffaf8012b14078 t2 : 0000000000000016 s0 : ffff8f8000006020
 s1 : 0000000000000000 a0 : 0000000000000010 a1 : 0000000000000010
 a2 : 0000000000052820 a3 : 0000000000000001 a4 : ffffffff86268970
 a5 : ffffffff8585177c a6 : 0000000000f00000 a7 : 1ffff5f0025627ff
 s2 : ffff8f8000006060 s3 : 0000000000052820 s4 : ffffaf8012b13480
 s5 : 0000000000052820 s6 : 0000000000000003 s7 : 0000025b9fa3f3c0
 s8 : ffffaf8072fee640 s9 : ffffffff90fcb3a0 s10: 000000000001fc2d
 s11: 0000000000000001 t3 : ffffaf8012b13fb0 t4 : 1ffff5f0025627f5
 t5 : 1ffff5f002562813 t6 : 0000000000000004
status: 0000000200000120 badaddr: 0000000000000000 cause: 8000000000000001
[] stack_depot_save_flags+0x32/0x956 lib/stackdepot.c:609
[] stack_depot_save+0xe/0x18 lib/stackdepot.c:686
[] save_stack+0x138/0x1a4 mm/page_owner.c:157
[] __set_page_owner+0xa2/0x710 mm/page_owner.c:320
[] set_page_owner include/linux/page_owner.h:32 [inline]
[] post_alloc_hook+0xea/0x1e2 mm/page_alloc.c:1551
[] prep_new_page mm/page_alloc.c:1559 [inline]
[] get_page_from_freelist+0xf78/0x2bd6 mm/page_alloc.c:3477
[] __alloc_frozen_pages_noprof+0x1e8/0x20fc mm/page_alloc.c:4739
[] alloc_pages_mpol+0x1fa/0x5b8 mm/mempolicy.c:2270
[] alloc_frozen_pages_noprof+0x174/0x2f0 mm/mempolicy.c:2341
[] alloc_slab_page mm/slub.c:2423 [inline]
[] allocate_slab mm/slub.c:2587 [inline]
[] new_slab+0x26a/0x340 mm/slub.c:2640
[] ___slab_alloc+0xaf4/0x1290 mm/slub.c:3826
[] __slab_alloc.constprop.0+0x60/0xb0 mm/slub.c:3916
[] __slab_alloc_node mm/slub.c:3991 [inline]
[] slab_alloc_node mm/slub.c:4152 [inline]
[] kmem_cache_alloc_noprof+0xd2/0x3e2 mm/slub.c:4171
[] dst_alloc+0x94/0x174 net/core/dst.c:89
[] rt_dst_alloc+0x3a/0x340 net/ipv4/route.c:1626
[] __mkroute_output net/ipv4/route.c:2620 [inline]
[] ip_route_output_key_hash_rcu+0x822/0x2748 net/ipv4/route.c:2842
[] ip_route_output_key_hash+0x158/0x31c net/ipv4/route.c:2671
[] __ip_route_output_key include/net/route.h:169 [inline]
[] ip_route_output_flow+0x2a/0x142 net/ipv4/route.c:2899
[] ip_route_output_key include/net/route.h:179 [inline]
[] ip_route_me_harder+0x52e/0x11a6 net/ipv4/netfilter.c:53
[] synproxy_send_tcp.isra.0+0x2be/0x5d2 net/netfilter/nf_synproxy_core.c:431
[] synproxy_send_client_synack+0x6dc/0x8be net/netfilter/nf_synproxy_core.c:484
[] nft_synproxy_eval_v4 net/netfilter/nft_synproxy.c:59 [inline]
[] nft_synproxy_do_eval+0x8ac/0xa52 net/netfilter/nft_synproxy.c:141
[] nft_synproxy_eval+0x28/0x36 net/netfilter/nft_synproxy.c:247
[] expr_call_ops_eval net/netfilter/nf_tables_core.c:240 [inline]
[] nft_do_chain+0x328/0x1598 net/netfilter/nf_tables_core.c:288
[] nft_do_chain_inet+0x180/0x316 net/netfilter/nft_chain_filter.c:161
[] nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
[] nf_hook_slow+0xb8/0x1ec net/netfilter/core.c:626
[] nf_hook include/linux/netfilter.h:269 [inline]
[] NF_HOOK include/linux/netfilter.h:312 [inline]
[] ip_local_deliver+0x2ea/0x568 net/ipv4/ip_input.c:254
[] dst_input include/net/dst.h:469 [inline]
[] ip_rcv_finish+0x1b0/0x2d2 net/ipv4/ip_input.c:447
[] NF_HOOK include/linux/netfilter.h:314 [inline]
[] NF_HOOK include/linux/netfilter.h:308 [inline]
[] ip_rcv+0xd6/0x44e net/ipv4/ip_input.c:567
[] __netif_receive_skb_one_core+0x106/0x16e net/core/dev.c:5828
[] __netif_receive_skb+0x2c/0x144 net/core/dev.c:5941
[] process_backlog+0x4f6/0x1cb0 net/core/dev.c:6289
[] __napi_poll.constprop.0+0xaa/0x4b8 net/core/dev.c:7106
[] napi_poll net/core/dev.c:7175 [inline]
[] net_rx_action+0xa12/0xf10 net/core/dev.c:7297
[] handle_softirqs+0x4b2/0x132e kernel/softirq.c:561
[] __do_softirq+0x12/0x1a kernel/softirq.c:595
[] ___do_softirq+0x18/0x20 arch/riscv/kernel/irq.c:85
[] call_on_irq_stack+0x32/0x40 arch/riscv/kernel/entry.S:356