syzbot


INFO: task hung in con_flush_chars (4)

Status: auto-obsoleted due to no activity on 2025/06/20 07:50
Subsystems: serial
First crash: 321d, last: 150d
Similar bugs (4)
Kernel     | Title                                          | Rank | Repro | Cause bisect | Fix bisect | Count | Last  | Reported | Patched | Status
upstream   | INFO: task hung in con_flush_chars (2) [serial] | 1    | -     | -            | -          | 2     | 1757d | 1830d    | 0/29    | auto-closed as invalid on 2021/01/25 20:43
upstream   | INFO: task hung in con_flush_chars (3) [serial] | 1    | -     | -            | -          | 1     | 1430d | 1430d    | 0/29    | auto-closed as invalid on 2021/12/18 10:46
upstream   | INFO: task hung in con_flush_chars [serial]     | 1    | -     | -            | -          | 1     | 1953d | 1953d    | 0/29    | auto-closed as invalid on 2020/07/13 22:15
linux-4.14 | INFO: task hung in con_flush_chars              | 1    | -     | -            | -          | 2     | 1988d | 2005d    | 0/1     | auto-closed as invalid on 2020/07/08 07:44

Sample crash report:
INFO: task kworker/u8:9:12254 blocked for more than 143 seconds.
      Not tainted 6.14.0-rc7-syzkaller-00196-g88d324e69ea9 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:9    state:D stack:20624 pid:12254 tgid:12254 ppid:2      task_flags:0x4208060 flags:0x00004000
Workqueue: events_unbound flush_to_ldisc
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5378 [inline]
 __schedule+0x190e/0x4c90 kernel/sched/core.c:6765
 __schedule_loop kernel/sched/core.c:6842 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6857
 schedule_timeout+0xb0/0x290 kernel/time/sleep_timeout.c:75
 ___down_common kernel/locking/semaphore.c:229 [inline]
 __down_common+0x375/0x820 kernel/locking/semaphore.c:250
 down+0x84/0xc0 kernel/locking/semaphore.c:64
 console_lock+0x145/0x1b0 kernel/printk/printk.c:2833
 con_flush_chars+0x6f/0x270 drivers/tty/vt/vt.c:3501
 __receive_buf drivers/tty/n_tty.c:1644 [inline]
 n_tty_receive_buf_common+0xc64/0x12d0 drivers/tty/n_tty.c:1739
 tty_port_default_receive_buf+0x6d/0xa0 drivers/tty/tty_port.c:37
 receive_buf drivers/tty/tty_buffer.c:445 [inline]
 flush_to_ldisc+0x328/0x860 drivers/tty/tty_buffer.c:495
 process_one_work kernel/workqueue.c:3238 [inline]
 process_scheduled_works+0xabe/0x18e0 kernel/workqueue.c:3319
 worker_thread+0x870/0xd30 kernel/workqueue.c:3400
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>

Showing all locks held in the system:
1 lock held by kthreadd/2:
4 locks held by kworker/R-netns/8:
 #0: ffff88801bef6148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801bef6148 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x98b/0x18e0 kernel/workqueue.c:3319
 #1: ffffc900000d7be0 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc900000d7be0 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x9c6/0x18e0 kernel/workqueue.c:3319
 #2: ffffffff8fec9710 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0x17a/0xd60 net/core/net_namespace.c:606
 #3: ffffffff8fed5f48 (rtnl_mutex){+.+.}-{4:4}, at: wg_netns_pre_exit+0x1f/0x1e0 drivers/net/wireguard/device.c:415
3 locks held by kworker/u8:0/12:
1 lock held by kworker/R-mm_pe/14:
 #0: ffffffff8e9e46e8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2678
1 lock held by khungtaskd/31:
 #0: ffffffff8eb393e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8eb393e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8eb393e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6746
3 locks held by kworker/u8:2/36:
3 locks held by kworker/u8:4/63:
1 lock held by kswapd0/90:
3 locks held by kworker/1:2/975:
 #0: ffff88807ca79d48 ((wq_completion)wg-kex-wg2#18){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88807ca79d48 ((wq_completion)wg-kex-wg2#18){+.+.}-{0:0}, at: process_scheduled_works+0x98b/0x18e0 kernel/workqueue.c:3319
 #1: ffffc90003a17c60 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __ptr = (unsigned long) ((typeof(*((worker))) *)(( unsigned long)((worker)))); (typeof((typeof(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90003a17c60 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __ptr = (unsigned long) ((typeof(*((worker))) *)(( unsigned long)((worker)))); (typeof((typeof(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9c6/0x18e0 kernel/workqueue.c:3319
 #2: ffff88805accf538 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_begin_session+0x38/0xc00 drivers/net/wireguard/noise.c:822
3 locks held by kworker/0:2/1220:
3 locks held by kworker/u8:7/3020:
3 locks held by kworker/R-ipv6_/3171:
 #0: ffff8880307f1948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff8880307f1948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x98b/0x18e0 kernel/workqueue.c:3319
 #1: ffffc9000bfc7be0 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000bfc7be0 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9c6/0x18e0 kernel/workqueue.c:3319
 #2: ffffffff8fed5f48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:129 [inline]
 #2: ffffffff8fed5f48 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_verify_work+0x19/0x30 net/ipv6/addrconf.c:4730
1 lock held by kworker/R-bat_e/3401:
 #0: ffffffff8e9e46e8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8e9e46e8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xa21/0xf90 kernel/workqueue.c:3529
2 locks held by syslogd/5192:
2 locks held by klogd/5199:
2 locks held by udevd/5210:
2 locks held by getty/5589:
 #0: ffff88803100c0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002fde2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x616/0x1770 drivers/tty/n_tty.c:2211
4 locks held by kworker/1:1/5841:
 #0: ffff88801b081d48 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801b081d48 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_scheduled_works+0x98b/0x18e0 kernel/workqueue.c:3319
 #1: ffffc900040dfc60 ((reg_check_chans).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc900040dfc60 ((reg_check_chans).work){+.+.}-{0:0}, at: process_scheduled_works+0x9c6/0x18e0 kernel/workqueue.c:3319
 #2: ffffffff8fed5f48 (rtnl_mutex){+.+.}-{4:4}, at: reg_check_chans_work+0x99/0xfb0 net/wireless/reg.c:2481
 #3: ffff88805c650768 (&rdev->wiphy.mtx){+.+.}-{4:4}, at: class_wiphy_constructor include/net/cfg80211.h:6061 [inline]
 #3: ffff88805c650768 (&rdev->wiphy.mtx){+.+.}-{4:4}, at: reg_leave_invalid_chans net/wireless/reg.c:2469 [inline]
 #3: ffff88805c650768 (&rdev->wiphy.mtx){+.+.}-{4:4}, at: reg_check_chans_work+0x164/0xfb0 net/wireless/reg.c:2484
2 locks held by syz-executor/5853:
2 locks held by kworker/1:3/5869:
2 locks held by kworker/u8:1/5902:
2 locks held by kworker/0:1/5913:
3 locks held by kworker/u8:3/5919:
3 locks held by kworker/0:3/5920:
4 locks held by kworker/0:4/5921:
5 locks held by kworker/0:5/5922:
4 locks held by kworker/1:4/5923:
 #0: ffff88805e7b4148 ((wq_completion)wg-kex-wg2#36){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88805e7b4148 ((wq_completion)wg-kex-wg2#36){+.+.}-{0:0}, at: process_scheduled_works+0x98b/0x18e0 kernel/workqueue.c:3319
 #1: ffffc90004a0fc60 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __ptr = (unsigned long) ((typeof(*((worker))) *)(( unsigned long)((worker)))); (typeof((typeof(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90004a0fc60 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __ptr = (unsigned long) ((typeof(*((worker))) *)(( unsigned long)((worker)))); (typeof((typeof(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9c6/0x18e0 kernel/workqueue.c:3319
 #2: ffff88805fbe5308 (&wg->static_identity.lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x158/0xd40 drivers/net/wireguard/noise.c:598
 #3: ffff8880328058b8 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x6fd/0xd40 drivers/net/wireguard/noise.c:632
3 locks held by kworker/u8:5/5924:
2 locks held by kworker/0:6/5925:
2 locks held by kworker/1:5/5927:
5 locks held by kworker/1:6/5928:
3 locks held by kworker/u8:6/5941:
2 locks held by udevd/6032:
3 locks held by kworker/u8:8/6396:
1 lock held by kworker/R-wg-cr/7595:
 #0: ffffffff8e9e46e8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2678
3 locks held by kworker/0:7/7975:
 #0: ffff88801b080d48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801b080d48 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x98b/0x18e0 kernel/workqueue.c:3319
 #1: ffffc90003effc60 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90003effc60 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0x9c6/0x18e0 kernel/workqueue.c:3319
 #2: ffffffff8fed5f48 (rtnl_mutex){+.+.}-{4:4}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
4 locks held by syz-executor/8001:
1 lock held by kworker/R-wg-cr/8052:
 #0: ffffffff8e9e46e8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2678
2 locks held by kworker/1:8/8496:
3 locks held by kworker/0:8/8881:
1 lock held by kworker/R-wg-cr/11486:
 #0: ffffffff8e9e46e8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8e9e46e8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xa21/0xf90 kernel/workqueue.c:3529
1 lock held by kworker/R-wg-cr/11487:
 #0: ffffffff8e9e46e8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/11492:
 #0: ffffffff8e9e46e8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/11548:
1 lock held by kworker/R-wg-cr/11549:
 #0: ffffffff8e9e46e8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/11553:
 #0: ffffffff8e9e46e8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2678
2 locks held by syz-executor/11863:
1 lock held by kworker/R-wg-cr/11927:
 #0: ffffffff8e9e46e8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/11930:
 #0: ffffffff8e9e46e8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/11945:
 #0: ffffffff8e9e46e8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/11946:
 #0: ffffffff8e9e46e8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2678
5 locks held by kworker/u8:9/12254:
 #0: ffff88801b089148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801b089148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x98b/0x18e0 kernel/workqueue.c:3319
 #1: ffffc90003f2fc60 ((work_completion)(&buf->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90003f2fc60 ((work_completion)(&buf->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9c6/0x18e0 kernel/workqueue.c:3319
 #2: ffff88801b0a10b8 (&buf->lock){+.+.}-{4:4}, at: flush_to_ldisc+0x38/0x860 drivers/tty/tty_buffer.c:467
 #3: ffff88802e3ee0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref+0x1c/0x80 drivers/tty/tty_ldisc.c:263
 #4: ffff88802e3ee2e8 (&tty->termios_rwsem){++++}-{4:4}, at: n_tty_receive_buf_common+0x87/0x12d0 drivers/tty/n_tty.c:1702
6 locks held by syz.7.1225/12783:
2 locks held by dhcpcd-run-hook/12809:
2 locks held by syz.4.1231/12810:

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.14.0-rc7-syzkaller-00196-g88d324e69ea9 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:236 [inline]
 watchdog+0x1058/0x10a0 kernel/hung_task.c:399
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 3020 Comm: kworker/u8:7 Not tainted 6.14.0-rc7-syzkaller-00196-g88d324e69ea9 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Workqueue: wg-kex-wg0 wg_packet_handshake_send_worker
RIP: 0010:on_stack arch/x86/include/asm/stacktrace.h:58 [inline]
RIP: 0010:stack_access_ok arch/x86/kernel/unwind_orc.c:393 [inline]
RIP: 0010:deref_stack_reg arch/x86/kernel/unwind_orc.c:403 [inline]
RIP: 0010:unwind_next_frame+0xb98/0x22d0 arch/x86/kernel/unwind_orc.c:585
Code: c1 ef 03 43 80 3c 27 00 74 08 48 89 df e8 40 df bb 00 4c 8b 74 24 08 4d 8b 66 10 48 b8 00 00 00 00 00 fc ff df 48 8b 4c 24 20 <0f> b6 04 01 84 c0 0f 85 67 12 00 00 48 8b 04 24 4c 8d 68 f8 41 83
RSP: 0018:ffffc90000a275f0 EFLAGS: 00000246
RAX: dffffc0000000000 RBX: ffffc90000a276d0 RCX: 1ffff92000144ed8
RDX: ffffffff90d3a71c RSI: 0000000000000002 RDI: 0000000000000001
RBP: ffffc90000a21000 R08: 000000000000000f R09: ffffc90000a277b0
R10: ffffc90000a27710 R11: ffffffff81ad6c00 R12: ffffc90000a29000
R13: ffffc90000a276c0 R14: ffffc90000a276c0 R15: 1ffff92000144eda
FS:  0000000000000000(0000) GS:ffff8880b8700000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000268030 CR3: 000000004a228000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <IRQ>
 arch_stack_walk+0x11c/0x150 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x118/0x1d0 kernel/stacktrace.c:122
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 kasan_save_free_info+0x40/0x50 mm/kasan/generic.c:576
 poison_slab_object mm/kasan/common.c:247 [inline]
 __kasan_slab_free+0x59/0x70 mm/kasan/common.c:264
 kasan_slab_free include/linux/kasan.h:233 [inline]
 slab_free_hook mm/slub.c:2353 [inline]
 slab_free mm/slub.c:4609 [inline]
 kmem_cache_free+0x195/0x410 mm/slub.c:4711
 kfree_skb_reason include/linux/skbuff.h:1271 [inline]
 kfree_skb include/linux/skbuff.h:1280 [inline]
 ip6_mc_input+0x974/0xb70 net/ipv6/ip6_input.c:591
 ip_sabotage_in+0x203/0x290 net/bridge/br_netfilter_hooks.c:993
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_slow+0xc3/0x220 net/netfilter/core.c:626
 nf_hook include/linux/netfilter.h:269 [inline]
 NF_HOOK+0x29e/0x450 include/linux/netfilter.h:312
 __netif_receive_skb_one_core net/core/dev.c:5896 [inline]
 __netif_receive_skb+0x1ea/0x650 net/core/dev.c:6009
 netif_receive_skb_internal net/core/dev.c:6095 [inline]
 netif_receive_skb+0x1e8/0x890 net/core/dev.c:6154
 NF_HOOK+0x9e/0x400 include/linux/netfilter.h:314
 br_handle_frame_finish+0x1905/0x2000
 br_nf_hook_thresh+0x472/0x590
 br_nf_pre_routing_finish_ipv6+0xaa0/0xdd0
 NF_HOOK include/linux/netfilter.h:314 [inline]
 br_nf_pre_routing_ipv6+0x379/0x770 net/bridge/br_netfilter_ipv6.c:184
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_bridge_pre net/bridge/br_input.c:282 [inline]
 br_handle_frame+0x9f3/0x1530 net/bridge/br_input.c:433
 __netif_receive_skb_core+0x13e7/0x4540 net/core/dev.c:5790
 __netif_receive_skb_one_core net/core/dev.c:5894 [inline]
 __netif_receive_skb+0x12f/0x650 net/core/dev.c:6009
 process_backlog+0x662/0x15b0 net/core/dev.c:6357
 __napi_poll+0xcb/0x490 net/core/dev.c:7191
 napi_poll net/core/dev.c:7260 [inline]
 net_rx_action+0x89b/0x1240 net/core/dev.c:7382
 handle_softirqs+0x2d4/0x9b0 kernel/softirq.c:561
 do_softirq+0x11b/0x1e0 kernel/softirq.c:462
 </IRQ>
 <TASK>
 __local_bh_enable_ip+0x1bb/0x200 kernel/softirq.c:389
 wg_packet_send_handshake_initiation drivers/net/wireguard/send.c:42 [inline]
 wg_packet_handshake_send_worker+0x1e5/0x330 drivers/net/wireguard/send.c:51
 process_one_work kernel/workqueue.c:3238 [inline]
 process_scheduled_works+0xabe/0x18e0 kernel/workqueue.c:3319
 worker_thread+0x870/0xd30 kernel/workqueue.c:3400
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>

Crashes (4):
Time             | Kernel   | Commit       | Syzkaller | Config  | Log         | Report | Syz repro | C repro | VM info | Assets                                   | Manager                              | Title
2025/03/22 07:45 | upstream | 88d324e69ea9 | c6512ef7  | .config | console log | report | -         | -       | info    | [disk image] [vmlinux] [kernel image]    | ci-upstream-kasan-gce                | INFO: task hung in con_flush_chars
2025/01/11 17:49 | upstream | e0daef7de1ac | 6dbc6a9b  | .config | console log | report | -         | -       | info    | [disk image] [vmlinux] [kernel image]    | ci-upstream-kasan-gce-selinux-root   | INFO: task hung in con_flush_chars
2024/12/17 05:15 | upstream | f44d154d6e3d | f93b2b55  | .config | console log | report | -         | -       | info    | [disk image] [vmlinux] [kernel image]    | ci-upstream-kasan-gce-selinux-root   | INFO: task hung in con_flush_chars
2024/10/02 13:17 | upstream | e32cde8d2bd7 | ea2b66a6  | .config | console log | report | -         | -       | info    | [disk image] [vmlinux] [kernel image]    | ci-upstream-kasan-gce-smack-root     | INFO: task hung in con_flush_chars