syzbot


INFO: task hung in con_get_trans_old

Status: auto-obsoleted due to no activity on 2025/09/19 16:20
Subsystems: kernel
First crash: 219d, last: 210d

Sample crash report:
INFO: task syz.1.6411:20597 blocked for more than 145 seconds.
      Not tainted 6.16.0-rc2-syzkaller-00278-g3f75bfff44be #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.6411      state:D stack:29208 pid:20597 tgid:20587 ppid:5820   task_flags:0x400140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5396 [inline]
 __schedule+0x116a/0x5de0 kernel/sched/core.c:6785
 __schedule_loop kernel/sched/core.c:6863 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6878
 schedule_timeout+0x257/0x290 kernel/time/sleep_timeout.c:75
 ___down_common+0x2d8/0x460 kernel/locking/semaphore.c:268
 __down_common kernel/locking/semaphore.c:293 [inline]
 __down+0x20/0x30 kernel/locking/semaphore.c:303
 down+0x74/0xa0 kernel/locking/semaphore.c:100
 console_lock+0x5b/0xa0 kernel/printk/printk.c:2849
 con_get_trans_old+0x9b/0x2b0 drivers/tty/vt/consolemap.c:377
 vt_io_ioctl drivers/tty/vt/vt_ioctl.c:528 [inline]
 vt_ioctl+0x585/0x30a0 drivers/tty/vt/vt_ioctl.c:755
 tty_ioctl+0x661/0x1640 drivers/tty/tty_io.c:2792
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:907 [inline]
 __se_sys_ioctl fs/ioctl.c:893 [inline]
 __x64_sys_ioctl+0x18b/0x210 fs/ioctl.c:893
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0x4c0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ff33518e929
RSP: 002b:00007ff33601b038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007ff3353b6080 RCX: 00007ff33518e929
RDX: 0000200000000240 RSI: 0000000000004b40 RDI: 0000000000000003
RBP: 00007ff335210b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007ff3353b6080 R15: 00007fff11945c78
 </TASK>

Showing all locks held in the system:
4 locks held by kworker/0:0/9:
4 locks held by kworker/u8:0/12:
4 locks held by kworker/u8:1/13:
 #0: ffff8880249c3948 ((wq_completion)wg-kex-wg1){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
 #1: ffffc90000127d10 ((work_completion)(&peer->transmit_handshake_work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
 #2: ffff888036455308 (&wg->static_identity.lock){++++}-{4:4}, at: wg_noise_handshake_create_initiation+0xec/0x650 drivers/net/wireguard/noise.c:529
 #3: ffff88802a2c9708 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_create_initiation+0x100/0x650 drivers/net/wireguard/noise.c:530
1 lock held by kworker/R-mm_pe/14:
3 locks held by kworker/1:0/24:
1 lock held by khungtaskd/31:
 #0: ffffffff8e5c4880 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e5c4880 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8e5c4880 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x36/0x1c0 kernel/locking/lockdep.c:6770
4 locks held by kworker/u8:2/36:
 #0: ffff8880273ca948 ((wq_completion)wg-kex-wg2#5){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
 #1: ffffc90000ac7d10 ((work_completion)(&peer->transmit_handshake_work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
 #2: ffff888077d2d308 (&wg->static_identity.lock){++++}-{4:4}, at: wg_noise_handshake_create_initiation+0xec/0x650 drivers/net/wireguard/noise.c:529
 #3: ffff88802a8a0338 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_create_initiation+0x100/0x650 drivers/net/wireguard/noise.c:530
4 locks held by kworker/u8:3/49:
2 locks held by kworker/u8:4/72:
4 locks held by kworker/u8:5/144:
3 locks held by kworker/0:2/974:
6 locks held by kworker/u8:7/2997:
1 lock held by kworker/R-ipv6_/3166:
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
1 lock held by kworker/R-bat_e/3396:
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
3 locks held by kworker/u8:8/3535:
1 lock held by klogd/5174:
1 lock held by udevd/5185:
2 locks held by getty/5567:
 #0: ffff8880329240a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x41b/0x14f0 drivers/tty/n_tty.c:2222
1 lock held by syz-executor/5812:
1 lock held by syz-executor/5813:
2 locks held by syz-executor/5814:
1 lock held by syz-executor/5815:
 #0: ffff88814c158308 (&xt[i].mutex){+.+.}-{4:4}, at: xt_find_table_lock+0x5e/0x520 net/netfilter/x_tables.c:1243
1 lock held by kworker/R-wg-cr/5850:
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x839/0xea0 kernel/workqueue.c:3531
1 lock held by kworker/R-wg-cr/5851:
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x839/0xea0 kernel/workqueue.c:3531
1 lock held by kworker/R-wg-cr/5852:
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x839/0xea0 kernel/workqueue.c:3531
4 locks held by kworker/0:3/5853:
1 lock held by kworker/R-wg-cr/5855:
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x839/0xea0 kernel/workqueue.c:3531
1 lock held by kworker/R-wg-cr/5856:
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x839/0xea0 kernel/workqueue.c:3531
1 lock held by kworker/R-wg-cr/5858:
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x839/0xea0 kernel/workqueue.c:3531
1 lock held by kworker/R-wg-cr/5859:
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/5861:
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/5862:
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x839/0xea0 kernel/workqueue.c:3531
3 locks held by kworker/1:3/5874:
3 locks held by kworker/1:5/5889:
3 locks held by kworker/0:6/5929:
4 locks held by kworker/1:7/5961:
 #0: ffff88805af14d48 ((wq_completion)wg-kex-wg1#2){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
 #1: ffffc9000212fd10 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __asm__ ("" : "=r"(__ptr) : "0"((__typeof__(*((worker))) *)(( unsigned long)((worker))))); (typeof((__typeof__(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
 #2: ffff888036455308 (&wg->static_identity.lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x1c2/0x880 drivers/net/wireguard/noise.c:598
 #3: ffff88802a2c9708 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x5ac/0x880 drivers/net/wireguard/noise.c:632
1 lock held by kworker/R-bond1/9688:
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
1 lock held by kworker/R-bond2/14251:
1 lock held by kworker/R-bond3/17891:
4 locks held by kworker/0:1/19973:
6 locks held by syz.1.6411/20589:
3 locks held by kworker/u8:6/20604:
4 locks held by kworker/u8:9/20609:
3 locks held by kworker/1:1/20610:
1 lock held by kworker/u8:10/20614:
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
3 locks held by kworker/1:4/20615:
4 locks held by kworker/u8:11/20616:
4 locks held by syz-executor/20617:
1 lock held by kworker/1:8/20619:
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: set_pf_worker kernel/workqueue.c:3327 [inline]
 #0: ffffffff8e47b6a8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_thread+0x6c/0xf10 kernel/workqueue.c:3353

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.16.0-rc2-syzkaller-00278-g3f75bfff44be #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x27b/0x390 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x29c/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:307 [inline]
 watchdog+0xf70/0x12c0 kernel/hung_task.c:470
 kthread+0x3c2/0x780 kernel/kthread.c:464
 ret_from_fork+0x5d4/0x6f0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 2997 Comm: kworker/u8:7 Not tainted 6.16.0-rc2-syzkaller-00278-g3f75bfff44be #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Workqueue: wg-kex-wg1 wg_packet_handshake_send_worker
RIP: 0010:__sanitizer_cov_trace_const_cmp4+0x8/0x20 kernel/kcov.c:314
Code: bf 03 00 00 00 e9 58 fe ff ff 0f 1f 84 00 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 48 8b 0c 24 <89> f2 89 fe bf 05 00 00 00 e9 2a fe ff ff 66 2e 0f 1f 84 00 00 00
RSP: 0018:ffffc900000068e0 EFLAGS: 00000246
RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffffff89580c09
RDX: ffff888031832440 RSI: 0000000000000000 RDI: 0000000000000007
RBP: 0000000000000300 R08: 0000000000000005 R09: 0000000000000000
R10: 0000000000000300 R11: 0000000000000001 R12: 0000000000000000
R13: ffff888028242000 R14: ffff88809dd7f070 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff888124753000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fad9b77fd38 CR3: 000000000e382000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <IRQ>
 cpu_max_bits_warn include/linux/cpumask.h:135 [inline]
 cpumask_check include/linux/cpumask.h:142 [inline]
 cpumask_test_cpu include/linux/cpumask.h:638 [inline]
 cpu_online include/linux/cpumask.h:1197 [inline]
 trace_netif_rx_entry+0x29/0x200 include/trace/events/net.h:257
 __netif_rx+0x80/0xb0 net/core/dev.c:5510
 veth_forward_skb drivers/net/veth.c:321 [inline]
 veth_xmit+0x8c5/0xe90 drivers/net/veth.c:375
 __netdev_start_xmit include/linux/netdevice.h:5215 [inline]
 netdev_start_xmit include/linux/netdevice.h:5224 [inline]
 xmit_one net/core/dev.c:3830 [inline]
 dev_hard_start_xmit+0x97/0x740 net/core/dev.c:3846
 __dev_queue_xmit+0x7eb/0x43e0 net/core/dev.c:4713
 dev_queue_xmit include/linux/netdevice.h:3355 [inline]
 br_dev_queue_push_xmit+0x272/0x8a0 net/bridge/br_forward.c:53
 br_nf_dev_queue_xmit+0x6f3/0x2cb0 net/bridge/br_netfilter_hooks.c:923
 NF_HOOK include/linux/netfilter.h:317 [inline]
 NF_HOOK include/linux/netfilter.h:311 [inline]
 br_nf_post_routing+0x8e7/0x1190 net/bridge/br_netfilter_hooks.c:969
 nf_hook_entry_hookfn include/linux/netfilter.h:157 [inline]
 nf_hook_slow+0xbb/0x200 net/netfilter/core.c:623
 nf_hook+0x45e/0x780 include/linux/netfilter.h:272
 NF_HOOK include/linux/netfilter.h:315 [inline]
 br_forward_finish+0xcd/0x130 net/bridge/br_forward.c:66
 br_nf_hook_thresh+0x304/0x410 net/bridge/br_netfilter_hooks.c:1170
 br_nf_forward_finish+0x66a/0xba0 net/bridge/br_netfilter_hooks.c:665
 NF_HOOK include/linux/netfilter.h:317 [inline]
 NF_HOOK include/linux/netfilter.h:311 [inline]
 br_nf_forward_ip.part.0+0x609/0x810 net/bridge/br_netfilter_hooks.c:719
 br_nf_forward_ip net/bridge/br_netfilter_hooks.c:679 [inline]
 br_nf_forward+0xf0f/0x1be0 net/bridge/br_netfilter_hooks.c:776
 nf_hook_entry_hookfn include/linux/netfilter.h:157 [inline]
 nf_hook_slow+0xbb/0x200 net/netfilter/core.c:623
 nf_hook+0x45e/0x780 include/linux/netfilter.h:272
 NF_HOOK include/linux/netfilter.h:315 [inline]
 __br_forward+0x1be/0x5b0 net/bridge/br_forward.c:115
 deliver_clone net/bridge/br_forward.c:131 [inline]
 maybe_deliver+0xf1/0x180 net/bridge/br_forward.c:190
 br_flood+0x17c/0x650 net/bridge/br_forward.c:237
 br_handle_frame_finish+0xf2d/0x1ca0 net/bridge/br_input.c:221
 br_nf_hook_thresh+0x304/0x410 net/bridge/br_netfilter_hooks.c:1170
 br_nf_pre_routing_finish_ipv6+0x76a/0xfb0 net/bridge/br_netfilter_ipv6.c:154
 NF_HOOK include/linux/netfilter.h:317 [inline]
 br_nf_pre_routing_ipv6+0x3cd/0x8c0 net/bridge/br_netfilter_ipv6.c:184
 br_nf_pre_routing+0x860/0x15b0 net/bridge/br_netfilter_hooks.c:508
 nf_hook_entry_hookfn include/linux/netfilter.h:157 [inline]
 nf_hook_bridge_pre net/bridge/br_input.c:283 [inline]
 br_handle_frame+0xad8/0x14b0 net/bridge/br_input.c:434
 __netif_receive_skb_core.constprop.0+0xa26/0x4a00 net/core/dev.c:5863
 __netif_receive_skb_one_core+0xb0/0x1e0 net/core/dev.c:5975
 __netif_receive_skb+0x1d/0x160 net/core/dev.c:6090
 process_backlog+0x442/0x15e0 net/core/dev.c:6442
 __napi_poll.constprop.0+0xba/0x550 net/core/dev.c:7414
 napi_poll net/core/dev.c:7478 [inline]
 net_rx_action+0xa9f/0xfe0 net/core/dev.c:7605
 handle_softirqs+0x216/0x8e0 kernel/softirq.c:579
 do_softirq kernel/softirq.c:480 [inline]
 do_softirq+0xb2/0xf0 kernel/softirq.c:467
 </IRQ>
 <TASK>
 __local_bh_enable_ip+0x100/0x120 kernel/softirq.c:407
 local_bh_enable include/linux/bottom_half.h:33 [inline]
 fpregs_unlock arch/x86/include/asm/fpu/api.h:77 [inline]
 kernel_fpu_end+0x5e/0x70 arch/x86/kernel/fpu/core.c:476
 blake2s_compress+0x7f/0xe0 arch/x86/lib/crypto/blake2s-glue.c:46
 blake2s_update+0xef/0x360 lib/crypto/blake2s.c:32
 hmac.constprop.0+0x32a/0x420 drivers/net/wireguard/noise.c:332
 kdf.constprop.0+0x14b/0x280 drivers/net/wireguard/noise.c:367
 message_ephemeral+0x5f/0x70 drivers/net/wireguard/noise.c:493
 wg_noise_handshake_create_initiation+0x2c6/0x650 drivers/net/wireguard/noise.c:545
 wg_packet_send_handshake_initiation+0x19a/0x360 drivers/net/wireguard/send.c:34
 wg_packet_handshake_send_worker+0x1c/0x30 drivers/net/wireguard/send.c:51
 process_one_work+0x9cc/0x1b70 kernel/workqueue.c:3238
 process_scheduled_works kernel/workqueue.c:3321 [inline]
 worker_thread+0x6c8/0xf10 kernel/workqueue.c:3402
 kthread+0x3c2/0x780 kernel/kthread.c:464
 ret_from_fork+0x5d4/0x6f0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>

Crashes (2):

Time             Kernel     Commit       Syzkaller Manager                               Title
2025/06/21 16:11 upstream   3f75bfff44be d6cdfb8a  ci-upstream-kasan-gce-selinux-root    INFO: task hung in con_get_trans_old
2025/06/13 00:19 linux-next 0bb71d301869 98683f8f  ci-upstream-linux-next-kasan-gce-root INFO: task hung in con_get_trans_old
(Each entry links .config, console log, report, info, and assets: disk image, vmlinux, kernel image.)
* Struck through repros no longer work on HEAD.