syzbot


INFO: task hung in wg_noise_handshake_consume_initiation (4)

Status: auto-obsoleted due to no activity on 2025/05/08 02:39
Subsystems: wireguard
First crash: 182d, last: 162d
Similar bugs (3)
Kernel    Title                                                        Subsystem  Rank  Count  Last   Reported  Patched  Status
upstream  INFO: task hung in wg_noise_handshake_consume_initiation     wireguard  1     2      1802d  1861d     0/29     auto-closed as invalid on 2020/11/10 10:04
upstream  INFO: task hung in wg_noise_handshake_consume_initiation (3) wireguard  1     8      282d   459d      0/29     auto-obsoleted due to no activity on 2025/01/07 21:33
upstream  INFO: task hung in wg_noise_handshake_consume_initiation (2) wireguard  1     1      767d   767d      0/29     auto-obsoleted due to no activity on 2023/08/12 04:36

Sample crash report:
INFO: task kworker/0:6:29185 blocked for more than 143 seconds.
      Not tainted 6.14.0-rc1-syzkaller-00081-gbb066fe812d6 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:6     state:D stack:21688 pid:29185 tgid:29185 ppid:2      task_flags:0x4208060 flags:0x00004000
Workqueue: wg-kex-wg2 wg_packet_handshake_receive_worker
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5377 [inline]
 __schedule+0x190e/0x4c90 kernel/sched/core.c:6764
 __schedule_loop kernel/sched/core.c:6841 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6856
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6913
 rwsem_down_read_slowpath kernel/locking/rwsem.c:1084 [inline]
 __down_read_common kernel/locking/rwsem.c:1248 [inline]
 __down_read kernel/locking/rwsem.c:1261 [inline]
 down_read+0x705/0xa40 kernel/locking/rwsem.c:1526
 wg_noise_handshake_consume_initiation+0x844/0xf70 drivers/net/wireguard/noise.c:632
 wg_receive_handshake_packet drivers/net/wireguard/receive.c:144 [inline]
 wg_packet_handshake_receive_worker+0x5bb/0xf50 drivers/net/wireguard/receive.c:213
 process_one_work kernel/workqueue.c:3236 [inline]
 process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3317
 worker_thread+0x870/0xd30 kernel/workqueue.c:3398
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>

Showing all locks held in the system:
4 locks held by kworker/u8:1/12:
 #0: ffff88801baf5948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
 #0: ffff88801baf5948 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3317
 #1: ffffc90000117c60 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
 #1: ffffc90000117c60 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317
 #2: ffffffff8fcb3c10 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0x17a/0xd60 net/core/net_namespace.c:606
 #3: ffff888068f1d4e8 (&wg->device_update_lock){+.+.}-{4:4}, at: wg_destruct+0x110/0x2e0 drivers/net/wireguard/device.c:249
1 lock held by kworker/R-mm_pe/13:
 #0: ffffffff8e7e3e48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e7e3e48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3533
2 locks held by kworker/1:0/25:
1 lock held by khungtaskd/30:
 #0: ffffffff8e9387e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8e9387e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8e9387e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6746
2 locks held by kworker/1:1/52:
2 locks held by kworker/1:2/1169:
4 locks held by kworker/u8:8/5077:
 #0: ffff888036ff0148 ((wq_completion)wg-kex-wg2#123){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
 #0: ffff888036ff0148 ((wq_completion)wg-kex-wg2#123){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3317
 #1: ffffc90010247c60 ((work_completion)(&peer->transmit_handshake_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
 #1: ffffc90010247c60 ((work_completion)(&peer->transmit_handshake_work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317
 #2: ffff888040a9d308 (&wg->static_identity.lock){++++}-{4:4}, at: wg_noise_handshake_create_initiation+0x120/0xf30 drivers/net/wireguard/noise.c:529
 #3: ffff88807ab4c5b8 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_create_initiation+0x132/0xf30 drivers/net/wireguard/noise.c:530
2 locks held by getty/5596:
 #0: ffff88814de720a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002fde2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x6a6/0x1e00 drivers/tty/n_tty.c:2211
2 locks held by kworker/1:4/5891:
1 lock held by kworker/1:7/7587:
4 locks held by kworker/1:8/7588:
1 lock held by kworker/R-wg-cr/15953:
 #0: ffffffff8e7e3e48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e7e3e48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3533
1 lock held by kworker/R-wg-cr/15955:
 #0: ffffffff8e7e3e48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e7e3e48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3533
1 lock held by kworker/R-wg-cr/15957:
 #0: ffffffff8e7e3e48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e7e3e48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3533
2 locks held by kworker/0:0/21645:
 #0: ffff88801ac80d48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
 #0: ffff88801ac80d48 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3317
 #1: ffffc9000437fc60 (free_ipc_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
 #1: ffffc9000437fc60 (free_ipc_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317
2 locks held by kworker/1:3/23595:
1 lock held by kworker/R-wg-cr/24297:
 #0: ffffffff8e7e3e48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e7e3e48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3533
1 lock held by kworker/R-wg-cr/24298:
 #0: ffffffff8e7e3e48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e7e3e48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3533
2 locks held by kworker/u8:10/24536:
1 lock held by kworker/R-wg-cr/26585:
1 lock held by kworker/R-wg-cr/26586:
 #0: ffffffff8e7e3e48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2676
1 lock held by kworker/R-wg-cr/28825:
 #0: ffffffff8e7e3e48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e7e3e48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3533
4 locks held by kworker/0:6/29185:
 #0: ffff88807ab29948 ((wq_completion)wg-kex-wg2#124){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
 #0: ffff88807ab29948 ((wq_completion)wg-kex-wg2#124){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3317
 #1: ffffc90003c97c60 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __ptr = (unsigned long) ((typeof(*((worker))) *)(( unsigned long)((worker)))); (typeof((typeof(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
 #1: ffffc90003c97c60 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __ptr = (unsigned long) ((typeof(*((worker))) *)(( unsigned long)((worker)))); (typeof((typeof(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317
 #2: ffff888040a9d308 (&wg->static_identity.lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x156/0xf70 drivers/net/wireguard/noise.c:598
 #3: ffff88807ab4c5b8 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x844/0xf70 drivers/net/wireguard/noise.c:632
1 lock held by kworker/R-wg-cr/31288:
 #0: ffffffff8e7e3e48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e7e3e48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3533
2 locks held by kworker/1:5/32717:
2 locks held by kworker/1:6/2449:
2 locks held by kworker/1:9/5990:
2 locks held by kworker/1:10/5991:
2 locks held by kworker/1:11/5992:
1 lock held by kworker/R-wg-cr/8022:
 #0: ffffffff8e7e3e48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2676
1 lock held by kworker/R-wg-cr/8046:
3 locks held by syz-executor/9668:
 #0: ffff8880960dcd80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close net/bluetooth/hci_core.c:480 [inline]
 #0: ffff8880960dcd80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_unregister_dev+0x203/0x510 net/bluetooth/hci_core.c:2677
 #1: ffff8880960dc078 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x5c8/0x11c0 net/bluetooth/hci_sync.c:5185
 #2: ffffffff8e93dcb8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:334 [inline]
 #2: ffffffff8e93dcb8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x451/0x830 kernel/rcu/tree_exp.h:996
1 lock held by syz-executor/9671:
 #0: ffffffff8e93dcb8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:334 [inline]
 #0: ffffffff8e93dcb8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x451/0x830 kernel/rcu/tree_exp.h:996
1 lock held by kworker/R-wg-cr/9743:
 #0: ffffffff8e7e3e48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e7e3e48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3533
1 lock held by rm/9960:

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 30 Comm: khungtaskd Not tainted 6.14.0-rc1-syzkaller-00081-gbb066fe812d6 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:236 [inline]
 watchdog+0x1058/0x10a0 kernel/hung_task.c:399
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 15957 Comm: kworker/R-wg-cr Not tainted 6.14.0-rc1-syzkaller-00081-gbb066fe812d6 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Workqueue:  0x0 (wg-crypt-wg2)
RIP: 0010:nft_synproxy_eval_v4+0x3dd/0x610
Code: 24 20 48 8b 5c 24 28 48 89 de 48 8b 54 24 18 4c 89 e9 e8 f6 23 ef ff 48 89 df e8 ee 64 90 ff 41 bf 02 00 00 00 48 8b 5c 24 30 <48> 89 d8 48 c1 e8 03 42 0f b6 04 30 84 c0 0f 85 d4 00 00 00 44 89
RSP: 0018:ffffc90000a180c0 EFLAGS: 00000286
RAX: 0fb28a7689e4a800 RBX: ffffc90000a183a0 RCX: ffffffff819b316a
RDX: dffffc0000000000 RSI: ffffffff8c0aa680 RDI: ffffffff8c608a00
RBP: ffffc90000a18190 R08: ffffffff942f985f R09: 1ffffffff285f30b
R10: dffffc0000000000 R11: fffffbfff285f30c R12: 1ffff92000143042
R13: ffffc90000a18210 R14: dffffc0000000000 R15: 0000000000000002
FS:  0000000000000000(0000) GS:ffff8880b8700000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000404030 CR3: 000000000e738000 CR4: 00000000003526f0
DR0: 0000000080000001 DR1: 0000000000000003 DR2: 00000000000048de
DR3: 000000000000013a DR6: 00000000ffff0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <IRQ>
 nft_synproxy_do_eval+0x362/0xa60 net/netfilter/nft_synproxy.c:141
 expr_call_ops_eval net/netfilter/nf_tables_core.c:240 [inline]
 nft_do_chain+0x4ad/0x1da0 net/netfilter/nf_tables_core.c:288
 nft_do_chain_inet+0x418/0x6b0 net/netfilter/nft_chain_filter.c:161
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_slow+0xc3/0x220 net/netfilter/core.c:626
 nf_hook include/linux/netfilter.h:269 [inline]
 NF_HOOK+0x29e/0x450 include/linux/netfilter.h:312
 NF_HOOK+0x3a4/0x450 include/linux/netfilter.h:314
 __netif_receive_skb_one_core net/core/dev.c:5828 [inline]
 __netif_receive_skb+0x2bf/0x650 net/core/dev.c:5941
 process_backlog+0x662/0x15b0 net/core/dev.c:6289
 __napi_poll+0xcb/0x490 net/core/dev.c:7106
 napi_poll net/core/dev.c:7175 [inline]
 net_rx_action+0x89b/0x1240 net/core/dev.c:7297
 handle_softirqs+0x2d4/0x9b0 kernel/softirq.c:561
 __do_softirq kernel/softirq.c:595 [inline]
 invoke_softirq kernel/softirq.c:435 [inline]
 __irq_exit_rcu+0xf7/0x220 kernel/softirq.c:662
 irq_exit_rcu+0x9/0x30 kernel/softirq.c:678
 instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1049 [inline]
 sysvec_apic_timer_interrupt+0xa6/0xc0 arch/x86/kernel/apic/apic.c:1049
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
RIP: 0010:__mutex_trylock_common+0x0/0x2e0 kernel/locking/mutex.c:79
Code: cc cc 44 89 f1 80 e1 07 80 c1 03 38 c1 7c bf 4c 89 f7 e8 43 19 8c 00 eb b5 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 <f3> 0f 1e fa 55 48 89 e5 41 57 41 56 41 55 41 54 53 48 83 e4 e0 48
RSP: 0018:ffffc90002ec7ab8 EFLAGS: 00000246
RAX: 0000000000000001 RBX: ffffc90002ec7b88 RCX: ffffffff8199e4ed
RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff8e7e3de0
RBP: ffffc90002ec7c48 R08: ffffffff901b5277 R09: 1ffffffff2036a4e
R10: dffffc0000000000 R11: fffffbfff2036a4f R12: 1ffff920005d8f6c
R13: ffffffff8e7e3e48 R14: ffffffff8e7e3de0 R15: 0000000000000000
 __mutex_trylock kernel/locking/mutex.c:127 [inline]
 __mutex_lock_common kernel/locking/mutex.c:588 [inline]
 __mutex_lock+0x1b7/0x1010 kernel/locking/mutex.c:730
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2676
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3478
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
IPVS: wlc: UDP 224.0.0.2:0 - no destination available

Crashes (2):
Time              Kernel    Commit        Syzkaller  Manager                     Title
2025/02/07 02:33  upstream  bb066fe812d6  53657d1b   ci-upstream-kasan-gce       INFO: task hung in wg_noise_handshake_consume_initiation
2025/01/18 08:01  upstream  595523945be0  f2cb035c   ci-upstream-kasan-gce-root  INFO: task hung in wg_noise_handshake_consume_initiation