syzbot


INFO: task hung in ppp_exit_net (4)

Status: upstream: reported on 2024/06/13 09:12
Subsystems: ppp
Reported-by: syzbot+32bd764abd98eb40dea8@syzkaller.appspotmail.com
First crash: 404d, last: 38d
Discussions (3)
Title Replies (including bot) Last reply
[syzbot] Monthly ppp report (Dec 2024) 0 (1) 2024/12/27 23:24
[syzbot] Monthly ppp report (Sep 2024) 0 (1) 2024/09/23 09:02
[syzbot] [ppp?] INFO: task hung in ppp_exit_net (4) 0 (1) 2024/06/13 09:12
Similar bugs (5)
Kernel Title Repro Cause bisect Fix bisect Count Last Reported Patched Status
linux-5.15 INFO: task hung in ppp_exit_net 1 397d 397d 0/3 auto-obsoleted due to no activity on 2024/09/04 17:46
upstream INFO: task hung in ppp_exit_net (2) ppp 1 845d 845d 0/29 auto-obsoleted due to no activity on 2023/06/04 05:29
upstream INFO: task hung in ppp_exit_net (3) ppp 1 683d 683d 0/29 auto-obsoleted due to no activity on 2023/11/13 21:08
upstream INFO: task hung in ppp_exit_net ppp 1 1095d 1095d 0/29 auto-closed as invalid on 2022/09/27 15:01
linux-6.1 INFO: task hung in ppp_exit_net 1 402d 402d 0/3 auto-obsoleted due to no activity on 2024/08/30 06:26

Sample crash report:
INFO: task kworker/u8:1:13 blocked for more than 143 seconds.
      Not tainted 6.15.0-rc7-syzkaller-00002-gb36ddb9210e6 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:1    state:D stack:25424 pid:13    tgid:13    ppid:2      task_flags:0x4208060 flags:0x00004000
Workqueue: netns cleanup_net
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x116f/0x5de0 kernel/sched/core.c:6767
 __schedule_loop kernel/sched/core.c:6845 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6860
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6917
 __mutex_lock_common kernel/locking/mutex.c:678 [inline]
 __mutex_lock+0x6c7/0xb90 kernel/locking/mutex.c:746
 ppp_exit_net+0xad/0x3b0 drivers/net/ppp/ppp_generic.c:1158
 ops_exit_list+0xb0/0x180 net/core/net_namespace.c:172
 cleanup_net+0x5c1/0xb30 net/core/net_namespace.c:654
 process_one_work+0x9cf/0x1b70 kernel/workqueue.c:3238
 process_scheduled_works kernel/workqueue.c:3319 [inline]
 worker_thread+0x6c8/0xf10 kernel/workqueue.c:3400
 kthread+0x3c2/0x780 kernel/kthread.c:464
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
INFO: task kworker/1:2:1208 blocked for more than 143 seconds.
      Not tainted 6.15.0-rc7-syzkaller-00002-gb36ddb9210e6 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:2     state:D stack:26264 pid:1208  tgid:1208  ppid:2      task_flags:0x4208060 flags:0x00004000
Workqueue: events_power_efficient crda_timeout_work
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x116f/0x5de0 kernel/sched/core.c:6767
 __schedule_loop kernel/sched/core.c:6845 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6860
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6917
 __mutex_lock_common kernel/locking/mutex.c:678 [inline]
 __mutex_lock+0x6c7/0xb90 kernel/locking/mutex.c:746
 crda_timeout_work+0x15/0x50 net/wireless/reg.c:541
 process_one_work+0x9cf/0x1b70 kernel/workqueue.c:3238
 process_scheduled_works kernel/workqueue.c:3319 [inline]
 worker_thread+0x6c8/0xf10 kernel/workqueue.c:3400
 kthread+0x3c2/0x780 kernel/kthread.c:464
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
INFO: task syz.1.28:6046 blocked for more than 143 seconds.
      Not tainted 6.15.0-rc7-syzkaller-00002-gb36ddb9210e6 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.28        state:D stack:27432 pid:6046  tgid:6045  ppid:5828   task_flags:0x400140 flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x116f/0x5de0 kernel/sched/core.c:6767
 __schedule_loop kernel/sched/core.c:6845 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6860
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6917
 __mutex_lock_common kernel/locking/mutex.c:678 [inline]
 __mutex_lock+0x6c7/0xb90 kernel/locking/mutex.c:746
 register_nexthop_notifier+0x1b/0x70 net/ipv4/nexthop.c:3918
 ops_init+0x1e2/0x5f0 net/core/net_namespace.c:138
 setup_net+0x21e/0x850 net/core/net_namespace.c:364
 copy_net_ns+0x2a6/0x5f0 net/core/net_namespace.c:518
 create_new_namespaces+0x3ea/0xad0 kernel/nsproxy.c:110
 unshare_nsproxy_namespaces+0xc0/0x1f0 kernel/nsproxy.c:228
 ksys_unshare+0x45b/0xa40 kernel/fork.c:3376
 __do_sys_unshare kernel/fork.c:3447 [inline]
 __se_sys_unshare kernel/fork.c:3445 [inline]
 __x64_sys_unshare+0x31/0x40 kernel/fork.c:3445
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0x230 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f813078e969
RSP: 002b:00007f813166f038 EFLAGS: 00000246 ORIG_RAX: 0000000000000110
RAX: ffffffffffffffda RBX: 00007f81309b5fa0 RCX: 00007f813078e969
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000040000080
RBP: 00007f8130810ab1 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f81309b5fa0 R15: 00007ffd0ce59ab8
 </TASK>
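Taken together, the three traces above are all queued on the same lock: the netns cleanup worker (in ppp_exit_net), the crda_timeout work, and the unshare() caller (in register_nexthop_notifier) each sit in __mutex_lock on rtnl_mutex. A minimal sketch of the first path, abbreviated and not the verbatim kernel source, shows why the PPP pernet exit op participates in this contention:

```c
/* Abbreviated sketch of the contention pattern in the traces above.
 * Function bodies are illustrative, not the actual kernel code. */

/* cleanup_net() (kworker/u8:1) walks the pernet exit ops while holding
 * pernet_ops_rwsem; the PPP exit op then needs rtnl_mutex. */
static void __net_exit ppp_exit_net(struct net *net)
{
	rtnl_lock();		/* <-- the worker is blocked here per the trace */
	/* tear down PPP devices left in this namespace */
	rtnl_unlock();
}
```

Because every party in the report contends on rtnl_mutex, a single long-lived holder (or a holder starved by the softirq storm visible in the CPU 0 NMI backtrace below) is enough to back up namespace teardown, regulatory work, and new-namespace creation simultaneously, which is what the hung-task watchdog then reports.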

Showing all locks held in the system:
1 lock held by kthreadd/2:
1 lock held by kworker/R-kvfre/6:
 #0: ffffffff8e277588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
3 locks held by kworker/0:0/9:
3 locks held by kworker/0:1/10:
7 locks held by kworker/u8:0/12:
4 locks held by kworker/u8:1/13:
 #0: ffff88801c2f6148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
 #1: ffffc90000127d18 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
 #2: ffffffff90114550 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xc9/0xb30 net/core/net_namespace.c:608
 #3: ffffffff9012a3e8 (rtnl_mutex){+.+.}-{4:4}, at: ppp_exit_net+0xad/0x3b0 drivers/net/ppp/ppp_generic.c:1158
1 lock held by kworker/R-mm_pe/14:
4 locks held by kworker/1:0/24:
 #0: ffff88814c0b6948 ((wq_completion)wg-kex-wg2#4){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
 #1: ffffc900001e7d18 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __asm__ ("" : "=r"(__ptr) : "0"((__typeof__(*((worker))) *)(( unsigned long)((worker))))); (typeof((__typeof__(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
 #2: ffff88807bf75308 (&wg->static_identity.lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x1c2/0x880 drivers/net/wireguard/noise.c:598
 #3: ffff88805dc42ad8 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x5ac/0x880 drivers/net/wireguard/noise.c:632
1 lock held by khungtaskd/31:
 #0: ffffffff8e3bfa80 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e3bfa80 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8e3bfa80 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x36/0x1c0 kernel/locking/lockdep.c:6764
5 locks held by kworker/u8:2/36:
3 locks held by kworker/u8:3/53:
7 locks held by kworker/u9:0/55:
 #0: ffff8880353cc948 ((wq_completion)hci3){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
 #1: ffffc9000100fd18 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
 #2: ffff888078960d80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_cmd_sync_work+0x175/0x430 net/bluetooth/hci_sync.c:331
 #3: ffff888078960078 (&hdev->lock){+.+.}-{4:4}, at: hci_abort_conn_sync+0x146/0xb40 net/bluetooth/hci_sync.c:5597
 #4: ffffffff90397ec8 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_connect_cfm include/net/bluetooth/hci_core.h:2050 [inline]
 #4: ffffffff90397ec8 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_failed+0x14f/0x330 net/bluetooth/hci_conn.c:1269
 #5: ffff888027c3a338 (&conn->lock#2){+.+.}-{4:4}, at: l2cap_conn_del+0x80/0x730 net/bluetooth/l2cap_core.c:1761
 #6: ffffffff8e3cafb8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x1a3/0x3c0 kernel/rcu/tree_exp.h:336
4 locks held by kworker/u8:4/63:
3 locks held by kworker/u8:5/1089:
3 locks held by kworker/u8:6/1148:
3 locks held by kworker/u8:7/1157:
3 locks held by kworker/1:2/1208:
 #0: ffff88801b481d48 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
 #1: ffffc90003fbfd18 ((crda_timeout).work){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
 #2: ffffffff9012a3e8 (rtnl_mutex){+.+.}-{4:4}, at: crda_timeout_work+0x15/0x50 net/wireless/reg.c:541
4 locks held by kworker/0:2/1215:
1 lock held by kworker/R-dm_bu/2812:
 #0: ffffffff8e277588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
3 locks held by kworker/R-ipv6_/3168:
 #0: ffff888030f7c948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
 #1: ffffc9000b507cb0 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
 #2: ffffffff9012a3e8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #2: ffffffff9012a3e8 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_verify_work+0x12/0x30 net/ipv6/addrconf.c:4738
2 locks held by kworker/R-bat_e/3398:
3 locks held by kworker/u8:8/3497:
3 locks held by kworker/u8:9/4813:
3 locks held by kworker/u8:10/4841:
2 locks held by klogd/5191:
1 lock held by udevd/5202:
2 locks held by dhcpcd/5496:
5 locks held by dhcpcd/5497:
2 locks held by crond/5574:
2 locks held by getty/5597:
 #0: ffff88814d5be0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002ffe2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x41b/0x14f0 drivers/tty/n_tty.c:2222
1 lock held by syz-executor/5818:
2 locks held by syz-executor/5828:
4 locks held by kworker/0:3/5834:
 #0: ffff88814371f548 ((wq_completion)wg-kex-wg2#2){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
 #1: ffffc900043ffd18 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __asm__ ("" : "=r"(__ptr) : "0"((__typeof__(*((worker))) *)(( unsigned long)((worker))))); (typeof((__typeof__(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
 #2: ffff888034d1d308 (&wg->static_identity.lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x1c2/0x880 drivers/net/wireguard/noise.c:598
 #3: ffff88807e4520f0 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x666/0x880 drivers/net/wireguard/noise.c:643
5 locks held by kworker/u9:6/5843:
 #0: ffff888028f70148 ((wq_completion)hci0){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
 #1: ffffc9000448fd18 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
 #2: ffff888079a10d80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_cmd_sync_work+0x175/0x430 net/bluetooth/hci_sync.c:331
 #3: ffff888079a10078 (&hdev->lock){+.+.}-{4:4}, at: hci_abort_conn_sync+0x146/0xb40 net/bluetooth/hci_sync.c:5597
 #4: ffffffff90397ec8 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_connect_cfm include/net/bluetooth/hci_core.h:2050 [inline]
 #4: ffffffff90397ec8 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_failed+0x14f/0x330 net/bluetooth/hci_conn.c:1269
5 locks held by kworker/u9:8/5845:
 #0: ffff88814d410148 ((wq_completion)hci1){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
 #1: ffffc900044afd18 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
 #2: ffff888078964d80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_cmd_sync_work+0x175/0x430 net/bluetooth/hci_sync.c:331
 #3: ffff888078964078 (&hdev->lock){+.+.}-{4:4}, at: hci_abort_conn_sync+0x146/0xb40 net/bluetooth/hci_sync.c:5597
 #4: ffffffff90397ec8 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_connect_cfm include/net/bluetooth/hci_core.h:2050 [inline]
 #4: ffffffff90397ec8 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_failed+0x14f/0x330 net/bluetooth/hci_conn.c:1269
1 lock held by kworker/R-wg-cr/5858:
1 lock held by kworker/R-wg-cr/5859:
 #0: ffffffff8e277588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/5860:
1 lock held by kworker/R-wg-cr/5861:
 #0: ffffffff8e277588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/5862:
 #0: ffffffff8e277588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/5863:
 #0: ffffffff8e277588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/5864:
 #0: ffffffff8e277588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/5865:
 #0: ffffffff8e277588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/5866:
 #0: ffffffff8e277588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/5867:
 #0: ffffffff8e277588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/5868:
1 lock held by kworker/R-wg-cr/5869:
 #0: ffffffff8e277588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8e277588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x839/0xea0 kernel/workqueue.c:3529
4 locks held by kworker/1:3/5870:
3 locks held by kworker/1:4/5872:
 #0: ffff88801b481d48 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
 #1: ffffc90004a17d18 ((reg_check_chans).work){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
 #2: ffffffff9012a3e8 (rtnl_mutex){+.+.}-{4:4}, at: reg_check_chans_work+0x83/0x1170 net/wireless/reg.c:2483
2 locks held by kworker/1:5/5874:
3 locks held by kworker/1:6/5875:
2 locks held by kworker/0:5/5907:
2 locks held by syz.0.9/5942:
6 locks held by syz.2.26/6035:
3 locks held by syz.2.26/6036:
2 locks held by syz.1.28/6046:
 #0: ffffffff90114550 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x286/0x5f0 net/core/net_namespace.c:514
 #1: ffffffff9012a3e8 (rtnl_mutex){+.+.}-{4:4}, at: register_nexthop_notifier+0x1b/0x70 net/ipv4/nexthop.c:3918
4 locks held by kworker/u8:11/6053:
1 lock held by dhcpcd/6054:
 #0: ffff88805d811a08 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:867 [inline]
 #0: ffff88805d811a08 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: __sock_release+0x86/0x270 net/socket.c:646
4 locks held by kworker/u8:12/6055:

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.15.0-rc7-syzkaller-00002-gb36ddb9210e6 #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x27b/0x390 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x29c/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:274 [inline]
 watchdog+0xf70/0x12c0 kernel/hung_task.c:437
 kthread+0x3c2/0x780 kernel/kthread.c:464
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 10 Comm: kworker/0:1 Not tainted 6.15.0-rc7-syzkaller-00002-gb36ddb9210e6 #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Workqueue: wg-kex-wg2 wg_packet_handshake_receive_worker
RIP: 0010:check_wait_context kernel/locking/lockdep.c:4869 [inline]
RIP: 0010:__lock_acquire+0x23e/0x1ba0 kernel/locking/lockdep.c:5185
Code: 00 41 8b be e8 0a 00 00 40 84 f6 41 0f 44 f5 41 89 fd 41 83 ed 01 0f 88 58 0d 00 00 49 63 c5 48 8d 04 80 48 8d 44 c5 00 eb 12 <41> 83 ed 01 48 83 e8 28 41 83 fd ff 0f 84 64 05 00 00 0f b6 50 21
RSP: 0018:ffffc90000006fc8 EFLAGS: 00000046
RAX: ffff88801dada968 RBX: ffff88801dada9b8 RCX: 0000000000000020
RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000005
RBP: ffff88801dada8f0 R08: 0000000000000000 R09: 0000000000000001
R10: 0000000000000000 R11: ffffffff8e3bfa80 R12: 0000000000000000
R13: 0000000000000003 R14: ffff88801dad9e00 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff8881249e7000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00005572127704b0 CR3: 0000000076c7a000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <IRQ>
 lock_acquire kernel/locking/lockdep.c:5866 [inline]
 lock_acquire+0x179/0x350 kernel/locking/lockdep.c:5823
 rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 rcu_read_lock include/linux/rcupdate.h:841 [inline]
 net_generic+0x36/0x2a0 include/net/netns/generic.h:45
 is_vlan_arp net/bridge/br_netfilter_hooks.c:109 [inline]
 br_nf_forward_finish+0x18f/0xba0 net/bridge/br_netfilter_hooks.c:642
 NF_HOOK include/linux/netfilter.h:314 [inline]
 NF_HOOK include/linux/netfilter.h:308 [inline]
 br_nf_forward_ip.part.0+0x609/0x810 net/bridge/br_netfilter_hooks.c:719
 br_nf_forward_ip net/bridge/br_netfilter_hooks.c:679 [inline]
 br_nf_forward+0xf0f/0x1be0 net/bridge/br_netfilter_hooks.c:776
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_slow+0xbe/0x200 net/netfilter/core.c:626
 nf_hook+0x45e/0x780 include/linux/netfilter.h:269
 NF_HOOK include/linux/netfilter.h:312 [inline]
 __br_forward+0x1be/0x5b0 net/bridge/br_forward.c:115
 deliver_clone net/bridge/br_forward.c:131 [inline]
 br_flood+0x39c/0x650 net/bridge/br_forward.c:249
 br_handle_frame_finish+0xe8e/0x1c20 net/bridge/br_input.c:220
 br_nf_hook_thresh+0x304/0x410 net/bridge/br_netfilter_hooks.c:1170
 br_nf_pre_routing_finish_ipv6+0x76a/0xfb0 net/bridge/br_netfilter_ipv6.c:154
 NF_HOOK include/linux/netfilter.h:314 [inline]
 br_nf_pre_routing_ipv6+0x3cd/0x8c0 net/bridge/br_netfilter_ipv6.c:184
 br_nf_pre_routing+0x860/0x15b0 net/bridge/br_netfilter_hooks.c:508
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_bridge_pre net/bridge/br_input.c:282 [inline]
 br_handle_frame+0xad8/0x14a0 net/bridge/br_input.c:433
 __netif_receive_skb_core.constprop.0+0xa23/0x4a00 net/core/dev.c:5773
 __netif_receive_skb_one_core+0xb0/0x1e0 net/core/dev.c:5885
 __netif_receive_skb+0x1d/0x160 net/core/dev.c:6000
 process_backlog+0x442/0x15e0 net/core/dev.c:6352
 __napi_poll.constprop.0+0xba/0x550 net/core/dev.c:7324
 napi_poll net/core/dev.c:7388 [inline]
 net_rx_action+0xa97/0x1010 net/core/dev.c:7510
 handle_softirqs+0x216/0x8e0 kernel/softirq.c:579
 do_softirq kernel/softirq.c:480 [inline]
 do_softirq+0xb2/0xf0 kernel/softirq.c:467
 </IRQ>
 <TASK>
 __local_bh_enable_ip+0x100/0x120 kernel/softirq.c:407
 local_bh_enable include/linux/bottom_half.h:33 [inline]
 fpregs_unlock arch/x86/include/asm/fpu/api.h:77 [inline]
 kernel_fpu_end+0x5e/0x70 arch/x86/kernel/fpu/core.c:460
 blake2s_compress+0x7f/0xe0 arch/x86/crypto/blake2s-glue.c:49
 blake2s_final+0xc9/0x150 lib/crypto/blake2s.c:54
 hmac.constprop.0+0x252/0x420 drivers/net/wireguard/noise.c:325
 kdf.constprop.0+0x122/0x280 drivers/net/wireguard/noise.c:360
 mix_precomputed_dh drivers/net/wireguard/noise.c:426 [inline]
 wg_noise_handshake_consume_initiation+0x4b9/0x880 drivers/net/wireguard/noise.c:623
 wg_receive_handshake_packet+0x219/0xbf0 drivers/net/wireguard/receive.c:144
 wg_packet_handshake_receive_worker+0x17f/0x3a0 drivers/net/wireguard/receive.c:213
 process_one_work+0x9cf/0x1b70 kernel/workqueue.c:3238
 process_scheduled_works kernel/workqueue.c:3319 [inline]
 worker_thread+0x6c8/0xf10 kernel/workqueue.c:3400
 kthread+0x3c2/0x780 kernel/kthread.c:464
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
net_ratelimit: 13840 callbacks suppressed
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:62:02:f5:81:c2:4f, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:62:02:f5:81:c2:4f, vlan:0)
net_ratelimit: 13961 callbacks suppressed
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:62:02:f5:81:c2:4f, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:62:02:f5:81:c2:4f, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
net_ratelimit: 14146 callbacks suppressed
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:62:02:f5:81:c2:4f, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:62:02:f5:81:c2:4f, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:62:02:f5:81:c2:4f, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)

Crashes (131):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/05/21 09:57 upstream b36ddb9210e6 b47f9e02 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in ppp_exit_net
2025/04/02 11:42 upstream 91e5bfe317d8 c799dfdd .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in ppp_exit_net
2025/03/29 08:11 upstream eff5f16bfd87 cf25e2c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in ppp_exit_net
2025/03/17 15:02 upstream 4701f33a1070 948c34e4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in ppp_exit_net
2024/12/17 12:08 upstream f44d154d6e3d f93b2b55 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in ppp_exit_net
2024/12/15 05:53 upstream a0e3919a2df2 7cbfbb3a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in ppp_exit_net
2024/11/17 21:36 upstream f66d6acccbc0 cfe3a04a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in ppp_exit_net
2024/11/15 20:57 upstream f868cd251776 f6ede3a3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in ppp_exit_net
2024/10/26 10:50 upstream 850925a8133c 65e8686b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in ppp_exit_net
2024/10/12 07:52 upstream 9e4c6c1ad9a1 084d8178 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in ppp_exit_net
2024/10/10 22:41 upstream d3d1556696c1 8fbfc0c8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in ppp_exit_net
2024/10/10 04:23 upstream b983b271662b 0278d004 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in ppp_exit_net
2024/10/06 22:13 upstream 8f602276d390 d7906eff .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in ppp_exit_net
2024/10/06 11:58 upstream 8f602276d390 d7906eff .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in ppp_exit_net
2024/10/01 11:43 upstream e32cde8d2bd7 bbd4e0a4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in ppp_exit_net
2024/09/21 23:23 upstream 88264981f208 6f888b75 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in ppp_exit_net
2024/06/12 03:59 upstream 2ef5971ff345 4d75f4f7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in ppp_exit_net
2024/06/09 08:59 upstream 771ed66105de 82c05ab8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in ppp_exit_net
2024/10/15 17:39 upstream eca631b8fe80 14943bb8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in ppp_exit_net
2024/10/14 04:59 upstream ba01565ced22 084d8178 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in ppp_exit_net
2024/10/10 17:04 upstream d3d1556696c1 8fbfc0c8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in ppp_exit_net
2024/10/06 14:28 upstream 8f602276d390 d7906eff .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in ppp_exit_net
2024/10/05 15:58 upstream 27cc6fdf7201 d7906eff .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in ppp_exit_net
2024/09/30 15:06 upstream 9852d85ec9d4 bbd4e0a4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in ppp_exit_net
2024/10/14 17:14 net 0b84db5d8f25 084d8178 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in ppp_exit_net
2024/10/11 18:59 net 1d227fcc7222 cd942402 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in ppp_exit_net
2024/10/08 16:42 net f15b8d6eb638 402f1df0 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in ppp_exit_net
2024/10/07 10:31 net 9234a2549cb6 d7906eff .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in ppp_exit_net
2024/06/02 17:45 net 33700a0c9b56 3113787f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in ppp_exit_net
2025/02/15 04:49 net-next 7a7e0197133d 40a34ec9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/11/26 18:47 net-next fcc79e1714e8 11dbc254 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/11/07 19:43 net-next 2a6f99ee1a80 c069283c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/10/27 08:40 net-next 6d858708d465 65e8686b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/10/26 03:58 net-next 6d858708d465 65e8686b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/10/25 20:43 net-next 6d858708d465 65e8686b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/10/25 11:29 net-next 6d858708d465 c79b8ca5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/10/23 21:50 net-next 6d858708d465 15fa2979 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/10/23 16:04 net-next 6d858708d465 15fa2979 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/10/22 23:59 net-next 6d858708d465 9d74f456 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/10/21 19:51 net-next 6d858708d465 a93682b3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/10/21 00:25 net-next 6d858708d465 cd6fc0a3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/10/18 21:48 net-next 6d858708d465 cd6fc0a3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/10/15 15:35 net-next 60b4d49b9621 14943bb8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/10/13 00:07 net-next c531f2269a53 084d8178 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/10/12 22:39 net-next c531f2269a53 084d8178 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/10/12 04:30 net-next d677aebd663d 084d8178 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/10/06 10:31 net-next d521db38f339 d7906eff .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/10/03 19:27 net-next 7c2f1c2690a5 d7906eff .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/10/02 20:47 net-next 44badc908f2c a4c7fd36 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/10/02 16:03 net-next 44badc908f2c a4c7fd36 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/09/30 11:44 net-next c824deb1a897 ba29ff75 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/09/29 18:11 net-next c824deb1a897 ba29ff75 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in ppp_exit_net
2024/10/15 09:41 linux-next b852e1e7a038 14943bb8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ppp_exit_net
* Struck through repros no longer work on HEAD.