INFO: task hung in peer_remove_after_dead

Status: auto-obsoleted due to no activity on 2025/04/06 11:59
Subsystems: wireguard
First crash: 479d, last: 212d

Sample crash report:
INFO: task kworker/u8:21:16217 blocked for more than 143 seconds.
      Not tainted 6.13.0-rc5-syzkaller-00163-gab75170520d4 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:21   state:D stack:22968 pid:16217 tgid:16217 ppid:2      flags:0x00004000
Workqueue: netns cleanup_net
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0x1850/0x4c30 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6848
 schedule_timeout+0xb0/0x290 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:95 [inline]
 __wait_for_common kernel/sched/completion.c:116 [inline]
 wait_for_common kernel/sched/completion.c:127 [inline]
 wait_for_completion+0x355/0x620 kernel/sched/completion.c:148
 __flush_workqueue+0x575/0x1280 kernel/workqueue.c:3991
 peer_remove_after_dead+0x9d/0x1a0 drivers/net/wireguard/peer.c:116
 wg_peer_remove_all+0x453/0x4f0 drivers/net/wireguard/peer.c:183
 wg_destruct+0x173/0x2e0 drivers/net/wireguard/device.c:254
 netdev_run_todo+0xe1a/0x1000 net/core/dev.c:10919
 default_device_exit_batch+0xa24/0xaa0 net/core/dev.c:12076
 ops_exit_list net/core/net_namespace.c:177 [inline]
 cleanup_net+0x89d/0xd50 net/core/net_namespace.c:648
 process_one_work kernel/workqueue.c:3229 [inline]
 process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3310
 worker_thread+0x870/0xd30 kernel/workqueue.c:3391
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
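
The hung worker is the netns cleanup path tearing down a WireGuard device: wg_destruct() takes wg->device_update_lock (lock #3 in the dump below), and wg_peer_remove_all() -> peer_remove_after_dead() then waits in flush_workqueue() for the device's crypt workqueue to drain. The following is a paraphrased sketch of that blocking pattern, reconstructed from the files and line numbers cited in the trace (not the literal source; unrelated steps omitted):

    /* drivers/net/wireguard/device.c (simplified sketch) */
    static void wg_destruct(struct net_device *dev)
    {
            struct wg_device *wg = netdev_priv(dev);

            mutex_lock(&wg->device_update_lock);  /* device.c:249, held for the whole teardown */
            wg_peer_remove_all(wg);               /* -> peer_remove_after_dead() for each peer */
            /* ... */
            mutex_unlock(&wg->device_update_lock);
    }

    /* drivers/net/wireguard/peer.c (simplified sketch) */
    static void peer_remove_after_dead(struct wg_peer *peer)
    {
            /* ... stop timers, drop queued packets ... */
            flush_workqueue(peer->device->packet_crypt_wq);  /* peer.c:116 -- blocked here */
            /* ... */
    }

If any work item on the wg-crypt workqueue (or its rescuer) cannot make progress, this flush blocks indefinitely while device_update_lock and pernet_ops_rwsem stay held, which turns one stuck worker into a system-wide netns cleanup hang.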

Showing all locks held in the system:
2 locks held by kworker/0:0/8:
2 locks held by kworker/0:1/9:
1 lock held by kworker/R-mm_pe/13:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by khungtaskd/30:
 #0: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6744
2 locks held by kworker/0:2/973:
3 locks held by kworker/u8:7/3571:
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3310
 #1: ffffc9000d397d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc9000d397d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3310
 #2: ffffffff8fcb2cc8 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:281
1 lock held by dhcpcd/5497:
 #0: ffffffff8fcb2cc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8fcb2cc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:326 [inline]
 #0: ffffffff8fcb2cc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0xce2/0x2210 net/core/rtnetlink.c:4011
2 locks held by getty/5588:
 #0: ffff88803130f0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x6a6/0x1e00 drivers/tty/n_tty.c:2211
2 locks held by kworker/0:3/5877:
2 locks held by kworker/0:4/5878:
2 locks held by kworker/0:6/6814:
1 lock held by kworker/R-wg-cr/7799:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/7807:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/7869:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/9154:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
3 locks held by kworker/u8:16/11557:
1 lock held by kworker/R-wg-cr/12044:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/12045:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
4 locks held by kworker/u8:21/16217:
 #0: ffff88801baed948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801baed948 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3310
 #1: ffffc9000b66fd00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc9000b66fd00 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3310
 #2: ffffffff8fca6810 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0x16a/0xd50 net/core/net_namespace.c:602
 #3: ffff88806bd554e8 (&wg->device_update_lock){+.+.}-{4:4}, at: wg_destruct+0x110/0x2e0 drivers/net/wireguard/device.c:249
1 lock held by kworker/R-wg-cr/20158:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/20160:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
3 locks held by kworker/u8:23/20727:
2 locks held by kworker/0:8/21146:
2 locks held by kworker/0:9/21185:
1 lock held by kworker/R-wg-cr/21221:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
2 locks held by kworker/R-wg-cr/21261:
1 lock held by kworker/R-wg-cr/21262:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/25356:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/25357:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/25392:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
2 locks held by kworker/R-wg-cr/25409:
2 locks held by kworker/R-wg-cr/25614:
2 locks held by kworker/R-wg-cr/25616:
5 locks held by kworker/R-wg-cr/25617:
 #0: ffff8880b863e8d8 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:598
 #1: ffff8880b8628948 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x41d/0x7a0 kernel/sched/psi.c:987
 #2: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #2: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #2: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: netif_receive_skb_internal net/core/dev.c:5860 [inline]
 #2: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: netif_receive_skb+0x131/0x890 net/core/dev.c:5932
 #3: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #3: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #3: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: nf_hook include/linux/netfilter.h:238 [inline]
 #3: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: NF_HOOK+0x9a/0x450 include/linux/netfilter.h:312
 #4: ffffffff9a5f8558 (&obj_hash[i].lock){-.-.}-{2:2}, at: __debug_check_no_obj_freed lib/debugobjects.c:1088 [inline]
 #4: ffffffff9a5f8558 (&obj_hash[i].lock){-.-.}-{2:2}, at: debug_check_no_obj_freed+0x234/0x580 lib/debugobjects.c:1129
1 lock held by syz.3.3035/25971:
1 lock held by kworker/0:5/26103:
1 lock held by kworker/R-wg-cr/26465:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
4 locks held by syz-executor/26803:
 #0: ffff8880305bc420 (sb_writers#8){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:2964 [inline]
 #0: ffff8880305bc420 (sb_writers#8){.+.+}-{0:0}, at: vfs_write+0x225/0xd30 fs/read_write.c:675
 #1: ffff888064e1c888 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x1ea/0x500 fs/kernfs/file.c:325
 #2: ffff8880276a92d8 (kn->active#50){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x20e/0x500 fs/kernfs/file.c:326
 #3: ffffffff8f55e7a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: new_device_store+0x1b4/0x890 drivers/net/netdevsim/bus.c:166
8 locks held by syz-executor/26866:
 #0: ffff8880305bc420 (sb_writers#8){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:2964 [inline]
 #0: ffff8880305bc420 (sb_writers#8){.+.+}-{0:0}, at: vfs_write+0x225/0xd30 fs/read_write.c:675
 #1: ffff888070cbac88 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x1ea/0x500 fs/kernfs/file.c:325
 #2: ffff8880276a93c8 (kn->active#49){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x20e/0x500 fs/kernfs/file.c:326
 #3: ffffffff8f55e7a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xfc/0x480 drivers/net/netdevsim/bus.c:216
 #4: ffff88805bb330e8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:1014 [inline]
 #4: ffff88805bb330e8 (&dev->mutex){....}-{4:4}, at: __device_driver_lock drivers/base/dd.c:1095 [inline]
 #4: ffff88805bb330e8 (&dev->mutex){....}-{4:4}, at: device_release_driver_internal+0xce/0x7c0 drivers/base/dd.c:1293
 #5: ffff888065d7b250 (&devlink->lock_key#58){+.+.}-{4:4}, at: nsim_drv_remove+0x50/0x160 drivers/net/netdevsim/dev.c:1675
 #6: ffffffff8fcb2cc8 (rtnl_mutex){+.+.}-{4:4}, at: nsim_destroy+0x71/0x5c0 drivers/net/netdevsim/netdev.c:816
 #7: ffffffff8e93cff8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:297 [inline]
 #7: ffffffff8e93cff8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x381/0x830 kernel/rcu/tree_exp.h:976
4 locks held by syz-executor/26878:
 #0: ffff8880305bc420 (sb_writers#8){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:2964 [inline]
 #0: ffff8880305bc420 (sb_writers#8){.+.+}-{0:0}, at: vfs_write+0x225/0xd30 fs/read_write.c:675
 #1: ffff88806d0d8488 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x1ea/0x500 fs/kernfs/file.c:325
 #2: ffff8880276a93c8 (kn->active#49){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x20e/0x500 fs/kernfs/file.c:326
 #3: ffffffff8f55e7a8 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xfc/0x480 drivers/net/netdevsim/bus.c:216
1 lock held by syz-executor/27015:
 #0: ffffffff8e93cff8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:329 [inline]
 #0: ffffffff8e93cff8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x451/0x830 kernel/rcu/tree_exp.h:976
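
Nearly every other stuck thread above is a WireGuard crypt rescuer (kworker/R-wg-cr) parked on the single global wq_pool_attach_mutex, taken on both the attach side (worker_attach_to_pool, workqueue.c:2669) and the detach side (worker_detach_from_pool, workqueue.c:2727). A paraphrased sketch of the contended path, simplified from the workqueue code cited in the dump:

    /* kernel/workqueue.c (simplified sketch): one system-wide mutex
     * serializes every rescuer attach/detach, so the kworker/R-wg-cr
     * threads in this dump all sleep waiting on each other here. */
    static void worker_detach_from_pool(struct worker *worker)
    {
            mutex_lock(&wq_pool_attach_mutex);  /* workqueue.c:2727 */
            /* ... unlink the worker from its pool ... */
            mutex_unlock(&wq_pool_attach_mutex);
    }

While the rescuers are wedged, work queued on the wg-crypt-* workqueues is not guaranteed to complete, which is consistent with the flush_workqueue() call in peer_remove_after_dead() above never returning.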

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 30 Comm: khungtaskd Not tainted 6.13.0-rc5-syzkaller-00163-gab75170520d4 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:234 [inline]
 watchdog+0xff6/0x1040 kernel/hung_task.c:397
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 26465 Comm: kworker/R-wg-cr Not tainted 6.13.0-rc5-syzkaller-00163-gab75170520d4 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Workqueue:  0x0 (wg-crypt-wg1)
RIP: 0010:hlock_class kernel/locking/lockdep.c:228 [inline]
RIP: 0010:__lock_acquire+0x12be/0x2100 kernel/locking/lockdep.c:5223
Code: 10 8b 18 81 e3 ff 1f 00 00 48 89 d8 48 c1 e8 06 48 8d 3c c5 80 48 2a 94 be 08 00 00 00 e8 ba 16 8b 00 48 0f a3 1d a2 70 af 12 <73> 1d 48 69 c3 c8 00 00 00 48 8d 98 40 c7 c1 93 48 ba 00 00 00 00
RSP: 0018:ffffc900000069f0 EFLAGS: 00000057
RAX: 0000000000000001 RBX: 0000000000000021 RCX: ffffffff817ad7d6
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff942a4880
RBP: 3d352eea67da5f0d R08: ffffffff942a4887 R09: 1ffffffff2854910
R10: dffffc0000000000 R11: fffffbfff2854911 R12: ffff888028990000
R13: ffff888028990000 R14: 1ffff1100513216f R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffd448fbe80 CR3: 000000000e736000 CR4: 00000000003526f0
DR0: 0000000000006260 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <IRQ>
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
 rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 rcu_read_lock include/linux/rcupdate.h:849 [inline]
 net_generic+0x3c/0x240 include/net/netns/generic.h:45
 is_pppoe_ip net/bridge/br_netfilter_hooks.c:122 [inline]
 br_nf_forward+0x298/0x18b0 net/bridge/br_netfilter_hooks.c:800
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_slow+0xc3/0x220 net/netfilter/core.c:626
 nf_hook include/linux/netfilter.h:269 [inline]
 NF_HOOK+0x2a7/0x460 include/linux/netfilter.h:312
 __br_forward+0x489/0x660 net/bridge/br_forward.c:115
 deliver_clone net/bridge/br_forward.c:131 [inline]
 maybe_deliver+0xb3/0x150 net/bridge/br_forward.c:190
 br_flood+0x2e4/0x660 net/bridge/br_forward.c:236
 br_handle_frame_finish+0x18ba/0x1fe0 net/bridge/br_input.c:215
 br_nf_hook_thresh+0x472/0x590
 br_nf_pre_routing_finish_ipv6+0xaa0/0xdd0
 NF_HOOK include/linux/netfilter.h:314 [inline]
 br_nf_pre_routing_ipv6+0x379/0x770 net/bridge/br_netfilter_ipv6.c:184
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_bridge_pre net/bridge/br_input.c:277 [inline]
 br_handle_frame+0x9fd/0x1530 net/bridge/br_input.c:424
 __netif_receive_skb_core+0x14eb/0x4690 net/core/dev.c:5568
 __netif_receive_skb_one_core net/core/dev.c:5672 [inline]
 __netif_receive_skb+0x12f/0x650 net/core/dev.c:5787
 process_backlog+0x662/0x15b0 net/core/dev.c:6119
 __napi_poll+0xcb/0x490 net/core/dev.c:6885
 napi_poll net/core/dev.c:6954 [inline]
 net_rx_action+0x89b/0x1240 net/core/dev.c:7076
 handle_softirqs+0x2d4/0x9b0 kernel/softirq.c:561
 __do_softirq kernel/softirq.c:595 [inline]
 invoke_softirq kernel/softirq.c:435 [inline]
 __irq_exit_rcu+0xf7/0x220 kernel/softirq.c:662
 irq_exit_rcu+0x9/0x30 kernel/softirq.c:678
 instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1049 [inline]
 sysvec_apic_timer_interrupt+0xa6/0xc0 arch/x86/kernel/apic/apic.c:1049
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
RIP: 0010:finish_task_switch+0x1ea/0x870 kernel/sched/core.c:5243
Code: c9 50 e8 49 0c 0c 00 48 83 c4 08 4c 89 f7 e8 ed 39 00 00 0f 1f 44 00 00 4c 89 f7 e8 a0 d9 5c 0a e8 0b 8c 38 00 fb 48 8b 5d c0 <48> 8d bb f8 15 00 00 48 89 f8 48 c1 e8 03 49 be 00 00 00 00 00 fc
RSP: 0018:ffffc900044678e8 EFLAGS: 00000282
RAX: e1813a1002109900 RBX: ffff888028990000 RCX: ffffffff817b378a
RDX: dffffc0000000000 RSI: ffffffff8c0a98e0 RDI: ffffffff8c5fb0e0
RBP: ffffc90004467930 R08: ffffffff942a4897 R09: 1ffffffff2854912
R10: dffffc0000000000 R11: fffffbfff2854913 R12: 1ffff110170c7edc
R13: dffffc0000000000 R14: ffff8880b863e8c0 R15: ffff8880b863f6e0
 context_switch kernel/sched/core.c:5372 [inline]
 __schedule+0x1858/0x4c30 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6848
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6905
 __mutex_lock_common kernel/locking/mutex.c:665 [inline]
 __mutex_lock+0x7e7/0xee0 kernel/locking/mutex.c:735
 worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>

Crashes (23):
Time              Kernel      Commit        Syzkaller  Manager
2025/01/06 11:52  upstream    ab75170520d4  f3558dbf   ci-upstream-kasan-gce
2024/12/14 22:02  upstream    a0e3919a2df2  7cbfbb3a   ci-upstream-kasan-gce-root
2024/11/27 05:16  upstream    7eef7e306d3c  52b38cc1   ci-upstream-kasan-gce-root
2024/11/16 14:13  upstream    e8bdb3c8be08  cfe3a04a   ci-upstream-kasan-gce-root
2024/08/07 00:07  upstream    eb5e56d14912  e1bdb00a   ci-upstream-kasan-gce-selinux-root
2024/08/04 18:15  upstream    a5dbd76a8942  1786a2a8   ci-upstream-kasan-gce-root
2024/07/22 03:25  upstream    7846b618e0a4  b88348e9   ci-upstream-kasan-gce-root
2024/07/05 09:20  upstream    661e504db04c  2a40360c   ci-upstream-kasan-gce
2024/06/14 17:46  upstream    2ccbdf43d5e7  8d849073   ci-upstream-kasan-gce-smack-root
2024/06/06 15:52  upstream    2df0193e62cf  121701b6   ci-upstream-kasan-gce-smack-root
2024/06/01 00:41  upstream    d8ec19857b09  3113787f   ci-upstream-kasan-gce-selinux-root
2024/05/29 08:46  upstream    e0cce98fe279  34889ee3   ci-upstream-kasan-gce-selinux-root
2024/05/28 00:58  upstream    2bfcfd584ff5  f550015e   ci-upstream-kasan-gce-root
2024/05/27 23:56  upstream    2bfcfd584ff5  f550015e   ci-upstream-kasan-gce-root
2024/05/25 16:23  upstream    56fb6f92854f  a10a183e   ci-upstream-kasan-gce
2024/05/11 19:38  upstream    cf87f46fd34d  9026e142   ci-upstream-kasan-gce-root
2024/04/24 18:03  upstream    9d1ddab261f3  8bdc0f22   ci-upstream-kasan-gce-root
2024/08/25 00:19  upstream    d2bafcf224f3  d7d32352   ci-upstream-kasan-gce-386
2024/07/26 18:07  upstream    1722389b0d86  3f86dfed   ci-upstream-kasan-gce-386
2024/06/02 06:38  linux-next  0e1980c40b6e  3113787f   ci-upstream-linux-next-kasan-gce-root
2024/05/27 00:09  linux-next  3689b0ef08b7  a10a183e   ci-upstream-linux-next-kasan-gce-root
2024/05/17 12:17  linux-next  c75962170e49  a12e99e7   ci-upstream-linux-next-kasan-gce-root
2024/04/15 02:43  linux-next  9ed46da14b9b  c8349e48   ci-upstream-linux-next-kasan-gce-root

All 23 crashes carry the title "INFO: task hung in peer_remove_after_dead"; no syz or C reproducer is listed for any of them.