syzbot


INFO: task hung in worker_attach_to_pool (2)

Status: upstream: reported on 2024/10/09 18:00
Subsystems: kernel
Reported-by: syzbot+8b08b50984ccfdd38ce2@syzkaller.appspotmail.com
First crash: 310d, last: 69d
Discussions (2)
Title | Replies (including bot) | Last reply
Re: BUG: Stall on adding/removing wokers into workqueue pool | 1 (1) | 2024/11/01 17:37
[syzbot] [kernel?] INFO: task hung in worker_attach_to_pool (2) | 0 (1) | 2024/10/09 18:00
Similar bugs (1)
Kernel Title Repro Cause bisect Fix bisect Count Last Reported Patched Status
upstream INFO: task hung in worker_attach_to_pool tipc 6 1588d 1592d 0/28 auto-closed as invalid on 2021/01/19 23:38

Sample crash report:
INFO: task kworker/R-wg-cr:9253 blocked for more than 143 seconds.
      Not tainted 6.13.0-rc7-syzkaller-00149-g9bffa1ad25b8 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:28720 pid:9253  tgid:9253  ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0x17fb/0x4be0 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6848
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6905
 __mutex_lock_common kernel/locking/mutex.c:665 [inline]
 __mutex_lock+0x7e7/0xee0 kernel/locking/mutex.c:735
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2676
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3478
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
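
For context, the task above is a rescuer blocked while acquiring wq_pool_attach_mutex, a single system-wide mutex that every rescuer takes when attaching to or detaching from a worker pool. Below is a simplified C sketch of that locking pattern, loosely modelled on kernel/workqueue.c around v6.13; names are abbreviated and all bookkeeping is omitted, so it is illustrative only, not the kernel source.

  /* One global mutex serializes every attach/detach in the system. */
  static DEFINE_MUTEX(wq_pool_attach_mutex);

  /* rescuer_thread() attaches to each starved pool, drains it, then detaches. */
  static void worker_attach_to_pool(struct worker *worker, struct worker_pool *pool)
  {
          mutex_lock(&wq_pool_attach_mutex);      /* <- the blocked task above waits here */
          /* ... bind the worker to pool->attrs and add it to pool->workers ... */
          mutex_unlock(&wq_pool_attach_mutex);
  }

  static void worker_detach_from_pool(struct worker *worker)
  {
          mutex_lock(&wq_pool_attach_mutex);      /* held by the kworker/R-wg-cr tasks shown
                                                     in worker_detach_from_pool() below */
          /* ... remove the worker from its pool ... */
          mutex_unlock(&wq_pool_attach_mutex);
  }

Because the mutex is global, one holder that stalls (or a long queue of waiters behind it) backs up every other rescuer, which is consistent with the many kworker/R-* tasks queued on the same lock address ffffffff8e7e3348 in the dump below.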

Showing all locks held in the system:
2 locks held by ksoftirqd/0/16:
1 lock held by khungtaskd/30:
 #0: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6744
3 locks held by kworker/u8:6/1135:
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3317
 #1: ffffc900042cfd00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
 #1: ffffc900042cfd00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317
 #2: ffffffff8fca0c88 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:285
6 locks held by kworker/u8:7/1146:
 #0: ffff88801baeb148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
 #0: ffff88801baeb148 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3317
 #1: ffffc9000451fd00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
 #1: ffffc9000451fd00 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317
 #2: ffffffff8fc947d0 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0x16a/0xd50 net/core/net_namespace.c:602
 #3: ffffffff8fca0c88 (rtnl_mutex){+.+.}-{4:4}, at: ieee80211_unregister_hw+0x55/0x2c0 net/mac80211/main.c:1664
 #4: ffff88805e750768 (&rdev->wiphy.mtx){+.+.}-{4:4}, at: wiphy_lock include/net/cfg80211.h:6019 [inline]
 #4: ffff88805e750768 (&rdev->wiphy.mtx){+.+.}-{4:4}, at: ieee80211_remove_interfaces+0x12b/0x700 net/mac80211/iface.c:2282
 #5: ffffffff8e7d2ed0 (cpu_hotplug_lock){++++}-{0:0}, at: flush_all_backlogs net/core/dev.c:6063 [inline]
 #5: ffffffff8e7d2ed0 (cpu_hotplug_lock){++++}-{0:0}, at: unregister_netdevice_many_notify+0x5ea/0x1da0 net/core/dev.c:11526
1 lock held by dhcpcd/5488:
 #0: ffffffff8fca0c88 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:128 [inline]
 #0: ffffffff8fca0c88 (rtnl_mutex){+.+.}-{4:4}, at: devinet_ioctl+0x31a/0x1ac0 net/ipv4/devinet.c:1129
2 locks held by getty/5584:
 #0: ffff88814d2c10a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x6a6/0x1e00 drivers/tty/n_tty.c:2211
1 lock held by kworker/R-wg-cr/5857:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3533
1 lock held by kworker/R-wg-cr/5860:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2676
1 lock held by kworker/R-wg-cr/5868:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2676
1 lock held by kworker/R-wg-cr/5869:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3533
1 lock held by kworker/R-wg-cr/5870:
3 locks held by kworker/u8:9/5919:
 #0: ffff888022eb6948 ((wq_completion)cfg80211){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
 #0: ffff888022eb6948 ((wq_completion)cfg80211){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3317
 #1: ffffc9000442fd00 ((work_completion)(&(&rdev->dfs_update_channels_wk)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
 #1: ffffc9000442fd00 ((work_completion)(&(&rdev->dfs_update_channels_wk)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317
 #2: ffffffff8fca0c88 (rtnl_mutex){+.+.}-{4:4}, at: cfg80211_dfs_channels_update_work+0xbf/0x610 net/wireless/mlme.c:1015
2 locks held by kworker/u8:10/5980:
1 lock held by kworker/R-wg-cr/7661:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3533
1 lock held by kworker/R-wg-cr/7664:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2734 [inline]
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3533
1 lock held by kworker/u9:1/8928:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2676
1 lock held by kworker/R-wg-cr/9253:
 #0: ffffffff8e7e3348 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2676
2 locks held by syz-executor/9439:
 #0: ffffffff8fc947d0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:512
 #1: ffffffff8fca0c88 (rtnl_mutex){+.+.}-{4:4}, at: tc_action_net_exit include/net/act_api.h:173 [inline]
 #1: ffffffff8fca0c88 (rtnl_mutex){+.+.}-{4:4}, at: gate_exit_net+0x30/0x100 net/sched/act_gate.c:654
1 lock held by syz-executor/9444:
 #0: ffffffff8fca0c88 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:698 [inline]
 #0: ffffffff8fca0c88 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x3b/0x1b0 drivers/net/tun.c:3517
1 lock held by syz-executor/9450:
 #0: ffffffff8fca0c88 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:698 [inline]
 #0: ffffffff8fca0c88 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x3b/0x1b0 drivers/net/tun.c:3517
1 lock held by syz-executor/9457:
 #0: ffffffff8fca0c88 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:698 [inline]
 #0: ffffffff8fca0c88 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x3b/0x1b0 drivers/net/tun.c:3517
1 lock held by syz-executor/9465:
 #0: ffffffff8fca0c88 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:698 [inline]
 #0: ffffffff8fca0c88 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x3b/0x1b0 drivers/net/tun.c:3517
2 locks held by syz-executor/9479:
 #0: ffffffff8fc947d0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:512
 #1: ffffffff8fca0c88 (rtnl_mutex){+.+.}-{4:4}, at: wg_netns_pre_exit+0x1f/0x1e0 drivers/net/wireguard/device.c:415
2 locks held by syz-executor/9486:
 #0: ffffffff8fc947d0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:512
 #1: ffffffff8fca0c88 (rtnl_mutex){+.+.}-{4:4}, at: wg_netns_pre_exit+0x1f/0x1e0 drivers/net/wireguard/device.c:415
2 locks held by syz-executor/9489:
 #0: ffffffff8fc947d0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:512
 #1: ffffffff8fca0c88 (rtnl_mutex){+.+.}-{4:4}, at: ip_tunnel_init_net+0x20e/0x720 net/ipv4/ip_tunnel.c:1159
2 locks held by syz-executor/9494:
 #0: ffffffff8fc947d0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:512
 #1: ffffffff8fca0c88 (rtnl_mutex){+.+.}-{4:4}, at: wg_netns_pre_exit+0x1f/0x1e0 drivers/net/wireguard/device.c:415
2 locks held by syz-executor/9500:
 #0: ffffffff8fc947d0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:512
 #1: ffffffff8fca0c88 (rtnl_mutex){+.+.}-{4:4}, at: ip_tunnel_init_net+0x20e/0x720 net/ipv4/ip_tunnel.c:1159

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 30 Comm: khungtaskd Not tainted 6.13.0-rc7-syzkaller-00149-g9bffa1ad25b8 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:234 [inline]
 watchdog+0xff6/0x1040 kernel/hung_task.c:397
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 16 Comm: ksoftirqd/0 Not tainted 6.13.0-rc7-syzkaller-00149-g9bffa1ad25b8 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
RIP: 0010:__sanitizer_cov_trace_pc+0x5d/0x70 kernel/kcov.c:235
Code: f8 15 00 00 83 fa 02 75 21 48 8b 91 00 16 00 00 48 8b 32 48 8d 7e 01 8b 89 fc 15 00 00 48 39 cf 73 08 48 89 3a 48 89 44 f2 08 <c3> cc cc cc cc 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 90 90 90
RSP: 0018:ffffc90000156738 EFLAGS: 00000246
RAX: ffffffff8a406947 RBX: ffffffff8a406920 RCX: ffff88801beeda00
RDX: 0000000000000100 RSI: ffffc90000156c20 RDI: ffff8880272c6d00
RBP: ffffc90000156890 R08: 0000000000000001 R09: ffffffff89a17b53
R10: 0000000000000002 R11: ffff88801beeda00 R12: ffffc900001568e0
R13: ffff8880272c6d00 R14: dffffc0000000000 R15: ffffc90000156c20
FS:  0000000000000000(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b2cf09ff8 CR3: 000000000e736000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 fib4_rule_action+0x27/0x330 net/ipv4/fib_rules.c:112
 fib_rules_lookup+0x74e/0xdb0 net/core/fib_rules.c:319
 __fib_lookup+0x16b/0x2c0 net/ipv4/fib_rules.c:94
 ip_route_output_key_hash_rcu+0x284/0x2390 net/ipv4/route.c:2780
 ip_route_output_key_hash+0x193/0x2b0 net/ipv4/route.c:2670
 __ip_route_output_key include/net/route.h:141 [inline]
 ip_route_output_flow+0x29/0x140 net/ipv4/route.c:2898
 ip_route_output_key include/net/route.h:151 [inline]
 ip_route_me_harder+0x877/0x1360 net/ipv4/netfilter.c:53
 synproxy_send_tcp+0x356/0x6c0 net/netfilter/nf_synproxy_core.c:431
 synproxy_send_client_synack+0x8a4/0xe20 net/netfilter/nf_synproxy_core.c:484
 nft_synproxy_eval_v4+0x3ca/0x610 net/netfilter/nft_synproxy.c:59
 nft_synproxy_do_eval+0x362/0xa60 net/netfilter/nft_synproxy.c:141
 expr_call_ops_eval net/netfilter/nf_tables_core.c:240 [inline]
 nft_do_chain+0x4ad/0x1da0 net/netfilter/nf_tables_core.c:288
 nft_do_chain_inet+0x418/0x6b0 net/netfilter/nft_chain_filter.c:161
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_slow+0xc3/0x220 net/netfilter/core.c:626
 nf_hook include/linux/netfilter.h:269 [inline]
 NF_HOOK+0x29e/0x450 include/linux/netfilter.h:312
 NF_HOOK+0x3a4/0x450 include/linux/netfilter.h:314
 __netif_receive_skb_one_core net/core/dev.c:5704 [inline]
 __netif_receive_skb+0x2bf/0x650 net/core/dev.c:5817
 process_backlog+0x662/0x15b0 net/core/dev.c:6149
 __napi_poll+0xcb/0x490 net/core/dev.c:6902
 napi_poll net/core/dev.c:6971 [inline]
 net_rx_action+0x89b/0x1240 net/core/dev.c:7093
 handle_softirqs+0x2d4/0x9b0 kernel/softirq.c:561
 run_ksoftirqd+0xca/0x130 kernel/softirq.c:950
 smpboot_thread_fn+0x544/0xa30 kernel/smpboot.c:164
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
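
The khungtaskd backtrace above is the hung-task watchdog itself: it periodically scans for TASK_UNINTERRUPTIBLE tasks that have not been scheduled within the configured window, prints the "blocked for more than N seconds" splat seen at the top of this report, then dumps all held locks and per-CPU NMI backtraces. A rough sketch of that check, modelled on kernel/hung_task.c and simplified for illustration (not verbatim kernel code):

  static void check_hung_task(struct task_struct *t, unsigned long timeout)
  {
          unsigned long switch_count = t->nvcsw + t->nivcsw;

          /* The task has run since the last scan, so it is not hung. */
          if (switch_count != t->last_switch_count) {
                  t->last_switch_count = switch_count;
                  t->last_switch_time = jiffies;
                  return;
          }
          /* Not yet blocked for longer than hung_task_timeout_secs. */
          if (time_is_after_jiffies(t->last_switch_time + timeout * HZ))
                  return;

          /*
           * Still in D state with no context switch for the whole window:
           * report the hung task and its stack; the caller then shows all
           * held locks and triggers the all-CPU NMI backtraces above.
           */
  }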

Crashes (47):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/01/17 14:40 upstream 9bffa1ad25b8 953d1c45 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in worker_attach_to_pool
2024/12/23 13:10 upstream 4bbf9020becb 444551c4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in worker_attach_to_pool
2024/12/10 02:43 upstream 7cb1b4663150 deb72877 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in worker_attach_to_pool
2024/12/08 14:55 upstream 7503345ac5f5 9ac0fdc6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in worker_attach_to_pool
2024/11/20 16:12 upstream bf9aa14fc523 4fca1650 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in worker_attach_to_pool
2024/11/14 07:19 upstream f1b785f4c787 a8c99394 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in worker_attach_to_pool
2024/11/11 08:04 upstream a9cda7c0ffed 6b856513 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in worker_attach_to_pool
2024/11/07 00:01 upstream 7758b206117d df3dc63b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/10/23 07:27 upstream c2ee9f594da8 15fa2979 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/10/17 01:05 upstream c964ced77262 666f77ed .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in worker_attach_to_pool
2024/10/13 07:44 upstream 36c254515dc6 084d8178 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/10/08 14:09 upstream 87d6aab2389e 402f1df0 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/10/08 11:22 upstream 87d6aab2389e 402f1df0 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in worker_attach_to_pool
2024/10/03 21:44 upstream 7ec462100ef9 d7906eff .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/10/01 18:07 upstream e32cde8d2bd7 ea2b66a6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/09/27 06:26 upstream 075dbe9f6e3c 9314348a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/09/26 03:21 upstream aa486552a110 0d19f247 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in worker_attach_to_pool
2024/09/24 14:32 upstream abf2050f51fd 5643e0e9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/09/24 13:59 upstream abf2050f51fd 5643e0e9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/09/23 15:44 upstream de5cb0dcb74c 89298aad .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/09/23 15:27 upstream de5cb0dcb74c 89298aad .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/09/21 16:40 upstream 1868f9d0260e 6f888b75 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/09/21 00:23 upstream baeb9a7d8b60 6f888b75 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/08/24 17:40 upstream d2bafcf224f3 d7d32352 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in worker_attach_to_pool
2024/08/21 14:09 upstream b311c1b497e5 db5852f9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/08/13 05:03 upstream d74da846046a 7b0f4b46 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/08/06 07:13 upstream b446a2dae984 e1bdb00a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in worker_attach_to_pool
2024/07/29 04:13 upstream 5437f30d3458 46eb10b7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/07/24 06:36 upstream 28bbe4ea686a 57b2edb1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/07/05 11:59 upstream 661e504db04c 2a40360c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in worker_attach_to_pool
2024/07/03 11:12 upstream e9d22f7a6655 1ecfa2d8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in worker_attach_to_pool
2024/06/18 18:33 upstream 2ccbdf43d5e7 639d6cdf .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in worker_attach_to_pool
2024/06/10 15:19 upstream 83a7eefedc9b 048c640a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/06/09 06:40 upstream 771ed66105de 82c05ab8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/05/29 20:37 upstream 4a4be1ad3a6e 34889ee3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in worker_attach_to_pool
2024/05/29 14:14 upstream e0cce98fe279 34889ee3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in worker_attach_to_pool
2024/12/02 20:07 upstream e70140ba0d2b bb326ffb .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in worker_attach_to_pool
2024/10/20 16:12 upstream 715ca9dd687f cd6fc0a3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in worker_attach_to_pool
2024/10/14 02:49 upstream ba01565ced22 084d8178 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in worker_attach_to_pool
2024/10/05 16:10 upstream 27cc6fdf7201 d7906eff .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in worker_attach_to_pool
2024/10/05 06:46 upstream 27cc6fdf7201 d7906eff .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in worker_attach_to_pool
2024/07/21 12:43 upstream 2c9b3512402e b88348e9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in worker_attach_to_pool
2024/05/30 03:19 upstream 4a4be1ad3a6e 34889ee3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in worker_attach_to_pool
2024/10/05 17:45 net-next d521db38f339 d7906eff .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in worker_attach_to_pool
2024/10/11 07:02 linux-next 0cca97bf2364 cd942402 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in worker_attach_to_pool
2024/10/04 16:41 linux-next c02d24a5af66 d7906eff .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in worker_attach_to_pool
2024/05/22 02:33 linux-next 124cfbcd6d18 1014eca7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in worker_attach_to_pool