syzbot


INFO: task hung in worker_attach_to_pool (2)

Status: upstream: reported on 2024/10/09 18:00
Subsystems: kernel
Reported-by: syzbot+8b08b50984ccfdd38ce2@syzkaller.appspotmail.com
First crash: 183d, last: 14h27m
Discussions (2)
Title | Replies (including bot) | Last reply
Re: BUG: Stall on adding/removing workers into workqueue pool | 1 (1) | 2024/11/01 17:37
[syzbot] [kernel?] INFO: task hung in worker_attach_to_pool (2) | 0 (1) | 2024/10/09 18:00
Similar bugs (1)
Kernel Title Repro Cause bisect Fix bisect Count Last Reported Patched Status
upstream INFO: task hung in worker_attach_to_pool tipc 6 1461d 1465d 0/28 auto-closed as invalid on 2021/01/19 23:38

Sample crash report:
INFO: task kworker/R-mm_pe:13 blocked for more than 143 seconds.
      Not tainted 6.12.0-syzkaller-01782-gbf9aa14fc523 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-mm_pe state:D stack:29232 pid:13    tgid:13    ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0x1850/0x4c30 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6848
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6905
 __mutex_lock_common kernel/locking/mutex.c:665 [inline]
 __mutex_lock+0x7e7/0xee0 kernel/locking/mutex.c:735
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3471
 kthread+0x2f2/0x390 kernel/kthread.c:389
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
INFO: task kworker/1:0:25 blocked for more than 144 seconds.
      Not tainted 6.12.0-syzkaller-01782-gbf9aa14fc523 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:0     state:D stack:22192 pid:25    tgid:25    ppid:2      flags:0x00004000
Workqueue:  0x0 (wg-crypt-wg2)
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0x1850/0x4c30 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6848
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6905
 __mutex_lock_common kernel/locking/mutex.c:665 [inline]
 __mutex_lock+0x7e7/0xee0 kernel/locking/mutex.c:735
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 create_worker+0x491/0x720 kernel/workqueue.c:2811
 maybe_create_worker kernel/workqueue.c:3054 [inline]
 manage_workers kernel/workqueue.c:3106 [inline]
 worker_thread+0x318/0xd30 kernel/workqueue.c:3366
 kthread+0x2f2/0x390 kernel/kthread.c:389
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
INFO: task kworker/R-wg-cr:5884 blocked for more than 145 seconds.
      Not tainted 6.12.0-syzkaller-01782-gbf9aa14fc523 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:26200 pid:5884  tgid:5884  ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0x1850/0x4c30 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6848
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6905
 __mutex_lock_common kernel/locking/mutex.c:665 [inline]
 __mutex_lock+0x7e7/0xee0 kernel/locking/mutex.c:735
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3471
 kthread+0x2f2/0x390 kernel/kthread.c:389
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
INFO: task kworker/R-wg-cr:5893 blocked for more than 146 seconds.
      Not tainted 6.12.0-syzkaller-01782-gbf9aa14fc523 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:26200 pid:5893  tgid:5893  ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0x1850/0x4c30 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6848
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6905
 __mutex_lock_common kernel/locking/mutex.c:665 [inline]
 __mutex_lock+0x7e7/0xee0 kernel/locking/mutex.c:735
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3471
 kthread+0x2f2/0x390 kernel/kthread.c:389
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
INFO: task kworker/R-wg-cr:5894 blocked for more than 147 seconds.
      Not tainted 6.12.0-syzkaller-01782-gbf9aa14fc523 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:29232 pid:5894  tgid:5894  ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0x1850/0x4c30 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6848
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6905
 __mutex_lock_common kernel/locking/mutex.c:665 [inline]
 __mutex_lock+0x7e7/0xee0 kernel/locking/mutex.c:735
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3471
 kthread+0x2f2/0x390 kernel/kthread.c:389
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
INFO: task kworker/R-wg-cr:5899 blocked for more than 147 seconds.
      Not tainted 6.12.0-syzkaller-01782-gbf9aa14fc523 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:29080 pid:5899  tgid:5899  ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0x1850/0x4c30 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6848
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6905
 __mutex_lock_common kernel/locking/mutex.c:665 [inline]
 __mutex_lock+0x7e7/0xee0 kernel/locking/mutex.c:735
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3471
 kthread+0x2f2/0x390 kernel/kthread.c:389
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
INFO: task kworker/R-wg-cr:5901 blocked for more than 148 seconds.
      Not tainted 6.12.0-syzkaller-01782-gbf9aa14fc523 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:27312 pid:5901  tgid:5901  ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0x1850/0x4c30 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6848
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6905
 __mutex_lock_common kernel/locking/mutex.c:665 [inline]
 __mutex_lock+0x7e7/0xee0 kernel/locking/mutex.c:735
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3471
 kthread+0x2f2/0x390 kernel/kthread.c:389
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
INFO: task kworker/R-wg-cr:5902 blocked for more than 148 seconds.
      Not tainted 6.12.0-syzkaller-01782-gbf9aa14fc523 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:27312 pid:5902  tgid:5902  ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0x1850/0x4c30 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6848
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6905
 __mutex_lock_common kernel/locking/mutex.c:665 [inline]
 __mutex_lock+0x7e7/0xee0 kernel/locking/mutex.c:735
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3471
 kthread+0x2f2/0x390 kernel/kthread.c:389
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
INFO: task kworker/R-wg-cr:5903 blocked for more than 149 seconds.
      Not tainted 6.12.0-syzkaller-01782-gbf9aa14fc523 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:29080 pid:5903  tgid:5903  ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0x1850/0x4c30 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6848
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6905
 __mutex_lock_common kernel/locking/mutex.c:665 [inline]
 __mutex_lock+0x7e7/0xee0 kernel/locking/mutex.c:735
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3471
 kthread+0x2f2/0x390 kernel/kthread.c:389
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
INFO: task kworker/R-wg-cr:6603 blocked for more than 149 seconds.
      Not tainted 6.12.0-syzkaller-01782-gbf9aa14fc523 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:28088 pid:6603  tgid:6603  ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0x1850/0x4c30 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6848
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6905
 __mutex_lock_common kernel/locking/mutex.c:665 [inline]
 __mutex_lock+0x7e7/0xee0 kernel/locking/mutex.c:735
 worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
 kthread+0x2f2/0x390 kernel/kthread.c:389
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
INFO: task kworker/R-wg-cr:6816 blocked for more than 150 seconds.
      Not tainted 6.12.0-syzkaller-01782-gbf9aa14fc523 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:27312 pid:6816  tgid:6816  ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0x1850/0x4c30 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6848
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6905
 __mutex_lock_common kernel/locking/mutex.c:665 [inline]
 __mutex_lock+0x7e7/0xee0 kernel/locking/mutex.c:735
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3471
 kthread+0x2f2/0x390 kernel/kthread.c:389
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings
INFO: task kworker/1:2:7086 blocked for more than 150 seconds.
      Not tainted 6.12.0-syzkaller-01782-gbf9aa14fc523 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:2     state:D stack:30096 pid:7086  tgid:7086  ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0x1850/0x4c30 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6848
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6905
 kthread+0x23b/0x390 kernel/kthread.c:382
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Future hung task reports are suppressed, see sysctl kernel.hung_task_warnings

Showing all locks held in the system:
1 lock held by kworker/R-mm_pe/13:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
2 locks held by ksoftirqd/1/24:
1 lock held by kworker/1:0/25:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by khungtaskd/30:
 #0: ffffffff8e93c7e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8e93c7e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8e93c7e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6744
3 locks held by kworker/u8:4/67:
 #0: ffff88801ac81148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac81148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc900015d7d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc900015d7d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8fcdd8c8 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:276
4 locks held by kworker/u8:6/1516:
 #0: ffff88801bae5948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801bae5948 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc90005427d00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc90005427d00 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8fcd0d90 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0x16a/0xcc0 net/core/net_namespace.c:580
 #3: ffffffff8fcdd8c8 (rtnl_mutex){+.+.}-{4:4}, at: wg_destruct+0x25/0x2e0 drivers/net/wireguard/device.c:246
3 locks held by kworker/u8:7/2962:
 #0: ffff8880305cb148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff8880305cb148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc9000c37fd00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc9000c37fd00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8fcdd8c8 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0xd0/0x16f0 net/ipv6/addrconf.c:4196
3 locks held by kworker/u9:1/5151:
 #0: ffff88803a421948 ((wq_completion)hci15){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88803a421948 ((wq_completion)hci15){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc9001008fd00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc9001008fd00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffff888058ba4d80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_cmd_sync_work+0x1ec/0x400 net/bluetooth/hci_sync.c:331
1 lock held by dhcpcd/5508:
 #0: ffffffff8fcdd8c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8fcdd8c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnetlink_rcv_msg+0x6e6/0xcf0 net/core/rtnetlink.c:6672
2 locks held by getty/5595:
 #0: ffff888034faa0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002f062f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x6a6/0x1e00 drivers/tty/n_tty.c:2211
3 locks held by kworker/1:3/5852:
 #0: ffff88801ac79948 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac79948 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc900032efd00 ((reg_check_chans).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc900032efd00 ((reg_check_chans).work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8fcdd8c8 (rtnl_mutex){+.+.}-{4:4}, at: reg_check_chans_work+0x99/0xfd0 net/wireless/reg.c:2480
1 lock held by kworker/u9:4/5855:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/5880:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/5884:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/5888:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/5893:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/5894:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/5895:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/5896:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/5897:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/5899:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/5900:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/5901:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/5902:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/5903:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
3 locks held by kworker/1:4/5906:
 #0: ffff88801ac78948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac78948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc900042c7d00 (reg_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc900042c7d00 (reg_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8fcdd8c8 (rtnl_mutex){+.+.}-{4:4}, at: reg_todo+0x1c/0x8d0 net/wireless/reg.c:3218
3 locks held by kworker/0:6/5962:
 #0: ffff88801ac78948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac78948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc90004417d00 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc90004417d00 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8fcdd8c8 (rtnl_mutex){+.+.}-{4:4}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
1 lock held by kworker/R-wg-cr/6601:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/6602:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/6603:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/6654:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/6655:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/6656:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/6814:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/6815:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/6816:
 #0: ffffffff8e7e6ba8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
2 locks held by syz-executor/6997:
 #0: ffffffff8fcd0d90 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcdd8c8 (rtnl_mutex){+.+.}-{4:4}, at: mpls_net_exit+0x7d/0x2a0 net/mpls/af_mpls.c:2706
2 locks held by syz-executor/7034:
 #0: ffffffff8fcd0d90 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcdd8c8 (rtnl_mutex){+.+.}-{4:4}, at: mpls_net_exit+0x7d/0x2a0 net/mpls/af_mpls.c:2706
2 locks held by syz-executor/7037:
 #0: ffffffff8fcd0d90 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcdd8c8 (rtnl_mutex){+.+.}-{4:4}, at: mpls_net_exit+0x7d/0x2a0 net/mpls/af_mpls.c:2706
2 locks held by syz-executor/7040:
 #0: ffffffff8fcd0d90 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcdd8c8 (rtnl_mutex){+.+.}-{4:4}, at: cangw_pernet_exit_batch+0x20/0x90 net/can/gw.c:1257
2 locks held by syz-executor/7043:
 #0: ffffffff8fcd0d90 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcdd8c8 (rtnl_mutex){+.+.}-{4:4}, at: fib_net_exit_batch+0x20/0x90 net/ipv4/fib_frontend.c:1638
2 locks held by syz-executor/7052:
 #0: ffffffff8fcd0d90 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcdd8c8 (rtnl_mutex){+.+.}-{4:4}, at: mpls_net_exit+0x7d/0x2a0 net/mpls/af_mpls.c:2706
2 locks held by syz-executor/7057:
 #0: ffffffff8fcd0d90 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcdd8c8 (rtnl_mutex){+.+.}-{4:4}, at: mpls_net_exit+0x7d/0x2a0 net/mpls/af_mpls.c:2706
2 locks held by syz-executor/7087:
 #0: ffffffff8fcd0d90 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490

Crashes (42):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets (help?) Manager Title
2024/11/20 16:12 upstream bf9aa14fc523 4fca1650 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in worker_attach_to_pool
2024/11/14 07:19 upstream f1b785f4c787 a8c99394 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in worker_attach_to_pool
2024/11/11 08:04 upstream a9cda7c0ffed 6b856513 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in worker_attach_to_pool
2024/11/07 00:01 upstream 7758b206117d df3dc63b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/10/23 07:27 upstream c2ee9f594da8 15fa2979 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/10/17 01:05 upstream c964ced77262 666f77ed .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in worker_attach_to_pool
2024/10/13 07:44 upstream 36c254515dc6 084d8178 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/10/08 14:09 upstream 87d6aab2389e 402f1df0 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/10/08 11:22 upstream 87d6aab2389e 402f1df0 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in worker_attach_to_pool
2024/10/03 21:44 upstream 7ec462100ef9 d7906eff .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/10/01 18:07 upstream e32cde8d2bd7 ea2b66a6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/09/27 06:26 upstream 075dbe9f6e3c 9314348a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/09/26 03:21 upstream aa486552a110 0d19f247 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in worker_attach_to_pool
2024/09/24 14:32 upstream abf2050f51fd 5643e0e9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/09/24 13:59 upstream abf2050f51fd 5643e0e9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/09/23 15:44 upstream de5cb0dcb74c 89298aad .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/09/23 15:27 upstream de5cb0dcb74c 89298aad .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/09/21 16:40 upstream 1868f9d0260e 6f888b75 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/09/21 00:23 upstream baeb9a7d8b60 6f888b75 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/08/24 17:40 upstream d2bafcf224f3 d7d32352 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in worker_attach_to_pool
2024/08/21 14:09 upstream b311c1b497e5 db5852f9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/08/13 05:03 upstream d74da846046a 7b0f4b46 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/08/06 07:13 upstream b446a2dae984 e1bdb00a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in worker_attach_to_pool
2024/07/29 04:13 upstream 5437f30d3458 46eb10b7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/07/24 06:36 upstream 28bbe4ea686a 57b2edb1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/07/05 11:59 upstream 661e504db04c 2a40360c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in worker_attach_to_pool
2024/07/03 11:12 upstream e9d22f7a6655 1ecfa2d8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in worker_attach_to_pool
2024/06/18 18:33 upstream 2ccbdf43d5e7 639d6cdf .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in worker_attach_to_pool
2024/06/10 15:19 upstream 83a7eefedc9b 048c640a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/06/09 06:40 upstream 771ed66105de 82c05ab8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in worker_attach_to_pool
2024/05/29 20:37 upstream 4a4be1ad3a6e 34889ee3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in worker_attach_to_pool
2024/05/29 14:14 upstream e0cce98fe279 34889ee3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in worker_attach_to_pool
2024/10/20 16:12 upstream 715ca9dd687f cd6fc0a3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in worker_attach_to_pool
2024/10/14 02:49 upstream ba01565ced22 084d8178 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in worker_attach_to_pool
2024/10/05 16:10 upstream 27cc6fdf7201 d7906eff .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in worker_attach_to_pool
2024/10/05 06:46 upstream 27cc6fdf7201 d7906eff .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in worker_attach_to_pool
2024/07/21 12:43 upstream 2c9b3512402e b88348e9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in worker_attach_to_pool
2024/05/30 03:19 upstream 4a4be1ad3a6e 34889ee3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in worker_attach_to_pool
2024/10/05 17:45 net-next d521db38f339 d7906eff .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in worker_attach_to_pool
2024/10/11 07:02 linux-next 0cca97bf2364 cd942402 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in worker_attach_to_pool
2024/10/04 16:41 linux-next c02d24a5af66 d7906eff .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in worker_attach_to_pool
2024/05/22 02:33 linux-next 124cfbcd6d18 1014eca7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in worker_attach_to_pool