INFO: task kworker/R-mm_pe:13 blocked for more than 143 seconds.
Not tainted 6.11.0-rc2-syzkaller-00004-gb446a2dae984 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-mm_pe state:D stack:27024 pid:13 tgid:13 ppid:2 flags:0x00004000
Call Trace:
context_switch kernel/sched/core.c:5188 [inline]
__schedule+0x1800/0x4a60 kernel/sched/core.c:6529
__schedule_loop kernel/sched/core.c:6606 [inline]
schedule+0x14b/0x320 kernel/sched/core.c:6621
schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6678
__mutex_lock_common kernel/locking/mutex.c:684 [inline]
__mutex_lock+0x6a4/0xd70 kernel/locking/mutex.c:752
worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3470
kthread+0x2f2/0x390 kernel/kthread.c:389
ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-dm_bu:2367 blocked for more than 144 seconds.
Not tainted 6.11.0-rc2-syzkaller-00004-gb446a2dae984 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-dm_bu state:D stack:27408 pid:2367 tgid:2367 ppid:2 flags:0x00004000
Call Trace:
context_switch kernel/sched/core.c:5188 [inline]
__schedule+0x1800/0x4a60 kernel/sched/core.c:6529
__schedule_loop kernel/sched/core.c:6606 [inline]
schedule+0x14b/0x320 kernel/sched/core.c:6621
schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6678
__mutex_lock_common kernel/locking/mutex.c:684 [inline]
__mutex_lock+0x6a4/0xd70 kernel/locking/mutex.c:752
worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3470
kthread+0x2f2/0x390 kernel/kthread.c:389
ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/0:6:5336 blocked for more than 145 seconds.
Not tainted 6.11.0-rc2-syzkaller-00004-gb446a2dae984 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:6 state:D stack:20016 pid:5336 tgid:5336 ppid:2 flags:0x00004000
Workqueue: 0x0 (wg-crypt-wg1)
Call Trace:
context_switch kernel/sched/core.c:5188 [inline]
__schedule+0x1800/0x4a60 kernel/sched/core.c:6529
__schedule_loop kernel/sched/core.c:6606 [inline]
schedule+0x14b/0x320 kernel/sched/core.c:6621
schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6678
__mutex_lock_common kernel/locking/mutex.c:684 [inline]
__mutex_lock+0x6a4/0xd70 kernel/locking/mutex.c:752
worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
create_worker+0x491/0x720 kernel/workqueue.c:2813
maybe_create_worker kernel/workqueue.c:3056 [inline]
manage_workers kernel/workqueue.c:3108 [inline]
worker_thread+0x317/0xd40 kernel/workqueue.c:3365
kthread+0x2f2/0x390 kernel/kthread.c:389
ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-wg-cr:9206 blocked for more than 146 seconds.
Not tainted 6.11.0-rc2-syzkaller-00004-gb446a2dae984 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:27408 pid:9206 tgid:9206 ppid:2 flags:0x00004000
Call Trace:
context_switch kernel/sched/core.c:5188 [inline]
__schedule+0x1800/0x4a60 kernel/sched/core.c:6529
__schedule_loop kernel/sched/core.c:6606 [inline]
schedule+0x14b/0x320 kernel/sched/core.c:6621
schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6678
__mutex_lock_common kernel/locking/mutex.c:684 [inline]
__mutex_lock+0x6a4/0xd70 kernel/locking/mutex.c:752
worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3470
kthread+0x2f2/0x390 kernel/kthread.c:389
ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-wg-cr:9211 blocked for more than 147 seconds.
Not tainted 6.11.0-rc2-syzkaller-00004-gb446a2dae984 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:27408 pid:9211 tgid:9211 ppid:2 flags:0x00004000
Call Trace:
context_switch kernel/sched/core.c:5188 [inline]
__schedule+0x1800/0x4a60 kernel/sched/core.c:6529
__schedule_loop kernel/sched/core.c:6606 [inline]
schedule+0x14b/0x320 kernel/sched/core.c:6621
schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6678
__mutex_lock_common kernel/locking/mutex.c:684 [inline]
__mutex_lock+0x6a4/0xd70 kernel/locking/mutex.c:752
worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3470
kthread+0x2f2/0x390 kernel/kthread.c:389
ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/0:8:13472 blocked for more than 149 seconds.
Not tainted 6.11.0-rc2-syzkaller-00004-gb446a2dae984 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:8 state:D stack:30128 pid:13472 tgid:13472 ppid:2 flags:0x00004000
Call Trace:
context_switch kernel/sched/core.c:5188 [inline]
__schedule+0x1800/0x4a60 kernel/sched/core.c:6529
__schedule_loop kernel/sched/core.c:6606 [inline]
schedule+0x14b/0x320 kernel/sched/core.c:6621
schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6678
kthread+0x23b/0x390 kernel/kthread.c:382
ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
Showing all locks held in the system:
1 lock held by kworker/R-slub_/6:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2730 [inline]
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3525
9 locks held by kworker/0:0/8:
2 locks held by kworker/0:1/9:
1 lock held by kworker/R-mm_pe/13:
1 lock held by khungtaskd/30:
#0: ffffffff8e9382a0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:326 [inline]
#0: ffffffff8e9382a0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:838 [inline]
#0: ffffffff8e9382a0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6620
6 locks held by kworker/0:2/58:
1 lock held by kworker/R-dm_bu/2367:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
3 locks held by kworker/u8:8/2915:
#0: ffff8880b933ea18 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:560
#1: ffffc90009a07d00 ((work_completion)(&(&forw_packet_aggr->delayed_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3207 [inline]
#1: ffffc90009a07d00 ((work_completion)(&(&forw_packet_aggr->delayed_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x945/0x1830 kernel/workqueue.c:3312
#2: ffff8880b932a718 (&base->lock){-.-.}-{2:2}, at: lock_timer_base+0x112/0x240 kernel/time/timer.c:1051
3 locks held by kworker/u8:10/2971:
#0: ffff888015881148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3206 [inline]
#0: ffff888015881148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x90a/0x1830 kernel/workqueue.c:3312
#1: ffffc90009c57d00 ((work_completion)(&pool->idle_cull_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3207 [inline]
#1: ffffc90009c57d00 ((work_completion)(&pool->idle_cull_work)){+.+.}-{0:0}, at: process_scheduled_works+0x945/0x1830 kernel/workqueue.c:3312
#2: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: idle_cull_fn+0xd5/0x760 kernel/workqueue.c:2953
2 locks held by getty/4980:
#0: ffff888022d550a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc900031332f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6ac/0x1e00 drivers/tty/n_tty.c:2211
2 locks held by kworker/0:3/5232:
3 locks held by kworker/1:6/5296:
#0: ffff888015878948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3206 [inline]
#0: ffff888015878948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x90a/0x1830 kernel/workqueue.c:3312
#1: ffffc9000301fd00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3207 [inline]
#1: ffffc9000301fd00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x945/0x1830 kernel/workqueue.c:3312
#2: ffffffff8fc81688 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:276
3 locks held by kworker/0:5/5298:
1 lock held by kworker/0:6/5336:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
2 locks held by kworker/0:7/5339:
1 lock held by kworker/R-wg-cr/9206:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
1 lock held by kworker/R-wg-cr/9210:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
1 lock held by kworker/R-wg-cr/9211:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
1 lock held by kworker/R-wg-cr/10722:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2730 [inline]
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3525
1 lock held by kworker/R-wg-cr/10724:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2730 [inline]
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3525
1 lock held by kworker/R-wg-cr/10730:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2730 [inline]
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3525
2 locks held by kworker/0:4/11200:
3 locks held by kworker/u8:12/11903:
#0: ffff888015881148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3206 [inline]
#0: ffff888015881148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x90a/0x1830 kernel/workqueue.c:3312
#1: ffffc900044c7d00 ((work_completion)(&pool->idle_cull_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3207 [inline]
#1: ffffc900044c7d00 ((work_completion)(&pool->idle_cull_work)){+.+.}-{0:0}, at: process_scheduled_works+0x945/0x1830 kernel/workqueue.c:3312
#2: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: idle_cull_fn+0xd5/0x760 kernel/workqueue.c:2953
3 locks held by kworker/u8:15/11908:
#0: ffff88802a8cd148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3206 [inline]
#0: ffff88802a8cd148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x90a/0x1830 kernel/workqueue.c:3312
#1: ffffc900035bfd00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3207 [inline]
#1: ffffc900035bfd00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x945/0x1830 kernel/workqueue.c:3312
#2: ffffffff8fc81688 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_dad_work+0xd0/0x16f0 net/ipv6/addrconf.c:4194
4 locks held by kworker/u8:18/11914:
#0: ffff8880166e5948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3206 [inline]
#0: ffff8880166e5948 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x90a/0x1830 kernel/workqueue.c:3312
#1: ffffc90004c2fd00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3207 [inline]
#1: ffffc90004c2fd00 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x945/0x1830 kernel/workqueue.c:3312
#2: ffffffff8fc74b10 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x16a/0xcc0 net/core/net_namespace.c:594
#3: ffff88805abbd428 (&wg->device_update_lock){+.+.}-{3:3}, at: wg_destruct+0x110/0x2e0 drivers/net/wireguard/device.c:249
1 lock held by kworker/R-wg-cr/12456:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2730 [inline]
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3525
1 lock held by kworker/R-wg-cr/12460:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2730 [inline]
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3525
1 lock held by kworker/R-wg-cr/12461:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2730 [inline]
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3525
1 lock held by kworker/R-wg-cr/12490:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2730 [inline]
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3525
1 lock held by kworker/R-wg-cr/12491:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2730 [inline]
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3525
1 lock held by kworker/R-wg-cr/12495:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2730 [inline]
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3525
1 lock held by kworker/R-wg-cr/12500:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3318 [inline]
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xfbf/0x10a0 kernel/workqueue.c:3534
3 locks held by kworker/1:3/12671:
#0: ffff888015878948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3206 [inline]
#0: ffff888015878948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x90a/0x1830 kernel/workqueue.c:3312
#1: ffffc9000cea7d00 (drain_vmap_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3207 [inline]
#1: ffffc9000cea7d00 (drain_vmap_work){+.+.}-{0:0}, at: process_scheduled_works+0x945/0x1830 kernel/workqueue.c:3312
#2: ffffffff8ea2d5e8 (vmap_purge_lock){+.+.}-{3:3}, at: drain_vmap_area_work+0x17/0x40 mm/vmalloc.c:2323
1 lock held by kworker/R-bond0/13481:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3318 [inline]
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3442
1 lock held by kworker/R-wg-cr/13483:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3318 [inline]
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3442
1 lock held by kworker/R-wg-cr/13484:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3318 [inline]
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3442
1 lock held by kworker/R-wg-cr/13485:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3318 [inline]
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3442
1 lock held by kworker/R-bond0/13492:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3318 [inline]
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3442
1 lock held by kworker/R-wg-cr/13494:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3318 [inline]
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3442
1 lock held by kworker/R-wg-cr/13495:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3318 [inline]
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3442
1 lock held by kworker/R-wg-cr/13496:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3318 [inline]
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3442
4 locks held by syz-executor/13498:
#0: ffff88802fd8c420 (sb_writers#8){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:2876 [inline]
#0: ffff88802fd8c420 (sb_writers#8){.+.+}-{0:0}, at: vfs_write+0x227/0xc90 fs/read_write.c:586
#1: ffff88806dbce488 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x1eb/0x500 fs/kernfs/file.c:325
#2: ffff8880232850f8 (kn->active#51){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x20f/0x500 fs/kernfs/file.c:326
#3: ffffffff8f51e4c8 (nsim_bus_dev_list_lock){+.+.}-{3:3}, at: new_device_store+0x1b4/0x890 drivers/net/netdevsim/bus.c:166
1 lock held by kworker/R-bond0/13501:
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3318 [inline]
#0: ffffffff8e7e2b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3442
1 lock held by kworker/R-wg-cr/13504: