INFO: task kworker/R-wg-cr:12952 blocked for more than 143 seconds.
Not tainted 6.10.0-syzkaller-12888-g5437f30d3458 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:27408 pid:12952 tgid:12952 ppid:2 flags:0x00004000
Call Trace:
context_switch kernel/sched/core.c:5188 [inline]
__schedule+0x1800/0x4a60 kernel/sched/core.c:6529
__schedule_loop kernel/sched/core.c:6606 [inline]
schedule+0x14b/0x320 kernel/sched/core.c:6621
schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6678
__mutex_lock_common kernel/locking/mutex.c:684 [inline]
__mutex_lock+0x6a4/0xd70 kernel/locking/mutex.c:752
worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3470
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-wg-cr:21067 blocked for more than 144 seconds.
Not tainted 6.10.0-syzkaller-12888-g5437f30d3458 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:27024 pid:21067 tgid:21067 ppid:2 flags:0x00004000
Call Trace:
context_switch kernel/sched/core.c:5188 [inline]
__schedule+0x1800/0x4a60 kernel/sched/core.c:6529
__schedule_loop kernel/sched/core.c:6606 [inline]
schedule+0x14b/0x320 kernel/sched/core.c:6621
schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6678
__mutex_lock_common kernel/locking/mutex.c:684 [inline]
__mutex_lock+0x6a4/0xd70 kernel/locking/mutex.c:752
worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3470
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-wg-cr:21069 blocked for more than 145 seconds.
Not tainted 6.10.0-syzkaller-12888-g5437f30d3458 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:27024 pid:21069 tgid:21069 ppid:2 flags:0x00004000
Call Trace:
context_switch kernel/sched/core.c:5188 [inline]
__schedule+0x1800/0x4a60 kernel/sched/core.c:6529
__schedule_loop kernel/sched/core.c:6606 [inline]
schedule+0x14b/0x320 kernel/sched/core.c:6621
schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6678
__mutex_lock_common kernel/locking/mutex.c:684 [inline]
__mutex_lock+0x6a4/0xd70 kernel/locking/mutex.c:752
worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3470
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-wg-cr:22334 blocked for more than 145 seconds.
Not tainted 6.10.0-syzkaller-12888-g5437f30d3458 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:27024 pid:22334 tgid:22334 ppid:2 flags:0x00004000
Call Trace:
context_switch kernel/sched/core.c:5188 [inline]
__schedule+0x1800/0x4a60 kernel/sched/core.c:6529
__schedule_loop kernel/sched/core.c:6606 [inline]
schedule+0x14b/0x320 kernel/sched/core.c:6621
schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6678
__mutex_lock_common kernel/locking/mutex.c:684 [inline]
__mutex_lock+0x6a4/0xd70 kernel/locking/mutex.c:752
worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3470
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-wg-cr:22850 blocked for more than 146 seconds.
Not tainted 6.10.0-syzkaller-12888-g5437f30d3458 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:27024 pid:22850 tgid:22850 ppid:2 flags:0x00004000
Call Trace:
context_switch kernel/sched/core.c:5188 [inline]
__schedule+0x1800/0x4a60 kernel/sched/core.c:6529
__schedule_loop kernel/sched/core.c:6606 [inline]
schedule+0x14b/0x320 kernel/sched/core.c:6621
schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6678
__mutex_lock_common kernel/locking/mutex.c:684 [inline]
__mutex_lock+0x6a4/0xd70 kernel/locking/mutex.c:752
worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3470
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-wg-cr:22852 blocked for more than 147 seconds.
Not tainted 6.10.0-syzkaller-12888-g5437f30d3458 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:27024 pid:22852 tgid:22852 ppid:2 flags:0x00004000
Call Trace:
context_switch kernel/sched/core.c:5188 [inline]
__schedule+0x1800/0x4a60 kernel/sched/core.c:6529
__schedule_loop kernel/sched/core.c:6606 [inline]
schedule+0x14b/0x320 kernel/sched/core.c:6621
schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6678
__mutex_lock_common kernel/locking/mutex.c:684 [inline]
__mutex_lock+0x6a4/0xd70 kernel/locking/mutex.c:752
worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3470
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-wg-cr:22874 blocked for more than 148 seconds.
Not tainted 6.10.0-syzkaller-12888-g5437f30d3458 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:27024 pid:22874 tgid:22874 ppid:2 flags:0x00004000
Call Trace:
context_switch kernel/sched/core.c:5188 [inline]
__schedule+0x1800/0x4a60 kernel/sched/core.c:6529
__schedule_loop kernel/sched/core.c:6606 [inline]
schedule+0x14b/0x320 kernel/sched/core.c:6621
schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6678
__mutex_lock_common kernel/locking/mutex.c:684 [inline]
__mutex_lock+0x6a4/0xd70 kernel/locking/mutex.c:752
worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3470
kthread+0x2f0/0x390 kernel/kthread.c:389
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/1:8:23260 blocked for more than 148 seconds.
Not tainted 6.10.0-syzkaller-12888-g5437f30d3458 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:8 state:D stack:30128 pid:23260 tgid:23260 ppid:2 flags:0x00004000
Call Trace:
context_switch kernel/sched/core.c:5188 [inline]
__schedule+0x1800/0x4a60 kernel/sched/core.c:6529
__schedule_loop kernel/sched/core.c:6606 [inline]
schedule+0x14b/0x320 kernel/sched/core.c:6621
schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6678
kthread+0x23b/0x390 kernel/kthread.c:382
ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
Showing all locks held in the system:
3 locks held by kworker/u8:0/11:
#0: ffff88802a59d148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3206 [inline]
#0: ffff88802a59d148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x90a/0x1830 kernel/workqueue.c:3312
#1: ffffc90000107d00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3207 [inline]
#1: ffffc90000107d00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x945/0x1830 kernel/workqueue.c:3312
#2: ffffffff8fc7fdc8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0x19/0x30 net/ipv6/addrconf.c:4734
4 locks held by kworker/u8:1/12:
#0: ffff8880166e5948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3206 [inline]
#0: ffff8880166e5948 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x90a/0x1830 kernel/workqueue.c:3312
#1: ffffc90000117d00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3207 [inline]
#1: ffffc90000117d00 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x945/0x1830 kernel/workqueue.c:3312
#2: ffffffff8fc73250 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x16a/0xcc0 net/core/net_namespace.c:594
#3: ffffffff8fc7fdc8 (rtnl_mutex){+.+.}-{3:3}, at: wiphy_unregister+0x236/0xb00 net/wireless/core.c:1100
1 lock held by kworker/R-mm_pe/13:
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2730 [inline]
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3525
3 locks held by kworker/1:0/25:
1 lock held by khungtaskd/30:
#0: ffffffff8e937660 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:326 [inline]
#0: ffffffff8e937660 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:838 [inline]
#0: ffffffff8e937660 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6620
3 locks held by kworker/1:1/47:
1 lock held by kworker/R-dm_bu/2373:
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2730 [inline]
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3525
3 locks held by kworker/u8:8/2493:
#0: ffff888015889148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3206 [inline]
#0: ffff888015889148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x90a/0x1830 kernel/workqueue.c:3312
#1: ffffc9000953fd00 ((work_completion)(&pool->idle_cull_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3207 [inline]
#1: ffffc9000953fd00 ((work_completion)(&pool->idle_cull_work)){+.+.}-{0:0}, at: process_scheduled_works+0x945/0x1830 kernel/workqueue.c:3312
#2: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: idle_cull_fn+0xd5/0x760 kernel/workqueue.c:2953
2 locks held by kworker/u8:10/2519:
1 lock held by kworker/R-mld/2733:
4 locks held by udevd/4684:
#0: ffff888055a20668 (&p->lock){+.+.}-{3:3}, at: seq_read_iter+0xb7/0xd60 fs/seq_file.c:182
#1: ffff88807ccfc888 (&of->mutex#2){+.+.}-{3:3}, at: kernfs_seq_start+0x53/0x3b0 fs/kernfs/file.c:154
#2: ffff8880664302d8 (kn->active#5){++++}-{0:0}, at: kernfs_seq_start+0x72/0x3b0 fs/kernfs/file.c:155
#3: ffff888064784190 (&dev->mutex){....}-{3:3}, at: device_lock include/linux/device.h:1009 [inline]
#3: ffff888064784190 (&dev->mutex){....}-{3:3}, at: uevent_show+0x17d/0x340 drivers/base/core.c:2743
2 locks held by dhcpcd/4897:
#0: ffffffff8fc64a28 (vlan_ioctl_mutex){+.+.}-{3:3}, at: sock_ioctl+0x664/0x8e0 net/socket.c:1303
#1: ffffffff8fc7fdc8 (rtnl_mutex){+.+.}-{3:3}, at: vlan_ioctl_handler+0x112/0x9d0 net/8021q/vlan.c:553
2 locks held by getty/4980:
#0: ffff88802b0530a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc9000311b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6b5/0x1e10 drivers/tty/n_tty.c:2211
8 locks held by kworker/1:3/5226:
2 locks held by kworker/1:5/5275:
3 locks held by kworker/0:6/5301:
#0: ffff888015880948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3206 [inline]
#0: ffff888015880948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x90a/0x1830 kernel/workqueue.c:3312
#1: ffffc900041dfd00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3207 [inline]
#1: ffffc900041dfd00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x945/0x1830 kernel/workqueue.c:3312
#2: ffffffff8fc7fdc8 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:276
3 locks held by kworker/0:7/7055:
#0: ffff888015880948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3206 [inline]
#0: ffff888015880948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x90a/0x1830 kernel/workqueue.c:3312
#1: ffffc90003437d00 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3207 [inline]
#1: ffffc90003437d00 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0x945/0x1830 kernel/workqueue.c:3312
#2: ffffffff8fc7fdc8 (rtnl_mutex){+.+.}-{3:3}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
4 locks held by kworker/1:6/11795:
1 lock held by kworker/R-wg-cr/12952:
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
1 lock held by kworker/R-wg-cr/12954:
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2730 [inline]
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3525
1 lock held by kworker/1:4/18280:
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
2 locks held by kworker/1:7/18664:
1 lock held by kworker/R-wg-cr/21067:
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
2 locks held by kworker/R-wg-cr/21068:
1 lock held by kworker/R-wg-cr/21069:
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
1 lock held by kworker/R-wg-cr/22332:
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2730 [inline]
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3525
1 lock held by kworker/R-wg-cr/22333:
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
1 lock held by kworker/R-wg-cr/22334:
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
1 lock held by kworker/R-wg-cr/22847:
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2730 [inline]
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3525
1 lock held by kworker/R-wg-cr/22850:
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
1 lock held by kworker/R-wg-cr/22852:
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
1 lock held by kworker/R-wg-cr/22872:
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
1 lock held by kworker/R-wg-cr/22873:
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2730 [inline]
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3525
1 lock held by kworker/R-wg-cr/22874:
#0: ffffffff8e7e20c8 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2671
2 locks held by syz-executor/23180:
#0: ffffffff8fc73250 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x4c6/0x7b0 net/core/net_namespace.c:504
#1: ffffffff8fc7fdc8 (rtnl_mutex){+.+.}-{3:3}, at: ipmr_net_exit_batch+0x20/0x90 net/ipv4/ipmr.c:3130
2 locks held by syz-executor/23183:
#0: ffffffff8fc73250 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x4c6/0x7b0 net/core/net_namespace.c:504
#1: ffffffff8fc7fdc8 (rtnl_mutex){+.+.}-{3:3}, at: tc_action_net_exit include/net/act_api.h:173 [inline]
#1: ffffffff8fc7fdc8 (rtnl_mutex){+.+.}-{3:3}, at: gate_exit_net+0x30/0x100 net/sched/act_gate.c:659
2 locks held by syz-executor/23186:
#0: ffffffff8fc73250 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x4c6/0x7b0 net/core/net_namespace.c:504
#1: ffffffff8fc7fdc8 (rtnl_mutex){+.+.}-{3:3}, at: fib_net_exit_batch+0x20/0x90 net/ipv4/fib_frontend.c:1638
1 lock held by syz-executor/23198:
#0: ffffffff8fc7fdc8 (rtnl_mutex){+.+.}-{3:3}, at: tun_detach drivers/net/tun.c:698 [inline]
#0: ffffffff8fc7fdc8 (rtnl_mutex){+.+.}-{3:3}, at: tun_chr_close+0x3e/0x1b0 drivers/net/tun.c:3510
1 lock held by syz-executor/23207:
#0: ffffffff8fc7fdc8 (rtnl_mutex){+.+.}-{3:3}, at: tun_detach drivers/net/tun.c:698 [inline]
#0: ffffffff8fc7fdc8 (rtnl_mutex){+.+.}-{3:3}, at: tun_chr_close+0x3e/0x1b0 drivers/net/tun.c:3510