INFO: task kworker/1:1:46 blocked for more than 143 seconds.
      Not tainted 6.12.0-rc6-syzkaller-00099-g7758b206117d #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:1 state:D stack:19856 pid:46 tgid:46 ppid:2 flags:0x00004000
Workqueue: 0x0 (mld)
Call Trace:
 context_switch kernel/sched/core.c:5328 [inline]
 __schedule+0x184f/0x4c30 kernel/sched/core.c:6690
 __schedule_loop kernel/sched/core.c:6767 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6782
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6839
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 create_worker+0x491/0x720 kernel/workqueue.c:2811
 maybe_create_worker kernel/workqueue.c:3054 [inline]
 manage_workers kernel/workqueue.c:3106 [inline]
 worker_thread+0x318/0xd30 kernel/workqueue.c:3366
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-wg-cr:28482 blocked for more than 144 seconds.
      Not tainted 6.12.0-rc6-syzkaller-00099-g7758b206117d #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:28928 pid:28482 tgid:28482 ppid:2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:5328 [inline]
 __schedule+0x184f/0x4c30 kernel/sched/core.c:6690
 __schedule_loop kernel/sched/core.c:6767 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6782
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6839
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3471
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-wg-cr:2254 blocked for more than 145 seconds.
      Not tainted 6.12.0-rc6-syzkaller-00099-g7758b206117d #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:29184 pid:2254 tgid:2254 ppid:2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:5328 [inline]
 __schedule+0x184f/0x4c30 kernel/sched/core.c:6690
 __schedule_loop kernel/sched/core.c:6767 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6782
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6839
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3471
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-wg-cr:2284 blocked for more than 145 seconds.
      Not tainted 6.12.0-rc6-syzkaller-00099-g7758b206117d #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:28928 pid:2284 tgid:2284 ppid:2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:5328 [inline]
 __schedule+0x184f/0x4c30 kernel/sched/core.c:6690
 __schedule_loop kernel/sched/core.c:6767 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6782
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6839
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3471
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-wg-cr:5722 blocked for more than 146 seconds.
      Not tainted 6.12.0-rc6-syzkaller-00099-g7758b206117d #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:28256 pid:5722 tgid:5722 ppid:2 flags:0x00004000
Workqueue: 0x0 (wg-crypt-wg1)
Call Trace:
 context_switch kernel/sched/core.c:5328 [inline]
 __schedule+0x184f/0x4c30 kernel/sched/core.c:6690
 __schedule_loop kernel/sched/core.c:6767 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6782
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6839
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3471
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-wg-cr:6265 blocked for more than 147 seconds.
      Not tainted 6.12.0-rc6-syzkaller-00099-g7758b206117d #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:29296 pid:6265 tgid:6265 ppid:2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:5328 [inline]
 __schedule+0x184f/0x4c30 kernel/sched/core.c:6690
 __schedule_loop kernel/sched/core.c:6767 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6782
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6839
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
 rescuer_thread+0x3ed/0x10a0 kernel/workqueue.c:3471
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/R-wg-cr:6391 blocked for more than 148 seconds.
      Not tainted 6.12.0-rc6-syzkaller-00099-g7758b206117d #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/R-wg-cr state:D stack:28088 pid:6391 tgid:6391 ppid:2 flags:0x00004000
Workqueue: 0x0 (wg-crypt-wg2)
Call Trace:
 context_switch kernel/sched/core.c:5328 [inline]
 __schedule+0x184f/0x4c30 kernel/sched/core.c:6690
 __schedule_loop kernel/sched/core.c:6767 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6782
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6839
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a7/0xd70 kernel/locking/mutex.c:752
 worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
INFO: task kworker/1:5:6901 blocked for more than 149 seconds.
      Not tainted 6.12.0-rc6-syzkaller-00099-g7758b206117d #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:5 state:D stack:30096 pid:6901 tgid:6901 ppid:2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:5328 [inline]
 __schedule+0x184f/0x4c30 kernel/sched/core.c:6690
 __schedule_loop kernel/sched/core.c:6767 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6782
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6839
 kthread+0x23b/0x390 kernel/kthread.c:382
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

Showing all locks held in the system:
1 lock held by kworker/R-mm_pe/13:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by khungtaskd/30:
 #0: ffffffff8e937da0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8e937da0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8e937da0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6720
1 lock held by kworker/1:1/46:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/u9:0/54:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
4 locks held by kworker/u8:8/1139:
 #0: ffff88801baed948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801baed948 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc90003d77d00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc90003d77d00 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8fcc6dd0 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x16a/0xcc0 net/core/net_namespace.c:580
 #3: ffffffff8e93d200 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x4c/0x530 kernel/rcu/tree.c:4562
1 lock held by kworker/R-krxrp/3373:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
3 locks held by kworker/u8:9/3469:
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc9000c47fd00 ((work_completion)(&pool->idle_cull_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc9000c47fd00 ((work_completion)(&pool->idle_cull_work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: idle_cull_fn+0xd5/0x760 kernel/workqueue.c:2951
2 locks held by getty/5594:
 #0: ffff88814cc760a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002f062f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6a6/0x1e00 drivers/tty/n_tty.c:2211
3 locks held by kworker/1:6/5894:
1 lock held by kworker/R-wg-cr/25650:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/25653:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/25661:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/28424:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/28425:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/28426:
1 lock held by kworker/R-wg-cr/28482:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/28483:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/28486:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/28493:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/28494:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/28498:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/31506:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/31507:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/31511:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/419:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/420:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/2254:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/2284:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/2285:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/5712:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/5722:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/5723:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/6253:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/6256:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/6265:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/6375:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/6389:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/6391:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/6425:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_detach_from_pool kernel/workqueue.c:2727 [inline]
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xaf5/0x10a0 kernel/workqueue.c:3526
1 lock held by kworker/R-wg-cr/6432:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/6434:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/6435:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/6436:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/6446:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/6447:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by kworker/R-wg-cr/6448:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x31/0x390 kernel/workqueue.c:2669
1 lock held by syz.5.5913/6534:
 #0: ffffffff8e93d200 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x4c/0x530 kernel/rcu/tree.c:4562
3 locks held by kworker/u8:7/6891:
2 locks held by syz-executor/6907:
 #0: ffffffff8fcc6dd0 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcd3908 (rtnl_mutex){+.+.}-{3:3}, at: ppp_exit_net+0xe3/0x3d0 drivers/net/ppp/ppp_generic.c:1146
2 locks held by syz-executor/6915:
 #0: ffffffff8fcc6dd0 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcd3908 (rtnl_mutex){+.+.}-{3:3}, at: default_device_exit_batch+0xe9/0xaa0 net/core/dev.c:11938
2 locks held by syz-executor/6919:
 #0: ffffffff8fcc6dd0 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x328/0x570 net/core/net_namespace.c:490
 #1: ffffffff8fcd3908 (rtnl_mutex){+.+.}-{3:3}, at: ppp_exit_net+0xe3/0x3d0 drivers/net/ppp/ppp_generic.c:1146
1 lock held by kworker/R-bond0/6938:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
1 lock held by kworker/R-wg-cr/6945:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
1 lock held by kworker/R-wg-cr/6946:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
1 lock held by kworker/R-wg-cr/6947:
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:3316 [inline]
 #0: ffffffff8e7e2368 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd0/0x10a0 kernel/workqueue.c:3443
7 locks held by syz-executor/6967:
 #0: ffff88807c81e420 (sb_writers#8){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:2931 [inline]
 #0: ffff88807c81e420 (sb_writers#8){.+.+}-{0:0}, at: vfs_write+0x225/0xd30 fs/read_write.c:679
 #1: ffff888067991488 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x1ea/0x500 fs/kernfs/file.c:325
 #2: ffff888027edc2d8 (kn->active#49){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x20e/0x500 fs/kernfs/file.c:326