INFO: task kworker/0:2:895 blocked for more than 430 seconds.
Not tainted 5.15.0-rc1-syzkaller-00001-g64a19591a293 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:2 state:D stack: 0 pid: 895 ppid: 2 flags:0x00000000
Workqueue: ipv6_addrconf addrconf_verify_work
Call Trace:
[<ffffffff82bdaf8a>] context_switch kernel/sched/core.c:4940 [inline]
[<ffffffff82bdaf8a>] __schedule+0x506/0x1048 kernel/sched/core.c:6287
[<ffffffff82bdbb32>] schedule+0x66/0x168 kernel/sched/core.c:6366
[<ffffffff82bdc2fa>] schedule_preempt_disabled+0x16/0x28 kernel/sched/core.c:6425
[<ffffffff82bddefe>] __mutex_lock_common kernel/locking/mutex.c:669 [inline]
[<ffffffff82bddefe>] __mutex_lock+0x310/0xa60 kernel/locking/mutex.c:729
[<ffffffff82bde662>] mutex_lock_nested+0x14/0x1c kernel/locking/mutex.c:743
[<ffffffff8223fbd8>] rtnl_lock+0x22/0x2a net/core/rtnetlink.c:72
[<ffffffff827c8f0e>] addrconf_verify_work+0x18/0x2c net/ipv6/addrconf.c:4590
[<ffffffff80064612>] process_one_work+0x5e4/0xf5c kernel/workqueue.c:2297
[<ffffffff800652e0>] worker_thread+0x356/0x8e6 kernel/workqueue.c:2444
[<ffffffff800770a8>] kthread+0x25c/0x2c6 kernel/kthread.c:319
[<ffffffff800051aa>] ret_from_exception+0x0/0x14
Showing all locks held in the system:
1 lock held by khungtaskd/27:
#0: ffffffff83d2b3e8 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x32/0x1fa kernel/locking/lockdep.c:6448
3 locks held by kworker/0:2/895:
#0: ffffffe00d4a1d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:633 [inline]
#0: ffffffe00d4a1d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:661 [inline]
#0: ffffffe00d4a1d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x4b6/0xf5c kernel/workqueue.c:2268
#1: ffffffe00bb43d40 ((addr_chk_work).work){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:633 [inline]
#1: ffffffe00bb43d40 ((addr_chk_work).work){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:661 [inline]
#1: ffffffe00bb43d40 ((addr_chk_work).work){+.+.}-{0:0}, at: process_one_work+0x4b6/0xf5c kernel/workqueue.c:2268
#2: ffffffff83e8c178 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x22/0x2a net/core/rtnetlink.c:72
1 lock held by klogd/1789:
1 lock held by dhcpcd/1833:
#0: ffffffff83e8c178 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
#0: ffffffff83e8c178 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x294/0x90e net/core/rtnetlink.c:5569
2 locks held by getty/1953:
#0: ffffffe00da8d098 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x3c/0x48 drivers/tty/tty_ldsem.c:340
#1: ffffffd0107f52e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x9a0/0xafa drivers/tty/n_tty.c:2113
3 locks held by kworker/0:3/2338:
#0: ffffffe005618d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:633 [inline]
#0: ffffffe005618d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:661 [inline]
#0: ffffffe005618d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x4b6/0xf5c kernel/workqueue.c:2268
#1: ffffffe0216d3d40 ((linkwatch_work).work){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:633 [inline]
#1: ffffffe0216d3d40 ((linkwatch_work).work){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:661 [inline]
#1: ffffffe0216d3d40 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x4b6/0xf5c kernel/workqueue.c:2268
#2: ffffffff83e8c178 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x22/0x2a net/core/rtnetlink.c:72
2 locks held by syz-executor.0/3292:
#0: ffffffff83e8c178 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
#0: ffffffff83e8c178 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x294/0x90e net/core/rtnetlink.c:5569
#1: ffffffff83d2be68 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:290 [inline]
#1: ffffffff83d2be68 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x33a/0x3f4 kernel/rcu/tree_exp.h:837
2 locks held by kworker/1:0/3314:
=============================================
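
Reading the lock dump above: addrconf_verify_work (kworker/0:2/895) is blocked in rtnl_lock() because syz-executor.0/3292 already holds rtnl_mutex and is itself waiting inside synchronize_rcu_expedited(); every other rtnl_lock() caller (dhcpcd, the linkwatch worker) queues up behind it. The sketch below is only a userspace analogy of that blocking pattern, not the kernel call path: a hypothetical "fake_rtnl" mutex stands in for rtnl_mutex, a sleep() stands in for the expedited grace period, and the names are made up for illustration.

/*
 * Userspace analogy (assumption: plain pthreads, not kernel code) of the
 * dependency shown in the report: one thread holds a mutex across a long
 * synchronous wait, so a second thread that only needs the lock briefly
 * stalls in lock() (state D in the hung-task report).
 *
 * Build: gcc -pthread pattern.c -o pattern
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t fake_rtnl = PTHREAD_MUTEX_INITIALIZER;  /* stands in for rtnl_mutex */

/* Stand-in for syz-executor.0: takes the lock, then waits for something slow. */
static void *holder(void *arg)
{
	pthread_mutex_lock(&fake_rtnl);
	printf("holder: took lock, waiting (simulated expedited grace period)\n");
	sleep(5);                        /* stand-in for synchronize_rcu_expedited() */
	pthread_mutex_unlock(&fake_rtnl);
	return NULL;
}

/* Stand-in for addrconf_verify_work: only needs the lock for a short critical section. */
static void *worker(void *arg)
{
	printf("worker: waiting for lock\n");
	pthread_mutex_lock(&fake_rtnl);  /* blocks until holder releases */
	printf("worker: got lock\n");
	pthread_mutex_unlock(&fake_rtnl);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, holder, NULL);
	sleep(1);                        /* let holder acquire the lock first */
	pthread_create(&b, NULL, worker, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

In the analogy the worker eventually gets the lock once the sleep ends; in the report the hung-task watchdog fires because the rtnl_mutex owner stays parked in the expedited-RCU wait for longer than hung_task_timeout_secs.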