syzbot


INFO: task hung in rtnl_lock (2)

Status: auto-closed as invalid on 2022/05/28 17:21
Reported-by: syzbot+@syzkaller.appspotmail.com
First crash: 278d, last: 278d
similar bugs (4):
Kernel     | Title                        | Repro | Cause bisect | Fix bisect | Count | Last  | Reported | Patched | Status
linux-4.19 | INFO: task hung in rtnl_lock |       |              |            |    41 | 1032d |    1132d |     0/1 | auto-closed as invalid on 2020/06/04 06:12
linux-4.14 | INFO: task hung in rtnl_lock |       |              |            |    12 | 1022d |    1133d |     0/1 | auto-closed as invalid on 2020/06/13 20:56
android-49 | INFO: task hung in rtnl_lock |       |              |            |     7 | 1095d |    1134d |     0/3 | auto-closed as invalid on 2020/04/01 22:32
upstream   | INFO: task hung in rtnl_lock |       |              |            |   118 |  933d |     879d |    0/24 | auto-closed as invalid on 2020/09/10 12:52

Sample crash report:
INFO: task kworker/0:2:895 blocked for more than 430 seconds.
      Not tainted 5.15.0-rc1-syzkaller-00001-g64a19591a293 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:2     state:D stack:    0 pid:  895 ppid:     2 flags:0x00000000
Workqueue: ipv6_addrconf addrconf_verify_work
Call Trace:
[<ffffffff82bdaf8a>] context_switch kernel/sched/core.c:4940 [inline]
[<ffffffff82bdaf8a>] __schedule+0x506/0x1048 kernel/sched/core.c:6287
[<ffffffff82bdbb32>] schedule+0x66/0x168 kernel/sched/core.c:6366
[<ffffffff82bdc2fa>] schedule_preempt_disabled+0x16/0x28 kernel/sched/core.c:6425
[<ffffffff82bddefe>] __mutex_lock_common kernel/locking/mutex.c:669 [inline]
[<ffffffff82bddefe>] __mutex_lock+0x310/0xa60 kernel/locking/mutex.c:729
[<ffffffff82bde662>] mutex_lock_nested+0x14/0x1c kernel/locking/mutex.c:743
[<ffffffff8223fbd8>] rtnl_lock+0x22/0x2a net/core/rtnetlink.c:72
[<ffffffff827c8f0e>] addrconf_verify_work+0x18/0x2c net/ipv6/addrconf.c:4590
[<ffffffff80064612>] process_one_work+0x5e4/0xf5c kernel/workqueue.c:2297
[<ffffffff800652e0>] worker_thread+0x356/0x8e6 kernel/workqueue.c:2444
[<ffffffff800770a8>] kthread+0x25c/0x2c6 kernel/kthread.c:319
[<ffffffff800051aa>] ret_from_exception+0x0/0x14

Showing all locks held in the system:
1 lock held by khungtaskd/27:
 #0: ffffffff83d2b3e8 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x32/0x1fa kernel/locking/lockdep.c:6448
3 locks held by kworker/0:2/895:
 #0: ffffffe00d4a1d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:633 [inline]
 #0: ffffffe00d4a1d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:661 [inline]
 #0: ffffffe00d4a1d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x4b6/0xf5c kernel/workqueue.c:2268
 #1: ffffffe00bb43d40 ((addr_chk_work).work){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:633 [inline]
 #1: ffffffe00bb43d40 ((addr_chk_work).work){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:661 [inline]
 #1: ffffffe00bb43d40 ((addr_chk_work).work){+.+.}-{0:0}, at: process_one_work+0x4b6/0xf5c kernel/workqueue.c:2268
 #2: ffffffff83e8c178 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x22/0x2a net/core/rtnetlink.c:72
1 lock held by klogd/1789:
1 lock held by dhcpcd/1833:
 #0: ffffffff83e8c178 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
 #0: ffffffff83e8c178 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x294/0x90e net/core/rtnetlink.c:5569
2 locks held by getty/1953:
 #0: ffffffe00da8d098 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x3c/0x48 drivers/tty/tty_ldsem.c:340
 #1: ffffffd0107f52e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x9a0/0xafa drivers/tty/n_tty.c:2113
3 locks held by kworker/0:3/2338:
 #0: ffffffe005618d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:633 [inline]
 #0: ffffffe005618d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:661 [inline]
 #0: ffffffe005618d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x4b6/0xf5c kernel/workqueue.c:2268
 #1: ffffffe0216d3d40 ((linkwatch_work).work){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:633 [inline]
 #1: ffffffe0216d3d40 ((linkwatch_work).work){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:661 [inline]
 #1: ffffffe0216d3d40 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x4b6/0xf5c kernel/workqueue.c:2268
 #2: ffffffff83e8c178 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x22/0x2a net/core/rtnetlink.c:72
2 locks held by syz-executor.0/3292:
 #0: ffffffff83e8c178 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
 #0: ffffffff83e8c178 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x294/0x90e net/core/rtnetlink.c:5569
 #1: ffffffff83d2be68 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:290 [inline]
 #1: ffffffff83d2be68 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x33a/0x3f4 kernel/rcu/tree_exp.h:837
2 locks held by kworker/1:0/3314:

=============================================


Crashes (1):
Manager          | Time             | Kernel                                                               | Commit       | Syzkaller | Config  | Log | Report | Syz repro | C repro | VM info | Title
ci-qemu2-riscv64 | 2022/02/27 17:20 | git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git fixes  | 64a19591a293 | 45a13a73  | .config | log | report |           |         | info    | INFO: task hung in rtnl_lock