syzbot


INFO: task hung in rtnl_lock

Status: auto-closed as invalid on 2020/04/01 22:32
Reported-by: syzbot+4f5fae4d36ab8c13a4dd@syzkaller.appspotmail.com
First crash: 1653d, last: 1615d
Similar bugs (9)
Kernel     | Title                                            | Repro | Count | Last  | Reported | Patched | Status
linux-4.19 | INFO: task hung in rtnl_lock                     |       | 41    | 1551d | 1652d    | 0/1     | auto-closed as invalid on 2020/06/04 06:12
upstream   | INFO: task hung in rtnl_lock (2) net             |       | 1     | 798d  | 798d     | 0/26    | auto-closed as invalid on 2022/05/28 17:21
linux-5.15 | INFO: task hung in rtnl_lock (2)                 |       | 1     | 117d  | 117d     | 0/3     | auto-obsoleted due to no activity on 2024/04/18 21:58
linux-4.14 | INFO: task hung in rtnl_lock                     |       | 12    | 1542d | 1653d    | 0/1     | auto-closed as invalid on 2020/06/13 20:56
linux-6.1  | INFO: task hung in rtnl_lock                     |       | 7     | 273d  | 396d     | 0/3     | auto-obsoleted due to no activity on 2023/11/14 10:24
linux-6.1  | INFO: task hung in rtnl_lock (2) origin:lts-only | C     | 9     | 17d   | 146d     | 0/3     | upstream: reported C repro on 2023/12/11 07:58
upstream   | INFO: task hung in rtnl_lock (3) net             | C     | 128   | 45d   | 397d     | 26/26   | fixed on 2024/03/26 17:39
upstream   | INFO: task hung in rtnl_lock net                 |       | 118   | 1453d | 1399d    | 0/26    | auto-closed as invalid on 2020/09/10 12:52
linux-5.15 | INFO: task hung in rtnl_lock                     |       | 4     | 270d  | 395d     | 0/3     | auto-obsoleted due to no activity on 2023/11/17 15:29

Sample crash report:
ip6_tunnel: l0 xmit: Local address not yet configured!
ip6_tunnel: l0 xmit: Local address not yet configured!
ip6_tunnel: l0 xmit: Local address not yet configured!
ip6_tunnel: l0 xmit: Local address not yet configured!
INFO: task syz-executor.5:1697 blocked for more than 140 seconds.
      Not tainted 4.9.141+ #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor.5  D29312  1697   2099 0x00000004
 ffff88017bfcaf80 ffff8801cfded800 ffff88019b7fd800 ffff8801d8424740
 ffff8801db621018 ffff880116c57a38 ffffffff828075c2 0000000000000000
 ffff88017bfcb830 ffffed002f7f9705 00ff88017bfcaf80 ffff8801db6218f0
Call Trace:
 [<ffffffff82808aef>] schedule+0x7f/0x1b0 kernel/sched/core.c:3553
 [<ffffffff828094a3>] schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:3586
 [<ffffffff8280b51d>] __mutex_lock_common kernel/locking/mutex.c:582 [inline]
 [<ffffffff8280b51d>] mutex_lock_nested+0x38d/0x900 kernel/locking/mutex.c:621
 [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
 [<ffffffff82369c4a>] dev_ioctl+0x81a/0xd60 net/core/dev_ioctl.c:406
 [<ffffffff8229c039>] sock_do_ioctl+0x99/0xb0 net/socket.c:912
 [<ffffffff8229cabd>] sock_ioctl+0x32d/0x3c0 net/socket.c:991
 [<ffffffff81546dec>] vfs_ioctl fs/ioctl.c:43 [inline]
 [<ffffffff81546dec>] file_ioctl fs/ioctl.c:493 [inline]
 [<ffffffff81546dec>] do_vfs_ioctl+0x1ac/0x11a0 fs/ioctl.c:677
 [<ffffffff81547e6f>] SYSC_ioctl fs/ioctl.c:694 [inline]
 [<ffffffff81547e6f>] SyS_ioctl+0x8f/0xc0 fs/ioctl.c:685
 [<ffffffff810056ef>] do_syscall_64+0x19f/0x550 arch/x86/entry/common.c:285
 [<ffffffff82817893>] entry_SYSCALL_64_after_swapgs+0x5d/0xdb

Showing all locks held in the system:
2 locks held by khungtaskd/24:
 #0:  (rcu_read_lock){......}, at: [<ffffffff8131c0cc>] check_hung_uninterruptible_tasks kernel/hung_task.c:168 [inline]
 #0:  (rcu_read_lock){......}, at: [<ffffffff8131c0cc>] watchdog+0x11c/0xa20 kernel/hung_task.c:239
 #1:  (tasklist_lock){.+.+..}, at: [<ffffffff813fe63f>] debug_show_all_locks+0x79/0x218 kernel/locking/lockdep.c:4336
1 lock held by rsyslogd/1897:
 #0:  (&f->f_pos_lock){+.+.+.}, at: [<ffffffff8156cc7c>] __fdget_pos+0xac/0xd0 fs/file.c:781
2 locks held by getty/2024:
 #0:  (&tty->ldisc_sem){++++++}, at: [<ffffffff82815952>] ldsem_down_read+0x32/0x40 drivers/tty/tty_ldsem.c:367
 #1:  (&ldata->atomic_read_lock){+.+.+.}, at: [<ffffffff81d37362>] n_tty_read+0x202/0x16e0 drivers/tty/n_tty.c:2142
1 lock held by syz-executor.2/2573:
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
1 lock held by syz-executor.2/2619:
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
1 lock held by syz-executor.0/14657:
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
1 lock held by syz-executor.2/20726:
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
1 lock held by syz-executor.1/21998:
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
1 lock held by syz-executor.1/22590:
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
1 lock held by syz-executor.1/22620:
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
1 lock held by syz-executor.1/24804:
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
2 locks held by kworker/1:0/3654:
 #0:  ("events"){.+.+.+}, at: [<ffffffff81130f0c>] process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
 #1:  ((&rew.rew_work)){+.+...}, at: [<ffffffff81130f44>] process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
1 lock held by syz-executor.0/24916:
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
3 locks held by kworker/0:2/31107:
 #0:  ("%s"("ipv6_addrconf")){.+.+..}, at: [<ffffffff81130f0c>] process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
 #1:  ((addr_chk_work).work){+.+...}, at: [<ffffffff81130f44>] process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
 #2:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
1 lock held by syz-executor.0/8841:
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
4 locks held by kworker/u4:10/31465:
 #0:  ("%s""netns"){.+.+.+}, at: [<ffffffff81130f0c>] process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
 #1:  (net_cleanup_work){+.+.+.}, at: [<ffffffff81130f44>] process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
 #2:  (net_mutex){+.+.+.}, at: [<ffffffff822e681f>] cleanup_net+0x13f/0x8b0 net/core/net_namespace.c:439
 #3:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
1 lock held by syz-executor.0/1683:
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
1 lock held by syz-executor.2/1668:
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
2 locks held by syz-executor.5/1693:
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff8234abeb>] rtnl_lock net/core/rtnetlink.c:70 [inline]
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff8234abeb>] rtnetlink_rcv+0x1b/0x40 net/core/rtnetlink.c:4083
 #1:  (rcu_preempt_state.exp_mutex){+.+...}, at: [<ffffffff8124a749>] exp_funnel_lock kernel/rcu/tree_exp.h:256 [inline]
 #1:  (rcu_preempt_state.exp_mutex){+.+...}, at: [<ffffffff8124a749>] _synchronize_rcu_expedited+0x339/0x840 kernel/rcu/tree_exp.h:569
1 lock held by syz-executor.5/1697:
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
1 lock held by syz-executor.5/1699:
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff8234abeb>] rtnl_lock net/core/rtnetlink.c:70 [inline]
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff8234abeb>] rtnetlink_rcv+0x1b/0x40 net/core/rtnetlink.c:4083
1 lock held by syz-executor.1/1689:
 #0:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70
3 locks held by kworker/1:6/1710:
 #0:  ("%s"("ipv6_addrconf")){.+.+..}, at: [<ffffffff81130f0c>] process_one_work+0x73c/0x15f0 kernel/workqueue.c:2085
 #1:  ((&(&ifa->dad_work)->work)){+.+...}, at: [<ffffffff81130f44>] process_one_work+0x774/0x15f0 kernel/workqueue.c:2089
 #2:  (rtnl_mutex){+.+.+.}, at: [<ffffffff823412d7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:70

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 24 Comm: khungtaskd Not tainted 4.9.141+ #1
 ffff8801d9907d08 ffffffff81b42e79 0000000000000000 0000000000000001
 0000000000000001 0000000000000001 ffffffff810983b0 ffff8801d9907d40
 ffffffff81b4df89 0000000000000001 0000000000000000 0000000000000002
Call Trace:
 [<ffffffff81b42e79>] __dump_stack lib/dump_stack.c:15 [inline]
 [<ffffffff81b42e79>] dump_stack+0xc1/0x128 lib/dump_stack.c:51
 [<ffffffff81b4df89>] nmi_cpu_backtrace.cold.0+0x48/0x87 lib/nmi_backtrace.c:99
 [<ffffffff81b4df1c>] nmi_trigger_cpumask_backtrace+0x12c/0x151 lib/nmi_backtrace.c:60
 [<ffffffff810984b4>] arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:37
 [<ffffffff8131c65d>] trigger_all_cpu_backtrace include/linux/nmi.h:58 [inline]
 [<ffffffff8131c65d>] check_hung_task kernel/hung_task.c:125 [inline]
 [<ffffffff8131c65d>] check_hung_uninterruptible_tasks kernel/hung_task.c:182 [inline]
 [<ffffffff8131c65d>] watchdog+0x6ad/0xa20 kernel/hung_task.c:239
 [<ffffffff81142c3d>] kthread+0x26d/0x300 kernel/kthread.c:211
 [<ffffffff82817a5c>] ret_from_fork+0x5c/0x70 arch/x86/entry/entry_64.S:373
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0 skipped: idling at pc 0xffffffff82816496

Crashes (7):
Time             | Kernel                                                      | Commit       | Syzkaller | Config  | Log         | Report | Manager
2019/12/03 22:31 | https://android.googlesource.com/kernel/common android-4.9  | 8fe428403e30 | ae13a849  | .config | console log | report | ci-android-49-kasan-gce
2019/11/06 01:27 | https://android.googlesource.com/kernel/common android-4.9  | 7fe05eede1c8 | 0f3ec414  | .config | console log | report | ci-android-49-kasan-gce-root
2019/11/04 17:19 | https://android.googlesource.com/kernel/common android-4.9  | 7fe05eede1c8 | 18e12644  | .config | console log | report | ci-android-49-kasan-gce-root
2019/10/31 16:54 | https://android.googlesource.com/kernel/common android-4.9  | 7fe05eede1c8 | a41ca8fa  | .config | console log | report | ci-android-49-kasan-gce-root
2019/10/30 16:16 | https://android.googlesource.com/kernel/common android-4.9  | 8fe428403e30 | 5ea87a66  | .config | console log | report | ci-android-49-kasan-gce
2019/10/26 04:16 | https://android.googlesource.com/kernel/common android-4.9  | 7fe05eede1c8 | c2e837da  | .config | console log | report | ci-android-49-kasan-gce-root
2019/11/11 06:39 | https://android.googlesource.com/kernel/common android-4.9  | 8fe428403e30 | dc438b91  | .config | console log | report | ci-android-49-kasan-gce-386