syzbot


INFO: task hung in devinet_ioctl (2)

Status: closed as invalid on 2022/02/07 19:19
Subsystems: net
First crash: 937d, last: 808d
Similar bugs (7)
Kernel     | Title                                    | Repro | Cause bisect | Fix bisect | Count | Last  | Reported | Patched | Status
linux-6.1  | INFO: task hung in devinet_ioctl         | -     | -            | -          | 1     | 380d  | 380d     | 0/3     | auto-obsoleted due to no activity on 2023/08/07 17:54
linux-6.1  | INFO: task hung in devinet_ioctl (2)     | -     | -            | -          | 1     | 209d  | 209d     | 0/3     | auto-obsoleted due to no activity on 2024/01/05 10:55
upstream   | INFO: task hung in devinet_ioctl net     | -     | -            | -          | 1     | 2012d | 2012d    | 0/26    | auto-closed as invalid on 2019/04/18 15:55
linux-5.15 | INFO: task hung in devinet_ioctl (2)     | -     | -            | -          | 3     | 65d   | 78d      | 0/3     | upstream: reported on 2024/02/05 21:12
upstream   | INFO: task hung in devinet_ioctl (3) net | -     | -            | -          | 825   | 43d   | 693d     | 0/26    | closed as invalid on 2024/03/11 20:24
upstream   | INFO: task hung in devinet_ioctl (4) net | -     | -            | -          | 5     | 42d   | 42d      | 26/26   | fixed on 2024/04/12 18:02
linux-5.15 | INFO: task hung in devinet_ioctl         | -     | -            | -          | 1     | 289d  | 289d     | 0/3     | auto-obsoleted due to no activity on 2023/10/17 09:29

Sample crash report:
INFO: task dhcpcd:3181 blocked for more than 143 seconds.
      Not tainted 5.17.0-rc1-syzkaller-00186-g23a46422c561 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:dhcpcd          state:D stack:23144 pid: 3181 ppid:  3180 flags:0x00000000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4db0 kernel/sched/core.c:6295
 schedule+0xd2/0x260 kernel/sched/core.c:6368
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6427
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 devinet_ioctl+0x1b3/0x1ca0 net/ipv4/devinet.c:1068
 inet_ioctl+0x1e6/0x320 net/ipv4/af_inet.c:969
 sock_do_ioctl+0xcc/0x230 net/socket.c:1122
 sock_ioctl+0x2f1/0x640 net/socket.c:1239
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:874 [inline]
 __se_sys_ioctl fs/ioctl.c:860 [inline]
 __x64_sys_ioctl+0x193/0x200 fs/ioctl.c:860
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7f56967730e7
RSP: 002b:00007ffcce081fc8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f56966856c8 RCX: 00007f56967730e7
RDX: 00007ffcce0921b8 RSI: 0000000000008914 RDI: 0000000000000029
RBP: 00007ffcce0a2368 R08: 00007ffcce092178 R09: 00007ffcce092128
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffcce0921b8 R14: 0000000000000028 R15: 0000000000008914
 </TASK>
INFO: task kworker/0:16:21974 blocked for more than 143 seconds.
      Not tainted 5.17.0-rc1-syzkaller-00186-g23a46422c561 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:16    state:D stack:27992 pid:21974 ppid:     2 flags:0x00004000
Workqueue: events linkwatch_event
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4db0 kernel/sched/core.c:6295
 schedule+0xd2/0x260 kernel/sched/core.c:6368
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6427
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 linkwatch_event+0xb/0x60 net/core/link_watch.c:262
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
 </TASK>
INFO: task kworker/1:5:30693 blocked for more than 143 seconds.
      Not tainted 5.17.0-rc1-syzkaller-00186-g23a46422c561 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:5     state:D stack:28120 pid:30693 ppid:     2 flags:0x00004000
Workqueue: events switchdev_deferred_process_work
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4db0 kernel/sched/core.c:6295
 schedule+0xd2/0x260 kernel/sched/core.c:6368
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6427
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 switchdev_deferred_process_work+0xa/0x20 net/switchdev/switchdev.c:75
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
 </TASK>
INFO: task syz-executor.5:6589 blocked for more than 143 seconds.
      Not tainted 5.17.0-rc1-syzkaller-00186-g23a46422c561 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.5  state:D stack:28112 pid: 6589 ppid:     1 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4db0 kernel/sched/core.c:6295
 schedule+0xd2/0x260 kernel/sched/core.c:6368
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6427
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 smc_pnet_create_pnetids_list net/smc/smc_pnet.c:800 [inline]
 smc_pnet_net_init+0x1f9/0x410 net/smc/smc_pnet.c:869
 ops_init+0xaf/0x470 net/core/net_namespace.c:140
 setup_net+0x554/0xbb0 net/core/net_namespace.c:330
 copy_net_ns+0x318/0x760 net/core/net_namespace.c:474
 create_new_namespaces+0x3f6/0xb20 kernel/nsproxy.c:110
 unshare_nsproxy_namespaces+0xc1/0x1f0 kernel/nsproxy.c:226
 ksys_unshare+0x445/0x920 kernel/fork.c:3048
 __do_sys_unshare kernel/fork.c:3119 [inline]
 __se_sys_unshare kernel/fork.c:3117 [inline]
 __x64_sys_unshare+0x2d/0x40 kernel/fork.c:3117
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7f0f8545e617
RSP: 002b:00007f0f85aa3fa8 EFLAGS: 00000202 ORIG_RAX: 0000000000000110
RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f0f8545e617
RDX: 00007f0f854c87df RSI: 00007f0f85aa3f40 RDI: 0000000040000000
RBP: 0000000000000000 R08: 0000000000000000 R09: 00007f0f85aa3d50
R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000006
R13: 00007ffd72556df0 R14: 00007f0f855704d8 R15: 0000000000000006
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/26:
 #0: ffffffff8bb83ae0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6460
1 lock held by klogd/2959:
 #0: ffff8880b9d39c58 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2b/0x120 kernel/sched/core.c:489
1 lock held by dhcpcd/3181:
 #0: ffffffff8d33c868 (rtnl_mutex){+.+.}-{3:3}, at: devinet_ioctl+0x1b3/0x1ca0 net/ipv4/devinet.c:1068
2 locks held by getty/3282:
 #0: ffff88802412d098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x22/0x80 drivers/tty/tty_ldisc.c:244
 #1: ffffc90002b632e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0xcf0/0x1230 drivers/tty/n_tty.c:2077
3 locks held by kworker/1:14/28618:
 #0: ffff88802302a938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff88802302a938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff88802302a938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff88802302a938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff88802302a938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff88802302a938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc90003cc7db8 ((addr_chk_work).work){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d33c868 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0xa/0x20 net/ipv6/addrconf.c:4608
3 locks held by kworker/u4:5/5794:
 #0: ffff8880b9c39c58 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2b/0x120 kernel/sched/core.c:489
 #1: ffff8880b9c27948 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x176/0x4e0 kernel/sched/psi.c:882
 #2: ffffffff8bb83ae0 (rcu_read_lock){....}-{1:2}, at: __debug_check_no_obj_freed lib/debugobjects.c:980 [inline]
 #2: ffffffff8bb83ae0 (rcu_read_lock){....}-{1:2}, at: debug_check_no_obj_freed+0xc7/0x420 lib/debugobjects.c:1023
5 locks held by kworker/u4:10/5819:
 #0: ffff888144581938 ((wq_completion)netns){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888144581938 ((wq_completion)netns){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888144581938 ((wq_completion)netns){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888144581938 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888144581938 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888144581938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc900105d7db8 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d327d50 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x9b/0xb00 net/core/net_namespace.c:559
 #3: ffffffff8d33c868 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock_unregistering net/core/dev.c:10898 [inline]
 #3: ffffffff8d33c868 (rtnl_mutex){+.+.}-{3:3}, at: default_device_exit_batch+0xe8/0x3c0 net/core/dev.c:10936
 #4: ffffffff8bb8d6e8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:290 [inline]
 #4: ffffffff8bb8d6e8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x4fa/0x620 kernel/rcu/tree_exp.h:840
2 locks held by kworker/0:12/21966:
 #0: ffff888010c66538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c66538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c66538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c66538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888010c66538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888010c66538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc90007f6fdb8 ((work_completion)(&rew.rew_work)){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
3 locks held by kworker/0:16/21974:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc9000aac7db8 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d33c868 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xb/0x60 net/core/link_watch.c:262
3 locks held by kworker/1:5/30693:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc90004d57db8 (deferred_process_work){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d33c868 (rtnl_mutex){+.+.}-{3:3}, at: switchdev_deferred_process_work+0xa/0x20 net/switchdev/switchdev.c:75
2 locks held by syz-executor.5/6589:
 #0: ffffffff8d327d50 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x2f5/0x760 net/core/net_namespace.c:470
 #1: ffffffff8d33c868 (rtnl_mutex){+.+.}-{3:3}, at: smc_pnet_create_pnetids_list net/smc/smc_pnet.c:800 [inline]
 #1: ffffffff8d33c868 (rtnl_mutex){+.+.}-{3:3}, at: smc_pnet_net_init+0x1f9/0x410 net/smc/smc_pnet.c:869

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 26 Comm: khungtaskd Not tainted 5.17.0-rc1-syzkaller-00186-g23a46422c561 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
 nmi_cpu_backtrace.cold+0x47/0x144 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1b3/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:146 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:212 [inline]
 watchdog+0xc1d/0xf50 kernel/hung_task.c:369
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1 skipped: idling at native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
NMI backtrace for cpu 1 skipped: idling at arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
NMI backtrace for cpu 1 skipped: idling at acpi_safe_halt drivers/acpi/processor_idle.c:110 [inline]
NMI backtrace for cpu 1 skipped: idling at acpi_idle_do_entry+0x1c6/0x250 drivers/acpi/processor_idle.c:551
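
Reading the report: every blocked task (dhcpcd, the two kworkers, and syz-executor.5) is sleeping in __mutex_lock() on the same lock, rtnl_mutex. Lockdep lists a mutex as held from the start of the acquisition attempt, so the blocked waiters also appear in the lock dump above; the one task that has actually taken rtnl_mutex and moved past it is kworker/u4:10/5819, which holds it across default_device_exit_batch() during network-namespace cleanup and is itself waiting on rcu_state.exp_mutex in synchronize_rcu_expedited(). That pattern suggests the hang is not in devinet_ioctl() itself but in rtnl_mutex being held over a slow expedited RCU grace period.

For reference, the user-side call that trips the hung-task detector here is an ordinary interface ioctl. Below is a minimal illustrative sketch, not the syzkaller reproducer, written assuming the saved RSI/R15 value 0x8914 in the dhcpcd register dump decodes to SIOCSIFFLAGS, one of the commands inet_ioctl() routes to devinet_ioctl():

/* hung_ioctl.c: illustrative sketch only, not the syzkaller reproducer.
 * The saved registers in the dhcpcd trace (RSI/R15 = 0x8914) decode to
 * SIOCSIFFLAGS, which inet_ioctl() hands to devinet_ioctl(); that path
 * takes rtnl_mutex, where the tasks in this report sleep.
 * The set side needs CAP_NET_ADMIN. */
#include <net/if.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	struct ifreq ifr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);	/* any AF_INET socket works */

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, "lo", IFNAMSIZ - 1);	/* interface name is arbitrary */

	if (ioctl(fd, SIOCGIFFLAGS, &ifr) < 0) {	/* read current flags */
		perror("SIOCGIFFLAGS");
		return 1;
	}
	if (ioctl(fd, SIOCSIFFLAGS, &ifr) < 0)		/* 0x8914: the call seen hanging */
		perror("SIOCSIFFLAGS");

	close(fd);
	return 0;
}

Such a call is well-formed and normally returns almost immediately; in the report above it simply never acquires the lock, so after 143 seconds the hung-task watchdog fires.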

Crashes (27):
Time             | Kernel       | Commit       | Syzkaller | Config  | Log         | Report | Syz repro | C repro | VM info | Assets | Manager                              | Title
2022/01/28 13:04 | upstream     | 23a46422c561 | 495e00c5  | .config | console log | report | -         | -       | info    | -      | ci-upstream-kasan-gce                | INFO: task hung in devinet_ioctl
2022/01/13 04:47 | upstream     | f079ab01b560 | 44d1319a  | .config | console log | report | -         | -       | info    | -      | ci-upstream-kasan-gce-smack-root     | INFO: task hung in devinet_ioctl
2022/01/12 05:16 | upstream     | 6f38be8f2ccd | 44d1319a  | .config | console log | report | -         | -       | info    | -      | ci-upstream-kasan-gce                | INFO: task hung in devinet_ioctl
2021/09/29 06:40 | upstream     | a4e6f95a891a | d82cb927  | .config | console log | report | -         | -       | info    | -      | ci-upstream-kasan-gce-selinux-root   | INFO: task hung in devinet_ioctl
2022/01/06 06:22 | upstream     | 49ef78e59b07 | 6acc789a  | .config | console log | report | -         | -       | info    | -      | ci-upstream-kasan-gce-386            | INFO: task hung in devinet_ioctl
2022/01/26 14:35 | net-old      | 429c3be8a5e2 | 2cbffd88  | .config | console log | report | -         | -       | info    | -      | ci-upstream-net-this-kasan-gce       | INFO: task hung in devinet_ioctl
2022/01/19 16:48 | net-old      | 99845220d3c3 | 0620189b  | .config | console log | report | -         | -       | info    | -      | ci-upstream-net-this-kasan-gce       | INFO: task hung in devinet_ioctl
2022/01/19 07:00 | net-old      | 2836615aa22d | 731a2d23  | .config | console log | report | -         | -       | info    | -      | ci-upstream-net-this-kasan-gce       | INFO: task hung in devinet_ioctl
2022/01/18 09:27 | net-old      | 5765cee119bf | 731a2d23  | .config | console log | report | -         | -       | info    | -      | ci-upstream-net-this-kasan-gce       | INFO: task hung in devinet_ioctl
2022/01/14 14:17 | net-old      | fb80445c438c | b8d780ab  | .config | console log | report | -         | -       | info    | -      | ci-upstream-net-this-kasan-gce       | INFO: task hung in devinet_ioctl
2022/01/10 16:52 | net-old      | dd3ca4c5184e | 2ca0d385  | .config | console log | report | -         | -       | info    | -      | ci-upstream-net-this-kasan-gce       | INFO: task hung in devinet_ioctl
2022/01/04 20:42 | net-old      | 7d18a07897d0 | 0a2584dd  | .config | console log | report | -         | -       | info    | -      | ci-upstream-net-this-kasan-gce       | INFO: task hung in devinet_ioctl
2022/01/04 11:39 | net-old      | 065e1ae02fbe | 7f723fbe  | .config | console log | report | -         | -       | info    | -      | ci-upstream-net-this-kasan-gce       | INFO: task hung in devinet_ioctl
2022/01/03 12:59 | net-old      | 29262e1f773b | e1768e9c  | .config | console log | report | -         | -       | info    | -      | ci-upstream-net-this-kasan-gce       | INFO: task hung in devinet_ioctl
2021/12/17 14:41 | net-old      | 6441998e2e37 | 44068e19  | .config | console log | report | -         | -       | info    | -      | ci-upstream-net-this-kasan-gce       | INFO: task hung in devinet_ioctl
2021/12/16 22:10 | net-old      | ef8a0f6eab1c | 8dd6a5e3  | .config | console log | report | -         | -       | info    | -      | ci-upstream-net-this-kasan-gce       | INFO: task hung in devinet_ioctl
2021/12/15 10:44 | net-old      | 3dd7d40b4366 | f752fb53  | .config | console log | report | -         | -       | info    | -      | ci-upstream-net-this-kasan-gce       | INFO: task hung in devinet_ioctl
2021/12/15 06:29 | net-old      | 3dd7d40b4366 | f752fb53  | .config | console log | report | -         | -       | info    | -      | ci-upstream-net-this-kasan-gce       | INFO: task hung in devinet_ioctl
2022/02/05 20:26 | net-next-old | ed8c8f605c0b | a7dab638  | .config | console log | report | -         | -       | info    | -      | ci-upstream-net-kasan-gce            | INFO: task hung in devinet_ioctl
2022/02/01 13:42 | net-next-old | 9a90986efcff | c1c1631d  | .config | console log | report | -         | -       | info    | -      | ci-upstream-net-kasan-gce            | INFO: task hung in devinet_ioctl
2022/01/26 14:26 | net-next-old | ab14f1802cfb | 2cbffd88  | .config | console log | report | -         | -       | info    | -      | ci-upstream-net-kasan-gce            | INFO: task hung in devinet_ioctl
2022/01/24 22:21 | net-next-old | de8a820df2ac | 2cbffd88  | .config | console log | report | -         | -       | info    | -      | ci-upstream-net-kasan-gce            | INFO: task hung in devinet_ioctl
2022/01/11 06:53 | net-next-old | fe8152b38d3a | ddb0ab8c  | .config | console log | report | -         | -       | info    | -      | ci-upstream-net-kasan-gce            | INFO: task hung in devinet_ioctl
2022/01/10 08:54 | net-next-old | 8aaaf2f3af2a | 2ca0d385  | .config | console log | report | -         | -       | info    | -      | ci-upstream-net-kasan-gce            | INFO: task hung in devinet_ioctl
2021/12/11 23:50 | net-next-old | 77ab714f0070 | 49ca1f59  | .config | console log | report | -         | -       | info    | -      | ci-upstream-net-kasan-gce            | INFO: task hung in devinet_ioctl
2021/12/10 06:35 | net-next-old | 3150a73366b6 | 4d4ce9bc  | .config | console log | report | -         | -       | info    | -      | ci-upstream-net-kasan-gce            | INFO: task hung in devinet_ioctl
2022/01/15 11:24 | linux-next   | bd8d9cef2a79 | 723cfaf0  | .config | console log | report | -         | -       | info    | -      | ci-upstream-linux-next-kasan-gce-root | INFO: task hung in devinet_ioctl