syzbot


INFO: task can't die in vlan_ioctl_handler

Status: upstream: reported on 2022/05/30 07:14
Subsystems: net
Reported-by: syzbot+6db61674290152a463a0@syzkaller.appspotmail.com
First crash: 735d, last: 31d
Discussions (1)
Title: [syzbot] INFO: task can't die in vlan_ioctl_handler
Replies (including bot): 0 (1)
Last reply: 2022/05/30 07:14
Similar bugs (1)
Kernel: linux-5.15
Title: INFO: task hung in vlan_ioctl_handler
Count: 2 | Last: 52d | Reported: 92d | Patched: 0/3
Status: upstream: reported on 2023/08/31 10:59

Sample crash report:
INFO: task syz-executor.1:28061 can't die for more than 143 seconds.
task:syz-executor.1  state:D stack:27480 pid:28061 ppid: 26621 flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:4983 [inline]
 __schedule+0xab2/0x4d90 kernel/sched/core.c:6293
 schedule+0xd2/0x260 kernel/sched/core.c:6366
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6425
 __mutex_lock_common kernel/locking/mutex.c:680 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
 vlan_ioctl_handler+0xb7/0xec0 net/8021q/vlan.c:557
 sock_ioctl+0x1d8/0x640 net/socket.c:1199
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:874 [inline]
 __se_sys_ioctl fs/ioctl.c:860 [inline]
 __x64_sys_ioctl+0x193/0x200 fs/ioctl.c:860
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7f6130d89ae9
RSP: 002b:00007f612e2ff188 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f6130e9cf60 RCX: 00007f6130d89ae9
RDX: 0000000020000000 RSI: 0000000000008982 RDI: 0000000000000005
RBP: 00007f6130de3f6d R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fff3c576def R14: 00007f612e2ff300 R15: 0000000000022000
 </TASK>
INFO: task syz-executor.1:28061 blocked for more than 143 seconds.
      Not tainted 5.16.0-rc2-next-20211125-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.1  state:D stack:27480 pid:28061 ppid: 26621 flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:4983 [inline]
 __schedule+0xab2/0x4d90 kernel/sched/core.c:6293
 schedule+0xd2/0x260 kernel/sched/core.c:6366
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6425
 __mutex_lock_common kernel/locking/mutex.c:680 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
 vlan_ioctl_handler+0xb7/0xec0 net/8021q/vlan.c:557
 sock_ioctl+0x1d8/0x640 net/socket.c:1199
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:874 [inline]
 __se_sys_ioctl fs/ioctl.c:860 [inline]
 __x64_sys_ioctl+0x193/0x200 fs/ioctl.c:860
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7f6130d89ae9
RSP: 002b:00007f612e2ff188 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f6130e9cf60 RCX: 00007f6130d89ae9
RDX: 0000000020000000 RSI: 0000000000008982 RDI: 0000000000000005
RBP: 00007f6130de3f6d R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fff3c576def R14: 00007f612e2ff300 R15: 0000000000022000
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/27:
 #0: ffffffff8bb83220 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6458
3 locks held by kworker/1:3/2985:
 #0: ffff88814a039d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff88814a039d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff88814a039d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
 #0: ffff88814a039d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
 #0: ffff88814a039d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
 #0: ffff88814a039d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x896/0x1690 kernel/workqueue.c:2270
 #1: ffffc90001acfdb0 ((addr_chk_work).work){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1690 kernel/workqueue.c:2274
 #2: ffffffff8d30cce8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0xa/0x20 net/ipv6/addrconf.c:4595
1 lock held by in:imklog/6221:
 #0: ffff88807e445270 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xe9/0x100 fs/file.c:990
2 locks held by agetty/6296:
 #0: ffff888023ca4098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x22/0x80 drivers/tty/tty_ldisc.c:252
 #1: ffffc9000274c2e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0xcf0/0x1230 drivers/tty/n_tty.c:2113
2 locks held by agetty/6327:
 #0: ffff88807b627098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x22/0x80 drivers/tty/tty_ldisc.c:252
 #1: ffffc90001f682e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0xcf0/0x1230 drivers/tty/n_tty.c:2113
3 locks held by kworker/1:2/1415:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x896/0x1690 kernel/workqueue.c:2270
 #1: ffffc9000b15fdb0 (deferred_process_work){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1690 kernel/workqueue.c:2274
 #2: ffffffff8d30cce8 (rtnl_mutex){+.+.}-{3:3}, at: switchdev_deferred_process_work+0xa/0x20 net/switchdev/switchdev.c:74
5 locks held by kworker/u4:1/22852:
 #0: ffff8880119f3138 ((wq_completion)netns){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff8880119f3138 ((wq_completion)netns){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff8880119f3138 ((wq_completion)netns){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
 #0: ffff8880119f3138 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
 #0: ffff8880119f3138 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
 #0: ffff8880119f3138 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x896/0x1690 kernel/workqueue.c:2270
 #1: ffffc9000e7f7db0 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1690 kernel/workqueue.c:2274
 #2: ffffffff8d2f8850 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x9b/0xb00 net/core/net_namespace.c:555
 #3: ffffffff8d30cce8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock_unregistering net/core/dev.c:10879 [inline]
 #3: ffffffff8d30cce8 (rtnl_mutex){+.+.}-{3:3}, at: default_device_exit_batch+0xe8/0x3c0 net/core/dev.c:10917
 #4: ffffffff8bb8cb30 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x44/0x440 kernel/rcu/tree.c:4026
3 locks held by kworker/1:4/23005:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x896/0x1690 kernel/workqueue.c:2270
 #1: ffffc9000ecf7db0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1690 kernel/workqueue.c:2274
 #2: ffffffff8d30cce8 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xb/0x60 net/core/link_watch.c:251
1 lock held by syz-executor.1/27113:
 #0: ffffffff8d30cce8 (rtnl_mutex){+.+.}-{3:3}, at: tun_detach drivers/net/tun.c:684 [inline]
 #0: ffffffff8d30cce8 (rtnl_mutex){+.+.}-{3:3}, at: tun_chr_close+0x3a/0x180 drivers/net/tun.c:3402
2 locks held by syz-executor.1/28061:
 #0: ffffffff8d2e9a28 (vlan_ioctl_mutex){+.+.}-{3:3}, at: sock_ioctl+0x1bf/0x640 net/socket.c:1197
 #1: ffffffff8d30cce8 (rtnl_mutex){+.+.}-{3:3}, at: vlan_ioctl_handler+0xb7/0xec0 net/8021q/vlan.c:557
3 locks held by kworker/1:5/28145:
 #0: ffff888010c65d38 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c65d38 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c65d38 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
 #0: ffff888010c65d38 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
 #0: ffff888010c65d38 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
 #0: ffff888010c65d38 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work+0x896/0x1690 kernel/workqueue.c:2270
 #1: ffffc9000b83fdb0 ((reg_check_chans).work){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1690 kernel/workqueue.c:2274
 #2: ffffffff8d30cce8 (rtnl_mutex){+.+.}-{3:3}, at: reg_check_chans_work+0x83/0xe10 net/wireless/reg.c:2423

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 27 Comm: khungtaskd Not tainted 5.16.0-rc2-next-20211125-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
 nmi_cpu_backtrace.cold+0x47/0x144 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1b3/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:146 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:256 [inline]
 watchdog+0xcb7/0xed0 kernel/hung_task.c:413
 kthread+0x405/0x4f0 kernel/kthread.c:345
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 6221 Comm: in:imklog Not tainted 5.16.0-rc2-next-20211125-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:mark_usage kernel/locking/lockdep.c:4485 [inline]
RIP: 0010:__lock_acquire+0x77a/0x54a0 kernel/locking/lockdep.c:4981
Code: 00 00 44 8b 54 24 08 45 85 d2 0f 84 37 01 00 00 49 8d 7c 24 21 48 b8 00 00 00 00 00 fc ff df 48 89 fa 48 c1 ea 03 0f b6 04 02 <48> 89 fa 83 e2 07 38 d0 7f 08 84 c0 0f 85 19 39 00 00 49 8d 6c 24
RSP: 0018:ffffc9000b59f770 EFLAGS: 00000806
RAX: 0000000000000000 RBX: 0000000000000552 RCX: ffffffff815c786d
RDX: 1ffff1100eecf156 RSI: 0000000000000008 RDI: ffff888077678ab1
RBP: 0000000000000003 R08: 0000000000000000 R09: ffffffff8ff819ef
R10: 0000000000000001 R11: 0000000000000000 R12: ffff888077678a90
R13: ffff888077678000 R14: ffff888077678a68 R15: dffffc0000000000
FS:  00007fb231d4f700(0000) GS:ffff8880b9d00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f93356ac000 CR3: 0000000070edc000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 lock_acquire kernel/locking/lockdep.c:5637 [inline]
 lock_acquire+0x1ab/0x510 kernel/locking/lockdep.c:5602
 __mutex_lock_common kernel/locking/mutex.c:607 [inline]
 __mutex_lock+0x12f/0x12f0 kernel/locking/mutex.c:740
 syslog_print+0x39a/0x580 kernel/printk/printk.c:1557
 do_syslog.part.0+0x202/0x640 kernel/printk/printk.c:1658
 do_syslog+0x49/0x60 kernel/printk/printk.c:1643
 kmsg_read+0x90/0xb0 fs/proc/kmsg.c:40
 pde_read fs/proc/inode.c:311 [inline]
 proc_reg_read+0x119/0x300 fs/proc/inode.c:321
 vfs_read+0x1b5/0x600 fs/read_write.c:479
 ksys_read+0x12d/0x250 fs/read_write.c:619
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7fb23439222d
Code: c1 20 00 00 75 10 b8 00 00 00 00 0f 05 48 3d 01 f0 ff ff 73 31 c3 48 83 ec 08 e8 4e fc ff ff 48 89 04 24 b8 00 00 00 00 0f 05 <48> 8b 3c 24 48 89 c2 e8 97 fc ff ff 48 89 d0 48 83 c4 08 48 3d 01
RSP: 002b:00007fb231d2e580 EFLAGS: 00000293 ORIG_RAX: 0000000000000000
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fb23439222d
RDX: 0000000000001fa0 RSI: 00007fb231d2eda0 RDI: 0000000000000004
RBP: 000055ab9ea8d9d0 R08: 0000000000000000 R09: 0000000000000000
R10: 2ce33e6c02ce33e7 R11: 0000000000000293 R12: 00007fb231d2eda0
R13: 0000000000001fa0 R14: 0000000000001f9f R15: 00007fb231d2ee1e
 </TASK>
----------------
Code disassembly (best guess):
   0:	00 00                	add    %al,(%rax)
   2:	44 8b 54 24 08       	mov    0x8(%rsp),%r10d
   7:	45 85 d2             	test   %r10d,%r10d
   a:	0f 84 37 01 00 00    	je     0x147
  10:	49 8d 7c 24 21       	lea    0x21(%r12),%rdi
  15:	48 b8 00 00 00 00 00 	movabs $0xdffffc0000000000,%rax
  1c:	fc ff df
  1f:	48 89 fa             	mov    %rdi,%rdx
  22:	48 c1 ea 03          	shr    $0x3,%rdx
  26:	0f b6 04 02          	movzbl (%rdx,%rax,1),%eax
* 2a:	48 89 fa             	mov    %rdi,%rdx <-- trapping instruction
  2d:	83 e2 07             	and    $0x7,%edx
  30:	38 d0                	cmp    %dl,%al
  32:	7f 08                	jg     0x3c
  34:	84 c0                	test   %al,%al
  36:	0f 85 19 39 00 00    	jne    0x3955
  3c:	49                   	rex.WB
  3d:	8d                   	.byte 0x8d
  3e:	6c                   	insb   (%dx),%es:(%rdi)
  3f:	24                   	.byte 0x24

Crashes (33):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2021/11/26 12:13 linux-next f81e94e91878 63eeac02 .config console log report info ci-upstream-linux-next-kasan-gce-root INFO: task can't die in vlan_ioctl_handler
2023/10/31 22:13 upstream 5a6a09e97199 58499c95 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in vlan_ioctl_handler
2023/08/28 11:19 upstream 2dde18cd1d8f 03d9c195 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in vlan_ioctl_handler
2023/04/10 13:24 upstream 09a9639e56c0 71147e29 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in vlan_ioctl_handler
2023/01/22 20:17 upstream 2241ab53cbb5 559a440a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in vlan_ioctl_handler
2022/09/30 23:16 upstream 5a77386984b5 feb56351 .config console log report info [disk image] [vmlinux] ci-upstream-kasan-gce-root INFO: task hung in vlan_ioctl_handler
2023/05/23 09:10 upstream 421ca22e3138 4bce1a3e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in vlan_ioctl_handler
2022/01/16 02:38 upstream a33f5c380c4b 723cfaf0 .config console log report info ci-upstream-kasan-gce-386 INFO: task hung in vlan_ioctl_handler
2023/09/16 00:39 net 615efed8b63f 0b6a67ac .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in vlan_ioctl_handler
2023/08/19 08:56 net d44036cad311 d216d8a0 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in vlan_ioctl_handler
2023/07/06 19:31 net ceb20a3cc526 1a2f6297 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in vlan_ioctl_handler
2023/07/01 15:17 net 3674fbf0451d af3053d2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in vlan_ioctl_handler
2023/06/24 00:59 net 6f68fc395f49 09ffe269 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in vlan_ioctl_handler
2023/06/20 05:55 net 0dbcac3a6dbb d521bc56 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in vlan_ioctl_handler
2023/05/26 10:38 net ad42a35bdfc6 b40ef614 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in vlan_ioctl_handler
2023/03/03 19:41 net-old 528125268588 f8902b57 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in vlan_ioctl_handler
2022/12/13 10:20 net-old e095493091e8 67be1ae7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in vlan_ioctl_handler
2022/11/26 04:26 net-old 31d929de5a11 74a66371 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in vlan_ioctl_handler
2022/08/26 22:02 net-old 2e085ec0e2d7 e5a303f1 .config console log report info ci-upstream-net-this-kasan-gce INFO: task hung in vlan_ioctl_handler
2022/06/10 02:58 net-old 647df0d41b6b 0d5abf15 .config console log report info ci-upstream-net-this-kasan-gce INFO: task hung in vlan_ioctl_handler
2022/05/30 07:13 net-old 90343f573252 a46af346 .config console log report info ci-upstream-net-this-kasan-gce INFO: task hung in vlan_ioctl_handler
2022/04/14 21:01 net-old 00fa91bc9cc2 b17b2923 .config console log report info ci-upstream-net-this-kasan-gce INFO: task hung in vlan_ioctl_handler
2022/03/01 05:16 net-old caef14b7530c 45a13a73 .config console log report info ci-upstream-net-this-kasan-gce INFO: task hung in vlan_ioctl_handler
2023/10/20 21:13 net-next 7ce6936045ba a42250d2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in vlan_ioctl_handler
2023/10/19 08:54 net-next c4eee56e14fe 342b9c55 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in vlan_ioctl_handler
2023/08/06 17:51 net-next b1d13f7a3b53 4ffcc9ef .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in vlan_ioctl_handler
2023/07/19 13:32 net-next 3223eeaf0545 022df2bb .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in vlan_ioctl_handler
2023/07/17 14:31 net-next 89e970ea7fba e5f10889 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in vlan_ioctl_handler
2023/07/12 17:39 net-next e0f0a5db5f8c 979d5fe2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in vlan_ioctl_handler
2023/06/30 20:04 net-next ae230642190a 01298212 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in vlan_ioctl_handler
2022/10/06 15:17 net-next-old 0326074ff465 131b38ac .config console log report info [disk image] [vmlinux] ci-upstream-net-kasan-gce INFO: task hung in vlan_ioctl_handler
2022/08/13 09:19 net-next-old 7ebfc85e2cd7 8dfcaa3d .config console log report info ci-upstream-net-kasan-gce INFO: task hung in vlan_ioctl_handler
2022/06/27 23:16 net-next-old c83bc86a0596 ef82eb2c .config console log report info ci-upstream-net-kasan-gce INFO: task hung in vlan_ioctl_handler
* Struck through repros no longer work on HEAD.
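
The held-locks dump pins the stall to a two-level lock ordering. Paraphrased as pseudocode from the lines in the report (lock sites net/socket.c:1197 and net/8021q/vlan.c:557; this is not the verbatim kernel source):

```c
/* Pseudocode paraphrase of the locking visible in the report. */
long sock_ioctl(/* SIOCGIFVLAN / SIOCSIFVLAN */)
{
        mutex_lock(&vlan_ioctl_mutex);   /* lock #0 held by pid 28061 */
        vlan_ioctl_handler(net, argp);
        mutex_unlock(&vlan_ioctl_mutex);
}

int vlan_ioctl_handler(struct net *net, void __user *arg)
{
        /* lock #1: pid 28061 is stuck here for 143+ seconds while
         * other tasks (netns cleanup, linkwatch, addrconf, tun close,
         * reg_check_chans, ...) hold or queue for the same global
         * rtnl_mutex. */
        rtnl_lock();
        /* ... per-command VLAN handling ... */
        rtnl_unlock();
}
```

So the report is not a classic deadlock: the ioctl task is a victim waiting on rtnl_mutex, which the dump shows being held by long-running work such as cleanup_net() (pid 22852, inside rcu_barrier() under rtnl_mutex).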