syzbot


INFO: task hung in __vhost_worker_flush

Status: upstream: reported on 2024/05/29 22:08
Subsystems: net virt
Reported-by: syzbot+7f3bbe59e8dd2328a990@syzkaller.appspotmail.com
First crash: 158d, last: 1d22h
Discussions (2)
Title                                                                   Replies (including bot)  Last reply
[syzbot] [kvm?] [net?] [virt?] INFO: task hung in __vhost_worker_flush  4 (6)                    2024/08/19 15:19
[syzbot] Monthly kvm report (Jul 2024)                                  0 (1)                    2024/07/22 21:16

Sample crash report:
INFO: task syz.2.953:8715 blocked for more than 143 seconds.
      Not tainted 6.12.0-rc2-syzkaller-00320-gba01565ced22 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.953       state:D stack:25920 pid:8715  tgid:8715  ppid:7729   flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5322 [inline]
 __schedule+0x1895/0x4b30 kernel/sched/core.c:6682
 __schedule_loop kernel/sched/core.c:6759 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6774
 schedule_timeout+0xb0/0x310 kernel/time/timer.c:2591
 do_wait_for_common kernel/sched/completion.c:95 [inline]
 __wait_for_common kernel/sched/completion.c:116 [inline]
 wait_for_common kernel/sched/completion.c:127 [inline]
 wait_for_completion+0x355/0x620 kernel/sched/completion.c:148
 __vhost_worker_flush+0x1e6/0x280 drivers/vhost/vhost.c:288
 vhost_worker_flush drivers/vhost/vhost.c:295 [inline]
 vhost_dev_flush+0xc9/0x150 drivers/vhost/vhost.c:305
 vhost_vsock_flush drivers/vhost/vsock.c:697 [inline]
 vhost_vsock_dev_release+0x222/0x410 drivers/vhost/vsock.c:749
 __fput+0x23f/0x880 fs/file_table.c:431
 task_work_run+0x24f/0x310 kernel/task_work.c:228
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:114 [inline]
 exit_to_user_mode_prepare include/linux/entry-common.h:328 [inline]
 __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
 syscall_exit_to_user_mode+0x168/0x370 kernel/entry/common.c:218
 do_syscall_64+0x100/0x230 arch/x86/entry/common.c:89
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ff91dd7dff9
RSP: 002b:00007ff91e05fb88 EFLAGS: 00000246 ORIG_RAX: 00000000000001b4
RAX: 0000000000000000 RBX: 00007ff91df37a80 RCX: 00007ff91dd7dff9
RDX: 0000000000000000 RSI: 000000000000001e RDI: 0000000000000003
RBP: 00007ff91df37a80 R08: 0000000000000006 R09: 00007ff91e05fe7f
R10: 00000000005f9280 R11: 0000000000000246 R12: 0000000000035fe7
R13: 00007ff91e05fc90 R14: 0000000000000032 R15: ffffffffffffffff
 </TASK>

Showing all locks held in the system:
3 locks held by kworker/u8:1/12:
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc90000117d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc90000117d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8fcd2188 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:276
1 lock held by khungtaskd/30:
 #0: ffffffff8e937de0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8e937de0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8e937de0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6720
3 locks held by kworker/u8:5/1106:
 #0: ffff88801bf22948 ((wq_completion)cfg80211){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801bf22948 ((wq_completion)cfg80211){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc90004617d00 ((work_completion)(&(&rdev->dfs_update_channels_wk)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc90004617d00 ((work_completion)(&(&rdev->dfs_update_channels_wk)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8fcd2188 (rtnl_mutex){+.+.}-{3:3}, at: cfg80211_dfs_channels_update_work+0xbf/0x610 net/wireless/mlme.c:1021
2 locks held by kworker/u8:8/2588:
2 locks held by getty/4987:
 #0: ffff88802e5b60a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002f062f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6a6/0x1e00 drivers/tty/n_tty.c:2211
3 locks held by kworker/0:3/5233:
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc900037ffd00 (free_ipc_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc900037ffd00 (free_ipc_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8e93d378 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:329 [inline]
 #2: ffffffff8e93d378 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x451/0x830 kernel/rcu/tree_exp.h:976
3 locks held by kworker/1:6/5289:
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc900042f7d00 ((work_completion)(&rfkill_global_led_trigger_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc900042f7d00 ((work_completion)(&rfkill_global_led_trigger_work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8ffa95e8 (rfkill_global_mutex){+.+.}-{3:3}, at: rfkill_global_led_trigger_worker+0x27/0xd0 net/rfkill/core.c:182
1 lock held by iou-wrk-11172/11176:
 #0: ffff88806e2160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:249 [inline]
 #0: ffff88806e2160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0xd0b/0x1010 io_uring/kbuf.c:580
1 lock held by iou-wrk-11172/11179:
 #0: ffff88806e2160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:249 [inline]
 #0: ffff88806e2160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0xd0b/0x1010 io_uring/kbuf.c:580
1 lock held by iou-wrk-11172/11180:
 #0: ffff88806e2160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:249 [inline]
 #0: ffff88806e2160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0xd0b/0x1010 io_uring/kbuf.c:580
1 lock held by iou-wrk-11172/11181:
1 lock held by iou-wrk-11172/11182:
 #0: ffff88806e2160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:249 [inline]
 #0: ffff88806e2160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0xd0b/0x1010 io_uring/kbuf.c:580
1 lock held by iou-wrk-11172/11183:
 #0: ffff88806e2160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:249 [inline]
 #0: ffff88806e2160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0xd0b/0x1010 io_uring/kbuf.c:580
1 lock held by iou-wrk-11172/11184:
1 lock held by iou-wrk-11172/11185:
 #0: ffff88806e2160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:249 [inline]
 #0: ffff88806e2160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0xd0b/0x1010 io_uring/kbuf.c:580
1 lock held by syz-executor/11258:
 #0: ffffffff8ffa95e8 (rfkill_global_mutex){+.+.}-{3:3}, at: rfkill_unregister+0xd0/0x230 net/rfkill/core.c:1145
1 lock held by syz-executor/11281:
 #0: ffffffff8fcd2188 (rtnl_mutex){+.+.}-{3:3}, at: tun_detach drivers/net/tun.c:698 [inline]
 #0: ffffffff8fcd2188 (rtnl_mutex){+.+.}-{3:3}, at: tun_chr_close+0x3b/0x1b0 drivers/net/tun.c:3517
4 locks held by syz.0.1570/11712:
 #0: ffffffff8ffa95e8 (rfkill_global_mutex){+.+.}-{3:3}, at: rfkill_fop_write+0x1a6/0x790 net/rfkill/core.c:1293
 #1: ffffffff8fcd2188 (rtnl_mutex){+.+.}-{3:3}, at: cfg80211_rfkill_set_block+0x1e/0x50 net/wireless/core.c:311
 #2: ffff888047d40768 (&rdev->wiphy.mtx){+.+.}-{3:3}, at: wiphy_lock include/net/cfg80211.h:6014 [inline]
 #2: ffff888047d40768 (&rdev->wiphy.mtx){+.+.}-{3:3}, at: ieee80211_stop+0x3e9/0x4a0 net/mac80211/iface.c:777
 #3: ffffffff8e93d378 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:329 [inline]
 #3: ffffffff8e93d378 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x451/0x830 kernel/rcu/tree_exp.h:976
3 locks held by syz.4.1582/11746:
 #0: ffffffff8fd37ef0 (cb_lock){++++}-{3:3}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8fd37da8 (genl_mutex){+.+.}-{3:3}, at: genl_lock net/netlink/genetlink.c:35 [inline]
 #1: ffffffff8fd37da8 (genl_mutex){+.+.}-{3:3}, at: genl_op_lock net/netlink/genetlink.c:60 [inline]
 #1: ffffffff8fd37da8 (genl_mutex){+.+.}-{3:3}, at: genl_rcv_msg+0x121/0xec0 net/netlink/genetlink.c:1209
 #2: ffffffff8fcd2188 (rtnl_mutex){+.+.}-{3:3}, at: __tipc_nl_compat_doit net/tipc/netlink_compat.c:358 [inline]
 #2: ffffffff8fcd2188 (rtnl_mutex){+.+.}-{3:3}, at: tipc_nl_compat_doit+0x21e/0x610 net/tipc/netlink_compat.c:393
1 lock held by syz.2.1585/11753:
 #0: ffffffff8fcd2188 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8fcd2188 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x6e6/0xcf0 net/core/rtnetlink.c:6672
1 lock held by syz.1.1587/11758:
 #0: ffffffff8ffa95e8 (rfkill_global_mutex){+.+.}-{3:3}, at: rfkill_unregister+0xd0/0x230 net/rfkill/core.c:1145

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 30 Comm: khungtaskd Not tainted 6.12.0-rc2-syzkaller-00320-gba01565ced22 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:223 [inline]
 watchdog+0xff4/0x1040 kernel/hung_task.c:379
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 5287 Comm: kworker/1:4 Not tainted 6.12.0-rc2-syzkaller-00320-gba01565ced22 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Workqueue: events nsim_dev_trap_report_work
RIP: 0010:variable_test_bit arch/x86/include/asm/bitops.h:227 [inline]
RIP: 0010:arch_test_bit arch/x86/include/asm/bitops.h:239 [inline]
RIP: 0010:_test_bit include/asm-generic/bitops/instrumented-non-atomic.h:142 [inline]
RIP: 0010:cpumask_test_cpu include/linux/cpumask.h:570 [inline]
RIP: 0010:cpu_online include/linux/cpumask.h:1117 [inline]
RIP: 0010:trace_lock_release include/trace/events/lock.h:69 [inline]
RIP: 0010:lock_release+0xb0/0xa30 kernel/locking/lockdep.c:5836
Code: 8b 05 30 5a 93 7e 83 f8 08 0f 83 fe 05 00 00 89 c3 48 89 d8 48 c1 e8 06 48 8d 3c c5 68 f3 1c 90 be 08 00 00 00 e8 80 1c 8e 00 <48> 0f a3 1d a0 75 ac 0e 73 16 e8 41 58 0a 00 84 c0 75 0d 80 3d 90
RSP: 0018:ffffc900042d7680 EFLAGS: 00000056
RAX: 0000000000000001 RBX: 0000000000000001 RCX: ffffffff81707dc0
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff901cf368
RBP: ffffc900042d77b0 R08: ffffffff901cf36f R09: 1ffffffff2039e6d
R10: dffffc0000000000 R11: fffffbfff2039e6e R12: 1ffff9200085aedc
R13: ffffffff854d0196 R14: ffffc900042d78e0 R15: dffffc0000000000
FS:  0000000000000000(0000) GS:ffff8880b8700000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055f0a9216f20 CR3: 0000000027b9e000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 local_lock_release include/linux/local_lock_internal.h:38 [inline]
 crng_make_state+0x4c2/0xa80 drivers/char/random.c:393
 _get_random_bytes+0xd7/0x2c0 drivers/char/random.c:406
 nsim_dev_trap_skb_build drivers/net/netdevsim/dev.c:776 [inline]
 nsim_dev_trap_report drivers/net/netdevsim/dev.c:805 [inline]
 nsim_dev_trap_report_work+0x630/0xaa0 drivers/net/netdevsim/dev.c:850
 process_one_work kernel/workqueue.c:3229 [inline]
 process_scheduled_works+0xa63/0x1850 kernel/workqueue.c:3310
 worker_thread+0x870/0xd30 kernel/workqueue.c:3391
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
vkms_vblank_simulate: vblank timer overrun
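
For context on the call trace above: __vhost_worker_flush() blocks in wait_for_completion() until the vhost worker thread has executed a queued flush work item; if the worker never drains its queue, the flushing task (here vhost_vsock_dev_release() via vhost_dev_flush()) stays in uninterruptible sleep and trips the hung-task watchdog. The snippet below is a minimal userspace sketch of that "queue a sentinel and wait for it" pattern using pthreads. It is an illustrative analogue, not the kernel source; the names in it (flush_work, worker_flush, stuck_worker) are invented for the demo. Build with cc -pthread; the program intentionally never exits, mirroring the blocked state:D task in the report.

/*
 * Userspace analogue (not kernel code) of the flush pattern the task
 * above is blocked in: the flusher hands the worker a sentinel work
 * item and then waits for the worker to signal its completion.  A
 * worker that never processes its queue leaves the flusher waiting
 * forever.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

/* Plays the role of the flush work item plus its completion. */
struct flush_work {
	pthread_mutex_t lock;
	pthread_cond_t  done_cv;
	bool            done;
};

/* What the worker would run for the sentinel: mark it complete. */
void flush_fn(struct flush_work *f)
{
	pthread_mutex_lock(&f->lock);
	f->done = true;                   /* analogue of complete()        */
	pthread_cond_signal(&f->done_cv);
	pthread_mutex_unlock(&f->lock);
}

/* Analogue of __vhost_worker_flush(): queue the sentinel, then wait. */
void worker_flush(struct flush_work *f)
{
	/* ... the real code queues the sentinel on the vhost worker ... */
	pthread_mutex_lock(&f->lock);
	while (!f->done)                  /* wait_for_completion()         */
		pthread_cond_wait(&f->done_cv, &f->lock);
	pthread_mutex_unlock(&f->lock);
}

/* A worker that never runs its queue reproduces the hang shape. */
void *stuck_worker(void *arg)
{
	(void)arg;                        /* never calls flush_fn()        */
	pause();
	return NULL;
}

int main(void)
{
	struct flush_work f = { .done = false };
	pthread_t worker;

	pthread_mutex_init(&f.lock, NULL);
	pthread_cond_init(&f.done_cv, NULL);
	pthread_create(&worker, NULL, stuck_worker, NULL);

	fprintf(stderr, "flushing...\n");
	worker_flush(&f);                 /* blocks forever, like the report */
	fprintf(stderr, "never reached\n");
	return 0;
}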

Crashes (314):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2024/10/13 18:22 upstream ba01565ced22 084d8178 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __vhost_worker_flush
2024/09/27 16:26 upstream 075dbe9f6e3c 2b1784d6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __vhost_worker_flush
2024/09/25 03:55 upstream 97d8894b6f4c 5643e0e9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __vhost_worker_flush
2024/09/25 01:47 upstream 97d8894b6f4c 5643e0e9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __vhost_worker_flush
2024/09/25 01:40 upstream 97d8894b6f4c 5643e0e9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __vhost_worker_flush
2024/09/25 00:26 upstream a430d95c5efa 5643e0e9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in __vhost_worker_flush
2024/09/01 20:32 upstream 431c1646e1f8 1eda0d14 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in __vhost_worker_flush
2024/08/17 16:04 upstream e5fa841af679 dbc93b08 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/08/14 05:37 upstream 6b0f8db921ab bde81f6f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in __vhost_worker_flush
2024/08/13 17:56 upstream 6b4aa469f049 f21a18ca .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in __vhost_worker_flush
2024/08/13 17:54 upstream 6b4aa469f049 f21a18ca .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in __vhost_worker_flush
2024/08/13 14:58 upstream d74da846046a f21a18ca .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in __vhost_worker_flush
2024/08/12 23:58 upstream d74da846046a 7b0f4b46 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __vhost_worker_flush
2024/08/12 22:29 upstream d74da846046a 7b0f4b46 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in __vhost_worker_flush
2024/08/12 09:00 upstream 7c626ce4bae1 6f4edef4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in __vhost_worker_flush
2024/08/11 20:52 upstream cb2e5ee8e7a0 6f4edef4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in __vhost_worker_flush
2024/08/10 23:05 upstream 5189dafa4cf9 6f4edef4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in __vhost_worker_flush
2024/05/26 06:33 upstream 9b62e02e6336 a10a183e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in __vhost_worker_flush
2024/05/25 20:10 upstream 56fb6f92854f a10a183e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in __vhost_worker_flush
2024/05/25 08:00 upstream 0b32d436c015 a10a183e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in __vhost_worker_flush
2024/10/11 19:47 upstream 1d227fcc7222 cd942402 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in __vhost_worker_flush
2024/10/10 20:18 upstream d3d1556696c1 8fbfc0c8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in __vhost_worker_flush
2024/08/11 03:26 upstream 34ac1e82e5a7 6f4edef4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in __vhost_worker_flush
2024/09/27 22:24 upstream 075dbe9f6e3c 2b1784d6 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu2-arm64 INFO: task hung in __vhost_worker_flush
2024/09/27 22:23 upstream 075dbe9f6e3c 2b1784d6 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu2-arm64 INFO: task hung in __vhost_worker_flush
2024/05/25 18:40 upstream 56fb6f92854f a10a183e .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu2-arm32 INFO: task hung in __vhost_worker_flush
2024/10/02 18:40 linux-next fe2173353674 a4c7fd36 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/10/02 10:06 linux-next fe2173353674 ea2b66a6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/10/01 19:31 linux-next 77df9e4bb222 ea2b66a6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/10/01 19:30 linux-next 77df9e4bb222 ea2b66a6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/10/01 10:32 linux-next 77df9e4bb222 bbd4e0a4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/09/30 20:02 linux-next cea5425829f7 bbd4e0a4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/09/30 10:05 linux-next cea5425829f7 ba29ff75 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/09/28 21:14 linux-next 40e0c9d414f5 ba29ff75 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/09/28 07:25 linux-next 40e0c9d414f5 440b26ec .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/09/27 20:03 linux-next 40e0c9d414f5 2b1784d6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/09/27 07:12 linux-next 92fc9636d147 9314348a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/09/26 11:20 linux-next 92fc9636d147 0d19f247 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/09/25 09:35 linux-next 2b7275670032 349a68c4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/09/24 20:53 linux-next 4d0326b60bb7 5643e0e9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/09/24 17:31 linux-next 4d0326b60bb7 5643e0e9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/09/24 11:20 linux-next 4d0326b60bb7 89298aad .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/08/19 17:15 linux-next 367b5c3d53e5 9f0ab3fb .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/08/17 14:40 linux-next 367b5c3d53e5 dbc93b08 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/08/17 14:39 linux-next 367b5c3d53e5 dbc93b08 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/08/16 07:46 linux-next 367b5c3d53e5 e4bacdaf .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/08/15 08:42 linux-next edd1ec2e3a9f e4bacdaf .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/08/15 00:48 linux-next 320eb81df4f6 e4bacdaf .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/08/14 01:07 linux-next 033a4691702c bde81f6f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/08/13 16:07 linux-next 033a4691702c f21a18ca .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/08/13 08:13 linux-next 033a4691702c 7b0f4b46 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/08/12 18:04 linux-next 9e6869691724 7b0f4b46 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/08/12 06:20 linux-next 9e6869691724 6f4edef4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush
2024/05/10 14:50 linux-next 75fa778d74b7 f7c35481 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __vhost_worker_flush