syzbot


INFO: task hung in vhost_worker_flush

Status: auto-obsoleted due to no activity on 2024/08/21 03:42
Subsystems: virt kvm net
First crash: 431d, last: 265d

Sample crash report:
INFO: task syz-executor.2:23342 blocked for more than 143 seconds.
      Not tainted 6.9.0-rc7-syzkaller-00136-gf4345f05c0df #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.2  state:D stack:24664 pid:23342 tgid:23342 ppid:20148  flags:0x00000006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5409 [inline]
 __schedule+0x17e8/0x4a50 kernel/sched/core.c:6746
 __schedule_loop kernel/sched/core.c:6823 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6838
 schedule_timeout+0xb0/0x310 kernel/time/timer.c:2558
 do_wait_for_common kernel/sched/completion.c:95 [inline]
 __wait_for_common kernel/sched/completion.c:116 [inline]
 wait_for_common kernel/sched/completion.c:127 [inline]
 wait_for_completion+0x355/0x620 kernel/sched/completion.c:148
 vhost_worker_flush+0x16b/0x1c0 drivers/vhost/vhost.c:293
 vhost_dev_flush+0x10d/0x1c0 drivers/vhost/vhost.c:307
 vhost_net_flush+0x24/0x160 drivers/vhost/net.c:1355
 vhost_net_release+0x10c/0x470 drivers/vhost/net.c:1376
 __fput+0x42b/0x8a0 fs/file_table.c:422
 __do_sys_close fs/open.c:1556 [inline]
 __se_sys_close fs/open.c:1541 [inline]
 __x64_sys_close+0x7f/0x110 fs/open.c:1541
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f24f927cc5a
RSP: 002b:00007fff40efd7d0 EFLAGS: 00000293 ORIG_RAX: 0000000000000003
RAX: ffffffffffffffda RBX: 0000000000000005 RCX: 00007f24f927cc5a
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000004
RBP: 00007f24f93ad980 R08: 0000001b2db20000 R09: 00000000000001c2
R10: 00000000819d1d13 R11: 0000000000000293 R12: 00000000000f8cf8
R13: 00007f24f93abf8c R14: 00007fff40efd8d0 R15: 0000000000000032
 </TASK>

Showing all locks held in the system:
4 locks held by kworker/u8:0/10:
1 lock held by khungtaskd/29:
 #0: ffffffff8e334da0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:329 [inline]
 #0: ffffffff8e334da0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:781 [inline]
 #0: ffffffff8e334da0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6614
3 locks held by kworker/u8:4/60:
 #0: ffff88802a6ae948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3242 [inline]
 #0: ffff88802a6ae948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x8e0/0x17c0 kernel/workqueue.c:3348
 #1: ffffc900015b7d00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3243 [inline]
 #1: ffffc900015b7d00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x91b/0x17c0 kernel/workqueue.c:3348
 #2: ffffffff8f5a6908 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_dad_work+0xd0/0x16f0 net/ipv6/addrconf.c:4192
2 locks held by getty/4836:
 #0: ffff88802b1670a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc900031332f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6b5/0x1e10 drivers/tty/n_tty.c:2201
2 locks held by kworker/u8:6/17981:
3 locks held by kworker/1:13/22808:
 #0: ffff888015078948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3242 [inline]
 #0: ffff888015078948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x8e0/0x17c0 kernel/workqueue.c:3348
 #1: ffffc900098c7d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3243 [inline]
 #1: ffffc900098c7d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x91b/0x17c0 kernel/workqueue.c:3348
 #2: ffffffff8f5a6908 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:276
3 locks held by kworker/1:15/22810:
 #0: ffff888015078948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3242 [inline]
 #0: ffff888015078948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x8e0/0x17c0 kernel/workqueue.c:3348
 #1: ffffc90009937d00 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3243 [inline]
 #1: ffffc90009937d00 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0x91b/0x17c0 kernel/workqueue.c:3348
 #2: ffffffff8f5a6908 (rtnl_mutex){+.+.}-{3:3}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
1 lock held by syz-executor.2/23342:
 #0: ffff8880296c8670 (&worker->mutex){+.+.}-{3:3}, at: vhost_dev_flush+0xcf/0x1c0 drivers/vhost/vhost.c:302
4 locks held by syz-executor.0/25235:
 #0: ffff888068185060 (&hdev->req_lock){+.+.}-{3:3}, at: hci_dev_do_close net/bluetooth/hci_core.c:552 [inline]
 #0: ffff888068185060 (&hdev->req_lock){+.+.}-{3:3}, at: hci_unregister_dev+0x1d3/0x4e0 net/bluetooth/hci_core.c:2771
 #1: ffff888068184078 (&hdev->lock){+.+.}-{3:3}, at: hci_dev_close_sync+0x48b/0x1050 net/bluetooth/hci_sync.c:5146
 #2: ffffffff8f70fce8 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:2025 [inline]
 #2: ffffffff8f70fce8 (hci_cb_list_lock){+.+.}-{3:3}, at: hci_conn_hash_flush+0xa6/0x240 net/bluetooth/hci_conn.c:2568
 #3: ffffffff8e33a138 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
 #3: ffffffff8e33a138 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x463/0x820 kernel/rcu/tree_exp.h:939
1 lock held by syz-executor.1/25359:
 #0: ffffffff8f5a6908 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8f5a6908 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x842/0x10d0 net/core/rtnetlink.c:6592
1 lock held by syz-executor.5/25373:
 #0: ffffffff8f5a6908 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8f5a6908 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x842/0x10d0 net/core/rtnetlink.c:6592
1 lock held by syz-executor.4/25423:
 #0: ffffffff8f5a6908 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8f5a6908 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x842/0x10d0 net/core/rtnetlink.c:6592
7 locks held by syz-executor.2/25513:
 #0: ffff88803063e420 (sb_writers#8){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:2855 [inline]
 #0: ffff88803063e420 (sb_writers#8){.+.+}-{0:0}, at: vfs_write+0x233/0xcb0 fs/read_write.c:586
 #1: ffff88805b9f4c88 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x1eb/0x500 fs/kernfs/file.c:325
 #2: ffff888023180968 (kn->active#51){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x20f/0x500 fs/kernfs/file.c:326
 #3: ffffffff8eee4ba8 (nsim_bus_dev_list_lock){+.+.}-{3:3}, at: new_device_store+0x1b4/0x890 drivers/net/netdevsim/bus.c:166
 #4: ffff88807c2c10e8 (&dev->mutex){....}-{3:3}, at: device_lock include/linux/device.h:990 [inline]
 #4: ffff88807c2c10e8 (&dev->mutex){....}-{3:3}, at: __device_attach+0x8e/0x520 drivers/base/dd.c:1003
 #5: ffff88807c2c2250 (&devlink->lock_key#58){+.+.}-{3:3}, at: nsim_drv_probe+0xcb/0xb80 drivers/net/netdevsim/dev.c:1534
 #6: ffffffff8f5a6908 (rtnl_mutex){+.+.}-{3:3}, at: register_nexthop_notifier+0x84/0x290 net/ipv4/nexthop.c:3871

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 29 Comm: khungtaskd Not tainted 6.9.0-rc7-syzkaller-00136-gf4345f05c0df #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:223 [inline]
 watchdog+0xfde/0x1020 kernel/hung_task.c:380
 kthread+0x2f2/0x390 kernel/kthread.c:388
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 2879 Comm: kworker/u8:7 Not tainted 6.9.0-rc7-syzkaller-00136-gf4345f05c0df #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
Workqueue: bat_events batadv_nc_worker
RIP: 0010:match_held_lock+0xb/0xb0 kernel/locking/lockdep.c:5200
Code: 41 5e 41 5f e9 81 ed 2f 00 e8 11 f9 ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 55 53 bd 01 00 00 00 48 39 77 10 <74> 67 48 89 fb 81 7f 20 00 00 20 00 72 59 48 8b 46 08 48 85 c0 75
RSP: 0018:ffffc900094b7948 EFLAGS: 00000046
RAX: 0000000000000003 RBX: 0000000000000002 RCX: ffffc900094b7903
RDX: 1ffff92001296f3c RSI: ffff88802bbc1498 RDI: ffff88802bae2928
RBP: 0000000000000001 R08: ffffffff8fa8fa2f R09: 1ffffffff1f51f45
R10: dffffc0000000000 R11: fffffbfff1f51f46 R12: 0000000000000002
R13: 000000000000000a R14: ffff88802bae28d8 R15: ffff88802bae2928
FS:  0000000000000000(0000) GS:ffff8880b9400000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffcfd35dfe8 CR3: 000000000e134000 CR4: 0000000000350ef0
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 find_held_lock kernel/locking/lockdep.c:5244 [inline]
 __lock_release kernel/locking/lockdep.c:5429 [inline]
 lock_release+0x255/0x9f0 kernel/locking/lockdep.c:5774
 __raw_spin_unlock_bh include/linux/spinlock_api_smp.h:165 [inline]
 _raw_spin_unlock_bh+0x1b/0x40 kernel/locking/spinlock.c:210
 spin_unlock_bh include/linux/spinlock.h:396 [inline]
 batadv_nc_purge_paths+0x30f/0x3b0 net/batman-adv/network-coding.c:471
 batadv_nc_worker+0x328/0x610 net/batman-adv/network-coding.c:720
 process_one_work kernel/workqueue.c:3267 [inline]
 process_scheduled_works+0xa12/0x17c0 kernel/workqueue.c:3348
 worker_thread+0x86d/0xd70 kernel/workqueue.c:3429
 kthread+0x2f2/0x390 kernel/kthread.c:388
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
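
The hung task is blocked in wait_for_completion() called from vhost_worker_flush() while closing the vhost-net fd, and it holds worker->mutex (see the lock list above, "1 lock held by syz-executor.2/23342"). For context, a simplified sketch of the flush pattern this trace implies, paraphrased from drivers/vhost/vhost.c around this kernel version (exact field and helper names may differ), is:

/*
 * Sketch of the flush path seen in the trace above; paraphrased from
 * drivers/vhost/vhost.c, not a verbatim copy of the reported kernel.
 */
struct vhost_flush_struct {
	struct vhost_work work;
	struct completion wait_event;
};

static void vhost_flush_work(struct vhost_work *work)
{
	struct vhost_flush_struct *s;

	s = container_of(work, struct vhost_flush_struct, work);
	complete(&s->wait_event);	/* wakes the flushing task */
}

static void vhost_worker_flush(struct vhost_worker *worker)
{
	struct vhost_flush_struct flush;

	init_completion(&flush.wait_event);
	vhost_work_init(&flush.work, vhost_flush_work);

	/* Queue a sentinel work item and wait until the worker has run it. */
	vhost_worker_queue(worker, &flush.work);
	wait_for_completion(&flush.wait_event);	/* vhost.c:293 in the trace */
}

The completion only fires once the vhost worker dequeues and runs the sentinel work, so if the worker never processes its queue (for example because it is stuck or has already stopped), close() blocks in D state indefinitely and the hung-task watchdog fires, which matches the report above.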

Crashes (12):
Time | Kernel | Commit | Syzkaller | Manager | Title
2024/05/11 02:32 | upstream | f4345f05c0df | 9026e142 | ci-upstream-kasan-gce-root | INFO: task hung in vhost_worker_flush
2024/05/06 18:19 | upstream | dd5a440a31fa | c035c6de | ci-upstream-kasan-gce-smack-root | INFO: task hung in vhost_worker_flush
2024/05/05 15:08 | upstream | 7367539ad4b0 | 610f2a54 | ci-upstream-kasan-gce-selinux-root | INFO: task hung in vhost_worker_flush
2024/05/02 02:00 | upstream | 0106679839f7 | 3ba885bc | ci-upstream-kasan-gce-root | INFO: task hung in vhost_worker_flush
2024/04/29 02:36 | upstream | e67572cd2204 | 07b455f9 | ci-upstream-kasan-gce-smack-root | INFO: task hung in vhost_worker_flush
2024/04/29 01:46 | upstream | e67572cd2204 | 07b455f9 | ci-upstream-kasan-gce-selinux-root | INFO: task hung in vhost_worker_flush
2024/03/26 13:50 | upstream | 480e035fc4c7 | bcd9b39f | ci-upstream-kasan-gce | INFO: task hung in vhost_worker_flush
2023/12/27 08:24 | upstream | fbafc3e621c3 | fb427a07 | ci-upstream-kasan-gce-root | INFO: task hung in vhost_worker_flush
2023/12/09 21:32 | upstream | f2e8a57ee903 | 28b24332 | ci-upstream-kasan-gce-selinux-root | INFO: task hung in vhost_worker_flush
2024/05/23 03:39 | upstream | 8f6a15f095a6 | 4d098039 | ci-upstream-kasan-gce-386 | INFO: task hung in vhost_worker_flush
2024/01/22 02:59 | upstream | 6613476e225e | 9bd8dcda | ci-upstream-kasan-gce-386 | INFO: task hung in vhost_worker_flush
2024/04/12 12:52 | git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci | fec50db7033e | 27de0a5c | ci-upstream-gce-arm64 | INFO: task hung in vhost_worker_flush