syzbot


INFO: task hung in kvm_mmu_pre_destroy_vm

Status: auto-closed as invalid on 2020/04/30 03:32
Reported-by: syzbot+d015637785dc5569ccf7@syzkaller.appspotmail.com
First crash: 1634d, last: 1599d
Similar bugs (4)
Kernel      Title                                            Repro  Cause bisect  Count  Last   Reported  Patched  Status
linux-4.14  INFO: task hung in kvm_mmu_pre_destroy_vm (4)    syz    error         8      453d   832d      0/1      upstream: reported syz repro on 2022/02/05 19:19
linux-4.14  INFO: task hung in kvm_mmu_pre_destroy_vm (2)    -      -             1      1440d  1440d     0/1      auto-closed as invalid on 2020/10/05 15:28
linux-4.14  INFO: task hung in kvm_mmu_pre_destroy_vm (3)    -      -             5      1135d  1292d     0/1      auto-closed as invalid on 2021/08/07 02:23
upstream    INFO: task hung in kvm_mmu_pre_destroy_vm [kvm]  -      -             1      328d   328d      0/26     auto-obsoleted due to no activity on 2023/09/22 14:35

Sample crash report:
INFO: task syz-executor.4:11764 blocked for more than 140 seconds.
      Not tainted 4.14.161-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor.4  D28528 11764   7171 0x00000004
Call Trace:
 context_switch kernel/sched/core.c:2808 [inline]
 __schedule+0x7b8/0x1cd0 kernel/sched/core.c:3384
 schedule+0x92/0x1c0 kernel/sched/core.c:3428
 schedule_timeout+0x93b/0xe10 kernel/time/timer.c:1723
 do_wait_for_common kernel/sched/completion.c:91 [inline]
 __wait_for_common kernel/sched/completion.c:112 [inline]
 wait_for_common kernel/sched/completion.c:123 [inline]
 wait_for_completion+0x27c/0x420 kernel/sched/completion.c:144
 kthread_stop+0xda/0x650 kernel/kthread.c:530
 kvm_mmu_pre_destroy_vm+0x46/0x57 arch/x86/kvm/mmu.c:5850
 kvm_arch_pre_destroy_vm+0x16/0x20 arch/x86/kvm/x86.c:8519
 kvm_destroy_vm arch/x86/kvm/../../../virt/kvm/kvm_main.c:769 [inline]
 kvm_put_kvm+0x319/0xaa0 arch/x86/kvm/../../../virt/kvm/kvm_main.c:806
 kvm_vm_release+0x44/0x60 arch/x86/kvm/../../../virt/kvm/kvm_main.c:817
 __fput+0x275/0x7a0 fs/file_table.c:210
 ____fput+0x16/0x20 fs/file_table.c:244
 task_work_run+0x114/0x190 kernel/task_work.c:113
 tracehook_notify_resume include/linux/tracehook.h:191 [inline]
 exit_to_usermode_loop+0x1da/0x220 arch/x86/entry/common.c:164
 prepare_exit_to_usermode arch/x86/entry/common.c:199 [inline]
 syscall_return_slowpath arch/x86/entry/common.c:270 [inline]
 do_syscall_64+0x4bc/0x640 arch/x86/entry/common.c:297
 entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x414581
RSP: 002b:00007ffdd7478bc0 EFLAGS: 00000293 ORIG_RAX: 0000000000000003
RAX: 0000000000000000 RBX: 0000000000000005 RCX: 0000000000414581
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000004
RBP: 0000000000000001 R08: 000000006e783fbb R09: 000000006e783fbf
R10: 00007ffdd7478ca0 R11: 0000000000000293 R12: 000000000075bf20
R13: 000000000007200d R14: 00000000007609c0 R15: 000000000075bf2c

Showing all locks held in the system:
1 lock held by khungtaskd/1045:
 #0:  (tasklist_lock){.+.+}, at: [<ffffffff8148c8d8>] debug_show_all_locks+0x7f/0x21f kernel/locking/lockdep.c:4544
1 lock held by rsyslogd/6998:
 #0:  (&f->f_pos_lock){+.+.}, at: [<ffffffff81966a5b>] __fdget_pos+0xab/0xd0 fs/file.c:769
2 locks held by getty/7120:
 #0:  (&tty->ldisc_sem){++++}, at: [<ffffffff86650bc3>] ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:376
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<ffffffff83492216>] n_tty_read+0x1e6/0x17d0 drivers/tty/n_tty.c:2156
2 locks held by getty/7121:
 #0:  (&tty->ldisc_sem){++++}, at: [<ffffffff86650bc3>] ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:376
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<ffffffff83492216>] n_tty_read+0x1e6/0x17d0 drivers/tty/n_tty.c:2156
2 locks held by getty/7122:
 #0:  (&tty->ldisc_sem){++++}, at: [<ffffffff86650bc3>] ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:376
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<ffffffff83492216>] n_tty_read+0x1e6/0x17d0 drivers/tty/n_tty.c:2156
2 locks held by getty/7123:
 #0:  (&tty->ldisc_sem){++++}, at: [<ffffffff86650bc3>] ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:376
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<ffffffff83492216>] n_tty_read+0x1e6/0x17d0 drivers/tty/n_tty.c:2156
2 locks held by getty/7124:
 #0:  (&tty->ldisc_sem){++++}, at: [<ffffffff86650bc3>] ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:376
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<ffffffff83492216>] n_tty_read+0x1e6/0x17d0 drivers/tty/n_tty.c:2156
2 locks held by getty/7125:
 #0:  (&tty->ldisc_sem){++++}, at: [<ffffffff86650bc3>] ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:376
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<ffffffff83492216>] n_tty_read+0x1e6/0x17d0 drivers/tty/n_tty.c:2156
2 locks held by getty/7126:
 #0:  (&tty->ldisc_sem){++++}, at: [<ffffffff86650bc3>] ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:376
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<ffffffff83492216>] n_tty_read+0x1e6/0x17d0 drivers/tty/n_tty.c:2156

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 1045 Comm: khungtaskd Not tainted 4.14.161-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:17 [inline]
 dump_stack+0x142/0x197 lib/dump_stack.c:58
 nmi_cpu_backtrace.cold+0x57/0x94 lib/nmi_backtrace.c:101
 nmi_trigger_cpumask_backtrace+0x141/0x189 lib/nmi_backtrace.c:62
 arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
 trigger_all_cpu_backtrace include/linux/nmi.h:140 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:195 [inline]
 watchdog+0x5e7/0xb90 kernel/hung_task.c:274
 kthread+0x319/0x430 kernel/kthread.c:232
 ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:404
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1 skipped: idling at pc 0xffffffff866516ae
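For context on the hang class above: the blocked task is tearing down a VM, and kvm_mmu_pre_destroy_vm calls kthread_stop, which blocks in wait_for_completion until the NX-recovery kthread acknowledges the stop request and exits. The shape of that protocol can be sketched with a hypothetical userspace analogy (Python's threading.Event standing in for the kernel's stop flag and completion; this is an illustration, not kernel code):

```python
import threading
import time

def worker(stop_event, acked):
    # Analogue of the kthread's main loop: it must periodically
    # check its stop flag (kthread_should_stop() in the kernel).
    # If it blocks somewhere and never reaches this check, the
    # stopper below hangs forever -- the bug class in this report.
    while not stop_event.is_set():
        time.sleep(0.01)
    acked.set()  # analogue of complete(): tell the stopper we exited

def stop_worker(stop_event, acked, timeout):
    # Analogue of kthread_stop(): set the stop flag, then block
    # waiting for the worker's acknowledgement (wait_for_completion).
    # A bounded wait is used here only so the sketch terminates.
    stop_event.set()
    return acked.wait(timeout)

stop, acked = threading.Event(), threading.Event()
t = threading.Thread(target=worker, args=(stop, acked))
t.start()
print(stop_worker(stop, acked, timeout=5.0))  # True: worker exited
t.join()
```

In the kernel there is no timeout: wait_for_completion blocks unconditionally, so a kthread that never re-checks its stop condition produces exactly the "blocked for more than 140 seconds" report shown above.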

Crashes (3):
Time              Kernel        Commit        Syzkaller  Config   Log          Report  Manager
2020/01/01 03:31  linux-4.14.y  4c5bf01e16a7  25a0186e   .config  console log  report  ci2-linux-4-14
2019/12/27 02:50  linux-4.14.y  e1f7d50ae3a3  be5c2c81   .config  console log  report  ci2-linux-4-14
2019/11/26 04:56  linux-4.14.y  43598c571e7e  598ca6c8   .config  console log  report  ci2-linux-4-14