syzbot


INFO: task hung in kvm_swap_active_memslots (2)

Status: upstream: reported C repro on 2025/11/17 10:44
Subsystems: kvm
Reported-by: syzbot+5c566b850d6ab6f0427a@syzkaller.appspotmail.com
First crash: 182d, last: 1d15h
AI Jobs (1)
ID: bd8a9d33-6dae-4d42-a69b-57c1d1e69519
Workflow: repro
Bug: INFO: task hung in kvm_swap_active_memslots (2)
Created: 2026/03/08 10:09 | Started: 2026/03/08 10:09 | Finished: 2026/03/08 10:20
Revision: 31e9c887f7dc24e04b3ca70d0d54fc34141844b0
Discussions (3)
Title | Replies (including bot) | Last reply
[syzbot] [kvm?] INFO: task hung in kvm_swap_active_memslots (2) | 2 (4) | 2026/04/06 19:53
[syzbot] Monthly kvm report (Jan 2026) | 0 (1) | 2026/01/12 08:40
[syzbot] Monthly kvm report (Dec 2025) | 0 (1) | 2025/12/11 05:58
Similar bugs (1)
Kernel: upstream
Title: INFO: task hung in kvm_swap_active_memslots [kvm]
Rank: 1 | Repro: none | Cause bisect: none | Fix bisect: none
Count: 3 | Last: 376d | Reported: 434d | Patched: 0/29
Status: auto-obsoleted due to no activity on 2025/07/02 12:01

Sample crash report:
INFO: task syz.0.17:6023 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.17        state:D stack:25752 pid:6023  tgid:6023  ppid:5958   task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5298 [inline]
 __schedule+0xfee/0x6120 kernel/sched/core.c:6911
 __schedule_loop kernel/sched/core.c:6993 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7008
 kvm_swap_active_memslots+0x2e0/0x7c0 virt/kvm/kvm_main.c:1627
 kvm_activate_memslot virt/kvm/kvm_main.c:1786 [inline]
 kvm_create_memslot virt/kvm/kvm_main.c:1852 [inline]
 kvm_set_memslot+0xbde/0x1740 virt/kvm/kvm_main.c:1964
 kvm_set_memory_region+0xe1c/0x1570 virt/kvm/kvm_main.c:2120
 kvm_set_internal_memslot+0x9f/0xf0 virt/kvm/kvm_main.c:2143
 __x86_set_memory_region+0x2f6/0x730 arch/x86/kvm/x86.c:13355
 kvm_alloc_apic_access_page+0xc5/0x140 arch/x86/kvm/lapic.c:2861
 vmx_vcpu_create+0x79b/0xb90 arch/x86/kvm/vmx/vmx.c:7830
 kvm_arch_vcpu_create+0x683/0xac0 arch/x86/kvm/x86.c:12803
 kvm_vm_ioctl_create_vcpu virt/kvm/kvm_main.c:4207 [inline]
 kvm_vm_ioctl+0x756/0x4080 virt/kvm/kvm_main.c:5165
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:597 [inline]
 __se_sys_ioctl fs/ioctl.c:583 [inline]
 __x64_sys_ioctl+0x18e/0x210 fs/ioctl.c:583
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x106/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fbd53f9c819
RSP: 002b:00007fffae0c01c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007fbd54215fa0 RCX: 00007fbd53f9c819
RDX: 0000000000000004 RSI: 000000000000ae41 RDI: 0000000000000003
RBP: 00007fbd54032c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fbd54215fac R14: 00007fbd54215fa0 R15: 00007fbd54215fa0
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/31:
 #0: ffffffff8e7e7760 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8e7e7760 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8e7e7760 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x3d/0x184 kernel/locking/lockdep.c:6775
2 locks held by getty/5584:
 #0: ffff8880394da0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x419/0x1500 drivers/tty/n_tty.c:2211
2 locks held by syz.0.17/6023:
 #0: ffff8880785f00a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:253 [inline]
 #0: ffff8880785f00a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2855
 #1: ffff8880785f0138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1740 virt/kvm/kvm_main.c:1915
2 locks held by syz.1.18/6062:
 #0: ffff88807cae00a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:253 [inline]
 #0: ffff88807cae00a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2855
 #1: ffff88807cae0138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1740 virt/kvm/kvm_main.c:1915
2 locks held by syz.2.19/6085:
 #0: ffff8880790a00a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:253 [inline]
 #0: ffff8880790a00a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2855
 #1: ffff8880790a0138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1740 virt/kvm/kvm_main.c:1915
2 locks held by syz.3.20/6109:
 #0: ffff88807b3340a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:253 [inline]
 #0: ffff88807b3340a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2855
 #1: ffff88807b334138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1740 virt/kvm/kvm_main.c:1915
2 locks held by syz.4.21/6138:
 #0: ffff8880343100a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:253 [inline]
 #0: ffff8880343100a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2855
 #1: ffff888034310138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1740 virt/kvm/kvm_main.c:1915
2 locks held by syz.5.22/6172:
 #0: ffff88802b0e40a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:253 [inline]
 #0: ffff88802b0e40a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2855
 #1: ffff88802b0e4138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1740 virt/kvm/kvm_main.c:1915
2 locks held by syz.6.23/6201:
 #0: ffff88805ead00a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:253 [inline]
 #0: ffff88805ead00a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2855
 #1: ffff88805ead0138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1740 virt/kvm/kvm_main.c:1915
2 locks held by syz.7.24/6230:
 #0: ffff8880582a40a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:253 [inline]
 #0: ffff8880582a40a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2855
 #1: ffff8880582a4138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1740 virt/kvm/kvm_main.c:1915
2 locks held by syz.8.25/6259:
 #0: ffff8880556000a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:253 [inline]
 #0: ffff8880556000a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2855
 #1: ffff888055600138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1740 virt/kvm/kvm_main.c:1915

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x100/0x190 lib/dump_stack.c:120
 nmi_cpu_backtrace.cold+0x12d/0x151 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x1d7/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x141/0x190 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xd25/0x1050 kernel/hung_task.c:515
 kthread+0x370/0x450 kernel/kthread.c:436
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 175 Comm: kworker/u8:6 Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Workqueue: events_unbound nsim_dev_trap_report_work
RIP: 0010:chacha_block_generic+0x102/0x360 lib/crypto/chacha-block-generic.c:80
Code: df 48 89 44 24 38 49 8b 45 08 48 89 44 24 40 49 8b 45 10 48 89 44 24 48 49 8b 45 18 48 89 44 24 50 49 8b 45 20 48 89 44 24 58 <49> 8b 45 28 48 89 44 24 60 49 8b 45 30 48 89 44 24 68 49 8b 45 38
RSP: 0018:ffffc90003067820 EFLAGS: 00000046
RAX: 21eb668f03ac7932 RBX: ffffc90003067940 RCX: 0000000000000001
RDX: 0000000000000000 RSI: 0000000000000014 RDI: ffffc90003067a70
RBP: ffffc90003067a70 R08: 0000000000000001 R09: 0000000000000000
R10: ffffc90003067aa0 R11: 3a60345ebc2c5805 R12: dffffc0000000000
R13: ffffc90003067a70 R14: 0000000000000000 R15: dffffc0000000000
FS:  0000000000000000(0000) GS:ffff888124340000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000563c2aaa9660 CR3: 000000000e598000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 chacha20_block include/crypto/chacha.h:45 [inline]
 crng_fast_key_erasure+0x1a5/0x260 drivers/char/random.c:319
 crng_make_state+0x1c2/0x6c0 drivers/char/random.c:385
 _get_random_bytes+0x11c/0x220 drivers/char/random.c:399
 nsim_dev_trap_skb_build drivers/net/netdevsim/dev.c:846 [inline]
 nsim_dev_trap_report drivers/net/netdevsim/dev.c:876 [inline]
 nsim_dev_trap_report_work+0x79b/0xd10 drivers/net/netdevsim/dev.c:922
 process_one_work+0xa23/0x19a0 kernel/workqueue.c:3276
 process_scheduled_works kernel/workqueue.c:3359 [inline]
 worker_thread+0x5ef/0xe50 kernel/workqueue.c:3440
 kthread+0x370/0x450 kernel/kthread.c:436
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>

Crashes (18):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/04/06 19:53 upstream 591cd656a1bf 4440e7c2 .config console log report syz / log C [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2026/04/13 09:54 upstream 028ef9c96e96 38c8e246 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2026/04/08 17:19 upstream 3036cd0d3328 d9b7f621 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2026/04/06 17:22 upstream 591cd656a1bf 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2026/02/16 17:31 upstream 0f2acd3148e0 84656fa6 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2026/01/27 13:52 upstream fcb70a56f4d8 43e1df1d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2026/01/07 04:47 upstream f0b9d8eb98df d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2026/01/03 04:08 upstream 9b0436804460 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2025/12/31 00:03 upstream dbf8fe85a16a d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2025/12/25 04:32 upstream ccd1cdca5cd4 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2025/12/15 14:33 upstream 8f0b4cce4481 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2025/12/13 15:18 upstream 9551a26f17d9 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2025/12/07 20:58 upstream 37bb2e7217b0 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2025/11/29 01:25 upstream e538109ac71d d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2025/11/28 19:51 upstream e538109ac71d d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2025/11/19 13:20 upstream 8b690556d8fe 82d7b894 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2025/11/18 03:20 upstream e7c375b18160 ef766cd7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2025/10/14 04:02 upstream 3a8660878839 b6605ba8 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
* Struck through repros no longer work on HEAD.