syzbot


INFO: task hung in kvm_swap_active_memslots (2)

Status: upstream: reported on 2025/11/17 10:44
Subsystems: kvm
Reported-by: syzbot+5c566b850d6ab6f0427a@syzkaller.appspotmail.com
First crash: 139d, last: 13d
Discussions (3)
Title | Replies (including bot) | Last reply
[syzbot] Monthly kvm report (Jan 2026) | 0 (1) | 2026/01/12 08:40
[syzbot] Monthly kvm report (Dec 2025) | 0 (1) | 2025/12/11 05:58
[syzbot] [kvm?] INFO: task hung in kvm_swap_active_memslots (2) | 2 (3) | 2025/11/17 16:54
Similar bugs (1)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: task hung in kvm_swap_active_memslots kvm | 1 | | | | 3 | 332d | 391d | 0/29 | auto-obsoleted due to no activity on 2025/07/02 12:01

Sample crash report:
INFO: task syz.1.4128:22557 blocked for more than 143 seconds.
      Tainted: G             L      syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.4128      state:D stack:25656 pid:22557 tgid:22556 ppid:14530  task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0xfee/0x60e0 kernel/sched/core.c:6907
 __schedule_loop kernel/sched/core.c:6989 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7004
 kvm_swap_active_memslots+0x2e0/0x7c0 virt/kvm/kvm_main.c:1643
 kvm_activate_memslot virt/kvm/kvm_main.c:1802 [inline]
 kvm_create_memslot virt/kvm/kvm_main.c:1868 [inline]
 kvm_set_memslot+0xbde/0x1740 virt/kvm/kvm_main.c:1980
 kvm_set_memory_region+0xe1c/0x1570 virt/kvm/kvm_main.c:2136
 kvm_set_internal_memslot+0x9f/0xf0 virt/kvm/kvm_main.c:2159
 __x86_set_memory_region+0x2f6/0x730 arch/x86/kvm/x86.c:13356
 kvm_alloc_apic_access_page+0xc5/0x140 arch/x86/kvm/lapic.c:2861
 vmx_vcpu_create+0x79b/0xb90 arch/x86/kvm/vmx/vmx.c:7830
 kvm_arch_vcpu_create+0x683/0xac0 arch/x86/kvm/x86.c:12804
 kvm_vm_ioctl_create_vcpu virt/kvm/kvm_main.c:4223 [inline]
 kvm_vm_ioctl+0x756/0x4080 virt/kvm/kvm_main.c:5180
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:597 [inline]
 __se_sys_ioctl fs/ioctl.c:583 [inline]
 __x64_sys_ioctl+0x18e/0x210 fs/ioctl.c:583
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x106/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fb73539bf79
RSP: 002b:00007fb73632e028 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007fb735615fa0 RCX: 00007fb73539bf79
RDX: 0000000000000000 RSI: 000000000000ae41 RDI: 0000000000000003
RBP: 00007fb7354327e0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fb735616038 R14: 00007fb735615fa0 R15: 00007ffecd9f29e8
 </TASK>

Showing all locks held in the system:
4 locks held by kworker/0:1/10:
 #0: ffff88813fe5d148 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x1287/0x1920 kernel/workqueue.c:3250
 #1: ffffc900000f7d08 ((work_completion)(&(&ssp->srcu_sup->work)->work)){+.+.}-{0:0}, at: process_one_work+0x93c/0x1920 kernel/workqueue.c:3251
 #2: ffffffff8e874f38 (&ssp->srcu_sup->srcu_gp_mutex){+.+.}-{4:4}, at: srcu_advance_state kernel/rcu/srcutree.c:1836 [inline]
 #2: ffffffff8e874f38 (&ssp->srcu_sup->srcu_gp_mutex){+.+.}-{4:4}, at: process_srcu+0x77/0x1ab0 kernel/rcu/srcutree.c:1996
 #3: ffffffff8e7f4ef8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x27f/0x3c0 kernel/rcu/tree_exp.h:311
1 lock held by khungtaskd/30:
 #0: ffffffff8e7e92e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8e7e92e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8e7e92e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x3d/0x184 kernel/locking/lockdep.c:6775
2 locks held by getty/5583:
 #0: ffff8880331450a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x419/0x1500 drivers/tty/n_tty.c:2211
2 locks held by syz.1.4128/22557:
 #0: ffff888061a080a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:253 [inline]
 #0: ffff888061a080a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2855
 #1: ffff888061a08138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1740 virt/kvm/kvm_main.c:1931
1 lock held by syz-executor/22790:
 #0: ffffffff9060f2e8 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:634 [inline]
 #0: ffffffff9060f2e8 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x38/0x220 drivers/net/tun.c:3436
3 locks held by kworker/0:3/24620:
 #0: ffff88813fe5f548 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x1287/0x1920 kernel/workqueue.c:3250
 #1: ffffc900033d7d08 (free_ipc_work){+.+.}-{0:0}, at: process_one_work+0x93c/0x1920 kernel/workqueue.c:3251
 #2: ffffffff8e7f4ef8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x19e/0x3c0 kernel/rcu/tree_exp.h:343
1 lock held by syz.2.4723/24845:
 #0: ffffffff9060f2e8 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:634 [inline]
 #0: ffffffff9060f2e8 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x38/0x220 drivers/net/tun.c:3436
1 lock held by syz.3.4736/24888:
 #0: ffffffff8e7f4dc0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x48/0x6d0 kernel/rcu/tree.c:3828

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 30 Comm: khungtaskd Tainted: G             L      syzkaller #0 PREEMPT(full) 
Tainted: [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x100/0x190 lib/dump_stack.c:120
 nmi_cpu_backtrace.cold+0x12d/0x151 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x1d7/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x141/0x190 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xd25/0x1050 kernel/hung_task.c:515
 kthread+0x370/0x450 kernel/kthread.c:467
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>

Crashes (14):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/02/16 17:31 upstream 0f2acd3148e0 84656fa6 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2026/01/27 13:52 upstream fcb70a56f4d8 43e1df1d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2026/01/07 04:47 upstream f0b9d8eb98df d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2026/01/03 04:08 upstream 9b0436804460 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2025/12/31 00:03 upstream dbf8fe85a16a d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2025/12/25 04:32 upstream ccd1cdca5cd4 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2025/12/15 14:33 upstream 8f0b4cce4481 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2025/12/13 15:18 upstream 9551a26f17d9 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2025/12/07 20:58 upstream 37bb2e7217b0 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2025/11/29 01:25 upstream e538109ac71d d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2025/11/28 19:51 upstream e538109ac71d d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2025/11/19 13:20 upstream 8b690556d8fe 82d7b894 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2025/11/18 03:20 upstream e7c375b18160 ef766cd7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots
2025/10/14 04:02 upstream 3a8660878839 b6605ba8 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in kvm_swap_active_memslots