syzbot


INFO: rcu detected stall in purge_vmap_node

Status: upstream: reported C repro on 2026/01/12 05:33
Subsystems: mm
Reported-by: syzbot+d8d4c31d40f868eaea30@syzkaller.appspotmail.com
Fix commit: mm/vmalloc: prevent RCU stalls in kasan_release_vmalloc_node
Patched on: [ci-upstream-linux-next-kasan-gce-root ci-upstream-rust-kasan-gce], missing on: [ci-qemu-gce-upstream-auto ci-qemu-native-arm64-kvm ci-qemu-upstream ci-qemu-upstream-386 ci-qemu2-arm32 ci-qemu2-arm64 ci-qemu2-arm64-compat ci-qemu2-arm64-mte ci-qemu2-riscv64 ci-snapshot-upstream-root ci-upstream-bpf-kasan-gce ci-upstream-bpf-next-kasan-gce ci-upstream-gce-arm64 ci-upstream-gce-leak ci-upstream-kasan-badwrites-root ci-upstream-kasan-gce ci-upstream-kasan-gce-386 ci-upstream-kasan-gce-root ci-upstream-kasan-gce-selinux-root ci-upstream-kasan-gce-smack-root ci-upstream-kmsan-gce-386-root ci-upstream-kmsan-gce-root ci-upstream-net-kasan-gce ci-upstream-net-this-kasan-gce ci2-upstream-fs ci2-upstream-kcsan-gce ci2-upstream-usb]
First crash: 70d, last: 8d01h
Cause bisection: failed (error log, bisect log)
  
Discussions (3)
Title Replies (including bot) Last reply
[PATCH v2] mm/vmalloc: prevent RCU stalls in kasan_release_vmalloc_node 5 (6) 2026/01/12 14:50
[PATCH] mm/vmalloc: prevent RCU stalls in kasan_release_vmalloc_node 2 (2) 2026/01/12 10:21
[syzbot] [mm?] INFO: rcu detected stall in purge_vmap_node 0 (3) 2026/01/12 09:39
Last patch testing requests (3)
Created Duration User Patch Repo Result
2026/01/12 12:09 42m hdanton@sina.com patch upstream report log
2026/01/12 09:39 26m kapoorarnav43@gmail.com patch upstream error
2026/01/12 07:56 42m kartikey406@gmail.com patch upstream report log

Sample crash report:
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	Tasks blocked on level-0 rcu_node (CPUs 0-1): P6229/1:b..l
rcu: 	(detected by 1, t=10502 jiffies, g=10385, q=373 ncpus=2)
task:kworker/0:17    state:R  running task     stack:28840 pid:6229  tgid:6229  ppid:2      task_flags:0x4208060 flags:0x00080000
Workqueue: events purge_vmap_node
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x1139/0x6150 kernel/sched/core.c:6863
 preempt_schedule_irq+0x51/0x90 kernel/sched/core.c:7190
 irqentry_exit+0x1d8/0x8c0 kernel/entry/common.c:216
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:697
RIP: 0010:lock_acquire+0x62/0x330 kernel/locking/lockdep.c:5872
Code: b4 18 12 83 f8 07 0f 87 a2 02 00 00 89 c0 48 0f a3 05 22 bd ee 0e 0f 82 74 02 00 00 8b 35 ba ed ee 0e 85 f6 0f 85 8d 00 00 00 <48> 8b 44 24 30 65 48 2b 05 39 b4 18 12 0f 85 ad 02 00 00 48 83 c4
RSP: 0018:ffffc900035e7540 EFLAGS: 00000206
RAX: 0000000000000046 RBX: ffffffff8e3c96a0 RCX: 0000000019a1310f
RDX: 0000000000000000 RSI: ffffffff8daa7f9d RDI: ffffffff8bf2b380
RBP: 0000000000000002 R08: 000000007c8d0f89 R09: 0000000097c8d0f8
R10: 0000000000000002 R11: ffff888031598b30 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
 rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 rcu_read_lock include/linux/rcupdate.h:867 [inline]
 class_rcu_constructor include/linux/rcupdate.h:1195 [inline]
 unwind_next_frame+0xd1/0x20b0 arch/x86/kernel/unwind_orc.c:495
 __unwind_start+0x45f/0x7f0 arch/x86/kernel/unwind_orc.c:773
 unwind_start arch/x86/include/asm/unwind.h:64 [inline]
 arch_stack_walk+0x73/0x100 arch/x86/kernel/stacktrace.c:24
 stack_trace_save+0x8e/0xc0 kernel/stacktrace.c:122
 save_stack+0x160/0x1f0 mm/page_owner.c:165
 __reset_page_owner+0x84/0x1a0 mm/page_owner.c:320
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1406 [inline]
 __free_frozen_pages+0x7df/0x1170 mm/page_alloc.c:2943
 kasan_depopulate_vmalloc_pte+0x5b/0x80 mm/kasan/shadow.c:484
 apply_to_pte_range mm/memory.c:3182 [inline]
 apply_to_pmd_range mm/memory.c:3226 [inline]
 apply_to_pud_range mm/memory.c:3262 [inline]
 apply_to_p4d_range mm/memory.c:3298 [inline]
 __apply_to_page_range+0xac1/0x13f0 mm/memory.c:3334
 __kasan_release_vmalloc+0xd1/0xe0 mm/kasan/shadow.c:602
 kasan_release_vmalloc include/linux/kasan.h:593 [inline]
 kasan_release_vmalloc_node mm/vmalloc.c:2282 [inline]
 purge_vmap_node+0x1ba/0xad0 mm/vmalloc.c:2299
 process_one_work+0x9ba/0x1b20 kernel/workqueue.c:3257
 process_scheduled_works kernel/workqueue.c:3340 [inline]
 worker_thread+0x6c8/0xf10 kernel/workqueue.c:3421
 kthread+0x3c5/0x780 kernel/kthread.c:463
 ret_from_fork+0x983/0xb10 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
rcu: rcu_preempt kthread starved for 10534 jiffies! g10385 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:R  running task     stack:28440 pid:16    tgid:16    ppid:2      task_flags:0x208040 flags:0x00080000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x1139/0x6150 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6960
 schedule_timeout+0x123/0x290 kernel/time/sleep_timeout.c:99
 rcu_gp_fqs_loop+0x1ea/0xaf0 kernel/rcu/tree.c:2083
 rcu_gp_kthread+0x26d/0x380 kernel/rcu/tree.c:2285
 kthread+0x3c5/0x780 kernel/kthread.c:463
 ret_from_fork+0x983/0xb10 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
rcu: Stack dump where RCU GP kthread last ran:
CPU: 1 UID: 0 PID: 0 Comm: swapper/1 Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
RIP: 0010:pv_native_safe_halt+0xf/0x20 arch/x86/kernel/paravirt.c:82
Code: b6 5f 02 c3 cc cc cc cc 0f 1f 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 66 90 0f 00 2d 13 39 12 00 fb f4 <e9> cc 35 03 00 66 2e 0f 1f 84 00 00 00 00 00 66 90 90 90 90 90 90
RSP: 0000:ffffc90000197de8 EFLAGS: 000002c6
RAX: 00000000000f349b RBX: 0000000000000001 RCX: ffffffff8b7826d9
RDX: 0000000000000000 RSI: ffffffff8dace031 RDI: ffffffff8bf2b380
RBP: ffffed1003b58498 R08: 0000000000000001 R09: ffffed10170a673d
R10: ffff8880b85339eb R11: ffff88801dac2ff0 R12: 0000000000000001
R13: ffff88801dac24c0 R14: ffffffff9088b8d0 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff8881249f5000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055555f9257e0 CR3: 000000004b1c0000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 arch_safe_halt arch/x86/include/asm/paravirt.h:107 [inline]
 default_idle+0x13/0x20 arch/x86/kernel/process.c:767
 default_idle_call+0x6c/0xb0 kernel/sched/idle.c:122
 cpuidle_idle_call kernel/sched/idle.c:191 [inline]
 do_idle+0x38d/0x510 kernel/sched/idle.c:332
 cpu_startup_entry+0x4f/0x60 kernel/sched/idle.c:430
 start_secondary+0x21d/0x2d0 arch/x86/kernel/smpboot.c:312
 common_startup_64+0x13e/0x148
 </TASK>

Crashes (2):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/01/08 05:21 upstream f0b9d8eb98df d6526ea3 .config console log report syz / log C [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: rcu detected stall in purge_vmap_node
2025/11/06 15:12 upstream dc77806cf3b4 a6c9c731 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: rcu detected stall in purge_vmap_node
* Struck through repros no longer work on HEAD.