syzbot


INFO: rcu detected stall in shmem_fault

Status: upstream: reported on 2025/06/16 23:06
Reported-by: syzbot+48b58a534e3bab8b39ae@syzkaller.appspotmail.com
First crash: 267d, last: 14d
Similar bugs (10)
Kernel | Title | Rank | Repro | Count | Last | Reported | Patched | Status
upstream | INFO: rcu detected stall in shmem_fault (4) kernel | 1 | | 1 | 1166d | 1166d | 0/29 | auto-obsoleted due to no activity on 2023/04/08 07:17
linux-5.15 | INFO: rcu detected stall in shmem_fault (2) | 1 | | 23 | 15d | 645d | 0/3 | upstream: reported on 2024/06/03 20:13
upstream | INFO: rcu detected stall in shmem_fault mm | 1 | | 13 | 2549d | 2709d | 0/29 | closed as dup on 2019/01/02 16:58
linux-6.1 | INFO: rcu detected stall in shmem_fault | 1 | | 7 | 577d | 641d | 0/3 | auto-obsoleted due to no activity on 2024/11/18 20:23
upstream | INFO: rcu detected stall in shmem_fault (5) cgroups mm | 1 | | 3 | 936d | 956d | 23/29 | fixed on 2023/10/12 12:48
linux-5.15 | INFO: rcu detected stall in shmem_fault | 1 | | 1 | 920d | 920d | 0/3 | auto-obsoleted due to no activity on 2023/12/11 11:17
upstream | INFO: rcu detected stall in shmem_fault (3) mm fs | 1 | | 4 | 1414d | 1415d | 0/29 | auto-closed as invalid on 2022/06/25 08:10
linux-6.1 | INFO: rcu detected stall in shmem_fault (2) | 1 | | 21 | 18d | 448d | 0/3 | upstream: reported on 2024/12/17 16:40
upstream | INFO: rcu detected stall in shmem_fault (2) cgroups mm | 1 | | 1 | 2288d | 2288d | 0/29 | closed as invalid on 2019/12/04 14:04
upstream | INFO: rcu detected stall in shmem_fault (6) mm | 1 | C | 214 | 4d10h | 537d | 0/29 | upstream: reported C repro on 2024/09/19 14:28

Sample crash report:
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	Tasks blocked on level-0 rcu_node (CPUs 0-1): P13930/1:b..l P13834/1:b..l
rcu: 	(detected by 1, t=10502 jiffies, g=75213, q=103 ncpus=2)
task:syz.5.2073      state:R  running task     stack:23176 pid:13834 ppid:7473   flags:0x00004002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5381 [inline]
 __schedule+0x1553/0x45a0 kernel/sched/core.c:6700
 preempt_schedule_common+0x82/0xc0 kernel/sched/core.c:6867
 preempt_schedule+0xc0/0xd0 kernel/sched/core.c:6891
 preempt_schedule_thunk+0x1a/0x30 arch/x86/entry/thunk_64.S:45
 unwind_next_frame+0x200f/0x2970 arch/x86/kernel/unwind_orc.c:672
 arch_stack_walk+0x144/0x190 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0xaa/0x100 kernel/stacktrace.c:122
 save_stack+0x125/0x230 mm/page_owner.c:128
 __reset_page_owner+0x4e/0x190 mm/page_owner.c:149
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1181 [inline]
 free_unref_page_prepare+0x7b2/0x8c0 mm/page_alloc.c:2365
 free_unref_page_list+0xbe/0x860 mm/page_alloc.c:2504
 release_pages+0x1f7a/0x2200 mm/swap.c:1022
 __folio_batch_release+0x71/0xe0 mm/swap.c:1042
 folio_batch_release include/linux/pagevec.h:83 [inline]
 shmem_undo_range+0x630/0x1b20 mm/shmem.c:1026
 shmem_truncate_range mm/shmem.c:1135 [inline]
 shmem_evict_inode+0x245/0x9e0 mm/shmem.c:1264
 evict+0x4ca/0x8d0 fs/inode.c:705
 __dentry_kill+0x431/0x650 fs/dcache.c:611
 dentry_kill+0xb8/0x290 fs/dcache.c:-1
 dput+0xfe/0x1e0 fs/dcache.c:918
 __fput+0x5e5/0x970 fs/file_table.c:392
 task_work_run+0x1d4/0x260 kernel/task_work.c:245
 exit_task_work include/linux/task_work.h:43 [inline]
 do_exit+0x95a/0x2460 kernel/exit.c:883
 do_group_exit+0x21b/0x2d0 kernel/exit.c:1024
 get_signal+0x12fc/0x13f0 kernel/signal.c:2902
 arch_do_signal_or_restart+0xc2/0x800 arch/x86/kernel/signal.c:310
 exit_to_user_mode_loop+0x70/0x110 kernel/entry/common.c:174
 exit_to_user_mode_prepare+0xee/0x180 kernel/entry/common.c:210
 irqentry_exit_to_user_mode+0x9/0x30 kernel/entry/common.c:315
 exc_page_fault+0x8c/0x100 arch/x86/mm/fault.c:1519
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:608
RIP: 0033:0x7f486ef9c631
RSP: 002b:fffffffffffffe70 EFLAGS: 00010217
RAX: 0000000000000000 RBX: 00007f486f215fa0 RCX: 00007f486ef9c629
RDX: 0000000000000000 RSI: fffffffffffffe70 RDI: 0000000000008000
RBP: 00007f486f032b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000206 R12: 0000000000000000
R13: 00007f486f216038 R14: 00007f486f215fa0 R15: 00007ffd70375598
 </TASK>
task:syz.6.2093      state:R  running task     stack:25768 pid:13930 ppid:11551  flags:0x00004002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5381 [inline]
 __schedule+0x1553/0x45a0 kernel/sched/core.c:6700
 preempt_schedule_irq+0xbf/0x150 kernel/sched/core.c:7010
 irqentry_exit+0x67/0x70 kernel/entry/common.c:438
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:687
RIP: 0010:debug_lockdep_rcu_enabled+0x22/0x30 kernel/rcu/update.c:320
Code: cc cc cc cc cc cc cc cc f3 0f 1e fa 31 c0 83 3d 4f 6d 06 04 00 74 1d 83 3d d6 a0 06 04 00 74 14 65 48 8b 0d 20 43 7f 75 31 c0 <83> b9 dc 0a 00 00 00 0f 94 c0 c3 cc cc cc 66 0f 1f 00 48 8b 3c 24
RSP: 0018:ffffc900037674c0 EFLAGS: 00000246
RAX: 0000000000000000 RBX: ffffffff81e72a49 RCX: ffff888024999e00
RDX: 0000000000000000 RSI: ffffffff8b1c82c0 RDI: ffffffff8b1c8280
RBP: 0000000000000cc0 R08: dffffc0000000000 R09: 1ffffffff2237ea0
R10: dffffc0000000000 R11: fffffbfff2237ea1 R12: ffffea000155de48
R13: dffffc0000000000 R14: ffff88806cc3a010 R15: dffffc0000000000
 rcu_read_unlock include/linux/rcupdate.h:815 [inline]
 percpu_ref_put_many include/linux/percpu-refcount.h:337 [inline]
 percpu_ref_put+0xac/0x180 include/linux/percpu-refcount.h:351
 css_put include/linux/cgroup_refcnt.h:79 [inline]
 __mem_cgroup_charge+0x56/0x80 mm/memcontrol.c:7112
 mem_cgroup_charge include/linux/memcontrol.h:686 [inline]
 shmem_add_to_page_cache+0x904/0x1b30 mm/shmem.c:784
 shmem_get_folio_gfp+0xf08/0x2aa0 mm/shmem.c:2071
 shmem_fault+0x1b8/0x810 mm/shmem.c:2248
 __do_fault+0x13b/0x4d0 mm/memory.c:4244
 do_read_fault mm/memory.c:4638 [inline]
 do_fault mm/memory.c:4775 [inline]
 do_pte_missing mm/memory.c:3689 [inline]
 handle_pte_fault mm/memory.c:5047 [inline]
 __handle_mm_fault mm/memory.c:5188 [inline]
 handle_mm_fault+0x2299/0x4c00 mm/memory.c:5353
 faultin_page mm/gup.c:868 [inline]
 __get_user_pages+0x5d0/0x1380 mm/gup.c:1167
 populate_vma_page_range+0x2c1/0x380 mm/gup.c:1593
 __mm_populate+0x260/0x390 mm/gup.c:1696
 mm_populate include/linux/mm.h:3383 [inline]
 vm_mmap_pgoff+0x2da/0x3f0 mm/util.c:561
 do_syscall_x64 arch/x86/entry/common.c:46 [inline]
 do_syscall_64+0x55/0xa0 arch/x86/entry/common.c:76
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7efeaad9c629
RSP: 002b:00007efeabcaa028 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007efeab015fa0 RCX: 00007efeaad9c629
RDX: b635773f06ebbeee RSI: 0000000000b36000 RDI: 0000200000000000
RBP: 00007efeaae32b39 R08: ffffffffffffffff R09: 0000000000000000
R10: 0000000000008031 R11: 0000000000000246 R12: 0000000000000000
R13: 00007efeab016038 R14: 00007efeab015fa0 R15: 00007ffcf32ef418
 </TASK>
rcu: rcu_preempt kthread starved for 10565 jiffies! g75213 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:R  running task     stack:27720 pid:17    ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5381 [inline]
 __schedule+0x1553/0x45a0 kernel/sched/core.c:6700
 schedule+0xbd/0x170 kernel/sched/core.c:6774
 schedule_timeout+0x188/0x2d0 kernel/time/timer.c:2168
 rcu_gp_fqs_loop+0x313/0x1590 kernel/rcu/tree.c:1667
 rcu_gp_kthread+0x9d/0x3b0 kernel/rcu/tree.c:1866
 kthread+0x2fa/0x390 kernel/kthread.c:388
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
 </TASK>
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 0 Comm: swapper/0 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
RIP: 0010:arch_irqs_disabled_flags arch/x86/include/asm/irqflags.h:126 [inline]
RIP: 0010:seqcount_lockdep_reader_access+0xa0/0x1d0 include/linux/seqlock.h:101
Code: c7 44 24 40 00 00 00 00 9c 8f 44 24 40 4c 8b 64 24 40 43 c6 44 3e 08 f8 fa 4c 89 e6 48 81 e6 00 02 00 00 31 ff e8 d0 c9 0f 00 <49> 81 e4 00 02 00 00 75 3d e8 82 c5 0f 00 48 8b 5d 08 48 c7 c7 08
RSP: 0018:ffffc90000007e20 EFLAGS: 00000006
RAX: ffffffff81774f80 RBX: 0000000000000000 RCX: ffffffff8ce93440
RDX: 0000000000010000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffc90000007ed0 R08: ffffffff9730b367 R09: 1ffffffff2e6166c
R10: dffffc0000000000 R11: fffffbfff2e6166d R12: 0000000000000006
R13: 1ffff110171c5808 R14: 1ffff92000000fc4 R15: dffffc0000000000
FS:  0000000000000000(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000110c2dcfe9 CR3: 000000001bfa6000 CR4: 00000000003506f0
Call Trace:
 <IRQ>
 ktime_get+0x35/0x280 kernel/time/timekeeping.c:846
 tick_nohz_irq_enter kernel/time/tick-sched.c:1439 [inline]
 tick_irq_enter+0xf2/0x310 kernel/time/tick-sched.c:1467
 irq_enter_rcu+0x9e/0xf0 kernel/softirq.c:624
 instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1088 [inline]
 sysvec_apic_timer_interrupt+0x97/0xc0 arch/x86/kernel/apic/apic.c:1088
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:687
RIP: 0010:pv_native_safe_halt+0xf/0x10 arch/x86/kernel/paravirt.c:148
Code: c7 22 02 c3 cc cc cc cc cc cc cc f3 0f 1e fa 0f 0b 66 2e 0f 1f 84 00 00 00 00 00 f3 0f 1e fa 66 90 0f 00 2d 83 d1 43 00 fb f4 <c3> 66 0f 1f 00 55 41 57 41 56 41 54 53 50 8b 2f eb 2e 41 89 de 80
RSP: 0018:ffffffff8ce07d80 EFLAGS: 000002c2
RAX: b363a392c4ec7100 RBX: ffffffff8162a490 RCX: b363a392c4ec7100
RDX: 0000000000000001 RSI: ffffffff8acac900 RDI: ffffffff8b1c82e0
RBP: ffffffff8ce07eb8 R08: ffff8880b8e36b2b R09: 1ffff110171c6d65
R10: dffffc0000000000 R11: ffffed10171c6d66 R12: 1ffffffff19d2688
R13: 1ffffffff19c0fbc R14: 0000000000000000 R15: dffffc0000000000
 arch_safe_halt arch/x86/include/asm/paravirt.h:108 [inline]
 default_idle+0x13/0x20 arch/x86/kernel/process.c:753
 default_idle_call+0x6c/0xa0 kernel/sched/idle.c:97
 cpuidle_idle_call kernel/sched/idle.c:170 [inline]
 do_idle+0x1f0/0x4e0 kernel/sched/idle.c:282
 cpu_startup_entry+0x43/0x60 kernel/sched/idle.c:380
 rest_init+0x2e2/0x300 init/main.c:744
 arch_call_rest_init+0xe/0x10 init/main.c:841
 start_kernel+0x459/0x4e0 init/main.c:1086
 x86_64_start_reservations+0x2a/0x30 arch/x86/kernel/head64.c:555
 x86_64_start_kernel+0x60/0x60 arch/x86/kernel/head64.c:536
 secondary_startup_64_no_verify+0x179/0x17b
 </TASK>

Crashes (9):
Time | Kernel | Commit | Syzkaller | Manager | Title
2026/02/24 20:39 | linux-6.6.y | 7a137e9bfa0e | 96b1aa46 | ci2-linux-6-6-kasan | INFO: rcu detected stall in shmem_fault
2025/12/14 06:19 | linux-6.6.y | 5fa4793a2d2d | d6526ea3 | ci2-linux-6-6-kasan | INFO: rcu detected stall in shmem_fault
2025/12/06 08:41 | linux-6.6.y | 4791134e4aeb | d6526ea3 | ci2-linux-6-6-kasan | INFO: rcu detected stall in shmem_fault
2025/09/10 09:49 | linux-6.6.y | fe9731e10004 | fdeaa69b | ci2-linux-6-6-kasan | INFO: rcu detected stall in shmem_fault
2025/08/20 00:52 | linux-6.6.y | bb9c90ab9c5a | 254a27c1 | ci2-linux-6-6-kasan | INFO: rcu detected stall in shmem_fault
2025/08/13 02:11 | linux-6.6.y | 3a8ababb8b6a | 22ec1469 | ci2-linux-6-6-kasan | INFO: rcu detected stall in shmem_fault
2025/07/05 02:38 | linux-6.6.y | 3f5b4c104b7d | d869b261 | ci2-linux-6-6-kasan | INFO: rcu detected stall in shmem_fault
2025/06/24 03:57 | linux-6.6.y | 6282921b6825 | e2f27c35 | ci2-linux-6-6-kasan | INFO: rcu detected stall in shmem_fault
2025/06/16 23:05 | linux-6.6.y | c2603c511feb | d1716036 | ci2-linux-6-6-kasan | INFO: rcu detected stall in shmem_fault
Each crash entry links to its .config, console log, report, VM info, and assets (disk image, vmlinux, kernel image).