syzbot


INFO: rcu detected stall in shmem_fault (5)

Status: fixed on 2023/10/12 12:48
Subsystems: cgroups mm
Fix commit: 8c21ab1bae94 net/sched: fq_pie: avoid stalls in fq_pie_timer()
First crash: 273d, last: 253d
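The fix commit referenced above (8c21ab1bae94, "net/sched: fq_pie: avoid stalls in fq_pie_timer()") addresses this class of stall by bounding the work a timer callback does while holding an RCU read-side critical section, deferring the remainder to a re-armed timer so the grace-period kthread can make progress. A minimal user-space C sketch of that pattern (all names and the work-per-tick budget are hypothetical, chosen for illustration; the real fix operates on fq_pie flow queues):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical budget: maximum items processed per timer invocation.
 * In the kernel fix, the analogous idea is capping how many flows
 * fq_pie_timer() visits before re-arming itself. */
#define MAX_WORK_PER_TICK 64

/* Drain at most MAX_WORK_PER_TICK items from the backlog and return
 * how many remain, so the caller knows whether to re-arm the timer. */
static size_t drain_bounded(size_t backlog)
{
    size_t done = backlog < MAX_WORK_PER_TICK ? backlog : MAX_WORK_PER_TICK;
    return backlog - done;
}

/* Simulated timer loop: each "tick" models one callback invocation.
 * Between ticks the CPU is released, so RCU readers and the
 * grace-period kthread are no longer starved. */
static unsigned int ticks_to_drain(size_t backlog)
{
    unsigned int ticks = 0;

    while (backlog) {
        backlog = drain_bounded(backlog);
        ticks++; /* timer re-armed here; stall detector stays quiet */
    }
    return ticks;
}
```

The point of the pattern is that an unbounded loop inside a single softirq/timer context can hold off RCU for the whole backlog (here, the stall detector fired after ~10500 jiffies), whereas chunked draining keeps each critical section short at the cost of a few extra timer firings.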
Similar bugs (5)
Kernel Title Repro Cause bisect Fix bisect Count Last Reported Patched Status
upstream INFO: rcu detected stall in shmem_fault (4) kernel 1 483d 483d 0/26 auto-obsoleted due to no activity on 2023/04/08 07:17
upstream INFO: rcu detected stall in shmem_fault mm 13 1866d 2026d 0/26 closed as dup on 2019/01/02 16:58
linux-5.15 INFO: rcu detected stall in shmem_fault 1 237d 237d 0/3 auto-obsoleted due to no activity on 2023/12/11 11:17
upstream INFO: rcu detected stall in shmem_fault (3) fs mm 4 732d 732d 0/26 auto-closed as invalid on 2022/06/25 08:10
upstream INFO: rcu detected stall in shmem_fault (2) cgroups mm 1 1605d 1605d 0/26 closed as invalid on 2019/12/04 14:04

Sample crash report:
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	Tasks blocked on level-0 rcu_node (CPUs 0-1): P40/1:b..l P9/1:b..l P7177/1:b..l
rcu: 	(detected by 1, t=10502 jiffies, g=19417, q=462 ncpus=2)
task:syz-executor.0  state:R  running task     stack:26576 pid:7177  ppid:5048   flags:0x00004002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5381 [inline]
 __schedule+0xee1/0x59f0 kernel/sched/core.c:6710
 preempt_schedule_irq+0x52/0x90 kernel/sched/core.c:7022
 irqentry_exit+0x35/0x80 kernel/entry/common.c:433
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:645
RIP: 0010:lock_acquire+0x1ef/0x510 kernel/locking/lockdep.c:5729
Code: c1 05 f5 57 9c 7e 83 f8 01 0f 85 b0 02 00 00 9c 58 f6 c4 02 0f 85 9b 02 00 00 48 85 ed 74 01 fb 48 b8 00 00 00 00 00 fc ff df <48> 01 c3 48 c7 03 00 00 00 00 48 c7 43 08 00 00 00 00 48 8b 84 24
RSP: 0018:ffffc90004fdf510 EFLAGS: 00000206
RAX: dffffc0000000000 RBX: 1ffff920009fbea4 RCX: 0000000000000001
RDX: 1ffff110105dd978 RSI: ffffffff8a6c6880 RDI: ffffffff8ac7e220
RBP: 0000000000000200 R08: 0000000000000000 R09: fffffbfff23081d0
R10: ffffffff91840e87 R11: 000000000000c26f R12: 0000000000000000
R13: 0000000000000000 R14: ffffffff8c9a3340 R15: 0000000000000000
 rcu_lock_acquire include/linux/rcupdate.h:303 [inline]
 rcu_read_lock include/linux/rcupdate.h:749 [inline]
 percpu_ref_put_many.constprop.0+0x2c/0x1b0 include/linux/percpu-refcount.h:330
 percpu_ref_put include/linux/percpu-refcount.h:351 [inline]
 css_put include/linux/cgroup_refcnt.h:79 [inline]
 css_put include/linux/cgroup_refcnt.h:76 [inline]
 __mem_cgroup_charge+0x71/0x90 mm/memcontrol.c:6998
 mem_cgroup_charge include/linux/memcontrol.h:679 [inline]
 shmem_add_to_page_cache+0x649/0xcb0 mm/shmem.c:714
 shmem_get_folio_gfp+0x6f1/0x1b60 mm/shmem.c:1978
 shmem_fault+0x1db/0x890 mm/shmem.c:2163
 __do_fault+0x107/0x5f0 mm/memory.c:4198
 do_read_fault mm/memory.c:4547 [inline]
 do_fault mm/memory.c:4670 [inline]
 do_pte_missing mm/memory.c:3664 [inline]
 handle_pte_fault mm/memory.c:4939 [inline]
 __handle_mm_fault+0x27e0/0x3b80 mm/memory.c:5079
 handle_mm_fault+0x2ab/0x9d0 mm/memory.c:5233
 faultin_page mm/gup.c:959 [inline]
 __get_user_pages+0x5c1/0x1010 mm/gup.c:1258
 populate_vma_page_range+0x2d4/0x410 mm/gup.c:1649
 __mm_populate+0x1d7/0x380 mm/gup.c:1758
 mm_populate include/linux/mm.h:3213 [inline]
 vm_mmap_pgoff+0x2c9/0x3b0 mm/util.c:548
 ksys_mmap_pgoff+0x7d/0x5b0 mm/mmap.c:1409
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7fda7847cae9
RSP: 002b:00007fda792610c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007fda7859bf80 RCX: 00007fda7847cae9
RDX: 000000000380000c RSI: 0000000000600000 RDI: 00000000209fd000
RBP: 00007fda784c847a R08: ffffffffffffffff R09: 0000000000000000
R10: 0000000000006031 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007fda7859bf80 R15: 00007ffee59f0598
 </TASK>
task:kworker/u4:0    state:R  running task     stack:25584 pid:9     ppid:2      flags:0x00004000
Workqueue: bat_events batadv_nc_worker
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5381 [inline]
 __schedule+0xee1/0x59f0 kernel/sched/core.c:6710
 preempt_schedule_irq+0x52/0x90 kernel/sched/core.c:7022
 irqentry_exit+0x35/0x80 kernel/entry/common.c:433
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:645
RIP: 0010:lock_acquire+0x0/0x510 kernel/locking/lockdep.c:5729
Code: 8b 74 24 04 e9 e0 fe ff ff 89 74 24 04 e8 f8 19 72 00 8b 74 24 04 e9 25 ff ff ff 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 <f3> 0f 1e fa 48 b8 00 00 00 00 00 fc ff df 41 57 4d 89 cf 41 56 49
RSP: 0018:ffffc900002ffba8 EFLAGS: 00000246
RAX: 0000000000000000 RBX: ffff8880817ad770 RCX: 0000000000000002
RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff8c9a3340
RBP: 00000000000002ee R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000400 R11: 0000000000000000 R12: ffff8880183f6c00
R13: 0000000000000000 R14: dffffc0000000000 R15: 0000000000000001
 rcu_lock_acquire include/linux/rcupdate.h:303 [inline]
 rcu_read_lock include/linux/rcupdate.h:749 [inline]
 batadv_nc_purge_orig_hash net/batman-adv/network-coding.c:408 [inline]
 batadv_nc_worker+0x175/0x10f0 net/batman-adv/network-coding.c:719
 process_one_work+0xaa2/0x16f0 kernel/workqueue.c:2597
 worker_thread+0x687/0x1110 kernel/workqueue.c:2748
 kthread+0x33a/0x430 kernel/kthread.c:389
 ret_from_fork+0x2c/0x70 arch/x86/kernel/process.c:145
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:304
 </TASK>
task:kworker/u4:2    state:R  running task     stack:26688 pid:40    ppid:2      flags:0x00004000
Workqueue: netns cleanup_net
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5381 [inline]
 __schedule+0xee1/0x59f0 kernel/sched/core.c:6710
 preempt_schedule_irq+0x52/0x90 kernel/sched/core.c:7022
 irqentry_exit+0x35/0x80 kernel/entry/common.c:433
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:645
RIP: 0010:__sanitizer_cov_trace_pc+0x3b/0x70 kernel/kcov.c:207
Code: 81 e1 00 01 00 00 65 48 8b 14 25 c0 b9 03 00 a9 00 01 ff 00 74 0e 85 c9 74 35 8b 82 04 16 00 00 85 c0 74 2b 8b 82 e0 15 00 00 <83> f8 02 75 20 48 8b 8a e8 15 00 00 8b 92 e4 15 00 00 48 8b 01 48
RSP: 0018:ffffc90000d1faf0 EFLAGS: 00000246
RAX: 0000000000000000 RBX: 0000000000000001 RCX: 0000000000000000
RDX: ffff8880142ea100 RSI: ffffffff88aa8a07 RDI: 0000000000000005
RBP: dffffc0000000000 R08: 0000000000000005 R09: 0000000000000000
R10: 0000000000000001 R11: 1ffff1100324bcca R12: 0000000000000000
R13: 00000000000727f9 R14: ffffffff924f3fa0 R15: 00000000000727f9
 rcu_read_unlock include/linux/rcupdate.h:778 [inline]
 inet_twsk_purge+0x7e7/0x920 net/ipv4/inet_timewait_sock.c:336
 ops_exit_list+0x125/0x170 net/core/net_namespace.c:175
 cleanup_net+0x505/0xb20 net/core/net_namespace.c:614
 process_one_work+0xaa2/0x16f0 kernel/workqueue.c:2597
 worker_thread+0x687/0x1110 kernel/workqueue.c:2748
 kthread+0x33a/0x430 kernel/kthread.c:389
 ret_from_fork+0x2c/0x70 arch/x86/kernel/process.c:145
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:304
 </TASK>
rcu: rcu_preempt kthread starved for 10598 jiffies! g19417 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:R  running task     stack:28576 pid:15    ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5381 [inline]
 __schedule+0xee1/0x59f0 kernel/sched/core.c:6710
 schedule+0xe7/0x1b0 kernel/sched/core.c:6786
 schedule_timeout+0x157/0x2c0 kernel/time/timer.c:2167
 rcu_gp_fqs_loop+0x1ec/0xa50 kernel/rcu/tree.c:1609
 rcu_gp_kthread+0x249/0x380 kernel/rcu/tree.c:1808
 kthread+0x33a/0x430 kernel/kthread.c:389
 ret_from_fork+0x2c/0x70 arch/x86/kernel/process.c:145
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:304
 </TASK>
rcu: Stack dump where RCU GP kthread last ran:
CPU: 1 PID: 0 Comm: swapper/1 Not tainted 6.5.0-rc4-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2023
RIP: 0010:native_irq_disable arch/x86/include/asm/irqflags.h:37 [inline]
RIP: 0010:arch_local_irq_disable arch/x86/include/asm/irqflags.h:72 [inline]
RIP: 0010:acpi_safe_halt+0x1b/0x20 drivers/acpi/processor_idle.c:113
Code: ed c3 66 66 2e 0f 1f 84 00 00 00 00 00 66 90 65 48 8b 04 25 c0 b9 03 00 48 8b 00 a8 08 75 0c 66 90 0f 00 2d 37 1b a2 00 fb f4 <fa> c3 0f 1f 00 0f b6 47 08 3c 01 74 0b 3c 02 74 05 8b 7f 04 eb 9f
RSP: 0018:ffffc9000037fd60 EFLAGS: 00000246
RAX: 0000000000004000 RBX: 0000000000000001 RCX: ffffffff8a30d7ae
RDX: 0000000000000001 RSI: ffff888145e49000 RDI: ffff888145e49064
RBP: ffff888145e49064 R08: 0000000000000001 R09: ffffed1017326d9d
R10: ffff8880b9936ceb R11: 0000000000000000 R12: ffff8881422ee800
R13: ffffffff8d446200 R14: 0000000000000001 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f2dfe378038 CR3: 0000000022fa5000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <IRQ>
 </IRQ>
 <TASK>
 acpi_idle_enter+0xc5/0x160 drivers/acpi/processor_idle.c:707
 cpuidle_enter_state+0x82/0x500 drivers/cpuidle/cpuidle.c:267
 cpuidle_enter+0x4e/0xa0 drivers/cpuidle/cpuidle.c:388
 cpuidle_idle_call kernel/sched/idle.c:215 [inline]
 do_idle+0x315/0x3f0 kernel/sched/idle.c:282
 cpu_startup_entry+0x18/0x20 kernel/sched/idle.c:379
 start_secondary+0x200/0x290 arch/x86/kernel/smpboot.c:326
 secondary_startup_64_no_verify+0x167/0x16b
 </TASK>

Crashes (3):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets (help?) Manager Title
2023/08/02 09:20 upstream 5d0c230f1de8 df07ffe8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: rcu detected stall in shmem_fault
2023/07/28 22:21 upstream f837f0a3c948 92476829 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: rcu detected stall in shmem_fault
2023/08/18 01:30 net e9bbd6016947 74b106b6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: rcu detected stall in shmem_fault
* Struck through repros no longer work on HEAD.