syzbot


INFO: rcu detected stall in call_usermodehelper_exec_async (4)

Status: upstream: reported C repro on 2025/11/27 14:55
Subsystems: mm
Reported-by: syzbot+be81254ae29faa71cdfe@syzkaller.appspotmail.com
First crash: 284d, last: 5d17h
Cause bisection: failed (error log, bisect log)

Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [udf?] INFO: rcu detected stall in call_usermodehelper_exec_async (4) | 0 (1) | 2025/11/27 14:55
Similar bugs (7)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-6.1 | INFO: rcu detected stall in call_usermodehelper_exec_async (2) | 1 | | | | 1 | 262d | 262d | 0/3 | auto-obsoleted due to no activity on 2025/10/01 17:46
linux-6.1 | INFO: rcu detected stall in call_usermodehelper_exec_async | 1 | | | | 1 | 643d | 643d | 0/3 | auto-obsoleted due to no activity on 2024/09/16 03:00
upstream | INFO: rcu detected stall in call_usermodehelper_exec_async [mm] | 1 | | | | 1 | 2292d | 2292d | 0/29 | closed as invalid on 2019/12/04 14:14
linux-5.15 | INFO: rcu detected stall in call_usermodehelper_exec_async | 1 | | | | 1 | 985d | 985d | 0/3 | auto-obsoleted due to no activity on 2023/10/09 16:55
upstream | INFO: rcu detected stall in call_usermodehelper_exec_async (3) [mm] | 1 | | | | 23 | 392d | 888d | 0/29 | auto-obsoleted due to no activity on 2025/05/14 21:52
upstream | INFO: rcu detected stall in call_usermodehelper_exec_async (2) [cgroups] [mm] | 1 | | | | 1 | 1382d | 1382d | 0/29 | auto-closed as invalid on 2022/08/28 20:38
upstream | BUG: soft lockup in call_usermodehelper_exec_async [mm] | 1 | | | | 2 | 1587d | 1638d | 0/29 | closed as dup on 2021/09/17 07:37
Last patch testing requests (1)
Created | Duration | User | Patch | Repo | Result
2025/12/07 22:22 | 37m | | retest repro | linux-next | OK (log)

Sample crash report:
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	Tasks blocked on level-0 rcu_node (CPUs 0-1): P30968/1:b..l
rcu: 	(detected by 0, t=10502 jiffies, g=182361, q=3077 ncpus=1)
task:kworker/u10:17  state:R  running task     stack:27056 pid:30968 tgid:30968 ppid:19748  task_flags:0x8040 flags:0x00080000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0xfee/0x6120 kernel/sched/core.c:6908
 preempt_schedule_irq+0x50/0x90 kernel/sched/core.c:7235
 irqentry_exit+0x17b/0x670 kernel/entry/common.c:239
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:697
RIP: 0010:lock_acquire+0x5e/0x380 kernel/locking/lockdep.c:5872
Code: 05 fb f8 28 12 83 f8 07 0f 87 f0 00 00 00 48 0f a3 05 c6 74 f5 0e 0f 82 c2 02 00 00 8b 35 8e a8 f5 0e 85 f6 0f 85 dd 00 00 00 <48> 8b 44 24 30 65 48 2b 05 9d f8 28 12 0f 85 02 03 00 00 48 83 c4
RSP: 0018:ffffc90004496f10 EFLAGS: 00000206
RAX: 0000000000000046 RBX: 0000000000000000 RCX: 0000000000000001
RDX: 0000000000000000 RSI: ffffffff8de57650 RDI: ffffffff8c1af920
RBP: ffffffff8e7e9220 R08: 0000000028bf3b1f R09: 0000000000000007
R10: 0000000000000200 R11: 0000000000000000 R12: 0000000000000002
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
 rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 rcu_read_lock include/linux/rcupdate.h:850 [inline]
 class_rcu_constructor include/linux/rcupdate.h:1193 [inline]
 unwind_next_frame+0xd1/0x1ea0 arch/x86/kernel/unwind_orc.c:495
 arch_stack_walk+0x94/0xf0 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x8e/0xc0 kernel/stacktrace.c:122
 save_stack+0x162/0x1e0 mm/page_owner.c:165
 __set_page_owner+0x8c/0x540 mm/page_owner.c:341
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x153/0x170 mm/page_alloc.c:1889
 prep_new_page mm/page_alloc.c:1897 [inline]
 get_page_from_freelist+0x111d/0x3140 mm/page_alloc.c:3962
 __alloc_frozen_pages_noprof+0x27c/0x2ba0 mm/page_alloc.c:5250
 alloc_pages_mpol+0x1fb/0x550 mm/mempolicy.c:2484
 folio_alloc_mpol_noprof+0x36/0x340 mm/mempolicy.c:2503
 vma_alloc_folio_noprof+0xed/0x1d0 mm/mempolicy.c:2538
 folio_prealloc mm/memory.c:1204 [inline]
 alloc_anon_folio mm/memory.c:5209 [inline]
 do_anonymous_page+0xb3a/0x1fb0 mm/memory.c:5266
 do_pte_missing mm/memory.c:4475 [inline]
 handle_pte_fault mm/memory.c:6317 [inline]
 __handle_mm_fault+0x1d42/0x2b60 mm/memory.c:6455
 handle_mm_fault+0x36d/0xa20 mm/memory.c:6624
 faultin_page mm/gup.c:1126 [inline]
 __get_user_pages+0xf9c/0x34d0 mm/gup.c:1428
 __get_user_pages_locked mm/gup.c:1692 [inline]
 get_user_pages_remote+0x3d2/0xb10 mm/gup.c:2614
 get_arg_page+0xf4/0x310 fs/exec.c:163
 copy_string_kernel+0x17d/0x500 fs/exec.c:566
 kernel_execve+0x215/0x3a0 fs/exec.c:1879
 call_usermodehelper_exec_async+0x239/0x4b0 kernel/umh.c:109
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
rcu: rcu_preempt kthread starved for 228 jiffies! g182361 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:R  running task     stack:27832 pid:16    tgid:16    ppid:2      task_flags:0x208040 flags:0x00080000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0xfee/0x6120 kernel/sched/core.c:6908
 __schedule_loop kernel/sched/core.c:6990 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7005
 schedule_timeout+0x127/0x280 kernel/time/sleep_timeout.c:99
 rcu_gp_fqs_loop+0x1a9/0x900 kernel/rcu/tree.c:2095
 rcu_gp_kthread+0x179/0x230 kernel/rcu/tree.c:2297
 kthread+0x370/0x450 kernel/kthread.c:436
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
rcu: Stack dump where RCU GP kthread last ran:
CPU: 0 UID: 0 PID: 3409 Comm: kworker/R-bat_e Tainted: G     U  W    L XTNJ syzkaller #0 PREEMPT(full) 
Tainted: [U]=USER, [W]=WARN, [L]=SOFTLOCKUP, [X]=AUX, [T]=RANDSTRUCT, [N]=TEST, [J]=FWCTL
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Workqueue: bat_events batadv_tt_purge
RIP: 0010:check_kcov_mode kernel/kcov.c:183 [inline]
RIP: 0010:__sanitizer_cov_trace_pc+0x2f/0x70 kernel/kcov.c:217
Code: 8b 05 85 40 05 12 48 8b 34 24 65 48 8b 15 61 40 05 12 a9 00 01 ff 00 74 1b f6 c4 01 74 07 a9 00 00 ff 00 74 05 e9 11 0c 88 09 <8b> 82 94 16 00 00 85 c0 74 f1 8b 82 70 16 00 00 83 f8 02 75 e6 48
RSP: 0018:ffffc90000006b48 EFLAGS: 00000246
RAX: 0000000080000101 RBX: ffffc90000006d10 RCX: 0000000000000001
RDX: ffff888033cddb80 RSI: ffffffff8a35fb9b RDI: 0000000000000006
RBP: ffffc90000006c90 R08: 0000000000000005 R09: 0000000000000003
R10: 0000000000000084 R11: 0000000000000000 R12: ffff88805192c010
R13: 000000000000003a R14: 0000000000000000 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff88812434b000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f9dfc7e7c38 CR3: 000000007e362000 CR4: 00000000003526f0
Call Trace:
 <IRQ>
 ip6_multipath_l3_keys.constprop.0+0x3ab/0xbd0 net/ipv6/route.c:2392
 rt6_multipath_hash+0x617/0x1800 net/ipv6/route.c:2536
 ip6_route_input+0xad7/0xc50 net/ipv6/route.c:2653
 ip6_rcv_finish_core.isra.0+0x1a9/0x5a0 net/ipv6/ip6_input.c:66
 ip6_rcv_finish+0x130/0x300 net/ipv6/ip6_input.c:77
 ip_sabotage_in+0x21e/0x290 net/bridge/br_netfilter_hooks.c:990
 nf_hook_entry_hookfn include/linux/netfilter.h:158 [inline]
 nf_hook_slow+0xbf/0x220 net/netfilter/core.c:623
 nf_hook.constprop.0+0x2a6/0x750 include/linux/netfilter.h:273
 NF_HOOK include/linux/netfilter.h:316 [inline]
 ipv6_rcv+0xa4/0x3d0 net/ipv6/ip6_input.c:311
 __netif_receive_skb_one_core+0x12d/0x1e0 net/core/dev.c:6164
 __netif_receive_skb+0x1f/0x120 net/core/dev.c:6277
 netif_receive_skb_internal net/core/dev.c:6363 [inline]
 netif_receive_skb+0x139/0x820 net/core/dev.c:6422
 NF_HOOK include/linux/netfilter.h:318 [inline]
 NF_HOOK include/linux/netfilter.h:312 [inline]
 br_pass_frame_up+0x346/0x490 net/bridge/br_input.c:70
 br_handle_frame_finish+0xa70/0x1f60 net/bridge/br_input.c:235
 br_nf_hook_thresh+0x30d/0x420 net/bridge/br_netfilter_hooks.c:1167
 br_nf_pre_routing_finish_ipv6+0x769/0xfb0 net/bridge/br_netfilter_ipv6.c:154
 NF_HOOK include/linux/netfilter.h:318 [inline]
 br_nf_pre_routing_ipv6+0x39c/0x8b0 net/bridge/br_netfilter_ipv6.c:184
 br_nf_pre_routing+0x93b/0x1510 net/bridge/br_netfilter_hooks.c:508
 nf_hook_entry_hookfn include/linux/netfilter.h:158 [inline]
 nf_hook_bridge_pre net/bridge/br_input.c:291 [inline]
 br_handle_frame+0xcdd/0x1520 net/bridge/br_input.c:442
 __netif_receive_skb_core.constprop.0+0x6c5/0x3550 net/core/dev.c:6051
 __netif_receive_skb_one_core+0xb0/0x1e0 net/core/dev.c:6162
 __netif_receive_skb+0x1f/0x120 net/core/dev.c:6277
 process_backlog+0x37a/0x1580 net/core/dev.c:6628
 __napi_poll.constprop.0+0xaf/0x450 net/core/dev.c:7692
 napi_poll net/core/dev.c:7755 [inline]
 net_rx_action+0xa40/0xf20 net/core/dev.c:7912
 handle_softirqs+0x1eb/0x9e0 kernel/softirq.c:622
 do_softirq kernel/softirq.c:523 [inline]
 do_softirq+0xac/0xe0 kernel/softirq.c:510
 </IRQ>
 <TASK>
 __local_bh_enable_ip+0xf8/0x120 kernel/softirq.c:450
 spin_unlock_bh include/linux/spinlock.h:395 [inline]
 batadv_tt_global_purge net/batman-adv/translation-table.c:2250 [inline]
 batadv_tt_purge+0x25d/0xbd0 net/batman-adv/translation-table.c:3510
 process_one_work+0x9d7/0x1920 kernel/workqueue.c:3275
 process_scheduled_works kernel/workqueue.c:3358 [inline]
 rescuer_thread+0x902/0x1490 kernel/workqueue.c:3582
 kthread+0x370/0x450 kernel/kthread.c:436
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
net_ratelimit: 12753 callbacks suppressed
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:06:45:b3:db:97:8f, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:06:45:b3:db:97:8f, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
net_ratelimit: 13578 callbacks suppressed
bridge0: received packet on veth0_to_bridge with own address as source address (addr:06:45:b3:db:97:8f, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:06:45:b3:db:97:8f, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:06:45:b3:db:97:8f, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)

Crashes (9):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2026/03/07 18:48 | upstream | 4ae12d8bd9a8 | 5cb44a80 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-qemu-gce-upstream-auto | INFO: rcu detected stall in call_usermodehelper_exec_async
2025/11/08 20:08 | upstream | e811c33b1f13 | 4e1406b4 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-qemu-gce-upstream-auto | INFO: rcu detected stall in call_usermodehelper_exec_async
2025/10/30 20:52 | upstream | e53642b87a4f | fd2207e7 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-selinux-root | INFO: rcu detected stall in call_usermodehelper_exec_async
2025/08/19 15:22 | upstream | be48bcf004f9 | 523f460e | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-root | INFO: rcu detected stall in call_usermodehelper_exec_async
2025/07/19 17:01 | upstream | 4871b7cb27f4 | 7117feec | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-smack-root | INFO: rcu detected stall in call_usermodehelper_exec_async
2025/07/11 04:54 | upstream | bc9ff192a6c9 | 3cda49cf | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce | INFO: rcu detected stall in call_usermodehelper_exec_async
2025/06/01 16:48 | upstream | 7d4e49a77d99 | 3d2f584d | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce | INFO: rcu detected stall in call_usermodehelper_exec_async
2025/12/31 03:21 | net | 1adaea51c61b | d6526ea3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | INFO: rcu detected stall in call_usermodehelper_exec_async
2025/11/23 14:41 | linux-next | d724c6f85e80 | 4fb8ef37 | .config | console log | report | syz / log | C | | [disk image] [vmlinux] [kernel image] [mounted in repro] | ci-upstream-linux-next-kasan-gce-root | INFO: rcu detected stall in call_usermodehelper_exec_async
* Struck through repros no longer work on HEAD.