syzbot


INFO: rcu detected stall in mld_dad_work (3)

Status: auto-obsoleted due to no activity on 2025/11/18 11:46
Subsystems: mm
First crash: 109d, last: 98d
Similar bugs (3)
- upstream — INFO: rcu detected stall in mld_dad_work (2) [net]: rank 1, count 1, last seen 667d ago, reported 667d ago, patched 0/29; auto-obsoleted due to no activity on 2024/04/28 18:46
- upstream — INFO: rcu detected stall in mld_dad_work [net]: rank 1, C repro, cause bisect: error, count 3, last seen 853d ago, reported 970d ago, patched 0/29; closed as invalid on 2023/09/22 04:28
- android-5-15 — BUG: soft lockup in mld_dad_work [origin:lts]: rank 1, syz repro, count 1, last seen 440d ago, reported 440d ago, patched 0/2; auto-obsoleted due to no activity on 2024/12/11 14:40

Sample crash report:
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	Tasks blocked on level-0 rcu_node (CPUs 0-1): P6019/1:b..l
rcu: 	(detected by 0, t=10502 jiffies, g=177057, q=370 ncpus=1)
task:kworker/0:3     state:R  running task     stack:24744 pid:6019  tgid:6019  ppid:2      task_flags:0x4208060 flags:0x00004000
Workqueue: mld mld_dad_work
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5357 [inline]
 __schedule+0x1190/0x5de0 kernel/sched/core.c:6961
 preempt_schedule_irq+0x51/0x90 kernel/sched/core.c:7288
 irqentry_exit+0x36/0x90 kernel/entry/common.c:197
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
RIP: 0010:lock_acquire+0x10/0x350 kernel/locking/lockdep.c:5828
Code: 66 2e 0f 1f 84 00 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 41 57 4d 89 cf 41 56 41 89 f6 41 55 <41> 89 d5 41 54 45 89 c4 55 89 cd 53 48 89 fb 48 83 ec 38 65 48 8b
RSP: 0018:ffffc900047f73e8 EFLAGS: 00000246
RAX: ffffffff816ab56d RBX: 0000000000000001 RCX: 0000000000000002
RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff8e5c1060
RBP: ffffc900047f74c8 R08: 0000000000000000 R09: 0000000000000000
R10: ffffc900047f7480 R11: 00000000000061b4 R12: ffffffff81a67470
R13: ffffc900047f7480 R14: 0000000000000000 R15: 0000000000000000
 rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 rcu_read_lock include/linux/rcupdate.h:841 [inline]
 class_rcu_constructor include/linux/rcupdate.h:1155 [inline]
 unwind_next_frame+0xd1/0x20a0 arch/x86/kernel/unwind_orc.c:479
 arch_stack_walk+0x94/0x100 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x8e/0xc0 kernel/stacktrace.c:122
 save_stack+0x160/0x1f0 mm/page_owner.c:156
 __reset_page_owner+0x84/0x1a0 mm/page_owner.c:308
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1395 [inline]
 __free_frozen_pages+0x7d5/0x10f0 mm/page_alloc.c:2895
 discard_slab mm/slub.c:2753 [inline]
 __put_partials+0x165/0x1c0 mm/slub.c:3218
 qlink_free mm/kasan/quarantine.c:163 [inline]
 qlist_free_all+0x4d/0x120 mm/kasan/quarantine.c:179
 kasan_quarantine_reduce+0x195/0x1e0 mm/kasan/quarantine.c:286
 __kasan_kmalloc+0x8a/0xb0 mm/kasan/common.c:396
 kasan_kmalloc include/linux/kasan.h:260 [inline]
 __do_kmalloc_node mm/slub.c:4365 [inline]
 __kmalloc_node_track_caller_noprof+0x221/0x510 mm/slub.c:4384
 kmalloc_reserve+0xef/0x2c0 net/core/skbuff.c:600
 __alloc_skb+0x166/0x380 net/core/skbuff.c:669
 alloc_skb include/linux/skbuff.h:1336 [inline]
 mld_newpack.isra.0+0x18e/0xa20 net/ipv6/mcast.c:1780
 add_grhead+0x299/0x340 net/ipv6/mcast.c:1891
 add_grec+0x11b5/0x1720 net/ipv6/mcast.c:2030
 mld_send_initial_cr+0x151/0x320 net/ipv6/mcast.c:2273
 mld_dad_work+0x32/0x1f0 net/ipv6/mcast.c:2299
 process_one_work+0x9cf/0x1b70 kernel/workqueue.c:3236
 process_scheduled_works kernel/workqueue.c:3319 [inline]
 worker_thread+0x6c8/0xf10 kernel/workqueue.c:3400
 kthread+0x3c5/0x780 kernel/kthread.c:463
 ret_from_fork+0x5d4/0x6f0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
rcu: rcu_preempt kthread starved for 394 jiffies! g177057 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:R  running task     stack:28000 pid:16    tgid:16    ppid:2      task_flags:0x208040 flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5357 [inline]
 __schedule+0x1190/0x5de0 kernel/sched/core.c:6961
 __schedule_loop kernel/sched/core.c:7043 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:7058
 schedule_timeout+0x123/0x290 kernel/time/sleep_timeout.c:99
 rcu_gp_fqs_loop+0x1ea/0xb00 kernel/rcu/tree.c:2083
 rcu_gp_kthread+0x270/0x380 kernel/rcu/tree.c:2285
 kthread+0x3c5/0x780 kernel/kthread.c:463
 ret_from_fork+0x5d4/0x6f0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
rcu: Stack dump where RCU GP kthread last ran:
CPU: 0 UID: 0 PID: 3426 Comm: kworker/R-bat_e Tainted: G     U    I         syzkaller #0 PREEMPT(full) 
Tainted: [U]=USER, [I]=FIRMWARE_WORKAROUND
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Workqueue: bat_events batadv_dat_purge
RIP: 0010:rcu_preempt_read_exit kernel/rcu/tree_plugin.h:398 [inline]
RIP: 0010:__rcu_read_unlock+0x73/0x550 kernel/rcu/tree_plugin.h:435
Code: ab 3f 34 12 49 8d bc 24 44 04 00 00 8b 9d 44 04 00 00 48 b8 00 00 00 00 00 fc ff df 48 89 fa 48 c1 ea 03 83 eb 01 0f b6 14 02 <48> 89 f8 83 e0 07 83 c0 03 38 d0 7c 08 84 d2 0f 85 f3 01 00 00 41
RSP: 0018:ffffc90000006ed8 EFLAGS: 00000202
RAX: dffffc0000000000 RBX: 0000000000000001 RCX: ffffc90000008001
RDX: 0000000000000000 RSI: ffffffff8c162c80 RDI: ffff888031134044
RBP: ffff888031133c00 R08: 0000000000000001 R09: 0000000000000000
R10: ffffc90000006f78 R11: 0000000000086dc6 R12: ffff888031133c00
R13: ffffc90000006f78 R14: ffffc90000007f40 R15: ffffc90000006fac
FS:  0000000000000000(0000) GS:ffff8881246c4000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000555580b695c8 CR3: 0000000074cb6000 CR4: 00000000003526f0
Call Trace:
 <IRQ>
 rcu_read_unlock include/linux/rcupdate.h:873 [inline]
 class_rcu_destructor include/linux/rcupdate.h:1155 [inline]
 unwind_next_frame+0x3fe/0x20a0 arch/x86/kernel/unwind_orc.c:479
 arch_stack_walk+0x94/0x100 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x8e/0xc0 kernel/stacktrace.c:122
 kasan_save_stack+0x33/0x60 mm/kasan/common.c:47
 kasan_save_track+0x14/0x30 mm/kasan/common.c:68
 unpoison_slab_object mm/kasan/common.c:330 [inline]
 __kasan_slab_alloc+0x89/0x90 mm/kasan/common.c:356
 kasan_slab_alloc include/linux/kasan.h:250 [inline]
 slab_post_alloc_hook mm/slub.c:4180 [inline]
 slab_alloc_node mm/slub.c:4229 [inline]
 kmem_cache_alloc_noprof+0x1cb/0x3b0 mm/slub.c:4236
 skb_clone+0x190/0x3f0 net/core/skbuff.c:2049
 deliver_clone net/bridge/br_forward.c:125 [inline]
 br_flood+0x37c/0x650 net/bridge/br_forward.c:249
 br_handle_frame_finish+0xf2d/0x1ca0 net/bridge/br_input.c:221
 br_nf_hook_thresh+0x304/0x410 net/bridge/br_netfilter_hooks.c:1170
 br_nf_pre_routing_finish_ipv6+0x76a/0xfb0 net/bridge/br_netfilter_ipv6.c:154
 NF_HOOK include/linux/netfilter.h:318 [inline]
 br_nf_pre_routing_ipv6+0x3cd/0x8c0 net/bridge/br_netfilter_ipv6.c:184
 br_nf_pre_routing+0x860/0x15b0 net/bridge/br_netfilter_hooks.c:508
 nf_hook_entry_hookfn include/linux/netfilter.h:158 [inline]
 nf_hook_bridge_pre net/bridge/br_input.c:283 [inline]
 br_handle_frame+0xad8/0x14b0 net/bridge/br_input.c:434
 __netif_receive_skb_core.constprop.0+0xa25/0x48c0 net/core/dev.c:5866
 __netif_receive_skb_one_core+0xb0/0x1e0 net/core/dev.c:5977
 __netif_receive_skb+0x1d/0x160 net/core/dev.c:6092
 process_backlog+0x442/0x15e0 net/core/dev.c:6444
 __napi_poll.constprop.0+0xba/0x550 net/core/dev.c:7494
 napi_poll net/core/dev.c:7557 [inline]
 net_rx_action+0xa9f/0xfe0 net/core/dev.c:7684
 handle_softirqs+0x219/0x8e0 kernel/softirq.c:579
 do_softirq kernel/softirq.c:480 [inline]
 do_softirq+0xb2/0xf0 kernel/softirq.c:467
 </IRQ>
 <TASK>
 __local_bh_enable_ip+0x100/0x120 kernel/softirq.c:407
 spin_unlock_bh include/linux/spinlock.h:396 [inline]
 __batadv_dat_purge.part.0+0x279/0x3a0 net/batman-adv/distributed-arp-table.c:185
 __batadv_dat_purge net/batman-adv/distributed-arp-table.c:166 [inline]
 batadv_dat_purge+0x4b/0xa0 net/batman-adv/distributed-arp-table.c:204
 process_one_work+0x9cf/0x1b70 kernel/workqueue.c:3236
 process_scheduled_works kernel/workqueue.c:3319 [inline]
 rescuer_thread+0x620/0xea0 kernel/workqueue.c:3496
 kthread+0x3c5/0x780 kernel/kthread.c:463
 ret_from_fork+0x5d4/0x6f0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
net_ratelimit: 11422 callbacks suppressed
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:96:e4:09:42:e6:55, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:96:e4:09:42:e6:55, vlan:0)
net_ratelimit: 16757 callbacks suppressed
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:96:e4:09:42:e6:55, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:96:e4:09:42:e6:55, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)

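For triage, the key figures in the report above are the stall duration and grace-period number from the `rcu_preempt detected stalls` block (t=10502 jiffies, g=177057) and the grace-period kthread starvation time (394 jiffies). A minimal sketch of pulling these out of a report programmatically — the regexes below are illustrative, matched only against the exact message formats shown in this log:

```python
import re

# Excerpt of the stall messages from the report above.
REPORT = """\
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: \tTasks blocked on level-0 rcu_node (CPUs 0-1): P6019/1:b..l
rcu: \t(detected by 0, t=10502 jiffies, g=177057, q=370 ncpus=1)
rcu: rcu_preempt kthread starved for 394 jiffies! g177057 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
"""

def parse_rcu_stall(text):
    """Extract stall duration, grace-period number, callback queue
    length, and GP-kthread starvation time from an RCU stall report."""
    out = {}
    m = re.search(r"t=(\d+) jiffies, g=(\d+), q=(\d+)", text)
    if m:
        out["stall_jiffies"] = int(m.group(1))
        out["grace_period"] = int(m.group(2))
        out["callbacks_queued"] = int(m.group(3))
    m = re.search(r"kthread starved for (\d+) jiffies", text)
    if m:
        out["kthread_starved_jiffies"] = int(m.group(1))
    return out

info = parse_rcu_stall(REPORT)
print(info)
```

Converting jiffies to seconds requires knowing the kernel's CONFIG_HZ, which this report does not state, so the sketch deliberately leaves the values in jiffies.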
Crashes (2):
- 2025/08/20 11:40 — upstream, kernel commit b19a97d57c15, syzkaller bd178e57, manager ci-qemu-gce-upstream-auto: INFO: rcu detected stall in mld_dad_work (assets: .config, console log, report, info, disk image, vmlinux, kernel image)
- 2025/08/09 17:45 — upstream, kernel commit c30a13538d9f, syzkaller 32a0e5ed, manager ci-qemu-gce-upstream-auto: INFO: rcu detected stall in mld_dad_work (assets: .config, console log, report, info, disk image, vmlinux, kernel image)