syzbot


INFO: rcu detected stall in sys_epoll_ctl (3)

Status: auto-obsoleted due to no activity on 2025/01/30 04:09
Subsystems: mm
First crash: 278d, last: 278d
Similar bugs (2)
Kernel    Title                                              Rank  Repro  Cause bisect  Fix bisect  Count  Last   Reported  Patched  Status
upstream  INFO: rcu detected stall in sys_epoll_ctl (2)  fs  1     -      -             -           1      497d   497d      0/29     auto-obsoleted due to no activity on 2024/06/25 13:25
upstream  INFO: rcu detected stall in sys_epoll_ctl      fs  1     -      -             -           1      1449d  1449d     0/29     auto-closed as invalid on 2021/10/17 15:58

Sample crash report:
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	Tasks blocked on level-0 rcu_node (CPUs 0-1): P11472/1:b..l
rcu: 	(detected by 0, t=10503 jiffies, g=30609, q=806 ncpus=2)
task:udevd           state:R  running task     stack:26496 pid:11472 tgid:11472 ppid:5214   flags:0x00004002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5328 [inline]
 __schedule+0x18af/0x4bd0 kernel/sched/core.c:6690
 preempt_schedule_irq+0xfb/0x1c0 kernel/sched/core.c:7012
 irqentry_exit+0x5e/0x90 kernel/entry/common.c:354
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
RIP: 0010:lock_is_held_type+0x13b/0x190
Code: 75 44 48 c7 04 24 00 00 00 00 9c 8f 04 24 f7 04 24 00 02 00 00 75 4c 41 f7 c4 00 02 00 00 74 01 fb 65 48 8b 04 25 28 00 00 00 <48> 3b 44 24 08 75 42 89 d8 48 83 c4 10 5b 41 5c 41 5d 41 5e 41 5f
RSP: 0018:ffffc9000319f7d8 EFLAGS: 00000206
RAX: bb512660e5f8aa00 RBX: 0000000000000001 RCX: 0000000080000000
RDX: 0000000000000000 RSI: ffffffff8c0adcc0 RDI: ffffffff8c6102e0
RBP: 0000000000000002 R08: ffffffff820b6c7e R09: 1ffffffff2859500
R10: dffffc0000000000 R11: fffffbfff2859501 R12: 0000000000000246
R13: ffff88807a9e8000 R14: 00000000ffffffff R15: ffffffff8e937e20
 lookup_page_ext mm/page_ext.c:254 [inline]
 page_ext_get+0x192/0x2a0 mm/page_ext.c:526
 __reset_page_owner+0x30/0x430 mm/page_owner.c:290
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1108 [inline]
 free_unref_page+0xcfb/0xf20 mm/page_alloc.c:2638
 discard_slab mm/slub.c:2677 [inline]
 __put_partials+0xeb/0x130 mm/slub.c:3145
 put_cpu_partial+0x17c/0x250 mm/slub.c:3220
 __slab_free+0x2ea/0x3d0 mm/slub.c:4449
 qlink_free mm/kasan/quarantine.c:163 [inline]
 qlist_free_all+0x9a/0x140 mm/kasan/quarantine.c:179
 kasan_quarantine_reduce+0x14f/0x170 mm/kasan/quarantine.c:286
 __kasan_slab_alloc+0x23/0x80 mm/kasan/common.c:329
 kasan_slab_alloc include/linux/kasan.h:247 [inline]
 slab_post_alloc_hook mm/slub.c:4085 [inline]
 slab_alloc_node mm/slub.c:4134 [inline]
 kmem_cache_alloc_noprof+0x135/0x2a0 mm/slub.c:4141
 ep_ptable_queue_proc+0x5b/0x210 fs/eventpoll.c:1423
 poll_wait include/linux/poll.h:45 [inline]
 signalfd_poll+0xba/0x1a0 fs/signalfd.c:56
 vfs_poll include/linux/poll.h:84 [inline]
 ep_item_poll fs/eventpoll.c:1030 [inline]
 ep_insert+0x10a3/0x1aa0 fs/eventpoll.c:1702
 do_epoll_ctl+0x8c9/0xf60 fs/eventpoll.c:2360
 __do_sys_epoll_ctl fs/eventpoll.c:2417 [inline]
 __se_sys_epoll_ctl fs/eventpoll.c:2408 [inline]
 __x64_sys_epoll_ctl+0x161/0x1a0 fs/eventpoll.c:2408
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f88b0323e5a
RSP: 002b:00007ffe713b1d58 EFLAGS: 00000202 ORIG_RAX: 00000000000000e9
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007f88b0323e5a
RDX: 0000000000000003 RSI: 0000000000000001 RDI: 0000000000000004
RBP: 00005590edb4e980 R08: 0000000000000007 R09: 6f58f4f509dcdc2e
R10: 00007ffe713b1d80 R11: 0000000000000202 R12: 00005590edb0ecd0
R13: 0000000000000004 R14: 00007ffe713b1d8c R15: 00005590edaec910
 </TASK>
rcu: rcu_preempt kthread starved for 10448 jiffies! g30609 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:R  running task     stack:24912 pid:17    tgid:17    ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5328 [inline]
 __schedule+0x18af/0x4bd0 kernel/sched/core.c:6690
 __schedule_loop kernel/sched/core.c:6767 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6782
 schedule_timeout+0x1be/0x310 kernel/time/timer.c:2615
 rcu_gp_fqs_loop+0x2df/0x1330 kernel/rcu/tree.c:2045
 rcu_gp_kthread+0xa7/0x3b0 kernel/rcu/tree.c:2247
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
rcu: Stack dump where RCU GP kthread last ran:
CPU: 0 UID: 0 PID: 5895 Comm: kworker/0:4 Not tainted 6.12.0-rc4-syzkaller-00220-gd80a30913084 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Workqueue: wg-crypt-wg0 wg_packet_encrypt_worker
RIP: 0010:rcu_is_watching_curr_cpu include/linux/context_tracking.h:128 [inline]
RIP: 0010:rcu_is_watching+0x3a/0xb0 kernel/rcu/tree.c:737
Code: e8 1b 3f 4c 0a 89 c3 83 f8 08 73 7a 49 bf 00 00 00 00 00 fc ff df 4c 8d 34 dd 50 ea 31 8e 4c 89 f0 48 c1 e8 03 42 80 3c 38 00 <74> 08 4c 89 f7 e8 9c d6 83 00 48 c7 c3 98 7e 03 00 49 03 1e 48 89
RSP: 0018:ffffc90000006650 EFLAGS: 00000246
RAX: 1ffffffff1c63d4a RBX: 0000000000000000 RCX: ffff88802be35a00
RDX: 0000000000000100 RSI: ffffffff8c6102c0 RDI: ffffffff8c610280
RBP: ffffc90000006770 R08: ffffffff8a8e8bf7 R09: ffffffff8a9a4f84
R10: 0000000000000002 R11: ffff88802be35a00 R12: 1ffff1102f58bd9b
R13: ffff888062cf0ed9 R14: ffffffff8e31ea50 R15: dffffc0000000000
FS:  0000000000000000(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f88b032335c CR3: 000000000e734000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <IRQ>
 rcu_read_lock_held_common kernel/rcu/update.c:109 [inline]
 rcu_read_lock_held+0x15/0x50 kernel/rcu/update.c:349
 skb_dst include/linux/skbuff.h:1143 [inline]
 br_drop_fake_rtable include/linux/netfilter_bridge.h:21 [inline]
 br_dev_queue_push_xmit+0x107/0x8d0 net/bridge/br_forward.c:39
 NF_HOOK+0x700/0x7c0 include/linux/netfilter.h:314
 br_nf_post_routing+0xa20/0xe80 net/bridge/br_netfilter_hooks.c:994
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_slow+0xc3/0x220 net/netfilter/core.c:626
 nf_hook include/linux/netfilter.h:269 [inline]
 NF_HOOK+0x2a7/0x460 include/linux/netfilter.h:312
 br_forward_finish+0xd8/0x130 net/bridge/br_forward.c:66
 br_nf_forward_finish+0xb49/0xfb0 net/bridge/br_netfilter_hooks.c:690
 NF_HOOK+0x700/0x7c0 include/linux/netfilter.h:314
 br_nf_forward_ip+0x61e/0x7b0 net/bridge/br_netfilter_hooks.c:744
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_slow+0xc3/0x220 net/netfilter/core.c:626
 nf_hook include/linux/netfilter.h:269 [inline]
 NF_HOOK+0x2a7/0x460 include/linux/netfilter.h:312
 __br_forward+0x489/0x660 net/bridge/br_forward.c:115
 deliver_clone net/bridge/br_forward.c:131 [inline]
 maybe_deliver+0xb3/0x150 net/bridge/br_forward.c:190
 br_flood+0x2e4/0x660 net/bridge/br_forward.c:236
 br_handle_frame_finish+0x18ba/0x1fe0 net/bridge/br_input.c:215
 br_nf_hook_thresh+0x472/0x590
 br_nf_pre_routing_finish_ipv6+0xaa0/0xdd0
 NF_HOOK include/linux/netfilter.h:314 [inline]
 br_nf_pre_routing_ipv6+0x379/0x770 net/bridge/br_netfilter_ipv6.c:184
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_bridge_pre net/bridge/br_input.c:277 [inline]
 br_handle_frame+0x9fd/0x1530 net/bridge/br_input.c:424
 __netif_receive_skb_core+0x13e8/0x4570 net/core/dev.c:5564
 __netif_receive_skb_one_core net/core/dev.c:5668 [inline]
 __netif_receive_skb+0x12f/0x650 net/core/dev.c:5783
 process_backlog+0x662/0x15b0 net/core/dev.c:6115
 __napi_poll+0xcb/0x490 net/core/dev.c:6779
 napi_poll net/core/dev.c:6848 [inline]
 net_rx_action+0x89b/0x1240 net/core/dev.c:6970
 handle_softirqs+0x2c5/0x980 kernel/softirq.c:554
 do_softirq+0x11b/0x1e0 kernel/softirq.c:455
 </IRQ>
 <TASK>
 __local_bh_enable_ip+0x1bb/0x200 kernel/softirq.c:382
 spin_unlock_bh include/linux/spinlock.h:396 [inline]
 ptr_ring_consume_bh include/linux/ptr_ring.h:367 [inline]
 wg_packet_encrypt_worker+0x1561/0x1610 drivers/net/wireguard/send.c:293
 process_one_work kernel/workqueue.c:3229 [inline]
 process_scheduled_works+0xa63/0x1850 kernel/workqueue.c:3310
 worker_thread+0x870/0xd30 kernel/workqueue.c:3391
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
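
For context, the stalled task's stack shows epoll_ctl() adding a signalfd into an epoll instance (ep_insert -> ep_item_poll -> signalfd_poll -> ep_ptable_queue_proc), with the stall detected while the task was preempted inside a slab allocation on that path. The sketch below is a minimal, hypothetical illustration of that user-visible syscall sequence only; it is not the syzkaller reproducer (no repro is attached to this bug).

```c
/* Illustrative sketch only: register a signalfd with an epoll instance
 * via EPOLL_CTL_ADD, the same user-visible path seen in the stalled
 * task's stack above. NOT the syzkaller reproducer. */
#include <signal.h>
#include <stdio.h>
#include <sys/epoll.h>
#include <sys/signalfd.h>
#include <unistd.h>

int main(void)
{
	sigset_t mask;
	sigemptyset(&mask);
	sigaddset(&mask, SIGUSR1);
	sigprocmask(SIG_BLOCK, &mask, NULL);

	/* This fd is polled through signalfd_poll(), as in the trace. */
	int sfd = signalfd(-1, &mask, SFD_NONBLOCK);
	int epfd = epoll_create1(0);
	if (sfd < 0 || epfd < 0) {
		perror("setup");
		return 1;
	}

	struct epoll_event ev = { .events = EPOLLIN, .data.fd = sfd };
	/* ep_insert()/ep_item_poll() run under this syscall; the report's
	 * stall was detected while the task was preempted in a slab
	 * allocation made on this path. */
	if (epoll_ctl(epfd, EPOLL_CTL_ADD, sfd, &ev) < 0) {
		perror("epoll_ctl");
		return 1;
	}

	close(epfd);
	close(sfd);
	return 0;
}
```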

Crashes (1):
Time              Kernel  Commit        Syzkaller  Config   Log          Report  Syz repro  C repro  VM info  Assets                                 Manager                         Title
2024/11/01 04:07  net     d80a30913084  96eb609f   .config  console log  report  -          -        info     [disk image] [vmlinux] [kernel image]  ci-upstream-net-this-kasan-gce  INFO: rcu detected stall in sys_epoll_ctl