syzbot


INFO: rcu detected stall in sys_bpf (6)

Status: auto-obsoleted due to no activity on 2022/10/28 06:32
Subsystems: net
First crash: 783d, last: 633d
Similar bugs (11)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: rcu detected stall in sys_bpf (5) bpf | C | unreliable | | 56 | 822d | 915d | 0/26 | closed as invalid on 2022/02/08 10:34
linux-5.15 | INFO: rcu detected stall in sys_bpf (2) | | | | 1 | 14d | 14d | 0/3 | upstream: reported on 2024/04/09 19:57
upstream | INFO: rcu detected stall in sys_bpf (3) bpf | | | | 4 | 1567d | 1567d | 0/26 | closed as invalid on 2020/01/09 08:13
linux-5.15 | INFO: rcu detected stall in sys_bpf | | | | 2 | 138d | 181d | 0/3 | auto-obsoleted due to no activity on 2024/03/16 17:33
upstream | INFO: rcu detected stall in sys_bpf bpf net | | | | 3 | 1733d | 1854d | 0/26 | auto-closed as invalid on 2019/11/23 00:18
upstream | INFO: rcu detected stall in sys_bpf (2) bpf | | | | 12 | 1602d | 1603d | 0/26 | closed as invalid on 2019/12/04 14:14
upstream | INFO: rcu detected stall in sys_bpf (8) bpf | | | | 1 | 156d | 156d | 0/26 | auto-obsoleted due to no activity on 2024/02/17 03:10
linux-6.1 | INFO: rcu detected stall in sys_bpf | | | | 1 | 3d13h | 3d13h | 0/3 | upstream: reported on 2024/04/20 10:27
linux-4.19 | INFO: rcu detected stall in sys_bpf | | | | 3 | 1551d | 1681d | 0/1 | auto-closed as invalid on 2020/05/23 14:47
upstream | INFO: rcu detected stall in sys_bpf (4) bpf net | | | | 3 | 1520d | 1559d | 0/26 | auto-closed as invalid on 2020/05/24 13:03
android-5-15 | BUG: soft lockup in sys_bpf origin:lts | C | | | 29 | 10h36m | 25d | 0/2 | upstream: reported C repro on 2024/03/29 12:25

Sample crash report:
rcu: INFO: rcu_preempt self-detected stall on CPU
rcu: 	0-...!: (10500 ticks this GP) idle=457/1/0x4000000000000000 softirq=130217/130217 fqs=496 
	(t=10502 jiffies g=203041 q=62 ncpus=2)
rcu: rcu_preempt kthread starved for 9509 jiffies! g203041 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:R  running task     stack:28312 pid:   16 ppid:     2 flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5146 [inline]
 __schedule+0xa00/0x4b50 kernel/sched/core.c:6458
 schedule+0xd2/0x1f0 kernel/sched/core.c:6530
 schedule_timeout+0x14a/0x2a0 kernel/time/timer.c:1935
 rcu_gp_fqs_loop+0x186/0x810 kernel/rcu/tree.c:1999
 rcu_gp_kthread+0x1de/0x320 kernel/rcu/tree.c:2187
 kthread+0x2e9/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
 </TASK>
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1 skipped: idling at native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
NMI backtrace for cpu 1 skipped: idling at arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
NMI backtrace for cpu 1 skipped: idling at acpi_safe_halt drivers/acpi/processor_idle.c:111 [inline]
NMI backtrace for cpu 1 skipped: idling at acpi_idle_do_entry+0x1c9/0x240 drivers/acpi/processor_idle.c:554
NMI backtrace for cpu 0
CPU: 0 PID: 2001 Comm: syz-executor.3 Not tainted 5.19.0-rc8-syzkaller-00146-ge65c6a46df94 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/22/2022
Call Trace:
 <IRQ>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
 nmi_cpu_backtrace.cold+0x47/0x144 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1e6/0x230 lib/nmi_backtrace.c:62
 trigger_single_cpu_backtrace include/linux/nmi.h:164 [inline]
 rcu_dump_cpu_stacks+0x262/0x3f0 kernel/rcu/tree_stall.h:371
 print_cpu_stall kernel/rcu/tree_stall.h:667 [inline]
 check_cpu_stall kernel/rcu/tree_stall.h:751 [inline]
 rcu_pending kernel/rcu/tree.c:3977 [inline]
 rcu_sched_clock_irq.cold+0x144/0x8fc kernel/rcu/tree.c:2675
 update_process_times+0x11a/0x1a0 kernel/time/timer.c:1839
 tick_sched_handle+0x9b/0x180 kernel/time/tick-sched.c:243
 tick_sched_timer+0xee/0x120 kernel/time/tick-sched.c:1480
 __run_hrtimer kernel/time/hrtimer.c:1685 [inline]
 __hrtimer_run_queues+0x1c0/0xe50 kernel/time/hrtimer.c:1749
 hrtimer_interrupt+0x31c/0x790 kernel/time/hrtimer.c:1811
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1095 [inline]
 __sysvec_apic_timer_interrupt+0x146/0x530 arch/x86/kernel/apic/apic.c:1112
 sysvec_apic_timer_interrupt+0x8e/0xc0 arch/x86/kernel/apic/apic.c:1106
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:649
RIP: 0010:__sanitizer_cov_trace_pc+0x4c/0x60 kernel/kcov.c:205
Code: 0e 85 c9 74 35 8b 82 a4 15 00 00 85 c0 74 2b 8b 82 80 15 00 00 83 f8 02 75 20 48 8b 8a 88 15 00 00 8b 92 84 15 00 00 48 8b 01 <48> 83 c0 01 48 39 c2 76 07 48 89 01 48 89 34 c1 c3 0f 1f 00 41 55
RSP: 0018:ffffc9000365f238 EFLAGS: 00000246
RAX: 000000000003ffff RBX: 0000000000000008 RCX: ffffc900057ca000
RDX: 0000000000040000 RSI: ffffffff87858a93 RDI: 0000000000000005
RBP: 0000000000000003 R08: 0000000000000005 R09: 0000000000001fff
R10: 0000000000000007 R11: 0000000000000001 R12: ffff8880372a02c0
R13: 0000000000000007 R14: 0000000000000000 R15: dffffc0000000000
 cake_heapify+0x163/0x3d0 net/sched/sch_cake.c:1426
 cake_drop net/sched/sch_cake.c:1515 [inline]
 cake_enqueue+0x13a7/0x39f0 net/sched/sch_cake.c:1903
 dev_qdisc_enqueue+0x40/0x300 net/core/dev.c:3785
 __dev_xmit_skb net/core/dev.c:3874 [inline]
 __dev_queue_xmit+0x2093/0x3900 net/core/dev.c:4221
 dev_queue_xmit include/linux/netdevice.h:2994 [inline]
 __bpf_tx_skb net/core/filter.c:2114 [inline]
 __bpf_redirect_no_mac net/core/filter.c:2139 [inline]
 __bpf_redirect+0x5fe/0xe40 net/core/filter.c:2162
 ____bpf_clone_redirect net/core/filter.c:2429 [inline]
 bpf_clone_redirect+0x2ae/0x420 net/core/filter.c:2401
 ___bpf_prog_run+0x369d/0x7960 kernel/bpf/core.c:1852
 __bpf_prog_run512+0x91/0xd0 kernel/bpf/core.c:2077
 bpf_dispatcher_nop_func include/linux/bpf.h:869 [inline]
 __bpf_prog_run include/linux/filter.h:628 [inline]
 bpf_prog_run include/linux/filter.h:635 [inline]
 bpf_test_run+0x381/0x9c0 net/bpf/test_run.c:402
 bpf_prog_test_run_skb+0xb5e/0x1e10 net/bpf/test_run.c:1155
 bpf_prog_test_run kernel/bpf/syscall.c:3591 [inline]
 __sys_bpf+0x15c1/0x5700 kernel/bpf/syscall.c:4935
 __do_sys_bpf kernel/bpf/syscall.c:5021 [inline]
 __se_sys_bpf kernel/bpf/syscall.c:5019 [inline]
 __x64_sys_bpf+0x75/0xb0 kernel/bpf/syscall.c:5019
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f2b0a289209

================================
WARNING: inconsistent lock state
5.19.0-rc8-syzkaller-00146-ge65c6a46df94 #0 Not tainted
--------------------------------
inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage.
syz-executor.3/2001 [HC1[1]:SC0[2]:HE0:SE0] takes:
ffffffff8beb3e78 (vmap_area_lock){?.+.}-{2:2}, at: spin_lock include/linux/spinlock.h:349 [inline]
ffffffff8beb3e78 (vmap_area_lock){?.+.}-{2:2}, at: find_vmap_area+0x1c/0x130 mm/vmalloc.c:1805
{HARDIRQ-ON-W} state was registered at:
  lock_acquire kernel/locking/lockdep.c:5665 [inline]
  lock_acquire+0x1ab/0x570 kernel/locking/lockdep.c:5630
  __raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
  _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
  spin_lock include/linux/spinlock.h:349 [inline]
  alloc_vmap_area+0xa49/0x1f00 mm/vmalloc.c:1586
  __get_vm_area_node+0x142/0x3f0 mm/vmalloc.c:2453
  __vmalloc_node_range+0x250/0x13e0 mm/vmalloc.c:3125
  __vmalloc_node mm/vmalloc.c:3230 [inline]
  __vmalloc+0x69/0x80 mm/vmalloc.c:3244
  pcpu_mem_zalloc mm/percpu.c:516 [inline]
  pcpu_mem_zalloc+0x51/0xa0 mm/percpu.c:508
  pcpu_alloc_chunk mm/percpu.c:1454 [inline]
  pcpu_create_chunk+0xd7/0x930 mm/percpu-vm.c:338
  pcpu_alloc+0x1012/0x13d0 mm/percpu.c:1834
  alloc_kmem_cache_cpus.constprop.0+0x29/0xc0 mm/slab.c:1729
  do_tune_cpucache+0x37/0x230 mm/slab.c:3844
  enable_cpucache+0x3c/0xa0 mm/slab.c:3938
  kmem_cache_init_late+0x33/0x66 mm/slab.c:1276
  start_kernel+0x2f5/0x48f init/main.c:1058
  secondary_startup_64_no_verify+0xce/0xdb
irq event stamp: 1862469
hardirqs last  enabled at (1862468): [<ffffffff816a6e3a>] seqcount_lockdep_reader_access include/linux/seqlock.h:104 [inline]
hardirqs last  enabled at (1862468): [<ffffffff816a6e3a>] timekeeping_get_delta kernel/time/timekeeping.c:253 [inline]
hardirqs last  enabled at (1862468): [<ffffffff816a6e3a>] timekeeping_get_ns kernel/time/timekeeping.c:387 [inline]
hardirqs last  enabled at (1862468): [<ffffffff816a6e3a>] ktime_get+0x38a/0x470 kernel/time/timekeeping.c:847
hardirqs last disabled at (1862469): [<ffffffff897718db>] sysvec_apic_timer_interrupt+0xb/0xc0 arch/x86/kernel/apic/apic.c:1106
softirqs last  enabled at (414): [<ffffffff81882f3f>] spin_unlock_bh include/linux/spinlock.h:394 [inline]
softirqs last  enabled at (414): [<ffffffff81882f3f>] bpf_prog_alloc_id kernel/bpf/syscall.c:1961 [inline]
softirqs last  enabled at (414): [<ffffffff81882f3f>] bpf_prog_load+0x10bf/0x2250 kernel/bpf/syscall.c:2583
softirqs last disabled at (482): [<ffffffff8753ab23>] __dev_queue_xmit+0x1e3/0x3900 net/core/dev.c:4172

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(vmap_area_lock);
  <Interrupt>
    lock(vmap_area_lock);

 *** DEADLOCK ***

4 locks held by syz-executor.3/2001:
 #0: ffffffff8bd846e0 (rcu_read_lock){....}-{1:2}, at: bpf_test_timer_enter+0x0/0x160 net/bpf/test_run.c:755
 #1: ffffffff8bd84680 (rcu_read_lock_bh){....}-{1:2}, at: __dev_queue_xmit+0x1e3/0x3900 net/core/dev.c:4172
 #2: ffff8880372a0108 (&sch->q.lock){+.-.}-{2:2}, at: spin_lock include/linux/spinlock.h:349 [inline]
 #2: ffff8880372a0108 (&sch->q.lock){+.-.}-{2:2}, at: __dev_xmit_skb net/core/dev.c:3849 [inline]
 #2: ffff8880372a0108 (&sch->q.lock){+.-.}-{2:2}, at: __dev_queue_xmit+0x1f88/0x3900 net/core/dev.c:4221
 #3: ffffffff8bd8e698 (rcu_node_0){-.-.}-{2:2}, at: rcu_dump_cpu_stacks+0xd4/0x3f0 kernel/rcu/tree_stall.h:366

stack backtrace:
CPU: 0 PID: 2001 Comm: syz-executor.3 Not tainted 5.19.0-rc8-syzkaller-00146-ge65c6a46df94 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/22/2022
Call Trace:
 <IRQ>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
 print_usage_bug kernel/locking/lockdep.c:3961 [inline]
 valid_state kernel/locking/lockdep.c:3973 [inline]
 mark_lock_irq kernel/locking/lockdep.c:4176 [inline]
 mark_lock.part.0.cold+0x18/0xd8 kernel/locking/lockdep.c:4632
 mark_lock kernel/locking/lockdep.c:4596 [inline]
 mark_usage kernel/locking/lockdep.c:4524 [inline]
 __lock_acquire+0x14ad/0x5660 kernel/locking/lockdep.c:5007
 lock_acquire kernel/locking/lockdep.c:5665 [inline]
 lock_acquire+0x1ab/0x570 kernel/locking/lockdep.c:5630
 __raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
 _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
 spin_lock include/linux/spinlock.h:349 [inline]
 find_vmap_area+0x1c/0x130 mm/vmalloc.c:1805
 check_heap_object mm/usercopy.c:176 [inline]
 __check_object_size mm/usercopy.c:250 [inline]
 __check_object_size+0x1f8/0x700 mm/usercopy.c:212
 check_object_size include/linux/thread_info.h:199 [inline]
 __copy_from_user_inatomic include/linux/uaccess.h:62 [inline]
 copy_from_user_nmi arch/x86/lib/usercopy.c:47 [inline]
 copy_from_user_nmi+0xcb/0x130 arch/x86/lib/usercopy.c:31
 copy_code arch/x86/kernel/dumpstack.c:91 [inline]
 show_opcodes+0x59/0xb0 arch/x86/kernel/dumpstack.c:121
 show_iret_regs+0xd/0x33 arch/x86/kernel/dumpstack.c:149
 __show_regs+0x1e/0x60 arch/x86/kernel/process_64.c:74
 show_trace_log_lvl+0x25b/0x2ba arch/x86/kernel/dumpstack.c:292
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
 nmi_cpu_backtrace.cold+0x47/0x144 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1e6/0x230 lib/nmi_backtrace.c:62
 trigger_single_cpu_backtrace include/linux/nmi.h:164 [inline]
 rcu_dump_cpu_stacks+0x262/0x3f0 kernel/rcu/tree_stall.h:371
 print_cpu_stall kernel/rcu/tree_stall.h:667 [inline]
 check_cpu_stall kernel/rcu/tree_stall.h:751 [inline]
 rcu_pending kernel/rcu/tree.c:3977 [inline]
 rcu_sched_clock_irq.cold+0x144/0x8fc kernel/rcu/tree.c:2675
 update_process_times+0x11a/0x1a0 kernel/time/timer.c:1839
 tick_sched_handle+0x9b/0x180 kernel/time/tick-sched.c:243
 tick_sched_timer+0xee/0x120 kernel/time/tick-sched.c:1480
 __run_hrtimer kernel/time/hrtimer.c:1685 [inline]
 __hrtimer_run_queues+0x1c0/0xe50 kernel/time/hrtimer.c:1749
 hrtimer_interrupt+0x31c/0x790 kernel/time/hrtimer.c:1811
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1095 [inline]
 __sysvec_apic_timer_interrupt+0x146/0x530 arch/x86/kernel/apic/apic.c:1112
 sysvec_apic_timer_interrupt+0x8e/0xc0 arch/x86/kernel/apic/apic.c:1106
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:649
RIP: 0010:__sanitizer_cov_trace_pc+0x4c/0x60 kernel/kcov.c:205
Code: 0e 85 c9 74 35 8b 82 a4 15 00 00 85 c0 74 2b 8b 82 80 15 00 00 83 f8 02 75 20 48 8b 8a 88 15 00 00 8b 92 84 15 00 00 48 8b 01 <48> 83 c0 01 48 39 c2 76 07 48 89 01 48 89 34 c1 c3 0f 1f 00 41 55
RSP: 0018:ffffc9000365f238 EFLAGS: 00000246
RAX: 000000000003ffff RBX: 0000000000000008 RCX: ffffc900057ca000
RDX: 0000000000040000 RSI: ffffffff87858a93 RDI: 0000000000000005
RBP: 0000000000000003 R08: 0000000000000005 R09: 0000000000001fff
R10: 0000000000000007 R11: 0000000000000001 R12: ffff8880372a02c0
R13: 0000000000000007 R14: 0000000000000000 R15: dffffc0000000000
 cake_heapify+0x163/0x3d0 net/sched/sch_cake.c:1426
 cake_drop net/sched/sch_cake.c:1515 [inline]
 cake_enqueue+0x13a7/0x39f0 net/sched/sch_cake.c:1903
 dev_qdisc_enqueue+0x40/0x300 net/core/dev.c:3785
 __dev_xmit_skb net/core/dev.c:3874 [inline]
 __dev_queue_xmit+0x2093/0x3900 net/core/dev.c:4221
 dev_queue_xmit include/linux/netdevice.h:2994 [inline]
 __bpf_tx_skb net/core/filter.c:2114 [inline]
 __bpf_redirect_no_mac net/core/filter.c:2139 [inline]
 __bpf_redirect+0x5fe/0xe40 net/core/filter.c:2162
 ____bpf_clone_redirect net/core/filter.c:2429 [inline]
 bpf_clone_redirect+0x2ae/0x420 net/core/filter.c:2401
 ___bpf_prog_run+0x369d/0x7960 kernel/bpf/core.c:1852
 __bpf_prog_run512+0x91/0xd0 kernel/bpf/core.c:2077
 bpf_dispatcher_nop_func include/linux/bpf.h:869 [inline]
 __bpf_prog_run include/linux/filter.h:628 [inline]
 bpf_prog_run include/linux/filter.h:635 [inline]
 bpf_test_run+0x381/0x9c0 net/bpf/test_run.c:402
 bpf_prog_test_run_skb+0xb5e/0x1e10 net/bpf/test_run.c:1155
 bpf_prog_test_run kernel/bpf/syscall.c:3591 [inline]
 __sys_bpf+0x15c1/0x5700 kernel/bpf/syscall.c:4935
 __do_sys_bpf kernel/bpf/syscall.c:5021 [inline]
 __se_sys_bpf kernel/bpf/syscall.c:5019 [inline]
 __x64_sys_bpf+0x75/0xb0 kernel/bpf/syscall.c:5019
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f2b0a289209
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f2b0b31c168 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00007f2b0a39bf60 RCX: 00007f2b0a289209
RDX: 0000000000000048 RSI: 0000000020000140 RDI: 000000000000000a
RBP: 00007f2b0a2e3161 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffeef857a4f R14: 00007f2b0b31c300 R15: 0000000000022000
 </TASK>
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f2b0b31c168 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00007f2b0a39bf60 RCX: 00007f2b0a289209
RDX: 0000000000000048 RSI: 0000000020000140 RDI: 000000000000000a
RBP: 00007f2b0a2e3161 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffeef857a4f R14: 00007f2b0b31c300 R15: 0000000000022000
 </TASK>

Crashes (7):
Time | Kernel | Commit | Syzkaller | Manager | Title
2022/07/30 06:23 | upstream | e65c6a46df94 | fef302b1 | ci-upstream-kasan-gce-selinux-root | INFO: rcu detected stall in sys_bpf
2022/05/09 10:16 | upstream | c5eb0a61238d | 8b277b8e | ci-upstream-kasan-gce-root | INFO: rcu detected stall in sys_bpf
2022/05/01 01:05 | net-old | a9384a4c1d25 | 2df221f6 | ci-upstream-net-this-kasan-gce | INFO: rcu detected stall in sys_bpf
2022/03/02 04:39 | net-old | 0b0e2ff10356 | 45a13a73 | ci-upstream-net-this-kasan-gce | INFO: rcu detected stall in sys_bpf
2022/05/04 11:50 | net-next-old | f43f0cd2d9b0 | dc9e5259 | ci-upstream-net-kasan-gce | INFO: rcu detected stall in sys_bpf
2022/05/03 03:59 | net-next-old | 829b7bdd7044 | 2df221f6 | ci-upstream-net-kasan-gce | INFO: rcu detected stall in sys_bpf
2022/03/09 20:53 | net-next-old | 24055bb87977 | 9e8eaa75 | ci-upstream-net-kasan-gce | INFO: rcu detected stall in sys_bpf