rcu: INFO: rcu_preempt self-detected stall on CPU
rcu: 	0-...!: (9360 ticks this GP) idle=0ba/1/0x4000000000000000 softirq=56297/56298 fqs=1
	(t=10501 jiffies g=90805 q=612)
rcu: rcu_preempt kthread starved for 4003 jiffies! g90805 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:R  running task     stack:28880 pid:   11 ppid:     2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:3779 [inline]
 __schedule+0x893/0x2130 kernel/sched/core.c:4528
 schedule+0xcf/0x270 kernel/sched/core.c:4606
 schedule_timeout+0x148/0x250 kernel/time/timer.c:1871
 rcu_gp_fqs_loop kernel/rcu/tree.c:1925 [inline]
 rcu_gp_kthread+0xb4c/0x1c90 kernel/rcu/tree.c:2099
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
NMI backtrace for cpu 0
CPU: 0 PID: 27238 Comm: syz-executor.1 Not tainted 5.10.0-rc7-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x107/0x163 lib/dump_stack.c:118
 nmi_cpu_backtrace.cold+0x44/0xd7 lib/nmi_backtrace.c:105
 nmi_trigger_cpumask_backtrace+0x1b3/0x230 lib/nmi_backtrace.c:62
 trigger_single_cpu_backtrace include/linux/nmi.h:164 [inline]
 rcu_dump_cpu_stacks+0x1e3/0x21e kernel/rcu/tree_stall.h:331
 print_cpu_stall kernel/rcu/tree_stall.h:563 [inline]
 check_cpu_stall kernel/rcu/tree_stall.h:637 [inline]
 rcu_pending kernel/rcu/tree.c:3694 [inline]
 rcu_sched_clock_irq.cold+0x472/0xee8 kernel/rcu/tree.c:2567
 update_process_times+0x77/0xd0 kernel/time/timer.c:1709
 tick_sched_handle+0x9b/0x180 kernel/time/tick-sched.c:176
 tick_sched_timer+0x1d1/0x2a0 kernel/time/tick-sched.c:1328
 __run_hrtimer kernel/time/hrtimer.c:1519 [inline]
 __hrtimer_run_queues+0x1ce/0xea0 kernel/time/hrtimer.c:1583
 hrtimer_interrupt+0x334/0x940 kernel/time/hrtimer.c:1645
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1080 [inline]
 __sysvec_apic_timer_interrupt+0x146/0x540 arch/x86/kernel/apic/apic.c:1097
 run_sysvec_on_irqstack_cond arch/x86/include/asm/irq_stack.h:91 [inline]
 sysvec_apic_timer_interrupt+0x48/0x100 arch/x86/kernel/apic/apic.c:1091
 asm_sysvec_apic_timer_interrupt+0x12/0x20 arch/x86/include/asm/idtentry.h:631
RIP: 0010:lock_is_held_type+0xc2/0x100 kernel/locking/lockdep.c:5481
Code: 03 44 39 f0 41 0f 94 c4 48 c7 c7 40 5f 4b 89 e8 d4 0b 00 00 b8 ff ff ff ff 65 0f c1 05 67 78 1c 77 83 f8 01 75 23 ff 34 24 9d <48> 83 c4 08 44 89 e0 5b 5d 41 5c 41 5d 41 5e 41 5f c3 45 31 e4 eb
RSP: 0018:ffffc90000007dd8 EFLAGS: 00000202
RAX: 0000000000000001 RBX: 0000000000000000 RCX: 1ffffffff19d9c03
RDX: 0000000000000000 RSI: 0000000000000102 RDI: 0000000000000000
RBP: ffffffff8b337820 R08: 0000000000000000 R09: ffffffff8cecae4f
R10: 0000000000000000 R11: 0000000000000001 R12: 0000000000000000
R13: ffff8880549ca358 R14: 00000000ffffffff R15: dffffc0000000000
 lock_is_held include/linux/lockdep.h:271 [inline]
 rcu_read_lock_sched_held+0x3a/0x70 kernel/rcu/update.c:123
 trace_hrtimer_expire_exit include/trace/events/timer.h:279 [inline]
 __run_hrtimer kernel/time/hrtimer.c:1522 [inline]
 __hrtimer_run_queues+0xc46/0xea0 kernel/time/hrtimer.c:1583
 hrtimer_run_softirq+0x17b/0x360 kernel/time/hrtimer.c:1600
 __do_softirq+0x2a0/0x9f6 kernel/softirq.c:298
 asm_call_irq_on_stack+0xf/0x20
 __run_on_irqstack arch/x86/include/asm/irq_stack.h:26 [inline]
 run_on_irqstack_cond arch/x86/include/asm/irq_stack.h:77 [inline]
 do_softirq_own_stack+0xaa/0xd0 arch/x86/kernel/irq_64.c:77
 invoke_softirq kernel/softirq.c:393 [inline]
 __irq_exit_rcu kernel/softirq.c:423 [inline]
 irq_exit_rcu+0x132/0x200 kernel/softirq.c:435
 sysvec_apic_timer_interrupt+0x4d/0x100 arch/x86/kernel/apic/apic.c:1091
 asm_sysvec_apic_timer_interrupt+0x12/0x20 arch/x86/include/asm/idtentry.h:631
RIP: 0010:lock_is_held_type+0xc2/0x100 kernel/locking/lockdep.c:5481
Code: 03 44 39 f0 41 0f 94 c4 48 c7 c7 40 5f 4b 89 e8 d4 0b 00 00 b8 ff ff ff ff 65 0f c1 05 67 78 1c 77 83 f8 01 75 23 ff 34 24 9d <48> 83 c4 08 44 89 e0 5b 5d 41 5c 41 5d 41 5e 41 5f c3 45 31 e4 eb
RSP: 0018:ffffc9000182f7f0 EFLAGS: 00000202
RAX: 0000000000000001 RBX: 0000000000000000 RCX: 1ffffffff19d9c03
RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000000
RBP: ffffffff8b337880 R08: 0000000000000000 R09: ffffffff8b43ac23
R10: fffffbfff1687584 R11: 0000000000000000 R12: 0000000000000000
R13: ffff8880549ca358 R14: 00000000ffffffff R15: 0000000000034940
 lock_is_held include/linux/lockdep.h:271 [inline]
 schedule_debug kernel/sched/core.c:4298 [inline]
 __schedule+0x13f0/0x2130 kernel/sched/core.c:4423
 preempt_schedule_common+0x45/0xc0 kernel/sched/core.c:4687
 preempt_schedule_thunk+0x16/0x18 arch/x86/entry/thunk_64.S:40
 __raw_spin_unlock include/linux/spinlock_api_smp.h:152 [inline]
 _raw_spin_unlock+0x36/0x40 kernel/locking/spinlock.c:183
 spin_unlock include/linux/spinlock.h:394 [inline]
 alloc_vmap_area+0x992/0x1e00 mm/vmalloc.c:1215
 __get_vm_area_node+0x128/0x380 mm/vmalloc.c:2080
 __vmalloc_node_range mm/vmalloc.c:2553 [inline]
 __vmalloc_node mm/vmalloc.c:2601 [inline]
 __vmalloc+0xf3/0x1a0 mm/vmalloc.c:2615
 bpf_prog_alloc_no_stats+0x33/0x2e0 kernel/bpf/core.c:85
 bpf_prog_alloc+0x2c/0x250 kernel/bpf/core.c:113
 bpf_prog_load+0x4c4/0x1b60 kernel/bpf/syscall.c:2151
 __do_sys_bpf+0x14b9/0x5180 kernel/bpf/syscall.c:4399
 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x45e159
Code: 0d b4 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 db b3 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007f5b8ed7bc68 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 000000000045e159
RDX: 0000000000000048 RSI: 00000000200054c0 RDI: 0000000000000005
RBP: 000000000119c068 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 000000000119c034
R13: 00007ffee8880e2f R14: 00007f5b8ed7c9c0 R15: 000000000119c034
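For orientation when reading the userspace register dump: ORIG_RAX holds the number of the syscall that was in flight when the interrupt fired. A quick decode (a sketch, assuming the standard x86-64 syscall table; the variable names are illustrative) confirms it is consistent with the bpf_prog_load frames in the kernel call trace:

```python
# ORIG_RAX value taken from the userspace register dump in the report.
orig_rax = 0x141

# On x86-64, syscall number 321 is __NR_bpf, which matches the
# __do_sys_bpf / bpf_prog_load frames in the trace above.
assert orig_rax == 321
print(f"ORIG_RAX = {orig_rax:#x} = {orig_rax} (__NR_bpf on x86-64)")
```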