rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	0-...!: (1 GPs behind) idle=f36/1/0x4000000000000002 softirq=82487/82488 fqs=4
	(detected by 1, t=10502 jiffies, g=149409, q=28)

============================================
WARNING: possible recursive locking detected
5.10.0-rc4-syzkaller #0 Not tainted
--------------------------------------------
syz-executor.5/6185 is trying to acquire lock:
ffffffff8b33f8d8 (rcu_node_0){-.-.}-{2:2}, at: rcu_dump_cpu_stacks+0x9c/0x21e kernel/rcu/tree_stall.h:328

but task is already holding lock:
ffffffff8b33f8d8 (rcu_node_0){-.-.}-{2:2}, at: print_other_cpu_stall kernel/rcu/tree_stall.h:487 [inline]
ffffffff8b33f8d8 (rcu_node_0){-.-.}-{2:2}, at: check_cpu_stall kernel/rcu/tree_stall.h:646 [inline]
ffffffff8b33f8d8 (rcu_node_0){-.-.}-{2:2}, at: rcu_pending kernel/rcu/tree.c:3694 [inline]
ffffffff8b33f8d8 (rcu_node_0){-.-.}-{2:2}, at: rcu_sched_clock_irq.cold+0xbc/0xee8 kernel/rcu/tree.c:2567

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(rcu_node_0);
  lock(rcu_node_0);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

2 locks held by syz-executor.5/6185:
 #0: ffffffff8b43ab78 (free_vmap_area_lock){+.+.}-{2:2}, at: spin_lock include/linux/spinlock.h:354 [inline]
 #0: ffffffff8b43ab78 (free_vmap_area_lock){+.+.}-{2:2}, at: alloc_vmap_area+0xb9e/0x1e00 mm/vmalloc.c:1205
 #1: ffffffff8b33f8d8 (rcu_node_0){-.-.}-{2:2}, at: print_other_cpu_stall kernel/rcu/tree_stall.h:487 [inline]
 #1: ffffffff8b33f8d8 (rcu_node_0){-.-.}-{2:2}, at: check_cpu_stall kernel/rcu/tree_stall.h:646 [inline]
 #1: ffffffff8b33f8d8 (rcu_node_0){-.-.}-{2:2}, at: rcu_pending kernel/rcu/tree.c:3694 [inline]
 #1: ffffffff8b33f8d8 (rcu_node_0){-.-.}-{2:2}, at: rcu_sched_clock_irq.cold+0xbc/0xee8 kernel/rcu/tree.c:2567

stack backtrace:
CPU: 1 PID: 6185 Comm: syz-executor.5 Not tainted 5.10.0-rc4-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x107/0x163 lib/dump_stack.c:118
 print_deadlock_bug kernel/locking/lockdep.c:2759 [inline]
 check_deadlock kernel/locking/lockdep.c:2802 [inline]
 validate_chain kernel/locking/lockdep.c:3593 [inline]
 __lock_acquire.cold+0x115/0x39f kernel/locking/lockdep.c:4830
 lock_acquire kernel/locking/lockdep.c:5435 [inline]
 lock_acquire+0x2a3/0x8c0 kernel/locking/lockdep.c:5400
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0x39/0x50 kernel/locking/spinlock.c:159
 rcu_dump_cpu_stacks+0x9c/0x21e kernel/rcu/tree_stall.h:328
 print_other_cpu_stall kernel/rcu/tree_stall.h:504 [inline]
 check_cpu_stall kernel/rcu/tree_stall.h:646 [inline]
 rcu_pending kernel/rcu/tree.c:3694 [inline]
 rcu_sched_clock_irq.cold+0x6c8/0xee8 kernel/rcu/tree.c:2567
 update_process_times+0x77/0xd0 kernel/time/timer.c:1709
 tick_sched_handle+0x9b/0x180 kernel/time/tick-sched.c:176
 tick_sched_timer+0x1d1/0x2a0 kernel/time/tick-sched.c:1328
 __run_hrtimer kernel/time/hrtimer.c:1519 [inline]
 __hrtimer_run_queues+0x1ce/0xea0 kernel/time/hrtimer.c:1583
 hrtimer_interrupt+0x334/0x940 kernel/time/hrtimer.c:1645
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1080 [inline]
 __sysvec_apic_timer_interrupt+0x146/0x540 arch/x86/kernel/apic/apic.c:1097
 asm_call_irq_on_stack+0xf/0x20
 __run_sysvec_on_irqstack arch/x86/include/asm/irq_stack.h:37 [inline]
 run_sysvec_on_irqstack_cond arch/x86/include/asm/irq_stack.h:89 [inline]
 sysvec_apic_timer_interrupt+0xbd/0x100 arch/x86/kernel/apic/apic.c:1091
 asm_sysvec_apic_timer_interrupt+0x12/0x20 arch/x86/include/asm/idtentry.h:631
RIP: 0010:kvm_wait arch/x86/kernel/kvm.c:854 [inline]
RIP: 0010:kvm_wait+0x9c/0xd0 arch/x86/kernel/kvm.c:831
Code: 02 48 89 da 83 e2 07 38 d0 7f 04 84 c0 75 32 0f b6 03 41 38 c4 75 13 e8 a2 6d 46 00 e9 07 00 00 00 0f 00 2d 56 f6 18 08 fb f4 8f 6d 46 00 eb af c3 e9 07 00 00 00 0f 00 2d 40 f6 18 08 f4 eb
RSP: 0018:ffffc9000197f800 EFLAGS: 00000206
RAX: 000000000000bf4b RBX: ffffffff8b43ab60 RCX: ffffffff8155b427
RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000000
RBP: 0000000000000246 R08: 0000000000000001 R09: ffffffff8ebad667
R10: fffffbfff1d75acc R11: 0000000000000000 R12: 0000000000000003
R13: fffffbfff168756c R14: 0000000000000001 R15: ffff8880b9f356c0
 pv_wait arch/x86/include/asm/paravirt.h:564 [inline]
 pv_wait_head_or_lock kernel/locking/qspinlock_paravirt.h:470 [inline]
 __pv_queued_spin_lock_slowpath+0x8b8/0xb40 kernel/locking/qspinlock.c:508
 pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:554 [inline]
 queued_spin_lock_slowpath arch/x86/include/asm/qspinlock.h:51 [inline]
 queued_spin_lock include/asm-generic/qspinlock.h:85 [inline]
 do_raw_spin_lock+0x200/0x2b0 kernel/locking/spinlock_debug.c:113
 spin_lock include/linux/spinlock.h:354 [inline]
 alloc_vmap_area+0xb9e/0x1e00 mm/vmalloc.c:1205
 __get_vm_area_node+0x128/0x380 mm/vmalloc.c:2080
 __vmalloc_node_range+0xcb/0x170 mm/vmalloc.c:2553
 alloc_thread_stack_node kernel/fork.c:244 [inline]
 dup_task_struct kernel/fork.c:864 [inline]
 copy_process+0x8de/0x6e80 kernel/fork.c:1938
 kernel_clone+0xe7/0xab0 kernel/fork.c:2456
 __do_sys_clone+0xc8/0x110 kernel/fork.c:2573
 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x460889
Code: ff 48 85 f6 0f 84 37 8a fb ff 48 83 ee 10 48 89 4e 08 48 89 3e 48 89 d7 4c 89 c2 4d 89 c8 4c 8b 54 24 08 b8 38 00 00 00 0f 05 <48> 85 c0 0f 8c 0e 8a fb ff 74 01 c3 31 ed 48 f7 c7 00 00 01 00 75
RSP: 002b:00007ffd03273da8 EFLAGS: 00000202 ORIG_RAX: 0000000000000038
RAX: ffffffffffffffda RBX: 00007efc12856700 RCX: 0000000000460889
RDX: 00007efc128569d0 RSI: 00007efc12855db0 RDI: 00000000003d0f00
RBP: 00007ffd03273fc0 R08: 00007efc12856700 R09: 00007efc12856700
R10: 00007efc128569d0 R11: 0000000000000202 R12: 0000000000000000
R13: 00007ffd03273e5f R14: 00007efc128569c0 R15: 000000000118c07c
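
For context (an illustrative sketch, not taken from the report above): lockdep's "possible recursive locking detected" message means the task attempted to acquire a lock class it was already holding on the same CPU, which on a non-recursive spinlock is a self-deadlock. The small userspace C program below reproduces the same pattern with an error-checking POSIX mutex standing in for the kernel's rcu_node lock; it uses only standard pthread calls and has no connection to the kernel code paths in the trace.

/*
 * Hedged userspace analogy of the lockdep splat: the same thread takes the
 * same non-recursive lock twice. An error-checking pthread mutex reports the
 * second attempt as EDEADLK instead of spinning forever the way a raw
 * kernel spinlock would.
 */
#define _GNU_SOURCE
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	pthread_mutex_t lock;
	pthread_mutexattr_t attr;

	pthread_mutexattr_init(&attr);
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
	pthread_mutex_init(&lock, &attr);

	pthread_mutex_lock(&lock);              /* first acquisition succeeds          */
	int ret = pthread_mutex_lock(&lock);    /* second acquisition by the same owner */

	if (ret == EDEADLK)
		printf("recursive locking detected: %s\n", strerror(ret));

	pthread_mutex_unlock(&lock);
	pthread_mutex_destroy(&lock);
	pthread_mutexattr_destroy(&attr);
	return 0;
}

Compiled with cc -pthread, this prints the EDEADLK diagnosis rather than hanging, which is the userspace counterpart of lockdep flagging the second lock(rcu_node_0) in the scenario table above.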