rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	1-...!: (0 ticks this GP) idle=1f2/1/0x4000000000000000 softirq=60524/60524 fqs=0
	(detected by 0, t=10502 jiffies, g=91785, q=734)
============================================
WARNING: possible recursive locking detected
5.10.0-rc4-syzkaller #0 Not tainted
--------------------------------------------
syz-executor.3/26965 is trying to acquire lock:
ffffffff8b33f7d8 (rcu_node_0){-.-.}-{2:2}, at: rcu_dump_cpu_stacks+0x9c/0x21e kernel/rcu/tree_stall.h:328

but task is already holding lock:
ffffffff8b33f7d8 (rcu_node_0){-.-.}-{2:2}, at: print_other_cpu_stall kernel/rcu/tree_stall.h:487 [inline]
ffffffff8b33f7d8 (rcu_node_0){-.-.}-{2:2}, at: check_cpu_stall kernel/rcu/tree_stall.h:646 [inline]
ffffffff8b33f7d8 (rcu_node_0){-.-.}-{2:2}, at: rcu_pending kernel/rcu/tree.c:3694 [inline]
ffffffff8b33f7d8 (rcu_node_0){-.-.}-{2:2}, at: rcu_sched_clock_irq.cold+0xbc/0xee8 kernel/rcu/tree.c:2567

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(rcu_node_0);
  lock(rcu_node_0);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

2 locks held by syz-executor.3/26965:
 #0: ffffffff8b43a8c8 (vmap_purge_lock){+.+.}-{3:3}, at: _vm_unmap_aliases.part.0+0x368/0x4e0 mm/vmalloc.c:1766
 #1: ffffffff8b33f7d8 (rcu_node_0){-.-.}-{2:2}, at: print_other_cpu_stall kernel/rcu/tree_stall.h:487 [inline]
 #1: ffffffff8b33f7d8 (rcu_node_0){-.-.}-{2:2}, at: check_cpu_stall kernel/rcu/tree_stall.h:646 [inline]
 #1: ffffffff8b33f7d8 (rcu_node_0){-.-.}-{2:2}, at: rcu_pending kernel/rcu/tree.c:3694 [inline]
 #1: ffffffff8b33f7d8 (rcu_node_0){-.-.}-{2:2}, at: rcu_sched_clock_irq.cold+0xbc/0xee8 kernel/rcu/tree.c:2567

stack backtrace:
CPU: 0 PID: 26965 Comm: syz-executor.3 Not tainted 5.10.0-rc4-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x107/0x163 lib/dump_stack.c:118
 print_deadlock_bug kernel/locking/lockdep.c:2759 [inline]
 check_deadlock kernel/locking/lockdep.c:2802 [inline]
 validate_chain kernel/locking/lockdep.c:3593 [inline]
 __lock_acquire.cold+0x115/0x39f kernel/locking/lockdep.c:4830
 lock_acquire kernel/locking/lockdep.c:5435 [inline]
 lock_acquire+0x2a3/0x8c0 kernel/locking/lockdep.c:5400
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0x39/0x50 kernel/locking/spinlock.c:159
 rcu_dump_cpu_stacks+0x9c/0x21e kernel/rcu/tree_stall.h:328
 print_other_cpu_stall kernel/rcu/tree_stall.h:504 [inline]
 check_cpu_stall kernel/rcu/tree_stall.h:646 [inline]
 rcu_pending kernel/rcu/tree.c:3694 [inline]
 rcu_sched_clock_irq.cold+0x6c8/0xee8 kernel/rcu/tree.c:2567
 update_process_times+0x77/0xd0 kernel/time/timer.c:1709
 tick_sched_handle+0x9b/0x180 kernel/time/tick-sched.c:176
 tick_sched_timer+0x1d1/0x2a0 kernel/time/tick-sched.c:1328
 __run_hrtimer kernel/time/hrtimer.c:1519 [inline]
 __hrtimer_run_queues+0x1ce/0xea0 kernel/time/hrtimer.c:1583
 hrtimer_interrupt+0x334/0x940 kernel/time/hrtimer.c:1645
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1080 [inline]
 __sysvec_apic_timer_interrupt+0x146/0x540 arch/x86/kernel/apic/apic.c:1097
 asm_call_irq_on_stack+0xf/0x20
 __run_sysvec_on_irqstack arch/x86/include/asm/irq_stack.h:37 [inline]
 run_sysvec_on_irqstack_cond arch/x86/include/asm/irq_stack.h:89 [inline]
 sysvec_apic_timer_interrupt+0xbd/0x100 arch/x86/kernel/apic/apic.c:1091
 asm_sysvec_apic_timer_interrupt+0x12/0x20 arch/x86/include/asm/idtentry.h:631
RIP: 0010:csd_lock_wait kernel/smp.c:227 [inline]
RIP: 0010:smp_call_function_single+0x1b0/0x4b0 kernel/smp.c:512
Code: 10 8b 7c 24 1c 48 8d 74 24 40 48 89 44 24 50 48 8b 44 24 08 48 89 44 24 58 e8 0c fb ff ff 41 89 c5 eb 07 e8 d2 38 0b 00 f3 90 <44> 8b 64 24 48 31 ff 41 83 e4 01 44 89 e6 e8 0d 31 0b 00 45 85 e4
RSP: 0018:ffffc900173275a0 EFLAGS: 00000246
RAX: 0000000000040000 RBX: 1ffff92002e64eb8 RCX: ffffc9000f514000
RDX: 0000000000040000 RSI: ffffffff8164f89e RDI: 0000000000000005
RBP: ffffc90017327678 R08: 0000000000000001 R09: ffffffff8ebbb667
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000001
R13: 0000000000000000 R14: 0000000000000001 R15: 0000000000000008
 smp_call_function_many_cond+0x25f/0x9d0 kernel/smp.c:648
 smp_call_function_many kernel/smp.c:711 [inline]
 smp_call_function kernel/smp.c:733 [inline]
 on_each_cpu+0x4f/0x110 kernel/smp.c:832
 __purge_vmap_area_lazy+0x11e/0x1b70 mm/vmalloc.c:1350
 _vm_unmap_aliases.part.0+0x3d6/0x4e0 mm/vmalloc.c:1768
 _vm_unmap_aliases mm/vmalloc.c:1742 [inline]
 vm_unmap_aliases+0x42/0x50 mm/vmalloc.c:1791
 change_page_attr_set_clr+0x241/0x500 arch/x86/mm/pat/set_memory.c:1732
 change_page_attr_clear arch/x86/mm/pat/set_memory.c:1789 [inline]
 set_memory_ro+0x78/0xa0 arch/x86/mm/pat/set_memory.c:1935
 bpf_jit_binary_lock_ro include/linux/filter.h:824 [inline]
 bpf_int_jit_compile+0xdfa/0x11b0 arch/x86/net/bpf_jit_comp.c:2093
 bpf_prog_select_runtime+0x5c5/0xb30 kernel/bpf/core.c:1817
 bpf_prog_load+0xe6a/0x1b60 kernel/bpf/syscall.c:2214
 __do_sys_bpf+0x14b9/0x5180 kernel/bpf/syscall.c:4399
 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x45deb9
Code: 0d b4 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 db b3 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007fb93b4fbc78 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 0000000000001d40 RCX: 000000000045deb9
RDX: 0000000000000048 RSI: 000000002000e000 RDI: 0000000000000005
RBP: 000000000118bf60 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 000000000118bf2c
R13: 00007fff7beb2fff R14: 00007fb93b4fc9c0 R15: 000000000118bf2c