======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
rcu_preempt/15 is trying to acquire lock:
ffff8880b903a358 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:475

but task is already holding lock:
ffffffff8c120b98 (rcu_node_0){-.-.}-{2:2}, at: force_qs_rnp kernel/rcu/tree.c:2646 [inline]
ffffffff8c120b98 (rcu_node_0){-.-.}-{2:2}, at: rcu_gp_fqs kernel/rcu/tree.c:-1 [inline]
ffffffff8c120b98 (rcu_node_0){-.-.}-{2:2}, at: rcu_gp_fqs_loop+0x768/0x11b0 kernel/rcu/tree.c:1986

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #3 (rcu_node_0){-.-.}-{2:2}:
       __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
       _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
       check_cb_ovld kernel/rcu/tree.c:2974 [inline]
       __call_rcu kernel/rcu/tree.c:3025 [inline]
       call_rcu+0x312/0x930 kernel/rcu/tree.c:3091
       queue_rcu_work+0x81/0x90 kernel/workqueue.c:1788
       kfree_rcu_monitor+0x32a/0x730 kernel/rcu/tree.c:3418
       process_one_work+0x863/0x1000 kernel/workqueue.c:2310
       worker_thread+0xaa8/0x12a0 kernel/workqueue.c:2457
       kthread+0x436/0x520 kernel/kthread.c:334
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287

-> #2 (krc.lock){..-.}-{2:2}:
       __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
       _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
       krc_this_cpu_lock kernel/rcu/tree.c:3203 [inline]
       add_ptr_to_bulk_krc_lock kernel/rcu/tree.c:3510 [inline]
       kvfree_call_rcu+0x186/0x7c0 kernel/rcu/tree.c:3601
       trie_delete_elem+0x58c/0x710 kernel/bpf/lpm_trie.c:-1
       0xffffffffa00180ae
       bpf_dispatcher_nop_func include/linux/bpf.h:888 [inline]
       __bpf_prog_run include/linux/filter.h:628 [inline]
       bpf_prog_run include/linux/filter.h:635 [inline]
       __bpf_trace_run kernel/trace/bpf_trace.c:1878 [inline]
       bpf_trace_run2+0x15b/0x2d0 kernel/trace/bpf_trace.c:1915
       trace_tlb_flush+0xe6/0x110 include/trace/events/tlb.h:38
       switch_mm_irqs_off+0x6e3/0x9a0 arch/x86/mm/tlb.c:-1
       unuse_temporary_mm arch/x86/kernel/alternative.c:1276 [inline]
       __text_poke+0x5a3/0x7b0 arch/x86/kernel/alternative.c:1372
       text_poke arch/x86/kernel/alternative.c:1413 [inline]
       text_poke_bp_batch+0x138/0x7c0 arch/x86/kernel/alternative.c:1639
       text_poke_flush arch/x86/kernel/alternative.c:1833 [inline]
       text_poke_finish+0x16/0x30 arch/x86/kernel/alternative.c:1840
       arch_jump_label_transform_apply+0x13/0x20 arch/x86/kernel/jump_label.c:146
       static_key_enable_cpuslocked+0x11f/0x240 kernel/jump_label.c:177
       static_key_enable+0x16/0x20 kernel/jump_label.c:190
       tracepoint_add_func+0x83b/0x9a0 kernel/tracepoint.c:361
       tracepoint_probe_register_prio_may_exist+0x5c/0x90 kernel/tracepoint.c:482
       bpf_raw_tracepoint_open+0x69d/0x780 kernel/bpf/syscall.c:3119
       __sys_bpf+0x48b/0x670 kernel/bpf/syscall.c:4699
       __do_sys_bpf kernel/bpf/syscall.c:4761 [inline]
       __se_sys_bpf kernel/bpf/syscall.c:4759 [inline]
       __x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:4759
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x66/0xd0

-> #1 (&trie->lock){..-.}-{2:2}:
       __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
       _raw_spin_lock_irqsave+0xa4/0xf0 kernel/locking/spinlock.c:162
       trie_delete_elem+0x90/0x710 kernel/bpf/lpm_trie.c:467
       0xffffffffa00180ae
       bpf_dispatcher_nop_func include/linux/bpf.h:888 [inline]
       __bpf_prog_run include/linux/filter.h:628 [inline]
       bpf_prog_run include/linux/filter.h:635 [inline]
       __bpf_trace_run kernel/trace/bpf_trace.c:1878 [inline]
       bpf_trace_run2+0x15b/0x2d0 kernel/trace/bpf_trace.c:1915
       trace_tlb_flush+0xe6/0x110 include/trace/events/tlb.h:38
       switch_mm_irqs_off+0x6e3/0x9a0 arch/x86/mm/tlb.c:-1
       context_switch kernel/sched/core.c:5035 [inline]
       __schedule+0x1024/0x4390 kernel/sched/core.c:6395
       preempt_schedule_common+0x82/0xd0 kernel/sched/core.c:6571
       preempt_schedule+0xa7/0xb0 kernel/sched/core.c:6596
       preempt_schedule_thunk+0x16/0x18 arch/x86/entry/thunk_64.S:34
       __raw_spin_unlock include/linux/spinlock_api_smp.h:152 [inline]
       _raw_spin_unlock+0x36/0x40 kernel/locking/spinlock.c:186
       spin_unlock include/linux/spinlock.h:404 [inline]
       __text_poke+0x64b/0x7b0 arch/x86/kernel/alternative.c:1389
       text_poke arch/x86/kernel/alternative.c:1413 [inline]
       text_poke_bp_batch+0x138/0x7c0 arch/x86/kernel/alternative.c:1639
       text_poke_flush arch/x86/kernel/alternative.c:1833 [inline]
       text_poke_finish+0x16/0x30 arch/x86/kernel/alternative.c:1840
       arch_jump_label_transform_apply+0x13/0x20 arch/x86/kernel/jump_label.c:146
       static_key_enable_cpuslocked+0x11f/0x240 kernel/jump_label.c:177
       static_key_enable+0x16/0x20 kernel/jump_label.c:190
       tracepoint_add_func+0x83b/0x9a0 kernel/tracepoint.c:361
       tracepoint_probe_register_prio_may_exist+0x5c/0x90 kernel/tracepoint.c:482
       bpf_raw_tracepoint_open+0x69d/0x780 kernel/bpf/syscall.c:3119
       __sys_bpf+0x48b/0x670 kernel/bpf/syscall.c:4699
       __do_sys_bpf kernel/bpf/syscall.c:4761 [inline]
       __se_sys_bpf kernel/bpf/syscall.c:4759 [inline]
       __x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:4759
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x66/0xd0

-> #0 (&rq->__lock){-.-.}-{2:2}:
       check_prev_add kernel/locking/lockdep.c:3053 [inline]
       check_prevs_add kernel/locking/lockdep.c:3172 [inline]
       validate_chain kernel/locking/lockdep.c:3788 [inline]
       __lock_acquire+0x2c33/0x7c60 kernel/locking/lockdep.c:5012
       lock_acquire+0x197/0x3f0 kernel/locking/lockdep.c:5623
       _raw_spin_lock_nested+0x2e/0x40 kernel/locking/spinlock.c:368
       raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:475
       raw_spin_rq_lock kernel/sched/sched.h:1326 [inline]
       _raw_spin_rq_lock_irqsave kernel/sched/sched.h:1345 [inline]
       resched_cpu+0xd4/0x240 kernel/sched/core.c:994
       rcu_implicit_dynticks_qs+0x438/0xc30 kernel/rcu/tree.c:1329
       force_qs_rnp kernel/rcu/tree.c:2664 [inline]
       rcu_gp_fqs kernel/rcu/tree.c:-1 [inline]
       rcu_gp_fqs_loop+0x972/0x11b0 kernel/rcu/tree.c:1986
       rcu_gp_kthread+0x98/0x350 kernel/rcu/tree.c:2145
       kthread+0x436/0x520 kernel/kthread.c:334
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287

other info that might help us debug this:

Chain exists of:
  &rq->__lock --> krc.lock --> rcu_node_0

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(rcu_node_0);
                               lock(krc.lock);
                               lock(rcu_node_0);
  lock(&rq->__lock);

 *** DEADLOCK ***

1 lock held by rcu_preempt/15:
 #0: ffffffff8c120b98 (rcu_node_0){-.-.}-{2:2}, at: force_qs_rnp kernel/rcu/tree.c:2646 [inline]
 #0: ffffffff8c120b98 (rcu_node_0){-.-.}-{2:2}, at: rcu_gp_fqs kernel/rcu/tree.c:-1 [inline]
 #0: ffffffff8c120b98 (rcu_node_0){-.-.}-{2:2}, at: rcu_gp_fqs_loop+0x768/0x11b0 kernel/rcu/tree.c:1986

stack backtrace:
CPU: 1 PID: 15 Comm: rcu_preempt Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
 dump_stack_lvl+0x168/0x230 lib/dump_stack.c:106
 check_noncircular+0x274/0x310 kernel/locking/lockdep.c:2133
 check_prev_add kernel/locking/lockdep.c:3053 [inline]
 check_prevs_add kernel/locking/lockdep.c:3172 [inline]
 validate_chain kernel/locking/lockdep.c:3788 [inline]
 __lock_acquire+0x2c33/0x7c60 kernel/locking/lockdep.c:5012
 lock_acquire+0x197/0x3f0 kernel/locking/lockdep.c:5623
 _raw_spin_lock_nested+0x2e/0x40 kernel/locking/spinlock.c:368
 raw_spin_rq_lock_nested+0x26/0x140 kernel/sched/core.c:475
 raw_spin_rq_lock kernel/sched/sched.h:1326 [inline]
 _raw_spin_rq_lock_irqsave kernel/sched/sched.h:1345 [inline]
 resched_cpu+0xd4/0x240 kernel/sched/core.c:994
 rcu_implicit_dynticks_qs+0x438/0xc30 kernel/rcu/tree.c:1329
 force_qs_rnp kernel/rcu/tree.c:2664 [inline]
 rcu_gp_fqs kernel/rcu/tree.c:-1 [inline]
 rcu_gp_fqs_loop+0x972/0x11b0 kernel/rcu/tree.c:1986
 rcu_gp_kthread+0x98/0x350 kernel/rcu/tree.c:2145
 kthread+0x436/0x520 kernel/kthread.c:334
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
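The cycle lockdep reports above (&rq->__lock --> krc.lock --> rcu_node_0, closed when rq->__lock is taken while rcu_node_0 is held) can be illustrated outside the kernel. The sketch below is not kernel code: `ToyLockdep` is a hypothetical, simplified model of what lockdep does, recording a dependency edge each time a lock is acquired while another is held and searching that graph for a cycle. The acquisition sequence is a condensed version of the report's chain (the intermediate &trie->lock step is folded away, as in the report's own "Chain exists of" summary).

```python
# Toy lockdep-style cycle detector (illustration only, not kernel code).
# Acquiring lock B while holding lock A records the edge A -> B; a cycle
# in the resulting graph is a possible ABBA-style deadlock.
from collections import defaultdict

class ToyLockdep:
    def __init__(self):
        self.edges = defaultdict(set)  # held lock -> locks taken under it
        self.held = []                 # stack of currently held locks

    def acquire(self, name):
        for held in self.held:
            self.edges[held].add(name)
        self.held.append(name)
        return self._find_cycle(name)  # cycle path, or None

    def release(self, name):
        self.held.remove(name)

    def _find_cycle(self, start):
        # Iterative DFS from `start`; a path back to `start` is a cycle.
        stack, seen = [(start, [start])], set()
        while stack:
            node, path = stack.pop()
            for nxt in self.edges.get(node, ()):
                if nxt == start:
                    return path + [start]
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, path + [nxt]))
        return None

ld = ToyLockdep()
# -> #3: rcu_node_0 taken under krc.lock (kfree_rcu_monitor -> call_rcu path)
ld.acquire("krc.lock"); ld.acquire("rcu_node_0")
ld.release("rcu_node_0"); ld.release("krc.lock")
# -> #2/#1 condensed: krc.lock reachable while rq->__lock is held
# (scheduler -> tracepoint -> bpf program -> kvfree_call_rcu path)
ld.acquire("rq->__lock"); ld.acquire("krc.lock")
ld.release("krc.lock"); ld.release("rq->__lock")
# -> #0: rq->__lock taken under rcu_node_0 (FQS loop calling resched_cpu)
ld.acquire("rcu_node_0")
cycle = ld.acquire("rq->__lock")
print(cycle)  # the circular dependency, as a lock-name path
```

The final `acquire` closes the loop rq->__lock -> krc.lock -> rcu_node_0 -> rq->__lock, which is exactly the condition the real lockdep flags with "possible circular locking dependency detected".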