=====================================================
WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
6.1.82-syzkaller #0 Not tainted
-----------------------------------------------------
kworker/u4:1/11 [HC0[0]:SC0[2]:HE0:SE0] is trying to acquire:
ffff88807b69d820 (&htab->buckets[i].lock){+...}-{2:2}, at: sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932

and this task is already holding:
ffff8880b983a258 (&pool->lock){-.-.}-{2:2}, at: __queue_work+0x58c/0xf90
which would create a new lock dependency:
 (&pool->lock){-.-.}-{2:2} -> (&htab->buckets[i].lock){+...}-{2:2}

but this new dependency connects a HARDIRQ-irq-safe lock:
 (&pool->lock){-.-.}-{2:2}

... which became HARDIRQ-irq-safe at:
  lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
  __raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
  _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
  __queue_work+0x58c/0xf90
  queue_work_on+0x14b/0x250 kernel/workqueue.c:1548
  hrtimer_switch_to_hres kernel/time/hrtimer.c:747 [inline]
  hrtimer_run_queues+0x14b/0x450 kernel/time/hrtimer.c:1912
  run_local_timers kernel/time/timer.c:1815 [inline]
  update_process_times+0x7b/0x1b0 kernel/time/timer.c:1838
  tick_periodic+0x197/0x210 kernel/time/tick-common.c:100
  tick_handle_periodic+0x46/0x150 kernel/time/tick-common.c:112
  local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1095 [inline]
  __sysvec_apic_timer_interrupt+0x156/0x580 arch/x86/kernel/apic/apic.c:1112
  sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1106
  asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:653
  __sanitizer_cov_trace_switch+0x13/0xe0 kernel/kcov.c:323
  update_event_printk kernel/trace/trace_events.c:2616 [inline]
  trace_event_eval_update+0x319/0xfc0 kernel/trace/trace_events.c:2788
  process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
  worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
  kthread+0x28d/0x320 kernel/kthread.c:376
  ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307

to a HARDIRQ-irq-unsafe lock:
 (&htab->buckets[i].lock){+...}-{2:2}

... which became HARDIRQ-irq-unsafe at:
...
  lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
  __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
  _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
  sock_hash_free+0x160/0x820 net/core/sock_map.c:1149
  process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
  worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
  kthread+0x28d/0x320 kernel/kthread.c:376
  ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307

other info that might help us debug this:

 Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&htab->buckets[i].lock);
                               local_irq_disable();
                               lock(&pool->lock);
                               lock(&htab->buckets[i].lock);
  <Interrupt>
    lock(&pool->lock);

 *** DEADLOCK ***

6 locks held by kworker/u4:1/11:
 #0: ffff888012479138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
 #1: ffffc90000107d20 ((work_completion)(&map->work)){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
 #2: ffffffff8d12ff38 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:291 [inline]
 #2: ffffffff8d12ff38 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3b0/0x8a0 kernel/rcu/tree_exp.h:949
 #3: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:319 [inline]
 #3: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:760 [inline]
 #3: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: __queue_work+0xe5/0xf90 kernel/workqueue.c:1443
 #4: ffff8880b983a258 (&pool->lock){-.-.}-{2:2}, at: __queue_work+0x58c/0xf90
 #5: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:319 [inline]
 #5: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:760 [inline]
 #5: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2272 [inline]
 #5: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run3+0x146/0x440 kernel/trace/bpf_trace.c:2313

the dependencies between HARDIRQ-irq-safe lock and the holding lock:
-> (&pool->lock){-.-.}-{2:2} {
   IN-HARDIRQ-W at:
                    lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
                    __raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
                    _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
                    __queue_work+0x58c/0xf90
                    queue_work_on+0x14b/0x250 kernel/workqueue.c:1548
                    hrtimer_switch_to_hres kernel/time/hrtimer.c:747 [inline]
                    hrtimer_run_queues+0x14b/0x450 kernel/time/hrtimer.c:1912
                    run_local_timers kernel/time/timer.c:1815 [inline]
                    update_process_times+0x7b/0x1b0 kernel/time/timer.c:1838
                    tick_periodic+0x197/0x210 kernel/time/tick-common.c:100
                    tick_handle_periodic+0x46/0x150 kernel/time/tick-common.c:112
                    local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1095 [inline]
                    __sysvec_apic_timer_interrupt+0x156/0x580 arch/x86/kernel/apic/apic.c:1112
                    sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1106
                    asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:653
                    __sanitizer_cov_trace_switch+0x13/0xe0 kernel/kcov.c:323
                    update_event_printk kernel/trace/trace_events.c:2616 [inline]
                    trace_event_eval_update+0x319/0xfc0 kernel/trace/trace_events.c:2788
                    process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
                    worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
                    kthread+0x28d/0x320 kernel/kthread.c:376
                    ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307
   IN-SOFTIRQ-W at:
                    lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
                    __raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
                    _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
                    __queue_work+0x58c/0xf90
                    call_timer_fn+0x1ad/0x6b0 kernel/time/timer.c:1474
                    expire_timers kernel/time/timer.c:1514 [inline]
                    __run_timers+0x6a8/0x890 kernel/time/timer.c:1790
                    run_timer_softirq+0x63/0xf0 kernel/time/timer.c:1803
                    __do_softirq+0x2e9/0xa4c kernel/softirq.c:571
                    invoke_softirq kernel/softirq.c:445 [inline]
                    __irq_exit_rcu+0x155/0x240 kernel/softirq.c:650
                    irq_exit_rcu+0x5/0x20 kernel/softirq.c:662
                    sysvec_apic_timer_interrupt+0x91/0xb0 arch/x86/kernel/apic/apic.c:1106
                    asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:653
                    native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
                    arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
                    default_idle+0xb/0x10 arch/x86/kernel/process.c:730
                    default_idle_call+0x84/0xc0 kernel/sched/idle.c:109
                    cpuidle_idle_call kernel/sched/idle.c:191 [inline]
                    do_idle+0x251/0x680 kernel/sched/idle.c:303
                    cpu_startup_entry+0x3d/0x60 kernel/sched/idle.c:401
                    start_secondary+0xe4/0xf0 arch/x86/kernel/smpboot.c:281
                    secondary_startup_64_no_verify+0xcf/0xdb
   INITIAL USE at:
                   lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
                   __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
                   _raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
                   pwq_adjust_max_active+0x14e/0x550 kernel/workqueue.c:3765
                   link_pwq kernel/workqueue.c:3831 [inline]
                   alloc_and_link_pwqs kernel/workqueue.c:4227 [inline]
                   alloc_workqueue+0xbf8/0x1440 kernel/workqueue.c:4349
                   workqueue_init_early+0x71a/0x927 kernel/workqueue.c:6055
                   start_kernel+0x208/0x53f init/main.c:1029
                   secondary_startup_64_no_verify+0xcf/0xdb
 }
 ... key at: [] init_worker_pool.__key+0x0/0x20

the dependencies between the lock to be acquired
 and HARDIRQ-irq-unsafe lock:
-> (&htab->buckets[i].lock){+...}-{2:2} {
   HARDIRQ-ON-W at:
                    lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
                    __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
                    _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
                    sock_hash_free+0x160/0x820 net/core/sock_map.c:1149
                    process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
                    worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
                    kthread+0x28d/0x320 kernel/kthread.c:376
                    ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307
   INITIAL USE at:
                   lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
                   __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
                   _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
                   sock_hash_free+0x160/0x820 net/core/sock_map.c:1149
                   process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
                   worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
                   kthread+0x28d/0x320 kernel/kthread.c:376
                   ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307
 }
 ... key at: [] sock_hash_alloc.__key+0x0/0x20
 ... acquired at:
   lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
   __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
   _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
   sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932
   bpf_prog_2c29ac5cdc6b1842+0x3a/0x3e
   bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
   __bpf_prog_run include/linux/filter.h:600 [inline]
   bpf_prog_run include/linux/filter.h:607 [inline]
   __bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
   bpf_trace_run3+0x231/0x440 kernel/trace/bpf_trace.c:2313
   trace_workqueue_queue_work include/trace/events/workqueue.h:23 [inline]
   __queue_work+0xeeb/0xf90 kernel/workqueue.c:1500
   queue_work_on+0x14b/0x250 kernel/workqueue.c:1548
   queue_work include/linux/workqueue.h:512 [inline]
   synchronize_rcu_expedited_queue_work kernel/rcu/tree_exp.h:517 [inline]
   synchronize_rcu_expedited+0x5fd/0x8a0 kernel/rcu/tree_exp.h:959
   synchronize_rcu+0x11c/0x3f0 kernel/rcu/tree.c:3575
   sock_hash_free+0x769/0x820 net/core/sock_map.c:1172
   process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
   worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
   kthread+0x28d/0x320 kernel/kthread.c:376
   ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307

stack backtrace:
CPU: 0 PID: 11 Comm: kworker/u4:1 Not tainted 6.1.82-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/29/2024
Workqueue: events_unbound bpf_map_free_deferred
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 print_bad_irq_dependency kernel/locking/lockdep.c:2604 [inline]
 check_irq_usage kernel/locking/lockdep.c:2843 [inline]
 check_prev_add kernel/locking/lockdep.c:3094 [inline]
 check_prevs_add kernel/locking/lockdep.c:3209 [inline]
 validate_chain+0x4d16/0x5950 kernel/locking/lockdep.c:3825
 __lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
 lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
 __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
 _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
 sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932
 bpf_prog_2c29ac5cdc6b1842+0x3a/0x3e
 bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
 __bpf_prog_run include/linux/filter.h:600 [inline]
 bpf_prog_run include/linux/filter.h:607 [inline]
 __bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
 bpf_trace_run3+0x231/0x440 kernel/trace/bpf_trace.c:2313
 trace_workqueue_queue_work include/trace/events/workqueue.h:23 [inline]
 __queue_work+0xeeb/0xf90 kernel/workqueue.c:1500
 queue_work_on+0x14b/0x250 kernel/workqueue.c:1548
 queue_work include/linux/workqueue.h:512 [inline]
 synchronize_rcu_expedited_queue_work kernel/rcu/tree_exp.h:517 [inline]
 synchronize_rcu_expedited+0x5fd/0x8a0 kernel/rcu/tree_exp.h:959
 synchronize_rcu+0x11c/0x3f0 kernel/rcu/tree.c:3575
 sock_hash_free+0x769/0x820 net/core/sock_map.c:1172
 process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
 worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
 kthread+0x28d/0x320 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307