=====================================================
WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
5.15.152-syzkaller #0 Not tainted
-----------------------------------------------------
kworker/0:3/2923 [HC0[0]:SC0[2]:HE0:SE0] is trying to acquire:
ffff88807ab8d820 (&htab->buckets[i].lock){+...}-{2:2}, at: sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937

and this task is already holding:
ffff8880b9a39b18 (&pool->lock){-.-.}-{2:2}, at: __queue_work+0x56d/0xd00
which would create a new lock dependency:
 (&pool->lock){-.-.}-{2:2} -> (&htab->buckets[i].lock){+...}-{2:2}

but this new dependency connects a HARDIRQ-irq-safe lock:
 (&pool->lock){-.-.}-{2:2}

... which became HARDIRQ-irq-safe at:
  lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
  __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
  _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
  __queue_work+0x56d/0xd00
  queue_work_on+0x14b/0x250 kernel/workqueue.c:1559
  hrtimer_switch_to_hres kernel/time/hrtimer.c:747 [inline]
  hrtimer_run_queues+0x14b/0x450 kernel/time/hrtimer.c:1912
  run_local_timers kernel/time/timer.c:1762 [inline]
  update_process_times+0xca/0x200 kernel/time/timer.c:1787
  tick_periodic+0x197/0x210 kernel/time/tick-common.c:100
  tick_handle_periodic+0x46/0x150 kernel/time/tick-common.c:112
  local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1085 [inline]
  __sysvec_apic_timer_interrupt+0x139/0x470 arch/x86/kernel/apic/apic.c:1102
  sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1096
  asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
  native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
  arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
  default_idle+0xb/0x10 arch/x86/kernel/process.c:717
  default_idle_call+0x81/0xc0 kernel/sched/idle.c:112
  cpuidle_idle_call kernel/sched/idle.c:194 [inline]
  do_idle+0x271/0x670 kernel/sched/idle.c:306
  cpu_startup_entry+0x14/0x20 kernel/sched/idle.c:403
  start_kernel+0x48c/0x535 init/main.c:1137
  secondary_startup_64_no_verify+0xb1/0xbb

to a HARDIRQ-irq-unsafe lock:
 (&htab->buckets[i].lock){+...}-{2:2}

... which became HARDIRQ-irq-unsafe at:
...
  lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
  __raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
  _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
  sock_hash_free+0x14c/0x780 net/core/sock_map.c:1154
  process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
  worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
  kthread+0x3f6/0x4f0 kernel/kthread.c:319
  ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298

other info that might help us debug this:

 Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&htab->buckets[i].lock);
                               local_irq_disable();
                               lock(&pool->lock);
                               lock(&htab->buckets[i].lock);
  <Interrupt>
    lock(&pool->lock);

 *** DEADLOCK ***

6 locks held by kworker/0:3/2923:
 #0: ffff888011c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc9000c00fd20 ((work_completion)(&map->work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
 #2: ffffffff8c923ce8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:290 [inline]
 #2: ffffffff8c923ce8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x280/0x740 kernel/rcu/tree_exp.h:845
 #3: ffffffff8c91f720 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x5/0x30 include/linux/rcupdate.h:268
 #4: ffff8880b9a39b18 (&pool->lock){-.-.}-{2:2}, at: __queue_work+0x56d/0xd00
 #5: ffffffff8c91f720 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x5/0x30 include/linux/rcupdate.h:268

the dependencies between HARDIRQ-irq-safe lock and the holding lock:
-> (&pool->lock){-.-.}-{2:2} {
   IN-HARDIRQ-W at:
                    lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
                    __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
                    _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
                    __queue_work+0x56d/0xd00
                    queue_work_on+0x14b/0x250 kernel/workqueue.c:1559
                    hrtimer_switch_to_hres kernel/time/hrtimer.c:747 [inline]
                    hrtimer_run_queues+0x14b/0x450 kernel/time/hrtimer.c:1912
                    run_local_timers kernel/time/timer.c:1762 [inline]
                    update_process_times+0xca/0x200 kernel/time/timer.c:1787
                    tick_periodic+0x197/0x210 kernel/time/tick-common.c:100
                    tick_handle_periodic+0x46/0x150 kernel/time/tick-common.c:112
                    local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1085 [inline]
                    __sysvec_apic_timer_interrupt+0x139/0x470 arch/x86/kernel/apic/apic.c:1102
                    sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1096
                    asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
                    native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
                    arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
                    default_idle+0xb/0x10 arch/x86/kernel/process.c:717
                    default_idle_call+0x81/0xc0 kernel/sched/idle.c:112
                    cpuidle_idle_call kernel/sched/idle.c:194 [inline]
                    do_idle+0x271/0x670 kernel/sched/idle.c:306
                    cpu_startup_entry+0x14/0x20 kernel/sched/idle.c:403
                    start_kernel+0x48c/0x535 init/main.c:1137
                    secondary_startup_64_no_verify+0xb1/0xbb
   IN-SOFTIRQ-W at:
                    lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
                    __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
                    _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
                    __queue_work+0x56d/0xd00
                    call_timer_fn+0x16d/0x560 kernel/time/timer.c:1421
                    expire_timers kernel/time/timer.c:1461 [inline]
                    __run_timers+0x6a8/0x890 kernel/time/timer.c:1737
                    __do_softirq+0x3b3/0x93a kernel/softirq.c:558
                    invoke_softirq kernel/softirq.c:432 [inline]
                    __irq_exit_rcu+0x155/0x240 kernel/softirq.c:637
                    irq_exit_rcu+0x5/0x20 kernel/softirq.c:649
                    sysvec_apic_timer_interrupt+0x91/0xb0 arch/x86/kernel/apic/apic.c:1096
                    asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
                    native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
                    arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
                    default_idle+0xb/0x10 arch/x86/kernel/process.c:717
                    default_idle_call+0x81/0xc0 kernel/sched/idle.c:112
                    cpuidle_idle_call kernel/sched/idle.c:194 [inline]
                    do_idle+0x271/0x670 kernel/sched/idle.c:306
                    cpu_startup_entry+0x14/0x20 kernel/sched/idle.c:403
                    start_kernel+0x48c/0x535 init/main.c:1137
                    secondary_startup_64_no_verify+0xb1/0xbb
   INITIAL USE at:
                   lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
                   __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
                   _raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
                   pwq_adjust_max_active+0x14e/0x550 kernel/workqueue.c:3783
                   link_pwq kernel/workqueue.c:3849 [inline]
                   alloc_and_link_pwqs kernel/workqueue.c:4243 [inline]
                   alloc_workqueue+0xbb4/0x13f0 kernel/workqueue.c:4365
                   workqueue_init_early+0x7b2/0x96c kernel/workqueue.c:6099
                   start_kernel+0x1fa/0x535 init/main.c:1024
                   secondary_startup_64_no_verify+0xb1/0xbb
 }
 ... key at: [] init_worker_pool.__key+0x0/0x20

the dependencies between the lock to be acquired
 and HARDIRQ-irq-unsafe lock:
-> (&htab->buckets[i].lock){+...}-{2:2} {
   HARDIRQ-ON-W at:
                    lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
                    __raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
                    _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
                    sock_hash_free+0x14c/0x780 net/core/sock_map.c:1154
                    process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
                    worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
                    kthread+0x3f6/0x4f0 kernel/kthread.c:319
                    ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
   INITIAL USE at:
                   lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
                   __raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
                   _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
                   sock_hash_free+0x14c/0x780 net/core/sock_map.c:1154
                   process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
                   worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
                   kthread+0x3f6/0x4f0 kernel/kthread.c:319
                   ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
 }
 ... key at: [] sock_hash_alloc.__key+0x0/0x20
 ... acquired at:
   lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
   __raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
   _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
   sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937
   bpf_prog_2c29ac5cdc6b1842+0x3a/0x400
   bpf_dispatcher_nop_func include/linux/bpf.h:780 [inline]
   __bpf_prog_run include/linux/filter.h:625 [inline]
   bpf_prog_run include/linux/filter.h:632 [inline]
   __bpf_trace_run kernel/trace/bpf_trace.c:1880 [inline]
   bpf_trace_run3+0x1d1/0x380 kernel/trace/bpf_trace.c:1918
   trace_workqueue_queue_work include/trace/events/workqueue.h:23 [inline]
   __queue_work+0xc99/0xd00 kernel/workqueue.c:1512
   queue_work_on+0x14b/0x250 kernel/workqueue.c:1559
   queue_work include/linux/workqueue.h:512 [inline]
   synchronize_rcu_expedited+0x4eb/0x740 kernel/rcu/tree_exp.h:856
   synchronize_rcu+0x107/0x1a0 kernel/rcu/tree.c:3798
   sock_hash_free+0x6e8/0x780 net/core/sock_map.c:1177
   process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
   worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
   kthread+0x3f6/0x4f0 kernel/kthread.c:319
   ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298

stack backtrace:
CPU: 0 PID: 2923 Comm: kworker/0:3 Not tainted 5.15.152-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/29/2024
Workqueue: events bpf_map_free_deferred
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 print_bad_irq_dependency kernel/locking/lockdep.c:2567 [inline]
 check_irq_usage kernel/locking/lockdep.c:2806 [inline]
 check_prev_add kernel/locking/lockdep.c:3057 [inline]
 check_prevs_add kernel/locking/lockdep.c:3172 [inline]
 validate_chain+0x4d01/0x5930 kernel/locking/lockdep.c:3788
 __lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5012
 lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
 __raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
 _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
 sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937
 bpf_prog_2c29ac5cdc6b1842+0x3a/0x400
 bpf_dispatcher_nop_func include/linux/bpf.h:780 [inline]
 __bpf_prog_run include/linux/filter.h:625 [inline]
 bpf_prog_run include/linux/filter.h:632 [inline]
 __bpf_trace_run kernel/trace/bpf_trace.c:1880 [inline]
 bpf_trace_run3+0x1d1/0x380 kernel/trace/bpf_trace.c:1918
 trace_workqueue_queue_work include/trace/events/workqueue.h:23 [inline]
 __queue_work+0xc99/0xd00 kernel/workqueue.c:1512
 queue_work_on+0x14b/0x250 kernel/workqueue.c:1559
 queue_work include/linux/workqueue.h:512 [inline]
 synchronize_rcu_expedited+0x4eb/0x740 kernel/rcu/tree_exp.h:856
 synchronize_rcu+0x107/0x1a0 kernel/rcu/tree.c:3798
 sock_hash_free+0x6e8/0x780 net/core/sock_map.c:1177
 process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
 worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
 kthread+0x3f6/0x4f0 kernel/kthread.c:319
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
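
What the report shows: &pool->lock is HARDIRQ-safe (a timer hard IRQ takes it via queue_work_on()), while &htab->buckets[i].lock is HARDIRQ-unsafe (sockmap takes it with the _bh spinlock variant, which masks only softirqs). Freeing a sockhash calls synchronize_rcu(), which queues expedited-RCU work; the workqueue_queue_work tracepoint then runs the attached BPF program while pool->lock is held, and that program calls sock_hash_delete_elem(), taking the bucket lock. This creates the new pool->lock -> buckets[i].lock dependency; combined with the pre-existing reverse ordering (a timer IRQ queuing work while sock_hash_free() holds the bucket lock with hard IRQs enabled), lockdep flags a possible AB-BA deadlock across an interrupt.

Below is a minimal userspace model of the two orderings, with pthread mutexes standing in for the spinlocks and a second thread standing in for the hard IRQ. The names pool_lock/bucket_lock and both thread functions are illustrative stand-ins, not kernel identifiers; this is a sketch of the inversion, not the kernel code:

/* cc -o inversion inversion.c -lpthread */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t pool_lock   = PTHREAD_MUTEX_INITIALIZER; /* ~ &pool->lock (HARDIRQ-safe) */
static pthread_mutex_t bucket_lock = PTHREAD_MUTEX_INITIALIZER; /* ~ &htab->buckets[i].lock (taken with _bh) */

/* Ordering 1 (new, from this report): __queue_work() holds pool->lock,
 * the workqueue_queue_work tracepoint runs a BPF program, and the
 * program calls sock_hash_delete_elem(), taking the bucket lock. */
static void *trace_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&pool_lock);     /* __queue_work() */
	usleep(1000);                       /* widen the window so the two runs interleave */
	pthread_mutex_lock(&bucket_lock);   /* BPF prog -> sock_hash_delete_elem() */
	pthread_mutex_unlock(&bucket_lock);
	pthread_mutex_unlock(&pool_lock);
	return NULL;
}

/* Ordering 2 (pre-existing): sock_hash_free() holds the bucket lock with
 * hard IRQs still enabled (spin_lock_bh); a timer IRQ then queues work
 * and takes pool->lock on top of it. */
static void *irq_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&bucket_lock);   /* sock_hash_free(), hard IRQs enabled */
	usleep(1000);
	pthread_mutex_lock(&pool_lock);     /* timer IRQ -> queue_work_on() */
	pthread_mutex_unlock(&pool_lock);
	pthread_mutex_unlock(&bucket_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	pthread_create(&a, NULL, trace_path, NULL);
	pthread_create(&b, NULL, irq_path, NULL);
	pthread_join(a, NULL);              /* with the sleeps, this almost always hangs: AB-BA deadlock */
	pthread_join(b, NULL);
	return 0;
}

The model deadlocks at runtime; lockdep proves the same condition statically from the two dependency chains, without the interleaving ever having to occur. Typical remedies for this class of splat are to take the unsafe lock with an IRQ-disabling variant (e.g. spin_lock_irqsave) or to keep tracing-attached BPF programs from re-entering the sockmap code; the report itself does not say which fix, if any, was applied.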