syzbot

possible deadlock in hrtimer_run_queues

Status: fixed on 2024/05/11 07:50
Reported-by: syzbot+df17c7f1816a27780d00@syzkaller.appspotmail.com
Fix commit: d1e73fb19a4c bpf, sockmap: Prevent lock inversion deadlock in map delete elem
First crash: 251d, last: 225d
Fix bisection: fixed by (bisect log):
commit d1e73fb19a4c872d7a399ad3c66e8ca30e0875ec
Author: Jakub Sitnicki <jakub@cloudflare.com>
Date: Tue Apr 2 10:46:21 2024 +0000

  bpf, sockmap: Prevent lock inversion deadlock in map delete elem
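
The commit title points at the sockmap/sockhash delete path: a BPF program attached to a tracepoint can invoke map_delete_elem while the caller already holds a hardirq-safe lock (here the workqueue pool->lock inside __queue_work), and the BH-only bucket lock then creates the inverted dependency lockdep reports below. The following is a minimal sketch of the kind of guard such a fix can add; it is illustrative only, not the upstream patch, and the struct and helper names are taken from net/core/sock_map.c from memory, so treat them as approximate:

/* Sketch only, not the actual fix commit: bail out of the delete path
 * when IRQs are already disabled, so a tracing BPF program fired under
 * a hardirq-safe lock (e.g. pool->lock in __queue_work()) never takes
 * the BH-only bucket lock and never creates the
 * pool->lock -> htab->buckets[i].lock dependency seen in the report.
 */
static int sock_hash_delete_elem(struct bpf_map *map, void *key)
{
        struct bpf_shtab *htab = container_of(map, struct bpf_shtab, map);
        u32 hash, key_size = map->key_size;
        struct bpf_shtab_bucket *bucket;
        struct bpf_shtab_elem *elem;
        int ret = -ENOENT;

        if (irqs_disabled())
                return -EBUSY;  /* refusing beats deadlocking */

        hash = sock_hash_bucket_hash(key, key_size);
        bucket = sock_hash_select_bucket(htab, hash);

        spin_lock_bh(&bucket->lock);
        elem = sock_hash_lookup_elem_raw(&bucket->head, hash, key, key_size);
        if (elem) {
                hlist_del_rcu(&elem->node);
                sock_map_unref(elem->sk, elem);
                sock_hash_free_elem(htab, elem);
                ret = 0;
        }
        spin_unlock_bh(&bucket->lock);

        return ret;
}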

  
Bug presence (1)
Date Name Commit Repro Result
2024/04/29 upstream (ToT) d03d41889088 C Didn't crash
Similar bugs (2)
Kernel Title Repro Cause bisect Fix bisect Count Last Reported Patched Status
upstream possible deadlock in hrtimer_run_queues net bpf C error 592 209d 246d 25/28 fixed on 2024/05/22 23:36
linux-6.1 possible deadlock in hrtimer_run_queues C inconclusive 19 114d 250d 0/3 auto-obsoleted due to no activity on 2024/11/06 21:34

Sample crash report:
=====================================================
WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
5.15.153-syzkaller #0 Not tainted
-----------------------------------------------------
kworker/1:1/25 [HC0[0]:SC0[2]:HE0:SE0] is trying to acquire:
ffffc900030513e0 (&htab->buckets[i].lock){+...}-{2:2}, at: sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937

and this task is already holding:
ffff8880b9b39b18 (&pool->lock){-.-.}-{2:2}, at: __queue_work+0x56d/0xd00
which would create a new lock dependency:
 (&pool->lock){-.-.}-{2:2} -> (&htab->buckets[i].lock){+...}-{2:2}

but this new dependency connects a HARDIRQ-irq-safe lock:
 (&pool->lock){-.-.}-{2:2}

... which became HARDIRQ-irq-safe at:
  lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
  __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
  _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
  __queue_work+0x56d/0xd00
  queue_work_on+0x14b/0x250 kernel/workqueue.c:1559
  hrtimer_switch_to_hres kernel/time/hrtimer.c:747 [inline]
  hrtimer_run_queues+0x14b/0x450 kernel/time/hrtimer.c:1912
  run_local_timers kernel/time/timer.c:1762 [inline]
  update_process_times+0xca/0x200 kernel/time/timer.c:1787
  tick_periodic+0x197/0x210 kernel/time/tick-common.c:100
  tick_handle_periodic+0x46/0x150 kernel/time/tick-common.c:112
  local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1085 [inline]
  __sysvec_apic_timer_interrupt+0x139/0x470 arch/x86/kernel/apic/apic.c:1102
  sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1096
  asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
  native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
  arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
  default_idle+0xb/0x10 arch/x86/kernel/process.c:717
  default_idle_call+0x81/0xc0 kernel/sched/idle.c:112
  cpuidle_idle_call kernel/sched/idle.c:194 [inline]
  do_idle+0x271/0x670 kernel/sched/idle.c:306
  cpu_startup_entry+0x14/0x20 kernel/sched/idle.c:403
  start_kernel+0x48c/0x535 init/main.c:1138
  secondary_startup_64_no_verify+0xb1/0xbb

to a HARDIRQ-irq-unsafe lock:
 (&htab->buckets[i].lock){+...}-{2:2}

... which became HARDIRQ-irq-unsafe at:
...
  lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
  __raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
  _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
  sock_hash_free+0x14c/0x780 net/core/sock_map.c:1154
  process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
  worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
  kthread+0x3f6/0x4f0 kernel/kthread.c:319
  ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298

other info that might help us debug this:

 Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&htab->buckets[i].lock);
                               local_irq_disable();
                               lock(&pool->lock);
                               lock(&htab->buckets[i].lock);
  <Interrupt>
    lock(&pool->lock);

 *** DEADLOCK ***

6 locks held by kworker/1:1/25:
 #0: ffff888011c70938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x78a/0x10c0 kernel/workqueue.c:2283
 #1: ffffc90000dffd20 ((work_completion)(&map->work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0x10c0 kernel/workqueue.c:2285
 #2: ffffffff8c923ce8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:290 [inline]
 #2: ffffffff8c923ce8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x280/0x740 kernel/rcu/tree_exp.h:845
 #3: ffffffff8c91f720 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x5/0x30 include/linux/rcupdate.h:311
 #4: ffff8880b9b39b18 (&pool->lock){-.-.}-{2:2}, at: __queue_work+0x56d/0xd00
 #5: ffffffff8c91f720 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x5/0x30 include/linux/rcupdate.h:311

the dependencies between HARDIRQ-irq-safe lock and the holding lock:
-> (&pool->lock){-.-.}-{2:2} {
   IN-HARDIRQ-W at:
                    lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
                    __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
                    _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
                    __queue_work+0x56d/0xd00
                    queue_work_on+0x14b/0x250 kernel/workqueue.c:1559
                    hrtimer_switch_to_hres kernel/time/hrtimer.c:747 [inline]
                    hrtimer_run_queues+0x14b/0x450 kernel/time/hrtimer.c:1912
                    run_local_timers kernel/time/timer.c:1762 [inline]
                    update_process_times+0xca/0x200 kernel/time/timer.c:1787
                    tick_periodic+0x197/0x210 kernel/time/tick-common.c:100
                    tick_handle_periodic+0x46/0x150 kernel/time/tick-common.c:112
                    local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1085 [inline]
                    __sysvec_apic_timer_interrupt+0x139/0x470 arch/x86/kernel/apic/apic.c:1102
                    sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1096
                    asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
                    native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
                    arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
                    default_idle+0xb/0x10 arch/x86/kernel/process.c:717
                    default_idle_call+0x81/0xc0 kernel/sched/idle.c:112
                    cpuidle_idle_call kernel/sched/idle.c:194 [inline]
                    do_idle+0x271/0x670 kernel/sched/idle.c:306
                    cpu_startup_entry+0x14/0x20 kernel/sched/idle.c:403
                    start_kernel+0x48c/0x535 init/main.c:1138
                    secondary_startup_64_no_verify+0xb1/0xbb
   IN-SOFTIRQ-W at:
                    lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
                    __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
                    _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
                    __queue_work+0x56d/0xd00
                    call_timer_fn+0x16d/0x560 kernel/time/timer.c:1421
                    expire_timers kernel/time/timer.c:1461 [inline]
                    __run_timers+0x6a8/0x890 kernel/time/timer.c:1737
                    __do_softirq+0x3b3/0x93a kernel/softirq.c:558
                    invoke_softirq kernel/softirq.c:432 [inline]
                    __irq_exit_rcu+0x155/0x240 kernel/softirq.c:637
                    irq_exit_rcu+0x5/0x20 kernel/softirq.c:649
                    sysvec_apic_timer_interrupt+0x91/0xb0 arch/x86/kernel/apic/apic.c:1096
                    asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
                    __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:161 [inline]
                    _raw_spin_unlock_irqrestore+0xd4/0x130 kernel/locking/spinlock.c:194
                    pci_conf1_write+0x128/0x140 arch/x86/pci/direct.c:76
                    __pci_read_base+0x730/0xea0 drivers/pci/probe.c:251
                    pci_read_bases drivers/pci/probe.c:336 [inline]
                    pci_setup_device+0x16fe/0x2eb0 drivers/pci/probe.c:1883
                    pci_scan_device drivers/pci/probe.c:2389 [inline]
                    pci_scan_single_device+0x313/0x4c0 drivers/pci/probe.c:2543
                    pci_scan_slot+0x198/0x860 drivers/pci/probe.c:2622
                    pci_scan_child_bus_extend+0xa6/0x950 drivers/pci/probe.c:2839
                    acpi_pci_root_create+0xb63/0xd80 drivers/acpi/pci_root.c:925
                    pci_acpi_scan_root+0x3b4/0x630 arch/x86/pci/acpi.c:368
                    acpi_pci_root_add+0xd0e/0x22f0 drivers/acpi/pci_root.c:597
                    acpi_scan_attach_handler drivers/acpi/scan.c:2130 [inline]
                    acpi_bus_attach+0x85a/0xcb0 drivers/acpi/scan.c:2178
                    acpi_bus_attach+0x2c8/0xcb0 drivers/acpi/scan.c:2199
                    acpi_bus_attach+0x2c8/0xcb0 drivers/acpi/scan.c:2199
                    acpi_bus_scan+0x10a/0x200 drivers/acpi/scan.c:2371
                    acpi_scan_init+0x263/0x7b5 drivers/acpi/scan.c:2546
                    acpi_init+0x148/0x21f drivers/acpi/bus.c:1346
                    do_one_initcall+0x22b/0x7a0 init/main.c:1300
                    do_initcall_level+0x157/0x207 init/main.c:1373
                    do_initcalls+0x49/0x86 init/main.c:1389
                    kernel_init_freeable+0x425/0x5b5 init/main.c:1613
                    kernel_init+0x19/0x290 init/main.c:1504
                    ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
   INITIAL USE at:
                   lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
                   __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
                   _raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
                   pwq_adjust_max_active+0x14e/0x550 kernel/workqueue.c:3783
                   link_pwq kernel/workqueue.c:3849 [inline]
                   alloc_and_link_pwqs kernel/workqueue.c:4243 [inline]
                   alloc_workqueue+0xbb4/0x13f0 kernel/workqueue.c:4365
                   workqueue_init_early+0x7b2/0x96c kernel/workqueue.c:6099
                   start_kernel+0x1fa/0x535 init/main.c:1025
                   secondary_startup_64_no_verify+0xb1/0xbb
 }
 ... key      at: [<ffffffff8f5d9c60>] init_worker_pool.__key+0x0/0x20

the dependencies between the lock to be acquired
 and HARDIRQ-irq-unsafe lock:
-> (&htab->buckets[i].lock){+...}-{2:2} {
   HARDIRQ-ON-W at:
                    lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
                    __raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
                    _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
                    sock_hash_free+0x14c/0x780 net/core/sock_map.c:1154
                    process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
                    worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
                    kthread+0x3f6/0x4f0 kernel/kthread.c:319
                    ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
   INITIAL USE at:
                   lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
                   __raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
                   _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
                   sock_hash_free+0x14c/0x780 net/core/sock_map.c:1154
                   process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
                   worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
                   kthread+0x3f6/0x4f0 kernel/kthread.c:319
                   ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
 }
 ... key      at: [<ffffffff9178a740>] sock_hash_alloc.__key+0x0/0x20
 ... acquired at:
   lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
   __raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
   _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
   sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937
   bpf_prog_2c29ac5cdc6b1842+0x3a/0xda8
   bpf_dispatcher_nop_func include/linux/bpf.h:785 [inline]
   __bpf_prog_run include/linux/filter.h:628 [inline]
   bpf_prog_run include/linux/filter.h:635 [inline]
   __bpf_trace_run kernel/trace/bpf_trace.c:1880 [inline]
   bpf_trace_run1+0x168/0x2f0 kernel/trace/bpf_trace.c:1916
   trace_workqueue_activate_work+0x150/0x1b0 include/trace/events/workqueue.h:59
   __queue_work+0x89e/0xd00 kernel/workqueue.c:1521
   queue_work_on+0x14b/0x250 kernel/workqueue.c:1559
   queue_work include/linux/workqueue.h:512 [inline]
   synchronize_rcu_expedited+0x4eb/0x740 kernel/rcu/tree_exp.h:856
   synchronize_rcu+0x107/0x1a0 kernel/rcu/tree.c:3798
   sock_hash_free+0xa6/0x780 net/core/sock_map.c:1144
   process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
   worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
   kthread+0x3f6/0x4f0 kernel/kthread.c:319
   ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298


stack backtrace:
CPU: 1 PID: 25 Comm: kworker/1:1 Not tainted 5.15.153-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Workqueue: events bpf_map_free_deferred
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 print_bad_irq_dependency kernel/locking/lockdep.c:2567 [inline]
 check_irq_usage kernel/locking/lockdep.c:2806 [inline]
 check_prev_add kernel/locking/lockdep.c:3057 [inline]
 check_prevs_add kernel/locking/lockdep.c:3172 [inline]
 validate_chain+0x4d01/0x5930 kernel/locking/lockdep.c:3788
 __lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5012
 lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
 __raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
 _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
 sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937
 bpf_prog_2c29ac5cdc6b1842+0x3a/0xda8
 bpf_dispatcher_nop_func include/linux/bpf.h:785 [inline]
 __bpf_prog_run include/linux/filter.h:628 [inline]
 bpf_prog_run include/linux/filter.h:635 [inline]
 __bpf_trace_run kernel/trace/bpf_trace.c:1880 [inline]
 bpf_trace_run1+0x168/0x2f0 kernel/trace/bpf_trace.c:1916
 trace_workqueue_activate_work+0x150/0x1b0 include/trace/events/workqueue.h:59
 __queue_work+0x89e/0xd00 kernel/workqueue.c:1521
 queue_work_on+0x14b/0x250 kernel/workqueue.c:1559
 queue_work include/linux/workqueue.h:512 [inline]
 synchronize_rcu_expedited+0x4eb/0x740 kernel/rcu/tree_exp.h:856
 synchronize_rcu+0x107/0x1a0 kernel/rcu/tree.c:3798
 sock_hash_free+0xa6/0x780 net/core/sock_map.c:1144
 process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
 worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
 kthread+0x3f6/0x4f0 kernel/kthread.c:319
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
 </TASK>
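
Reading the report: pool->lock is hardirq-safe because the periodic tick (hrtimer_run_queues -> queue_work_on) takes it in interrupt context, while htab->buckets[i].lock is only BH-safe and is taken by sock_hash_free()/sock_hash_delete_elem(). The reproducer attaches a BPF program to the workqueue_activate_work tracepoint; when sock_hash_free() queues work for an expedited RCU grace period, __queue_work() fires that tracepoint with pool->lock held, the program deletes a sockhash element, and the bucket lock ends up acquired under the hardirq-safe lock. A BPF program of roughly the following shape exercises the same path (illustrative, written against common libbpf conventions; it is not the syzkaller reproducer and the map and section names are made up):

// SPDX-License-Identifier: GPL-2.0
/* Illustrative trigger, not the reproducer: delete from a SOCKHASH map
 * inside a workqueue tracepoint, which runs while __queue_work() holds
 * the hardirq-safe pool->lock.
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
        __uint(type, BPF_MAP_TYPE_SOCKHASH);
        __uint(max_entries, 64);
        __type(key, __u32);
        __type(value, __u64);
} sock_hash SEC(".maps");

SEC("tracepoint/workqueue/workqueue_activate_work")
int on_activate_work(void *ctx)
{
        __u32 key = 0;

        /* Takes htab->buckets[i].lock (BH-only) under pool->lock. */
        bpf_map_delete_elem(&sock_hash, &key);
        return 0;
}

char _license[] SEC("license") = "GPL";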

Crashes (20):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets (help?) Manager Title
2024/03/29 04:09 linux-5.15.y 9465fef4ae35 e91187ee .config console log report syz C [disk image] [vmlinux] [kernel image] ci2-linux-5-15-kasan-perf possible deadlock in hrtimer_run_queues
2024/03/21 01:04 linux-5.15.y b95c01af2113 5b7d42ae .config console log report syz C [disk image] [vmlinux] [kernel image] ci2-linux-5-15-kasan-perf possible deadlock in hrtimer_run_queues
2024/03/17 02:31 linux-5.15.y b95c01af2113 d615901c .config console log report syz C [disk image] [vmlinux] [kernel image] ci2-linux-5-15-kasan-perf possible deadlock in hrtimer_run_queues
2024/04/10 07:35 linux-5.15.y 9465fef4ae35 171ec371 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-5-15-kasan-perf possible deadlock in hrtimer_run_queues
2024/04/09 18:32 linux-5.15.y 9465fef4ae35 f3234354 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-5-15-kasan possible deadlock in hrtimer_run_queues
2024/04/08 15:00 linux-5.15.y 9465fef4ae35 53df08b6 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-5-15-kasan possible deadlock in hrtimer_run_queues
2024/04/08 07:57 linux-5.15.y 9465fef4ae35 ca620dd8 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-5-15-kasan possible deadlock in hrtimer_run_queues
2024/04/06 10:53 linux-5.15.y 9465fef4ae35 ca620dd8 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-5-15-kasan-perf possible deadlock in hrtimer_run_queues
2024/04/06 00:39 linux-5.15.y 9465fef4ae35 77230c29 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-5-15-kasan-perf possible deadlock in hrtimer_run_queues
2024/04/05 20:08 linux-5.15.y 9465fef4ae35 77230c29 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-5-15-kasan-perf possible deadlock in hrtimer_run_queues
2024/04/01 22:00 linux-5.15.y 9465fef4ae35 6baf5069 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-5-15-kasan possible deadlock in hrtimer_run_queues
2024/03/31 01:32 linux-5.15.y 9465fef4ae35 6baf5069 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-5-15-kasan-perf possible deadlock in hrtimer_run_queues
2024/03/30 18:17 linux-5.15.y 9465fef4ae35 6baf5069 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-5-15-kasan-perf possible deadlock in hrtimer_run_queues
2024/03/30 06:08 linux-5.15.y 9465fef4ae35 6baf5069 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-5-15-kasan possible deadlock in hrtimer_run_queues
2024/03/30 06:08 linux-5.15.y 9465fef4ae35 6baf5069 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-5-15-kasan possible deadlock in hrtimer_run_queues
2024/03/28 18:17 linux-5.15.y 9465fef4ae35 e91187ee .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-5-15-kasan-perf possible deadlock in hrtimer_run_queues
2024/03/20 01:44 linux-5.15.y b95c01af2113 e104824c .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-5-15-kasan-perf possible deadlock in hrtimer_run_queues
2024/03/16 15:48 linux-5.15.y b95c01af2113 d615901c .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-5-15-kasan possible deadlock in hrtimer_run_queues
2024/03/15 14:23 linux-5.15.y 574362648507 d615901c .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-5-15-kasan possible deadlock in hrtimer_run_queues
2024/03/15 12:41 linux-5.15.y 574362648507 d615901c .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-5-15-kasan possible deadlock in hrtimer_run_queues
* Struck through repros no longer work on HEAD.