syzbot


possible deadlock in lock_timer_base

Status: upstream: reported C repro on 2023/06/17 18:14
Reported-by: syzbot+1e90d72fb78c8c8fae1d@syzkaller.appspotmail.com
First crash: 321d, last: 23d
Bug presence (1)
Date | Name | Commit | Repro | Result
2024/05/03 | upstream (ToT) | f03359bca01b | C | Didn't crash
Similar bugs (2)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | possible deadlock in lock_timer_base bpf net | C | | | 107 | 1d13h | 1216d | 1/26 | upstream: reported C repro on 2021/01/03 06:59
linux-5.15 | possible deadlock in lock_timer_base | C | error | | 91 | 4h18m | 343d | 0/3 | upstream: reported C repro on 2023/05/25 21:57
Fix bisection attempts (1)
Created Duration User Patch Repo Result
2023/10/09 13:49 2h28m fix candidate upstream job log (0)

Sample crash report:
=====================================================
WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
6.1.84-syzkaller #0 Not tainted
-----------------------------------------------------
kworker/1:2/151 [HC0[0]:SC0[2]:HE0:SE0] is trying to acquire:
ffff8880766fd820 (&htab->buckets[i].lock){+...}-{2:2}, at: sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932

and this task is already holding:
ffff8880b9928358 (&base->lock){-.-.}-{2:2}, at: lock_timer_base+0x120/0x260 kernel/time/timer.c:999
which would create a new lock dependency:
 (&base->lock){-.-.}-{2:2} -> (&htab->buckets[i].lock){+...}-{2:2}

but this new dependency connects a HARDIRQ-irq-safe lock:
 (&base->lock){-.-.}-{2:2}

... which became HARDIRQ-irq-safe at:
  lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
  __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
  _raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
  lock_timer_base+0x120/0x260 kernel/time/timer.c:999
  add_timer_on+0x1eb/0x580 kernel/time/timer.c:1239
  handle_irq_event_percpu kernel/irq/handle.c:195 [inline]
  handle_irq_event+0xa9/0x1e0 kernel/irq/handle.c:210
  handle_level_irq+0x3ab/0x6c0 kernel/irq/chip.c:650
  generic_handle_irq_desc include/linux/irqdesc.h:158 [inline]
  handle_irq arch/x86/kernel/irq.c:231 [inline]
  __common_interrupt+0xd7/0x1f0 arch/x86/kernel/irq.c:250
  common_interrupt+0x9f/0xc0 arch/x86/kernel/irq.c:240
  asm_common_interrupt+0x22/0x40 arch/x86/include/asm/idtentry.h:644
  __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:152 [inline]
  _raw_spin_unlock_irqrestore+0xd4/0x130 kernel/locking/spinlock.c:194
  __setup_irq+0x12fa/0x1d80 kernel/irq/manage.c:1809
  request_threaded_irq+0x2a7/0x380 kernel/irq/manage.c:2198
  request_irq include/linux/interrupt.h:168 [inline]
  setup_default_timer_irq+0x1f/0x30 arch/x86/kernel/time.c:70
  x86_late_time_init+0x51/0x86 arch/x86/kernel/time.c:94
  start_kernel+0x414/0x53f init/main.c:1102
  secondary_startup_64_no_verify+0xcf/0xdb

to a HARDIRQ-irq-unsafe lock:
 (&htab->buckets[i].lock){+...}-{2:2}

... which became HARDIRQ-irq-unsafe at:
...
  lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
  __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
  _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
  sock_hash_free+0x160/0x820 net/core/sock_map.c:1149
  process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
  worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
  kthread+0x28d/0x320 kernel/kthread.c:376
  ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307

other info that might help us debug this:

 Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&htab->buckets[i].lock);
                               local_irq_disable();
                               lock(&base->lock);
                               lock(&htab->buckets[i].lock);
  <Interrupt>
    lock(&base->lock);

 *** DEADLOCK ***

4 locks held by kworker/1:2/151:
 #0: ffff888012472138 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
 #1: ffffc9000261fd20 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
 #2: ffff8880b9928358 (&base->lock){-.-.}-{2:2}, at: lock_timer_base+0x120/0x260 kernel/time/timer.c:999
 #3: ffffffff8d12a980 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 #3: ffffffff8d12a980 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
 #3: ffffffff8d12a980 (rcu_read_lock){....}-{1:2}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2272 [inline]
 #3: ffffffff8d12a980 (rcu_read_lock){....}-{1:2}, at: bpf_trace_run3+0x146/0x440 kernel/trace/bpf_trace.c:2313

the dependencies between HARDIRQ-irq-safe lock and the holding lock:
-> (&base->lock){-.-.}-{2:2} {
   IN-HARDIRQ-W at:
                    lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
                    __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
                    _raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
                    lock_timer_base+0x120/0x260 kernel/time/timer.c:999
                    add_timer_on+0x1eb/0x580 kernel/time/timer.c:1239
                    handle_irq_event_percpu kernel/irq/handle.c:195 [inline]
                    handle_irq_event+0xa9/0x1e0 kernel/irq/handle.c:210
                    handle_level_irq+0x3ab/0x6c0 kernel/irq/chip.c:650
                    generic_handle_irq_desc include/linux/irqdesc.h:158 [inline]
                    handle_irq arch/x86/kernel/irq.c:231 [inline]
                    __common_interrupt+0xd7/0x1f0 arch/x86/kernel/irq.c:250
                    common_interrupt+0x9f/0xc0 arch/x86/kernel/irq.c:240
                    asm_common_interrupt+0x22/0x40 arch/x86/include/asm/idtentry.h:644
                    __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:152 [inline]
                    _raw_spin_unlock_irqrestore+0xd4/0x130 kernel/locking/spinlock.c:194
                    __setup_irq+0x12fa/0x1d80 kernel/irq/manage.c:1809
                    request_threaded_irq+0x2a7/0x380 kernel/irq/manage.c:2198
                    request_irq include/linux/interrupt.h:168 [inline]
                    setup_default_timer_irq+0x1f/0x30 arch/x86/kernel/time.c:70
                    x86_late_time_init+0x51/0x86 arch/x86/kernel/time.c:94
                    start_kernel+0x414/0x53f init/main.c:1102
                    secondary_startup_64_no_verify+0xcf/0xdb
   IN-SOFTIRQ-W at:
                    lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
                    __raw_spin_lock_irq include/linux/spinlock_api_smp.h:119 [inline]
                    _raw_spin_lock_irq+0xcf/0x110 kernel/locking/spinlock.c:170
                    __run_timers+0x111/0x890 kernel/time/timer.c:1802
                    run_timer_softirq+0x63/0xf0 kernel/time/timer.c:1833
                    __do_softirq+0x2e9/0xa4c kernel/softirq.c:571
                    invoke_softirq kernel/softirq.c:445 [inline]
                    __irq_exit_rcu+0x155/0x240 kernel/softirq.c:650
                    irq_exit_rcu+0x5/0x20 kernel/softirq.c:662
                    common_interrupt+0xa4/0xc0 arch/x86/kernel/irq.c:240
                    asm_common_interrupt+0x22/0x40 arch/x86/include/asm/idtentry.h:644
                    console_emit_next_record+0xd67/0x1000 kernel/printk/printk.c:2786
                    console_unlock+0x278/0x7c0 kernel/printk/printk.c:2906
                    vprintk_emit+0x523/0x740 kernel/printk/printk.c:2303
                    _printk+0xd1/0x111 kernel/printk/printk.c:2328
                    spectre_v2_select_mitigation+0x53f/0x7d3 arch/x86/kernel/cpu/bugs.c:1698
                    cpu_select_mitigations+0x3d/0x8f arch/x86/kernel/cpu/bugs.c:148
                    arch_cpu_finalize_init+0xf/0x81 arch/x86/kernel/cpu/common.c:2449
                    start_kernel+0x423/0x53f init/main.c:1106
                    secondary_startup_64_no_verify+0xcf/0xdb
   INITIAL USE at:
                   lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
                   __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
                   _raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
                   lock_timer_base+0x120/0x260 kernel/time/timer.c:999
                   add_timer_on+0x1eb/0x580 kernel/time/timer.c:1239
                   handle_irq_event_percpu kernel/irq/handle.c:195 [inline]
                   handle_irq_event+0xa9/0x1e0 kernel/irq/handle.c:210
                   handle_level_irq+0x3ab/0x6c0 kernel/irq/chip.c:650
                   generic_handle_irq_desc include/linux/irqdesc.h:158 [inline]
                   handle_irq arch/x86/kernel/irq.c:231 [inline]
                   __common_interrupt+0xd7/0x1f0 arch/x86/kernel/irq.c:250
                   common_interrupt+0x9f/0xc0 arch/x86/kernel/irq.c:240
                   asm_common_interrupt+0x22/0x40 arch/x86/include/asm/idtentry.h:644
                   __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:152 [inline]
                   _raw_spin_unlock_irqrestore+0xd4/0x130 kernel/locking/spinlock.c:194
                   __setup_irq+0x12fa/0x1d80 kernel/irq/manage.c:1809
                   request_threaded_irq+0x2a7/0x380 kernel/irq/manage.c:2198
                   request_irq include/linux/interrupt.h:168 [inline]
                   setup_default_timer_irq+0x1f/0x30 arch/x86/kernel/time.c:70
                   x86_late_time_init+0x51/0x86 arch/x86/kernel/time.c:94
                   start_kernel+0x414/0x53f init/main.c:1102
                   secondary_startup_64_no_verify+0xcf/0xdb
 }
 ... key      at: [<ffffffff91cd5480>] init_timer_cpu.__key+0x0/0x20

the dependencies between the lock to be acquired
 and HARDIRQ-irq-unsafe lock:
-> (&htab->buckets[i].lock){+...}-{2:2} {
   HARDIRQ-ON-W at:
                    lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
                    __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
                    _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
                    sock_hash_free+0x160/0x820 net/core/sock_map.c:1149
                    process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
                    worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
                    kthread+0x28d/0x320 kernel/kthread.c:376
                    ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307
   INITIAL USE at:
                   lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
                   __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
                   _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
                   sock_hash_free+0x160/0x820 net/core/sock_map.c:1149
                   process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
                   worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
                   kthread+0x28d/0x320 kernel/kthread.c:376
                   ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307
 }
 ... key      at: [<ffffffff920b1340>] sock_hash_alloc.__key+0x0/0x20
 ... acquired at:
   lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
   __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
   _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
   sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932
   bpf_prog_2c29ac5cdc6b1842+0x3a/0x3e
   bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
   __bpf_prog_run include/linux/filter.h:603 [inline]
   bpf_prog_run include/linux/filter.h:610 [inline]
   __bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
   bpf_trace_run3+0x231/0x440 kernel/trace/bpf_trace.c:2313
   trace_timer_start include/trace/events/timer.h:53 [inline]
   enqueue_timer+0x440/0x600 kernel/time/timer.c:609
   __mod_timer+0x92b/0xee0
   schedule_timeout+0x1b4/0x300 kernel/time/timer.c:1964
   synchronize_rcu_expedited_wait_once kernel/rcu/tree_exp.h:580 [inline]
   synchronize_rcu_expedited_wait kernel/rcu/tree_exp.h:631 [inline]
   rcu_exp_wait_wake kernel/rcu/tree_exp.h:699 [inline]
   rcu_exp_sel_wait_wake+0x764/0x1d50 kernel/rcu/tree_exp.h:733
   process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
   worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
   kthread+0x28d/0x320 kernel/kthread.c:376
   ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307


stack backtrace:
CPU: 1 PID: 151 Comm: kworker/1:2 Not tainted 6.1.84-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Workqueue: rcu_gp wait_rcu_exp_gp
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 print_bad_irq_dependency kernel/locking/lockdep.c:2604 [inline]
 check_irq_usage kernel/locking/lockdep.c:2843 [inline]
 check_prev_add kernel/locking/lockdep.c:3094 [inline]
 check_prevs_add kernel/locking/lockdep.c:3209 [inline]
 validate_chain+0x4d16/0x5950 kernel/locking/lockdep.c:3825
 __lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
 lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
 __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
 _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
 sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:932
 bpf_prog_2c29ac5cdc6b1842+0x3a/0x3e
 bpf_dispatcher_nop_func include/linux/bpf.h:989 [inline]
 __bpf_prog_run include/linux/filter.h:603 [inline]
 bpf_prog_run include/linux/filter.h:610 [inline]
 __bpf_trace_run kernel/trace/bpf_trace.c:2273 [inline]
 bpf_trace_run3+0x231/0x440 kernel/trace/bpf_trace.c:2313
 trace_timer_start include/trace/events/timer.h:53 [inline]
 enqueue_timer+0x440/0x600 kernel/time/timer.c:609
 __mod_timer+0x92b/0xee0
 schedule_timeout+0x1b4/0x300 kernel/time/timer.c:1964
 synchronize_rcu_expedited_wait_once kernel/rcu/tree_exp.h:580 [inline]
 synchronize_rcu_expedited_wait kernel/rcu/tree_exp.h:631 [inline]
 rcu_exp_wait_wake kernel/rcu/tree_exp.h:699 [inline]
 rcu_exp_sel_wait_wake+0x764/0x1d50 kernel/rcu/tree_exp.h:733
 process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
 worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
 kthread+0x28d/0x320 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307
 </TASK>

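The call chain in the report (enqueue_timer -> trace_timer_start -> bpf_trace_run3 -> bpf_prog_2c29ac5cdc6b1842 -> sock_hash_delete_elem) shows a BPF program attached to the timer_start tracepoint deleting an element from a sockhash map while __mod_timer()/lock_timer_base() already holds the hardirq-safe &base->lock; sock_hash_delete_elem() then takes the hardirq-unsafe bucket lock with spin_lock_bh(). A minimal sketch of a program with that shape follows. It is inferred from the trace above, not taken from the syzkaller C reproducer; the attach section, map name, key/value types, and function name are assumptions.

/* Hedged sketch: a raw tracepoint program on timer:timer_start that deletes
 * from a sockhash.  Whenever a timer is enqueued, the kernel runs this
 * program from trace_timer_start() while base->lock is held with IRQs off;
 * bpf_map_delete_elem() then reaches sock_hash_delete_elem(), which takes
 * buckets[i].lock with spin_lock_bh() -- the inversion lockdep flags above.
 * Build (illustrative): clang -O2 -g -target bpf -c repro_sketch.bpf.c
 */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_SOCKHASH);
	__uint(max_entries, 64);
	__type(key, __u32);
	__type(value, __u64);
} sock_hash SEC(".maps");          /* illustrative map name */

SEC("raw_tracepoint/timer_start")
int on_timer_start(void *ctx)
{
	__u32 key = 0;

	/* sock_hash_delete_elem() -> spin_lock_bh(&htab->buckets[i].lock),
	 * while the caller already holds the hardirq-safe timer base lock. */
	bpf_map_delete_elem(&sock_hash, &key);
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

Triggering the splat then only requires some timer to be armed after the program is attached; in the sample report it is the RCU-expedited grace-period kworker arming one via schedule_timeout(). Breaking the dependency means either refusing to reach the bucket lock from a context that already holds a hardirq-safe lock or taking the bucket lock in an irq-safe way; which approach the eventual upstream fix takes is not recorded on this page.
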
Crashes (34):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2024/04/10 08:56 linux-6.1.y 347385861c50 171ec371 .config console log report syz C [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan possible deadlock in lock_timer_base
2024/04/08 17:13 linux-6.1.y 347385861c50 53df08b6 .config console log report syz C [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2024/04/06 23:17 linux-6.1.y 347385861c50 ca620dd8 .config console log report syz C [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2024/04/06 22:46 linux-6.1.y 347385861c50 ca620dd8 .config console log report syz C [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2024/04/06 19:30 linux-6.1.y 347385861c50 ca620dd8 .config console log report syz C [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan possible deadlock in lock_timer_base
2024/04/06 08:23 linux-6.1.y 347385861c50 ca620dd8 .config console log report syz C [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2024/04/05 10:35 linux-6.1.y 347385861c50 0ee3535e .config console log report syz C [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2024/04/04 20:56 linux-6.1.y 347385861c50 0ee3535e .config console log report syz C [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan possible deadlock in lock_timer_base
2024/03/31 23:32 linux-6.1.y e5cd595e23c1 6baf5069 .config console log report syz C [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2024/03/31 09:02 linux-6.1.y e5cd595e23c1 6baf5069 .config console log report syz C [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2024/03/26 16:42 linux-6.1.y d7543167affd bcd9b39f .config console log report syz C [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2024/03/26 10:27 linux-6.1.y d7543167affd bcd9b39f .config console log report syz C [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2024/03/26 09:58 linux-6.1.y d7543167affd bcd9b39f .config console log report syz C [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2024/03/26 09:27 linux-6.1.y d7543167affd bcd9b39f .config console log report syz C [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2024/03/26 05:22 linux-6.1.y d7543167affd bcd9b39f .config console log report syz C [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2024/03/24 00:00 linux-6.1.y d7543167affd 0ea90952 .config console log report syz C [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2023/06/17 18:12 linux-6.1.y ca87e77a2ef8 f3921d4d .config console log report syz [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-arm64 possible deadlock in lock_timer_base
2024/03/19 11:38 linux-6.1.y d7543167affd baa80228 .config console log report syz C [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2024/04/10 08:38 linux-6.1.y 347385861c50 171ec371 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2024/04/09 20:50 linux-6.1.y 347385861c50 171ec371 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2024/04/09 04:48 linux-6.1.y 347385861c50 f3234354 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2024/04/09 04:39 linux-6.1.y 347385861c50 f3234354 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan possible deadlock in lock_timer_base
2024/04/07 13:27 linux-6.1.y 347385861c50 ca620dd8 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2024/04/06 18:59 linux-6.1.y 347385861c50 ca620dd8 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan possible deadlock in lock_timer_base
2024/04/05 10:06 linux-6.1.y 347385861c50 0ee3535e .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2024/03/31 09:30 linux-6.1.y e5cd595e23c1 6baf5069 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan possible deadlock in lock_timer_base
2024/03/31 05:23 linux-6.1.y e5cd595e23c1 6baf5069 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan possible deadlock in lock_timer_base
2024/03/26 08:27 linux-6.1.y d7543167affd bcd9b39f .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan possible deadlock in lock_timer_base
2024/03/26 03:54 linux-6.1.y d7543167affd bcd9b39f .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan possible deadlock in lock_timer_base
2024/03/25 19:47 linux-6.1.y d7543167affd 0ea90952 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2024/03/05 04:04 linux-6.1.y a3eb3a74aa8c 5fc53669 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan possible deadlock in lock_timer_base
2023/10/25 21:22 linux-6.1.y 32c9cdbe383c 72e794c4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2023/10/23 22:04 linux-6.1.y 7d24402875c7 989a3687 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-perf possible deadlock in lock_timer_base
2024/03/20 14:27 linux-6.1.y d7543167affd a485f239 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-arm64 possible deadlock in lock_timer_base
* Repros shown struck through on the syzbot page no longer work on HEAD.