syzbot


possible deadlock in __hrtimer_run_queues

Status: auto-obsoleted due to no activity on 2023/08/23 09:03
Subsystems: kernel
Reported-by: syzbot+3384541342de0ca933f1@syzkaller.appspotmail.com
First crash: 353d, last: 320d
Discussions (1)
Title: [syzbot] [kernel?] possible deadlock in __hrtimer_run_queues
Replies (including bot): 4 (5)
Last reply: 2023/05/14 06:48
Similar bugs (5)
Kernel     | Title                                                  | Repro | Cause bisect | Fix bisect | Count | Last  | Reported | Patched | Status
upstream   | possible deadlock in __hrtimer_run_queues (2) [kernel] | C     | error        | -          | 15    | 4d11h | 31d      | 0/26    | upstream: reported C repro on 2024/03/24 21:06
linux-6.1  | possible deadlock in __hrtimer_run_queues (2)          | C     | -            | -          | 2     | 16d   | 23d      | 0/3     | upstream: reported C repro on 2024/04/02 19:14
linux-5.15 | possible deadlock in __hrtimer_run_queues              | -     | -            | -          | 3     | 332d  | 343d     | 0/3     | auto-obsoleted due to no activity on 2023/09/06 10:20
linux-6.1  | possible deadlock in __hrtimer_run_queues              | -     | -            | -          | 1     | 338d  | 338d     | 0/3     | auto-obsoleted due to no activity on 2023/08/31 12:31
linux-5.15 | possible deadlock in __hrtimer_run_queues (2)          | -     | -            | -          | 3     | 5d14h | 28d      | 0/3     | upstream: reported on 2024/03/28 04:51

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.4.0-rc4-syzkaller-00031-g8b817fded42d #0 Not tainted
------------------------------------------------------
syz-fuzzer/5132 is trying to acquire lock:
ffff88803fffeba0 (&pgdat->kswapd_wait){-...}-{2:2}, at: __wake_up_common_lock+0xb8/0x140 kernel/sched/wait.c:137

but task is already holding lock:
ffff88802c82b858 (hrtimer_bases.lock){-.-.}-{2:2}, at: __run_hrtimer kernel/time/hrtimer.c:1689 [inline]
ffff88802c82b858 (hrtimer_bases.lock){-.-.}-{2:2}, at: __hrtimer_run_queues+0x23e/0xbe0 kernel/time/hrtimer.c:1749

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #4 (hrtimer_bases.lock){-.-.}-{2:2}:
       __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
       _raw_spin_lock_irqsave+0x3d/0x60 kernel/locking/spinlock.c:162
       lock_hrtimer_base kernel/time/hrtimer.c:173 [inline]
       hrtimer_start_range_ns+0xe9/0xd80 kernel/time/hrtimer.c:1296
       hrtimer_start_expires include/linux/hrtimer.h:432 [inline]
       do_start_rt_bandwidth kernel/sched/rt.c:116 [inline]
       start_rt_bandwidth kernel/sched/rt.c:127 [inline]
       inc_rt_group kernel/sched/rt.c:1241 [inline]
       inc_rt_tasks kernel/sched/rt.c:1285 [inline]
       __enqueue_rt_entity kernel/sched/rt.c:1461 [inline]
       enqueue_rt_entity kernel/sched/rt.c:1510 [inline]
       enqueue_task_rt+0xa86/0xfc0 kernel/sched/rt.c:1545
       enqueue_task+0xad/0x330 kernel/sched/core.c:2082
       __sched_setscheduler.constprop.0+0xb89/0x25d0 kernel/sched/core.c:7774
       _sched_setscheduler kernel/sched/core.c:7820 [inline]
       sched_setscheduler_nocheck kernel/sched/core.c:7867 [inline]
       sched_set_fifo+0xb1/0x110 kernel/sched/core.c:7891
       irq_thread+0xe3/0x540 kernel/irq/manage.c:1302
       kthread+0x344/0x440 kernel/kthread.c:379
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308

-> #3 (&rt_b->rt_runtime_lock){-.-.}-{2:2}:
       __raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
       _raw_spin_lock+0x2e/0x40 kernel/locking/spinlock.c:154
       __enable_runtime kernel/sched/rt.c:876 [inline]
       rq_online_rt+0xb3/0x3b0 kernel/sched/rt.c:2485
       set_rq_online.part.0+0xf9/0x140 kernel/sched/core.c:9541
       set_rq_online kernel/sched/core.c:9533 [inline]
       sched_cpu_activate+0x216/0x440 kernel/sched/core.c:9649
       cpuhp_invoke_callback+0x645/0xeb0 kernel/cpu.c:192
       cpuhp_thread_fun+0x47f/0x700 kernel/cpu.c:815
       smpboot_thread_fn+0x659/0x9e0 kernel/smpboot.c:164
       kthread+0x344/0x440 kernel/kthread.c:379
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308

-> #2 (&rq->__lock){-.-.}-{2:2}:
       _raw_spin_lock_nested+0x34/0x40 kernel/locking/spinlock.c:378
       raw_spin_rq_lock_nested+0x2f/0x120 kernel/sched/core.c:558
       raw_spin_rq_lock kernel/sched/sched.h:1366 [inline]
       rq_lock kernel/sched/sched.h:1653 [inline]
       task_fork_fair+0x74/0x4f0 kernel/sched/fair.c:12095
       sched_cgroup_fork+0x3d1/0x540 kernel/sched/core.c:4777
       copy_process+0x4b8a/0x7600 kernel/fork.c:2618
       kernel_clone+0xeb/0x890 kernel/fork.c:2918
       user_mode_thread+0xb1/0xf0 kernel/fork.c:2996
       rest_init+0x27/0x2b0 init/main.c:700
       arch_call_rest_init+0x13/0x30 init/main.c:834
       start_kernel+0x3b6/0x490 init/main.c:1088
       x86_64_start_reservations+0x18/0x30 arch/x86/kernel/head64.c:556
       x86_64_start_kernel+0xb3/0xc0 arch/x86/kernel/head64.c:537
       secondary_startup_64_no_verify+0xf4/0xfb

-> #1 (&p->pi_lock){-.-.}-{2:2}:
       __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
       _raw_spin_lock_irqsave+0x3d/0x60 kernel/locking/spinlock.c:162
       try_to_wake_up+0xab/0x1c40 kernel/sched/core.c:4191
       autoremove_wake_function+0x16/0x150 kernel/sched/wait.c:419
       __wake_up_common+0x147/0x650 kernel/sched/wait.c:107
       __wake_up_common_lock+0xd4/0x140 kernel/sched/wait.c:138
       wakeup_kswapd+0x3fe/0x5c0 mm/vmscan.c:7798
       rmqueue mm/page_alloc.c:3057 [inline]
       get_page_from_freelist+0x6c5/0x2c00 mm/page_alloc.c:3499
       __alloc_pages+0x1cb/0x4a0 mm/page_alloc.c:4768
       __folio_alloc+0x16/0x40 mm/page_alloc.c:4800
       vma_alloc_folio+0x155/0x890 mm/mempolicy.c:2240
       do_anonymous_page mm/memory.c:4085 [inline]
       do_pte_missing mm/memory.c:3645 [inline]
       handle_pte_fault mm/memory.c:4947 [inline]
       __handle_mm_fault+0x224c/0x41c0 mm/memory.c:5089
       handle_mm_fault+0x2af/0x9f0 mm/memory.c:5243
       do_user_addr_fault+0x2ca/0x1210 arch/x86/mm/fault.c:1349
       handle_page_fault arch/x86/mm/fault.c:1534 [inline]
       exc_page_fault+0x98/0x170 arch/x86/mm/fault.c:1590
       asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:570

-> #0 (&pgdat->kswapd_wait){-...}-{2:2}:
       check_prev_add kernel/locking/lockdep.c:3113 [inline]
       check_prevs_add kernel/locking/lockdep.c:3232 [inline]
       validate_chain kernel/locking/lockdep.c:3847 [inline]
       __lock_acquire+0x2fcd/0x5f30 kernel/locking/lockdep.c:5088
       lock_acquire kernel/locking/lockdep.c:5705 [inline]
       lock_acquire+0x1b1/0x520 kernel/locking/lockdep.c:5670
       __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
       _raw_spin_lock_irqsave+0x3d/0x60 kernel/locking/spinlock.c:162
       __wake_up_common_lock+0xb8/0x140 kernel/sched/wait.c:137
       wakeup_kswapd+0x3fe/0x5c0 mm/vmscan.c:7798
       rmqueue mm/page_alloc.c:3057 [inline]
       get_page_from_freelist+0x6c5/0x2c00 mm/page_alloc.c:3499
       __alloc_pages+0x1cb/0x4a0 mm/page_alloc.c:4768
       alloc_pages+0x1aa/0x270 mm/mempolicy.c:2279
       alloc_slab_page mm/slub.c:1851 [inline]
       allocate_slab+0x25f/0x390 mm/slub.c:1998
       new_slab mm/slub.c:2051 [inline]
       ___slab_alloc+0xa91/0x1400 mm/slub.c:3192
       __slab_alloc.constprop.0+0x56/0xa0 mm/slub.c:3291
       __slab_alloc_node mm/slub.c:3344 [inline]
       slab_alloc_node mm/slub.c:3441 [inline]
       slab_alloc mm/slub.c:3459 [inline]
       __kmem_cache_alloc_lru mm/slub.c:3466 [inline]
       kmem_cache_alloc+0x38e/0x3b0 mm/slub.c:3475
       kmem_cache_zalloc include/linux/slab.h:670 [inline]
       fill_pool+0x264/0x5c0 lib/debugobjects.c:168
       debug_objects_fill_pool lib/debugobjects.c:606 [inline]
       debug_object_activate+0x12d/0x4f0 lib/debugobjects.c:704
       debug_hrtimer_activate kernel/time/hrtimer.c:420 [inline]
       debug_activate kernel/time/hrtimer.c:475 [inline]
       enqueue_hrtimer+0x27/0x320 kernel/time/hrtimer.c:1084
       __run_hrtimer kernel/time/hrtimer.c:1702 [inline]
       __hrtimer_run_queues+0xa5b/0xbe0 kernel/time/hrtimer.c:1749
       hrtimer_interrupt+0x320/0x7b0 kernel/time/hrtimer.c:1811
       local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1095 [inline]
       __sysvec_apic_timer_interrupt+0x14a/0x430 arch/x86/kernel/apic/apic.c:1112
       sysvec_apic_timer_interrupt+0x92/0xc0 arch/x86/kernel/apic/apic.c:1106
       asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:645
       __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:152 [inline]
       _raw_spin_unlock_irqrestore+0x3c/0x70 kernel/locking/spinlock.c:194
       spin_unlock_irqrestore include/linux/spinlock.h:405 [inline]
       rmqueue_bulk mm/page_alloc.c:2351 [inline]
       __rmqueue_pcplist+0xd4a/0x1790 mm/page_alloc.c:2958
       rmqueue_pcplist mm/page_alloc.c:3000 [inline]
       rmqueue mm/page_alloc.c:3043 [inline]
       get_page_from_freelist+0x50c/0x2c00 mm/page_alloc.c:3499
       __alloc_pages+0x1cb/0x4a0 mm/page_alloc.c:4768
       __folio_alloc+0x16/0x40 mm/page_alloc.c:4800
       vma_alloc_folio+0x155/0x890 mm/mempolicy.c:2240
       do_anonymous_page mm/memory.c:4085 [inline]
       do_pte_missing mm/memory.c:3645 [inline]
       handle_pte_fault mm/memory.c:4947 [inline]
       __handle_mm_fault+0x224c/0x41c0 mm/memory.c:5089
       handle_mm_fault+0x2af/0x9f0 mm/memory.c:5243
       do_user_addr_fault+0x2ca/0x1210 arch/x86/mm/fault.c:1349
       handle_page_fault arch/x86/mm/fault.c:1534 [inline]
       exc_page_fault+0x98/0x170 arch/x86/mm/fault.c:1590
       asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:570

other info that might help us debug this:

Chain exists of:
  &pgdat->kswapd_wait --> &rt_b->rt_runtime_lock --> hrtimer_bases.lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(hrtimer_bases.lock);
                               lock(&rt_b->rt_runtime_lock);
                               lock(hrtimer_bases.lock);
  lock(&pgdat->kswapd_wait);

 *** DEADLOCK ***

4 locks held by syz-fuzzer/5132:
 #0: ffff88801c23cdf0 (&vma->vm_lock->lock){++++}-{3:3}, at: vma_start_read include/linux/mm.h:646 [inline]
 #0: ffff88801c23cdf0 (&vma->vm_lock->lock){++++}-{3:3}, at: lock_vma_under_rcu+0x21c/0xc00 mm/memory.c:5291
 #1: ffff88802c843618 (&pcp->lock){+.+.}-{2:2}, at: spin_trylock include/linux/spinlock.h:360 [inline]
 #1: ffff88802c843618 (&pcp->lock){+.+.}-{2:2}, at: rmqueue_pcplist mm/page_alloc.c:2987 [inline]
 #1: ffff88802c843618 (&pcp->lock){+.+.}-{2:2}, at: rmqueue mm/page_alloc.c:3043 [inline]
 #1: ffff88802c843618 (&pcp->lock){+.+.}-{2:2}, at: get_page_from_freelist+0x49d/0x2c00 mm/page_alloc.c:3499
 #2: ffff88802c82b858 (hrtimer_bases.lock){-.-.}-{2:2}, at: __run_hrtimer kernel/time/hrtimer.c:1689 [inline]
 #2: ffff88802c82b858 (hrtimer_bases.lock){-.-.}-{2:2}, at: __hrtimer_run_queues+0x23e/0xbe0 kernel/time/hrtimer.c:1749
 #3: ffffffff8d104620 (fill_pool_map-wait-type-override){+.+.}-{3:3}, at: debug_object_activate+0xf7/0x4f0 lib/debugobjects.c:754

stack backtrace:
CPU: 2 PID: 5132 Comm: syz-fuzzer Not tainted 6.4.0-rc4-syzkaller-00031-g8b817fded42d #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.14.0-2 04/01/2014
Call Trace:
 <IRQ>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd9/0x150 lib/dump_stack.c:106
 check_noncircular+0x25f/0x2e0 kernel/locking/lockdep.c:2188
 check_prev_add kernel/locking/lockdep.c:3113 [inline]
 check_prevs_add kernel/locking/lockdep.c:3232 [inline]
 validate_chain kernel/locking/lockdep.c:3847 [inline]
 __lock_acquire+0x2fcd/0x5f30 kernel/locking/lockdep.c:5088
 lock_acquire kernel/locking/lockdep.c:5705 [inline]
 lock_acquire+0x1b1/0x520 kernel/locking/lockdep.c:5670
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0x3d/0x60 kernel/locking/spinlock.c:162
 __wake_up_common_lock+0xb8/0x140 kernel/sched/wait.c:137
 wakeup_kswapd+0x3fe/0x5c0 mm/vmscan.c:7798
 rmqueue mm/page_alloc.c:3057 [inline]
 get_page_from_freelist+0x6c5/0x2c00 mm/page_alloc.c:3499
 __alloc_pages+0x1cb/0x4a0 mm/page_alloc.c:4768
 alloc_pages+0x1aa/0x270 mm/mempolicy.c:2279
 alloc_slab_page mm/slub.c:1851 [inline]
 allocate_slab+0x25f/0x390 mm/slub.c:1998
 new_slab mm/slub.c:2051 [inline]
 ___slab_alloc+0xa91/0x1400 mm/slub.c:3192
 __slab_alloc.constprop.0+0x56/0xa0 mm/slub.c:3291
 __slab_alloc_node mm/slub.c:3344 [inline]
 slab_alloc_node mm/slub.c:3441 [inline]
 slab_alloc mm/slub.c:3459 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3466 [inline]
 kmem_cache_alloc+0x38e/0x3b0 mm/slub.c:3475
 kmem_cache_zalloc include/linux/slab.h:670 [inline]
 fill_pool+0x264/0x5c0 lib/debugobjects.c:168
 debug_objects_fill_pool lib/debugobjects.c:606 [inline]
 debug_object_activate+0x12d/0x4f0 lib/debugobjects.c:704
 debug_hrtimer_activate kernel/time/hrtimer.c:420 [inline]
 debug_activate kernel/time/hrtimer.c:475 [inline]
 enqueue_hrtimer+0x27/0x320 kernel/time/hrtimer.c:1084
 __run_hrtimer kernel/time/hrtimer.c:1702 [inline]
 __hrtimer_run_queues+0xa5b/0xbe0 kernel/time/hrtimer.c:1749
 hrtimer_interrupt+0x320/0x7b0 kernel/time/hrtimer.c:1811
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1095 [inline]
 __sysvec_apic_timer_interrupt+0x14a/0x430 arch/x86/kernel/apic/apic.c:1112
 sysvec_apic_timer_interrupt+0x92/0xc0 arch/x86/kernel/apic/apic.c:1106
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:645
RIP: 0010:__raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:152 [inline]
RIP: 0010:_raw_spin_unlock_irqrestore+0x3c/0x70 kernel/locking/spinlock.c:194
Code: 74 24 10 e8 36 ad 52 f7 48 89 ef e8 5e 1b 53 f7 81 e3 00 02 00 00 75 25 9c 58 f6 c4 02 75 2d 48 85 db 74 01 fb bf 01 00 00 00 <e8> cf f0 44 f7 65 8b 05 f0 82 f0 75 85 c0 74 0a 5b 5d c3 e8 bc a6
RSP: 0000:ffffc90003a97818 EFLAGS: 00000206
RAX: 0000000000000002 RBX: 0000000000000200 RCX: 1ffffffff22a55d6
RDX: 0000000000000000 RSI: 0000000000000003 RDI: 0000000000000001
RBP: ffff88803fffacc0 R08: 0000000000000001 R09: ffffffff91528d07
R10: 0000000000000001 R11: 1ffffffff1ccaa8c R12: ffffffffffffffe1
R13: dffffc0000000000 R14: ffffea0000581108 R15: ffff88802c843668
 spin_unlock_irqrestore include/linux/spinlock.h:405 [inline]
 rmqueue_bulk mm/page_alloc.c:2351 [inline]
 __rmqueue_pcplist+0xd4a/0x1790 mm/page_alloc.c:2958
 rmqueue_pcplist mm/page_alloc.c:3000 [inline]
 rmqueue mm/page_alloc.c:3043 [inline]
 get_page_from_freelist+0x50c/0x2c00 mm/page_alloc.c:3499
 __alloc_pages+0x1cb/0x4a0 mm/page_alloc.c:4768
 __folio_alloc+0x16/0x40 mm/page_alloc.c:4800
 vma_alloc_folio+0x155/0x890 mm/mempolicy.c:2240
 do_anonymous_page mm/memory.c:4085 [inline]
 do_pte_missing mm/memory.c:3645 [inline]
 handle_pte_fault mm/memory.c:4947 [inline]
 __handle_mm_fault+0x224c/0x41c0 mm/memory.c:5089
 handle_mm_fault+0x2af/0x9f0 mm/memory.c:5243
 do_user_addr_fault+0x2ca/0x1210 arch/x86/mm/fault.c:1349
 handle_page_fault arch/x86/mm/fault.c:1534 [inline]
 exc_page_fault+0x98/0x170 arch/x86/mm/fault.c:1590
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:570
RIP: 0033:0x46a95c
Code: 4c 01 de 48 29 c3 c5 fe 6f 06 c5 fe 6f 4e 20 c5 fe 6f 56 40 c5 fe 6f 5e 60 48 01 c6 c5 fd 7f 07 c5 fd 7f 4f 20 c5 fd 7f 57 40 <c5> fd 7f 5f 60 48 01 c7 48 29 c3 77 cf 48 01 c3 48 01 fb c4 c1 7e
RSP: 002b:000000c000835b78 EFLAGS: 00010202
RAX: 0000000000000080 RBX: 00000000000035f5 RCX: 000000c000698000
RDX: 000000c000690000 RSI: 000000c000694a0b RDI: 000000c0040effa0
RBP: 000000c000835ba8 R08: 000000c000690000 R09: 0000000000000001
R10: 000000c0040eb615 R11: 000000000000000b R12: 0000000000003ee7
R13: 0000000000007fcb R14: 000000c000082ea0 R15: 0000000000000010
 </TASK>
----------------
Code disassembly (best guess):
   0:	74 24                	je     0x26
   2:	10 e8                	adc    %ch,%al
   4:	36 ad                	lods   %ss:(%rsi),%eax
   6:	52                   	push   %rdx
   7:	f7 48 89 ef e8 5e 1b 	testl  $0x1b5ee8ef,-0x77(%rax)
   e:	53                   	push   %rbx
   f:	f7 81 e3 00 02 00 00 	testl  $0x9c257500,0x200e3(%rcx)
  16:	75 25 9c
  19:	58                   	pop    %rax
  1a:	f6 c4 02             	test   $0x2,%ah
  1d:	75 2d                	jne    0x4c
  1f:	48 85 db             	test   %rbx,%rbx
  22:	74 01                	je     0x25
  24:	fb                   	sti
  25:	bf 01 00 00 00       	mov    $0x1,%edi
* 2a:	e8 cf f0 44 f7       	callq  0xf744f0fe <-- trapping instruction
  2f:	65 8b 05 f0 82 f0 75 	mov    %gs:0x75f082f0(%rip),%eax        # 0x75f08326
  36:	85 c0                	test   %eax,%eax
  38:	74 0a                	je     0x44
  3a:	5b                   	pop    %rbx
  3b:	5d                   	pop    %rbp
  3c:	c3                   	retq
  3d:	e8                   	.byte 0xe8
  3e:	bc                   	.byte 0xbc
  3f:	a6                   	cmpsb  %es:(%rdi),%ds:(%rsi)

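The dependency chain in the report can be read as a directed graph: each `-> #N` stack records that the listed lock was taken while the previous one in the chain was held, and lockdep complains when a new acquisition (here `&pgdat->kswapd_wait` taken under `hrtimer_bases.lock`, via the debugobjects allocation in the hrtimer interrupt) closes a cycle in that graph. A minimal sketch of that cycle check, as illustrative Python rather than kernel code, with lock names abridged from the report:

```python
# Toy model of lockdep's ordering check. An edge (a, b) records that
# lock b was acquired while lock a was held; a cycle in this graph
# means the observed lock orderings permit an ABBA-style deadlock.

def has_cycle(edges):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)

    def reachable(src, dst, seen=None):
        # Depth-first search: can we get from src back to dst?
        seen = seen if seen is not None else set()
        if src == dst:
            return True
        for nxt in graph.get(src, ()):
            if nxt not in seen:
                seen.add(nxt)
                if reachable(nxt, dst, seen):
                    return True
        return False

    # An edge (a, b) is part of a cycle if a is reachable from b.
    return any(reachable(b, a) for a, b in edges)

# Recorded history, mirroring the "-> #4" .. "-> #1" stacks above:
history = [
    ("rt_runtime_lock", "hrtimer_bases.lock"),  # -> #4
    ("rq->__lock", "rt_runtime_lock"),          # -> #3
    ("p->pi_lock", "rq->__lock"),               # -> #2
    ("kswapd_wait", "p->pi_lock"),              # -> #1
]
assert not has_cycle(history)  # ordering still consistent

# The new acquisition (-> #0): kswapd_wait taken under hrtimer_bases.lock.
history.append(("hrtimer_bases.lock", "kswapd_wait"))
assert has_cycle(history)      # cycle closed: lockdep would report here
```

The real checker (`check_noncircular()` in kernel/locking/lockdep.c, visible in the backtrace) works on lock *classes* and validates each new dependency incrementally, but the graph reachability idea is the same.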
Crashes (24):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2023/05/30 11:37 upstream 8b817fded42d 8d5c7541 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __hrtimer_run_queues
2023/05/27 17:10 upstream 49572d536129 cf184559 .config console log report info ci-qemu-upstream-386 possible deadlock in __hrtimer_run_queues
2023/05/23 11:20 upstream ae8373a5add4 4bce1a3e .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __hrtimer_run_queues
2023/05/23 08:21 upstream 421ca22e3138 4bce1a3e .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __hrtimer_run_queues
2023/05/23 02:25 upstream 421ca22e3138 4bce1a3e .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __hrtimer_run_queues
2023/05/23 01:54 upstream 421ca22e3138 4bce1a3e .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __hrtimer_run_queues
2023/05/21 02:43 upstream d635f6cc934b 4bce1a3e .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __hrtimer_run_queues
2023/05/18 17:20 upstream 4d6d4c7f541d 3bb7af1d .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __hrtimer_run_queues
2023/05/17 15:39 upstream f1fcbaa18b28 258520f6 .config console log report info ci-qemu-upstream-386 possible deadlock in __hrtimer_run_queues
2023/05/16 23:22 upstream f1fcbaa18b28 11c89444 .config console log report info ci-qemu-upstream-386 possible deadlock in __hrtimer_run_queues
2023/05/13 05:22 upstream 76c7f8873a76 2b9ba477 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __hrtimer_run_queues
2023/05/12 15:05 upstream cc3c44c9fda2 893599a2 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __hrtimer_run_queues
2023/05/09 23:34 upstream 1dc3731daf1f 1964022b .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __hrtimer_run_queues
2023/05/09 14:01 upstream ba0ad6ed89fd f4168103 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __hrtimer_run_queues
2023/05/09 11:21 upstream ba0ad6ed89fd f4168103 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __hrtimer_run_queues
2023/05/09 00:03 upstream ba0ad6ed89fd 33db58a6 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __hrtimer_run_queues
2023/05/08 11:28 upstream ac9a78681b92 90c93c40 .config console log report info ci-qemu-upstream-386 possible deadlock in __hrtimer_run_queues
2023/05/08 05:44 upstream 17784de648be 90c93c40 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in __hrtimer_run_queues
2023/05/31 08:40 net-next 2e246bca9865 09898419 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce possible deadlock in __hrtimer_run_queues
2023/06/10 03:58 linux-next 715abedee4cd 7086cdb9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in __hrtimer_run_queues
2023/06/10 03:42 linux-next 715abedee4cd 7086cdb9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in __hrtimer_run_queues
2023/05/29 07:06 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci eb0f1697d729 cf184559 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in __hrtimer_run_queues
2023/05/26 22:23 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci eb0f1697d729 cf184559 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in __hrtimer_run_queues
2023/05/20 07:18 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci f1fcbaa18b28 96689200 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in __hrtimer_run_queues
* Struck through repros no longer work on HEAD.