syzbot


BUG: soft lockup in aoecmd_cfg (3)

Status: upstream: reported on 2025/04/30 11:13
Subsystems: block
Reported-by: syzbot+5dfe55156cc098033526@syzkaller.appspotmail.com
First crash: 369d, last: 10d
AI Jobs (1)
ID:       da6f0d76-c57f-49e1-88bc-d93bcd4d07be
Workflow: repro
Result:   BUG: soft lockup in aoecmd_cfg (3)
Created:  2026/03/06 15:04
Started:  2026/03/06 15:04
Finished: 2026/03/06 15:15
Revision: 31e9c887f7dc24e04b3ca70d0d54fc34141844b0
Discussions (1)
Title:      [syzbot] [block?] BUG: soft lockup in aoecmd_cfg (3)
Replies:    0 (1 including bot)
Last reply: 2025/04/30 11:13
Similar bugs (8)
| Kernel     | Title                                                  | Rank | Repro | Cause bisect | Count | Last | Reported | Patched | Status |
|------------|--------------------------------------------------------|------|-------|--------------|-------|------|----------|---------|--------|
| linux-6.1  | INFO: rcu detected stall in aoecmd_cfg                 | 1    |       |              | 2     | 599d | 689d     | 0/3     | auto-obsoleted due to no activity on 2024/12/07 10:13 |
| upstream   | BUG: soft lockup in aoecmd_cfg (2) [block]             | 1    |       |              | 3     | 504d | 535d     | 0/29    | auto-obsoleted due to no activity on 2025/03/02 19:34 |
| upstream   | BUG: soft lockup in aoecmd_cfg [block]                 | 1    |       |              | 1     | 823d | 819d     | 0/29    | auto-obsoleted due to no activity on 2024/04/17 16:28 |
| upstream   | INFO: rcu detected stall in aoecmd_cfg (2) [usb block] | 1    | C     | done         | 7     | 599d | 712d     | 28/29   | fixed on 2024/10/22 11:57 |
| linux-5.15 | INFO: rcu detected stall in aoecmd_cfg                 | 1    |       |              | 1     | 246d | 246d     | 0/3     | auto-obsoleted due to no activity on 2025/11/25 17:18 |
| linux-6.1  | INFO: rcu detected stall in aoecmd_cfg (2)             | 1    |       |              | 1     | 56d  | 56d      | 0/3     | upstream: reported on 2026/02/23 19:48 |
| linux-5.15 | INFO: rcu detected stall in aoecmd_cfg (2)             | 1    |       |              | 2     | 85d  | 145d     | 0/3     | upstream: reported on 2025/11/26 13:19 |
| linux-6.6  | INFO: rcu detected stall in aoecmd_cfg                 | 1    |       |              | 1     | 50d  | 50d      | 0/2     | upstream: reported on 2026/03/01 04:23 |

Sample crash report:
watchdog: BUG: soft lockup - CPU#0 stuck for 143s! [syz.2.6550:5928]
Modules linked in:
irq event stamp: 15039415
hardirqs last  enabled at (15039414): [<ffffffff8baf8f2e>] irqentry_exit+0x59e/0x620 kernel/entry/common.c:242
hardirqs last disabled at (15039415): [<ffffffff8baf793e>] sysvec_apic_timer_interrupt+0xe/0xc0 arch/x86/kernel/apic/apic.c:1056
softirqs last  enabled at (14638980): [<ffffffff8188041f>] __do_softirq kernel/softirq.c:656 [inline]
softirqs last  enabled at (14638980): [<ffffffff8188041f>] invoke_softirq kernel/softirq.c:496 [inline]
softirqs last  enabled at (14638980): [<ffffffff8188041f>] __irq_exit_rcu+0x5f/0x150 kernel/softirq.c:723
softirqs last disabled at (14638985): [<ffffffff8188041f>] __do_softirq kernel/softirq.c:656 [inline]
softirqs last disabled at (14638985): [<ffffffff8188041f>] invoke_softirq kernel/softirq.c:496 [inline]
softirqs last disabled at (14638985): [<ffffffff8188041f>] __irq_exit_rcu+0x5f/0x150 kernel/softirq.c:723
CPU: 0 UID: 0 PID: 5928 Comm: syz.2.6550 Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
RIP: 0010:unwind_get_return_address+0x29/0x90 arch/x86/kernel/unwind_orc.c:382
Code: 90 f3 0f 1e fa 41 57 41 56 53 48 89 fb 49 be 00 00 00 00 00 fc ff df 48 89 f8 48 c1 e8 03 42 0f b6 04 30 84 c0 75 4c 83 3b 00 <74> 3a 48 83 c3 48 49 89 df 49 c1 ef 03 43 80 3c 37 00 74 08 48 89
RSP: 0018:ffffc900000076a8 EFLAGS: 00000202
RAX: 0000000000000000 RBX: ffffc900000076c8 RCX: 0000000000000101
RDX: 0000000000000003 RSI: ffffffff8e16b7af RDI: ffffc900000076c8
RBP: ffffc90000007750 R08: ffffc90000007727 R09: 0000000000000000
R10: ffffc90000007718 R11: fffff52000000ee5 R12: ffff888035680000
R13: 00000000000000f0 R14: dffffc0000000000 R15: ffffc900000076c8
FS:  00007f070c0766c0(0000) GS:ffff888125457000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f070c034d58 CR3: 000000003e68c000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000200000000300 DR2: 0000200000000300
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000600
Call Trace:
 <IRQ>
 arch_stack_walk+0xfb/0x150 arch/x86/kernel/stacktrace.c:26
 stack_trace_save+0xa9/0x100 kernel/stacktrace.c:122
 kasan_save_stack mm/kasan/common.c:57 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:78
 unpoison_slab_object mm/kasan/common.c:340 [inline]
 __kasan_slab_alloc+0x6c/0x80 mm/kasan/common.c:366
 kasan_slab_alloc include/linux/kasan.h:253 [inline]
 slab_post_alloc_hook mm/slub.c:4538 [inline]
 slab_alloc_node mm/slub.c:4866 [inline]
 kmem_cache_alloc_node_noprof+0x384/0x690 mm/slub.c:4918
 __alloc_skb+0x1d0/0x7d0 net/core/skbuff.c:702
 alloc_skb include/linux/skbuff.h:1383 [inline]
 new_skb+0x2f/0x2b0 drivers/block/aoe/aoecmd.c:66
 aoecmd_cfg_pkts drivers/block/aoe/aoecmd.c:430 [inline]
 aoecmd_cfg+0x2b1/0x800 drivers/block/aoe/aoecmd.c:1374
 call_timer_fn+0x192/0x640 kernel/time/timer.c:1748
 expire_timers kernel/time/timer.c:1799 [inline]
 __run_timers kernel/time/timer.c:2373 [inline]
 __run_timer_base+0x652/0x8b0 kernel/time/timer.c:2385
 run_timer_base kernel/time/timer.c:2394 [inline]
 run_timer_softirq+0xb7/0x170 kernel/time/timer.c:2404
 handle_softirqs+0x22a/0x870 kernel/softirq.c:622
 __do_softirq kernel/softirq.c:656 [inline]
 invoke_softirq kernel/softirq.c:496 [inline]
 __irq_exit_rcu+0x5f/0x150 kernel/softirq.c:723
 irq_exit_rcu+0x9/0x30 kernel/softirq.c:739
 instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1056 [inline]
 sysvec_apic_timer_interrupt+0xa6/0xc0 arch/x86/kernel/apic/apic.c:1056
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:697
RIP: 0010:call_rcu+0x65c/0x890 kernel/rcu/tree.c:3252
Code: c4 49 bc 00 00 00 00 00 fc ff df 7f 47 e8 ac b9 22 00 9c 58 a9 00 02 00 00 75 1f f7 44 24 28 00 02 00 00 74 01 fb 48 83 c4 50 <5b> 41 5c 41 5d 41 5e 41 5f 5d c3 cc cc cc cc cc e8 9f 9e 04 0a f7
RSP: 0018:ffffc90003956da8 EFLAGS: 00000282
RAX: 0000000000000006 RBX: ffff88807a8db678 RCX: 0000000000000001
RDX: 0000000000000000 RSI: ffffffff8defacfd RDI: ffffffff8c27d100
RBP: 1ffff110170c77d6 R08: ffffffff9011cfb7 R09: 1ffffffff20239f6
R10: dffffc0000000000 R11: fffffbfff20239f7 R12: dffffc0000000000
R13: ffff8880b863beb0 R14: 1ffff110170c77dc R15: ffff8880b863bee0
 context_switch kernel/sched/core.c:5301 [inline]
 __schedule+0x15e5/0x52d0 kernel/sched/core.c:6911
 preempt_schedule_irq+0x4d/0xa0 kernel/sched/core.c:7238
 irqentry_exit+0x599/0x620 kernel/entry/common.c:239
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:697
RIP: 0010:__update_page_owner_handle+0x1ce/0x570 mm/page_owner.c:258
Code: de e8 76 55 8b ff 4c 39 eb 0f 84 cd 02 00 00 48 8b 1d 16 8b 51 0c 48 8d 3c 2b 48 83 c7 08 48 89 f8 48 c1 e8 03 42 0f b6 04 30 <84> c0 0f 85 06 02 00 00 48 01 eb 8b 44 24 0c 89 43 08 48 89 d8 48
RSP: 0018:ffffc90003957100 EFLAGS: 00000a02
RAX: 0000000000000000 RBX: 0000000000000008 RCX: ffff888035680000
RDX: 0000000000000002 RSI: 0000000000000001 RDI: ffff88801c10d520
RBP: ffff88801c10d510 R08: ffffffff823a606a R09: ffffffff8e75e5e0
R10: ffffc90003956d08 R11: fffff5200072ada3 R12: 0000000000000000
R13: 0000000000000000 R14: dffffc0000000000 R15: 000000000002b5de
 __set_page_owner+0x10a/0x4c0 mm/page_owner.c:342
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x231/0x280 mm/page_alloc.c:1889
 prep_new_page mm/page_alloc.c:1897 [inline]
 get_page_from_freelist+0x24dc/0x2580 mm/page_alloc.c:3962
 __alloc_frozen_pages_noprof+0x18d/0x380 mm/page_alloc.c:5250
 alloc_pages_mpol+0x232/0x4a0 mm/mempolicy.c:2490
 alloc_frozen_pages_noprof mm/mempolicy.c:2561 [inline]
 alloc_pages_noprof+0xa8/0x1a0 mm/mempolicy.c:2581
 get_free_pages_noprof+0xf/0x80 mm/page_alloc.c:5309
 __kasan_populate_vmalloc_do mm/kasan/shadow.c:364 [inline]
 __kasan_populate_vmalloc+0x38/0x1d0 mm/kasan/shadow.c:424
 kasan_populate_vmalloc include/linux/kasan.h:580 [inline]
 alloc_vmap_area+0xd73/0x14b0 mm/vmalloc.c:2129
 __get_vm_area_node+0x1f8/0x300 mm/vmalloc.c:3232
 __vmalloc_node_range_noprof+0x372/0x1730 mm/vmalloc.c:4024
 __vmalloc_node_noprof mm/vmalloc.c:4124 [inline]
 __vmalloc_noprof+0xd2/0x120 mm/vmalloc.c:4140
 bpf_prog_alloc_no_stats+0x4a/0x4f0 kernel/bpf/core.c:107
 bpf_prog_alloc+0x3c/0x1a0 kernel/bpf/core.c:156
 bpf_prog_load+0x7ba/0x1ae0 kernel/bpf/syscall.c:2993
 __sys_bpf+0x618/0x950 kernel/bpf/syscall.c:6249
 __do_sys_bpf kernel/bpf/syscall.c:6362 [inline]
 __se_sys_bpf kernel/bpf/syscall.c:6360 [inline]
 __x64_sys_bpf+0x7c/0x90 kernel/bpf/syscall.c:6360
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f070b19c819
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f070c076028 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00007f070b415fa0 RCX: 00007f070b19c819
RDX: 0000000000000094 RSI: 0000200000000880 RDI: 0000000000000005
RBP: 00007f070b232c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f070b416038 R14: 00007f070b415fa0 R15: 00007fff7ebc2c48
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 5910 Comm: syz.7.6545 Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
RIP: 0010:csd_lock_wait kernel/smp.c:342 [inline]
RIP: 0010:smp_call_function_many_cond+0xce5/0x12c0 kernel/smp.c:877
Code: 45 8b 2c 24 44 89 ee 83 e6 01 31 ff e8 d4 eb 0b 00 41 83 e5 01 49 bd 00 00 00 00 00 fc ff df 75 07 e8 7f e7 0b 00 eb 38 f3 90 <42> 0f b6 04 2b 84 c0 75 11 41 f7 04 24 01 00 00 00 74 1e e8 63 e7
RSP: 0018:ffffc900053bf440 EFLAGS: 00000293
RAX: ffffffff81b9cd6d RBX: 1ffff110170c8525 RCX: ffff88807c60bd00
RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000000
RBP: ffffc900053bf580 R08: ffffffff9011cfb7 R09: 1ffffffff20239f6
R10: dffffc0000000000 R11: fffffbfff20239f7 R12: ffff8880b8642928
R13: dffffc0000000000 R14: ffff8880b873c000 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff888125557000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f55bcb63286 CR3: 000000000e54c000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000200000000300 DR2: 0000200000000300
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000600
Call Trace:
 <TASK>
 on_each_cpu_cond_mask+0x3f/0x80 kernel/smp.c:1043
 __flush_tlb_multi arch/x86/include/asm/paravirt.h:57 [inline]
 flush_tlb_multi arch/x86/mm/tlb.c:1382 [inline]
 flush_tlb_mm_range+0x5c3/0x10c0 arch/x86/mm/tlb.c:1472
 tlb_flush arch/x86/include/asm/tlb.h:23 [inline]
 tlb_flush_mmu_tlbonly include/asm-generic/tlb.h:505 [inline]
 tlb_flush_mmu+0x1af/0xa30 mm/mmu_gather.c:404
 tlb_finish_mmu+0xf9/0x230 mm/mmu_gather.c:530
 free_ldt_pgtables+0x19d/0x350 arch/x86/kernel/ldt.c:411
 arch_exit_mmap arch/x86/include/asm/mmu_context.h:232 [inline]
 exit_mmap+0x1af/0xa10 mm/mmap.c:1287
 __mmput+0x118/0x430 kernel/fork.c:1175
 exit_mm+0x168/0x220 kernel/exit.c:581
 do_exit+0x6a2/0x23c0 kernel/exit.c:964
 do_group_exit+0x21b/0x2d0 kernel/exit.c:1118
 get_signal+0x1284/0x1330 kernel/signal.c:3034
 arch_do_signal_or_restart+0xbc/0x830 arch/x86/kernel/signal.c:337
 __exit_to_user_mode_loop kernel/entry/common.c:64 [inline]
 exit_to_user_mode_loop+0x86/0x480 kernel/entry/common.c:98
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:325 [inline]
 do_syscall_64+0x32d/0xf80 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fe39119c819
Code: Unable to access opcode bytes at 0x7fe39119c7ef.
RSP: 002b:00007fe3920ce028 EFLAGS: 00000246 ORIG_RAX: 0000000000000029
RAX: 000000000000000c RBX: 00007fe391415fa0 RCX: 00007fe39119c819
RDX: 0000000000000000 RSI: 0000000000000003 RDI: 0000000000000011
RBP: 00007fe391232c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fe391416038 R14: 00007fe391415fa0 R15: 00007ffcecbb78e8
 </TASK>

Crashes (27):
| Time             | Kernel   | Commit       | Syzkaller | Manager                        | Title |
|------------------|----------|--------------|-----------|--------------------------------|-------|
| 2026/04/10 06:29 | bpf      | d8a9a4b11a13 | 38c8e246  | ci-upstream-bpf-kasan-gce      | BUG: soft lockup in aoecmd_cfg |
| 2026/03/21 22:31 | bpf      | a1e5c46eaed3 | 5b92003d  | ci-upstream-bpf-kasan-gce      | BUG: soft lockup in aoecmd_cfg |
| 2025/12/07 03:44 | bpf      | 861111b69896 | d6526ea3  | ci-upstream-bpf-kasan-gce      | BUG: soft lockup in aoecmd_cfg |
| 2025/11/22 13:26 | bpf      | 22d70d400556 | 4fb8ef37  | ci-upstream-bpf-kasan-gce      | BUG: soft lockup in aoecmd_cfg |
| 2025/09/16 22:47 | bpf      | f36caa7c14f4 | e2beed91  | ci-upstream-bpf-kasan-gce      | BUG: soft lockup in aoecmd_cfg |
| 2025/07/23 11:45 | bpf      | 7abc678e3084 | e1dd4f22  | ci-upstream-bpf-kasan-gce      | BUG: soft lockup in aoecmd_cfg |
| 2025/07/06 17:25 | bpf      | bf4807c89d8f | 4f67c4ae  | ci-upstream-bpf-kasan-gce      | BUG: soft lockup in aoecmd_cfg |
| 2025/06/22 19:30 | bpf      | d4adf1c9ee77 | d6cdfb8a  | ci-upstream-bpf-kasan-gce      | BUG: soft lockup in aoecmd_cfg |
| 2026/03/24 04:47 | bpf-next | bb6da652c585 | baf8bf12  | ci-upstream-bpf-next-kasan-gce | BUG: soft lockup in aoecmd_cfg |
| 2026/03/07 00:30 | bpf-next | 6dd780f97381 | 41d8037d  | ci-upstream-bpf-next-kasan-gce | BUG: soft lockup in aoecmd_cfg |
| 2026/02/12 10:26 | bpf-next | 4475cdac12c4 | 76a109e2  | ci-upstream-bpf-next-kasan-gce | BUG: soft lockup in aoecmd_cfg |
| 2026/01/29 03:35 | bpf-next | 08a749184322 | b78a7341  | ci-upstream-bpf-next-kasan-gce | BUG: soft lockup in aoecmd_cfg |
| 2026/01/27 21:01 | bpf-next | 8016abd6314e | 9a514c2f  | ci-upstream-bpf-next-kasan-gce | BUG: soft lockup in aoecmd_cfg |
| 2026/01/23 06:28 | bpf-next | a32ae2658471 | 82c9c083  | ci-upstream-bpf-next-kasan-gce | BUG: soft lockup in aoecmd_cfg |
| 2026/01/17 11:51 | bpf-next | efad162f5a84 | d6526ea3  | ci-upstream-bpf-next-kasan-gce | BUG: soft lockup in aoecmd_cfg |
| 2026/01/11 09:37 | bpf-next | 5714ca8cba5e | d6526ea3  | ci-upstream-bpf-next-kasan-gce | BUG: soft lockup in aoecmd_cfg |
| 2025/12/16 16:43 | bpf-next | 6f0b824a61f2 | d6526ea3  | ci-upstream-bpf-next-kasan-gce | BUG: soft lockup in aoecmd_cfg |
| 2025/12/11 02:37 | bpf-next | 759377dab35e | d6526ea3  | ci-upstream-bpf-next-kasan-gce | BUG: soft lockup in aoecmd_cfg |
| 2025/07/02 15:20 | bpf-next | 212ec9229567 | 0cd59a8f  | ci-upstream-bpf-next-kasan-gce | BUG: soft lockup in aoecmd_cfg |
| 2025/06/23 01:21 | bpf-next | 99fe8af069a9 | d6cdfb8a  | ci-upstream-bpf-next-kasan-gce | BUG: soft lockup in aoecmd_cfg |
| 2025/05/28 05:36 | bpf-next | db22b1382b96 | 874a1386  | ci-upstream-bpf-next-kasan-gce | BUG: soft lockup in aoecmd_cfg |
| 2025/05/27 14:44 | bpf-next | 079e5c56a5c4 | 874a1386  | ci-upstream-bpf-next-kasan-gce | BUG: soft lockup in aoecmd_cfg |
| 2025/04/30 11:12 | bpf-next | 38d976c32d85 | 85a5a23f  | ci-upstream-bpf-next-kasan-gce | BUG: soft lockup in aoecmd_cfg |
| 2025/04/30 10:52 | bpf-next | 38d976c32d85 | 85a5a23f  | ci-upstream-bpf-next-kasan-gce | BUG: soft lockup in aoecmd_cfg |
| 2025/04/24 13:25 | bpf-next | 60400cd2b9be | 9c80ffa0  | ci-upstream-bpf-next-kasan-gce | BUG: soft lockup in aoecmd_cfg |
| 2025/04/19 00:22 | bpf-next | 8582d9ab3efd | 2a20f901  | ci-upstream-bpf-next-kasan-gce | BUG: soft lockup in aoecmd_cfg |
| 2025/04/16 07:41 | bpf-next | 7d0b43b68d1c | 23b969b7  | ci-upstream-bpf-next-kasan-gce | BUG: soft lockup in aoecmd_cfg |