syzbot


BUG: soft lockup in wb_workfn

Status: upstream: reported syz repro on 2024/10/24 21:18
Bug presence: origin:upstream
Reported-by: syzbot+48cac2bbba146c43df3d@syzkaller.appspotmail.com
First crash: 50d, last: 34d
Bug presence (1)
Date       | Name           | Commit       | Repro | Result
2024/11/13 | upstream (ToT) | 14b6320953a3 | C     | [report] INFO: rcu detected stall in tc_modify_qdisc
Similar bugs (7)
Kernel     | Title                                                      | Repro | Cause bisect | Fix bisect | Count | Last  | Reported | Patched | Status
linux-5.15 | INFO: rcu detected stall in wb_workfn [origin:lts-only]    | syz   | error        | -          | 4     | 65d   | 137d     | 0/3     | upstream: reported syz repro on 2024/07/29 15:58
upstream   | BUG: soft lockup in wb_workfn [kernel]                     | -     | -            | -          | 1     | 1949d | 1945d    | 0/28    | auto-closed as invalid on 2019/11/11 12:45
upstream   | INFO: rcu detected stall in wb_workfn (2) [fs]             | -     | -            | -          | 1     | 900d  | 900d     | 0/28    | auto-closed as invalid on 2022/09/25 22:53
upstream   | INFO: rcu detected stall in wb_workfn [mm]                 | -     | -            | -          | 2     | 1143d | 1150d    | 0/28    | auto-closed as invalid on 2022/01/25 21:50
linux-4.19 | INFO: rcu detected stall in wb_workfn                      | -     | -            | -          | 1     | 849d  | 849d     | 0/1     | auto-obsoleted due to no activity on 2022/12/16 09:39
upstream   | INFO: rcu detected stall in wb_workfn (3) [hfs ext4 block] | -     | -            | -          | 3     | 481d  | 563d     | 0/28    | auto-obsoleted due to no activity on 2023/11/18 21:40
linux-4.14 | INFO: rcu detected stall in wb_workfn                      | -     | -            | -          | 1     | 1882d | 1882d    | 0/1     | auto-closed as invalid on 2020/02/17 08:22

Sample crash report:
rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { 1-...D } 2631 jiffies s: 4517 root: 0x2/.
rcu: blocking rcu_node structures (internal RCU debug):
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 51 Comm: kworker/u4:3 Not tainted 6.1.116-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Workqueue: writeback wb_workfn (flush-8:0)
RIP: 0010:trace_lock_release include/trace/events/lock.h:69 [inline]
RIP: 0010:lock_release+0xcf/0xa20 kernel/locking/lockdep.c:5673
Code: 0d 0f 86 c2 05 00 00 89 db 48 89 d8 48 c1 e8 06 48 8d 3c c5 28 08 9a 8e be 08 00 00 00 e8 b9 5d 77 00 48 0f a3 1d a9 4f 2f 0d <73> 0d e8 5a bf 08 00 84 c0 0f 84 c5 05 00 00 48 c7 c0 e4 3c 9a 8e
RSP: 0018:ffffc900001e0b60 EFLAGS: 00000057
RAX: 0000000000000001 RBX: 0000000000000001 RCX: ffffffff816ab877
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff8e9a0828
RBP: ffffc900001e0ca0 R08: dffffc0000000000 R09: fffffbfff1d34106
R10: 0000000000000000 R11: dffffc0000000001 R12: 1ffff9200003c178
R13: ffffffff88d033db R14: dffffc0000000000 R15: dffffc0000000000
FS:  0000000000000000(0000) GS:ffff8880b8f00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00005583289b4950 CR3: 000000007f859000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <IRQ>
 __raw_spin_unlock include/linux/spinlock_api_smp.h:141 [inline]
 _raw_spin_unlock+0x12/0x40 kernel/locking/spinlock.c:186
 spin_unlock include/linux/spinlock.h:391 [inline]
 advance_sched+0x68b/0x970 net/sched/sch_taprio.c:749
 __run_hrtimer kernel/time/hrtimer.c:1689 [inline]
 __hrtimer_run_queues+0x5e5/0xe50 kernel/time/hrtimer.c:1753
 hrtimer_interrupt+0x392/0x980 kernel/time/hrtimer.c:1815
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1107 [inline]
 __sysvec_apic_timer_interrupt+0x158/0x5b0 arch/x86/kernel/apic/apic.c:1124
 instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1118 [inline]
 sysvec_apic_timer_interrupt+0x9b/0xc0 arch/x86/kernel/apic/apic.c:1118
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:691
RIP: 0010:__raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:152 [inline]
RIP: 0010:_raw_spin_unlock_irqrestore+0xd4/0x130 kernel/locking/spinlock.c:194
Code: 9c 8f 44 24 20 42 80 3c 23 00 74 08 4c 89 f7 e8 82 4d 30 f7 f6 44 24 21 02 75 4e 41 f7 c7 00 02 00 00 74 01 fb bf 01 00 00 00 <e8> 87 b8 ac f6 65 8b 05 08 a8 50 75 85 c0 74 3f 48 c7 04 24 0e 36
RSP: 0018:ffffc90000bc6b80 EFLAGS: 00000206
RAX: 26ac3d9a41269b00 RBX: 1ffff92000178d74 RCX: ffffffff816b028a
RDX: dffffc0000000000 RSI: ffffffff8b0c01c0 RDI: 0000000000000001
RBP: ffffc90000bc6c10 R08: dffffc0000000000 R09: fffffbfff2246261
R10: 0000000000000000 R11: dffffc0000000001 R12: dffffc0000000000
R13: 1ffff92000178d70 R14: ffffc90000bc6ba0 R15: 0000000000000246
 spin_unlock_irqrestore include/linux/spinlock.h:406 [inline]
 __folio_start_writeback+0x57f/0x10c0 mm/page-writeback.c:3022
 ext4_bio_write_page+0x352/0x2ac0 fs/ext4/page-io.c:453
 mpage_submit_page+0x18d/0x230 fs/ext4/inode.c:2141
 mpage_map_and_submit_buffers fs/ext4/inode.c:2386 [inline]
 mpage_map_and_submit_extent fs/ext4/inode.c:2525 [inline]
 ext4_writepages+0x2076/0x3de0 fs/ext4/inode.c:2854
 do_writepages+0x3a2/0x670 mm/page-writeback.c:2491
 __writeback_single_inode+0x15d/0x11e0 fs/fs-writeback.c:1612
 writeback_sb_inodes+0xc2b/0x1b20 fs/fs-writeback.c:1903
 __writeback_inodes_wb+0x114/0x400 fs/fs-writeback.c:1974
 wb_writeback+0x4b1/0xe10 fs/fs-writeback.c:2079
 wb_check_old_data_flush fs/fs-writeback.c:2179 [inline]
 wb_do_writeback fs/fs-writeback.c:2232 [inline]
 wb_workfn+0xbec/0x1020 fs/fs-writeback.c:2260
 process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
 worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
 kthread+0x28d/0x320 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
 </TASK>
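
Note on the trace above: the writeback worker does not appear to be stuck in its own code. It was interrupted mid spin_unlock_irqrestore() by the local APIC timer, and the CPU is spending its time in taprio's advance_sched() hrtimer callback (net/sched/sch_taprio.c, called from __hrtimer_run_queues). advance_sched() appears to re-arm its own timer, so a taprio schedule whose next expiry is always already in the past would keep the CPU inside hrtimer_interrupt() and starve the interrupted writeback work, matching the "soft lockup" / "rcu detected stall" titles. The snippet below is not the syzbot reproducer; it is only a minimal, hypothetical kernel-module sketch of that self-rearming-timer pattern (names such as demo_fn are made up, and loading something like this would itself lock up a CPU, so treat it purely as an illustration).

// Hedged illustration only: a self-rearming hrtimer whose callback always
// re-arms with a ~1 ns period, so each expiry is already in the past when
// __hrtimer_run_queues() re-evaluates the queue. The interrupted task on
// that CPU never resumes, producing soft-lockup/RCU-stall reports like the
// one above.
#include <linux/module.h>
#include <linux/hrtimer.h>
#include <linux/ktime.h>

static struct hrtimer demo_timer;

static enum hrtimer_restart demo_fn(struct hrtimer *t)
{
	/* Re-arm essentially immediately; HRTIMER_RESTART keeps the timer
	 * queued, so the interrupt handler never drains the queue. */
	hrtimer_forward_now(t, ns_to_ktime(1));
	return HRTIMER_RESTART;
}

static int __init demo_init(void)
{
	hrtimer_init(&demo_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
	demo_timer.function = demo_fn;
	hrtimer_start(&demo_timer, ns_to_ktime(1), HRTIMER_MODE_REL);
	return 0;
}

static void __exit demo_exit(void)
{
	hrtimer_cancel(&demo_timer);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

In the real report the equivalent re-arming presumably happens inside sch_taprio's schedule-advance timer, so any fix would belong in the taprio code (e.g. rejecting degenerate schedules), not in the writeback path that happens to be interrupted.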

Crashes (2):
Time             | Kernel      | Commit       | Syzkaller | Config  | Log         | Report | Syz repro | C repro | VM info | Assets                                | Manager                   | Title
2024/11/09 15:06 | linux-6.1.y | d7039b844a1c | 6b856513  | .config | console log | report | syz / log | -       | -       | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan       | INFO: rcu detected stall in wb_workfn
2024/10/24 21:17 | linux-6.1.y | 7ec6f9fa3d97 | 0d144d1a  | .config | console log | report | -         | -       | info    | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan-arm64 | BUG: soft lockup in wb_workfn
* Struck through repros no longer work on HEAD.