syzbot


INFO: rcu detected stall in sys_openat (2)

Status: upstream: reported C repro on 2024/03/13 11:37
Bug presence: origin:upstream
Reported-by: syzbot+9854cfdd44796d239bec@syzkaller.appspotmail.com
First crash: 103d, last: 12d
Bug presence (1)
Date       | Name           | Commit       | Repro | Result
2024/03/13 | upstream (ToT) | 259f7d5e2baf | C     | [report] INFO: rcu detected stall in worker_thread
Similar bugs (6)
Kernel       | Title                                                      | Repro | Cause bisect | Fix bisect | Count | Last  | Reported | Patched | Status
linux-6.1    | INFO: rcu detected stall in sys_openat                     |       |              |            | 3     | 291d  | 324d     | 0/3     | auto-obsoleted due to no activity on 2023/12/16 21:24
upstream     | INFO: rcu detected stall in sys_openat exfat               |       |              |            | 5     | 1989d | 2067d    | 0/27    | closed as dup on 2018/10/27 13:02
linux-5.15   | INFO: rcu detected stall in sys_openat origin:upstream     | C     |              |            | 11    | 18d   | 371d     | 0/3     | upstream: reported C repro on 2023/06/19 20:52
upstream     | INFO: rcu detected stall in sys_openat (3) mm kernfs block | C     | error        |            | 58    | 1d05h | 295d     | 0/27    | upstream: reported C repro on 2023/09/03 11:10
upstream     | INFO: rcu detected stall in sys_openat (2) kernfs          |       |              |            | 8     | 871d  | 1083d    | 0/27    | closed as invalid on 2022/02/08 09:50
android-5-15 | BUG: soft lockup in sys_openat                             |       |              |            | 21    | 23d   | 78d      | 0/2     | premoderation: reported on 2024/04/07 09:29
Fix bisection attempts (2)
Created          | Duration | User       | Patch | Repo        | Result
2024/05/20 02:32 | 1h25m    | bisect fix |       | linux-6.1.y | job log (0), log
2024/04/16 07:43 | 2h24m    | bisect fix |       | linux-6.1.y | job log (0), log

Sample crash report:
rcu: INFO: rcu_preempt detected expedited stalls on CPUs/tasks: { 1-...D } 2688 jiffies s: 609 root: 0x2/.
rcu: blocking rcu_node structures (internal RCU debug):
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 3569 Comm: syz-executor176 Not tainted 6.1.81-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/29/2024
RIP: 0010:native_save_fl arch/x86/include/asm/irqflags.h:22 [inline]
RIP: 0010:arch_local_save_flags arch/x86/include/asm/irqflags.h:70 [inline]
RIP: 0010:arch_local_irq_save arch/x86/include/asm/irqflags.h:106 [inline]
RIP: 0010:__raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:108 [inline]
RIP: 0010:_raw_spin_lock_irqsave+0x65/0x120 kernel/locking/spinlock.c:162
Code: b5 41 48 c7 44 24 08 81 d0 8d 8c 48 c7 44 24 10 00 b7 93 8a 49 89 e5 49 c1 ed 03 48 b8 f1 f1 f1 f1 00 f3 f3 f3 4b 89 44 3d 00 <4c> 89 e3 48 c1 eb 03 42 80 3c 3b 00 74 08 4c 89 e7 e8 b5 3d 4e f7
RSP: 0018:ffffc900001e0c20 EFLAGS: 00000806
RAX: f3f3f300f1f1f1f1 RBX: 0000000000000000 RCX: 000000000000d618
RDX: 0000000000010004 RSI: ffffffff8aedd700 RDI: ffffffff91f11ca0
RBP: ffffc900001e0cb8 R08: ffffffff8179adf0 R09: 0000000000000003
R10: ffffffffffffffff R11: dffffc0000000001 R12: ffffc900001e0c40
R13: 1ffff9200003c184 R14: ffffffff91f11ca0 R15: dffffc0000000000
FS:  0000555555d38380(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fcb3fc383b0 CR3: 0000000078f6f000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <IRQ>
 debug_object_activate+0x68/0x4e0 lib/debugobjects.c:697
 debug_hrtimer_activate kernel/time/hrtimer.c:420 [inline]
 debug_activate kernel/time/hrtimer.c:475 [inline]
 enqueue_hrtimer+0x30/0x410 kernel/time/hrtimer.c:1084
 __run_hrtimer kernel/time/hrtimer.c:1703 [inline]
 __hrtimer_run_queues+0x728/0xe50 kernel/time/hrtimer.c:1750
 hrtimer_interrupt+0x392/0x980 kernel/time/hrtimer.c:1812
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1095 [inline]
 __sysvec_apic_timer_interrupt+0x156/0x580 arch/x86/kernel/apic/apic.c:1112
 sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1106
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:653
RIP: 0010:__raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:152 [inline]
RIP: 0010:_raw_spin_unlock_irqrestore+0xd4/0x130 kernel/locking/spinlock.c:194
Code: 9c 8f 44 24 20 42 80 3c 23 00 74 08 4c 89 f7 e8 42 3a 4e f7 f6 44 24 21 02 75 4e 41 f7 c7 00 02 00 00 74 01 fb bf 01 00 00 00 <e8> 77 84 ca f6 65 8b 05 b8 b4 6e 75 85 c0 74 3f 48 c7 04 24 0e 36
RSP: 0018:ffffc90003bfefe0 EFLAGS: 00000206
RAX: 699902d1cbcd6900 RBX: 1ffff9200077fe00 RCX: ffffffff816ababa
RDX: dffffc0000000000 RSI: ffffffff8aebed40 RDI: 0000000000000001
RBP: ffffc90003bff070 R08: dffffc0000000000 R09: fffffbfff2092a45
R10: 0000000000000000 R11: dffffc0000000001 R12: dffffc0000000000
R13: 1ffff9200077fdfc R14: ffffc90003bff000 R15: 0000000000000246
 spin_unlock_irqrestore include/linux/spinlock.h:406 [inline]
 rmqueue_bulk mm/page_alloc.c:3146 [inline]
 __rmqueue_pcplist+0x2023/0x2310 mm/page_alloc.c:3749
 rmqueue_pcplist mm/page_alloc.c:3791 [inline]
 rmqueue mm/page_alloc.c:3834 [inline]
 get_page_from_freelist+0x86c/0x3320 mm/page_alloc.c:4276
 __alloc_pages+0x28d/0x770 mm/page_alloc.c:5545
 alloc_slab_page+0x6a/0x150 mm/slub.c:1794
 allocate_slab mm/slub.c:1939 [inline]
 new_slab+0x84/0x2d0 mm/slub.c:1992
 ___slab_alloc+0xc20/0x1270 mm/slub.c:3180
 __slab_alloc mm/slub.c:3279 [inline]
 slab_alloc_node mm/slub.c:3364 [inline]
 slab_alloc mm/slub.c:3406 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3413 [inline]
 kmem_cache_alloc_lru+0x1a5/0x2d0 mm/slub.c:3429
 alloc_inode_sb include/linux/fs.h:3193 [inline]
 proc_alloc_inode+0x26/0xb0 fs/proc/inode.c:67
 alloc_inode fs/inode.c:261 [inline]
 new_inode_pseudo+0x61/0x1d0 fs/inode.c:1020
 new_inode+0x25/0x1d0 fs/inode.c:1048
 proc_pid_make_inode+0x21/0x1c0 fs/proc/base.c:1897
 proc_pident_instantiate+0x72/0x2a0 fs/proc/base.c:2643
 proc_pident_lookup+0x1ca/0x260 fs/proc/base.c:2679
 lookup_open fs/namei.c:3462 [inline]
 open_last_lookups fs/namei.c:3552 [inline]
 path_openat+0x10fb/0x2e60 fs/namei.c:3782
 do_filp_open+0x230/0x480 fs/namei.c:3812
 do_sys_openat2+0x13b/0x500 fs/open.c:1318
 do_sys_open fs/open.c:1334 [inline]
 __do_sys_openat fs/open.c:1350 [inline]
 __se_sys_openat fs/open.c:1345 [inline]
 __x64_sys_openat+0x243/0x290 fs/open.c:1345
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7fcb3fbe9f21
Code: 75 57 89 f0 25 00 00 41 00 3d 00 00 41 00 74 49 80 3d 8a e1 07 00 00 74 6d 89 da 48 89 ee bf 9c ff ff ff b8 01 01 00 00 0f 05 <48> 3d 00 f0 ff ff 0f 87 93 00 00 00 48 8b 54 24 28 64 48 2b 14 25
RSP: 002b:00007ffdb529d060 EFLAGS: 00000202 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 0000000000080001 RCX: 00007fcb3fbe9f21
RDX: 0000000000080001 RSI: 00007fcb3fc383b5 RDI: 00000000ffffff9c
RBP: 00007fcb3fc383b5 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 00007ffdb529d100
R13: 000000000001f59f R14: 00007ffdb529d5ec R15: 0000000000000003
 </TASK>
INFO: NMI handler (nmi_cpu_backtrace_handler) took too long to run: 1.757 msecs

Crashes (4):
Time             | Kernel      | Commit       | Syzkaller | Config  | Log         | Report | Syz repro | C repro | VM info | Assets                                   | Manager                  | Title
2024/03/13 11:37 | linux-6.1.y | 61adba85cc40 | db5b7ff0  | .config | console log | report | syz       | C       |         | [disk image] [vmlinux] [kernel image]    | ci2-linux-6-1-kasan      | INFO: rcu detected stall in sys_openat
2024/06/12 09:14 | linux-6.1.y | 88690811da69 | 4d75f4f7  | .config | console log | report |           |         | info    | [disk image] [vmlinux] [kernel image]    | ci2-linux-6-1-kasan      | INFO: rcu detected stall in sys_openat
2024/05/27 15:44 | linux-6.1.y | 88690811da69 | 761766e6  | .config | console log | report |           |         | info    | [disk image] [vmlinux] [kernel image]    | ci2-linux-6-1-kasan      | INFO: rcu detected stall in sys_openat
2024/06/08 02:54 | linux-6.1.y | 88690811da69 | 82c05ab8  | .config | console log | report |           |         | info    | [disk image] [vmlinux] [kernel image]    | ci2-linux-6-1-kasan-arm64| INFO: rcu detected stall in sys_openat
* Struck through repros no longer work on HEAD.