syzbot


INFO: rcu detected stall in schedule_timeout (5)

Status: auto-obsoleted due to no activity on 2024/01/26 16:31
Subsystems: kernel
First crash: 217d, last: 181d
Similar bugs (5):
Kernel    | Title                                            | Subsystems | Count | Last  | Reported | Patched | Status
----------|--------------------------------------------------|------------|-------|-------|----------|---------|-------
upstream  | INFO: rcu detected stall in schedule_timeout (3) | cgroups mm | 11    | 1571d | 1571d    | 0/26    | closed as invalid on 2020/01/09 08:13
upstream  | INFO: rcu detected stall in schedule_timeout (2) | kernel     | 2     | 1571d | 1571d    | 0/26    | closed as invalid on 2020/01/08 05:23
upstream  | INFO: rcu detected stall in schedule_timeout (4) | kernel     | 1     | 276d  | 276d     | 0/26    | closed as invalid on 2023/09/01 06:44
linux-6.1 | INFO: rcu detected stall in schedule_timeout     |            | 1     | 9d10h | 9d10h    | 0/3     | upstream: reported on 2024/04/18 03:48
upstream  | INFO: rcu detected stall in schedule_timeout     | cgroups mm | 69    | 1606d | 1607d    | 0/26    | closed as invalid on 2019/12/04 14:14
(No entry has a repro, cause bisect, or fix bisect.)

Sample crash report:
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	Tasks blocked on level-0 rcu_node (CPUs 0-1): P20737/1:b..l
rcu: 	(detected by 1, t=10502 jiffies, g=119453, q=255 ncpus=2)
task:syz-executor.3  state:R  running task     stack:26928 pid:20737 ppid:5070   flags:0x00004002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0xee1/0x5a10 kernel/sched/core.c:6695
 preempt_schedule_common+0x45/0xc0 kernel/sched/core.c:6864
 preempt_schedule_thunk+0x1a/0x30 arch/x86/entry/thunk_64.S:45
 unwind_next_frame+0x1c80/0x2390 arch/x86/kernel/unwind_orc.c:672
 arch_stack_walk+0xfa/0x170 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x96/0xd0 kernel/stacktrace.c:122
 save_stack+0x160/0x1f0 mm/page_owner.c:128
 __reset_page_owner+0x5a/0x190 mm/page_owner.c:149
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1136 [inline]
 free_unref_page_prepare+0x476/0xa40 mm/page_alloc.c:2312
 free_unref_page_list+0xe6/0xb30 mm/page_alloc.c:2451
 release_pages+0x32a/0x14e0 mm/swap.c:1042
 __folio_batch_release+0x77/0xe0 mm/swap.c:1062
 folio_batch_release include/linux/pagevec.h:83 [inline]
 shmem_undo_range+0x582/0x1040 mm/shmem.c:1022
 shmem_truncate_range mm/shmem.c:1114 [inline]
 shmem_evict_inode+0x3a2/0xb70 mm/shmem.c:1243
 evict+0x2ed/0x6b0 fs/inode.c:664
 iput_final fs/inode.c:1775 [inline]
 iput.part.0+0x55e/0x7a0 fs/inode.c:1801
 iput+0x5c/0x80 fs/inode.c:1791
 dentry_unlink_inode+0x292/0x430 fs/dcache.c:401
 __dentry_kill+0x3b8/0x640 fs/dcache.c:607
 dentry_kill fs/dcache.c:733 [inline]
 dput+0x8b7/0xf80 fs/dcache.c:913
 __fput+0x536/0xa70 fs/file_table.c:392
 task_work_run+0x14d/0x240 kernel/task_work.c:180
 exit_task_work include/linux/task_work.h:38 [inline]
 do_exit+0xa92/0x2a20 kernel/exit.c:874
 do_group_exit+0xd4/0x2a0 kernel/exit.c:1024
 get_signal+0x23ba/0x2790 kernel/signal.c:2892
 arch_do_signal_or_restart+0x90/0x7f0 arch/x86/kernel/signal.c:309
 exit_to_user_mode_loop kernel/entry/common.c:168 [inline]
 exit_to_user_mode_prepare+0x11f/0x240 kernel/entry/common.c:204
 __syscall_exit_to_user_mode_work kernel/entry/common.c:285 [inline]
 syscall_exit_to_user_mode+0x1d/0x60 kernel/entry/common.c:296
 do_syscall_64+0x44/0xb0 arch/x86/entry/common.c:86
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f919cc7e714
RSP: 002b:00007f919da41f70 EFLAGS: 00000246 ORIG_RAX: 000000000000002d
RAX: fffffffffffffe00 RBX: 00007f919da42050 RCX: 00007f919cc7e714
RDX: 0000000000001000 RSI: 00007f919da420a0 RDI: 0000000000000003
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f919da42008 R14: 00007f919da420a0 R15: 0000000000000000
 </TASK>
rcu: rcu_preempt kthread starved for 1742 jiffies! g119453 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:I stack:28304 pid:17    ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0xee1/0x5a10 kernel/sched/core.c:6695
 preempt_schedule_irq+0x52/0x90 kernel/sched/core.c:7007
 irqentry_exit+0x35/0x80 kernel/entry/common.c:432
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:645
RIP: 0010:lockdep_init_map_type+0x2bd/0x7c0 kernel/locking/lockdep.c:4914
Code: b8 ff ff ff ff 65 0f c1 05 20 b9 99 7e 83 f8 01 0f 85 07 04 00 00 9c 58 f6 c4 02 0f 85 1a 04 00 00 f7 c5 00 02 00 00 74 01 fb <48> 83 c4 10 5b 5d 41 5c 41 5d 41 5f c3 48 8d 7b 18 48 b8 00 00 00
RSP: 0018:ffffc90000167be0 EFLAGS: 00000246
RAX: 0000000000000003 RBX: ffffc90000167c70 RCX: 1ffffffff1d998fb
RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffffffff922c1f00
RBP: ffffffff922bf5e0 R08: 0000000000000000 R09: 0000000000000000
R10: ffffffff8ecc9257 R11: 0000000000000000 R12: ffffffff922c1f00
R13: 0000000000000000 R14: 1ffffffff1901e40 R15: 0000000000000000
 lockdep_init_map_waits include/linux/lockdep.h:192 [inline]
 lockdep_init_map_wait include/linux/lockdep.h:199 [inline]
 lockdep_init_map include/linux/lockdep.h:205 [inline]
 do_init_timer kernel/time/timer.c:850 [inline]
 init_timer_on_stack_key kernel/time/timer.c:806 [inline]
 schedule_timeout+0x142/0x2c0 kernel/time/timer.c:2165
 rcu_gp_fqs_loop+0x1ec/0xa50 kernel/rcu/tree.c:1613
 rcu_gp_kthread+0x249/0x380 kernel/rcu/tree.c:1812
 kthread+0x33c/0x440 kernel/kthread.c:388
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:304
 </TASK>
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 5118 Comm: kworker/0:5 Not tainted 6.6.0-rc7-syzkaller-02075-g55c900477f5b #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/09/2023
Workqueue: events nsim_dev_trap_report_work
RIP: 0010:kvm_wait+0x146/0x180 arch/x86/kernel/kvm.c:1062
Code: 5b 5d 41 5c 41 5d e9 89 28 4e 00 e8 84 28 4e 00 e8 3f de 54 00 66 90 e8 78 28 4e 00 0f 00 2d 61 f6 4f 09 e8 6c 28 4e 00 fb f4 <5b> 5d 41 5c 41 5d e9 5f 28 4e 00 e8 5a 28 4e 00 e8 85 de 54 00 e9
RSP: 0018:ffffc900000076f8 EFLAGS: 00000246
RAX: 0000000000000000 RBX: 0000000000000003 RCX: 0000000000000100
RDX: ffff8880214dd940 RSI: ffffffff8139bd64 RDI: ffffffff8ae942e0
RBP: ffff8880243568c0 R08: 0000000000000001 R09: fffffbfff233a1ef
R10: ffffffff919d0f7f R11: 0000000000000000 R12: 0000000000000003
R13: 0000000000000003 R14: 0000000000000000 R15: ffffed100486ad18
FS:  0000000000000000(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000c000065fc0 CR3: 0000000079e61000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <IRQ>
 pv_wait arch/x86/include/asm/paravirt.h:598 [inline]
 pv_wait_head_or_lock kernel/locking/qspinlock_paravirt.h:470 [inline]
 __pv_queued_spin_lock_slowpath+0x959/0xc70 kernel/locking/qspinlock.c:511
 pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:586 [inline]
 queued_spin_lock_slowpath arch/x86/include/asm/qspinlock.h:51 [inline]
 queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
 do_raw_spin_lock+0x20e/0x2b0 kernel/locking/spinlock_debug.c:115
 spin_lock include/linux/spinlock.h:351 [inline]
 __netif_tx_lock include/linux/netdevice.h:4381 [inline]
 __dev_queue_xmit+0x1a7f/0x3d10 net/core/dev.c:4340
 dev_queue_xmit include/linux/netdevice.h:3112 [inline]
 can_send+0x77c/0xb40 net/can/af_can.c:276
 can_can_gw_rcv+0x74c/0xab0 net/can/gw.c:561
 deliver net/can/af_can.c:572 [inline]
 can_rcv_filter+0x15e/0x8e0 net/can/af_can.c:599
 can_receive+0x320/0x5c0 net/can/af_can.c:663
 can_rcv+0x1dc/0x270 net/can/af_can.c:687
 __netif_receive_skb_one_core+0x115/0x180 net/core/dev.c:5527
 __netif_receive_skb+0x1f/0x1b0 net/core/dev.c:5641
 process_backlog+0x101/0x6b0 net/core/dev.c:5969
 __napi_poll.constprop.0+0xb4/0x540 net/core/dev.c:6531
 napi_poll net/core/dev.c:6600 [inline]
 net_rx_action+0x956/0xe90 net/core/dev.c:6733
 __do_softirq+0x218/0x965 kernel/softirq.c:553
 do_softirq kernel/softirq.c:454 [inline]
 do_softirq+0xaa/0xe0 kernel/softirq.c:441
 </IRQ>
 <TASK>
 __local_bh_enable_ip+0xf8/0x120 kernel/softirq.c:381
 spin_unlock_bh include/linux/spinlock.h:396 [inline]
 nsim_dev_trap_report drivers/net/netdevsim/dev.c:820 [inline]
 nsim_dev_trap_report_work+0x86a/0xc70 drivers/net/netdevsim/dev.c:850
 process_one_work+0x884/0x15c0 kernel/workqueue.c:2630
 process_scheduled_works kernel/workqueue.c:2703 [inline]
 worker_thread+0x8b9/0x1290 kernel/workqueue.c:2784
 kthread+0x33c/0x440 kernel/kthread.c:388
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:304
 </TASK>

Crashes (2):
Time             | Kernel     | Commit       | Syzkaller | Assets                            | Manager                               | Title
-----------------|------------|--------------|-----------|-----------------------------------|---------------------------------------|------
2023/10/28 16:22 | net-next   | 55c900477f5b | 3c418d72  | disk image, vmlinux, kernel image | ci-upstream-net-kasan-gce             | INFO: rcu detected stall in schedule_timeout
2023/09/23 10:12 | linux-next | 940fcc189c51 | 0b6a67ac  | disk image, vmlinux, kernel image | ci-upstream-linux-next-kasan-gce-root | INFO: rcu detected stall in schedule_timeout
(Each crash also has a kernel config, console log, report, and VM info attached; neither has a syz or C repro.)