syzbot
INFO: rcu detected stall in inet_sendmsg (4)

Status: auto-obsoleted due to no activity on 2023/01/16 15:35
Subsystems: net
First crash: 553d, last: 553d
Similar bugs (9)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: rcu detected stall in inet_sendmsg sctp | | | | 35 | 1633d | 1937d | 0/26 | auto-closed as invalid on 2020/01/26 18:56
linux-4.14 | INFO: rcu detected stall in inet_sendmsg (2) | | | | 1 | 1435d | 1435d | 0/1 | auto-closed as invalid on 2020/09/11 11:53
linux-4.14 | INFO: rcu detected stall in inet_sendmsg | | | | 3 | 1684d | 1790d | 0/1 | auto-closed as invalid on 2020/01/06 14:34
linux-4.19 | INFO: rcu detected stall in inet_sendmsg | | | | 3 | 1683d | 1685d | 0/1 | auto-closed as invalid on 2020/01/06 19:02
linux-4.19 | INFO: rcu detected stall in inet_sendmsg (2) | | | | 4 | 1285d | 1452d | 0/1 | auto-closed as invalid on 2021/02/08 07:43
upstream | INFO: rcu detected stall in inet_sendmsg (2) perf | | | | 13 | 1372d | 1456d | 15/26 | fixed on 2020/07/17 17:58
upstream | INFO: rcu detected stall in inet_sendmsg (3) perf | syz | error | error | 68 | 723d | 1361d | 0/26 | auto-obsoleted due to no activity on 2022/10/08 22:05
linux-4.14 | BUG: soft lockup in inet_sendmsg | | | | 1 | 743d | 743d | 0/1 | auto-closed as invalid on 2022/08/03 16:05
linux-4.19 | BUG: soft lockup in inet_sendmsg | | | | 74 | 437d | 774d | 0/1 | upstream: reported on 2022/03/05 20:29

Sample crash report:
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	0-...!: (1 GPs behind) idle=9844/1/0x4000000000000000 softirq=42439/42441 fqs=0
rcu: 	Tasks blocked on level-0 rcu_node (CPUs 0-1): P19689/1:b..l
	(detected by 1, t=10506 jiffies, g=52741, q=369 ncpus=2)
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 3695 Comm: kworker/0:5 Not tainted 6.0.0-syzkaller-09589-g55be6084c8e0 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/22/2022
Workqueue: events p9_poll_workfn
RIP: 0010:halt arch/x86/include/asm/irqflags.h:99 [inline]
RIP: 0010:kvm_wait+0xc1/0x100 arch/x86/kernel/kvm.c:1060
Code: f4 48 83 c4 10 c3 89 74 24 0c 48 89 3c 24 e8 f6 07 4c 00 8b 74 24 0c 48 8b 3c 24 e9 6a ff ff ff 66 90 0f 00 2d 70 de b4 08 f4 <eb> bf 89 74 24 0c 48 89 3c 24 e8 f0 38 92 00 8b 74 24 0c 48 8b 3c
RSP: 0018:ffffc90003b579c0 EFLAGS: 00000046
RAX: 0000000000000003 RBX: 0000000000000000 RCX: dffffc0000000000
RDX: 0000000000000000 RSI: 0000000000000003 RDI: ffff8880776f6400
RBP: ffff8880776f6400 R08: 0000000000000001 R09: ffff8880776f6400
R10: ffffed100eedec80 R11: 0000000000000001 R12: 0000000000000000
R13: ffffed100eedec80 R14: 0000000000000001 R15: ffff8880b9a3ae80
FS:  0000000000000000(0000) GS:ffff8880b9a00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b2e42d000 CR3: 00000000474b0000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 pv_wait arch/x86/include/asm/paravirt.h:603 [inline]
 pv_wait_head_or_lock kernel/locking/qspinlock_paravirt.h:470 [inline]
 __pv_queued_spin_lock_slowpath+0x8c7/0xb50 kernel/locking/qspinlock.c:511
 pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:591 [inline]
 queued_spin_lock_slowpath arch/x86/include/asm/qspinlock.h:51 [inline]
 queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
 do_raw_spin_lock+0x200/0x2a0 kernel/locking/spinlock_debug.c:115
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
 _raw_spin_lock_irqsave+0x41/0x50 kernel/locking/spinlock.c:162
 p9_tag_remove net/9p/client.c:367 [inline]
 p9_req_put net/9p/client.c:375 [inline]
 p9_req_put+0xc6/0x250 net/9p/client.c:372
 p9_conn_cancel+0x640/0x970 net/9p/trans_fd.c:213
 p9_poll_mux net/9p/trans_fd.c:627 [inline]
 p9_poll_workfn+0x25d/0x4e0 net/9p/trans_fd.c:1147
 process_one_work+0x991/0x1610 kernel/workqueue.c:2289
 worker_thread+0x665/0x1080 kernel/workqueue.c:2436
 kthread+0x2e4/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
 </TASK>
task:syz-executor.2  state:R  running task     stack:25880 pid:19689 ppid:3636   flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5178 [inline]
 __schedule+0xadf/0x5270 kernel/sched/core.c:6490
 preempt_schedule_irq+0x4e/0x90 kernel/sched/core.c:6802
 irqentry_exit+0x31/0x80 kernel/entry/common.c:428
 asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:649
RIP: 0010:__sanitizer_cov_trace_pc+0x50/0x60 kernel/kcov.c:206
Code: 35 8b 82 bc 15 00 00 85 c0 74 2b 8b 82 98 15 00 00 83 f8 02 75 20 48 8b 8a a0 15 00 00 8b 92 9c 15 00 00 48 8b 01 48 83 c0 01 <48> 39 c2 76 07 48 89 01 48 89 34 c1 c3 0f 1f 00 41 55 41 54 49 89
RSP: 0018:ffffc900051eef20 EFLAGS: 00000216
RAX: 0000000000040000 RBX: 0000000000000002 RCX: ffffc90005871000
RDX: 0000000000040000 RSI: ffffffff880be5d1 RDI: 0000000000000004
RBP: 0000000000000100 R08: 0000000000000004 R09: 0000000001010164
R10: 00000000010000e0 R11: 000000000008c07d R12: ffff88801718dc00
R13: 0000000000000032 R14: dffffc0000000000 R15: 0000000001000000
 __xfrm_state_lookup.isra.0+0x421/0x870 net/xfrm/xfrm_state.c:965
 xfrm_state_find+0x1cac/0x4f10 net/xfrm/xfrm_state.c:1129
 xfrm_tmpl_resolve_one net/xfrm/xfrm_policy.c:2392 [inline]
 xfrm_tmpl_resolve+0x2f3/0xd40 net/xfrm/xfrm_policy.c:2437
 xfrm_resolve_and_create_bundle+0x123/0x2580 net/xfrm/xfrm_policy.c:2730
 xfrm_lookup_with_ifid+0x229/0x20f0 net/xfrm/xfrm_policy.c:3064
 xfrm_lookup net/xfrm/xfrm_policy.c:3193 [inline]
 xfrm_lookup_route+0x36/0x1e0 net/xfrm/xfrm_policy.c:3204
 ip_route_output_flow+0x114/0x150 net/ipv4/route.c:2880
 udp_sendmsg+0x1963/0x2740 net/ipv4/udp.c:1224
 inet_sendmsg+0x99/0xe0 net/ipv4/af_inet.c:819
 sock_sendmsg_nosec net/socket.c:714 [inline]
 sock_sendmsg+0xcf/0x120 net/socket.c:734
 ____sys_sendmsg+0x334/0x8c0 net/socket.c:2482
 ___sys_sendmsg+0x110/0x1b0 net/socket.c:2536
 __sys_sendmmsg+0x18b/0x460 net/socket.c:2622
 __do_sys_sendmmsg net/socket.c:2651 [inline]
 __se_sys_sendmmsg net/socket.c:2648 [inline]
 __x64_sys_sendmmsg+0x99/0x100 net/socket.c:2648
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f44c8e8b5a9
RSP: 002b:00007f44ca0c8168 EFLAGS: 00000246 ORIG_RAX: 0000000000000133
RAX: ffffffffffffffda RBX: 00007f44c8fac050 RCX: 00007f44c8e8b5a9
RDX: 0400000000000354 RSI: 0000000020000180 RDI: 0000000000000003
RBP: 00007f44c8ee6580 R08: 0000000000000000 R09: 0000000000000000
R10: 000002873dedf99c R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffc94da289f R14: 00007f44ca0c8300 R15: 0000000000022000
 </TASK>
rcu: rcu_preempt kthread starved for 10506 jiffies! g52741 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:R  running task     stack:29520 pid:16    ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5178 [inline]
 __schedule+0xadf/0x5270 kernel/sched/core.c:6490
 schedule+0xda/0x1b0 kernel/sched/core.c:6566
 schedule_timeout+0x14a/0x2a0 kernel/time/timer.c:1935
 rcu_gp_fqs_loop+0x190/0x910 kernel/rcu/tree.c:1658
 rcu_gp_kthread+0x236/0x360 kernel/rcu/tree.c:1857
 kthread+0x2e4/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
 </TASK>
rcu: Stack dump where RCU GP kthread last ran:
CPU: 1 PID: 3807 Comm: kworker/u4:8 Not tainted 6.0.0-syzkaller-09589-g55be6084c8e0 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/22/2022
Workqueue: events_unbound toggle_allocation_gate
RIP: 0010:csd_lock_wait kernel/smp.c:413 [inline]
RIP: 0010:smp_call_function_many_cond+0x5fe/0x1420 kernel/smp.c:987
Code: 89 ee e8 f5 aa 0a 00 85 ed 74 48 48 8b 44 24 08 49 89 c4 83 e0 07 49 c1 ec 03 48 89 c5 4d 01 f4 83 c5 03 e8 14 ae 0a 00 f3 90 <41> 0f b6 04 24 40 38 c5 7c 08 84 c0 0f 85 7a 0b 00 00 8b 43 08 31
RSP: 0018:ffffc90004acf968 EFLAGS: 00000293
RAX: 0000000000000000 RBX: ffff8880b9a44fc0 RCX: 0000000000000000
RDX: ffff88807f3dc0c0 RSI: ffffffff816fc89c RDI: 0000000000000005
RBP: 0000000000000003 R08: 0000000000000005 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000001 R12: ffffed10173489f9
R13: 0000000000000000 R14: dffffc0000000000 R15: 0000000000000001
FS:  0000000000000000(0000) GS:ffff8880b9b00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f9a2c6e62e0 CR3: 000000000bc8e000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 on_each_cpu_cond_mask+0x56/0xa0 kernel/smp.c:1153
 on_each_cpu include/linux/smp.h:71 [inline]
 text_poke_sync arch/x86/kernel/alternative.c:1311 [inline]
 text_poke_bp_batch+0x22e/0x6b0 arch/x86/kernel/alternative.c:1502
 text_poke_flush arch/x86/kernel/alternative.c:1670 [inline]
 text_poke_flush arch/x86/kernel/alternative.c:1667 [inline]
 text_poke_finish+0x16/0x30 arch/x86/kernel/alternative.c:1677
 arch_jump_label_transform_apply+0x13/0x20 arch/x86/kernel/jump_label.c:146
 jump_label_update+0x32f/0x410 kernel/jump_label.c:801
 static_key_enable_cpuslocked+0x1b1/0x260 kernel/jump_label.c:177
 static_key_enable+0x16/0x20 kernel/jump_label.c:190
 toggle_allocation_gate mm/kfence/core.c:811 [inline]
 toggle_allocation_gate+0x100/0x390 mm/kfence/core.c:803
 process_one_work+0x991/0x1610 kernel/workqueue.c:2289
 worker_thread+0x665/0x1080 kernel/workqueue.c:2436
 kthread+0x2e4/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
 </TASK>

Crashes (1):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2022/10/13 06:12 | upstream | 55be6084c8e0 | 3f6b40a1 | .config | console log | report | | | info | | ci-upstream-kasan-gce-selinux-root | INFO: rcu detected stall in inet_sendmsg
* Struck through repros no longer work on HEAD.