syzbot


INFO: rcu detected stall in sys_recvmmsg (3)

Status: upstream: reported on 2023/12/19 09:31
Subsystems: kasan mm batman
Reported-by: syzbot+b079dc0aa6e992859e7c@syzkaller.appspotmail.com
First crash: 475d, last: 88d
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [batman?] INFO: rcu detected stall in sys_recvmmsg (3) | 0 (1) | 2023/12/19 09:31
Similar bugs (7)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-4.19 | INFO: rcu detected stall in sys_recvmmsg | C | error | | 1 | 491d | 491d | 0/1 | upstream: reported C repro on 2022/12/23 04:58
linux-5.15 | INFO: rcu detected stall in sys_recvmmsg origin:upstream | C | | | 7 | 18d | 387d | 0/3 | upstream: reported C repro on 2023/04/06 01:32
linux-6.1 | INFO: rcu detected stall in sys_recvmmsg (2) | | | | 1 | 22d | 22d | 0/3 | upstream: reported on 2024/04/05 00:00
upstream | INFO: rcu detected stall in sys_recvmmsg mptcp | C | done | | 52 | 817d | 948d | 20/26 | fixed on 2022/03/08 16:11
linux-6.1 | INFO: rcu detected stall in sys_recvmmsg | | | | 5 | 268d | 399d | 0/3 | auto-obsoleted due to no activity on 2023/11/11 06:06
upstream | INFO: rcu detected stall in sys_recvmmsg (2) net | | | | 4 | 577d | 607d | 0/26 | auto-obsoleted due to no activity on 2023/01/03 09:53
android-5-15 | BUG: soft lockup in sys_recvmmsg | | | | 1 | 3d18h | 3d18h | 0/2 | premoderation: reported on 2024/04/23 17:26

Sample crash report:
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	0-...!: (1 GPs behind) idle=37cc/1/0x4000000000000000 softirq=50462/50463 fqs=4
rcu: 	(detected by 1, t=10502 jiffies, g=81977, q=169 ncpus=2)
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 21610 Comm: syz-executor.0 Not tainted 6.8.0-rc2-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/17/2023
RIP: 0010:mark_lock+0x55/0xc50 kernel/locking/lockdep.c:4639
Code: 8a b5 41 48 8d 44 24 30 48 c7 44 24 38 30 1e 84 8c 48 c1 e8 03 48 c7 44 24 40 70 24 68 81 49 89 c7 48 01 d0 c7 00 f1 f1 f1 f1 <c7> 40 04 00 f2 f2 f2 c7 40 08 00 f2 f2 f2 c7 40 10 00 00 00 f3 c7
RSP: 0018:ffffc900000079b0 EFLAGS: 00000086
RAX: fffff52000000f3c RBX: ffff88802a9046b2 RCX: 0000000000000002
RDX: dffffc0000000000 RSI: ffff88802a904690 RDI: ffff88802a903b80
RBP: ffffc90000007ae8 R08: 0000000000000000 R09: 0000000000000001
R10: ffffffff92157f4f R11: ffffffff8acf30a0 R12: ffff88802a904690
R13: ffffed10055208c7 R14: 0000000000000000 R15: 1ffff92000000f3c
FS:  00007f05e4a956c0(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020fab030 CR3: 000000004130f000 CR4: 0000000000350ef0
Call Trace:
 <NMI>
 </NMI>
 <IRQ>
 mark_usage kernel/locking/lockdep.c:4564 [inline]
 __lock_acquire+0x137a/0x3b30 kernel/locking/lockdep.c:5091
 lock_acquire kernel/locking/lockdep.c:5754 [inline]
 lock_acquire+0x1ae/0x520 kernel/locking/lockdep.c:5719
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0x3a/0x50 kernel/locking/spinlock.c:162
 debug_object_deactivate+0x138/0x370 lib/debugobjects.c:763
 debug_hrtimer_deactivate kernel/time/hrtimer.c:427 [inline]
 debug_deactivate kernel/time/hrtimer.c:483 [inline]
 __run_hrtimer kernel/time/hrtimer.c:1656 [inline]
 __hrtimer_run_queues+0x470/0xc20 kernel/time/hrtimer.c:1752
 hrtimer_interrupt+0x31b/0x800 kernel/time/hrtimer.c:1814
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1065 [inline]
 __sysvec_apic_timer_interrupt+0x105/0x400 arch/x86/kernel/apic/apic.c:1082
 sysvec_apic_timer_interrupt+0x90/0xb0 arch/x86/kernel/apic/apic.c:1076
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:649
RIP: 0010:save_stack+0x75/0x1f0 mm/page_owner.c:114
Code: f3 f3 f3 65 48 8b 04 25 28 00 00 00 48 89 84 24 d8 00 00 00 31 c0 e8 9a 2f 9f ff 31 c0 b9 10 00 00 00 4c 8d 64 24 20 4c 89 e7 <f3> 48 ab 65 48 8b 2c 25 80 c2 03 00 48 8d bd 61 05 00 00 48 89 f8
RSP: 0018:ffffc90016246f38 EFLAGS: 00000246
RAX: 0000000000000000 RBX: 1ffff92002c48de7 RCX: 000000000000000f
RDX: 0000000000040000 RSI: ffffffff81e8f706 RDI: ffffc90016246f60
RBP: 0000000000000000 R08: 0000160000000000 R09: 0000000000000000
R10: ffffed1006e02c00 R11: dffffc0000000000 R12: ffffc90016246f58
R13: 0000000000140dca R14: dffffc0000000000 R15: 0000000000000000
 __set_page_owner+0x1f/0x60 mm/page_owner.c:195
 set_page_owner include/linux/page_owner.h:31 [inline]
 post_alloc_hook+0x2d0/0x350 mm/page_alloc.c:1533
 prep_new_page mm/page_alloc.c:1540 [inline]
 get_page_from_freelist+0xa28/0x3780 mm/page_alloc.c:3311
 __alloc_pages+0x22f/0x2440 mm/page_alloc.c:4567
 alloc_pages_mpol+0x258/0x5f0 mm/mempolicy.c:2133
 vma_alloc_folio+0xad/0x220 mm/mempolicy.c:2172
 folio_prealloc mm/memory.c:1003 [inline]
 wp_page_copy mm/memory.c:3138 [inline]
 do_wp_page+0x1766/0x37b0 mm/memory.c:3525
 handle_pte_fault mm/memory.c:5160 [inline]
 __handle_mm_fault+0x1f87/0x4900 mm/memory.c:5285
 handle_mm_fault+0x47a/0xa10 mm/memory.c:5450
 do_user_addr_fault+0x3f8/0x1030 arch/x86/mm/fault.c:1415
 handle_page_fault arch/x86/mm/fault.c:1507 [inline]
 exc_page_fault+0x5d/0xc0 arch/x86/mm/fault.c:1563
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:570
RIP: 0010:__put_user_nocheck_4+0x7/0x10 arch/x86/lib/putuser.S:97
Code: 01 ca c3 f3 0f 1e fa 48 89 cb 48 c1 fb 3f 48 09 d9 0f 01 cb 89 01 31 c9 0f 01 ca c3 0f 1f 80 00 00 00 00 f3 0f 1e fa 0f 01 cb <89> 01 31 c9 0f 01 ca c3 90 f3 0f 1e fa 48 89 cb 48 c1 fb 3f 48 09
RSP: 0018:ffffc900162479d8 EFLAGS: 00050246
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000020fab030
RDX: 0000000000040000 RSI: ffffffff8870a5ca RDI: 0000000000000005
RBP: ffffc90016247d98 R08: 0000000000000005 R09: 0000000000000000
R10: 0000000000000002 R11: 0000000000000000 R12: 0000000000000000
R13: 0000000020fab000 R14: ffffc90016247ddc R15: 0000000000000002
 ____sys_recvmsg+0x2f5/0x5c0 net/socket.c:2816
 ___sys_recvmsg+0x115/0x1a0 net/socket.c:2845
 do_recvmmsg+0x2af/0x740 net/socket.c:2939
 __sys_recvmmsg net/socket.c:3018 [inline]
 __do_sys_recvmmsg net/socket.c:3041 [inline]
 __se_sys_recvmmsg net/socket.c:3034 [inline]
 __x64_sys_recvmmsg+0x235/0x290 net/socket.c:3034
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xd3/0x250 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x63/0x6b
RIP: 0033:0x7f05e3c7cda9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f05e4a950c8 EFLAGS: 00000246 ORIG_RAX: 000000000000012b
RAX: ffffffffffffffda RBX: 00007f05e3dac050 RCX: 00007f05e3c7cda9
RDX: 00000000040002db RSI: 0000000020000740 RDI: 0000000000000004
RBP: 00007f05e3cc947a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000002 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000006e R14: 00007f05e3dac050 R15: 00007ffd093c1a28
 </TASK>
rcu: rcu_preempt kthread starved for 10494 jiffies! g81977 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:R  running task     stack:27984 pid:17    tgid:17    ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5400 [inline]
 __schedule+0xf12/0x5c00 kernel/sched/core.c:6727
 __schedule_loop kernel/sched/core.c:6802 [inline]
 schedule+0xe9/0x270 kernel/sched/core.c:6817
 schedule_timeout+0x137/0x290 kernel/time/timer.c:2183
 rcu_gp_fqs_loop+0x1ec/0xb10 kernel/rcu/tree.c:1663
 rcu_gp_kthread+0x24b/0x380 kernel/rcu/tree.c:1862
 kthread+0x2c6/0x3a0 kernel/kthread.c:388
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242
 </TASK>
rcu: Stack dump where RCU GP kthread last ran:
CPU: 1 PID: 2926 Comm: kworker/u4:11 Not tainted 6.8.0-rc2-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/17/2023
Workqueue: events_unbound toggle_allocation_gate
RIP: 0010:csd_lock_wait kernel/smp.c:311 [inline]
RIP: 0010:smp_call_function_many_cond+0x4e9/0x1550 kernel/smp.c:855
Code: 4d 48 b8 00 00 00 00 00 fc ff df 4d 89 f4 4c 89 f5 49 c1 ec 03 83 e5 07 49 01 c4 83 c5 03 e8 7e c5 0b 00 f3 90 41 0f b6 04 24 <40> 38 c5 7c 08 84 c0 0f 85 24 0e 00 00 8b 43 08 31 ff 83 e0 01 41
RSP: 0018:ffffc9000a277930 EFLAGS: 00000293
RAX: 0000000000000000 RBX: ffff8880b9844940 RCX: ffffffff817c6148
RDX: ffff88802a3bd940 RSI: ffffffff817c6122 RDI: 0000000000000005
RBP: 0000000000000003 R08: 0000000000000005 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000006 R12: ffffed1017308929
R13: 0000000000000001 R14: ffff8880b9844948 R15: ffff8880b993de80
FS:  0000000000000000(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f410f578198 CR3: 000000000cf78000 CR4: 0000000000350ef0
Call Trace:
 <IRQ>
 </IRQ>
 <TASK>
 on_each_cpu_cond_mask+0x40/0x90 kernel/smp.c:1023
 on_each_cpu include/linux/smp.h:71 [inline]
 text_poke_sync arch/x86/kernel/alternative.c:2087 [inline]
 text_poke_bp_batch+0x22b/0x750 arch/x86/kernel/alternative.c:2297
 text_poke_flush arch/x86/kernel/alternative.c:2488 [inline]
 text_poke_flush arch/x86/kernel/alternative.c:2485 [inline]
 text_poke_finish+0x30/0x40 arch/x86/kernel/alternative.c:2495
 arch_jump_label_transform_apply+0x1c/0x30 arch/x86/kernel/jump_label.c:146
 jump_label_update+0x1d7/0x400 kernel/jump_label.c:829
 static_key_enable_cpuslocked+0x1b7/0x270 kernel/jump_label.c:205
 static_key_enable+0x1a/0x20 kernel/jump_label.c:218
 toggle_allocation_gate mm/kfence/core.c:826 [inline]
 toggle_allocation_gate+0xf4/0x250 mm/kfence/core.c:818
 process_one_work+0x886/0x15d0 kernel/workqueue.c:2633
 process_scheduled_works kernel/workqueue.c:2706 [inline]
 worker_thread+0x8b9/0x1290 kernel/workqueue.c:2787
 kthread+0x2c6/0x3a0 kernel/kthread.c:388
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242
 </TASK>
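
Note: syzbot has no reproducer for this report, but the userspace register dump in the sample crash above identifies the stalled syscall: ORIG_RAX 0x12b is __NR_recvmmsg on x86_64, called with RDI=0x4 (fd), RSI=0x20000740 (msgvec), RDX=0x040002db (vlen, roughly 67 million), R10=0x2 (MSG_PEEK) and R8=0 (no timeout). The hypothetical C sketch below shows a call of the same shape, scaled down so it terminates quickly; because MSG_PEEK never consumes the queued datagram, a huge vlen keeps do_recvmmsg() iterating over the same datagram inside a single syscall, which is one plausible way for a task to hog a CPU long enough to trigger a stall like this on a 2-CPU VM.

/* Hypothetical illustration only: not the syzkaller reproducer (none exists
 * for this report).  A recvmmsg() call of the same shape as the register
 * dump above, with vlen scaled down from ~67 million to 8 so it returns.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int main(void)
{
    int fds[2];
    struct mmsghdr msgs[8];
    struct iovec iovs[8];
    char bufs[8][64];

    /* Any datagram socket with one queued message will do for the sketch. */
    if (socketpair(AF_UNIX, SOCK_DGRAM, 0, fds))
        return 1;
    if (send(fds[1], "ping", 4, 0) != 4)
        return 1;

    memset(msgs, 0, sizeof(msgs));
    for (int i = 0; i < 8; i++) {
        iovs[i].iov_base = bufs[i];
        iovs[i].iov_len = sizeof(bufs[i]);
        msgs[i].msg_hdr.msg_iov = &iovs[i];
        msgs[i].msg_hdr.msg_iovlen = 1;
    }

    /* MSG_PEEK (== 2, matching R10 above) leaves the datagram queued, so
     * each of the vlen iterations inside do_recvmmsg() re-reads it; with a
     * huge vlen the task stays inside the kernel for a very long time. */
    int n = recvmmsg(fds[0], msgs, 8, MSG_PEEK, NULL);
    printf("recvmmsg returned %d\n", n);
    return 0;
}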

Crashes (32):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets (help?) Manager Title
2024/01/29 14:59 upstream 41bccc98fb79 991a98f4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: rcu detected stall in sys_recvmmsg
2024/01/05 20:16 upstream 6d0dc8559c84 d0304e9c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: rcu detected stall in sys_recvmmsg
2023/12/19 05:01 upstream 2cf4f94d8e86 3ad490ea .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: rcu detected stall in sys_recvmmsg
2023/10/07 11:05 upstream 82714078aee4 5e837c76 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: rcu detected stall in sys_recvmmsg
2023/10/06 00:15 upstream f291209eca5e db17ad9f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: rcu detected stall in sys_recvmmsg
2023/10/05 01:02 upstream ba7d997a2a29 b7d7ff54 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: rcu detected stall in sys_recvmmsg
2023/09/25 00:51 upstream 8a511e7efc5a 0b6a67ac .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: rcu detected stall in sys_recvmmsg
2023/08/01 17:35 upstream 5d0c230f1de8 df07ffe8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: rcu detected stall in sys_recvmmsg
2023/07/27 16:22 upstream 0a8db05b571a 92476829 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: rcu detected stall in sys_recvmmsg
2023/07/25 01:41 upstream 0b5547c51827 9a0ddda3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: rcu detected stall in sys_recvmmsg
2023/07/20 13:31 upstream bfa3037d8280 7b630fdb .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: rcu detected stall in sys_recvmmsg
2023/07/16 10:59 upstream 831fe284d827 35d9ecc5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: rcu detected stall in sys_recvmmsg
2023/07/08 16:10 upstream 8689f4f2ea56 668cb1fa .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-badwrites-root INFO: rcu detected stall in sys_recvmmsg
2023/07/05 22:51 upstream 6cd06ab12d1a ba5dba36 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: rcu detected stall in sys_recvmmsg
2023/06/29 06:27 upstream e8f75c0270d9 ca69c785 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: rcu detected stall in sys_recvmmsg
2023/06/04 09:42 upstream e5282a7d8f6b a4ae4f42 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: rcu detected stall in sys_recvmmsg
2023/06/03 04:51 upstream 4ecd704a4c51 a4ae4f42 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: rcu detected stall in sys_recvmmsg
2023/05/14 23:56 upstream 838a854820ee 2b9ba477 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: rcu detected stall in sys_recvmmsg
2023/02/03 08:49 upstream 66a87fff1a87 16d19e30 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: rcu detected stall in sys_recvmmsg
2023/02/02 07:07 upstream 9f266ccaa2f5 9dfcf09c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: rcu detected stall in sys_recvmmsg
2023/01/25 19:45 upstream 948ef7bb70c4 9dfcf09c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: rcu detected stall in sys_recvmmsg
2023/01/13 18:14 upstream d9fc1511728c 529798b0 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: rcu detected stall in sys_recvmmsg
2023/01/07 22:23 upstream 9b43a525db12 1dac8c7a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: rcu detected stall in sys_recvmmsg
2023/12/13 22:14 linux-next 48e8992e33ab 3222d10c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: rcu detected stall in sys_recvmmsg
2023/12/12 08:09 linux-next abb240f7a2bd 28b24332 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: rcu detected stall in sys_recvmmsg
2023/08/29 09:02 linux-next ae782d4e2bf5 7ba13a15 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: rcu detected stall in sys_recvmmsg
2023/07/20 12:07 linux-next c58c49dd8932 7b630fdb .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: rcu detected stall in sys_recvmmsg
2023/06/12 00:06 linux-next 715abedee4cd 7086cdb9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: rcu detected stall in sys_recvmmsg
2023/06/03 11:07 linux-next 715abedee4cd a4ae4f42 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: rcu detected stall in sys_recvmmsg
2023/05/20 15:26 linux-next 715abedee4cd 4bce1a3e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: rcu detected stall in sys_recvmmsg
2023/05/09 06:03 linux-next 52025ebbb518 f4168103 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: rcu detected stall in sys_recvmmsg
2023/04/14 19:25 linux-next d3f2cd248191 3cfcaa1b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: rcu detected stall in sys_recvmmsg