syzbot


INFO: rcu detected stall in sys_sendmmsg (6)

Status: auto-obsoleted due to no activity on 2024/03/11 17:33
Subsystems: mm net
First crash: 264d, last: 218d
Similar bugs (11)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: rcu detected stall in sys_sendmmsg (2) [kernel] | | | | 2 | 1688d | 1689d | 0/27 | closed as invalid on 2019/12/04 14:14
linux-6.1 | INFO: rcu detected stall in sys_sendmmsg | | | | 1 | 39d | 39d | 0/3 | upstream: reported on 2024/06/08 04:23
linux-6.1 | BUG: soft lockup in sys_sendmmsg | | | | 2 | 383d | 411d | 0/3 | auto-obsoleted due to no activity on 2023/10/08 22:57
linux-5.15 | INFO: rcu detected stall in sys_sendmmsg | | | | 1 | 470d | 470d | 0/3 | auto-obsoleted due to no activity on 2023/08/02 15:37
upstream | INFO: rcu detected stall in sys_sendmmsg (3) [kernel] | | | | 3 | 923d | 1023d | 0/27 | closed as invalid on 2022/02/08 10:00
linux-5.15 | INFO: rcu detected stall in sys_sendmmsg (2) | | | | 1 | 69d | 69d | 0/3 | upstream: reported on 2024/05/09 11:17
upstream | INFO: rcu detected stall in sys_sendmmsg (5) [net] | | | | 3 | 408d | 422d | 0/27 | auto-obsoleted due to no activity on 2023/09/03 00:09
upstream | INFO: rcu detected stall in sys_sendmmsg [net] | | | | 2 | 1772d | 1773d | 13/27 | fixed on 2019/10/09 10:54
upstream | INFO: rcu detected stall in sys_sendmmsg (4) [net] | | | | 1 | 721d | 721d | 0/27 | auto-obsoleted due to no activity on 2022/10/25 00:28
android-5-15 | BUG: soft lockup in sys_sendmmsg | | | | 1 | 52d | 52d | 0/2 | premoderation: reported on 2024/05/26 08:47
android-6-1 | BUG: soft lockup in sys_sendmmsg | | | | 1 | 264d | 264d | 0/2 | auto-obsoleted due to no activity on 2024/01/25 01:21

Sample crash report:
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	Tasks blocked on level-0 rcu_node (CPUs 0-1): P7691/1:b..l P7689/1:b..l
rcu: 	(detected by 0, t=10502 jiffies, g=30921, q=534 ncpus=2)
task:syz-executor.4  state:R  running task     stack:26080 pid:7689  tgid:7686  ppid:5117   flags:0x00004002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5376 [inline]
 __schedule+0xedb/0x5af0 kernel/sched/core.c:6688
 preempt_schedule_common+0x45/0xc0 kernel/sched/core.c:6865
 preempt_schedule_thunk+0x1a/0x30 arch/x86/entry/thunk_64.S:45
 __raw_spin_unlock include/linux/spinlock_api_smp.h:143 [inline]
 _raw_spin_unlock+0x3a/0x40 kernel/locking/spinlock.c:186
 spin_unlock include/linux/spinlock.h:391 [inline]
 wp_page_copy mm/memory.c:3228 [inline]
 do_wp_page+0x1a65/0x36b0 mm/memory.c:3511
 handle_pte_fault mm/memory.c:5055 [inline]
 __handle_mm_fault+0x1d7d/0x3d70 mm/memory.c:5180
 handle_mm_fault+0x47a/0xa10 mm/memory.c:5345
 do_user_addr_fault+0x3d1/0x1000 arch/x86/mm/fault.c:1413
 handle_page_fault arch/x86/mm/fault.c:1505 [inline]
 exc_page_fault+0x5d/0xc0 arch/x86/mm/fault.c:1561
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:570
RIP: 0010:rep_movs_alternative+0x4a/0x70 arch/x86/lib/copy_user_64.S:71
Code: 75 f1 c3 66 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 8b 06 48 89 07 48 83 c6 08 48 83 c7 08 83 e9 08 74 df 83 f9 08 73 e8 eb c9 <f3> a4 c3 48 89 c8 48 c1 e9 03 83 e0 07 f3 48 a5 89 c1 85 c9 75 b3
RSP: 0018:ffffc9000326f968 EFLAGS: 00050206
RAX: 0000000000000001 RBX: 0000000000001000 RCX: 0000000000000e80
RDX: 0000000000000000 RSI: ffff888010b22180 RDI: 0000000020352000
RBP: 0000000000001000 R08: 0000000000000000 R09: ffffed10021645ff
R10: ffff888010b22fff R11: 0000000000000000 R12: 0000000000351b80
R13: ffffc9000326fd60 R14: ffff888010b22000 R15: 0000000020351e80
 copy_user_generic arch/x86/include/asm/uaccess_64.h:112 [inline]
 raw_copy_to_user arch/x86/include/asm/uaccess_64.h:133 [inline]
 copy_to_user_iter lib/iov_iter.c:25 [inline]
 iterate_iovec include/linux/iov_iter.h:51 [inline]
 iterate_and_advance2 include/linux/iov_iter.h:247 [inline]
 iterate_and_advance include/linux/iov_iter.h:271 [inline]
 _copy_to_iter+0x4ce/0x11e0 lib/iov_iter.c:186
 copy_page_to_iter lib/iov_iter.c:381 [inline]
 copy_page_to_iter+0xf1/0x180 lib/iov_iter.c:368
 process_vm_rw_pages mm/process_vm_access.c:45 [inline]
 process_vm_rw_single_vec mm/process_vm_access.c:117 [inline]
 process_vm_rw_core.constprop.0+0x5cd/0xa10 mm/process_vm_access.c:215
 process_vm_rw+0x2ff/0x360 mm/process_vm_access.c:283
 __do_sys_process_vm_readv mm/process_vm_access.c:295 [inline]
 __se_sys_process_vm_readv mm/process_vm_access.c:291 [inline]
 __x64_sys_process_vm_readv+0xe2/0x1b0 mm/process_vm_access.c:291
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0x40/0x110 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x63/0x6b
RIP: 0033:0x7f763ec7cba9
RSP: 002b:00007f763faaa0c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000136
RAX: ffffffffffffffda RBX: 00007f763ed9bf80 RCX: 00007f763ec7cba9
RDX: 0000000000000002 RSI: 0000000020008400 RDI: 000000000000019b
RBP: 00007f763ecc847a R08: 0000000000000286 R09: 0000000000000000
R10: 0000000020008640 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f763ed9bf80 R15: 00007f763eebfa48
 </TASK>
task:syz-executor.4  state:R  running task     stack:25632 pid:7691  tgid:7686  ppid:5117   flags:0x00004002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5376 [inline]
 __schedule+0xedb/0x5af0 kernel/sched/core.c:6688
 preempt_schedule_irq+0x52/0x90 kernel/sched/core.c:7008
 irqentry_exit+0x36/0x80 kernel/entry/common.c:432
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:649
RIP: 0010:__sanitizer_cov_trace_pc+0x34/0x60 kernel/kcov.c:207
Code: bc 03 00 65 8b 05 b4 2e 7c 7e a9 00 01 ff 00 48 8b 34 24 74 0f f6 c4 01 74 35 8b 82 fc 15 00 00 85 c0 74 2b 8b 82 d8 15 00 00 <83> f8 02 75 20 48 8b 8a e0 15 00 00 8b 92 dc 15 00 00 48 8b 01 48
RSP: 0018:ffffc900031cf0a0 EFLAGS: 00000246
RAX: 0000000000000002 RBX: 0000000000000001 RCX: ffffffff81c6d07a
RDX: ffff88807e9a9dc0 RSI: ffffffff81c6d525 RDI: 0000000000000005
RBP: 0000000000000000 R08: 0000000000000005 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000003 R12: 0000000000000000
R13: 1ffff92000639e1b R14: ffffea000042c880 R15: 0000000010b22225
 rcu_read_unlock include/linux/rcupdate.h:776 [inline]
 pte_unmap include/linux/pgtable.h:113 [inline]
 follow_page_pte+0x755/0x1d80 mm/gup.c:682
 follow_pmd_mask mm/gup.c:727 [inline]
 follow_pud_mask mm/gup.c:765 [inline]
 follow_p4d_mask mm/gup.c:782 [inline]
 follow_page_mask+0x3ce/0xda0 mm/gup.c:832
 __get_user_pages+0x366/0x1490 mm/gup.c:1237
 __get_user_pages_locked mm/gup.c:1507 [inline]
 __gup_longterm_locked+0x278/0x2ad0 mm/gup.c:2209
 internal_get_user_pages_fast+0x1acb/0x2a10 mm/gup.c:3213
 pin_user_pages_fast+0xa8/0xf0 mm/gup.c:3319
 iov_iter_extract_user_pages lib/iov_iter.c:1617 [inline]
 iov_iter_extract_pages+0x388/0x1750 lib/iov_iter.c:1680
 extract_user_to_sg lib/scatterlist.c:1125 [inline]
 extract_iter_to_sg lib/scatterlist.c:1351 [inline]
 extract_iter_to_sg+0xbe3/0x19c0 lib/scatterlist.c:1341
 hash_sendmsg+0x431/0xf40 crypto/algif_hash.c:119
 sock_sendmsg_nosec net/socket.c:730 [inline]
 __sock_sendmsg+0xd5/0x180 net/socket.c:745
 ____sys_sendmsg+0x2ac/0x940 net/socket.c:2584
 ___sys_sendmsg+0x135/0x1d0 net/socket.c:2638
 __sys_sendmmsg+0x1a1/0x450 net/socket.c:2724
 __do_sys_sendmmsg net/socket.c:2753 [inline]
 __se_sys_sendmmsg net/socket.c:2750 [inline]
 __x64_sys_sendmmsg+0x9c/0x100 net/socket.c:2750
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0x40/0x110 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x63/0x6b
RIP: 0033:0x7f763ec7cba9
RSP: 002b:00007f763fa890c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000133
RAX: ffffffffffffffda RBX: 00007f763ed9c050 RCX: 00007f763ec7cba9
RDX: 0000000000000001 RSI: 0000000020000640 RDI: 0000000000000004
RBP: 00007f763ecc847a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000006e R14: 00007f763ed9c050 R15: 00007f763eebfa48
 </TASK>
rcu: rcu_preempt kthread starved for 6609 jiffies! g30921 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:R  running task     stack:27904 pid:17    tgid:17    ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5376 [inline]
 __schedule+0xedb/0x5af0 kernel/sched/core.c:6688
 __schedule_loop kernel/sched/core.c:6763 [inline]
 schedule+0xe9/0x270 kernel/sched/core.c:6778
 schedule_timeout+0x137/0x290 kernel/time/timer.c:2167
 rcu_gp_fqs_loop+0x1ec/0xb10 kernel/rcu/tree.c:1631
 rcu_gp_kthread+0x24b/0x380 kernel/rcu/tree.c:1830
 kthread+0x2c6/0x3a0 kernel/kthread.c:388
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242
 </TASK>
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1 skipped: idling at native_safe_halt arch/x86/include/asm/irqflags.h:48 [inline]
NMI backtrace for cpu 1 skipped: idling at arch_safe_halt arch/x86/include/asm/irqflags.h:86 [inline]
NMI backtrace for cpu 1 skipped: idling at acpi_safe_halt+0x1b/0x20 drivers/acpi/processor_idle.c:112

Crashes (3):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2023/12/12 17:26 | upstream | 26aff849438c | ebcad15c | .config | console log | report | | | info | disk image, vmlinux, kernel image | ci-upstream-kasan-gce | INFO: rcu detected stall in sys_sendmmsg
2023/10/31 02:13 | upstream | 14ab6d425e80 | b5729d82 | .config | console log | report | | | info | disk image, vmlinux, kernel image | ci-upstream-kasan-gce-selinux-root | INFO: rcu detected stall in sys_sendmmsg
2023/10/27 01:29 | net-next | ea23fbd2a8f7 | bf285f0c | .config | console log | report | | | info | disk image, vmlinux, kernel image | ci-upstream-net-kasan-gce | INFO: rcu detected stall in sys_sendmmsg
* Struck through repros no longer work on HEAD.