syzbot

INFO: rcu detected stall in sys_getsockopt (10)

Status: auto-obsoleted due to no activity on 2023/11/20 15:28
Subsystems: mm
First crash: 331d, last: 249d
Similar bugs (14)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: rcu detected stall in sys_getsockopt (2) [kernel] | - | - | - | 1 | 1572d | 1572d | 0/26 | closed as invalid on 2020/01/08 05:23
linux-4.19 | INFO: rcu detected stall in sys_getsockopt | - | - | - | 1 | 585d | 585d | 0/1 | auto-obsoleted due to no activity on 2023/01/19 05:52
upstream | INFO: rcu detected stall in sys_getsockopt (11) [netfilter] | - | - | - | 4 | 118d | 131d | 0/26 | auto-obsoleted due to no activity on 2024/03/30 17:47
linux-5.15 | INFO: rcu detected stall in sys_getsockopt | - | - | - | 1 | 137d | 137d | 0/3 | auto-obsoleted due to no activity on 2024/03/21 16:23
upstream | INFO: rcu detected stall in sys_getsockopt (3) [kernel] | - | - | - | 1 | 1572d | 1572d | 0/26 | closed as invalid on 2020/01/08 05:33
upstream | INFO: rcu detected stall in sys_getsockopt (4) [kernel] | - | - | - | 3 | 1571d | 1571d | 0/26 | closed as invalid on 2020/01/09 08:13
upstream | INFO: rcu detected stall in sys_getsockopt (6) [kvm] | - | - | - | 3 | 989d | 1057d | 0/26 | auto-closed as invalid on 2021/11/10 12:58
upstream | INFO: rcu detected stall in sys_getsockopt (7) [netfilter] | - | - | - | 2 | 846d | 896d | 0/26 | closed as invalid on 2022/02/08 10:10
upstream | INFO: rcu detected stall in sys_getsockopt (5) [sctp] | - | - | - | 2 | 1389d | 1432d | 0/26 | auto-closed as invalid on 2020/10/06 19:28
linux-6.1 | INFO: rcu detected stall in sys_getsockopt | - | - | - | 4 | 54d | 212d | 0/3 | upstream: reported on 2023/09/28 22:07
upstream | INFO: rcu detected stall in sys_getsockopt (9) [kernel] | - | - | - | 6 | 453d | 608d | 0/26 | auto-obsoleted due to no activity on 2023/05/01 05:21
upstream | INFO: rcu detected stall in sys_getsockopt (8) [net] | - | - | - | 2 | 737d | 753d | 0/26 | auto-closed as invalid on 2022/07/21 00:25
upstream | INFO: rcu detected stall in sys_getsockopt [kernel] | - | - | - | 2 | 1607d | 1607d | 0/26 | closed as invalid on 2019/12/04 14:04
android-5-15 | BUG: soft lockup in sys_getsockopt | - | - | - | 2 | 2d14h | 5d01h | 0/2 | premoderation: reported on 2024/04/23 07:54

Sample crash report:
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	Tasks blocked on level-0 rcu_node (CPUs 0-1): P7213/1:b..l
rcu: 	(detected by 1, t=10502 jiffies, g=142693, q=74 ncpus=2)
task:kworker/u4:11   state:R  running task     stack:23200 pid:7213  ppid:2      flags:0x00004000
Workqueue: bat_events batadv_nc_worker
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5381 [inline]
 __schedule+0xee1/0x59f0 kernel/sched/core.c:6710
 preempt_schedule_irq+0x52/0x90 kernel/sched/core.c:7022
 irqentry_exit+0x35/0x80 kernel/entry/common.c:433
 asm_sysvec_reschedule_ipi+0x1a/0x20 arch/x86/include/asm/idtentry.h:650
RIP: 0010:rcu_is_watching+0x7c/0xb0 kernel/rcu/tree.c:696
Code: 89 da 48 c1 ea 03 0f b6 14 02 48 89 d8 83 e0 07 83 c0 03 38 d0 7c 04 84 d2 75 1c 8b 03 c1 e8 02 83 e0 01 65 ff 0d 8c ee 94 7e <74> 03 5b 5d c3 e8 1a b7 91 ff 5b 5d c3 48 89 df e8 8f 4a 6b 00 eb
RSP: 0018:ffffc900159f7ab0 EFLAGS: 00000286
RAX: 0000000000000001 RBX: ffff8880b9936ce8 RCX: ffffffff81671997
RDX: 0000000000000000 RSI: ffffffff8ac81180 RDI: ffffffff8c39ca08
RBP: 0000000000000001 R08: 0000000000000000 R09: fffffbfff1d57472
R10: ffffffff8eaba397 R11: 0000000000000000 R12: 0000000000000000
R13: 0000000000000000 R14: ffffffff8c9a7400 R15: 0000000000000000
 trace_lock_acquire include/trace/events/lock.h:24 [inline]
 lock_acquire+0x464/0x510 kernel/locking/lockdep.c:5732
 rcu_lock_acquire include/linux/rcupdate.h:303 [inline]
 rcu_read_lock include/linux/rcupdate.h:749 [inline]
 batadv_nc_purge_orig_hash net/batman-adv/network-coding.c:408 [inline]
 batadv_nc_worker+0x175/0x10f0 net/batman-adv/network-coding.c:719
 process_one_work+0xaa2/0x16f0 kernel/workqueue.c:2600
 worker_thread+0x687/0x1110 kernel/workqueue.c:2751
 kthread+0x33a/0x430 kernel/kthread.c:389
 ret_from_fork+0x2c/0x70 arch/x86/kernel/process.c:145
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:304
 </TASK>
rcu: rcu_preempt kthread timer wakeup didn't happen for 10499 jiffies! g142693 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402
rcu: 	Possible timer handling issue on cpu=0 timer-softirq=138089
rcu: rcu_preempt kthread starved for 10500 jiffies! g142693 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x402 ->cpu=0
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:I stack:28064 pid:16    ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5381 [inline]
 __schedule+0xee1/0x59f0 kernel/sched/core.c:6710
 schedule+0xe7/0x1b0 kernel/sched/core.c:6786
 schedule_timeout+0x157/0x2c0 kernel/time/timer.c:2167
 rcu_gp_fqs_loop+0x1ec/0xa50 kernel/rcu/tree.c:1609
 rcu_gp_kthread+0x249/0x380 kernel/rcu/tree.c:1808
 kthread+0x33a/0x430 kernel/kthread.c:389
 ret_from_fork+0x2c/0x70 arch/x86/kernel/process.c:145
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:304
 </TASK>
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 5049 Comm: syz-executor.0 Not tainted 6.5.0-rc7-syzkaller-00004-gf7757129e3de #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/26/2023
RIP: 0010:variable_test_bit arch/x86/include/asm/bitops.h:228 [inline]
RIP: 0010:arch_test_bit arch/x86/include/asm/bitops.h:240 [inline]
RIP: 0010:_test_bit include/asm-generic/bitops/instrumented-non-atomic.h:142 [inline]
RIP: 0010:cpumask_test_cpu include/linux/cpumask.h:504 [inline]
RIP: 0010:cpu_online include/linux/cpumask.h:1082 [inline]
RIP: 0010:trace_hrtimer_start include/trace/events/timer.h:202 [inline]
RIP: 0010:debug_activate kernel/time/hrtimer.c:478 [inline]
RIP: 0010:enqueue_hrtimer+0x75/0x310 kernel/time/hrtimer.c:1087
Code: 00 e8 1f e9 10 00 45 89 e4 be 08 00 00 00 4c 89 e0 48 c1 e8 06 48 8d 3c c5 90 a3 ab 8e e8 a3 fc 64 00 4c 0f a3 25 eb 83 36 0d <41> 0f 92 c4 31 ff 44 89 e6 e8 1d e4 10 00 45 84 e4 0f 85 ef 01 00
RSP: 0018:ffffc90000007e18 EFLAGS: 00000047
RAX: 0000000000000001 RBX: ffff8880b982b980 RCX: ffffffff81751f9d
RDX: fffffbfff1d57473 RSI: 0000000000000008 RDI: ffffffff8eaba390
RBP: ffff88802bd8b340 R08: 0000000000000000 R09: fffffbfff1d57472
R10: ffffffff8eaba397 R11: 0000000000000000 R12: 0000000000000000
R13: 0000000000000000 R14: ffff88802bd8b340 R15: 0000000000000001
FS:  0000555555e3f480(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b2ef22000 CR3: 000000002e07d000 CR4: 0000000000350ef0
Call Trace:
 <NMI>
 </NMI>
 <IRQ>
 __run_hrtimer kernel/time/hrtimer.c:1705 [inline]
 __hrtimer_run_queues+0xa0a/0xc10 kernel/time/hrtimer.c:1752
 hrtimer_interrupt+0x31b/0x800 kernel/time/hrtimer.c:1814
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1098 [inline]
 __sysvec_apic_timer_interrupt+0x14a/0x430 arch/x86/kernel/apic/apic.c:1115
 sysvec_apic_timer_interrupt+0x8e/0xc0 arch/x86/kernel/apic/apic.c:1109
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:645
RIP: 0010:orc_ip arch/x86/kernel/unwind_orc.c:80 [inline]
RIP: 0010:__orc_find+0x86/0xf0 arch/x86/kernel/unwind_orc.c:102
Code: f2 48 d1 fa 48 8d 5c 95 00 48 89 da 48 c1 ea 03 0f b6 34 0a 48 89 da 83 e2 07 83 c2 03 40 38 f2 7c 05 40 84 f6 75 43 48 63 13 <48> 01 da 49 39 d5 73 af 4c 8d 63 fc 49 39 ec 73 b2 4d 29 f7 49 c1
RSP: 0018:ffffc900043bf258 EFLAGS: 00000246
RAX: ffffffff8f3ea900 RBX: ffffffff8ec3006c RCX: dffffc0000000000
RDX: fffffffff313d211 RSI: 0000000000000000 RDI: ffffffff8ec3005c
RBP: ffffffff8ec3005c R08: ffffffff8f3ea936 R09: ffffffff8f3e1f64
R10: ffffc900043bf308 R11: 000000000000d6d2 R12: ffffffff8ec3007c
R13: ffffffff81d6d269 R14: ffffffff8ec3005c R15: ffffffff8ec3005c
 orc_find arch/x86/kernel/unwind_orc.c:227 [inline]
 unwind_next_frame+0x2b4/0x2020 arch/x86/kernel/unwind_orc.c:494
 arch_stack_walk+0x8b/0xf0 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x96/0xd0 kernel/stacktrace.c:122
 save_stack+0x160/0x1f0 mm/page_owner.c:128
 __set_page_owner+0x1f/0x60 mm/page_owner.c:192
 set_page_owner include/linux/page_owner.h:31 [inline]
 post_alloc_hook+0x2d2/0x350 mm/page_alloc.c:1570
 prep_new_page mm/page_alloc.c:1577 [inline]
 get_page_from_freelist+0x10a9/0x31e0 mm/page_alloc.c:3221
 __alloc_pages+0x1d0/0x4a0 mm/page_alloc.c:4477
 __alloc_pages_bulk+0x77a/0x1110 mm/page_alloc.c:4425
 alloc_pages_bulk_array_mempolicy+0x1ca/0x370 mm/mempolicy.c:2387
 vm_area_alloc_pages mm/vmalloc.c:3024 [inline]
 __vmalloc_area_node mm/vmalloc.c:3135 [inline]
 __vmalloc_node_range+0xd08/0x1540 mm/vmalloc.c:3316
 __vmalloc_node mm/vmalloc.c:3381 [inline]
 vzalloc+0x6b/0x80 mm/vmalloc.c:3454
 alloc_counters net/ipv4/netfilter/ip_tables.c:799 [inline]
 copy_entries_to_user net/ipv4/netfilter/ip_tables.c:821 [inline]
 get_entries net/ipv4/netfilter/ip_tables.c:1022 [inline]
 do_ipt_get_ctl+0x68b/0xa60 net/ipv4/netfilter/ip_tables.c:1660
 nf_getsockopt+0x76/0xe0 net/netfilter/nf_sockopt.c:116
 ip_getsockopt+0x186/0x1d0 net/ipv4/ip_sockglue.c:1825
 tcp_getsockopt+0x97/0xf0 net/ipv4/tcp.c:4304
 __sys_getsockopt+0x220/0x6a0 net/socket.c:2307
 __do_sys_getsockopt net/socket.c:2322 [inline]
 __se_sys_getsockopt net/socket.c:2319 [inline]
 __x64_sys_getsockopt+0xbd/0x150 net/socket.c:2319
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7fe1ade7e68a
Code: c4 c1 e0 1a 0d 00 00 04 00 89 01 e9 e0 fe ff ff e8 3b 05 00 00 66 2e 0f 1f 84 00 00 00 00 00 90 49 89 ca b8 37 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 06 c3 0f 1f 44 00 00 48 c7 c2 b0 ff ff ff f7
RSP: 002b:00007ffda5f97a18 EFLAGS: 00000216 ORIG_RAX: 0000000000000037
RAX: ffffffffffffffda RBX: 00007ffda5f97aa0 RCX: 00007fe1ade7e68a
RDX: 0000000000000041 RSI: 0000000000000000 RDI: 0000000000000003
RBP: 0000000000000003 R08: 00007ffda5f97a3c R09: 00007ffda5f97e57
R10: 00007ffda5f97aa0 R11: 0000000000000216 R12: 00007fe1adf78d00
R13: 00007ffda5f97a3c R14: 0000000000000000 R15: 00007fe1adf7aec0
 </TASK>
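
Note (editorial, hedged reading of the traces above): the task blocked on the level-0 rcu_node (P7213) is the bat_events worker preempted while inside batadv_nc_purge_orig_hash(), which runs under rcu_read_lock(); the rcu_preempt grace-period kthread then reports being starved for roughly 10,500 jiffies; and the NMI backtrace suggests CPU 0 is CPU-bound in the getsockopt path, where do_ipt_get_ctl() performs a large vzalloc() and every bulk-allocated page goes through page_owner stack capture (ORC unwinding). Purely as an illustrative sketch of the stall-prone pattern and its usual mitigation (hypothetical names, not the batman-adv or netfilter code from this report), a periodic worker that walks a large RCU-protected hash is normally written so that each read-side section covers only one bucket and the worker yields between buckets:

/*
 * Illustrative sketch only: hypothetical names, not code from this report.
 * A periodic worker walking a large RCU-protected hash keeps each
 * rcu_read_lock() section short (one bucket) and yields between buckets,
 * so the rcu_preempt kthread and softirqs can run even when the walk is long.
 */
#include <linux/rculist.h>
#include <linux/sched.h>
#include <linux/workqueue.h>

struct demo_hash {                      /* hypothetical */
	struct hlist_head *buckets;
	unsigned int size;
};

struct demo_entry {                     /* hypothetical */
	struct hlist_node node;
	unsigned long last_seen;
};

static struct demo_hash *demo_hash;     /* hypothetical, published via RCU */

static void demo_check_entry(struct demo_entry *e)   /* hypothetical */
{
	/* per-entry aging/purge decision would go here */
}

static void demo_purge_worker(struct work_struct *work)
{
	unsigned int i;

	for (i = 0; i < demo_hash->size; i++) {
		struct demo_entry *e;

		rcu_read_lock();        /* read-side section covers one bucket only */
		hlist_for_each_entry_rcu(e, &demo_hash->buckets[i], node)
			demo_check_entry(e);
		rcu_read_unlock();

		cond_resched();         /* let rcu_preempt/ksoftirqd run between buckets */
	}
}

Whether the stall in this report comes from the reader side or simply from CPU 0 being monopolized by the page_owner-instrumented bulk allocation is not established by this log alone.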

Crashes (9):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2023/08/22 15:23 | upstream | f7757129e3de | b81ca3f6 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-root | INFO: rcu detected stall in sys_getsockopt
2023/08/19 19:21 | upstream | 12e6ccedb311 | d216d8a0 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-smack-root | INFO: rcu detected stall in sys_getsockopt
2023/08/17 19:19 | upstream | 16931859a650 | 74b106b6 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-root | INFO: rcu detected stall in sys_getsockopt
2023/08/13 19:18 | upstream | 4c75bf7e4a0e | 39990d51 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-smack-root | INFO: rcu detected stall in sys_getsockopt
2023/07/23 22:02 | upstream | 269f4a4b85a1 | 27cbe77f | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-smack-root | INFO: rcu detected stall in sys_getsockopt
2023/08/15 03:57 | net | 855067defa36 | 39990d51 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | INFO: rcu detected stall in sys_getsockopt
2023/06/02 16:11 | net | 714069daa5d3 | a4ae4f42 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | INFO: rcu detected stall in sys_getsockopt
2023/06/01 19:02 | net | be7f8012a513 | a4ae4f42 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-net-this-kasan-gce | INFO: rcu detected stall in sys_getsockopt
2023/07/07 23:35 | linux-next | 123212f53f3e | 668cb1fa | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-linux-next-kasan-gce-root | INFO: rcu detected stall in sys_getsockopt
* Struck through repros no longer work on HEAD.