syzbot


INFO: rcu detected stall in sys_socket (6)

Status: auto-obsoleted due to no activity on 2022/12/12 22:48
Subsystems: cgroups mm
First crash: 645d, last: 601d
Similar bugs (12)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: rcu detected stall in sys_socket (4) fs | | | | 1 | 1105d | 1105d | 0/26 | auto-closed as invalid on 2021/07/27 17:27
linux-5.15 | INFO: rcu detected stall in sys_socket | | | | 1 | 217d | 217d | 0/3 | auto-obsoleted due to no activity on 2024/01/11 06:48
upstream | INFO: rcu detected stall in sys_socket (5) net | | | | 2 | 942d | 968d | 0/26 | auto-closed as invalid on 2022/01/06 01:08
upstream | INFO: rcu detected stall in sys_socket kernel | | | | 11 | 1616d | 1617d | 0/26 | closed as invalid on 2019/12/04 14:04
linux-4.19 | INFO: rcu detected stall in sys_socket | | | | 1 | 746d | 746d | 0/1 | auto-closed as invalid on 2022/08/20 07:08
upstream | INFO: rcu detected stall in sys_socket (10) fs | C | done | | 10 | 8d13h | 159d | 0/26 | upstream: reported C repro on 2023/11/30 15:24
upstream | INFO: rcu detected stall in sys_socket (7) kernel | | | | 2 | 461d | 493d | 0/26 | auto-obsoleted due to no activity on 2023/05/02 14:50
upstream | INFO: rcu detected stall in sys_socket (9) kasan mm | | | | 2 | 278d | 288d | 0/26 | closed as invalid on 2023/09/07 14:25
upstream | INFO: rcu detected stall in sys_socket (2) kernel | | | | 3 | 1581d | 1581d | 0/26 | closed as invalid on 2020/01/08 05:23
linux-5.15 | INFO: rcu detected stall in sys_socket (2) origin:upstream | C | | | 2 | 27d | 66d | 0/3 | upstream: reported C repro on 2024/03/02 17:55
upstream | INFO: rcu detected stall in sys_socket (3) kernel | | | | 4 | 1581d | 1581d | 0/26 | closed as invalid on 2020/01/09 08:13
android-5-15 | BUG: soft lockup in sys_socket origin:lts | C | | | 13 | 1d01h | 27d | 0/2 | upstream: reported C repro on 2024/04/10 16:23

Sample crash report:
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	1-...!: (1 GPs behind) idle=d94c/1/0x4000000000000000 softirq=44394/44395 fqs=0
	(detected by 0, t=10505 jiffies, g=65401, q=78 ncpus=2)
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 12613 Comm: syz-executor.1 Not tainted 6.0.0-rc5-syzkaller-00017-gd1221cea11fc #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/26/2022
RIP: 0010:native_irq_disable arch/x86/include/asm/irqflags.h:40 [inline]
RIP: 0010:arch_local_irq_disable arch/x86/include/asm/irqflags.h:75 [inline]
RIP: 0010:arch_local_irq_save arch/x86/include/asm/irqflags.h:107 [inline]
RIP: 0010:lock_is_held_type+0x54/0x140 kernel/locking/lockdep.c:5705
Code: c0 0f 85 ca 00 00 00 65 4c 8b 24 25 80 6f 02 00 41 8b 94 24 74 0a 00 00 85 d2 0f 85 b1 00 00 00 48 89 fd 41 89 f6 9c 8f 04 24 <fa> 48 c7 c7 e0 ab ec 89 31 db e8 3d 15 00 00 41 8b 84 24 70 0a 00
RSP: 0018:ffffc900001e0dc0 EFLAGS: 00000046
RAX: 0000000000000000 RBX: ffff8880b9b2a640 RCX: 0000000000000001
RDX: 0000000000000000 RSI: 00000000ffffffff RDI: ffffffff8bf89340
RBP: ffffffff8bf89340 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000000 R12: ffff888053e58000
R13: 00000000ffffffff R14: 00000000ffffffff R15: 0000000000000001
FS:  00007f2425831700(0000) GS:ffff8880b9b00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f664dda0000 CR3: 00000000447db000 CR4: 0000000000350ee0
Call Trace:
 <IRQ>
 lock_is_held include/linux/lockdep.h:283 [inline]
 rcu_read_lock_sched_held+0x3a/0x70 kernel/rcu/update.c:125
 trace_hrtimer_start include/trace/events/timer.h:198 [inline]
 debug_activate kernel/time/hrtimer.c:476 [inline]
 enqueue_hrtimer+0x2b8/0x3e0 kernel/time/hrtimer.c:1084
 __run_hrtimer kernel/time/hrtimer.c:1702 [inline]
 __hrtimer_run_queues+0xaf3/0xe40 kernel/time/hrtimer.c:1749
 hrtimer_interrupt+0x31c/0x790 kernel/time/hrtimer.c:1811
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1095 [inline]
 __sysvec_apic_timer_interrupt+0x146/0x530 arch/x86/kernel/apic/apic.c:1112
 sysvec_apic_timer_interrupt+0x8e/0xc0 arch/x86/kernel/apic/apic.c:1106
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:649
RIP: 0010:lock_release+0x3f1/0x780 kernel/locking/lockdep.c:5674
Code: 7e 83 f8 01 0f 85 cb 01 00 00 9c 58 f6 c4 02 0f 85 b6 01 00 00 48 f7 04 24 00 02 00 00 74 01 fb 48 b8 00 00 00 00 00 fc ff df <48> 01 c5 48 c7 45 00 00 00 00 00 c7 45 08 00 00 00 00 48 8b 84 24
RSP: 0018:ffffc9000b9c7c40 EFLAGS: 00000206
RAX: dffffc0000000000 RBX: e460fc627dc0ad1e RCX: ffffc9000b9c7c90
RDX: 1ffff1100a7cb14d RSI: 0000000000000000 RDI: 0000000000000000
RBP: 1ffff92001738f8a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000001
R13: 0000000000000002 R14: ffff888053e58a70 R15: ffff888053e58000
 rcu_lock_release include/linux/rcupdate.h:285 [inline]
 rcu_read_unlock include/linux/rcupdate.h:739 [inline]
 percpu_ref_tryget_many include/linux/percpu-refcount.h:250 [inline]
 percpu_ref_tryget include/linux/percpu-refcount.h:266 [inline]
 obj_cgroup_tryget include/linux/memcontrol.h:796 [inline]
 __get_obj_cgroup_from_memcg+0xaf/0x270 mm/memcontrol.c:2945
 get_obj_cgroup_from_current+0x116/0x250 mm/memcontrol.c:2965
 memcg_slab_pre_alloc_hook mm/slab.h:480 [inline]
 slab_pre_alloc_hook mm/slab.h:705 [inline]
 slab_alloc_node mm/slub.c:3157 [inline]
 slab_alloc mm/slub.c:3251 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3258 [inline]
 kmem_cache_alloc_lru+0x85/0x720 mm/slub.c:3275
 alloc_inode_sb include/linux/fs.h:3103 [inline]
 sock_alloc_inode+0x23/0x1d0 net/socket.c:304
 alloc_inode+0x61/0x230 fs/inode.c:260
 new_inode_pseudo+0x13/0x80 fs/inode.c:1019
 sock_alloc+0x3c/0x260 net/socket.c:627
 __sock_create+0xb9/0x790 net/socket.c:1479
 sock_create net/socket.c:1566 [inline]
 __sys_socket_create net/socket.c:1603 [inline]
 __sys_socket_create net/socket.c:1588 [inline]
 __sys_socket+0x12f/0x240 net/socket.c:1636
 __do_sys_socket net/socket.c:1649 [inline]
 __se_sys_socket net/socket.c:1647 [inline]
 __x64_sys_socket+0x6f/0xb0 net/socket.c:1647
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f2424689409
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f2425831168 EFLAGS: 00000246 ORIG_RAX: 0000000000000029
RAX: ffffffffffffffda RBX: 00007f242479bf80 RCX: 00007f2424689409
RDX: 0000000000000000 RSI: 0000000000000003 RDI: 0000000000000010
RBP: 00007f24246e4367 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fffa02d34ff R14: 00007f2425831300 R15: 0000000000022000
 </TASK>
rcu: rcu_preempt kthread starved for 10505 jiffies! g65401 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:R  running task     stack:28728 pid:   16 ppid:     2 flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5182 [inline]
 __schedule+0xadf/0x52b0 kernel/sched/core.c:6494
 schedule+0xda/0x1b0 kernel/sched/core.c:6570
 schedule_timeout+0x14a/0x2a0 kernel/time/timer.c:1935
 rcu_gp_fqs_loop+0x190/0x910 kernel/rcu/tree.c:1657
 rcu_gp_kthread+0x236/0x360 kernel/rcu/tree.c:1854
 kthread+0x2e4/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
 </TASK>
rcu: Stack dump where RCU GP kthread last ran:
NMI backtrace for cpu 0
CPU: 0 PID: 12609 Comm: syz-executor.1 Not tainted 6.0.0-rc5-syzkaller-00017-gd1221cea11fc #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/26/2022
Call Trace:
 <IRQ>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
 nmi_cpu_backtrace.cold+0x46/0x14f lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x206/0x250 lib/nmi_backtrace.c:62
 trigger_single_cpu_backtrace include/linux/nmi.h:166 [inline]
 rcu_check_gp_kthread_starvation.cold+0x1fb/0x200 kernel/rcu/tree_stall.h:514
 print_other_cpu_stall kernel/rcu/tree_stall.h:619 [inline]
 check_cpu_stall kernel/rcu/tree_stall.h:762 [inline]
 rcu_pending kernel/rcu/tree.c:3660 [inline]
 rcu_sched_clock_irq+0x2404/0x2530 kernel/rcu/tree.c:2342
 update_process_times+0x11a/0x1a0 kernel/time/timer.c:1839
 tick_sched_handle+0x9b/0x180 kernel/time/tick-sched.c:243
 tick_sched_timer+0xee/0x120 kernel/time/tick-sched.c:1480
 __run_hrtimer kernel/time/hrtimer.c:1685 [inline]
 __hrtimer_run_queues+0x1c0/0xe40 kernel/time/hrtimer.c:1749
 hrtimer_interrupt+0x31c/0x790 kernel/time/hrtimer.c:1811
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1095 [inline]
 __sysvec_apic_timer_interrupt+0x146/0x530 arch/x86/kernel/apic/apic.c:1112
 sysvec_apic_timer_interrupt+0x8e/0xc0 arch/x86/kernel/apic/apic.c:1106
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:649
RIP: 0010:csd_lock_wait kernel/smp.c:414 [inline]
RIP: 0010:smp_call_function_many_cond+0x5c3/0x1430 kernel/smp.c:988
Code: 89 ee e8 30 ad 0a 00 85 ed 74 48 48 8b 44 24 08 49 89 c4 83 e0 07 49 c1 ec 03 48 89 c5 4d 01 f4 83 c5 03 e8 4f b0 0a 00 f3 90 <41> 0f b6 04 24 40 38 c5 7c 08 84 c0 0f 85 b5 0b 00 00 8b 43 08 31
RSP: 0000:ffffc9000af879a0 EFLAGS: 00000293
RAX: 0000000000000000 RBX: ffff8880b9b3edc0 RCX: 0000000000000000
RDX: ffff888026b45880 RSI: ffffffff817158d1 RDI: 0000000000000005
RBP: 0000000000000003 R08: 0000000000000005 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000000 R12: ffffed1017367db9
R13: 0000000000000001 R14: dffffc0000000000 R15: 0000000000000001
 on_each_cpu_cond_mask+0x56/0xa0 kernel/smp.c:1154
 __flush_tlb_multi arch/x86/include/asm/paravirt.h:87 [inline]
 flush_tlb_multi arch/x86/mm/tlb.c:924 [inline]
 flush_tlb_mm_range+0x35d/0x4c0 arch/x86/mm/tlb.c:1010
 flush_tlb_page arch/x86/include/asm/tlbflush.h:240 [inline]
 ptep_clear_flush+0x12b/0x160 mm/pgtable-generic.c:98
 wp_page_copy+0x869/0x1b60 mm/memory.c:3177
 do_wp_page+0x52c/0x1910 mm/memory.c:3479
 handle_pte_fault mm/memory.c:4929 [inline]
 __handle_mm_fault+0x1813/0x39b0 mm/memory.c:5053
 handle_mm_fault+0x1c8/0x780 mm/memory.c:5151
 do_user_addr_fault+0x475/0x1210 arch/x86/mm/fault.c:1397
 handle_page_fault arch/x86/mm/fault.c:1488 [inline]
 exc_page_fault+0x94/0x170 arch/x86/mm/fault.c:1544
 asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:570
RIP: 0033:0x7f24246346f5

================================
WARNING: inconsistent lock state
6.0.0-rc5-syzkaller-00017-gd1221cea11fc #0 Not tainted
--------------------------------
inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage.
syz-executor.1/12609 [HC1[1]:SC0[0]:HE0:SE1] takes:
ffffffff8c0bf338 (vmap_area_lock){?.+.}-{2:2}, at: spin_lock include/linux/spinlock.h:349 [inline]
ffffffff8c0bf338 (vmap_area_lock){?.+.}-{2:2}, at: find_vmap_area+0x1c/0x130 mm/vmalloc.c:1836
{HARDIRQ-ON-W} state was registered at:
  lock_acquire kernel/locking/lockdep.c:5666 [inline]
  lock_acquire+0x1ab/0x570 kernel/locking/lockdep.c:5631
  __raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
  _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
  spin_lock include/linux/spinlock.h:349 [inline]
  alloc_vmap_area+0xa0b/0x1d50 mm/vmalloc.c:1617
  __get_vm_area_node+0x142/0x3f0 mm/vmalloc.c:2484
  get_vm_area_caller+0x43/0x50 mm/vmalloc.c:2537
  __ioremap_caller.constprop.0+0x292/0x600 arch/x86/mm/ioremap.c:280
  acpi_os_ioremap include/acpi/acpi_io.h:13 [inline]
  acpi_map drivers/acpi/osl.c:296 [inline]
  acpi_os_map_iomem+0x463/0x550 drivers/acpi/osl.c:355
  acpi_tb_acquire_table+0xd8/0x209 drivers/acpi/acpica/tbdata.c:142
  acpi_tb_validate_table drivers/acpi/acpica/tbdata.c:317 [inline]
  acpi_tb_validate_table+0x50/0x8c drivers/acpi/acpica/tbdata.c:308
  acpi_tb_verify_temp_table+0x84/0x674 drivers/acpi/acpica/tbdata.c:504
  acpi_reallocate_root_table+0x374/0x3e0 drivers/acpi/acpica/tbxface.c:180
  acpi_early_init+0x13a/0x438 drivers/acpi/bus.c:1214
  start_kernel+0x3cf/0x48f init/main.c:1099
  secondary_startup_64_no_verify+0xce/0xdb
irq event stamp: 23666
hardirqs last  enabled at (23665): [<ffffffff89a00cc6>] asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:649
hardirqs last disabled at (23666): [<ffffffff89800b9b>] sysvec_apic_timer_interrupt+0xb/0xc0 arch/x86/kernel/apic/apic.c:1106
softirqs last  enabled at (2924): [<ffffffff81491843>] invoke_softirq kernel/softirq.c:445 [inline]
softirqs last  enabled at (2924): [<ffffffff81491843>] __irq_exit_rcu+0x123/0x180 kernel/softirq.c:650
softirqs last disabled at (2873): [<ffffffff81491843>] invoke_softirq kernel/softirq.c:445 [inline]
softirqs last disabled at (2873): [<ffffffff81491843>] __irq_exit_rcu+0x123/0x180 kernel/softirq.c:650

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(vmap_area_lock);
  <Interrupt>
    lock(vmap_area_lock);

 *** DEADLOCK ***

2 locks held by syz-executor.1/12609:
 #0: ffff88807b8e4728 (&mm->mmap_lock#2){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:136 [inline]
 #0: ffff88807b8e4728 (&mm->mmap_lock#2){++++}-{3:3}, at: do_user_addr_fault+0x276/0x1210 arch/x86/mm/fault.c:1338
 #1: ffff888072e97f18 (ptlock_ptr(page)#2){+.+.}-{2:2}, at: spin_lock include/linux/spinlock.h:349 [inline]
 #1: ffff888072e97f18 (ptlock_ptr(page)#2){+.+.}-{2:2}, at: wp_page_copy+0x639/0x1b60 mm/memory.c:3147

stack backtrace:
CPU: 0 PID: 12609 Comm: syz-executor.1 Not tainted 6.0.0-rc5-syzkaller-00017-gd1221cea11fc #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/26/2022
Call Trace:
 <IRQ>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
 print_usage_bug kernel/locking/lockdep.c:3961 [inline]
 valid_state kernel/locking/lockdep.c:3973 [inline]
 mark_lock_irq kernel/locking/lockdep.c:4176 [inline]
 mark_lock.part.0.cold+0x18/0xd8 kernel/locking/lockdep.c:4632
 mark_lock kernel/locking/lockdep.c:4596 [inline]
 mark_usage kernel/locking/lockdep.c:4524 [inline]
 __lock_acquire+0x14a2/0x56d0 kernel/locking/lockdep.c:5007
 lock_acquire kernel/locking/lockdep.c:5666 [inline]
 lock_acquire+0x1ab/0x570 kernel/locking/lockdep.c:5631
 __raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
 _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
 spin_lock include/linux/spinlock.h:349 [inline]
 find_vmap_area+0x1c/0x130 mm/vmalloc.c:1836
 check_heap_object mm/usercopy.c:176 [inline]
 __check_object_size mm/usercopy.c:250 [inline]
 __check_object_size+0x1f8/0x700 mm/usercopy.c:212
 check_object_size include/linux/thread_info.h:199 [inline]
 __copy_from_user_inatomic include/linux/uaccess.h:62 [inline]
 copy_from_user_nmi arch/x86/lib/usercopy.c:47 [inline]
 copy_from_user_nmi+0xcb/0x130 arch/x86/lib/usercopy.c:31
 copy_code arch/x86/kernel/dumpstack.c:91 [inline]
 show_opcodes+0x59/0xb0 arch/x86/kernel/dumpstack.c:121
 show_iret_regs+0xd/0x33 arch/x86/kernel/dumpstack.c:149
 __show_regs+0x1e/0x60 arch/x86/kernel/process_64.c:74
 show_trace_log_lvl+0x25b/0x2ba arch/x86/kernel/dumpstack.c:292
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
 nmi_cpu_backtrace.cold+0x46/0x14f lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x206/0x250 lib/nmi_backtrace.c:62
 trigger_single_cpu_backtrace include/linux/nmi.h:166 [inline]
 rcu_check_gp_kthread_starvation.cold+0x1fb/0x200 kernel/rcu/tree_stall.h:514
 print_other_cpu_stall kernel/rcu/tree_stall.h:619 [inline]
 check_cpu_stall kernel/rcu/tree_stall.h:762 [inline]
 rcu_pending kernel/rcu/tree.c:3660 [inline]
 rcu_sched_clock_irq+0x2404/0x2530 kernel/rcu/tree.c:2342
 update_process_times+0x11a/0x1a0 kernel/time/timer.c:1839
 tick_sched_handle+0x9b/0x180 kernel/time/tick-sched.c:243
 tick_sched_timer+0xee/0x120 kernel/time/tick-sched.c:1480
 __run_hrtimer kernel/time/hrtimer.c:1685 [inline]
 __hrtimer_run_queues+0x1c0/0xe40 kernel/time/hrtimer.c:1749
 hrtimer_interrupt+0x31c/0x790 kernel/time/hrtimer.c:1811
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1095 [inline]
 __sysvec_apic_timer_interrupt+0x146/0x530 arch/x86/kernel/apic/apic.c:1112
 sysvec_apic_timer_interrupt+0x8e/0xc0 arch/x86/kernel/apic/apic.c:1106
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:649
RIP: 0010:csd_lock_wait kernel/smp.c:414 [inline]
RIP: 0010:smp_call_function_many_cond+0x5c3/0x1430 kernel/smp.c:988
Code: 89 ee e8 30 ad 0a 00 85 ed 74 48 48 8b 44 24 08 49 89 c4 83 e0 07 49 c1 ec 03 48 89 c5 4d 01 f4 83 c5 03 e8 4f b0 0a 00 f3 90 <41> 0f b6 04 24 40 38 c5 7c 08 84 c0 0f 85 b5 0b 00 00 8b 43 08 31
RSP: 0000:ffffc9000af879a0 EFLAGS: 00000293
RAX: 0000000000000000 RBX: ffff8880b9b3edc0 RCX: 0000000000000000
RDX: ffff888026b45880 RSI: ffffffff817158d1 RDI: 0000000000000005
RBP: 0000000000000003 R08: 0000000000000005 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000000 R12: ffffed1017367db9
R13: 0000000000000001 R14: dffffc0000000000 R15: 0000000000000001
 on_each_cpu_cond_mask+0x56/0xa0 kernel/smp.c:1154
 __flush_tlb_multi arch/x86/include/asm/paravirt.h:87 [inline]
 flush_tlb_multi arch/x86/mm/tlb.c:924 [inline]
 flush_tlb_mm_range+0x35d/0x4c0 arch/x86/mm/tlb.c:1010
 flush_tlb_page arch/x86/include/asm/tlbflush.h:240 [inline]
 ptep_clear_flush+0x12b/0x160 mm/pgtable-generic.c:98
 wp_page_copy+0x869/0x1b60 mm/memory.c:3177
 do_wp_page+0x52c/0x1910 mm/memory.c:3479
 handle_pte_fault mm/memory.c:4929 [inline]
 __handle_mm_fault+0x1813/0x39b0 mm/memory.c:5053
 handle_mm_fault+0x1c8/0x780 mm/memory.c:5151
 do_user_addr_fault+0x475/0x1210 arch/x86/mm/fault.c:1397
 handle_page_fault arch/x86/mm/fault.c:1488 [inline]
 exc_page_fault+0x94/0x170 arch/x86/mm/fault.c:1544
 asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:570
RIP: 0033:0x7f24246346f5
Code: 5c 41 5d c3 90 48 8b 57 18 48 83 fa ff 74 22 48 81 fa e7 03 00 00 0f 87 ee 00 00 00 48 c1 e2 04 48 8d 0d ce 39 16 00 48 01 ca <c6> 02 01 48 89 42 08 48 8b 53 10 4c 8d 2d f9 b8 56 00 4c 39 ea 0f
RSP: 002b:00007fffa02d3540 EFLAGS: 00010202
RAX: 0000000000000003 RBX: 00007f242479c050 RCX: 00007f24247980c0
RDX: 00007f24247980d0 RSI: 0000000000000080 RDI: 00007f242479c050
RBP: 00007f242479bf80 R08: 00007fffa037e080 R09: 00000000000000d0
R10: 00007fffa02d3660 R11: 0000000000000246 R12: 00000000000a21f4
R13: 00007fffa02d3660 R14: 00007f242479c050 R15: 0000000000000032
 </TASK>
Code: 5c 41 5d c3 90 48 8b 57 18 48 83 fa ff 74 22 48 81 fa e7 03 00 00 0f 87 ee 00 00 00 48 c1 e2 04 48 8d 0d ce 39 16 00 48 01 ca <c6> 02 01 48 89 42 08 48 8b 53 10 4c 8d 2d f9 b8 56 00 4c 39 ea 0f
RSP: 002b:00007fffa02d3540 EFLAGS: 00010202
RAX: 0000000000000003 RBX: 00007f242479c050 RCX: 00007f24247980c0
RDX: 00007f24247980d0 RSI: 0000000000000080 RDI: 00007f242479c050
RBP: 00007f242479bf80 R08: 00007fffa037e080 R09: 00000000000000d0
R10: 00007fffa02d3660 R11: 0000000000000246 R12: 00000000000a21f4
R13: 00007fffa02d3660 R14: 00007f242479c050 R15: 0000000000000032
 </TASK>

Crashes (2):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2022/09/13 22:39 | upstream | d1221cea11fc | b884348d | .config | console log | report | | | info | [disk image] [vmlinux] | ci-upstream-kasan-gce-root | INFO: rcu detected stall in sys_socket
2022/07/31 22:01 | upstream | 334c0ef6429f | fef302b1 | .config | console log | report | | | info | | ci-upstream-kasan-gce-smack-root | INFO: rcu detected stall in sys_socket
* Struck through repros no longer work on HEAD.