==================================================================
BUG: KASAN: stack-out-of-bounds in profile_pc+0xd3/0x150 arch/x86/kernel/time.c:42
Read of size 8 at addr ffffc9000398ef60 by task syz-executor427/5056
CPU: 1 PID: 5056 Comm: syz-executor427 Not tainted 6.7.0-rc5-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/10/2023
Call Trace:
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
print_address_description mm/kasan/report.c:364 [inline]
print_report+0x163/0x540 mm/kasan/report.c:475
kasan_report+0x142/0x170 mm/kasan/report.c:588
profile_pc+0xd3/0x150 arch/x86/kernel/time.c:42
profile_tick+0xd8/0x130 kernel/profile.c:339
tick_sched_handle kernel/time/tick-sched.c:256 [inline]
tick_nohz_highres_handler+0x383/0x550 kernel/time/tick-sched.c:1516
__run_hrtimer kernel/time/hrtimer.c:1688 [inline]
__hrtimer_run_queues+0x562/0xd20 kernel/time/hrtimer.c:1752
hrtimer_interrupt+0x396/0x980 kernel/time/hrtimer.c:1814
local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1065 [inline]
__sysvec_apic_timer_interrupt+0x104/0x3a0 arch/x86/kernel/apic/apic.c:1082
sysvec_apic_timer_interrupt+0x92/0xb0 arch/x86/kernel/apic/apic.c:1076
asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:649
RIP: 0010:__raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:152 [inline]
RIP: 0010:_raw_spin_unlock_irqrestore+0xd8/0x140 kernel/locking/spinlock.c:194
Code: 9c 8f 44 24 20 42 80 3c 23 00 74 08 4c 89 f7 e8 ee 77 c8 f6 f6 44 24 21 02 75 4e 41 f7 c7 00 02 00 00 74 01 fb bf 01 00 00 00 <e8> e3 a0 3e f6 65 8b 05 04 ac e1 74 85 c0 74 3f 48 c7 04 24 0e 36
RSP: 0018:ffffc9000398ef60 EFLAGS: 00000206
RAX: 51173545a6652600 RBX: 1ffff92000731df0 RCX: ffffffff816d97aa
RDX: dffffc0000000000 RSI: ffffffff8b6aaa40 RDI: 0000000000000001
RBP: ffffc9000398eff0 R08: ffffffff90dd9367 R09: 1ffffffff21bb26c
R10: dffffc0000000000 R11: fffffbfff21bb26d R12: dffffc0000000000
R13: 1ffff92000731dec R14: ffffc9000398ef80 R15: 0000000000000246
spin_unlock_irqrestore include/linux/spinlock.h:406 [inline]
rmqueue_bulk mm/page_alloc.c:2155 [inline]
__rmqueue_pcplist+0x20a5/0x2550 mm/page_alloc.c:2821
rmqueue_pcplist mm/page_alloc.c:2863 [inline]
rmqueue mm/page_alloc.c:2900 [inline]
get_page_from_freelist+0x896/0x3570 mm/page_alloc.c:3309
__alloc_pages+0x255/0x680 mm/page_alloc.c:4568
alloc_pages_mpol+0x3de/0x640 mm/mempolicy.c:2133
__get_free_pages+0xc/0x30 mm/page_alloc.c:4615
kasan_populate_vmalloc_pte+0x34/0xe0 mm/kasan/shadow.c:323
apply_to_pte_range mm/memory.c:2599 [inline]
apply_to_pmd_range mm/memory.c:2643 [inline]
apply_to_pud_range mm/memory.c:2679 [inline]
apply_to_p4d_range mm/memory.c:2715 [inline]
__apply_to_page_range+0x8e8/0xe30 mm/memory.c:2749
alloc_vmap_area+0x1ad5/0x1c10 mm/vmalloc.c:1641
__get_vm_area_node+0x16e/0x370 mm/vmalloc.c:2595
__vmalloc_node_range+0x3df/0x14a0 mm/vmalloc.c:3280
__vmalloc_node mm/vmalloc.c:3385 [inline]
vzalloc+0x79/0x90 mm/vmalloc.c:3458
profile_init+0xee/0x130 kernel/profile.c:131
profiling_store+0x5e/0xc0 kernel/ksysfs.c:104
kernfs_fop_write_iter+0x3b3/0x510 fs/kernfs/file.c:334
call_write_iter include/linux/fs.h:2020 [inline]
new_sync_write fs/read_write.c:491 [inline]
vfs_write+0x792/0xb20 fs/read_write.c:584
ksys_write+0x1a0/0x2c0 fs/read_write.c:637
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0x45/0x110 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x63/0x6b
RIP: 0033:0x7f3e4eb08569
Code: 48 83 c4 28 c3 e8 37 17 00 00 0f 1f 80 00 00 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffd230efd58 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 00007f3e4eb51004 RCX: 00007f3e4eb08569
RDX: 0000000000000048 RSI: 0000000020002480 RDI: 0000000000000003
RBP: 00007f3e4eb7b610 R08: 00007ffd230efae4 R09: 00007ffd230eff28
R10: 0000000000000014 R11: 0000000000000246 R12: 0000000000000001
R13: 00007ffd230eff18 R14: 0000000000000001 R15: 0000000000000001
The buggy address belongs to stack of task syz-executor427/5056
and is located at offset 0 in frame:
_raw_spin_unlock_irqrestore+0x0/0x140 kernel/locking/spinlock.c:187
This frame has 1 object:
[32, 40) 'flags.i.i.i.i'
The buggy address belongs to the virtual mapping at
[ffffc90003988000, ffffc90003991000) created by:
copy_process+0x5d1/0x3fb0 kernel/fork.c:2332
The buggy address belongs to the physical page:
page:ffffea0001db0c00 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x76c30
flags: 0xfff00000000000(node=0|zone=1|lastcpupid=0x7ff)
page_type: 0xffffffff()
raw: 00fff00000000000 0000000000000000 dead000000000122 0000000000000000
raw: 0000000000000000 0000000000000000 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x2dc2(GFP_KERNEL|__GFP_HIGHMEM|__GFP_NOWARN|__GFP_ZERO), pid 5048, tgid 5048 (sshd), ts 66200551835, free_ts 66194494581
set_page_owner include/linux/page_owner.h:31 [inline]
post_alloc_hook+0x1e6/0x210 mm/page_alloc.c:1537
prep_new_page mm/page_alloc.c:1544 [inline]
get_page_from_freelist+0x33ea/0x3570 mm/page_alloc.c:3312
__alloc_pages+0x255/0x680 mm/page_alloc.c:4568
alloc_pages_mpol+0x3de/0x640 mm/mempolicy.c:2133
vm_area_alloc_pages mm/vmalloc.c:3063 [inline]
__vmalloc_area_node mm/vmalloc.c:3139 [inline]
__vmalloc_node_range+0x9a3/0x14a0 mm/vmalloc.c:3320
alloc_thread_stack_node kernel/fork.c:309 [inline]
dup_task_struct+0x3e5/0x7d0 kernel/fork.c:1118
copy_process+0x5d1/0x3fb0 kernel/fork.c:2332
kernel_clone+0x222/0x840 kernel/fork.c:2907
__do_sys_clone kernel/fork.c:3050 [inline]
__se_sys_clone kernel/fork.c:3034 [inline]
__x64_sys_clone+0x258/0x2a0 kernel/fork.c:3034
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0x45/0x110 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x63/0x6b
page last free stack trace:
reset_page_owner include/linux/page_owner.h:24 [inline]
free_pages_prepare mm/page_alloc.c:1137 [inline]
free_unref_page_prepare+0x931/0xa60 mm/page_alloc.c:2347
free_unref_page_list+0x5a0/0x840 mm/page_alloc.c:2533
release_pages+0x2117/0x2400 mm/swap.c:1042
tlb_batch_pages_flush mm/mmu_gather.c:98 [inline]
tlb_flush_mmu_free mm/mmu_gather.c:293 [inline]
tlb_flush_mmu+0x34c/0x4e0 mm/mmu_gather.c:300
tlb_finish_mmu+0xd4/0x1f0 mm/mmu_gather.c:392
exit_mmap+0x4d3/0xc60 mm/mmap.c:3321
__mmput+0x115/0x3c0 kernel/fork.c:1349
exit_mm+0x21f/0x300 kernel/exit.c:567
do_exit+0x9b7/0x2750 kernel/exit.c:858
do_group_exit+0x206/0x2c0 kernel/exit.c:1021
__do_sys_exit_group kernel/exit.c:1032 [inline]
__se_sys_exit_group kernel/exit.c:1030 [inline]
__x64_sys_exit_group+0x3f/0x40 kernel/exit.c:1030
do_syscall_x64 arch/x86/entry/common.c:52 [inline]
do_syscall_64+0x45/0x110 arch/x86/entry/common.c:83
entry_SYSCALL_64_after_hwframe+0x63/0x6b
Memory state around the buggy address:
ffffc9000398ee00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ffffc9000398ee80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffffc9000398ef00: 00 00 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1
^
ffffc9000398ef80: 00 f3 f3 f3 00 00 00 00 00 00 00 00 00 00 00 00
ffffc9000398f000: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
==================================================================
----------------
Code disassembly (best guess):
0: 9c pushf
1: 8f 44 24 20 pop 0x20(%rsp)
5: 42 80 3c 23 00 cmpb $0x0,(%rbx,%r12,1)
a: 74 08 je 0x14
c: 4c 89 f7 mov %r14,%rdi
f: e8 ee 77 c8 f6 call 0xf6c87802
14: f6 44 24 21 02 testb $0x2,0x21(%rsp)
19: 75 4e jne 0x69
1b: 41 f7 c7 00 02 00 00 test $0x200,%r15d
22: 74 01 je 0x25
24: fb sti
25: bf 01 00 00 00 mov $0x1,%edi
* 2a: e8 e3 a0 3e f6 call 0xf63ea112 <-- trapping instruction
2f: 65 8b 05 04 ac e1 74 mov %gs:0x74e1ac04(%rip),%eax # 0x74e1ac3a
36: 85 c0 test %eax,%eax
38: 74 3f je 0x79
3a: 48 rex.W
3b: c7 .byte 0xc7
3c: 04 24 add $0x24,%al
3e: 0e (bad)
3f: 36 ss
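----------------
For context on the faulting line: the read of size 8 at arch/x86/kernel/time.c:42 is profile_pc() peeking at the interrupted task's stack. The timer tick landed inside _raw_spin_unlock_irqrestore() (a lock function), so profile_pc() tries to guess the caller's PC by dereferencing regs->sp, which here points at the KASAN left redzone at offset 0 of that frame, before its only object 'flags'. The following is a minimal, approximate sketch of the logic involved, not a verbatim copy of the 6.7-rc5 source; helper names and CONFIG_FRAME_POINTER handling may differ:

/* Approximate sketch of profile_pc() from arch/x86/kernel/time.c;
 * not the exact 6.7-rc5 source. */
unsigned long profile_pc(struct pt_regs *regs)
{
	unsigned long pc = instruction_pointer(regs);

	/* If the tick hit inside a spinlock helper, the sampled pc is not
	 * useful for profiling, so try to report the caller instead. */
	if (!user_mode(regs) && in_lock_functions(pc)) {
		unsigned long *sp = (unsigned long *)regs->sp;

		/*
		 * The return address is either directly at the stack pointer
		 * or above a saved flags word; EFLAGS has bits 22-31 clear,
		 * kernel addresses do not. This blind dereference of sp[0]
		 * (and possibly sp[1]) is the size-8 read that KASAN flags as
		 * stack-out-of-bounds when sp sits in the redzone of the
		 * _raw_spin_unlock_irqrestore frame, as in the report above.
		 */
		if (sp[0] >> 22)
			return sp[0];
		if (sp[1] >> 22)
			return sp[1];
	}
	return pc;
}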