==================================================================
BUG: KASAN: use-after-free in vmx_vcpu_load+0xfde/0x1030 arch/x86/kvm/vmx.c:3110
Read of size 8 at addr ffff88019755dea0 by task syz-executor1/13202

CPU: 1 PID: 13202 Comm: syz-executor1 Not tainted 4.19.0-rc4+ #143
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x1c4/0x2b4 lib/dump_stack.c:113
 print_address_description.cold.8+0x9/0x1ff mm/kasan/report.c:256
 kasan_report_error mm/kasan/report.c:354 [inline]
 kasan_report.cold.9+0x242/0x309 mm/kasan/report.c:412
 __asan_report_load8_noabort+0x14/0x20 mm/kasan/report.c:433
 vmx_vcpu_load+0xfde/0x1030 arch/x86/kvm/vmx.c:3110
 kvm_arch_vcpu_load+0x247/0x970 arch/x86/kvm/x86.c:3106
 kvm_sched_in+0x82/0xa0 arch/x86/kvm/../../../virt/kvm/kvm_main.c:3975
 __fire_sched_in_preempt_notifiers kernel/sched/core.c:2481 [inline]
 fire_sched_in_preempt_notifiers kernel/sched/core.c:2487 [inline]
 finish_task_switch+0x56e/0x900 kernel/sched/core.c:2679
 context_switch kernel/sched/core.c:2828 [inline]
 __schedule+0x874/0x1ed0 kernel/sched/core.c:3473
 preempt_schedule_common+0x1f/0xd0 kernel/sched/core.c:3597
 preempt_schedule+0x4d/0x60 kernel/sched/core.c:3623
 ___preempt_schedule+0x16/0x18
 __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:161 [inline]
 _raw_spin_unlock_irqrestore+0xbb/0xd0 kernel/locking/spinlock.c:184
 try_to_wake_up+0x10a/0x12f0 kernel/sched/core.c:2055
 wake_up_process kernel/sched/core.c:2123 [inline]
 wake_up_q+0xa4/0x100 kernel/sched/core.c:441
 futex_wake+0x61f/0x760 kernel/futex.c:1556
 do_futex+0x2e4/0x26d0 kernel/futex.c:3533
 __do_compat_sys_futex kernel/futex_compat.c:201 [inline]
 __se_compat_sys_futex kernel/futex_compat.c:175 [inline]
 __ia32_compat_sys_futex+0x3d9/0x5f0 kernel/futex_compat.c:175
 do_syscall_32_irqs_on arch/x86/entry/common.c:326 [inline]
 do_fast_syscall_32+0x34d/0xfb2 arch/x86/entry/common.c:397
 entry_SYSENTER_compat+0x70/0x7f arch/x86/entry/entry_64_compat.S:139
RIP: 0023:0xf7f11ca9
Code: 85 d2 74 02 89 0a 5b 5d c3 8b 04 24 c3 8b 0c 24 c3 8b 1c 24 c3 90 90 90 90 90 90 90 90 90 90 90 90 51 52 55 89 e5 0f 34 cd 80 <5d> 5a 59 c3 90 90 90 90 eb 0d 90 90 90 90 90 90 90 90 90 90 90 90
RSP: 002b:00000000f5ecb12c EFLAGS: 00000296 ORIG_RAX: 00000000000000f0
RAX: ffffffffffffffda RBX: 0000000008350184 RCX: 0000000000000081
RDX: 0000000008350180 RSI: 0000000008350178 RDI: 0000000008350184
RBP: 00000000f5ecb228 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000

Allocated by task 13197:
 save_stack+0x43/0xd0 mm/kasan/kasan.c:448
 set_track mm/kasan/kasan.c:460 [inline]
 kasan_kmalloc+0xc7/0xe0 mm/kasan/kasan.c:553
 kasan_slab_alloc+0x12/0x20 mm/kasan/kasan.c:490
 kmem_cache_alloc+0x12e/0x730 mm/slab.c:3554
 kmem_cache_zalloc include/linux/slab.h:697 [inline]
 vmx_create_vcpu+0xcf/0x25e0 arch/x86/kvm/vmx.c:10954
 kvm_arch_vcpu_create+0xe5/0x220 arch/x86/kvm/x86.c:8452
 kvm_vm_ioctl_create_vcpu arch/x86/kvm/../../../virt/kvm/kvm_main.c:2476 [inline]
 kvm_vm_ioctl+0x470/0x1d40 arch/x86/kvm/../../../virt/kvm/kvm_main.c:2977
 kvm_vm_compat_ioctl+0x143/0x430 arch/x86/kvm/../../../virt/kvm/kvm_main.c:3170
 __do_compat_sys_ioctl fs/compat_ioctl.c:1419 [inline]
 __se_compat_sys_ioctl fs/compat_ioctl.c:1365 [inline]
 __ia32_compat_sys_ioctl+0x20e/0x630 fs/compat_ioctl.c:1365
 do_syscall_32_irqs_on arch/x86/entry/common.c:326 [inline]
 do_fast_syscall_32+0x34d/0xfb2 arch/x86/entry/common.c:397
 entry_SYSENTER_compat+0x70/0x7f arch/x86/entry/entry_64_compat.S:139

Freed by task 13193:
 save_stack+0x43/0xd0 mm/kasan/kasan.c:448
 set_track mm/kasan/kasan.c:460 [inline]
 __kasan_slab_free+0x102/0x150 mm/kasan/kasan.c:521
 kasan_slab_free+0xe/0x10 mm/kasan/kasan.c:528
 __cache_free mm/slab.c:3498 [inline]
 kmem_cache_free+0x83/0x290 mm/slab.c:3756
 vmx_free_vcpu+0x26b/0x300 arch/x86/kvm/vmx.c:10948
 kvm_arch_vcpu_free arch/x86/kvm/x86.c:8438 [inline]
 kvm_free_vcpus arch/x86/kvm/x86.c:8888 [inline]
 kvm_arch_destroy_vm+0x365/0x7c0 arch/x86/kvm/x86.c:8985
 kvm_destroy_vm arch/x86/kvm/../../../virt/kvm/kvm_main.c:752 [inline]
 kvm_put_kvm+0x6c8/0xff0 arch/x86/kvm/../../../virt/kvm/kvm_main.c:773
 kvm_vcpu_release+0x7b/0xa0 arch/x86/kvm/../../../virt/kvm/kvm_main.c:2407
 __fput+0x385/0xa30 fs/file_table.c:278
 ____fput+0x15/0x20 fs/file_table.c:309
 task_work_run+0x1e8/0x2a0 kernel/task_work.c:113
 tracehook_notify_resume include/linux/tracehook.h:193 [inline]
 exit_to_usermode_loop+0x318/0x380 arch/x86/entry/common.c:166
 prepare_exit_to_usermode arch/x86/entry/common.c:197 [inline]
 syscall_return_slowpath arch/x86/entry/common.c:268 [inline]
 do_syscall_32_irqs_on arch/x86/entry/common.c:341 [inline]
 do_fast_syscall_32+0xcd5/0xfb2 arch/x86/entry/common.c:397
 entry_SYSENTER_compat+0x70/0x7f arch/x86/entry/entry_64_compat.S:139

The buggy address belongs to the object at ffff8801975586c0
 which belongs to the cache kvm_vcpu(81:syz1) of size 23872
The buggy address is located 22496 bytes inside of
 23872-byte region [ffff8801975586c0, ffff88019755e400)
The buggy address belongs to the page:
page:ffffea00065d5600 count:1 mapcount:0 mapping:ffff8801c4459dc0 index:0x0 compound_mapcount: 0
flags: 0x2fffc0000008100(slab|head)
raw: 02fffc0000008100 ffffea0006607208 ffffea00066a3408 ffff8801c4459dc0
raw: 0000000000000000 ffff8801975586c0 0000000100000001 ffff880196f7a5c0
page dumped because: kasan: bad access detected
page->mem_cgroup:ffff880196f7a5c0

Memory state around the buggy address:
 ffff88019755dd80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff88019755de00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff88019755de80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                               ^
 ffff88019755df00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff88019755df80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================