==================================================================
BUG: KASAN: slab-use-after-free in ucma_create_uevent+0xadb/0xb30 drivers/infiniband/core/ucma.c:275
Read of size 8 at addr ffff8880266f4d10 by task kworker/u32:2/46

CPU: 0 UID: 0 PID: 46 Comm: kworker/u32:2 Not tainted 6.16.0-rc7-syzkaller-g2942242dde89 #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Workqueue: rdma_cm cma_iboe_join_work_handler
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:378 [inline]
 print_report+0xcd/0x630 mm/kasan/report.c:482
 kasan_report+0xe0/0x110 mm/kasan/report.c:595
 ucma_create_uevent+0xadb/0xb30 drivers/infiniband/core/ucma.c:275
 ucma_event_handler+0x102/0x940 drivers/infiniband/core/ucma.c:351
 cma_cm_event_handler+0x97/0x300 drivers/infiniband/core/cma.c:2173
 cma_iboe_join_work_handler+0xca/0x170 drivers/infiniband/core/cma.c:3008
 process_one_work+0x9cc/0x1b70 kernel/workqueue.c:3238
 process_scheduled_works kernel/workqueue.c:3321 [inline]
 worker_thread+0x6c8/0xf10 kernel/workqueue.c:3402
 kthread+0x3c2/0x780 kernel/kthread.c:464
 ret_from_fork+0x5d4/0x6f0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

Allocated by task 6549:
 kasan_save_stack+0x33/0x60 mm/kasan/common.c:47
 kasan_save_track+0x14/0x30 mm/kasan/common.c:68
 poison_kmalloc_redzone mm/kasan/common.c:377 [inline]
 __kasan_kmalloc+0xaa/0xb0 mm/kasan/common.c:394
 kmalloc_noprof include/linux/slab.h:905 [inline]
 kzalloc_noprof include/linux/slab.h:1039 [inline]
 ucma_process_join+0x237/0xa30 drivers/infiniband/core/ucma.c:1465
 ucma_join_multicast+0xe8/0x160 drivers/infiniband/core/ucma.c:1557
 ucma_write+0x1fb/0x330 drivers/infiniband/core/ucma.c:1738
 vfs_write+0x2a0/0x1150 fs/read_write.c:684
 ksys_write+0x1f8/0x250 fs/read_write.c:738
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0x4c0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Freed by task 6549:
 kasan_save_stack+0x33/0x60 mm/kasan/common.c:47
 kasan_save_track+0x14/0x30 mm/kasan/common.c:68
 kasan_save_free_info+0x3b/0x60 mm/kasan/generic.c:576
 poison_slab_object mm/kasan/common.c:247 [inline]
 __kasan_slab_free+0x51/0x70 mm/kasan/common.c:264
 kasan_slab_free include/linux/kasan.h:233 [inline]
 slab_free_hook mm/slub.c:2381 [inline]
 slab_free mm/slub.c:4643 [inline]
 kfree+0x2b4/0x4d0 mm/slub.c:4842
 ucma_process_join+0x3b9/0xa30 drivers/infiniband/core/ucma.c:1516
 ucma_join_multicast+0xe8/0x160 drivers/infiniband/core/ucma.c:1557
 ucma_write+0x1fb/0x330 drivers/infiniband/core/ucma.c:1738
 vfs_write+0x2a0/0x1150 fs/read_write.c:684
 ksys_write+0x1f8/0x250 fs/read_write.c:738
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0x4c0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

The buggy address belongs to the object at ffff8880266f4d00
 which belongs to the cache kmalloc-192 of size 192
The buggy address is located 16 bytes inside of
 freed 192-byte region [ffff8880266f4d00, ffff8880266f4dc0)

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x266f4
flags: 0xfff00000000000(node=0|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 00fff00000000000 ffff88801b8423c0 dead000000000100 dead000000000122
raw: 0000000000000000 0000000080100010 00000000f5000000 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x52cc0(GFP_KERNEL|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP), pid 53, tgid 53 (kworker/2:1), ts 8366918434, free_ts 7796621020
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x1c0/0x230 mm/page_alloc.c:1704
 prep_new_page mm/page_alloc.c:1712 [inline]
 get_page_from_freelist+0x1321/0x3890 mm/page_alloc.c:3669
 __alloc_frozen_pages_noprof+0x261/0x23f0 mm/page_alloc.c:4959
 alloc_pages_mpol+0x1fb/0x550 mm/mempolicy.c:2419
 alloc_slab_page mm/slub.c:2451 [inline]
 allocate_slab mm/slub.c:2619 [inline]
 new_slab+0x23b/0x330 mm/slub.c:2673
 ___slab_alloc+0xd9c/0x1940 mm/slub.c:3859
 __slab_alloc.constprop.0+0x56/0xb0 mm/slub.c:3949
 __slab_alloc_node mm/slub.c:4024 [inline]
 slab_alloc_node mm/slub.c:4185 [inline]
 __do_kmalloc_node mm/slub.c:4327 [inline]
 __kmalloc_noprof+0x2f2/0x510 mm/slub.c:4340
 kmalloc_noprof include/linux/slab.h:909 [inline]
 virtio_gpu_array_alloc+0x21/0xb0 drivers/gpu/drm/virtio/virtgpu_gem.c:170
 virtio_gpu_update_dumb_bo drivers/gpu/drm/virtio/virtgpu_plane.c:170 [inline]
 virtio_gpu_primary_plane_update+0xd43/0x1540 drivers/gpu/drm/virtio/virtgpu_plane.c:264
 drm_atomic_helper_commit_planes+0x957/0x1010 drivers/gpu/drm/drm_atomic_helper.c:2838
 drm_atomic_helper_commit_tail+0x69/0xf0 drivers/gpu/drm/drm_atomic_helper.c:1788
 commit_tail+0x35b/0x400 drivers/gpu/drm/drm_atomic_helper.c:1873
 drm_atomic_helper_commit+0x2fd/0x380 drivers/gpu/drm/drm_atomic_helper.c:2111
 drm_atomic_commit+0x231/0x300 drivers/gpu/drm/drm_atomic.c:1577
 drm_atomic_helper_dirtyfb+0x5fd/0x780 drivers/gpu/drm/drm_damage_helper.c:181
page last free pid 53 tgid 53 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1248 [inline]
 __free_frozen_pages+0x7fe/0x1180 mm/page_alloc.c:2706
 vfree+0x1fd/0xb50 mm/vmalloc.c:3434
 delayed_vfree_work+0x56/0x70 mm/vmalloc.c:3353
 process_one_work+0x9cc/0x1b70 kernel/workqueue.c:3238
 process_scheduled_works kernel/workqueue.c:3321 [inline]
 worker_thread+0x6c8/0xf10 kernel/workqueue.c:3402
 kthread+0x3c2/0x780 kernel/kthread.c:464
 ret_from_fork+0x5d4/0x6f0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

Memory state around the buggy address:
 ffff8880266f4c00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8880266f4c80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
>ffff8880266f4d00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                         ^
 ffff8880266f4d80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
 ffff8880266f4e00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
==================================================================
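Reading the three traces together: the kmalloc-192 object is allocated by ucma_process_join() (drivers/infiniband/core/ucma.c:1465, a kzalloc in the join path, most likely the per-join multicast bookkeeping structure), freed by the same task on an error path later in that function (ucma.c:1516), and then read 16 bytes into the freed region by ucma_create_uevent() when the rdma_cm workqueue delivers the join event via cma_iboe_join_work_handler() -> cma_cm_event_handler() -> ucma_event_handler(). In other words, a deferred event handler still holds a pointer to an object that the synchronous error path already kfree()d. The fragment below is a minimal sketch of that bug shape only, not the actual ucma/cma code; all names in it (join_ctx, join_work_fn, finish_join, start_join) are hypothetical.

/*
 * Minimal sketch of the bug shape implied by the traces above; this is
 * NOT the actual ucma/cma code and every name here is hypothetical.
 *
 * Pattern: the join path allocates a context, queues deferred work that
 * keeps a pointer to it, and then frees the context on a local error
 * path without cancelling that work.  The work later runs and reads the
 * freed object, which is what KASAN flags above.
 */
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct join_ctx {
	struct work_struct work;
	u64 cookie;		/* field read later by the deferred handler */
};

static void join_work_fn(struct work_struct *w)
{
	struct join_ctx *ctx = container_of(w, struct join_ctx, work);

	/* If start_join() took its error path, ctx is already freed. */
	pr_info("join event, cookie=%llu\n", ctx->cookie);
}

/* Stand-in for the part of the join that can fail after work is queued. */
static int finish_join(struct join_ctx *ctx)
{
	return -EINVAL;
}

static int start_join(void)
{
	struct join_ctx *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
	int ret;

	if (!ctx)
		return -ENOMEM;

	INIT_WORK(&ctx->work, join_work_fn);
	queue_work(system_unbound_wq, &ctx->work);	/* handler may now run at any time */

	ret = finish_join(ctx);
	if (ret) {
		/*
		 * Bug pattern: freeing without cancel_work_sync(), or without
		 * otherwise making sure the handler can no longer reach ctx,
		 * leaves the queued work with a dangling pointer, the same
		 * shape as the object freed at ucma.c:1516 above.
		 */
		kfree(ctx);
		return ret;
	}
	return 0;
}

The usual remedies for this shape are to cancel or flush the pending work before the kfree(), or to hand ownership of the object to the event path so that exactly one side frees it; which of those fits here depends on how ucma_process_join() passes the object to the cma event machinery, which the report alone does not show.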