==================================================================
BUG: KASAN: slab-use-after-free in __bpf_trace_run kernel/trace/bpf_trace.c:2382 [inline]
BUG: KASAN: slab-use-after-free in bpf_trace_run2+0xfa/0x530 kernel/trace/bpf_trace.c:2437
Read of size 8 at addr ffff88802d88c618 by task kworker/u8:5/140

CPU: 1 PID: 140 Comm: kworker/u8:5 Not tainted 6.8.0-syzkaller-05243-g14bb1e8c8d4a #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Workqueue: events_unbound bpf_map_free_deferred
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e7/0x2e0 lib/dump_stack.c:106
 print_address_description mm/kasan/report.c:377 [inline]
 print_report+0x169/0x550 mm/kasan/report.c:488
 kasan_report+0x143/0x180 mm/kasan/report.c:601
 __bpf_trace_run kernel/trace/bpf_trace.c:2382 [inline]
 bpf_trace_run2+0xfa/0x530 kernel/trace/bpf_trace.c:2437
 __traceiter_kfree+0x2b/0x50 include/trace/events/kmem.h:94
 trace_kfree include/trace/events/kmem.h:94 [inline]
 kfree+0x291/0x380 mm/slub.c:4396
 sock_map_free+0x3a1/0x3e0 net/core/sock_map.c:361
 bpf_map_free_deferred+0xe6/0x110 kernel/bpf/syscall.c:734
 process_one_work kernel/workqueue.c:3254 [inline]
 process_scheduled_works+0xa00/0x1770 kernel/workqueue.c:3335
 worker_thread+0x86d/0xd70 kernel/workqueue.c:3416
 kthread+0x2f0/0x390 kernel/kthread.c:388
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243

Allocated by task 6556:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 poison_kmalloc_redzone mm/kasan/common.c:370 [inline]
 __kasan_kmalloc+0x98/0xb0 mm/kasan/common.c:387
 kasan_kmalloc include/linux/kasan.h:211 [inline]
 kmalloc_trace+0x1d9/0x360 mm/slub.c:4012
 kmalloc include/linux/slab.h:590 [inline]
 kzalloc include/linux/slab.h:711 [inline]
 bpf_raw_tp_link_attach+0x2a0/0x6e0 kernel/bpf/syscall.c:3816
 bpf_raw_tracepoint_open+0x1c2/0x240 kernel/bpf/syscall.c:3863
 __sys_bpf+0x3c0/0x810 kernel/bpf/syscall.c:5673
 __do_sys_bpf kernel/bpf/syscall.c:5738 [inline]
 __se_sys_bpf kernel/bpf/syscall.c:5736 [inline]
 __x64_sys_bpf+0x7c/0x90 kernel/bpf/syscall.c:5736
 do_syscall_64+0xfb/0x240
 entry_SYSCALL_64_after_hwframe+0x6d/0x75

Freed by task 6556:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 kasan_save_free_info+0x40/0x50 mm/kasan/generic.c:589
 poison_slab_object+0xa6/0xe0 mm/kasan/common.c:240
 __kasan_slab_free+0x37/0x60 mm/kasan/common.c:256
 kasan_slab_free include/linux/kasan.h:184 [inline]
 slab_free_hook mm/slub.c:2121 [inline]
 slab_free mm/slub.c:4299 [inline]
 kfree+0x14a/0x380 mm/slub.c:4409
 bpf_link_release+0x3b/0x50 kernel/bpf/syscall.c:3071
 __fput+0x429/0x8a0 fs/file_table.c:423
 __do_sys_close fs/open.c:1557 [inline]
 __se_sys_close fs/open.c:1542 [inline]
 __x64_sys_close+0x7f/0x110 fs/open.c:1542
 do_syscall_64+0xfb/0x240
 entry_SYSCALL_64_after_hwframe+0x6d/0x75

The buggy address belongs to the object at ffff88802d88c600
 which belongs to the cache kmalloc-128 of size 128
The buggy address is located 24 bytes inside of
 freed 128-byte region [ffff88802d88c600, ffff88802d88c680)

The buggy address belongs to the physical page:
page:ffffea0000b62300 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x2d88c
flags: 0xfff00000000800(slab|node=0|zone=1|lastcpupid=0x7ff)
page_type: 0xffffffff()
raw: 00fff00000000800 ffff888014c418c0 dead000000000122 0000000000000000
raw: 0000000000000000 0000000080100010 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x12cc0(GFP_KERNEL|__GFP_NOWARN|__GFP_NORETRY), pid 6554, tgid 6554 (syz-executor212), ts 71336535233, free_ts 71336420748
 set_page_owner include/linux/page_owner.h:31 [inline]
 post_alloc_hook+0x1ea/0x210 mm/page_alloc.c:1533
 prep_new_page mm/page_alloc.c:1540 [inline]
 get_page_from_freelist+0x33ea/0x3580 mm/page_alloc.c:3311
 __alloc_pages+0x256/0x680 mm/page_alloc.c:4569
 __alloc_pages_node include/linux/gfp.h:238 [inline]
 alloc_pages_node include/linux/gfp.h:261 [inline]
 alloc_slab_page+0x5f/0x160 mm/slub.c:2190
 allocate_slab mm/slub.c:2354 [inline]
 new_slab+0x84/0x2f0 mm/slub.c:2407
 ___slab_alloc+0xd1b/0x13e0 mm/slub.c:3540
 __slab_alloc mm/slub.c:3625 [inline]
 __slab_alloc_node mm/slub.c:3678 [inline]
 slab_alloc_node mm/slub.c:3850 [inline]
 __do_kmalloc_node mm/slub.c:3980 [inline]
 __kmalloc_node+0x2d9/0x4e0 mm/slub.c:3988
 kmalloc_node include/linux/slab.h:610 [inline]
 kvmalloc_node+0x72/0x190 mm/util.c:634
 kvmalloc include/linux/slab.h:728 [inline]
 bpf_jit_binary_pack_alloc+0x167/0x340 kernel/bpf/core.c:1154
 bpf_int_jit_compile+0x723/0x15e0 arch/x86/net/bpf_jit_comp.c:3281
 bpf_prog_select_runtime+0x93e/0xc90 kernel/bpf/core.c:2407
 bpf_prog_load+0x16c6/0x20f0 kernel/bpf/syscall.c:2899
 __sys_bpf+0x4ee/0x810 kernel/bpf/syscall.c:5631
 __do_sys_bpf kernel/bpf/syscall.c:5738 [inline]
 __se_sys_bpf kernel/bpf/syscall.c:5736 [inline]
 __x64_sys_bpf+0x7c/0x90 kernel/bpf/syscall.c:5736
 do_syscall_64+0xfb/0x240
 entry_SYSCALL_64_after_hwframe+0x6d/0x75
page last free pid 6554 tgid 6554 stack trace:
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1140 [inline]
 free_unref_page_prepare+0x968/0xa90 mm/page_alloc.c:2346
 free_unref_page+0x37/0x3f0 mm/page_alloc.c:2486
 vfree+0x186/0x2e0 mm/vmalloc.c:2914
 bpf_check+0x8089/0x190c0 kernel/bpf/verifier.c:21387
 bpf_prog_load+0x1667/0x20f0 kernel/bpf/syscall.c:2895
 __sys_bpf+0x4ee/0x810 kernel/bpf/syscall.c:5631
 __do_sys_bpf kernel/bpf/syscall.c:5738 [inline]
 __se_sys_bpf kernel/bpf/syscall.c:5736 [inline]
 __x64_sys_bpf+0x7c/0x90 kernel/bpf/syscall.c:5736
 do_syscall_64+0xfb/0x240
 entry_SYSCALL_64_after_hwframe+0x6d/0x75

Memory state around the buggy address:
 ffff88802d88c500: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff88802d88c580: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff88802d88c600: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                            ^
 ffff88802d88c680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff88802d88c700: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================