==================================================================
BUG: KASAN: double-free in slab_free mm/slub.c:3801 [inline]
BUG: KASAN: double-free in __kmem_cache_free+0xb8/0x2f0 mm/slub.c:3814
Free of addr ffff88801b5de500 by task kswapd0/112

CPU: 2 PID: 112 Comm: kswapd0 Not tainted 6.5.0-rc5-syzkaller #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd9/0x1b0 lib/dump_stack.c:106
 print_address_description mm/kasan/report.c:364 [inline]
 print_report+0xc4/0x620 mm/kasan/report.c:475
 kasan_report_invalid_free+0xab/0xd0 mm/kasan/report.c:550
 ____kasan_slab_free+0x183/0x1b0 mm/kasan/common.c:225
 kasan_slab_free include/linux/kasan.h:162 [inline]
 slab_free_hook mm/slub.c:1792 [inline]
 slab_free_freelist_hook+0x10b/0x1e0 mm/slub.c:1818
 slab_free mm/slub.c:3801 [inline]
 __kmem_cache_free+0xb8/0x2f0 mm/slub.c:3814
 hfs_release_folio+0x431/0x570 fs/hfs/inode.c:123
 filemap_release_folio+0x143/0x1b0 mm/filemap.c:4079
 shrink_folio_list+0x2e7a/0x3d40 mm/vmscan.c:2068
 evict_folios+0x6bc/0x18f0 mm/vmscan.c:5181
 try_to_shrink_lruvec+0x769/0xb00 mm/vmscan.c:5357
 shrink_one+0x45f/0x700 mm/vmscan.c:5401
 shrink_many mm/vmscan.c:5453 [inline]
 lru_gen_shrink_node mm/vmscan.c:5570 [inline]
 shrink_node+0x20c2/0x3730 mm/vmscan.c:6510
 kswapd_shrink_node mm/vmscan.c:7315 [inline]
 balance_pgdat+0xa37/0x1b90 mm/vmscan.c:7505
 kswapd+0x5be/0xbf0 mm/vmscan.c:7765
 kthread+0x33a/0x430 kernel/kthread.c:389
 ret_from_fork+0x2c/0x70 arch/x86/kernel/process.c:145
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:304

Allocated by task 30518:
 kasan_save_stack+0x33/0x50 mm/kasan/common.c:45
 kasan_set_track+0x25/0x30 mm/kasan/common.c:52
 ____kasan_kmalloc mm/kasan/common.c:374 [inline]
 __kasan_kmalloc+0xa2/0xb0 mm/kasan/common.c:383
 kasan_kmalloc include/linux/kasan.h:196 [inline]
 __do_kmalloc_node mm/slab_common.c:985 [inline]
 __kmalloc+0x5d/0x100 mm/slab_common.c:998
 kmalloc include/linux/slab.h:586 [inline]
 kzalloc include/linux/slab.h:703 [inline]
 __hfs_bnode_create+0x108/0x850 fs/hfs/bnode.c:259
 hfs_bnode_create+0x181/0x520 fs/hfs/bnode.c:425
 hfs_bmap_alloc+0x758/0x880 fs/hfs/btree.c:291
 hfs_bnode_split+0xe5/0xdc0 fs/hfsplus/brec.c:245
 hfs_brec_insert+0x2da/0xb80 fs/hfs/brec.c:102
 hfs_cat_create+0x355/0x820 fs/hfs/catalog.c:118
 hfs_create+0x67/0xe0 fs/hfs/dir.c:202
 lookup_open.isra.0+0x1049/0x1360 fs/namei.c:3492
 open_last_lookups fs/namei.c:3560 [inline]
 path_openat+0x931/0x29c0 fs/namei.c:3790
 do_filp_open+0x1de/0x430 fs/namei.c:3820
 do_sys_openat2+0x176/0x1e0 fs/open.c:1407
 do_sys_open fs/open.c:1422 [inline]
 __do_compat_sys_openat fs/open.c:1482 [inline]
 __se_compat_sys_openat fs/open.c:1480 [inline]
 __ia32_compat_sys_openat+0x16e/0x200 fs/open.c:1480
 do_syscall_32_irqs_on arch/x86/entry/common.c:112 [inline]
 __do_fast_syscall_32+0x61/0xe0 arch/x86/entry/common.c:178
 do_fast_syscall_32+0x33/0x70 arch/x86/entry/common.c:203
 entry_SYSENTER_compat_after_hwframe+0x70/0x82

Freed by task 5182:
 kasan_save_stack+0x33/0x50 mm/kasan/common.c:45
 kasan_set_track+0x25/0x30 mm/kasan/common.c:52
 kasan_save_free_info+0x2b/0x40 mm/kasan/generic.c:522
 ____kasan_slab_free mm/kasan/common.c:236 [inline]
 ____kasan_slab_free+0x15e/0x1b0 mm/kasan/common.c:200
 kasan_slab_free include/linux/kasan.h:162 [inline]
 slab_free_hook mm/slub.c:1792 [inline]
 slab_free_freelist_hook+0x10b/0x1e0 mm/slub.c:1818
 slab_free mm/slub.c:3801 [inline]
 __kmem_cache_free+0xb8/0x2f0 mm/slub.c:3814
 hfs_btree_close+0xac/0x390 fs/hfs/btree.c:154
 hfs_mdb_put+0xbf/0x380 fs/hfs/mdb.c:360
 generic_shutdown_super+0x158/0x480 fs/super.c:499
 kill_block_super+0x64/0xb0 fs/super.c:1417
 deactivate_locked_super+0x9a/0x170 fs/super.c:330
 deactivate_super+0xde/0x100 fs/super.c:361
 cleanup_mnt+0x222/0x3d0 fs/namespace.c:1254
 task_work_run+0x14d/0x240 kernel/task_work.c:179
 resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:171 [inline]
 exit_to_user_mode_prepare+0x210/0x240 kernel/entry/common.c:204
 __syscall_exit_to_user_mode_work kernel/entry/common.c:286 [inline]
 syscall_exit_to_user_mode+0x1d/0x50 kernel/entry/common.c:297
 __do_fast_syscall_32+0x6d/0xe0 arch/x86/entry/common.c:181
 do_fast_syscall_32+0x33/0x70 arch/x86/entry/common.c:203
 entry_SYSENTER_compat_after_hwframe+0x70/0x82

The buggy address belongs to the object at ffff88801b5de500
 which belongs to the cache kmalloc-192 of size 192
The buggy address is located 0 bytes inside of
 192-byte region [ffff88801b5de500, ffff88801b5de5c0)

The buggy address belongs to the physical page:
page:ffffea00006d7780 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1b5de
anon flags: 0xfff00000000200(slab|node=0|zone=1|lastcpupid=0x7ff)
page_type: 0xffffffff()
raw: 00fff00000000200 ffff888012842a00 0000000000000000 dead000000000001
raw: 0000000000000000 0000000080100010 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x112c40(GFP_NOFS|__GFP_NOWARN|__GFP_NORETRY|__GFP_HARDWALL), pid 30104, tgid 30103 (syz-executor.2), ts 737420461380, free_ts 737293793969
 set_page_owner include/linux/page_owner.h:31 [inline]
 post_alloc_hook+0x2d2/0x350 mm/page_alloc.c:1570
 prep_new_page mm/page_alloc.c:1577 [inline]
 get_page_from_freelist+0x10a9/0x31e0 mm/page_alloc.c:3221
 __alloc_pages+0x1d0/0x4a0 mm/page_alloc.c:4477
 __alloc_pages_node include/linux/gfp.h:237 [inline]
 alloc_slab_page mm/slub.c:1864 [inline]
 allocate_slab+0xa1/0x380 mm/slub.c:2009
 new_slab mm/slub.c:2062 [inline]
 ___slab_alloc+0x8bc/0x1570 mm/slub.c:3215
 __slab_alloc.constprop.0+0x56/0xa0 mm/slub.c:3314
 __slab_alloc_node mm/slub.c:3367 [inline]
 slab_alloc_node mm/slub.c:3460 [inline]
 __kmem_cache_alloc_node+0x137/0x350 mm/slub.c:3509
 __do_kmalloc_node mm/slab_common.c:984 [inline]
 __kmalloc_node+0x4f/0x100 mm/slab_common.c:992
 kmalloc_array_node include/linux/slab.h:680 [inline]
 kcalloc_node include/linux/slab.h:685 [inline]
 memcg_alloc_slab_cgroups+0xa9/0x170 mm/memcontrol.c:2899
 memcg_slab_post_alloc_hook+0xaa/0x390 mm/slab.h:530
 slab_post_alloc_hook mm/slab.h:770 [inline]
 slab_alloc_node mm/slub.c:3470 [inline]
 slab_alloc mm/slub.c:3478 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3485 [inline]
 kmem_cache_alloc+0x1a7/0x3b0 mm/slub.c:3494
 kmem_cache_zalloc include/linux/slab.h:693 [inline]
 alloc_buffer_head+0x21/0x140 fs/buffer.c:3037
 folio_alloc_buffers+0x2ad/0x800 fs/buffer.c:940
 folio_create_empty_buffers+0x36/0x470 fs/buffer.c:1669
 folio_create_buffers+0x109/0x160 fs/buffer.c:1795
 __block_write_begin_int+0x1b3/0x1470 fs/buffer.c:2107
page last free stack trace:
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1161 [inline]
 free_unref_page_prepare+0x508/0xb90 mm/page_alloc.c:2348
 free_unref_page_list+0xe6/0xb30 mm/page_alloc.c:2489
 release_pages+0x32a/0x14e0 mm/swap.c:1042
 __folio_batch_release+0x77/0xe0 mm/swap.c:1062
 folio_batch_release include/linux/pagevec.h:83 [inline]
 truncate_inode_pages_range+0x33e/0xfb0 mm/truncate.c:372
 kill_bdev block/bdev.c:76 [inline]
 blkdev_flush_mapping+0x156/0x320 block/bdev.c:647
 blkdev_put_whole+0xb9/0xe0 block/bdev.c:678
 blkdev_put+0x40f/0x8e0 block/bdev.c:915
 deactivate_locked_super+0x9a/0x170 fs/super.c:330
 deactivate_super+0xde/0x100 fs/super.c:361
 cleanup_mnt+0x222/0x3d0 fs/namespace.c:1254
 task_work_run+0x14d/0x240 kernel/task_work.c:179
 resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:171 [inline]
 exit_to_user_mode_prepare+0x210/0x240 kernel/entry/common.c:204
 __syscall_exit_to_user_mode_work kernel/entry/common.c:286 [inline]
 syscall_exit_to_user_mode+0x1d/0x50 kernel/entry/common.c:297
 __do_fast_syscall_32+0x6d/0xe0 arch/x86/entry/common.c:181
 do_fast_syscall_32+0x33/0x70 arch/x86/entry/common.c:203

Memory state around the buggy address:
 ffff88801b5de400: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff88801b5de480: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
>ffff88801b5de500: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                   ^
 ffff88801b5de580: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
 ffff88801b5de600: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
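In short, the kmalloc-192 object allocated in __hfs_bnode_create() is freed once from hfs_btree_close() at unmount ("Freed by task 5182") and then freed again from hfs_release_folio() when kswapd reclaims the folio ("Free of addr ... by task kswapd0/112"). The module below is a minimal, self-contained sketch of that pattern, not the fs/hfs code; every identifier in it (demo_node, cached_node, page_private, kasan_dfree_*) is hypothetical. Loaded on a kernel built with CONFIG_KASAN=y, the second kfree() should produce a double-free report of the same shape as the one above.

/*
 * Minimal illustrative sketch -- NOT the fs/hfs code.  One kzalloc'd
 * object ends up reachable from two owners; each owner frees it on its
 * own teardown path, and the second kfree() is the double-free that
 * KASAN reports.  All names here are made up for illustration.
 */
#include <linux/module.h>
#include <linux/slab.h>

struct demo_node {
	int num;
	char payload[64];
};

/* Owner #1: a long-lived cache, analogous to the HFS btree node cache. */
static struct demo_node *cached_node;
/* Owner #2: a second reference, analogous to folio private data. */
static struct demo_node *page_private;

static int __init kasan_dfree_init(void)
{
	struct demo_node *node = kzalloc(sizeof(*node), GFP_KERNEL);

	if (!node)
		return -ENOMEM;

	/* Both owners end up pointing at the same slab object. */
	cached_node = node;
	page_private = node;

	/* Teardown path A (compare hfs_btree_close() at unmount). */
	kfree(cached_node);
	cached_node = NULL;

	/*
	 * Teardown path B (compare hfs_release_folio() under reclaim):
	 * the stale second pointer is freed again.  With CONFIG_KASAN=y
	 * this kfree() triggers a "KASAN: double-free" report.
	 */
	kfree(page_private);
	page_private = NULL;

	return 0;
}

static void __exit kasan_dfree_exit(void)
{
}

module_init(kasan_dfree_init);
module_exit(kasan_dfree_exit);
MODULE_LICENSE("GPL");

Built as an out-of-tree module (obj-m := kasan_dfree.o) against a KASAN-enabled kernel, insmod'ing it should print a report with the same "Allocated by"/"Freed by" structure; the real bug differs only in that the two teardown paths race across unmount and memory reclaim rather than running back to back.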