loop0: detected capacity change from 0 to 16
erofs: (device loop0): mounted with root inode @ nid 36.
==================================================================
BUG: KASAN: use-after-free in z_erofs_shifted_transform+0x37c/0x580 fs/erofs/decompressor.c:349
Read of size 4096 at addr ffff88801a5ae000 by task syz-executor.0/4163

CPU: 1 PID: 4163 Comm: syz-executor.0 Not tainted 5.18.0-rc4-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x12a/0x1be lib/dump_stack.c:106
 print_address_description+0x65/0x4b0 mm/kasan/report.c:313
 print_report+0xf4/0x1e0 mm/kasan/report.c:429
 kasan_report+0xc3/0xf0 mm/kasan/report.c:491
 kasan_check_range+0x2a7/0x2e0 mm/kasan/generic.c:189
 memcpy+0x25/0x60 mm/kasan/shadow.c:65
 z_erofs_shifted_transform+0x37c/0x580 fs/erofs/decompressor.c:349
 z_erofs_decompress_pcluster+0x13fb/0x1ed0 fs/erofs/zdata.c:961
 z_erofs_decompress_queue fs/erofs/zdata.c:1045 [inline]
 z_erofs_runqueue+0x9e5/0xb60 fs/erofs/zdata.c:1413
 z_erofs_readpage+0x312/0x420 fs/erofs/zdata.c:1510
 filemap_read_folio+0xa9/0x340 mm/filemap.c:2422
 filemap_update_page+0x184/0x3d0 mm/filemap.c:2504
 filemap_get_pages+0x700/0xd80 mm/filemap.c:2616
 filemap_read+0x339/0xb20 mm/filemap.c:2679
 __kernel_read+0x44f/0x7d0 fs/read_write.c:440
 integrity_kernel_read+0xa3/0xf0 security/integrity/iint.c:199
 ima_calc_file_hash_tfm security/integrity/ima/ima_crypto.c:484 [inline]
 ima_calc_file_shash security/integrity/ima/ima_crypto.c:515 [inline]
 ima_calc_file_hash+0x702/0x16b0 security/integrity/ima/ima_crypto.c:572
 ima_collect_measurement+0x20f/0x470 security/integrity/ima/ima_api.c:252
 process_measurement+0xb4d/0x14e0 security/integrity/ima/ima_main.c:337
 ima_file_check+0xd0/0x120 security/integrity/ima/ima_main.c:517
 do_open fs/namei.c:3478 [inline]
 path_openat+0x1e76/0x2460 fs/namei.c:3609
 do_filp_open+0x23b/0x480 fs/namei.c:3636
 do_sys_openat2+0xfc/0x410 fs/open.c:1213
 do_sys_open fs/open.c:1229 [inline]
 __do_sys_open fs/open.c:1237 [inline]
 __se_sys_open fs/open.c:1233 [inline]
 __x64_sys_open+0x1eb/0x240 fs/open.c:1233
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7f1fe988c0d9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f1fea574168 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
RAX: ffffffffffffffda RBX: 00007f1fe99abf80 RCX: 00007f1fe988c0d9
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000020000100
RBP: 00007f1fe98e7ae9 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffc1915091f R14: 00007f1fea574300 R15: 0000000000022000

Allocated by task 4143:
 kasan_save_stack mm/kasan/common.c:38 [inline]
 kasan_set_track mm/kasan/common.c:45 [inline]
 set_alloc_info mm/kasan/common.c:436 [inline]
 __kasan_slab_alloc+0xa3/0xd0 mm/kasan/common.c:469
 kasan_slab_alloc include/linux/kasan.h:224 [inline]
 slab_post_alloc_hook mm/slab.h:749 [inline]
 slab_alloc_node mm/slub.c:3217 [inline]
 slab_alloc mm/slub.c:3225 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3232 [inline]
 kmem_cache_alloc+0x199/0x2f0 mm/slub.c:3242
 vm_area_dup+0x1d/0x160 kernel/fork.c:467
 dup_mmap+0x558/0xaf0 kernel/fork.c:643
 dup_mm+0x86/0x290 kernel/fork.c:1521
 copy_mm+0xea/0x160 kernel/fork.c:1573
 copy_process+0x1431/0x39f0 kernel/fork.c:2234
 kernel_clone+0x16e/0x610 kernel/fork.c:2639
 __do_sys_clone kernel/fork.c:2756 [inline]
 __se_sys_clone kernel/fork.c:2740 [inline]
 __x64_sys_clone+0x231/0x2a0 kernel/fork.c:2740
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae

Freed by task 4165:
 kasan_save_stack mm/kasan/common.c:38 [inline]
 kasan_set_track+0x3d/0x60 mm/kasan/common.c:45
 kasan_set_free_info+0x1f/0x40 mm/kasan/generic.c:370
 ____kasan_slab_free+0xd8/0x110 mm/kasan/common.c:366
 kasan_slab_free include/linux/kasan.h:200 [inline]
 slab_free_hook mm/slub.c:1728 [inline]
 slab_free_freelist_hook+0x12e/0x1a0 mm/slub.c:1754
 slab_free mm/slub.c:3510 [inline]
 kmem_cache_free+0xc7/0x270 mm/slub.c:3527
 remove_vma mm/mmap.c:189 [inline]
 exit_mmap+0x1a8/0x460 mm/mmap.c:3148
 __mmput+0xc7/0x2f0 kernel/fork.c:1183
 exec_mmap+0x44f/0x4a0 fs/exec.c:1034
 begin_new_exec+0x633/0xe90 fs/exec.c:1293
 load_elf_binary+0x820/0x24d0 fs/binfmt_elf.c:1002
 search_binary_handler fs/exec.c:1726 [inline]
 exec_binprm fs/exec.c:1767 [inline]
 bprm_execve+0x901/0x1250 fs/exec.c:1836
 do_execveat_common+0x448/0x610 fs/exec.c:1941
 do_execve fs/exec.c:2011 [inline]
 __do_sys_execve fs/exec.c:2087 [inline]
 __se_sys_execve fs/exec.c:2082 [inline]
 __x64_sys_execve+0x89/0xa0 fs/exec.c:2082
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae

The buggy address belongs to the object at ffff88801a5ae000
 which belongs to the cache vm_area_struct of size 200
The buggy address is located 0 bytes inside of
 200-byte region [ffff88801a5ae000, ffff88801a5ae0c8)

The buggy address belongs to the physical page:
page:ffffea0000696b80 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1a5ae
flags: 0xfff00000000200(slab|node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000000200 ffffea000078fc40 dead000000000004 ffff888140006a00
raw: 0000000000000000 00000000000f000f 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x12cc0(GFP_KERNEL|__GFP_NOWARN|__GFP_NORETRY), pid 3471, tgid 3471 (dhcpcd-run-hook), ts 32478826347, free_ts 10350992593
 prep_new_page mm/page_alloc.c:2441 [inline]
 get_page_from_freelist+0x72e/0x7a0 mm/page_alloc.c:4182
 __alloc_pages+0x26c/0x5f0 mm/page_alloc.c:5408
 alloc_slab_page+0x70/0xf0 mm/slub.c:1799
 allocate_slab+0x5e/0x560 mm/slub.c:1944
 new_slab mm/slub.c:2004 [inline]
 ___slab_alloc+0x3ee/0xc40 mm/slub.c:3005
 __slab_alloc mm/slub.c:3092 [inline]
 slab_alloc_node mm/slub.c:3183 [inline]
 slab_alloc mm/slub.c:3225 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3232 [inline]
 kmem_cache_alloc+0x246/0x2f0 mm/slub.c:3242
 vm_area_dup+0x1d/0x160 kernel/fork.c:467
 __split_vma+0x83/0x3e0 mm/mmap.c:2712
 __do_munmap+0x24b/0x1560 mm/mmap.c:2823
 __do_sys_brk+0x3d7/0x530 mm/mmap.c:256
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae
page last free stack trace:
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1356 [inline]
 free_pcp_prepare+0x812/0x900 mm/page_alloc.c:1406
 free_unref_page_prepare mm/page_alloc.c:3328 [inline]
 free_unref_page+0x7d/0x360 mm/page_alloc.c:3423
 kasan_depopulate_vmalloc_pte+0x66/0x80 mm/kasan/shadow.c:359
 apply_to_pte_range mm/memory.c:2547 [inline]
 apply_to_pmd_range mm/memory.c:2591 [inline]
 apply_to_pud_range mm/memory.c:2627 [inline]
 apply_to_p4d_range mm/memory.c:2663 [inline]
 __apply_to_page_range+0x6ef/0x880 mm/memory.c:2697
 kasan_release_vmalloc+0x96/0xb0 mm/kasan/shadow.c:469
 __purge_vmap_area_lazy+0x14d4/0x1620 mm/vmalloc.c:1722
 _vm_unmap_aliases+0x353/0x3c0 mm/vmalloc.c:2127
 change_page_attr_set_clr+0x1e7/0x560 arch/x86/mm/pat/set_memory.c:1743
 change_page_attr_set arch/x86/mm/pat/set_memory.c:1793 [inline]
 set_memory_nx+0xcb/0x110 arch/x86/mm/pat/set_memory.c:1941
 free_init_pages arch/x86/mm/init.c:898 [inline]
 free_kernel_image_pages arch/x86/mm/init.c:917 [inline]
 free_initmem+0x57/0xa0 arch/x86/mm/init.c:944
 kernel_init+0x28/0x1a0 init/main.c:1511
 ret_from_fork+0x1f/0x30

Memory state around the buggy address:
 ffff88801a5adf00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 ffff88801a5adf80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffff88801a5ae000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                   ^
 ffff88801a5ae080: fb fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc
 ffff88801a5ae100: fc fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================