==================================================================
BUG: KASAN: slab-out-of-bounds in dtSplitPage+0x10d4/0x31d0 fs/jfs/jfs_dtree.c:-1
Read of size 1 at addr ffff888056b5d335 by task syz.5.1262/11201

CPU: 0 PID: 11201 Comm: syz.5.1262 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Call Trace:
 dump_stack_lvl+0x168/0x22e lib/dump_stack.c:106
 print_address_description mm/kasan/report.c:316 [inline]
 print_report+0xa8/0x210 mm/kasan/report.c:420
 kasan_report+0x10b/0x140 mm/kasan/report.c:524
 dtSplitPage+0x10d4/0x31d0 fs/jfs/jfs_dtree.c:-1
 dtSplitUp fs/jfs/jfs_dtree.c:1092 [inline]
 dtInsert+0xfbd/0x58a0 fs/jfs/jfs_dtree.c:871
 jfs_mkdir+0x6e5/0xa70 fs/jfs/namei.c:270
 vfs_mkdir+0x387/0x570 fs/namei.c:4106
 do_mkdirat+0x1d0/0x430 fs/namei.c:4131
 __do_sys_mkdirat fs/namei.c:4146 [inline]
 __se_sys_mkdirat fs/namei.c:4144 [inline]
 __x64_sys_mkdirat+0x85/0x90 fs/namei.c:4144
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f587938eba9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f587a1b1038 EFLAGS: 00000246 ORIG_RAX: 0000000000000102
RAX: ffffffffffffffda RBX: 00007f58795d5fa0 RCX: 00007f587938eba9
RDX: 0000000000000000 RSI: 00002000000000c0 RDI: ffffffffffffff9c
RBP: 00007f5879411e19 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f58795d6038 R14: 00007f58795d5fa0 R15: 00007ffdc2bc9e98

Allocated by task 11201:
 kasan_save_stack mm/kasan/common.c:45 [inline]
 kasan_set_track+0x4b/0x70 mm/kasan/common.c:52
 __kasan_slab_alloc+0x6b/0x80 mm/kasan/common.c:328
 kasan_slab_alloc include/linux/kasan.h:201 [inline]
 slab_post_alloc_hook+0x4b/0x480 mm/slab.h:737
 slab_alloc_node mm/slub.c:3359 [inline]
 slab_alloc mm/slub.c:3367 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3374 [inline]
 kmem_cache_alloc_lru+0x11a/0x2e0 mm/slub.c:3390
 alloc_inode_sb include/linux/fs.h:3245 [inline]
 jfs_alloc_inode+0x24/0x60 fs/jfs/super.c:105
 alloc_inode fs/inode.c:261 [inline]
 iget_locked+0x1a9/0x830 fs/inode.c:1373
 jfs_iget+0x20/0x3c0 fs/jfs/inode.c:29
 jfs_lookup+0x1c2/0x380 fs/jfs/namei.c:1467
 __lookup_slow+0x27d/0x3a0 fs/namei.c:1690
 lookup_slow+0x53/0x70 fs/namei.c:1707
 walk_component+0x2be/0x3f0 fs/namei.c:1998
 lookup_last fs/namei.c:2455 [inline]
 path_lookupat+0x169/0x440 fs/namei.c:2479
 filename_lookup+0x1f0/0x500 fs/namei.c:2508
 user_path_at_empty+0x3e/0x60 fs/namei.c:2905
 user_path_at include/linux/namei.h:57 [inline]
 __do_sys_chdir fs/open.c:514 [inline]
 __se_sys_chdir+0x91/0x280 fs/open.c:508
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2

Last potentially related work creation:
 kasan_save_stack+0x3a/0x60 mm/kasan/common.c:45
 __kasan_record_aux_stack+0xb2/0xc0 mm/kasan/generic.c:486
 call_rcu+0x154/0x980 kernel/rcu/tree.c:2849
 destroy_inode fs/inode.c:316 [inline]
 evict+0x7da/0x870 fs/inode.c:720
 jfs_umount+0x202/0x360 fs/jfs/jfs_umount.c:91
 jfs_put_super+0x88/0x190 fs/jfs/super.c:194
 generic_shutdown_super+0x130/0x340 fs/super.c:501
 kill_block_super+0x7c/0xe0 fs/super.c:1470
 deactivate_locked_super+0x93/0xf0 fs/super.c:332
 cleanup_mnt+0x463/0x4f0 fs/namespace.c:1182
 task_work_run+0x1ca/0x250 kernel/task_work.c:203
 resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
 exit_to_user_mode_loop+0xe6/0x110 kernel/entry/common.c:177
 exit_to_user_mode_prepare+0xee/0x180 kernel/entry/common.c:210
 __syscall_exit_to_user_mode_work kernel/entry/common.c:292 [inline]
 syscall_exit_to_user_mode+0x16/0x40 kernel/entry/common.c:303
 do_syscall_64+0x58/0xa0 arch/x86/entry/common.c:87
 entry_SYSCALL_64_after_hwframe+0x68/0xd2

Second to last potentially related work creation:
 kasan_save_stack+0x3a/0x60 mm/kasan/common.c:45
 __kasan_record_aux_stack+0xb2/0xc0 mm/kasan/generic.c:486
 call_rcu+0x154/0x980 kernel/rcu/tree.c:2849
 destroy_inode fs/inode.c:316 [inline]
 evict+0x7da/0x870 fs/inode.c:720
 jfs_umount+0x115/0x360 fs/jfs/jfs_umount.c:65
 jfs_put_super+0x88/0x190 fs/jfs/super.c:194
 generic_shutdown_super+0x130/0x340 fs/super.c:501
 kill_block_super+0x7c/0xe0 fs/super.c:1470
 deactivate_locked_super+0x93/0xf0 fs/super.c:332
 cleanup_mnt+0x463/0x4f0 fs/namespace.c:1182
 task_work_run+0x1ca/0x250 kernel/task_work.c:203
 resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
 exit_to_user_mode_loop+0xe6/0x110 kernel/entry/common.c:177
 exit_to_user_mode_prepare+0xee/0x180 kernel/entry/common.c:210
 __syscall_exit_to_user_mode_work kernel/entry/common.c:292 [inline]
 syscall_exit_to_user_mode+0x16/0x40 kernel/entry/common.c:303
 do_syscall_64+0x58/0xa0 arch/x86/entry/common.c:87
 entry_SYSCALL_64_after_hwframe+0x68/0xd2

The buggy address belongs to the object at ffff888056b5ca00
 which belongs to the cache jfs_ip of size 2240
The buggy address is located 117 bytes to the right of
 2240-byte region [ffff888056b5ca00, ffff888056b5d2c0)

The buggy address belongs to the physical page:
page:ffffea00015ad600 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x56b58
head:ffffea00015ad600 order:3 compound_mapcount:0 compound_pincount:0
memcg:ffff88807e425501
flags: 0xfff00000010200(slab|head|node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000010200 0000000000000000 dead000000000122 ffff88801bf57b40
raw: 0000000000000000 00000000800d000d 00000001ffffffff ffff88807e425501
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Reclaimable, gfp_mask 0x1d2050(__GFP_IO|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC|__GFP_HARDWALL|__GFP_RECLAIMABLE), pid 5308, tgid 5307 (syz.3.195), ts 156644253013, free_ts 12557689174
 set_page_owner include/linux/page_owner.h:31 [inline]
 post_alloc_hook+0x173/0x1a0 mm/page_alloc.c:2532
 prep_new_page mm/page_alloc.c:2539 [inline]
 get_page_from_freelist+0x1a26/0x1ac0 mm/page_alloc.c:4328
 __alloc_pages+0x1df/0x4e0 mm/page_alloc.c:5614
 alloc_slab_page+0x5d/0x160 mm/slub.c:1799
 allocate_slab mm/slub.c:1944 [inline]
 new_slab+0x87/0x2c0 mm/slub.c:1997
 ___slab_alloc+0xbc6/0x1230 mm/slub.c:3154
 __slab_alloc mm/slub.c:3240 [inline]
 slab_alloc_node mm/slub.c:3325 [inline]
 slab_alloc mm/slub.c:3367 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3374 [inline]
 kmem_cache_alloc_lru+0x1ae/0x2e0 mm/slub.c:3390
 alloc_inode_sb include/linux/fs.h:3245 [inline]
 jfs_alloc_inode+0x24/0x60 fs/jfs/super.c:105
 alloc_inode fs/inode.c:261 [inline]
 new_inode_pseudo+0x5f/0x1c0 fs/inode.c:1063
 new_inode+0x25/0x1c0 fs/inode.c:1091
 jfs_fill_super+0x392/0xac0 fs/jfs/super.c:544
 mount_bdev+0x287/0x3c0 fs/super.c:1443
 legacy_get_tree+0xe6/0x180 fs/fs_context.c:632
 vfs_get_tree+0x88/0x270 fs/super.c:1573
 do_new_mount+0x24a/0xa40 fs/namespace.c:3069
 do_mount fs/namespace.c:3412 [inline]
 __do_sys_mount fs/namespace.c:3620 [inline]
 __se_sys_mount+0x2d6/0x3c0 fs/namespace.c:3597
page last free stack trace:
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1459 [inline]
 free_pcp_prepare mm/page_alloc.c:1509 [inline]
 free_unref_page_prepare+0x8b4/0x9a0 mm/page_alloc.c:3384
 free_unref_page+0x2e/0x3f0 mm/page_alloc.c:3479
 free_contig_range+0x9d/0x150 mm/page_alloc.c:9574
 destroy_args+0x100/0xa31 mm/debug_vm_pgtable.c:1031
 debug_vm_pgtable+0x32a/0x37e mm/debug_vm_pgtable.c:1359
 do_one_initcall+0x214/0x7a0 init/main.c:1298
 do_initcall_level+0x137/0x1e4 init/main.c:1371
 do_initcalls+0x4b/0x8a init/main.c:1387
 kernel_init_freeable+0x3fa/0x5ac init/main.c:1626
 kernel_init+0x19/0x1b0 init/main.c:1514
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

Memory state around the buggy address:
 ffff888056b5d200: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 ffff888056b5d280: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
>ffff888056b5d300: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
                                     ^
 ffff888056b5d380: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff888056b5d400: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
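
The report carries no reproducer, but the three stack traces together outline the userspace sequence that reaches the bug: a mount of a JFS image (page-owner allocation trace, mount_bdev -> jfs_fill_super), a chdir whose path lookup allocates the jfs_ip slab object ("Allocated by task 11201": __do_sys_chdir -> jfs_lookup -> jfs_iget), and a mkdirat whose directory-entry insert walks dtInsert -> dtSplitUp -> dtSplitPage and reads 1 byte past the object (main call trace; RDI = 0xffffffffffffff9c in the register dump is AT_FDCWD). The arithmetic is consistent: 0xffff888056b5d335 - 0xffff888056b5d2c0 = 0x75 = 117 bytes past the 2240-byte region, landing in the fc (slab redzone) shadow bytes shown above. Below is a minimal sketch of that syscall sequence, not the syzkaller reproducer: the loop device, mount point, and directory names are assumptions, and a deliberately corrupted JFS image is presumed to be attached to the loop device already, since an intact dtree page would not trigger the out-of-bounds read.

/*
 * Hedged sketch of the syscall sequence implied by the traces above.
 * NOT the actual reproducer; all paths and names are hypothetical.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mount.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	/* Mount the (assumed crafted) JFS image: mount_bdev ->
	 * jfs_fill_super, as in the page-owner allocation trace. */
	if (mount("/dev/loop0", "/mnt/jfs", "jfs", 0, NULL)) {
		perror("mount");
		return 1;
	}

	/* Path lookup during chdir() allocates the jfs_ip inode object
	 * ("Allocated by task 11201": __do_sys_chdir -> jfs_lookup ->
	 * jfs_iget); "dir" is a hypothetical directory on the image. */
	if (chdir("/mnt/jfs/dir")) {
		perror("chdir");
		return 1;
	}

	/* mkdirat(AT_FDCWD, ...) matches RDI in the register dump and
	 * drives jfs_mkdir -> dtInsert -> dtSplitUp -> dtSplitPage,
	 * where KASAN flags the 1-byte read past the object. */
	if (mkdirat(AT_FDCWD, "newdir", 0777))
		perror("mkdirat");
	return 0;
}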