syzbot


KASAN: slab-use-after-free Read in hfsplus_read_wrapper (2)

Status: upstream: reported C repro on 2024/03/31 14:35
Subsystems: hfs
Reported-by: syzbot+fa7b3ab32bcb56c10961@syzkaller.appspotmail.com
First crash: 34d, last: 8d15h
Discussions (3)
Title | Replies (including bot) | Last reply
[PATCH V2] fs/hfsplus: fix uaf in hfsplus_read_wrapper | 1 (1) | 2024/04/01 06:37
[PATCH] fs/hfsplus: fix in hfsplus_read_wrapper | 1 (1) | 2024/04/01 06:16
[syzbot] [hfs?] KASAN: slab-use-after-free Read in hfsplus_read_wrapper (2) | 0 (2) | 2024/04/01 03:01
Similar bugs (1)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | KASAN: slab-use-after-free Read in hfsplus_read_wrapper [hfs] | C | inconclusive | done | 7 | 125d | 348d | 26/26 | fixed on 2024/02/15 11:44
Last patch testing requests (1)
Created | Duration | User | Patch | Repo | Result
2024/04/01 03:01 | 40m | lizhi.xu@windriver.com | patch | https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git fe46a7dd189e | OK log
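Rows like the one above come from syzbot's patch-testing flow: a developer replies to the report thread with a candidate patch attached and a `#syz test` command naming the tree and commit to apply it to. Per the syzkaller documentation, the reply looks roughly like this (tree and commit taken from the request above; exact formatting may vary):

```
#syz test: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git fe46a7dd189e
```

syzbot then builds the patched kernel, re-runs the reproducer against it, and replies with the result ("OK" here) and a console log.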

Sample crash report:
==================================================================
BUG: KASAN: slab-use-after-free in hfsplus_read_wrapper+0xf86/0x1070 fs/hfsplus/wrapper.c:226
Read of size 2 at addr ffff88802ebe4c00 by task syz-executor134/5129

CPU: 1 PID: 5129 Comm: syz-executor134 Not tainted 6.9.0-rc4-syzkaller-00329-g48cf398f15fc #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:114
 print_address_description mm/kasan/report.c:377 [inline]
 print_report+0xc3/0x620 mm/kasan/report.c:488
 kasan_report+0xd9/0x110 mm/kasan/report.c:601
 hfsplus_read_wrapper+0xf86/0x1070 fs/hfsplus/wrapper.c:226
 hfsplus_fill_super+0x352/0x1bc0 fs/hfsplus/super.c:419
 mount_bdev+0x1e6/0x2d0 fs/super.c:1658
 legacy_get_tree+0x10c/0x220 fs/fs_context.c:662
 vfs_get_tree+0x92/0x380 fs/super.c:1779
 do_new_mount fs/namespace.c:3352 [inline]
 path_mount+0x14e6/0x1f20 fs/namespace.c:3679
 do_mount fs/namespace.c:3692 [inline]
 __do_sys_mount fs/namespace.c:3898 [inline]
 __se_sys_mount fs/namespace.c:3875 [inline]
 __x64_sys_mount+0x297/0x320 fs/namespace.c:3875
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xcf/0x260 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fc4059d265a
Code: d8 64 89 02 48 c7 c0 ff ff ff ff eb a6 e8 5e 04 00 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffeb92725b8 EFLAGS: 00000286 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fc4059d265a
RDX: 0000000020000000 RSI: 0000000020000c80 RDI: 00007ffeb9272600
RBP: 0000000000000005 R08: 00007ffeb9272640 R09: 0000000000000601
R10: 0000000000200008 R11: 0000000000000286 R12: 00007ffeb9272600
R13: 00007ffeb9272640 R14: 0000000000080000 R15: 0000000000000004
 </TASK>

The buggy address belongs to the object at ffff88802ebe4c00
 which belongs to the cache kmalloc-512 of size 512
The buggy address is located 0 bytes inside of
 freed 512-byte region [ffff88802ebe4c00, ffff88802ebe4e00)

The buggy address belongs to the physical page:
page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x2ebe4
head: order:2 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0xfff80000000840(slab|head|node=0|zone=1|lastcpupid=0xfff)
page_type: 0xffffffff()
raw: 00fff80000000840 ffff888015041c80 ffffea0000a55900 dead000000000002
raw: 0000000000000000 0000000000100010 00000001ffffffff 0000000000000000
head: 00fff80000000840 ffff888015041c80 ffffea0000a55900 dead000000000002
head: 0000000000000000 0000000000100010 00000001ffffffff 0000000000000000
head: 00fff80000000002 ffffea0000baf901 ffffea0000baf948 00000000ffffffff
head: 0000000400000000 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 2, migratetype Unmovable, gfp_mask 0xd20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 4547, tgid -809417370 (udevadm), ts 4547, free_ts 50463936514
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x2d4/0x350 mm/page_alloc.c:1534
 prep_new_page mm/page_alloc.c:1541 [inline]
 get_page_from_freelist+0xa28/0x3780 mm/page_alloc.c:3317
 __alloc_pages+0x22b/0x2460 mm/page_alloc.c:4575
 __alloc_pages_node include/linux/gfp.h:238 [inline]
 alloc_pages_node include/linux/gfp.h:261 [inline]
 alloc_slab_page mm/slub.c:2175 [inline]
 allocate_slab mm/slub.c:2338 [inline]
 new_slab+0xcc/0x3a0 mm/slub.c:2391
 ___slab_alloc+0x66d/0x1790 mm/slub.c:3525
 __slab_alloc.constprop.0+0x56/0xb0 mm/slub.c:3610
 __slab_alloc_node mm/slub.c:3663 [inline]
 slab_alloc_node mm/slub.c:3835 [inline]
 kmalloc_trace+0x2fb/0x330 mm/slub.c:3992
 kmalloc include/linux/slab.h:628 [inline]
 kzalloc include/linux/slab.h:749 [inline]
 kernfs_fop_open+0x28b/0xdb0 fs/kernfs/file.c:623
 do_dentry_open+0x8dd/0x18c0 fs/open.c:955
 do_open fs/namei.c:3642 [inline]
 path_openat+0x1dfb/0x2990 fs/namei.c:3799
 do_filp_open+0x1dc/0x430 fs/namei.c:3826
 do_sys_openat2+0x17a/0x1e0 fs/open.c:1406
 do_sys_open fs/open.c:1421 [inline]
 __do_sys_openat fs/open.c:1437 [inline]
 __se_sys_openat fs/open.c:1432 [inline]
 __x64_sys_openat+0x175/0x210 fs/open.c:1432
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xcf/0x260 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
page last free pid 4573 tgid 4573 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1141 [inline]
 free_unref_page_prepare+0x527/0xb10 mm/page_alloc.c:2347
 free_unref_page+0x33/0x3c0 mm/page_alloc.c:2487
 __put_partials+0x14c/0x170 mm/slub.c:2906
 qlink_free mm/kasan/quarantine.c:163 [inline]
 qlist_free_all+0x4e/0x140 mm/kasan/quarantine.c:179
 kasan_quarantine_reduce+0x192/0x1e0 mm/kasan/quarantine.c:286
 __kasan_slab_alloc+0x69/0x90 mm/kasan/common.c:322
 kasan_slab_alloc include/linux/kasan.h:201 [inline]
 slab_post_alloc_hook mm/slub.c:3798 [inline]
 slab_alloc_node mm/slub.c:3845 [inline]
 kmem_cache_alloc+0x136/0x320 mm/slub.c:3852
 getname_flags.part.0+0x50/0x4f0 fs/namei.c:139
 getname_flags+0x9b/0xf0 include/linux/audit.h:322
 vfs_fstatat+0x9a/0x150 fs/stat.c:303
 __do_sys_newfstatat+0x98/0x120 fs/stat.c:468
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xcf/0x260 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Memory state around the buggy address:
 ffff88802ebe4b00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff88802ebe4b80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff88802ebe4c00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                   ^
 ffff88802ebe4c80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff88802ebe4d00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
hfsplus: unable to set blocksize to 1024!
hfsplus: unable to find HFS+ superblock

Crashes (4):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets (help?) Manager Title
2024/04/22 03:15 upstream 48cf398f15fc af24b050 .config strace log report syz C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-kasan-badwrites-root KASAN: slab-use-after-free Read in hfsplus_read_wrapper
2024/03/27 14:26 upstream fe46a7dd189e 454571b6 .config strace log report syz C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-kasan-badwrites-root KASAN: slab-use-after-free Read in hfsplus_read_wrapper
2024/04/09 05:28 upstream fec50db7033e f3234354 .config console log report syz C [disk image (non-bootable)] [vmlinux] [kernel image] [mounted in repro] ci-qemu-upstream KASAN: slab-use-after-free Read in hfsplus_read_wrapper
2024/04/15 08:18 upstream fe46a7dd189e c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root KASAN: slab-use-after-free Read in hfsplus_read_wrapper
* Struck through repros no longer work on HEAD.