syzbot

KASAN: use-after-free Read in hpfs_count_dnodes

Status: upstream: reported C repro on 2025/10/27 21:17
Subsystems: fs
Reported-by: syzbot+7d1563afac6cb196a444@syzkaller.appspotmail.com
First crash: 3d11h, last: 2d04h
Cause bisection: failed (error log, bisect log)
  
Discussions (1)
Title                                                          | Replies (including bot) | Last reply
[syzbot] [fs?] KASAN: use-after-free Read in hpfs_count_dnodes | 1 (3)                   | 2025/10/28 16:13
Last patch testing requests (1)
Created          | Duration | User           | Patch | Repo     | Result
2025/10/28 15:10 | 21m      | eadavis@qq.com | patch | upstream | OK (log)

Sample crash report:
HPFS: dnode_end_de: dnode->first_free = 7b3184b6
HPFS: de_next_de: de->length = 0
HPFS: dnode_end_de: dnode->first_free = 7b3184b6
HPFS: de_next_de: de->length = 0
HPFS: dnode_end_de: dnode->first_free = 7b3184b6
HPFS: de_next_de: de->length = 0
HPFS: dnode_end_de: dnode->first_free = 7b3184b6
==================================================================
BUG: KASAN: use-after-free in hpfs_count_dnodes+0x854/0xb20 fs/hpfs/dnode.c:773
Read of size 2 at addr ffff8880471a64d0 by task syz.0.17/5986

CPU: 1 UID: 0 PID: 5986 Comm: syz.0.17 Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:378 [inline]
 print_report+0xca/0x240 mm/kasan/report.c:482
 kasan_report+0x118/0x150 mm/kasan/report.c:595
 hpfs_count_dnodes+0x854/0xb20 fs/hpfs/dnode.c:773
 hpfs_read_inode+0xc52/0x1010 fs/hpfs/inode.c:128
 hpfs_fill_super+0x12a9/0x2050 fs/hpfs/super.c:650
 get_tree_bdev_flags+0x40e/0x4d0 fs/super.c:1691
 vfs_get_tree+0x92/0x2b0 fs/super.c:1751
 fc_mount fs/namespace.c:1208 [inline]
 do_new_mount_fc fs/namespace.c:3651 [inline]
 do_new_mount+0x302/0xa10 fs/namespace.c:3727
 do_mount fs/namespace.c:4050 [inline]
 __do_sys_mount fs/namespace.c:4238 [inline]
 __se_sys_mount+0x313/0x410 fs/namespace.c:4215
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f6afb54076a
Code: d8 64 89 02 48 c7 c0 ff ff ff ff eb a6 e8 de 1a 00 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffd5770e378 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007ffd5770e400 RCX: 00007f6afb54076a
RDX: 000020000000a000 RSI: 0000200000009ec0 RDI: 00007ffd5770e3c0
RBP: 000020000000a000 R08: 00007ffd5770e400 R09: 0000000003200041
R10: 0000000003200041 R11: 0000000000000246 R12: 0000200000009ec0
R13: 00007ffd5770e3c0 R14: 0000000000009e21 R15: 0000200000000000
 </TASK>

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x5604d8ef1 pfn:0x471a6
flags: 0x80000000000000(node=0|zone=1)
raw: 0080000000000000 ffffea0000f20148 ffffea00011c6e48 0000000000000000
raw: 00000005604d8ef1 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as freed
page last allocated via order 0, migratetype Movable, gfp_mask 0x140cca(GFP_HIGHUSER_MOVABLE|__GFP_COMP), pid 5928, tgid 5928 (udevd), ts 106316997485, free_ts 115294923226
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x240/0x2a0 mm/page_alloc.c:1850
 prep_new_page mm/page_alloc.c:1858 [inline]
 get_page_from_freelist+0x28c0/0x2960 mm/page_alloc.c:3884
 __alloc_frozen_pages_noprof+0x181/0x370 mm/page_alloc.c:5183
 alloc_pages_mpol+0xd1/0x380 mm/mempolicy.c:2416
 folio_alloc_mpol_noprof mm/mempolicy.c:2435 [inline]
 vma_alloc_folio_noprof+0xe4/0x280 mm/mempolicy.c:2470
 folio_prealloc+0x30/0x180 mm/memory.c:-1
 wp_page_copy mm/memory.c:3679 [inline]
 do_wp_page+0x11f4/0x4930 mm/memory.c:4140
 handle_pte_fault mm/memory.c:6193 [inline]
 __handle_mm_fault mm/memory.c:6318 [inline]
 handle_mm_fault+0x97c/0x3400 mm/memory.c:6487
 do_user_addr_fault+0xa7c/0x1380 arch/x86/mm/fault.c:1336
 handle_page_fault arch/x86/mm/fault.c:1476 [inline]
 exc_page_fault+0x82/0x100 arch/x86/mm/fault.c:1532
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:618
page last free pid 5928 tgid 5928 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1394 [inline]
 free_unref_folios+0xc22/0x1860 mm/page_alloc.c:2963
 folios_put_refs+0x569/0x670 mm/swap.c:1002
 free_pages_and_swap_cache+0x277/0x520 mm/swap_state.c:355
 __tlb_batch_free_encoded_pages mm/mmu_gather.c:136 [inline]
 tlb_batch_pages_flush mm/mmu_gather.c:149 [inline]
 tlb_flush_mmu_free mm/mmu_gather.c:397 [inline]
 tlb_flush_mmu+0x3a0/0x680 mm/mmu_gather.c:404
 tlb_finish_mmu+0xc3/0x1d0 mm/mmu_gather.c:497
 exit_mmap+0x444/0xb40 mm/mmap.c:1293
 __mmput+0xcb/0x3d0 kernel/fork.c:1133
 exit_mm+0x1da/0x2c0 kernel/exit.c:582
 do_exit+0x648/0x2300 kernel/exit.c:954
 do_group_exit+0x21c/0x2d0 kernel/exit.c:1107
 __do_sys_exit_group kernel/exit.c:1118 [inline]
 __se_sys_exit_group kernel/exit.c:1116 [inline]
 __x64_sys_exit_group+0x3f/0x40 kernel/exit.c:1116
 x64_sys_call+0x21f7/0x2200 arch/x86/include/generated/asm/syscalls_64.h:232
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Memory state around the buggy address:
 ffff8880471a6380: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
 ffff8880471a6400: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
>ffff8880471a6480: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
                                                 ^
 ffff8880471a6500: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
 ffff8880471a6580: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
==================================================================
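
What the log shows, in short: the crafted image's dnode carries a first_free
value (0x7b3184b6) far outside the 2 KiB dnode block, and its directory
entries report length 0, so the walk in hpfs_count_dnodes never advances
cleanly and ends up reading a 2-byte value from a page that has already been
freed (consistent with dereferencing a 16-bit dirent length field). The
user-space sketch below models that entry walk under simplified, assumed
struct layouts; it is not the kernel's fs/hpfs/dnode.c code, and the two
validation checks mark where the reported values would otherwise send the
loop past the buffer.

/* Illustrative model of a dnode directory-entry walk; layouts and names
 * are simplified assumptions, not the actual fs/hpfs structures. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DNODE_SIZE 2048                  /* an HPFS dnode is 2 KiB on disk */

struct fake_dirent {
    uint16_t length;                     /* size of this entry; 0 is invalid */
    /* name, fnode reference, flags omitted */
};

struct fake_dnode {
    uint32_t first_free;                 /* offset of the end of the used area */
    unsigned char data[DNODE_SIZE - 4];
};

static int count_entries(const struct fake_dnode *d)
{
    size_t off = offsetof(struct fake_dnode, data);
    size_t end = d->first_free;
    int n = 0;

    /* Without this check, end can be 0x7b3184b6 (the value in the log above)
     * and the loop below walks far past the 2 KiB buffer. */
    if (end < off || end > DNODE_SIZE) {
        fprintf(stderr, "bad first_free 0x%zx\n", end);
        return -1;
    }

    while (off + sizeof(struct fake_dirent) <= end) {
        const struct fake_dirent *de =
            (const struct fake_dirent *)((const unsigned char *)d + off);

        /* A zero length would pin the walk to the same offset forever,
         * matching the repeated "de->length = 0" messages above. */
        if (de->length < sizeof(struct fake_dirent)) {
            fprintf(stderr, "bad dirent length %u at offset %zu\n",
                    (unsigned)de->length, off);
            return -1;
        }
        off += de->length;
        n++;
    }
    return n;
}

int main(void)
{
    struct fake_dnode d;

    memset(&d, 0, sizeof(d));
    d.first_free = 0x7b3184b6;           /* out-of-range value taken from the log */
    printf("entries: %d\n", count_entries(&d));
    return 0;
}

The point of the sketch is only the shape of the two sanity checks: both the
per-dnode first_free bound and each entry's length need validating before
they are trusted, since both come straight from the mounted image.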

Crashes (9):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/10/26 14:16 upstream 72761a7e3122 c0460fcd .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci2-upstream-fs KASAN: use-after-free Read in hpfs_count_dnodes
2025/10/25 15:52 linux-next 72fb0170ef1f c0460fcd .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root KASAN: use-after-free Read in hpfs_count_dnodes
2025/10/25 14:23 linux-next 72fb0170ef1f c0460fcd .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root KASAN: use-after-free Read in hpfs_count_dnodes
2025/10/25 13:01 linux-next 72fb0170ef1f c0460fcd .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root KASAN: use-after-free Read in hpfs_count_dnodes
2025/10/25 11:39 linux-next 72fb0170ef1f c0460fcd .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root KASAN: use-after-free Read in hpfs_count_dnodes
2025/10/25 10:07 linux-next 72fb0170ef1f c0460fcd .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root KASAN: use-after-free Read in hpfs_count_dnodes
2025/10/25 08:14 linux-next 72fb0170ef1f c0460fcd .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root KASAN: use-after-free Read in hpfs_count_dnodes
2025/10/26 12:46 upstream 72761a7e3122 c0460fcd .config console log report syz / log [disk image] [vmlinux] [kernel image] [mounted in repro] ci2-upstream-fs KASAN: use-after-free Read in hpfs_count_dnodes
2025/10/25 06:32 linux-next 72fb0170ef1f c0460fcd .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root KASAN: use-after-free Read in hpfs_count_dnodes
* Struck through repros no longer work on HEAD.
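
The call trace shows the bug is reached directly from mount(2)
(hpfs_fill_super -> hpfs_read_inode -> hpfs_count_dnodes), i.e. simply
mounting a crafted HPFS image is enough to hit it. The snippet below is a
hypothetical, minimal illustration of that call shape using a loop device;
it is not the linked C reproducer, and the image path and mount point are
placeholders.

/* Hypothetical illustration of the mount path in the trace above; not the
 * syzbot-generated reproducer. Assumes "hpfs.img" is a crafted HPFS image
 * and /mnt/hpfs exists. */
#include <fcntl.h>
#include <linux/loop.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mount.h>
#include <unistd.h>

int main(void)
{
    int ctl = open("/dev/loop-control", O_RDWR);
    int nr  = ioctl(ctl, LOOP_CTL_GET_FREE);        /* pick a free loop device */
    char loopdev[32];

    snprintf(loopdev, sizeof(loopdev), "/dev/loop%d", nr);

    int img  = open("hpfs.img", O_RDWR);            /* crafted filesystem image */
    int loop = open(loopdev, O_RDWR);
    if (ctl < 0 || nr < 0 || img < 0 || loop < 0)
        return 1;

    if (ioctl(loop, LOOP_SET_FD, img))              /* back the loop device with the image */
        return 1;

    /* Mounting walks hpfs_fill_super -> hpfs_read_inode -> hpfs_count_dnodes. */
    if (mount(loopdev, "/mnt/hpfs", "hpfs", MS_RDONLY, NULL))
        perror("mount");

    ioctl(loop, LOOP_CLR_FD, 0);                    /* detach on the way out */
    return 0;
}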