bcachefs (loop4): check_indirect_extents... done
bcachefs (loop4): check_dirents...
dirent points to missing inode:
  u64s 7 type dirent 4098:1541969748923303820:U32_MAX len 0 ver 0: file1 -> 4100 type lnk, fixing
dirent points to missing inode:
  u64s 7 type dirent 4098:8225298456217505393:U32_MAX len 0 ver 0: file0 -> 4099 type reg, fixing
==================================================================
BUG: KASAN: use-after-free in check_dirent fs/bcachefs/fsck.c:2203 [inline]
BUG: KASAN: use-after-free in bch2_check_dirents+0x2acb/0x3ba0 fs/bcachefs/fsck.c:2228
Read of size 1 at addr ffff88805fd00170 by task syz.4.62/7171

CPU: 0 UID: 0 PID: 7171 Comm: syz.4.62 Not tainted 6.14.0-syzkaller-07422-gacb4f33713b9 #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:408 [inline]
 print_report+0x16e/0x5b0 mm/kasan/report.c:521
 kasan_report+0x143/0x180 mm/kasan/report.c:634
 check_dirent fs/bcachefs/fsck.c:2203 [inline]
 bch2_check_dirents+0x2acb/0x3ba0 fs/bcachefs/fsck.c:2228
 bch2_run_recovery_pass+0xf0/0x1e0 fs/bcachefs/recovery_passes.c:226
 bch2_run_recovery_passes+0x2ad/0xa90 fs/bcachefs/recovery_passes.c:291
 bch2_fs_recovery+0x2c65/0x3e20 fs/bcachefs/recovery.c:973
 bch2_fs_start+0x37c/0x620 fs/bcachefs/super.c:1057
 bch2_fs_get_tree+0x1270/0x18d0 fs/bcachefs/fs.c:2253
 vfs_get_tree+0x90/0x2b0 fs/super.c:1759
 do_new_mount+0x2cf/0xb70 fs/namespace.c:3878
 do_mount fs/namespace.c:4218 [inline]
 __do_sys_mount fs/namespace.c:4429 [inline]
 __se_sys_mount+0x38c/0x400 fs/namespace.c:4406
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fa1a53874ca
Code: d8 64 89 02 48 c7 c0 ff ff ff ff eb a6 e8 de 1a 00 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fa1a610ce68 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007fa1a610cef0 RCX: 00007fa1a53874ca
RDX: 0000000020000040 RSI: 0000000020000080 RDI: 00007fa1a610ceb0
RBP: 0000000020000040 R08: 00007fa1a610cef0 R09: 0000000002000000
R10: 0000000002000000 R11: 0000000000000246 R12: 0000000020000080
R13: 00007fa1a610ceb0 R14: 000000000000599d R15: 00000000200000c0

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x5fd00
flags: 0xfff00000000000(node=0|zone=1|lastcpupid=0x7ff)
page_type: f0(buddy)
raw: 00fff00000000000 ffffea000211c808 ffff88813fffc788 0000000000000000
raw: 0000000000000000 0000000000000005 00000000f0000000 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as freed
page last allocated via order 5, migratetype Unmovable, gfp_mask 0x52800(GFP_NOWAIT|__GFP_NORETRY|__GFP_COMP), pid 7205, tgid 7205 (bch-reclaim/loo), ts 125366661256, free_ts 126048826183
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x1f4/0x240 mm/page_alloc.c:1551
 prep_new_page mm/page_alloc.c:1559 [inline]
 get_page_from_freelist+0x368a/0x37d0 mm/page_alloc.c:3477
 __alloc_frozen_pages_noprof+0x2c5/0x7b0 mm/page_alloc.c:4740
 __alloc_pages_noprof+0xa/0x30 mm/page_alloc.c:4774
 __alloc_pages_node_noprof include/linux/gfp.h:265 [inline]
 alloc_pages_node_noprof include/linux/gfp.h:292 [inline]
 ___kmalloc_large_node+0x92/0x210 mm/slub.c:4262
 __kmalloc_large_node_noprof+0x1a/0x80 mm/slub.c:4290
 __do_kmalloc_node mm/slub.c:4306 [inline]
 __kvmalloc_node_noprof+0x7c/0x5a0 mm/slub.c:5003
 btree_bounce_alloc fs/bcachefs/btree_io.c:124 [inline]
 btree_node_sort+0x67c/0x1870 fs/bcachefs/btree_io.c:323
 bch2_btree_post_write_cleanup+0x11a/0xaa0 fs/bcachefs/btree_io.c:2500
 bch2_btree_node_write_trans+0x18a/0x7a0 fs/bcachefs/btree_io.c:2569
 btree_node_write_if_need fs/bcachefs/btree_io.h:157 [inline]
 __btree_node_flush+0x3a1/0x470 fs/bcachefs/btree_trans_commit.c:253
 bch2_btree_node_flush1+0x2a/0x40 fs/bcachefs/btree_trans_commit.c:267
 journal_flush_pins+0x89b/0xe40 fs/bcachefs/journal_reclaim.c:589
 __bch2_journal_reclaim+0x789/0xda0 fs/bcachefs/journal_reclaim.c:720
 bch2_journal_reclaim_thread+0x16d/0x580 fs/bcachefs/journal_reclaim.c:762
 kthread+0x7b7/0x940 kernel/kthread.c:464
page last free pid 7171 tgid 7170 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1127 [inline]
 __free_pages_ok+0xbbf/0xe40 mm/page_alloc.c:1271
 __folio_put+0x2b5/0x360 mm/swap.c:112
 folio_put include/linux/mm.h:1477 [inline]
 free_large_kmalloc+0x143/0x1e0 mm/slub.c:4758
 kfree+0x216/0x430 mm/slub.c:4826
 btree_bounce_free fs/bcachefs/btree_io.c:112 [inline]
 btree_node_sort+0x1124/0x1870 fs/bcachefs/btree_io.c:380
 bch2_btree_post_write_cleanup+0x11a/0xaa0 fs/bcachefs/btree_io.c:2500
 bch2_btree_node_prep_for_write+0x35a/0x670 fs/bcachefs/btree_trans_commit.c:93
 bch2_trans_lock_write+0x66f/0xb60 fs/bcachefs/btree_trans_commit.c:129
 do_bch2_trans_commit fs/bcachefs/btree_trans_commit.c:840 [inline]
 __bch2_trans_commit+0x3252/0x9dc0 fs/bcachefs/btree_trans_commit.c:1050
 bch2_trans_commit fs/bcachefs/btree_update.h:193 [inline]
 check_dirent fs/bcachefs/fsck.c:2198 [inline]
 bch2_check_dirents+0x2820/0x3ba0 fs/bcachefs/fsck.c:2228
 bch2_run_recovery_pass+0xf0/0x1e0 fs/bcachefs/recovery_passes.c:226
 bch2_run_recovery_passes+0x2ad/0xa90 fs/bcachefs/recovery_passes.c:291
 bch2_fs_recovery+0x2c65/0x3e20 fs/bcachefs/recovery.c:973
 bch2_fs_start+0x37c/0x620 fs/bcachefs/super.c:1057
 bch2_fs_get_tree+0x1270/0x18d0 fs/bcachefs/fs.c:2253
 vfs_get_tree+0x90/0x2b0 fs/super.c:1759

Memory state around the buggy address:
 ffff88805fd00000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
 ffff88805fd00080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
>ffff88805fd00100: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
                                                             ^
 ffff88805fd00180: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
 ffff88805fd00200: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
==================================================================