syzbot


KASAN: use-after-free Read in check_extent_overbig

Status: upstream: reported C repro on 2024/12/05 17:59
Subsystems: bcachefs
Reported-by: syzbot+fbc1f6040dd365cce0d8@syzkaller.appspotmail.com
First crash: 15d, last: 11d
Cause bisection: introduced by (bisect log):
commit bf4baaa087e2be0279991f1dbf9acaa7a4c9148c
Author: Kent Overstreet <kent.overstreet@linux.dev>
Date: Sat Oct 5 21:37:02 2024 +0000

  bcachefs: Fix lockdep splat in bch2_accounting_read

Crash: KASAN: use-after-free Read in check_extent_overbig (log)
Repro: C syz .config
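
The actual C reproducer is only linked above, not inlined here. As a rough illustration of the syscall path implicated in the trace (bch2_fs_get_tree reached via __do_sys_mount), a bare mount(2) call of a bcachefs image looks like the sketch below. The device path, mount point, and lack of mount data are placeholders, not details taken from the syzbot reproducer, which also prepares and corrupts the filesystem image before mounting.

/* Illustrative only: the mount(2) call shape the reproducer drives.
 * "/dev/loop0" and "/mnt/bcachefs" are assumptions for this sketch. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
        if (mount("/dev/loop0", "/mnt/bcachefs", "bcachefs", 0, NULL) != 0) {
                perror("mount");
                return 1;
        }
        return 0;
}
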
  
Discussions (1)
Title Replies (including bot) Last reply
[syzbot] [bcachefs?] KASAN: use-after-free Read in check_extent_overbig 0 (2) 2024/12/06 06:11

Sample crash report:
  u64s 7 type extent 536870913:24:U32_MAX len 24 ver 0: durability: 1 crc: c_size 8 size 24 offset 0 nonce 0 csum none 0:0  compress lz4 ptr: 0:34:8 gen 0, fixing
==================================================================
BUG: KASAN: use-after-free in __extent_entry_type fs/bcachefs/extents.h:54 [inline]
BUG: KASAN: use-after-free in extent_entry_is_crc fs/bcachefs/extents.h:121 [inline]
BUG: KASAN: use-after-free in check_extent_overbig+0x27b/0x7d0 fs/bcachefs/fsck.c:1904
Read of size 8 at addr ffff8880518a0188 by task syz-executor254/7637

CPU: 1 UID: 0 PID: 7637 Comm: syz-executor254 Not tainted 6.13.0-rc1-syzkaller-00025-gfeffde684ac2 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:378 [inline]
 print_report+0x169/0x550 mm/kasan/report.c:489
 kasan_report+0x143/0x180 mm/kasan/report.c:602
 __extent_entry_type fs/bcachefs/extents.h:54 [inline]
 extent_entry_is_crc fs/bcachefs/extents.h:121 [inline]
 check_extent_overbig+0x27b/0x7d0 fs/bcachefs/fsck.c:1904
 bch2_check_extents+0xa23/0x65a0 fs/bcachefs/fsck.c:2037
 bch2_run_recovery_pass+0xf0/0x1e0 fs/bcachefs/recovery_passes.c:191
 bch2_run_recovery_passes+0x3a7/0x880 fs/bcachefs/recovery_passes.c:244
 bch2_fs_recovery+0x25cc/0x39d0 fs/bcachefs/recovery.c:861
 bch2_fs_start+0x356/0x5b0 fs/bcachefs/super.c:1037
 bch2_fs_get_tree+0xd68/0x1710 fs/bcachefs/fs.c:2170
 vfs_get_tree+0x90/0x2b0 fs/super.c:1814
 do_new_mount+0x2be/0xb40 fs/namespace.c:3507
 do_mount fs/namespace.c:3847 [inline]
 __do_sys_mount fs/namespace.c:4057 [inline]
 __se_sys_mount+0x2d6/0x3c0 fs/namespace.c:4034
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fc6920b111a
Code: d8 64 89 02 48 c7 c0 ff ff ff ff eb a6 e8 1e 09 00 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fc691859048 EFLAGS: 00000282 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 0000000020000000 RCX: 00007fc6920b111a
RDX: 00000000200000c0 RSI: 0000000020000000 RDI: 00007fc6918590a0
RBP: 00000000200000c0 R08: 00007fc6918590e0 R09: 000000000000598a
R10: 0000000000800000 R11: 0000000000000282 R12: 00007fc6918590a0
R13: 00007fc6918590e0 R14: 0000000000005991 R15: 0000000020006a00
 </TASK>

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x518a0
flags: 0xfff00000000000(node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000000000 0000000000000000 dead000000000122 0000000000000000
raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as freed
page last allocated via order 5, migratetype Unmovable, gfp_mask 0x52800(GFP_NOWAIT|__GFP_NORETRY|__GFP_COMP), pid 7637, tgid 7636 (syz-executor254), ts 149145294872, free_ts 151843036086
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x1f3/0x230 mm/page_alloc.c:1556
 prep_new_page mm/page_alloc.c:1564 [inline]
 get_page_from_freelist+0x3651/0x37a0 mm/page_alloc.c:3474
 __alloc_pages_noprof+0x292/0x710 mm/page_alloc.c:4751
 __alloc_pages_node_noprof include/linux/gfp.h:269 [inline]
 alloc_pages_node_noprof include/linux/gfp.h:296 [inline]
 ___kmalloc_large_node+0x8b/0x1d0 mm/slub.c:4228
 __kmalloc_large_node_noprof+0x1a/0x80 mm/slub.c:4255
 __do_kmalloc_node mm/slub.c:4271 [inline]
 __kmalloc_node_noprof+0x33a/0x4d0 mm/slub.c:4289
 __kvmalloc_node_noprof+0x72/0x190 mm/util.c:650
 btree_bounce_alloc fs/bcachefs/btree_io.c:124 [inline]
 bch2_btree_node_read_done+0x3808/0x5e90 fs/bcachefs/btree_io.c:1188
 btree_node_read_work+0x68b/0x1260 fs/bcachefs/btree_io.c:1323
 bch2_btree_node_read+0x2433/0x29f0
 __bch2_btree_root_read fs/bcachefs/btree_io.c:1749 [inline]
 bch2_btree_root_read+0x617/0x7a0 fs/bcachefs/btree_io.c:1771
 read_btree_roots+0x296/0x840 fs/bcachefs/recovery.c:523
 bch2_fs_recovery+0x2585/0x39d0 fs/bcachefs/recovery.c:853
 bch2_fs_start+0x356/0x5b0 fs/bcachefs/super.c:1037
 bch2_fs_get_tree+0xd68/0x1710 fs/bcachefs/fs.c:2170
 vfs_get_tree+0x90/0x2b0 fs/super.c:1814
page last free pid 7637 tgid 7636 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1127 [inline]
 __free_pages_ok+0xc87/0xf80 mm/page_alloc.c:1269
 __folio_put+0x2c7/0x440 mm/swap.c:112
 folio_put include/linux/mm.h:1488 [inline]
 free_large_kmalloc+0x105/0x1c0 mm/slub.c:4717
 kfree+0x212/0x430 mm/slub.c:4740
 btree_bounce_free fs/bcachefs/btree_io.c:112 [inline]
 btree_node_sort+0x1100/0x1830 fs/bcachefs/btree_io.c:380
 bch2_btree_post_write_cleanup+0x11a/0xa70 fs/bcachefs/btree_io.c:2248
 bch2_btree_node_prep_for_write+0x55b/0x8f0 fs/bcachefs/btree_trans_commit.c:93
 bch2_trans_lock_write+0x68e/0xc60 fs/bcachefs/btree_trans_commit.c:129
 do_bch2_trans_commit fs/bcachefs/btree_trans_commit.c:896 [inline]
 __bch2_trans_commit+0x26f2/0x93c0 fs/bcachefs/btree_trans_commit.c:1121
 bch2_trans_commit fs/bcachefs/btree_update.h:184 [inline]
 check_extent fs/bcachefs/fsck.c:1994 [inline]
 bch2_check_extents+0x4579/0x65a0 fs/bcachefs/fsck.c:2037
 bch2_run_recovery_pass+0xf0/0x1e0 fs/bcachefs/recovery_passes.c:191
 bch2_run_recovery_passes+0x3a7/0x880 fs/bcachefs/recovery_passes.c:244
 bch2_fs_recovery+0x25cc/0x39d0 fs/bcachefs/recovery.c:861
 bch2_fs_start+0x356/0x5b0 fs/bcachefs/super.c:1037
 bch2_fs_get_tree+0xd68/0x1710 fs/bcachefs/fs.c:2170
 vfs_get_tree+0x90/0x2b0 fs/super.c:1814

Memory state around the buggy address:
 ffff8880518a0080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
 ffff8880518a0100: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
>ffff8880518a0180: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
                      ^
 ffff8880518a0200: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
 ffff8880518a0280: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
==================================================================
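Reading the report end to end: the buggy page was allocated as a bounce buffer in btree_bounce_alloc via bch2_btree_node_read_done, freed by btree_bounce_free inside btree_node_sort when a transaction commit from check_extent ran bch2_btree_post_write_cleanup, and then read again by check_extent_overbig through extent_entry_is_crc/__extent_entry_type. The sketch below is a simplified, self-contained stand-in for that sequence; the struct and function names are illustrative only, not the real bcachefs types.

/* Stand-in for the pattern in the report: the fsck pass holds a pointer
 * into a node's key buffer, a commit-triggered sort frees and replaces
 * that buffer, and the pass then dereferences the stale pointer. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

struct node { uint64_t *keys; size_t nr; };

/* Analogue of btree_node_sort(): rebuilds the key area and frees the old one. */
static void node_sort(struct node *b)
{
        uint64_t *fresh = malloc(b->nr * sizeof(*fresh));
        memcpy(fresh, b->keys, b->nr * sizeof(*fresh));
        free(b->keys);          /* old bounce buffer is gone */
        b->keys = fresh;
}

int main(void)
{
        struct node b = { .keys = calloc(4, sizeof(uint64_t)), .nr = 4 };

        uint64_t *k = &b.keys[2];   /* pointer held across the commit */
        node_sort(&b);              /* commit -> post-write cleanup -> sort */

        /* Stale read through the old buffer: the use-after-free KASAN flags
         * in check_extent_overbig. */
        printf("%llu\n", (unsigned long long)*k);

        free(b.keys);
        return 0;
}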

Crashes (5):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2024/12/05 17:46 upstream feffde684ac2 29f61fce .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro #1] [mounted in repro #2] [mounted in repro #3] ci2-upstream-fs KASAN: use-after-free Read in check_extent_overbig
2024/12/05 15:47 upstream feffde684ac2 29f61fce .config console log report syz / log [disk image] [vmlinux] [kernel image] [mounted in repro #1] [mounted in repro #2] [mounted in repro #3] ci2-upstream-fs KASAN: use-after-free Read in check_extent_overbig
2024/12/07 15:23 upstream b5f217084ab3 9ac0fdc6 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs KASAN: use-after-free Read in check_extent_overbig
2024/12/05 14:26 upstream feffde684ac2 29f61fce .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs KASAN: use-after-free Read in check_extent_overbig
2024/12/09 15:50 linux-next af2ea8ab7a54 9ac0fdc6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root KASAN: use-after-free Read in check_extent_overbig
* Struck through repros no longer work on HEAD.