bcachefs (loop3): mounting version 1.7: mi_btree_bitmap opts=compression=lz4,nojournal_transaction_names
bcachefs (loop3): recovering from clean shutdown, journal seq 15
==================================================================
BUG: KASAN: use-after-free in memcpy_dir crypto/scatterwalk.c:23 [inline]
BUG: KASAN: use-after-free in scatterwalk_copychunks+0x1cc/0x460 crypto/scatterwalk.c:38
Read of size 40 at addr ffff888063242000 by task syz.3.647/11311

CPU: 1 PID: 11311 Comm: syz.3.647 Not tainted 6.10.0-rc6-syzkaller-00212-g1dd28064d416 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/07/2024
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 print_address_description mm/kasan/report.c:377 [inline]
 print_report+0x169/0x550 mm/kasan/report.c:488
 kasan_report+0x143/0x180 mm/kasan/report.c:601
 kasan_check_range+0x282/0x290 mm/kasan/generic.c:189
 __asan_memcpy+0x29/0x70 mm/kasan/shadow.c:105
 memcpy_dir crypto/scatterwalk.c:23 [inline]
 scatterwalk_copychunks+0x1cc/0x460 crypto/scatterwalk.c:38
 skcipher_next_slow+0x39d/0x480 crypto/skcipher.c:276
 skcipher_walk_next+0x634/0xba0 crypto/skcipher.c:361
 chacha_simd_stream_xor+0x67f/0xd10 arch/x86/crypto/chacha_glue.c:192
 do_encrypt_sg fs/bcachefs/checksum.c:108 [inline]
 do_encrypt+0x4ef/0x7d0 fs/bcachefs/checksum.c:128
 bset_encrypt fs/bcachefs/btree_io.h:118 [inline]
 bch2_btree_node_read_done+0x185b/0x6750 fs/bcachefs/btree_io.c:1129
 btree_node_read_work+0x68b/0x1260 fs/bcachefs/btree_io.c:1345
 bch2_btree_node_read+0x2433/0x2a10
 __bch2_btree_root_read fs/bcachefs/btree_io.c:1769 [inline]
 bch2_btree_root_read+0x61e/0x970 fs/bcachefs/btree_io.c:1793
 read_btree_roots+0x22d/0x7c0 fs/bcachefs/recovery.c:481
 bch2_fs_recovery+0x235c/0x3730 fs/bcachefs/recovery.c:809
 bch2_fs_start+0x356/0x5b0 fs/bcachefs/super.c:1035
 bch2_fs_open+0xa8d/0xdf0 fs/bcachefs/super.c:2132
 bch2_mount+0x6b0/0x13a0 fs/bcachefs/fs.c:1926
 legacy_get_tree+0xee/0x190 fs/fs_context.c:662
 vfs_get_tree+0x90/0x2a0 fs/super.c:1789
 do_new_mount+0x2be/0xb40 fs/namespace.c:3352
 do_mount fs/namespace.c:3692 [inline]
 __do_sys_mount fs/namespace.c:3898 [inline]
 __se_sys_mount+0x2d9/0x3c0 fs/namespace.c:3875
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f7bc61772da
Code: d8 64 89 02 48 c7 c0 ff ff ff ff eb a6 e8 7e 1a 00 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f7bc6f73e78 EFLAGS: 00000206 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007f7bc6f73f00 RCX: 00007f7bc61772da
RDX: 000000002000fec0 RSI: 000000002000ff00 RDI: 00007f7bc6f73ec0
RBP: 000000002000fec0 R08: 00007f7bc6f73f00 R09: 0000000000000080
R10: 0000000000000080 R11: 0000000000000206 R12: 000000002000ff00
R13: 00007f7bc6f73ec0 R14: 000000000000fe8a R15: 0000000020000000

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0xffff888063242f00 pfn:0x63242
flags: 0xfff00000000000(node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000000000 ffffea0001c04388 ffffea0001b05848 0000000000000000
raw: ffff888063242f00 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as freed
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x100cc0(GFP_USER), pid 10491, tgid 10491 (syz-executor), ts 457561535465, free_ts 529609371422
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x1f3/0x230 mm/page_alloc.c:1473
 prep_new_page mm/page_alloc.c:1481 [inline]
 get_page_from_freelist+0x2e4c/0x2f10 mm/page_alloc.c:3425
 __alloc_pages_noprof+0x256/0x6c0 mm/page_alloc.c:4683
 alloc_pages_mpol_noprof+0x3e8/0x680 mm/mempolicy.c:2265
 get_free_pages_noprof+0xc/0x30 mm/page_alloc.c:4730
 kasan_populate_vmalloc_pte+0x38/0xe0 mm/kasan/shadow.c:304
 apply_to_pte_range mm/memory.c:2746 [inline]
 apply_to_pmd_range mm/memory.c:2790 [inline]
 apply_to_pud_range mm/memory.c:2826 [inline]
 apply_to_p4d_range mm/memory.c:2862 [inline]
 __apply_to_page_range+0x8a8/0xe50 mm/memory.c:2896
 alloc_vmap_area+0x1d41/0x23e0 mm/vmalloc.c:2034
 __get_vm_area_node+0x1a9/0x270 mm/vmalloc.c:3110
 __vmalloc_node_range_noprof+0x3bc/0x1460 mm/vmalloc.c:3792
 vmalloc_user_noprof+0x74/0x80 mm/vmalloc.c:3986
 kcov_ioctl+0x59/0x640 kernel/kcov.c:706
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:907 [inline]
 __se_sys_ioctl+0xfc/0x170 fs/ioctl.c:893
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
page last free pid 8 tgid 8 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1093 [inline]
 free_unref_page+0xd19/0xea0 mm/page_alloc.c:2588
 kasan_depopulate_vmalloc_pte+0x74/0x90 mm/kasan/shadow.c:408
 apply_to_pte_range mm/memory.c:2746 [inline]
 apply_to_pmd_range mm/memory.c:2790 [inline]
 apply_to_pud_range mm/memory.c:2826 [inline]
 apply_to_p4d_range mm/memory.c:2862 [inline]
 __apply_to_page_range+0x8a8/0xe50 mm/memory.c:2896
 kasan_release_vmalloc+0x9a/0xb0 mm/kasan/shadow.c:525
 purge_vmap_node+0x3e3/0x770 mm/vmalloc.c:2207
 __purge_vmap_area_lazy+0x708/0xae0 mm/vmalloc.c:2289
 drain_vmap_area_work+0x27/0x40 mm/vmalloc.c:2323
 process_one_work kernel/workqueue.c:3248 [inline]
 process_scheduled_works+0xa2c/0x1830 kernel/workqueue.c:3329
 worker_thread+0x86d/0xd50 kernel/workqueue.c:3409
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

Memory state around the buggy address:
 ffff888063241f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 ffff888063241f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffff888063242000: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
                   ^
 ffff888063242080: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
 ffff888063242100: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
==================================================================