==================================================================
BUG: KASAN: slab-out-of-bounds in _copy_to_iter+0xd18/0x1140 lib/iov_iter.c:527
Write of size 16 at addr ffff88806e486247 by task kworker/1:1/33

CPU: 1 PID: 33 Comm: kworker/1:1 Not tainted 6.1.0-rc5-syzkaller-00144-g84368d882b96 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Workqueue: events p9_read_work
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd1/0x138 lib/dump_stack.c:106
 print_address_description mm/kasan/report.c:284 [inline]
 print_report+0x15e/0x45d mm/kasan/report.c:395
 kasan_report+0xbf/0x1f0 mm/kasan/report.c:495
 check_region_inline mm/kasan/generic.c:183 [inline]
 kasan_check_range+0x141/0x190 mm/kasan/generic.c:189
 memcpy+0x3d/0x60 mm/kasan/shadow.c:66
 _copy_to_iter+0xd18/0x1140 lib/iov_iter.c:527
 copy_page_to_iter+0xe0/0xa20 lib/iov_iter.c:725
 pipe_read+0x50e/0x1110 fs/pipe.c:307
 __kernel_read+0x2ca/0x7c0 fs/read_write.c:428
 kernel_read+0xc3/0x1c0 fs/read_write.c:446
 p9_fd_read net/9p/trans_fd.c:266 [inline]
 p9_read_work+0x2b0/0x1040 net/9p/trans_fd.c:301
 process_one_work+0x9bf/0x1710 kernel/workqueue.c:2289
 worker_thread+0x669/0x1090 kernel/workqueue.c:2436
 kthread+0x2e8/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306

Allocated by task 4700:
 kasan_save_stack+0x22/0x40 mm/kasan/common.c:45
 kasan_set_track+0x25/0x30 mm/kasan/common.c:52
 ____kasan_kmalloc mm/kasan/common.c:371 [inline]
 ____kasan_kmalloc mm/kasan/common.c:330 [inline]
 __kasan_kmalloc+0xa5/0xb0 mm/kasan/common.c:380
 kasan_kmalloc include/linux/kasan.h:211 [inline]
 __do_kmalloc_node mm/slab_common.c:955 [inline]
 __kmalloc+0x5a/0xd0 mm/slab_common.c:968
 kmalloc include/linux/slab.h:558 [inline]
 p9_fcall_init+0x97/0x210 net/9p/client.c:228
 p9_tag_alloc+0x208/0x840 net/9p/client.c:293
 p9_client_prepare_req+0x177/0x590 net/9p/client.c:631
 p9_client_rpc+0x1a1/0xd70 net/9p/client.c:678
 p9_client_walk+0x1a0/0x540 net/9p/client.c:1152
 v9fs_vfs_lookup.part.0+0x143/0x5d0 fs/9p/vfs_inode.c:777
 v9fs_vfs_lookup+0x6d/0x90 fs/9p/vfs_inode.c:762
 __lookup_slow+0x24c/0x460 fs/namei.c:1685
 lookup_slow fs/namei.c:1702 [inline]
 walk_component+0x33f/0x5a0 fs/namei.c:1993
 lookup_last fs/namei.c:2450 [inline]
 path_lookupat+0x1ba/0x840 fs/namei.c:2474
 filename_lookup+0x1d2/0x590 fs/namei.c:2503
 vfs_statx+0x14c/0x430 fs/stat.c:229
 vfs_fstatat+0x90/0xb0 fs/stat.c:267
 vfs_stat include/linux/fs.h:3292 [inline]
 __do_sys_newstat+0x8b/0x110 fs/stat.c:410
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd

The buggy address belongs to the object at ffff88806e486240
 which belongs to the cache kmalloc-32 of size 32
The buggy address is located 7 bytes inside of
 32-byte region [ffff88806e486240, ffff88806e486260)

The buggy address belongs to the physical page:
page:ffffea0001b92180 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x6e486
flags: 0xfff00000000200(slab|node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000000200 0000000000000000 dead000000000001 ffff888012041500
raw: 0000000000000000 0000000080400040 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x112cc0(GFP_USER|__GFP_NOWARN|__GFP_NORETRY), pid 3901, tgid 3899 (syz-executor.0), ts 64294514216, free_ts 64283666502
 prep_new_page mm/page_alloc.c:2539 [inline]
 get_page_from_freelist+0x10b5/0x2d50 mm/page_alloc.c:4288
 __alloc_pages+0x1cb/0x5b0 mm/page_alloc.c:5555
 alloc_pages+0x1aa/0x270 mm/mempolicy.c:2285
 alloc_slab_page mm/slub.c:1794 [inline]
 allocate_slab+0x213/0x300 mm/slub.c:1939
 new_slab mm/slub.c:1992 [inline]
 ___slab_alloc+0xa91/0x1400 mm/slub.c:3180
 __slab_alloc.constprop.0+0x56/0xa0 mm/slub.c:3279
 slab_alloc_node mm/slub.c:3364 [inline]
 __kmem_cache_alloc_node+0x199/0x3e0 mm/slub.c:3437
 __do_kmalloc_node mm/slab_common.c:954 [inline]
 __kmalloc_node_track_caller+0x4b/0xc0 mm/slab_common.c:975
 kmemdup_nul+0x36/0xb0 mm/util.c:152
 match_strdup lib/parser.c:360 [inline]
 match_number+0xaf/0x1c0 lib/parser.c:136
 parse_opts.part.0+0x1f4/0x340 net/9p/trans_fd.c:793
 parse_opts net/9p/trans_fd.c:775 [inline]
 p9_fd_create+0x9c/0x4b0 net/9p/trans_fd.c:1077
 p9_client_create+0x870/0x1070 net/9p/client.c:996
 v9fs_session_init+0x1e6/0x18b0 fs/9p/v9fs.c:408
 v9fs_mount+0xbe/0xca0 fs/9p/vfs_super.c:126
 legacy_get_tree+0x109/0x220 fs/fs_context.c:610
page last free stack trace:
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1459 [inline]
 free_pcp_prepare+0x65c/0xd90 mm/page_alloc.c:1509
 free_unref_page_prepare mm/page_alloc.c:3387 [inline]
 free_unref_page_list+0x176/0xc40 mm/page_alloc.c:3529
 release_pages+0xc8a/0x1360 mm/swap.c:1055
 tlb_batch_pages_flush+0xa8/0x1a0 mm/mmu_gather.c:59
 tlb_flush_mmu_free mm/mmu_gather.c:256 [inline]
 tlb_flush_mmu mm/mmu_gather.c:263 [inline]
 tlb_finish_mmu+0x14b/0x7e0 mm/mmu_gather.c:363
 exit_mmap+0x202/0x7b0 mm/mmap.c:3105
 __mmput+0x128/0x4c0 kernel/fork.c:1185
 mmput+0x60/0x70 kernel/fork.c:1207
 exit_mm kernel/exit.c:516 [inline]
 do_exit+0xa41/0x2a30 kernel/exit.c:807
 do_group_exit+0xd4/0x2a0 kernel/exit.c:950
 get_signal+0x21b1/0x2440 kernel/signal.c:2858
 arch_do_signal_or_restart+0x86/0x2300 arch/x86/kernel/signal.c:869
 exit_to_user_mode_loop kernel/entry/common.c:168 [inline]
 exit_to_user_mode_prepare+0x15f/0x250 kernel/entry/common.c:203
 __syscall_exit_to_user_mode_work kernel/entry/common.c:285 [inline]
 syscall_exit_to_user_mode+0x1d/0x50 kernel/entry/common.c:296
 do_syscall_64+0x46/0xb0 arch/x86/entry/common.c:86
 entry_SYSCALL_64_after_hwframe+0x63/0xcd

Memory state around the buggy address:
 ffff88806e486100: fb fb fb fb fc fc fc fc 00 00 00 00 fc fc fc fc
 ffff88806e486180: fb fb fb fb fc fc fc fc 00 00 00 00 fc fc fc fc
>ffff88806e486200: 00 00 00 00 fc fc fc fc 00 00 06 fc fc fc fc fc
                                                 ^
 ffff88806e486280: fb fb fb fb fc fc fc fc 00 00 00 00 fc fc fc fc
 ffff88806e486300: fa fb fb fb fc fc fc fc fa fb fb fb fc fc fc fc
==================================================================