==================================================================
BUG: KASAN: slab-out-of-bounds in instrument_atomic_read include/linux/instrumented.h:68 [inline]
BUG: KASAN: slab-out-of-bounds in _test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
BUG: KASAN: slab-out-of-bounds in mapping_release_always include/linux/pagemap.h:279 [inline]
BUG: KASAN: slab-out-of-bounds in folio_needs_release mm/internal.h:187 [inline]
BUG: KASAN: slab-out-of-bounds in shrink_folio_list+0x2dbf/0x3e60 mm/vmscan.c:2067
Read of size 8 at addr ffff8880231cba71 by task kswapd0/84

CPU: 1 PID: 84 Comm: kswapd0 Not tainted 6.4.0-next-20230707-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/03/2023
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd9/0x150 lib/dump_stack.c:106
 print_address_description.constprop.0+0x2c/0x3c0 mm/kasan/report.c:364
 print_report mm/kasan/report.c:475 [inline]
 kasan_report+0x11d/0x130 mm/kasan/report.c:588
 check_region_inline mm/kasan/generic.c:181 [inline]
 kasan_check_range+0xf0/0x190 mm/kasan/generic.c:187
 instrument_atomic_read include/linux/instrumented.h:68 [inline]
 _test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
 mapping_release_always include/linux/pagemap.h:279 [inline]
 folio_needs_release mm/internal.h:187 [inline]
 shrink_folio_list+0x2dbf/0x3e60 mm/vmscan.c:2067
 evict_folios+0x794/0x1940 mm/vmscan.c:5181
 try_to_shrink_lruvec+0x82c/0xb90 mm/vmscan.c:5357
 shrink_one+0x462/0x710 mm/vmscan.c:5401
 shrink_many mm/vmscan.c:5453 [inline]
 lru_gen_shrink_node mm/vmscan.c:5570 [inline]
 shrink_node+0x20ed/0x3690 mm/vmscan.c:6510
 kswapd_shrink_node mm/vmscan.c:7315 [inline]
 balance_pgdat+0xa02/0x1ac0 mm/vmscan.c:7505
 kswapd+0x677/0xd60 mm/vmscan.c:7765
 kthread+0x344/0x440 kernel/kthread.c:389
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308

Allocated by task 5055:
 kasan_save_stack+0x22/0x40 mm/kasan/common.c:45
 kasan_set_track+0x25/0x30 mm/kasan/common.c:52
 __kasan_slab_alloc+0x7f/0x90 mm/kasan/common.c:328
 kasan_slab_alloc include/linux/kasan.h:186 [inline]
 slab_post_alloc_hook mm/slab.h:762 [inline]
 slab_alloc_node mm/slub.c:3470 [inline]
 slab_alloc mm/slub.c:3478 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3485 [inline]
 kmem_cache_alloc+0x173/0x390 mm/slub.c:3494
 anon_vma_alloc mm/rmap.c:94 [inline]
 anon_vma_fork+0xe2/0x630 mm/rmap.c:361
 dup_mmap+0xc0f/0x14b0 kernel/fork.c:732
 dup_mm kernel/fork.c:1694 [inline]
 copy_mm kernel/fork.c:1743 [inline]
 copy_process+0x6663/0x75c0 kernel/fork.c:2509
 kernel_clone+0xeb/0x890 kernel/fork.c:2917
 __do_sys_clone+0xba/0x100 kernel/fork.c:3060
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd

The buggy address belongs to the object at ffff8880231cb990
 which belongs to the cache anon_vma of size 208
The buggy address is located 17 bytes to the right of
 allocated 208-byte region [ffff8880231cb990, ffff8880231cba60)

The buggy address belongs to the physical page:
page:ffffea00008c72c0 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x231cb
flags: 0xfff00000000200(slab|node=0|zone=1|lastcpupid=0x7ff)
page_type: 0xffffffff()
raw: 00fff00000000200 ffff888014674140 ffffea0000a03bc0 dead000000000004
raw: 0000000000000000 00000000000f000f 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x12cc0(GFP_KERNEL|__GFP_NOWARN|__GFP_NORETRY), pid 4846, tgid 4846 (dhcpcd-run-hook), ts 40170470410, free_ts 36233888286
 set_page_owner include/linux/page_owner.h:31 [inline]
 post_alloc_hook+0x2db/0x350 mm/page_alloc.c:1569
 prep_new_page mm/page_alloc.c:1576 [inline]
 get_page_from_freelist+0xfd9/0x2c40 mm/page_alloc.c:3256
 __alloc_pages+0x1cb/0x4a0 mm/page_alloc.c:4512
 alloc_pages+0x1aa/0x270 mm/mempolicy.c:2279
 alloc_slab_page mm/slub.c:1862 [inline]
 allocate_slab+0x25f/0x390 mm/slub.c:2009
 new_slab mm/slub.c:2062 [inline]
 ___slab_alloc+0xbc3/0x15d0 mm/slub.c:3215
 __slab_alloc.constprop.0+0x56/0xa0 mm/slub.c:3314
 __slab_alloc_node mm/slub.c:3367 [inline]
 slab_alloc_node mm/slub.c:3460 [inline]
 slab_alloc mm/slub.c:3478 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3485 [inline]
 kmem_cache_alloc+0x371/0x390 mm/slub.c:3494
 anon_vma_alloc mm/rmap.c:94 [inline]
 anon_vma_fork+0xe2/0x630 mm/rmap.c:361
 dup_mmap+0xc0f/0x14b0 kernel/fork.c:732
 dup_mm kernel/fork.c:1694 [inline]
 copy_mm kernel/fork.c:1743 [inline]
 copy_process+0x6663/0x75c0 kernel/fork.c:2509
 kernel_clone+0xeb/0x890 kernel/fork.c:2917
 __do_sys_clone+0xba/0x100 kernel/fork.c:3060
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd

page last free stack trace:
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1160 [inline]
 free_unref_page_prepare+0x62e/0xcb0 mm/page_alloc.c:2383
 free_unref_page+0x33/0x370 mm/page_alloc.c:2478
 vfree+0x180/0x7b0 mm/vmalloc.c:2842
 delayed_vfree_work+0x57/0x70 mm/vmalloc.c:2763
 process_one_work+0xa34/0x16f0 kernel/workqueue.c:2597
 worker_thread+0x67d/0x10c0 kernel/workqueue.c:2748
 kthread+0x344/0x440 kernel/kthread.c:389
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308

Memory state around the buggy address:
 ffff8880231cb900: 00 00 00 00 00 00 00 00 00 00 fc fc fc fc fc fc
 ffff8880231cb980: fc fc 00 00 00 00 00 00 00 00 00 00 00 00 00 00
>ffff8880231cba00: 00 00 00 00 00 00 00 00 00 00 00 00 fc fc fc fc
                                                             ^
 ffff8880231cba80: fc fc fc fc 00 00 00 00 00 00 00 00 00 00 00 00
 ffff8880231cbb00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 fc fc
==================================================================
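The offsets quoted in the report are internally consistent and can be checked with a little arithmetic (a verification sketch, not part of the report; the addresses and sizes below are copied from the lines above):

```python
# Cross-check the KASAN report's numbers: the faulting read lands in the
# slab redzone 17 bytes past the end of a 208-byte anon_vma object.
obj_start = 0xffff8880231cb990   # start of the anon_vma object (from the report)
obj_size = 208                   # cache object size (from the report)
obj_end = obj_start + obj_size   # exclusive end of the allocated region
access_addr = 0xffff8880231cba71 # address of the faulting 8-byte read

print(hex(obj_end))                  # 0xffff8880231cba60, matches the region end
print(access_addr - obj_end)         # 17, matches "17 bytes to the right"

# KASAN shadow memory uses 8-byte granules: each shadow byte covers one
# aligned 8-byte chunk, and 0xfc marks a slab redzone. The faulting
# address falls in the granule starting at:
granule = access_addr & ~0x7
print(hex(granule))                  # 0xffff8880231cba70

# That granule is the 15th shadow byte (index 14) on the ">...ba00" line
# of the memory-state dump, which the report shows as fc (redzone).
print((granule - 0xffff8880231cba00) // 8)   # 14
```

This confirms the access pattern the trace implies: `folio_needs_release` dereferenced a stale `mapping` pointer whose `flags` word sits just past a live 208-byte `anon_vma` object, so the 8-byte `_test_bit` read straddles into the slab redzone.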