==================================================================
BUG: KASAN: slab-out-of-bounds in atomic_read include/asm-generic/atomic-instrumented.h:26 [inline]
BUG: KASAN: slab-out-of-bounds in atomic_fetch_add_unless include/linux/atomic-fallback.h:1086 [inline]
BUG: KASAN: slab-out-of-bounds in atomic_add_unless include/linux/atomic-fallback.h:1111 [inline]
BUG: KASAN: slab-out-of-bounds in atomic_inc_not_zero include/linux/atomic-fallback.h:1127 [inline]
BUG: KASAN: slab-out-of-bounds in page_get_anon_vma+0x24b/0x4b0 mm/rmap.c:477
Read of size 4 at addr ffff888095216e60 by task syz-executor.0/1590

CPU: 0 PID: 1590 Comm: syz-executor.0 Not tainted 5.1.0+ #4
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x172/0x1f0 lib/dump_stack.c:113
 print_address_description.cold+0x7c/0x20d mm/kasan/report.c:188
 __kasan_report.cold+0x1b/0x40 mm/kasan/report.c:317
 kasan_report+0x12/0x20 mm/kasan/common.c:614
 check_memory_region_inline mm/kasan/generic.c:185 [inline]
 check_memory_region+0x123/0x190 mm/kasan/generic.c:191
 kasan_check_read+0x11/0x20 mm/kasan/common.c:94
 atomic_read include/asm-generic/atomic-instrumented.h:26 [inline]
 atomic_fetch_add_unless include/linux/atomic-fallback.h:1086 [inline]
 atomic_add_unless include/linux/atomic-fallback.h:1111 [inline]
 atomic_inc_not_zero include/linux/atomic-fallback.h:1127 [inline]
 page_get_anon_vma+0x24b/0x4b0 mm/rmap.c:477
 split_huge_page_to_list+0x58a/0x2de0 mm/huge_memory.c:2675
 split_huge_page include/linux/huge_mm.h:148 [inline]
 deferred_split_scan+0x64b/0xa60 mm/huge_memory.c:2853
 do_shrink_slab+0x400/0xa80 mm/vmscan.c:551
 shrink_slab mm/vmscan.c:700 [inline]
 shrink_slab+0x4be/0x5e0 mm/vmscan.c:680
 shrink_node+0x552/0x1570 mm/vmscan.c:2717
 shrink_zones mm/vmscan.c:2946 [inline]
 do_try_to_free_pages+0x3cb/0x11e0 mm/vmscan.c:3004
 try_to_free_pages+0x294/0x8c0 mm/vmscan.c:3220
 __perform_reclaim mm/page_alloc.c:4007 [inline]
 __alloc_pages_direct_reclaim mm/page_alloc.c:4029 [inline]
 __alloc_pages_slowpath+0x9b9/0x28b0 mm/page_alloc.c:4422
 __alloc_pages_nodemask+0x602/0x8d0 mm/page_alloc.c:4636
 __alloc_pages include/linux/gfp.h:473 [inline]
 __alloc_pages_node include/linux/gfp.h:486 [inline]
 kmem_getpages mm/slab.c:1398 [inline]
 cache_grow_begin+0x9c/0x860 mm/slab.c:2636
 fallback_alloc+0x1fd/0x2d0 mm/slab.c:3184
 ____cache_alloc_node+0x1be/0x1e0 mm/slab.c:3252
 __do_cache_alloc mm/slab.c:3321 [inline]
 slab_alloc mm/slab.c:3349 [inline]
 kmem_cache_alloc+0x1e8/0x6f0 mm/slab.c:3519
 anon_vma_chain_alloc mm/rmap.c:129 [inline]
 anon_vma_clone+0x238/0x480 mm/rmap.c:273
 anon_vma_fork+0x8f/0x4a0 mm/rmap.c:332
 dup_mmap kernel/fork.c:542 [inline]
 dup_mm+0x994/0x1370 kernel/fork.c:1329
 copy_mm kernel/fork.c:1384 [inline]
 copy_process.part.0+0x2cd2/0x6710 kernel/fork.c:2004
 copy_process kernel/fork.c:1772 [inline]
 _do_fork+0x25d/0xfd0 kernel/fork.c:2338
 __ia32_sys_fork+0x1f/0x30 kernel/fork.c:2405
 do_syscall_64+0x103/0x670 arch/x86/entry/common.c:298
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x20000310
Code: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 75 c4 81 c6 4c 24 24 4a 2a e9 2c b8 1c 1e 0f 05 03 00 00 00 c4 a3 7b f0 c5 5c 41 e2 e9 2e 36 3e 46 0f 1a 70 00
RSP: 002b:00007f4eaef3ebd8 EFLAGS: 00000216 ORIG_RAX: 0000000000000039
RAX: ffffffffffffffda RBX: 0000000000000009 RCX: 0000000020000310
RDX: b2228661c24ac738 RSI: 000000004a24244c RDI: 0000000000000003
RBP: 00000000000000f2 R08: 0000000000000005 R09: 0000000000000006
R10: 0000000000000007 R11: 0000000000000216 R12: 000000000000000b
R13: 000000000000000c R14: 000000000000000d R15: 00000000ffffffff

Allocated by task 361:
 save_stack+0x23/0x90 mm/kasan/common.c:71
 set_track mm/kasan/common.c:79 [inline]
 __kasan_kmalloc mm/kasan/common.c:489 [inline]
 __kasan_kmalloc.constprop.0+0xcf/0xe0 mm/kasan/common.c:462
 kasan_slab_alloc+0xf/0x20 mm/kasan/common.c:497
 slab_post_alloc_hook mm/slab.h:437 [inline]
 slab_alloc mm/slab.c:3357 [inline]
 kmem_cache_alloc+0x11a/0x6f0 mm/slab.c:3519
 anon_vma_chain_alloc mm/rmap.c:129 [inline]
 anon_vma_clone+0xde/0x480 mm/rmap.c:269
 anon_vma_fork+0x8f/0x4a0 mm/rmap.c:332
 dup_mmap kernel/fork.c:542 [inline]
 dup_mm+0x994/0x1370 kernel/fork.c:1329
 copy_mm kernel/fork.c:1384 [inline]
 copy_process.part.0+0x2cd2/0x6710 kernel/fork.c:2004
 copy_process kernel/fork.c:1772 [inline]
 _do_fork+0x25d/0xfd0 kernel/fork.c:2338
 __ia32_sys_fork+0x1f/0x30 kernel/fork.c:2405
 do_syscall_64+0x103/0x670 arch/x86/entry/common.c:298
 entry_SYSCALL_64_after_hwframe+0x49/0xbe

Freed by task 0:
(stack is not available)

The buggy address belongs to the object at ffff888095216e00
 which belongs to the cache anon_vma_chain(17:syz0) of size 80
The buggy address is located 16 bytes to the right of
 80-byte region [ffff888095216e00, ffff888095216e50)
The buggy address belongs to the page:
page:ffffea0002548580 count:1 mapcount:0 mapping:ffff888094571dc0 index:0x0
flags: 0x1fffc0000000200(slab)
raw: 01fffc0000000200 ffffea000066a1c8 ffffea00006675c8 ffff888094571dc0
raw: 0000000000000000 ffff888095216000 0000000100000024 ffff88805bc30180
page dumped because: kasan: bad access detected
page->mem_cgroup:ffff88805bc30180

Memory state around the buggy address:
 ffff888095216d00: fc fc fc fc 00 00 00 00 00 00 00 00 00 00 fc fc
 ffff888095216d80: fc fc 00 00 00 00 00 00 00 00 00 00 fc fc fc fc
>ffff888095216e00: 00 00 00 00 00 00 00 00 00 00 fc fc fc fc 00 00
                                                       ^
 ffff888095216e80: 00 00 00 00 00 00 00 00 fc fc fc fc 00 00 00 00
 ffff888095216f00: 00 00 00 00 00 00 fc fc fc fc 00 00 00 00 00 00
==================================================================