==================================================================
BUG: KASAN: slab-out-of-bounds in ip6_flow_hdr include/net/ipv6.h:1007 [inline]
BUG: KASAN: slab-out-of-bounds in ip6_mc_hdr.constprop.0+0x4ec/0x5c0 net/ipv6/mcast.c:1717
Write of size 4 at addr ffff88801dd6ec60 by task kworker/1:1/26

CPU: 1 PID: 26 Comm: kworker/1:1 Not tainted 6.0.0-rc3-next-20220901-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/26/2022
Workqueue: mld mld_dad_work
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
 print_address_description mm/kasan/report.c:317 [inline]
 print_report.cold+0x2ba/0x719 mm/kasan/report.c:433
 kasan_report+0xb1/0x1e0 mm/kasan/report.c:495
 ip6_flow_hdr include/net/ipv6.h:1007 [inline]
 ip6_mc_hdr.constprop.0+0x4ec/0x5c0 net/ipv6/mcast.c:1717
 mld_newpack.isra.0+0x3c0/0x770 net/ipv6/mcast.c:1765
 add_grhead+0x295/0x340 net/ipv6/mcast.c:1851
 add_grec+0x1082/0x1560 net/ipv6/mcast.c:1989
 mld_send_initial_cr.part.0+0xf6/0x230 net/ipv6/mcast.c:2236
 mld_send_initial_cr net/ipv6/mcast.c:1232 [inline]
 mld_dad_work+0x1d3/0x690 net/ipv6/mcast.c:2262
 process_one_work+0x991/0x1610 kernel/workqueue.c:2289
 worker_thread+0x665/0x1080 kernel/workqueue.c:2436
 kthread+0x2e4/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306

Allocated by task 26:
 kasan_save_stack+0x1e/0x40 mm/kasan/common.c:38
 kasan_set_track mm/kasan/common.c:45 [inline]
 set_alloc_info mm/kasan/common.c:437 [inline]
 ____kasan_kmalloc mm/kasan/common.c:516 [inline]
 ____kasan_kmalloc mm/kasan/common.c:475 [inline]
 __kasan_kmalloc+0xa9/0xd0 mm/kasan/common.c:525
 kasan_kmalloc include/linux/kasan.h:234 [inline]
 __do_kmalloc_node mm/slab_common.c:930 [inline]
 __kmalloc_node_track_caller+0x55/0xc0 mm/slab_common.c:950
 kmalloc_reserve net/core/skbuff.c:358 [inline]
 __alloc_skb+0xd9/0x2f0 net/core/skbuff.c:430
 alloc_skb include/linux/skbuff.h:1258 [inline]
 alloc_skb_with_frags+0x93/0x6c0 net/core/skbuff.c:6018
 sock_alloc_send_pskb+0x7a3/0x930 net/core/sock.c:2708
 sock_alloc_send_skb include/net/sock.h:1887 [inline]
 mld_newpack.isra.0+0x1b9/0x770 net/ipv6/mcast.c:1748
 add_grhead+0x295/0x340 net/ipv6/mcast.c:1851
 add_grec+0x1082/0x1560 net/ipv6/mcast.c:1989
 mld_send_initial_cr.part.0+0xf6/0x230 net/ipv6/mcast.c:2236
 mld_send_initial_cr net/ipv6/mcast.c:1232 [inline]
 mld_dad_work+0x1d3/0x690 net/ipv6/mcast.c:2262
 process_one_work+0x991/0x1610 kernel/workqueue.c:2289
 worker_thread+0x665/0x1080 kernel/workqueue.c:2436
 kthread+0x2e4/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306

The buggy address belongs to the object at ffff88801dd6ec00
 which belongs to the cache kmalloc-64 of size 64
The buggy address is located 32 bytes to the right of
 64-byte region [ffff88801dd6ec00, ffff88801dd6ec40)

The buggy address belongs to the physical page:
page:ffffea0000775b80 refcount:1 mapcount:0 mapping:0000000000000000 index:0xffff88801dd6ee00 pfn:0x1dd6e
flags: 0xfff00000000200(slab|node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000000200 ffffea00009f1380 dead000000000002 ffff888011841640
raw: ffff88801dd6ee00 0000000080200015 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x12cc0(GFP_KERNEL|__GFP_NOWARN|__GFP_NORETRY), pid 8, tgid 8 (kworker/u4:0), ts 8231831176, free_ts 7336970850
 prep_new_page mm/page_alloc.c:2534 [inline]
 get_page_from_freelist+0x109b/0x2ce0 mm/page_alloc.c:4284
 __alloc_pages+0x1c7/0x510 mm/page_alloc.c:5542
 alloc_pages+0x1a6/0x270 mm/mempolicy.c:2280
 alloc_slab_page mm/slub.c:1721 [inline]
 allocate_slab+0x228/0x370 mm/slub.c:1866
 new_slab mm/slub.c:1919 [inline]
 ___slab_alloc+0xad0/0x1440 mm/slub.c:3100
 __slab_alloc.constprop.0+0x4d/0xa0 mm/slub.c:3198
 slab_alloc_node mm/slub.c:3283 [inline]
 __kmem_cache_alloc_node+0x18a/0x3d0 mm/slub.c:3356
 kmalloc_node_trace+0x1d/0x60 mm/slab_common.c:1023
 kmalloc_node include/linux/slab.h:581 [inline]
 kzalloc_node include/linux/slab.h:706 [inline]
 __get_vm_area_node+0xed/0x3f0 mm/vmalloc.c:2478
 __vmalloc_node_range+0x250/0x13a0 mm/vmalloc.c:3157
 alloc_thread_stack_node kernel/fork.c:311 [inline]
 dup_task_struct kernel/fork.c:977 [inline]
 copy_process+0x13cd/0x7120 kernel/fork.c:2089
 kernel_clone+0xe7/0xab0 kernel/fork.c:2678
 user_mode_thread+0xad/0xe0 kernel/fork.c:2754
 call_usermodehelper_exec_work kernel/umh.c:174 [inline]
 call_usermodehelper_exec_work+0xcc/0x180 kernel/umh.c:160
 process_one_work+0x991/0x1610 kernel/workqueue.c:2289
 worker_thread+0x665/0x1080 kernel/workqueue.c:2436
page last free stack trace:
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1451 [inline]
 free_pcp_prepare+0x5e4/0xd20 mm/page_alloc.c:1501
 free_unref_page_prepare mm/page_alloc.c:3382 [inline]
 free_unref_page+0x19/0x4d0 mm/page_alloc.c:3478
 __vunmap+0x85d/0xd30 mm/vmalloc.c:2697
 free_work+0x58/0x70 mm/vmalloc.c:97
 process_one_work+0x991/0x1610 kernel/workqueue.c:2289
 worker_thread+0x665/0x1080 kernel/workqueue.c:2436
 kthread+0x2e4/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306

Memory state around the buggy address:
 ffff88801dd6eb00: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
 ffff88801dd6eb80: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
>ffff88801dd6ec00: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
                                                       ^
 ffff88801dd6ec80: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
 ffff88801dd6ed00: 00 00 00 00 00 00 00 00 fc fc fc fc fc fc fc fc
==================================================================