==================================================================
BUG: KASAN: use-after-free in xfrm6_tunnel_free_spi net/ipv6/xfrm6_tunnel.c:208 [inline]
BUG: KASAN: use-after-free in xfrm6_tunnel_destroy+0x4f6/0x570 net/ipv6/xfrm6_tunnel.c:303
Read of size 8 at addr ffff8801cb81c2b8 by task kworker/0:6/10070

CPU: 0 PID: 10070 Comm: kworker/0:6 Not tainted 4.9.205-syzkaller #0
Workqueue: events xfrm_state_gc_task
 ffff8801a33bfa60 ffffffff81b55e6b 0000000000000000 ffffea00072e0600
 ffff8801cb81c2b8 0000000000000008 ffffffff8277d876 ffff8801a33bfa98
 ffffffff8150c361 0000000000000000 ffff8801cb81c2b8 ffff8801cb81c2b8
Call Trace:
 [<000000002cb98e86>] __dump_stack lib/dump_stack.c:15 [inline]
 [<000000002cb98e86>] dump_stack+0xcb/0x130 lib/dump_stack.c:56
 [<00000000f4e8c459>] print_address_description+0x6f/0x23a mm/kasan/report.c:256
 [<000000004599c391>] kasan_report_error mm/kasan/report.c:355 [inline]
 [<000000004599c391>] kasan_report mm/kasan/report.c:413 [inline]
 [<000000004599c391>] kasan_report.cold+0x8c/0x2ba mm/kasan/report.c:397
 [<00000000e83ccef0>] __asan_report_load8_noabort+0x14/0x20 mm/kasan/report.c:434
 [<0000000097a69811>] xfrm6_tunnel_free_spi net/ipv6/xfrm6_tunnel.c:208 [inline]
 [<0000000097a69811>] xfrm6_tunnel_destroy+0x4f6/0x570 net/ipv6/xfrm6_tunnel.c:303
 [<0000000030ecccca>] xfrm_state_gc_destroy net/xfrm/xfrm_state.c:368 [inline]
 [<0000000030ecccca>] xfrm_state_gc_task+0x3b9/0x520 net/xfrm/xfrm_state.c:388
 [<0000000055495301>] process_one_work+0x88b/0x1600 kernel/workqueue.c:2114
 [<00000000ddaff7d4>] worker_thread+0x5df/0x11d0 kernel/workqueue.c:2251
 [<00000000340b82d5>] kthread+0x278/0x310 kernel/kthread.c:211
 [<000000002d1c7309>] ret_from_fork+0x5c/0x70 arch/x86/entry/entry_64.S:375

Allocated by task 18108:
 save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:57
 save_stack mm/kasan/kasan.c:512 [inline]
 set_track mm/kasan/kasan.c:524 [inline]
 kasan_kmalloc.part.0+0x62/0xf0 mm/kasan/kasan.c:616
 kasan_kmalloc+0xb7/0xd0 mm/kasan/kasan.c:601
 __kmalloc+0x133/0x320 mm/slub.c:3741
 kmalloc include/linux/slab.h:495 [inline]
 kzalloc include/linux/slab.h:636 [inline]
 ops_init+0xf1/0x3a0 net/core/net_namespace.c:101
 setup_net+0x1c8/0x500 net/core/net_namespace.c:292
 copy_net_ns+0x191/0x340 net/core/net_namespace.c:409
 create_new_namespaces+0x37c/0x7a0 kernel/nsproxy.c:106
 unshare_nsproxy_namespaces+0xab/0x1e0 kernel/nsproxy.c:205
 SYSC_unshare kernel/fork.c:2402 [inline]
 SyS_unshare+0x305/0x6f0 kernel/fork.c:2352
 do_syscall_64+0x1ad/0x5c0 arch/x86/entry/common.c:288
 entry_SYSCALL_64_after_swapgs+0x5d/0xdb

Freed by task 32014:
 save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:57
 save_stack mm/kasan/kasan.c:512 [inline]
 set_track mm/kasan/kasan.c:524 [inline]
 kasan_slab_free+0xb0/0x190 mm/kasan/kasan.c:589
 slab_free_hook mm/slub.c:1355 [inline]
 slab_free_freelist_hook mm/slub.c:1377 [inline]
 slab_free mm/slub.c:2958 [inline]
 kfree+0xfc/0x310 mm/slub.c:3878
 ops_free net/core/net_namespace.c:126 [inline]
 ops_free_list.part.0+0x1ff/0x330 net/core/net_namespace.c:148
 ops_free_list net/core/net_namespace.c:146 [inline]
 cleanup_net+0x474/0x8a0 net/core/net_namespace.c:478
 process_one_work+0x88b/0x1600 kernel/workqueue.c:2114
 worker_thread+0x5df/0x11d0 kernel/workqueue.c:2251
 kthread+0x278/0x310 kernel/kthread.c:211
 ret_from_fork+0x5c/0x70 arch/x86/entry/entry_64.S:375

The buggy address belongs to the object at ffff8801cb81c200
 which belongs to the cache kmalloc-8192 of size 8192
The buggy address is located 184 bytes inside of
 8192-byte region [ffff8801cb81c200, ffff8801cb81e200)
The buggy address belongs to the page:
page:ffffea00072e0600 count:1 mapcount:0 mapping:          (null) index:0x0 compound_mapcount: 0
flags: 0x4000000000010200(slab|head)
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 ffff8801cb81c180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff8801cb81c200: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff8801cb81c280: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                        ^
 ffff8801cb81c300: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8801cb81c380: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================