==================================================================
BUG: KASAN: use-after-free in xfrm6_tunnel_free_spi net/ipv6/xfrm6_tunnel.c:208 [inline]
BUG: KASAN: use-after-free in xfrm6_tunnel_destroy+0x4f6/0x570 net/ipv6/xfrm6_tunnel.c:303
Read of size 8 at addr ffff8801cf2442b8 by task kworker/0:2/351

CPU: 0 PID: 351 Comm: kworker/0:2 Not tainted 4.9.203-syzkaller #0
Workqueue: events xfrm_state_gc_task
 ffff8801d51efa60 ffffffff81b55f6b 0000000000000000 ffffea00073c9000
 ffff8801cf2442b8 0000000000000008 ffffffff8277d966 ffff8801d51efa98
 ffffffff8150c461 0000000000000000 ffff8801cf2442b8 ffff8801cf2442b8
Call Trace:
 [<000000005487fbe9>] __dump_stack lib/dump_stack.c:15 [inline]
 [<000000005487fbe9>] dump_stack+0xcb/0x130 lib/dump_stack.c:56
 [<000000002edb3a9b>] print_address_description+0x6f/0x23a mm/kasan/report.c:256
 [<0000000006eee4eb>] kasan_report_error mm/kasan/report.c:355 [inline]
 [<0000000006eee4eb>] kasan_report mm/kasan/report.c:413 [inline]
 [<0000000006eee4eb>] kasan_report.cold+0x8c/0x2ba mm/kasan/report.c:397
 [<00000000197dc964>] __asan_report_load8_noabort+0x14/0x20 mm/kasan/report.c:434
 [<000000000b5da54d>] xfrm6_tunnel_free_spi net/ipv6/xfrm6_tunnel.c:208 [inline]
 [<000000000b5da54d>] xfrm6_tunnel_destroy+0x4f6/0x570 net/ipv6/xfrm6_tunnel.c:303
 [<000000008bb01067>] xfrm_state_gc_destroy net/xfrm/xfrm_state.c:368 [inline]
 [<000000008bb01067>] xfrm_state_gc_task+0x3b9/0x520 net/xfrm/xfrm_state.c:388
 [<000000006997fc30>] process_one_work+0x88b/0x1600 kernel/workqueue.c:2114
 [<00000000aedc102c>] worker_thread+0x5df/0x11d0 kernel/workqueue.c:2251
 [<0000000010bc4454>] kthread+0x278/0x310 kernel/kthread.c:211
 [<0000000056ea9ab6>] ret_from_fork+0x5c/0x70 arch/x86/entry/entry_64.S:375

Allocated by task 2103:
 save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:57
 save_stack mm/kasan/kasan.c:512 [inline]
 set_track mm/kasan/kasan.c:524 [inline]
 kasan_kmalloc.part.0+0x62/0xf0 mm/kasan/kasan.c:616
 kasan_kmalloc+0xb7/0xd0 mm/kasan/kasan.c:601
 __kmalloc+0x133/0x320 mm/slub.c:3741
 kmalloc include/linux/slab.h:495 [inline]
 kzalloc include/linux/slab.h:636 [inline]
 ops_init+0xf1/0x3a0 net/core/net_namespace.c:101
 setup_net+0x1c8/0x500 net/core/net_namespace.c:292
 copy_net_ns+0x191/0x340 net/core/net_namespace.c:409
 create_new_namespaces+0x37c/0x7a0 kernel/nsproxy.c:106
 unshare_nsproxy_namespaces+0xab/0x1e0 kernel/nsproxy.c:205
 SYSC_unshare kernel/fork.c:2402 [inline]
 SyS_unshare+0x305/0x6f0 kernel/fork.c:2352
 do_syscall_64+0x1ad/0x5c0 arch/x86/entry/common.c:288
 entry_SYSCALL_64_after_swapgs+0x5d/0xdb

Freed by task 5:
 save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:57
 save_stack mm/kasan/kasan.c:512 [inline]
 set_track mm/kasan/kasan.c:524 [inline]
 kasan_slab_free+0xb0/0x190 mm/kasan/kasan.c:589
 slab_free_hook mm/slub.c:1355 [inline]
 slab_free_freelist_hook mm/slub.c:1377 [inline]
 slab_free mm/slub.c:2958 [inline]
 kfree+0xfc/0x310 mm/slub.c:3878
 ops_free net/core/net_namespace.c:126 [inline]
 ops_free_list.part.0+0x1ff/0x330 net/core/net_namespace.c:148
 ops_free_list net/core/net_namespace.c:146 [inline]
 cleanup_net+0x474/0x8a0 net/core/net_namespace.c:478
 process_one_work+0x88b/0x1600 kernel/workqueue.c:2114
 worker_thread+0x5df/0x11d0 kernel/workqueue.c:2251
 kthread+0x278/0x310 kernel/kthread.c:211
 ret_from_fork+0x5c/0x70 arch/x86/entry/entry_64.S:375

The buggy address belongs to the object at ffff8801cf244200
 which belongs to the cache kmalloc-8192 of size 8192
The buggy address is located 184 bytes inside of
 8192-byte region [ffff8801cf244200, ffff8801cf246200)
The buggy address belongs to the page:
page:ffffea00073c9000 count:1 mapcount:0 mapping:          (null) index:0x0 compound_mapcount: 0
flags: 0x4000000000010200(slab|head)
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 ffff8801cf244180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff8801cf244200: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff8801cf244280: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                        ^
 ffff8801cf244300: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8801cf244380: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
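Reading the three traces together: the freed object is a per-netns allocation made by ops_init() during setup_net() (task 2103's unshare), torn down by ops_free_list() in cleanup_net() (task 5), and then dereferenced 184 bytes in by xfrm6_tunnel_free_spi() running later from the deferred xfrm_state_gc_task work item. Below is a minimal user-space sketch of that lifetime pattern, not the kernel code itself; all names in it are illustrative stand-ins. Built with `gcc -fsanitize=address -pthread`, it produces an ASan heap-use-after-free report of the same shape as the KASAN splat above.

/* Model: deferred "GC" work dereferences per-net state that the
 * namespace-teardown path has already freed. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

struct pernet_state {                   /* stand-in for xfrm6_tunnel's per-net data */
	unsigned long spi_byaddr[8];    /* stand-in for the SPI hash tables */
};

static struct pernet_state *gc_pending; /* pointer still held by queued GC work */

/* stand-in for the workqueue thread running xfrm_state_gc_task */
static void *gc_worker(void *arg)
{
	(void)arg;
	usleep(100 * 1000);             /* let the "cleanup_net" path win the race */
	/* reads freed memory, like the load at net/ipv6/xfrm6_tunnel.c:208 */
	printf("gc sees %lu\n", gc_pending->spi_byaddr[7]);
	return NULL;
}

int main(void)
{
	pthread_t worker;

	/* ops_init()/setup_net() analogue: allocate the per-net state */
	gc_pending = calloc(1, sizeof(*gc_pending));

	pthread_create(&worker, NULL, gc_worker, NULL);

	/* cleanup_net()/ops_free() analogue: free before the GC work runs */
	free(gc_pending);

	pthread_join(worker, NULL);
	return 0;
}

The sketch suggests where a fix has to act: either the GC work must be flushed before the per-net exit handler returns, or the destroy path must not reach back into per-net state after netns teardown has begun.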