==================================================================
BUG: KASAN: use-after-free in xfrm6_tunnel_free_spi net/ipv6/xfrm6_tunnel.c:208 [inline]
BUG: KASAN: use-after-free in xfrm6_tunnel_destroy+0x4f6/0x570 net/ipv6/xfrm6_tunnel.c:303
Read of size 8 at addr ffff8801d79c49a0 by task kworker/1:1/22

CPU: 1 PID: 22 Comm: kworker/1:1 Not tainted 4.9.203-syzkaller #0
Workqueue: events xfrm_state_gc_task
 ffff8801d9c4fa60 ffffffff81b55f6b 0000000000000000 ffffea00075e7000
 ffff8801d79c49a0 0000000000000008 ffffffff8277d966 ffff8801d9c4fa98
 ffffffff8150c461 0000000000000000 ffff8801d79c49a0 ffff8801d79c49a0
Call Trace:
 [<0000000028375f0f>] __dump_stack lib/dump_stack.c:15 [inline]
 [<0000000028375f0f>] dump_stack+0xcb/0x130 lib/dump_stack.c:56
 [<000000006335f1ec>] print_address_description+0x6f/0x23a mm/kasan/report.c:256
 [<00000000569a6750>] kasan_report_error mm/kasan/report.c:355 [inline]
 [<00000000569a6750>] kasan_report mm/kasan/report.c:413 [inline]
 [<00000000569a6750>] kasan_report.cold+0x8c/0x2ba mm/kasan/report.c:397
 [<0000000017e2b827>] __asan_report_load8_noabort+0x14/0x20 mm/kasan/report.c:434
 [<0000000062efc04a>] xfrm6_tunnel_free_spi net/ipv6/xfrm6_tunnel.c:208 [inline]
 [<0000000062efc04a>] xfrm6_tunnel_destroy+0x4f6/0x570 net/ipv6/xfrm6_tunnel.c:303
 [<00000000c663b484>] xfrm_state_gc_destroy net/xfrm/xfrm_state.c:368 [inline]
 [<00000000c663b484>] xfrm_state_gc_task+0x3b9/0x520 net/xfrm/xfrm_state.c:388
 [<000000009a66d2ec>] process_one_work+0x88b/0x1600 kernel/workqueue.c:2114
 [<000000005aebec06>] worker_thread+0x5df/0x11d0 kernel/workqueue.c:2251
 [<00000000c9f9f306>] kthread+0x278/0x310 kernel/kthread.c:211
 [<000000005d4d3c3a>] ret_from_fork+0x5c/0x70 arch/x86/entry/entry_64.S:375

Allocated by task 2102:
 save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:57
 save_stack mm/kasan/kasan.c:512 [inline]
 set_track mm/kasan/kasan.c:524 [inline]
 kasan_kmalloc.part.0+0x62/0xf0 mm/kasan/kasan.c:616
 kasan_kmalloc+0xb7/0xd0 mm/kasan/kasan.c:601
 __kmalloc+0x133/0x320 mm/slub.c:3741
 kmalloc include/linux/slab.h:495 [inline]
 kzalloc include/linux/slab.h:636 [inline]
 ops_init+0xf1/0x3a0 net/core/net_namespace.c:101
 setup_net+0x1c8/0x500 net/core/net_namespace.c:292
 copy_net_ns+0x191/0x340 net/core/net_namespace.c:409
 create_new_namespaces+0x37c/0x7a0 kernel/nsproxy.c:106
 unshare_nsproxy_namespaces+0xab/0x1e0 kernel/nsproxy.c:205
 SYSC_unshare kernel/fork.c:2402 [inline]
 SyS_unshare+0x305/0x6f0 kernel/fork.c:2352
 do_syscall_64+0x1ad/0x5c0 arch/x86/entry/common.c:288
 entry_SYSCALL_64_after_swapgs+0x5d/0xdb

Freed by task 64:
 save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:57
 save_stack mm/kasan/kasan.c:512 [inline]
 set_track mm/kasan/kasan.c:524 [inline]
 kasan_slab_free+0xb0/0x190 mm/kasan/kasan.c:589
 slab_free_hook mm/slub.c:1355 [inline]
 slab_free_freelist_hook mm/slub.c:1377 [inline]
 slab_free mm/slub.c:2958 [inline]
 kfree+0xfc/0x310 mm/slub.c:3878
 ops_free net/core/net_namespace.c:126 [inline]
 ops_free_list.part.0+0x1ff/0x330 net/core/net_namespace.c:148
 ops_free_list net/core/net_namespace.c:146 [inline]
 cleanup_net+0x474/0x8a0 net/core/net_namespace.c:478
 process_one_work+0x88b/0x1600 kernel/workqueue.c:2114
 worker_thread+0x5df/0x11d0 kernel/workqueue.c:2251
 kthread+0x278/0x310 kernel/kthread.c:211
 ret_from_fork+0x5c/0x70 arch/x86/entry/entry_64.S:375

The buggy address belongs to the object at ffff8801d79c4200
 which belongs to the cache kmalloc-8192 of size 8192
The buggy address is located 1952 bytes inside of
 8192-byte region [ffff8801d79c4200, ffff8801d79c6200)
The buggy address belongs to the page:
page:ffffea00075e7000 count:1 mapcount:0 mapping: (null) index:0x0 compound_mapcount: 0
flags: 0x4000000000010200(slab|head)
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 ffff8801d79c4880: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8801d79c4900: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff8801d79c4980: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                               ^
 ffff8801d79c4a00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8801d79c4a80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
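The alloc/free stacks show the shape of the bug: the per-netns data is kzalloc'd
in ops_init()/setup_net() when the namespace is created, kfree'd by
ops_free()/cleanup_net() when it exits, and the deferred xfrm_state_gc_task then
dereferences a pointer into that already-freed region (offset 1952 into the
kmalloc-8192 object). Below is a minimal userspace sketch of the same lifetime
race, not kernel code: fake_net, gc_work, and gc_worker are hypothetical
stand-ins for struct net's per-netns data, the queued GC work item, and
xfrm_state_gc_task respectively.

/* Hypothetical userspace analogue of the splat above: a deferred "GC"
 * callback keeps a raw pointer to per-namespace data that the cleanup
 * path frees first. Build with: gcc -fsanitize=address -pthread uaf.c */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_net {            /* stands in for struct net + its per-net xfrm6_tunnel data */
    unsigned long spi_table; /* stands in for the hash table read at +1952 bytes */
};

struct gc_work {             /* stands in for the deferred xfrm_state_gc_task work */
    struct fake_net *net;    /* raw pointer captured before the free below */
};

static void *gc_worker(void *arg)
{
    struct gc_work *w = arg;
    /* Use-after-free: w->net was already freed by the cleanup path. */
    printf("gc reads spi_table = %lu\n", w->net->spi_table);
    free(w);
    return NULL;
}

int main(void)
{
    struct fake_net *net = calloc(1, sizeof(*net)); /* setup_net()/ops_init() side */
    struct gc_work *w = malloc(sizeof(*w));
    w->net = net;            /* deferred work captures the netns pointer */

    free(net);               /* cleanup_net()/ops_free() runs first ...      */

    pthread_t t;             /* ... then the deferred GC touches stale memory */
    pthread_create(&t, NULL, gc_worker, w);
    pthread_join(t, NULL);
    return 0;
}

Run under AddressSanitizer this reports a heap-use-after-free with matching
alloc/free/use stacks, the userspace analogue of the KASAN report: the read
happens on a workqueue context that outlives the namespace teardown, so the
fix has to make the deferred state destruction finish before (or synchronously
with) the per-netns free, not merely reorder the two kfree sites.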