==================================================================
BUG: KASAN: use-after-free in xfrm6_tunnel_free_spi net/ipv6/xfrm6_tunnel.c:208 [inline]
BUG: KASAN: use-after-free in xfrm6_tunnel_destroy+0x4f6/0x570 net/ipv6/xfrm6_tunnel.c:303
Read of size 8 at addr ffff8801cd550178 by task kworker/1:3/2112

CPU: 1 PID: 2112 Comm: kworker/1:3 Not tainted 4.9.191+ #0
Workqueue: events xfrm_state_gc_task
 ffff8801ce3e7a60 ffffffff81b67171 0000000000000000 ffffea0007355400
 ffff8801cd550178 0000000000000008 ffffffff8278ddc6 ffff8801ce3e7a98
 ffffffff8150c681 0000000000000000 ffff8801cd550178 ffff8801cd550178
Call Trace:
 [<0000000043a8316b>] __dump_stack lib/dump_stack.c:15 [inline]
 [<0000000043a8316b>] dump_stack+0xc1/0x120 lib/dump_stack.c:51
 [<00000000b48d1090>] print_address_description+0x6f/0x23a mm/kasan/report.c:256
 [<0000000088aa0472>] kasan_report_error mm/kasan/report.c:355 [inline]
 [<0000000088aa0472>] kasan_report mm/kasan/report.c:413 [inline]
 [<0000000088aa0472>] kasan_report.cold+0x8c/0x2ba mm/kasan/report.c:397
 [<0000000005ed264d>] __asan_report_load8_noabort+0x14/0x20 mm/kasan/report.c:434
 [<0000000080ea7a2b>] xfrm6_tunnel_free_spi net/ipv6/xfrm6_tunnel.c:208 [inline]
 [<0000000080ea7a2b>] xfrm6_tunnel_destroy+0x4f6/0x570 net/ipv6/xfrm6_tunnel.c:303
 [<000000007b846776>] xfrm_state_gc_destroy net/xfrm/xfrm_state.c:368 [inline]
 [<000000007b846776>] xfrm_state_gc_task+0x3b9/0x520 net/xfrm/xfrm_state.c:388
 [<000000002ace92c3>] process_one_work+0x88b/0x1600 kernel/workqueue.c:2114
 [<000000008a260964>] worker_thread+0x5df/0x11d0 kernel/workqueue.c:2251
 [<000000008eed24cf>] kthread+0x278/0x310 kernel/kthread.c:211
 [<000000006c1db328>] ret_from_fork+0x5c/0x70 arch/x86/entry/entry_64.S:375

Allocated by task 2091:
 save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:57
 save_stack mm/kasan/kasan.c:512 [inline]
 set_track mm/kasan/kasan.c:524 [inline]
 kasan_kmalloc.part.0+0x62/0xf0 mm/kasan/kasan.c:616
 kasan_kmalloc+0xb7/0xd0 mm/kasan/kasan.c:601
 __kmalloc+0x133/0x320 mm/slub.c:3741
 kmalloc include/linux/slab.h:495 [inline]
 kzalloc include/linux/slab.h:636 [inline]
 ops_init+0xf1/0x3a0 net/core/net_namespace.c:101
 setup_net+0x1c8/0x500 net/core/net_namespace.c:292
 copy_net_ns+0x191/0x340 net/core/net_namespace.c:409
 create_new_namespaces+0x37c/0x7a0 kernel/nsproxy.c:106
 unshare_nsproxy_namespaces+0xab/0x1e0 kernel/nsproxy.c:205
 SYSC_unshare kernel/fork.c:2402 [inline]
 SyS_unshare+0x305/0x6f0 kernel/fork.c:2352
 do_syscall_64+0x1ad/0x5c0 arch/x86/entry/common.c:288
 entry_SYSCALL_64_after_swapgs+0x5d/0xdb

Freed by task 64:
 save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:57
 save_stack mm/kasan/kasan.c:512 [inline]
 set_track mm/kasan/kasan.c:524 [inline]
 kasan_slab_free+0xb0/0x190 mm/kasan/kasan.c:589
 slab_free_hook mm/slub.c:1355 [inline]
 slab_free_freelist_hook mm/slub.c:1377 [inline]
 slab_free mm/slub.c:2958 [inline]
 kfree+0xfc/0x310 mm/slub.c:3878
 ops_free net/core/net_namespace.c:126 [inline]
 ops_free_list.part.0+0x1ff/0x330 net/core/net_namespace.c:148
 ops_free_list net/core/net_namespace.c:146 [inline]
 cleanup_net+0x474/0x8a0 net/core/net_namespace.c:478
 process_one_work+0x88b/0x1600 kernel/workqueue.c:2114
 worker_thread+0x5df/0x11d0 kernel/workqueue.c:2251
 kthread+0x278/0x310 kernel/kthread.c:211
 ret_from_fork+0x5c/0x70 arch/x86/entry/entry_64.S:375

The buggy address belongs to the object at ffff8801cd550000
 which belongs to the cache kmalloc-8192 of size 8192
The buggy address is located 376 bytes inside of
 8192-byte region [ffff8801cd550000, ffff8801cd552000)
The buggy address belongs to the page:
page:ffffea0007355400 count:1 mapcount:0 mapping:          (null) index:0x0 compound_mapcount: 0
flags: 0x4000000000010200(slab|head)
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 ffff8801cd550000: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8801cd550080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff8801cd550100: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                                                ^
 ffff8801cd550180: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8801cd550200: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================