==================================================================
BUG: KASAN: slab-use-after-free in __hlist_del include/linux/list.h:980 [inline]
BUG: KASAN: slab-use-after-free in hlist_del_rcu include/linux/rculist.h:560 [inline]
BUG: KASAN: slab-use-after-free in __xfrm_state_delete+0x666/0xca0 net/xfrm/xfrm_state.c:830
Write of size 8 at addr ffff8880565f3ba8 by task kworker/u8:14/1222

CPU: 0 UID: 0 PID: 1222 Comm: kworker/u8:14 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/18/2025
Workqueue: netns cleanup_net
Call Trace:
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:378 [inline]
 print_report+0xca/0x240 mm/kasan/report.c:482
 kasan_report+0x118/0x150 mm/kasan/report.c:595
 __hlist_del include/linux/list.h:980 [inline]
 hlist_del_rcu include/linux/rculist.h:560 [inline]
 __xfrm_state_delete+0x666/0xca0 net/xfrm/xfrm_state.c:830
 xfrm_state_delete net/xfrm/xfrm_state.c:856 [inline]
 xfrm_state_flush+0x45f/0x770 net/xfrm/xfrm_state.c:939
 xfrm6_tunnel_net_exit+0x3c/0x100 net/ipv6/xfrm6_tunnel.c:337
 ops_exit_list net/core/net_namespace.c:198 [inline]
 ops_undo_list+0x49a/0x990 net/core/net_namespace.c:251
 cleanup_net+0x4c5/0x800 net/core/net_namespace.c:682
 process_one_work kernel/workqueue.c:3236 [inline]
 process_scheduled_works+0xae1/0x17b0 kernel/workqueue.c:3319
 worker_thread+0x8a0/0xda0 kernel/workqueue.c:3400
 kthread+0x70e/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x436/0x7d0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

Allocated by task 524:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:68
 unpoison_slab_object mm/kasan/common.c:330 [inline]
 __kasan_slab_alloc+0x6c/0x80 mm/kasan/common.c:356
 kasan_slab_alloc include/linux/kasan.h:250 [inline]
 slab_post_alloc_hook mm/slub.c:4191 [inline]
 slab_alloc_node mm/slub.c:4240 [inline]
 kmem_cache_alloc_noprof+0x1c1/0x3c0 mm/slub.c:4247
 xfrm_state_alloc+0x24/0x2f0 net/xfrm/xfrm_state.c:733
 __find_acq_core+0x8a7/0x1c00 net/xfrm/xfrm_state.c:1833
 xfrm_find_acq+0x78/0xa0 net/xfrm/xfrm_state.c:2353
 xfrm_alloc_userspi+0x6b3/0xc90 net/xfrm/xfrm_user.c:1863
 xfrm_user_rcv_msg+0x7a0/0xab0 net/xfrm/xfrm_user.c:3501
 netlink_rcv_skb+0x205/0x470 net/netlink/af_netlink.c:2552
 xfrm_netlink_rcv+0x79/0x90 net/xfrm/xfrm_user.c:3523
 netlink_unicast_kernel net/netlink/af_netlink.c:1320 [inline]
 netlink_unicast+0x82c/0x9e0 net/netlink/af_netlink.c:1346
 netlink_sendmsg+0x805/0xb30 net/netlink/af_netlink.c:1896
 sock_sendmsg_nosec net/socket.c:714 [inline]
 __sock_sendmsg+0x21c/0x270 net/socket.c:729
 ____sys_sendmsg+0x505/0x830 net/socket.c:2614
 ___sys_sendmsg+0x21f/0x2a0 net/socket.c:2668
 __sys_sendmsg net/socket.c:2700 [inline]
 __do_sys_sendmsg net/socket.c:2705 [inline]
 __se_sys_sendmsg net/socket.c:2703 [inline]
 __x64_sys_sendmsg+0x19b/0x260 net/socket.c:2703
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Freed by task 20086:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:68
 kasan_save_free_info+0x46/0x50 mm/kasan/generic.c:576
 poison_slab_object mm/kasan/common.c:243 [inline]
 __kasan_slab_free+0x5b/0x80 mm/kasan/common.c:275
 kasan_slab_free include/linux/kasan.h:233 [inline]
 slab_free_hook mm/slub.c:2422 [inline]
 slab_free mm/slub.c:4695 [inline]
 kmem_cache_free+0x18f/0x400 mm/slub.c:4797
 xfrm_state_free net/xfrm/xfrm_state.c:591 [inline]
 xfrm_state_gc_destroy net/xfrm/xfrm_state.c:618 [inline]
 xfrm_state_gc_task+0x52d/0x6b0 net/xfrm/xfrm_state.c:634
 process_one_work kernel/workqueue.c:3236 [inline]
 process_scheduled_works+0xae1/0x17b0 kernel/workqueue.c:3319
 worker_thread+0x8a0/0xda0 kernel/workqueue.c:3400
 kthread+0x70e/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x436/0x7d0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

The buggy address belongs to the object at ffff8880565f3b80
 which belongs to the cache xfrm_state of size 928
The buggy address is located 40 bytes inside of
 freed 928-byte region [ffff8880565f3b80, ffff8880565f3f20)

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x565f0
head: order:2 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0xfff00000000040(head|node=0|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 00fff00000000040 ffff888021292c80 dead000000000122 0000000000000000
raw: 0000000000000000 00000000800f000f 00000000f5000000 0000000000000000
head: 00fff00000000040 ffff888021292c80 dead000000000122 0000000000000000
head: 0000000000000000 00000000800f000f 00000000f5000000 0000000000000000
head: 00fff00000000002 ffffea0001597c01 00000000ffffffff 00000000ffffffff
head: ffffffffffffffff 0000000000000000 00000000ffffffff 0000000000000004
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 2, migratetype Unmovable, gfp_mask 0x52820(GFP_ATOMIC|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP), pid 6123, tgid 6119 (syz.3.37), ts 91034610625, free_ts 90947409235
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x240/0x2a0 mm/page_alloc.c:1851
 prep_new_page mm/page_alloc.c:1859 [inline]
 get_page_from_freelist+0x21e4/0x22c0 mm/page_alloc.c:3858
 __alloc_frozen_pages_noprof+0x181/0x370 mm/page_alloc.c:5148
 alloc_pages_mpol+0x232/0x4a0 mm/mempolicy.c:2416
 alloc_slab_page mm/slub.c:2492 [inline]
 allocate_slab+0x8a/0x370 mm/slub.c:2660
 new_slab mm/slub.c:2714 [inline]
 ___slab_alloc+0xbeb/0x1420 mm/slub.c:3901
 __slab_alloc mm/slub.c:3992 [inline]
 __slab_alloc_node mm/slub.c:4067 [inline]
 slab_alloc_node mm/slub.c:4228 [inline]
 kmem_cache_alloc_noprof+0x283/0x3c0 mm/slub.c:4247
 xfrm_state_alloc+0x24/0x2f0 net/xfrm/xfrm_state.c:733
 xfrm_add_acquire+0xf7/0xb20 net/xfrm/xfrm_user.c:2990
 xfrm_user_rcv_msg+0x7a0/0xab0 net/xfrm/xfrm_user.c:3501
 netlink_rcv_skb+0x205/0x470 net/netlink/af_netlink.c:2552
 xfrm_netlink_rcv+0x79/0x90 net/xfrm/xfrm_user.c:3523
 netlink_unicast_kernel net/netlink/af_netlink.c:1320 [inline]
 netlink_unicast+0x82c/0x9e0 net/netlink/af_netlink.c:1346
 netlink_sendmsg+0x805/0xb30 net/netlink/af_netlink.c:1896
 sock_sendmsg_nosec net/socket.c:714 [inline]
 __sock_sendmsg+0x21c/0x270 net/socket.c:729
 ____sys_sendmsg+0x505/0x830 net/socket.c:2614
page last free pid 6121 tgid 6119 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1395 [inline]
 __free_frozen_pages+0xbc4/0xd30 mm/page_alloc.c:2895
 stack_depot_save_flags+0x436/0x860 lib/stackdepot.c:727
 ref_tracker_free+0xfe/0x7d0 lib/ref_tracker.c:308
 netdev_tracker_free include/linux/netdevice.h:4369 [inline]
 netdev_put include/linux/netdevice.h:4386 [inline]
 neigh_parms_release+0x19c/0x230 net/core/neighbour.c:1792
 addrconf_ifdown+0x15b8/0x1880 net/ipv6/addrconf.c:4007
 addrconf_notify+0x1bc/0x1010 net/ipv6/addrconf.c:-1
 notifier_call_chain+0x1b3/0x3e0 kernel/notifier.c:85
 call_netdevice_notifiers_extack net/core/dev.c:2267 [inline]
 call_netdevice_notifiers net/core/dev.c:2281 [inline]
 unregister_netdevice_many_notify+0x14d7/0x1ff0 net/core/dev.c:12166
 rtnl_delete_link net/core/rtnetlink.c:3513 [inline]
 rtnl_dellink+0x488/0x710 net/core/rtnetlink.c:3555
 rtnetlink_rcv_msg+0x7cf/0xb70 net/core/rtnetlink.c:6946
 netlink_rcv_skb+0x205/0x470 net/netlink/af_netlink.c:2552
 netlink_unicast_kernel net/netlink/af_netlink.c:1320 [inline]
 netlink_unicast+0x82c/0x9e0 net/netlink/af_netlink.c:1346
 netlink_sendmsg+0x805/0xb30 net/netlink/af_netlink.c:1896
 sock_sendmsg_nosec net/socket.c:714 [inline]
 __sock_sendmsg+0x21c/0x270 net/socket.c:729
 ____sys_sendmsg+0x505/0x830 net/socket.c:2614
 ___sys_sendmsg+0x21f/0x2a0 net/socket.c:2668

Memory state around the buggy address:
 ffff8880565f3a80: 00 00 00 00 00 00 00 00 00 00 00 00 fc fc fc fc
 ffff8880565f3b00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff8880565f3b80: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                  ^
 ffff8880565f3c00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8880565f3c80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================