==================================================================
BUG: KASAN: slab-use-after-free in __list_del_entry_valid_or_report+0x2f/0x130 lib/list_debug.c:49
Read of size 8 at addr ffff88802d43c0b0 by task kworker/u4:4/58

CPU: 0 PID: 58 Comm: kworker/u4:4 Not tainted 6.6.0-syzkaller-10265-gbabe393974de #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/09/2023
Workqueue: btrfs-qgroup-rescan btrfs_work_helper
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
 print_address_description mm/kasan/report.c:364 [inline]
 print_report+0x163/0x540 mm/kasan/report.c:475
 kasan_report+0x175/0x1b0 mm/kasan/report.c:588
 __list_del_entry_valid_or_report+0x2f/0x130 lib/list_debug.c:49
 __list_del_entry_valid include/linux/list.h:124 [inline]
 __list_del_entry include/linux/list.h:215 [inline]
 list_del_init include/linux/list.h:287 [inline]
 qgroup_iterator_nested_clean fs/btrfs/qgroup.c:2623 [inline]
 btrfs_qgroup_account_extent+0x18b/0x1150 fs/btrfs/qgroup.c:2883
 qgroup_rescan_leaf fs/btrfs/qgroup.c:3543 [inline]
 btrfs_qgroup_rescan_worker+0x1078/0x1c60 fs/btrfs/qgroup.c:3604
 btrfs_work_helper+0x37c/0xbd0 fs/btrfs/async-thread.c:315
 process_one_work kernel/workqueue.c:2630 [inline]
 process_scheduled_works+0x90f/0x1400 kernel/workqueue.c:2703
 worker_thread+0xa5f/0xff0 kernel/workqueue.c:2784
 kthread+0x2d3/0x370 kernel/kthread.c:388
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242

Allocated by task 14246:
 kasan_save_stack mm/kasan/common.c:45 [inline]
 kasan_set_track+0x4f/0x70 mm/kasan/common.c:52
 ____kasan_kmalloc mm/kasan/common.c:374 [inline]
 __kasan_kmalloc+0x98/0xb0 mm/kasan/common.c:383
 kmalloc include/linux/slab.h:600 [inline]
 kzalloc include/linux/slab.h:721 [inline]
 btrfs_quota_enable+0xee9/0x2060 fs/btrfs/qgroup.c:1209
 btrfs_ioctl_quota_ctl+0x143/0x190 fs/btrfs/ioctl.c:3705
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:871 [inline]
 __se_sys_ioctl+0xf8/0x170 fs/ioctl.c:857
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x44/0x110 arch/x86/entry/common.c:82
 entry_SYSCALL_64_after_hwframe+0x63/0x6b

Freed by task 14334:
 kasan_save_stack mm/kasan/common.c:45 [inline]
 kasan_set_track+0x4f/0x70 mm/kasan/common.c:52
 kasan_save_free_info+0x28/0x40 mm/kasan/generic.c:522
 ____kasan_slab_free+0xd6/0x120 mm/kasan/common.c:236
 kasan_slab_free include/linux/kasan.h:164 [inline]
 slab_free_hook mm/slub.c:1800 [inline]
 slab_free_freelist_hook mm/slub.c:1826 [inline]
 slab_free mm/slub.c:3809 [inline]
 __kmem_cache_free+0x263/0x3a0 mm/slub.c:3822
 btrfs_remove_qgroup+0x764/0x8c0 fs/btrfs/qgroup.c:1787
 btrfs_ioctl_qgroup_create+0x185/0x1e0 fs/btrfs/ioctl.c:3811
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:871 [inline]
 __se_sys_ioctl+0xf8/0x170 fs/ioctl.c:857
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x44/0x110 arch/x86/entry/common.c:82
 entry_SYSCALL_64_after_hwframe+0x63/0x6b

Last potentially related work creation:
 kasan_save_stack+0x3f/0x60 mm/kasan/common.c:45
 __kasan_record_aux_stack+0xad/0xc0 mm/kasan/generic.c:492
 insert_work+0x3e/0x320 kernel/workqueue.c:1647
 __queue_work+0xd00/0x1010 kernel/workqueue.c:1803
 call_timer_fn+0x17a/0x5e0 kernel/time/timer.c:1700
 expire_timers kernel/time/timer.c:1746 [inline]
 __run_timers+0x67a/0x860 kernel/time/timer.c:2022
 run_timer_softirq+0x67/0xf0 kernel/time/timer.c:2035
 __do_softirq+0x2bf/0x93a kernel/softirq.c:553

Second to last potentially related work creation:
 kasan_save_stack+0x3f/0x60 mm/kasan/common.c:45
 __kasan_record_aux_stack+0xad/0xc0 mm/kasan/generic.c:492
 insert_work+0x3e/0x320 kernel/workqueue.c:1647
 __queue_work+0xd00/0x1010 kernel/workqueue.c:1803
 call_timer_fn+0x17a/0x5e0 kernel/time/timer.c:1700
 expire_timers kernel/time/timer.c:1746 [inline]
 __run_timers+0x67a/0x860 kernel/time/timer.c:2022
 run_timer_softirq+0x67/0xf0 kernel/time/timer.c:2035
 __do_softirq+0x2bf/0x93a kernel/softirq.c:553

The buggy address belongs to the object at ffff88802d43c000
 which belongs to the cache kmalloc-512 of size 512
The buggy address is located 176 bytes inside of
 freed 512-byte region [ffff88802d43c000, ffff88802d43c200)

The buggy address belongs to the physical page:
page:ffffea0000b50f00 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x2d43c
head:ffffea0000b50f00 order:2 entire_mapcount:0 nr_pages_mapped:0 pincount:0
ksm flags: 0xfff00000000840(slab|head|node=0|zone=1|lastcpupid=0x7ff)
page_type: 0xffffffff()
raw: 00fff00000000840 ffff888012c41c80 ffffea0001ef4b00 dead000000000003
raw: 0000000000000000 0000000000100010 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 2, migratetype Unmovable, gfp_mask 0xd20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 5073, tgid 5073 (syz-executor.1), ts 135208307593, free_ts 135196356315
 set_page_owner include/linux/page_owner.h:31 [inline]
 post_alloc_hook+0x1e6/0x210 mm/page_alloc.c:1536
 prep_new_page mm/page_alloc.c:1543 [inline]
 get_page_from_freelist+0x31db/0x3360 mm/page_alloc.c:3170
 __alloc_pages+0x255/0x670 mm/page_alloc.c:4426
 alloc_slab_page+0x6a/0x160 mm/slub.c:1870
 allocate_slab mm/slub.c:2017 [inline]
 new_slab+0x84/0x2f0 mm/slub.c:2070
 ___slab_alloc+0xc85/0x1310 mm/slub.c:3223
 __slab_alloc mm/slub.c:3322 [inline]
 __slab_alloc_node mm/slub.c:3375 [inline]
 slab_alloc_node mm/slub.c:3468 [inline]
 __kmem_cache_alloc_node+0x19d/0x270 mm/slub.c:3517
 __do_kmalloc_node mm/slab_common.c:1006 [inline]
 __kmalloc+0xa8/0x230 mm/slab_common.c:1020
 kmalloc include/linux/slab.h:604 [inline]
 kzalloc include/linux/slab.h:721 [inline]
 fib6_info_alloc+0x2e/0xe0 net/ipv6/ip6_fib.c:155
 ip6_route_info_create+0x445/0x1220 net/ipv6/route.c:3749
 ip6_route_add+0x26/0x120 net/ipv6/route.c:3843
 addrconf_prefix_route+0x314/0x4d0 net/ipv6/addrconf.c:2445
 inet6_addr_add+0x620/0xaf0 net/ipv6/addrconf.c:3012
 inet6_rtm_newaddr+0x8a3/0xc80 net/ipv6/addrconf.c:4984
 rtnetlink_rcv_msg+0x882/0x1030 net/core/rtnetlink.c:6558
 netlink_rcv_skb+0x1df/0x430 net/netlink/af_netlink.c:2545
page last free stack trace:
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1136 [inline]
 free_unref_page_prepare+0x8c3/0x9f0 mm/page_alloc.c:2312
 free_unref_page+0x37/0x3f0 mm/page_alloc.c:2405
 __stack_depot_save+0x4ef/0x650 lib/stackdepot.c:443
 kasan_save_stack mm/kasan/common.c:46 [inline]
 kasan_set_track+0x61/0x70 mm/kasan/common.c:52
 kasan_save_free_info+0x28/0x40 mm/kasan/generic.c:522
 ____kasan_slab_free+0xd6/0x120 mm/kasan/common.c:236
 kasan_slab_free include/linux/kasan.h:164 [inline]
 slab_free_hook mm/slub.c:1800 [inline]
 slab_free_freelist_hook mm/slub.c:1826 [inline]
 slab_free mm/slub.c:3809 [inline]
 __kmem_cache_free+0x263/0x3a0 mm/slub.c:3822
 skb_kfree_head net/core/skbuff.c:950 [inline]
 skb_free_head net/core/skbuff.c:962 [inline]
 skb_release_data+0x660/0x850 net/core/skbuff.c:992
 skb_release_all net/core/skbuff.c:1058 [inline]
 __kfree_skb net/core/skbuff.c:1072 [inline]
 consume_skb+0xb3/0x150 net/core/skbuff.c:1288
 netlink_broadcast_filtered+0x1154/0x1280 net/netlink/af_netlink.c:1554
 netlink_broadcast net/netlink/af_netlink.c:1576 [inline]
 nlmsg_multicast include/net/netlink.h:1090 [inline]
 nlmsg_notify+0xfb/0x1b0 net/netlink/af_netlink.c:2588
 fib6_add_rt2node net/ipv6/ip6_fib.c:1257 [inline]
 fib6_add+0x1e11/0x3f90 net/ipv6/ip6_fib.c:1483
 __ip6_ins_rt net/ipv6/route.c:1303 [inline]
 ip6_ins_rt+0x106/0x170 net/ipv6/route.c:1313
 __ipv6_ifa_notify+0x5ca/0x11e0 net/ipv6/addrconf.c:6217
 ipv6_ifa_notify net/ipv6/addrconf.c:6256 [inline]
 add_addr+0x2d8/0x490 net/ipv6/addrconf.c:3157
 init_loopback net/ipv6/addrconf.c:3240 [inline]
 addrconf_init_auto_addrs+0x410/0xec0 net/ipv6/addrconf.c:3514

Memory state around the buggy address:
 ffff88802d43bf80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 ffff88802d43c000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff88802d43c080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                     ^
 ffff88802d43c100: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff88802d43c180: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
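
Read together, the stacks above suggest a free-while-linked race: btrfs_remove_qgroup() (task 14334) kfree()s a qgroup while the still-running rescan worker has that qgroup linked on a local iterator list, so the worker's later list_del_init() in qgroup_iterator_nested_clean() reads list pointers out of the freed 512-byte slab object. The following is a minimal userspace sketch of that pattern, not the btrfs code; the struct and field names (qgroup, nested_iterator) are stand-ins modeled on the identifiers in the trace, and the list helpers are simplified copies of the kernel's.

/*
 * Sketch of the list_del-on-freed-object pattern implied by the report.
 * Illustration only; compile with -fsanitize=address to get a
 * heap-use-after-free report analogous to the KASAN one above.
 */
#include <stdio.h>
#include <stdlib.h>

struct list_head { struct list_head *next, *prev; };

static void list_add(struct list_head *entry, struct list_head *head)
{
	entry->next = head->next;
	entry->prev = head;
	head->next->prev = entry;
	head->next = entry;
}

static void list_del_init(struct list_head *entry)
{
	entry->prev->next = entry->next;	/* the 8-byte read KASAN flags */
	entry->next->prev = entry->prev;
	entry->next = entry;
	entry->prev = entry;
}

struct qgroup {
	unsigned long long rfer;		/* stand-in accounting field */
	struct list_head nested_iterator;	/* links qgroup onto a local list */
};

int main(void)
{
	struct list_head qgroup_list = { &qgroup_list, &qgroup_list };

	/* "rescan worker": collect the qgroup on an on-stack iterator list */
	struct qgroup *qg = calloc(1, sizeof(*qg));
	if (!qg)
		return 1;
	list_add(&qg->nested_iterator, &qgroup_list);

	/* "remove path": the qgroup is freed while still on that list */
	free(qg);

	/* worker cleanup: list_del_init() on the freed entry, i.e. the
	 * access reported from __list_del_entry_valid_or_report() */
	list_del_init(&qg->nested_iterator);

	puts("done (run under ASan to see the use-after-free)");
	return 0;
}

In the kernel the same shape arises because nothing orders the qgroup-removal ioctl against the rescan worker's use of its local iterator list, which is what the "Freed by task 14334" and "Workqueue: btrfs-qgroup-rescan" stacks show racing.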