==================================================================
BUG: KASAN: slab-use-after-free in __list_del_entry_valid_or_report+0x13e/0x1b0 lib/list_debug.c:49
Read of size 8 at addr ffff88801d3878b0 by task kworker/u4:19/29925

CPU: 1 PID: 29925 Comm: kworker/u4:19 Not tainted 6.6.0-rc6-next-20231020-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/06/2023
Workqueue: btrfs-qgroup-rescan btrfs_work_helper
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd9/0x1b0 lib/dump_stack.c:106
 print_address_description mm/kasan/report.c:364 [inline]
 print_report+0xc3/0x620 mm/kasan/report.c:475
 kasan_report+0xd9/0x110 mm/kasan/report.c:588
 __list_del_entry_valid_or_report+0x13e/0x1b0 lib/list_debug.c:49
 __list_del_entry_valid include/linux/list.h:124 [inline]
 __list_del_entry include/linux/list.h:215 [inline]
 list_del_init include/linux/list.h:287 [inline]
 qgroup_iterator_nested_clean fs/btrfs/qgroup.c:2623 [inline]
 btrfs_qgroup_account_extent+0x791/0x1040 fs/btrfs/qgroup.c:2883
 qgroup_rescan_leaf+0x6b4/0xc20 fs/btrfs/qgroup.c:3543
 btrfs_qgroup_rescan_worker+0x43a/0xa00 fs/btrfs/qgroup.c:3604
 btrfs_work_helper+0x222/0xc10 fs/btrfs/async-thread.c:315
 process_one_work+0x8a2/0x15e0 kernel/workqueue.c:2630
 process_scheduled_works kernel/workqueue.c:2703 [inline]
 worker_thread+0x8b6/0x1280 kernel/workqueue.c:2784
 kthread+0x337/0x440 kernel/kthread.c:388
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242

Allocated by task 12804:
 kasan_save_stack+0x33/0x50 mm/kasan/common.c:45
 kasan_set_track+0x24/0x30 mm/kasan/common.c:52
 ____kasan_kmalloc mm/kasan/common.c:374 [inline]
 __kasan_kmalloc+0xa2/0xb0 mm/kasan/common.c:383
 kmalloc include/linux/slab.h:600 [inline]
 kzalloc include/linux/slab.h:721 [inline]
 btrfs_quota_enable+0xb0b/0x1eb0 fs/btrfs/qgroup.c:1209
 btrfs_ioctl_quota_ctl fs/btrfs/ioctl.c:3705 [inline]
 btrfs_ioctl+0x4caf/0x5d90 fs/btrfs/ioctl.c:4668
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:871 [inline]
 __se_sys_ioctl fs/ioctl.c:857 [inline]
 __x64_sys_ioctl+0x18f/0x210 fs/ioctl.c:857
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x3f/0x110 arch/x86/entry/common.c:82
 entry_SYSCALL_64_after_hwframe+0x63/0x6b

Freed by task 12804:
 kasan_save_stack+0x33/0x50 mm/kasan/common.c:45
 kasan_set_track+0x24/0x30 mm/kasan/common.c:52
 kasan_save_free_info+0x2b/0x40 mm/kasan/generic.c:522
 ____kasan_slab_free mm/kasan/common.c:236 [inline]
 ____kasan_slab_free+0x15b/0x1b0 mm/kasan/common.c:200
 kasan_slab_free include/linux/kasan.h:164 [inline]
 slab_free_hook mm/slub.c:1800 [inline]
 slab_free_freelist_hook+0x114/0x1e0 mm/slub.c:1826
 slab_free mm/slub.c:3809 [inline]
 __kmem_cache_free+0xc0/0x180 mm/slub.c:3822
 btrfs_remove_qgroup+0x541/0x7c0 fs/btrfs/qgroup.c:1787
 btrfs_ioctl_qgroup_create fs/btrfs/ioctl.c:3811 [inline]
 btrfs_ioctl+0x5042/0x5d90 fs/btrfs/ioctl.c:4672
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:871 [inline]
 __se_sys_ioctl fs/ioctl.c:857 [inline]
 __x64_sys_ioctl+0x18f/0x210 fs/ioctl.c:857
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x3f/0x110 arch/x86/entry/common.c:82
 entry_SYSCALL_64_after_hwframe+0x63/0x6b

Last potentially related work creation:
 kasan_save_stack+0x33/0x50 mm/kasan/common.c:45
 __kasan_record_aux_stack+0xbc/0xd0 mm/kasan/generic.c:492
 insert_work+0x38/0x230 kernel/workqueue.c:1647
 __queue_work+0xb7b/0x1050 kernel/workqueue.c:1803
 call_timer_fn+0x1a0/0x590 kernel/time/timer.c:1700
 expire_timers kernel/time/timer.c:1746 [inline]
 __run_timers+0x585/0xb10 kernel/time/timer.c:2022
 run_timer_softirq+0x58/0xd0 kernel/time/timer.c:2035
 __do_softirq+0x216/0x95f kernel/softirq.c:553

Second to last potentially related work creation:
 kasan_save_stack+0x33/0x50 mm/kasan/common.c:45
 __kasan_record_aux_stack+0xbc/0xd0 mm/kasan/generic.c:492
 insert_work+0x38/0x230 kernel/workqueue.c:1647
 __queue_work+0xb7b/0x1050 kernel/workqueue.c:1803
 call_timer_fn+0x1a0/0x590 kernel/time/timer.c:1700
 expire_timers kernel/time/timer.c:1746 [inline]
 __run_timers+0x585/0xb10 kernel/time/timer.c:2022
 run_timer_softirq+0x58/0xd0 kernel/time/timer.c:2035
 __do_softirq+0x216/0x95f kernel/softirq.c:553

The buggy address belongs to the object at ffff88801d387800
 which belongs to the cache kmalloc-512 of size 512
The buggy address is located 176 bytes inside of
 freed 512-byte region [ffff88801d387800, ffff88801d387a00)

The buggy address belongs to the physical page:
page:ffffea000074e100 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1d384
head:ffffea000074e100 order:2 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0xfff00000000840(slab|head|node=0|zone=1|lastcpupid=0x7ff)
page_type: 0xffffffff()
raw: 00fff00000000840 ffff888012c41c80 dead000000000100 dead000000000122
raw: 0000000000000000 0000000000100010 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 2, migratetype Unmovable, gfp_mask 0x52820(GFP_ATOMIC|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP), pid 5152, tgid 5152 (kworker/0:4), ts 223157639079, free_ts 223142211752
 set_page_owner include/linux/page_owner.h:31 [inline]
 post_alloc_hook+0x2cf/0x340 mm/page_alloc.c:1537
 prep_new_page mm/page_alloc.c:1544 [inline]
 get_page_from_freelist+0xa16/0x3680 mm/page_alloc.c:3348
 __alloc_pages+0x1cf/0x4c0 mm/page_alloc.c:4604
 alloc_pages+0x1a8/0x270 mm/mempolicy.c:2283
 alloc_slab_page mm/slub.c:1870 [inline]
 allocate_slab+0x251/0x380 mm/slub.c:2017
 new_slab mm/slub.c:2070 [inline]
 ___slab_alloc+0x8bf/0x1570 mm/slub.c:3223
 __slab_alloc.constprop.0+0x56/0xa0 mm/slub.c:3322
 __slab_alloc_node mm/slub.c:3375 [inline]
 slab_alloc_node mm/slub.c:3468 [inline]
 __kmem_cache_alloc_node+0x131/0x310 mm/slub.c:3517
 kmalloc_trace+0x27/0xf0 mm/slab_common.c:1098
 kmalloc include/linux/slab.h:600 [inline]
 kzalloc include/linux/slab.h:721 [inline]
 br_multicast_new_port_group+0x133/0xb40 net/bridge/br_multicast.c:1418
 __br_multicast_add_group+0x40c/0x650 net/bridge/br_multicast.c:1537
 br_multicast_add_group net/bridge/br_multicast.c:1568 [inline]
 br_ip6_multicast_add_group+0x280/0x330 net/bridge/br_multicast.c:1620
 br_ip6_multicast_mld2_report net/bridge/br_multicast.c:2973 [inline]
 br_multicast_ipv6_rcv net/bridge/br_multicast.c:3911 [inline]
 br_multicast_rcv+0xcf0/0x6750 net/bridge/br_multicast.c:3969
 br_handle_frame_finish+0xe48/0x1d80 net/bridge/br_input.c:148
 br_nf_hook_thresh+0x2ff/0x410 net/bridge/br_netfilter_hooks.c:1048
 br_nf_pre_routing_finish_ipv6+0x683/0xf20 net/bridge/br_netfilter_ipv6.c:148
page last free stack trace:
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1137 [inline]
 free_unref_page_prepare+0x476/0xa40 mm/page_alloc.c:2383
 free_unref_page+0x33/0x3b0 mm/page_alloc.c:2523
 qlink_free mm/kasan/quarantine.c:168 [inline]
 qlist_free_all+0x6a/0x170 mm/kasan/quarantine.c:187
 kasan_quarantine_reduce+0x18e/0x1d0 mm/kasan/quarantine.c:294
 __kasan_slab_alloc+0x65/0x90 mm/kasan/common.c:305
 kasan_slab_alloc include/linux/kasan.h:188 [inline]
 slab_post_alloc_hook mm/slab.h:763 [inline]
 slab_alloc_node mm/slub.c:3478 [inline]
 slab_alloc mm/slub.c:3486 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3493 [inline]
 kmem_cache_alloc+0x163/0x390 mm/slub.c:3502
 vm_area_alloc+0x1f/0x220 kernel/fork.c:485
 mmap_region+0x3a4/0x2820 mm/mmap.c:2836
 do_mmap+0x890/0xef0 mm/mmap.c:1379
 vm_mmap_pgoff+0x1a7/0x3b0 mm/util.c:546
 ksys_mmap_pgoff+0x7d/0x5a0 mm/mmap.c:1425
 __do_sys_mmap arch/x86/kernel/sys_x86_64.c:93 [inline]
 __se_sys_mmap arch/x86/kernel/sys_x86_64.c:86 [inline]
 __x64_sys_mmap+0x125/0x190 arch/x86/kernel/sys_x86_64.c:86
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x3f/0x110 arch/x86/entry/common.c:82
 entry_SYSCALL_64_after_hwframe+0x63/0x6b

Memory state around the buggy address:
 ffff88801d387780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff88801d387800: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff88801d387880: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                     ^
 ffff88801d387900: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff88801d387980: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
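
The stacks above are consistent with a race between qgroup removal and the rescan worker: the object kzalloc()ed in btrfs_quota_enable() is kfree()d by btrfs_remove_qgroup() (task 12804) while btrfs_qgroup_account_extent() in the rescan worker still has it linked on a local iterator list, so qgroup_iterator_nested_clean()'s list_del_init() reads freed memory. Below is a minimal userspace sketch of that pattern only; the list helpers and struct qgroup are hand-rolled stand-ins, not the kernel's list.h or btrfs code.

/* uaf_sketch.c: simplified stand-ins (hypothetical) for the pattern above. */
#include <stdlib.h>

struct list_head { struct list_head *next, *prev; };

static void list_add(struct list_head *entry, struct list_head *head)
{
	entry->next = head->next;
	entry->prev = head;
	head->next->prev = entry;
	head->next = entry;
}

static void list_del_init(struct list_head *entry)
{
	entry->next->prev = entry->prev;  /* the 8-byte read KASAN flags:  */
	entry->prev->next = entry->next;  /* *entry lives in a freed slab  */
	entry->next = entry;              /* object at this point          */
	entry->prev = entry;
}

struct qgroup {                           /* stand-in for btrfs_qgroup */
	struct list_head nested_iterator; /* cf. qgroup_iterator_nested_* */
};

int main(void)
{
	struct list_head iter_list = { &iter_list, &iter_list };
	struct qgroup *qg = calloc(1, sizeof(*qg));

	/* Rescan worker: the qgroup is linked onto a local iteration
	 * list inside btrfs_qgroup_account_extent(). */
	list_add(&qg->nested_iterator, &iter_list);

	/* Racing task: btrfs_remove_qgroup() frees the object while it
	 * is still on that list (the "Freed by task 12804" stack). */
	free(qg);

	/* Worker resumes: qgroup_iterator_nested_clean() unlinks the
	 * entry too late -- the slab-use-after-free reported above. */
	list_del_init(&qg->nested_iterator);
	return 0;
}

Built with -fsanitize=address, the first read inside list_del_init() is reported as a heap-use-after-free, analogous to the KASAN report.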