==================================================================
BUG: KASAN: slab-use-after-free in __lock_acquire+0x78/0x1fd0 kernel/locking/lockdep.c:5004
Read of size 8 at addr ffff888069640a18 by task kworker/u8:2/35

CPU: 0 PID: 35 Comm: kworker/u8:2 Not tainted 6.10.0-syzkaller-05505-gb1bc554e009e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/27/2024
Workqueue: btrfs-delalloc btrfs_work_helper
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 print_address_description mm/kasan/report.c:377 [inline]
 print_report+0x169/0x550 mm/kasan/report.c:488
 kasan_report+0x143/0x180 mm/kasan/report.c:601
 __lock_acquire+0x78/0x1fd0 kernel/locking/lockdep.c:5004
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5753
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
 class_raw_spinlock_irqsave_constructor include/linux/spinlock.h:551 [inline]
 try_to_wake_up+0xb0/0x1470 kernel/sched/core.c:4051
 submit_compressed_extents+0xdf/0x1460 fs/btrfs/inode.c:1621
 run_ordered_work fs/btrfs/async-thread.c:288 [inline]
 btrfs_work_helper+0x980/0xc50 fs/btrfs/async-thread.c:324
 process_one_work kernel/workqueue.c:3231 [inline]
 process_scheduled_works+0xa2c/0x1830 kernel/workqueue.c:3312
 worker_thread+0x86d/0xd40 kernel/workqueue.c:3390
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

Allocated by task 2:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 unpoison_slab_object mm/kasan/common.c:312 [inline]
 __kasan_slab_alloc+0x66/0x80 mm/kasan/common.c:338
 kasan_slab_alloc include/linux/kasan.h:201 [inline]
 slab_post_alloc_hook mm/slub.c:3940 [inline]
 slab_alloc_node mm/slub.c:4002 [inline]
 kmem_cache_alloc_node_noprof+0x16b/0x320 mm/slub.c:4045
 alloc_task_struct_node kernel/fork.c:178 [inline]
 dup_task_struct+0x57/0x8c0 kernel/fork.c:1103
 copy_process+0x5d1/0x3dc0 kernel/fork.c:2203
 kernel_clone+0x223/0x870 kernel/fork.c:2780
 kernel_thread+0x1bc/0x240 kernel/fork.c:2842
 create_kthread kernel/kthread.c:412 [inline]
 kthreadd+0x60d/0x810 kernel/kthread.c:765
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

Freed by task 16:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 kasan_save_free_info+0x40/0x50 mm/kasan/generic.c:579
 poison_slab_object+0xe0/0x150 mm/kasan/common.c:240
 __kasan_slab_free+0x37/0x60 mm/kasan/common.c:256
 kasan_slab_free include/linux/kasan.h:184 [inline]
 slab_free_hook mm/slub.c:2196 [inline]
 slab_free mm/slub.c:4438 [inline]
 kmem_cache_free+0x145/0x350 mm/slub.c:4513
 put_task_struct include/linux/sched/task.h:138 [inline]
 delayed_put_task_struct+0x125/0x2f0 kernel/exit.c:228
 rcu_do_batch kernel/rcu/tree.c:2569 [inline]
 rcu_core+0xafd/0x1830 kernel/rcu/tree.c:2843
 handle_softirqs+0x2c4/0x970 kernel/softirq.c:554
 run_ksoftirqd+0xca/0x130 kernel/softirq.c:928
 smpboot_thread_fn+0x544/0xa30 kernel/smpboot.c:164
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

Last potentially related work creation:
 kasan_save_stack+0x3f/0x60 mm/kasan/common.c:47
 __kasan_record_aux_stack+0xac/0xc0 mm/kasan/generic.c:541
 __call_rcu_common kernel/rcu/tree.c:3106 [inline]
 call_rcu+0x167/0xa70 kernel/rcu/tree.c:3210
 context_switch kernel/sched/core.c:5191 [inline]
 __schedule+0x17b6/0x4a10 kernel/sched/core.c:6529
 preempt_schedule_common+0x84/0xd0 kernel/sched/core.c:6708
 preempt_schedule+0xe1/0xf0 kernel/sched/core.c:6732
 preempt_schedule_thunk+0x1a/0x30 arch/x86/entry/thunk.S:12
 class_preempt_destructor include/linux/preempt.h:480 [inline]
 try_to_wake_up+0x9a1/0x1470 kernel/sched/core.c:4174
 kthread_stop+0x17a/0x630 kernel/kthread.c:707
 close_ctree+0x4e6/0xd20 fs/btrfs/disk-io.c:4329
 generic_shutdown_super+0x136/0x2d0 fs/super.c:642
 kill_anon_super+0x3b/0x70 fs/super.c:1226
 btrfs_kill_super+0x41/0x50 fs/btrfs/super.c:2116
 deactivate_locked_super+0xc4/0x130 fs/super.c:473
 cleanup_mnt+0x41f/0x4b0 fs/namespace.c:1373
 task_work_run+0x24f/0x310 kernel/task_work.c:222
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:114 [inline]
 exit_to_user_mode_prepare include/linux/entry-common.h:328 [inline]
 __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
 syscall_exit_to_user_mode+0x168/0x370 kernel/entry/common.c:218
 do_syscall_64+0x100/0x230 arch/x86/entry/common.c:89
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

The buggy address belongs to the object at ffff888069640000
 which belongs to the cache task_struct of size 7424
The buggy address is located 2584 bytes inside of
 freed 7424-byte region [ffff888069640000, ffff888069641d00)

The buggy address belongs to the physical page:
page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x69640
head: order:3 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
memcg:ffff8880227a65c1
flags: 0xfff00000000040(head|node=0|zone=1|lastcpupid=0x7ff)
page_type: 0xffffefff(slab)
raw: 00fff00000000040 ffff888015ef8500 dead000000000100 dead000000000122
raw: 0000000000000000 0000000000040004 00000001ffffefff ffff8880227a65c1
head: 00fff00000000040 ffff888015ef8500 dead000000000100 dead000000000122
head: 0000000000000000 0000000000040004 00000001ffffefff ffff8880227a65c1
head: 00fff00000000003 ffffea0001a59001 ffffffffffffffff 0000000000000000
head: 0000000000000008 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Unmovable, gfp_mask 0xd20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 5083, tgid 5083 (syz-executor), ts 55384574507, free_ts 15205002164
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x1f3/0x230 mm/page_alloc.c:1473
 prep_new_page mm/page_alloc.c:1481 [inline]
 get_page_from_freelist+0x2e4c/0x2f10 mm/page_alloc.c:3425
 __alloc_pages_noprof+0x256/0x6c0 mm/page_alloc.c:4683
 __alloc_pages_node_noprof include/linux/gfp.h:269 [inline]
 alloc_pages_node_noprof include/linux/gfp.h:296 [inline]
 alloc_slab_page+0x5f/0x120 mm/slub.c:2265
 allocate_slab+0x5a/0x2f0 mm/slub.c:2428
 new_slab mm/slub.c:2481 [inline]
 ___slab_alloc+0xcd1/0x14b0 mm/slub.c:3667
 __slab_alloc+0x58/0xa0 mm/slub.c:3757
 __slab_alloc_node mm/slub.c:3810 [inline]
 slab_alloc_node mm/slub.c:3990 [inline]
 kmem_cache_alloc_node_noprof+0x1fe/0x320 mm/slub.c:4045
 alloc_task_struct_node kernel/fork.c:178 [inline]
 dup_task_struct+0x57/0x8c0 kernel/fork.c:1103
 copy_process+0x5d1/0x3dc0 kernel/fork.c:2203
 kernel_clone+0x223/0x870 kernel/fork.c:2780
 __do_sys_clone3 kernel/fork.c:3084 [inline]
 __se_sys_clone3+0x2cb/0x350 kernel/fork.c:3063
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
page last free pid 1 tgid 1 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1093 [inline]
 free_unref_page+0xd19/0xea0 mm/page_alloc.c:2588
 free_contig_range+0x9e/0x160 mm/page_alloc.c:6642
 destroy_args+0x8a/0x890 mm/debug_vm_pgtable.c:1017
 debug_vm_pgtable+0x4be/0x550 mm/debug_vm_pgtable.c:1397
 do_one_initcall+0x248/0x880 init/main.c:1267
 do_initcall_level+0x157/0x210 init/main.c:1329
 do_initcalls+0x3f/0x80 init/main.c:1345
 kernel_init_freeable+0x435/0x5d0 init/main.c:1578
 kernel_init+0x1d/0x2b0 init/main.c:1467
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

Memory state around the buggy address:
 ffff888069640900: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff888069640980: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff888069640a00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                            ^
 ffff888069640a80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff888069640b00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================