==================================================================
BUG: KASAN: use-after-free in mutex_can_spin_on_owner kernel/locking/mutex.c:617 [inline]
BUG: KASAN: use-after-free in mutex_optimistic_spin kernel/locking/mutex.c:661 [inline]
BUG: KASAN: use-after-free in __mutex_lock_common kernel/locking/mutex.c:973 [inline]
BUG: KASAN: use-after-free in __mutex_lock+0xcd7/0x1060 kernel/locking/mutex.c:1114
Read of size 4 at addr ffff8881d9864ef8 by task syz-executor.3/14466

CPU: 1 PID: 14466 Comm: syz-executor.3 Not tainted 5.4.249-syzkaller-00007-g50533a8b511b #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/26/2023
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x1d8/0x241 lib/dump_stack.c:118
 print_address_description+0x8c/0x600 mm/kasan/report.c:384
 __kasan_report+0xf3/0x120 mm/kasan/report.c:516
 kasan_report+0x30/0x60 mm/kasan/common.c:653
 mutex_can_spin_on_owner kernel/locking/mutex.c:617 [inline]
 mutex_optimistic_spin kernel/locking/mutex.c:661 [inline]
 __mutex_lock_common kernel/locking/mutex.c:973 [inline]
 __mutex_lock+0xcd7/0x1060 kernel/locking/mutex.c:1114
 mutex_lock_killable+0xd8/0x110 kernel/locking/mutex.c:1348
 lo_open+0x18/0xc0 drivers/block/loop.c:1899
 __blkdev_get+0x3c8/0x1160 fs/block_dev.c:1581
 blkdev_get+0x2de/0x3a0 fs/block_dev.c:1714
 do_dentry_open+0x964/0x1130 fs/open.c:796
 do_last fs/namei.c:3495 [inline]
 path_openat+0x2992/0x3480 fs/namei.c:3614
 do_filp_open+0x20b/0x450 fs/namei.c:3644
 do_sys_open+0x39c/0x810 fs/open.c:1113
 do_syscall_64+0xca/0x1c0 arch/x86/entry/common.c:290
 entry_SYSCALL_64_after_hwframe+0x5c/0xc1

Allocated by task 14384:
 save_stack mm/kasan/common.c:70 [inline]
 set_track mm/kasan/common.c:78 [inline]
 __kasan_kmalloc+0x171/0x210 mm/kasan/common.c:529
 slab_post_alloc_hook mm/slab.h:584 [inline]
 slab_alloc_node mm/slub.c:2829 [inline]
 slab_alloc mm/slub.c:2837 [inline]
 kmem_cache_alloc+0xd9/0x250 mm/slub.c:2842
 kmem_cache_alloc_node include/linux/slab.h:427 [inline]
 alloc_task_struct_node kernel/fork.c:171 [inline]
 dup_task_struct+0x4f/0x600 kernel/fork.c:874
 copy_process+0x56d/0x3230 kernel/fork.c:1881
 _do_fork+0x197/0x900 kernel/fork.c:2396
 __do_sys_clone3 kernel/fork.c:2685 [inline]
 __se_sys_clone3 kernel/fork.c:2672 [inline]
 __x64_sys_clone3+0x2da/0x300 kernel/fork.c:2672
 do_syscall_64+0xca/0x1c0 arch/x86/entry/common.c:290
 entry_SYSCALL_64_after_hwframe+0x5c/0xc1

Freed by task 10:
 save_stack mm/kasan/common.c:70 [inline]
 set_track mm/kasan/common.c:78 [inline]
 kasan_set_free_info mm/kasan/common.c:345 [inline]
 __kasan_slab_free+0x1b5/0x270 mm/kasan/common.c:487
 slab_free_hook mm/slub.c:1455 [inline]
 slab_free_freelist_hook mm/slub.c:1494 [inline]
 slab_free mm/slub.c:3080 [inline]
 kmem_cache_free+0x10b/0x2c0 mm/slub.c:3096
 __rcu_reclaim kernel/rcu/rcu.h:222 [inline]
 rcu_do_batch+0x492/0xa00 kernel/rcu/tree.c:2167
 rcu_core+0x4c8/0xcb0 kernel/rcu/tree.c:2387
 __do_softirq+0x23b/0x6b7 kernel/softirq.c:292

The buggy address belongs to the object at ffff8881d9864ec0
 which belongs to the cache task_struct of size 3904
The buggy address is located 56 bytes inside of
 3904-byte region [ffff8881d9864ec0, ffff8881d9865e00)
The buggy address belongs to the page:
page:ffffea0007661800 refcount:1 mapcount:0 mapping:ffff8881f5cf0500 index:0xffff8881d9864ec0 compound_mapcount: 0
flags: 0x8000000000010200(slab|head)
raw: 8000000000010200 ffffea000780c008 ffffea0007b47208 ffff8881f5cf0500
raw: ffff8881d9864ec0 0000000000080002 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Unmovable, gfp_mask 0x1d20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC|__GFP_HARDWALL)
 set_page_owner include/linux/page_owner.h:31 [inline]
 post_alloc_hook mm/page_alloc.c:2165 [inline]
 prep_new_page+0x18f/0x370 mm/page_alloc.c:2171
 get_page_from_freelist+0x2d13/0x2d90 mm/page_alloc.c:3794
 __alloc_pages_nodemask+0x393/0x840 mm/page_alloc.c:4891
 alloc_slab_page+0x39/0x3c0 mm/slub.c:343
 allocate_slab mm/slub.c:1683 [inline]
 new_slab+0x97/0x440 mm/slub.c:1749
 new_slab_objects mm/slub.c:2505 [inline]
 ___slab_alloc+0x2fe/0x490 mm/slub.c:2667
 __slab_alloc+0x62/0xa0 mm/slub.c:2707
 slab_alloc_node mm/slub.c:2792 [inline]
 slab_alloc mm/slub.c:2837 [inline]
 kmem_cache_alloc+0x109/0x250 mm/slub.c:2842
 kmem_cache_alloc_node include/linux/slab.h:427 [inline]
 alloc_task_struct_node kernel/fork.c:171 [inline]
 dup_task_struct+0x4f/0x600 kernel/fork.c:874
 copy_process+0x56d/0x3230 kernel/fork.c:1881
 _do_fork+0x197/0x900 kernel/fork.c:2396
 __do_sys_clone kernel/fork.c:2554 [inline]
 __se_sys_clone kernel/fork.c:2535 [inline]
 __x64_sys_clone+0x26b/0x2c0 kernel/fork.c:2535
 do_syscall_64+0xca/0x1c0 arch/x86/entry/common.c:290
 entry_SYSCALL_64_after_hwframe+0x5c/0xc1
page last free stack trace:
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1176 [inline]
 __free_pages_ok+0x847/0x950 mm/page_alloc.c:1438
 free_the_page mm/page_alloc.c:4953 [inline]
 __free_pages+0x91/0x140 mm/page_alloc.c:4959
 free_thread_stack kernel/fork.c:299 [inline]
 release_task_stack kernel/fork.c:439 [inline]
 put_task_stack+0x212/0x260 kernel/fork.c:450
 finish_task_switch+0x24a/0x590 kernel/sched/core.c:3479
 context_switch kernel/sched/core.c:3611 [inline]
 __schedule+0xb0d/0x1320 kernel/sched/core.c:4307
 schedule+0x12c/0x1d0 kernel/sched/core.c:4375
 freezable_schedule include/linux/freezer.h:172 [inline]
 futex_wait_queue_me+0x31f/0x690 kernel/futex.c:2743
 futex_wait+0x2f5/0x890 kernel/futex.c:2849
 do_futex+0x13c1/0x19f0 kernel/futex.c:3888
 __do_sys_futex kernel/futex.c:3949 [inline]
 __se_sys_futex+0x355/0x470 kernel/futex.c:3917
 do_syscall_64+0xca/0x1c0 arch/x86/entry/common.c:290
 entry_SYSCALL_64_after_hwframe+0x5c/0xc1

Memory state around the buggy address:
 ffff8881d9864d80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8881d9864e00: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
>ffff8881d9864e80: fc fc fc fc fc fc fc fc fb fb fb fb fb fb fb fb
                                                                ^
 ffff8881d9864f00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8881d9864f80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
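The report above is a use-after-free on a task_struct: the mutex optimistic-spin path in lo_open() -> __mutex_lock() inspects the lock owner's task_struct while, per the "Freed by task 10" stack, that object has already been returned to the slab from an RCU callback. For context, here is a minimal sketch of the owner check near the cited kernel/locking/mutex.c lines; it approximates the upstream v5.4-era logic and is simplified, not the verbatim 5.4.249 source:

    /*
     * Sketch of mutex_can_spin_on_owner(), simplified from upstream
     * kernel/locking/mutex.c. The spinner peeks at the owner's
     * task_struct; the owner->on_cpu load is a 4-byte read, consistent
     * with the access KASAN flags above (56 bytes into task_struct in
     * this config).
     */
    static inline int mutex_can_spin_on_owner(struct mutex *lock)
    {
            struct task_struct *owner;
            int retval = 1;

            if (need_resched())
                    return 0;

            /*
             * The RCU read-side section only makes this dereference safe
             * if the final kmem_cache_free() of every task_struct is
             * deferred past an RCU grace period covering it; otherwise
             * `owner` can point into freed slab memory, which is what the
             * report shows.
             */
            rcu_read_lock();
            owner = __mutex_owner(lock);
            if (owner)
                    retval = owner->on_cpu &&
                             !vcpu_is_preempted(task_cpu(owner));
            rcu_read_unlock();

            return retval;
    }

The allocation and free stacks bracket the race: the task_struct was allocated by dup_task_struct() on clone3(), freed from rcu_do_batch(), and the spinning reader still observed the stale owner pointer afterwards.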