==================================================================
BUG: KASAN: use-after-free in io_poll_check_events+0x621/0x730 fs/io_uring.c:5517
Read of size 4 at addr ffff88804218802c by task kworker/0:8/3734

CPU: 0 PID: 3734 Comm: kworker/0:8 Not tainted 5.17.0-rc7-syzkaller-00241-gf0e18b03fcaf #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: events io_fallback_req_func
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
 print_address_description.constprop.0.cold+0x8d/0x336 mm/kasan/report.c:255
 __kasan_report mm/kasan/report.c:442 [inline]
 kasan_report.cold+0x83/0xdf mm/kasan/report.c:459
 io_poll_check_events+0x621/0x730 fs/io_uring.c:5517
 io_apoll_task_func+0x40/0x250 fs/io_uring.c:5591
 io_fallback_req_func+0xf9/0x1ae fs/io_uring.c:1400
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

Allocated by task 24648:
 kasan_save_stack+0x1e/0x40 mm/kasan/common.c:38
 kasan_set_track mm/kasan/common.c:45 [inline]
 set_alloc_info mm/kasan/common.c:436 [inline]
 __kasan_slab_alloc+0x90/0xc0 mm/kasan/common.c:469
 kasan_slab_alloc include/linux/kasan.h:260 [inline]
 slab_post_alloc_hook mm/slab.h:732 [inline]
 slab_alloc_node mm/slub.c:3230 [inline]
 kmem_cache_alloc_node+0x2c3/0x4f0 mm/slub.c:3266
 alloc_task_struct_node kernel/fork.c:171 [inline]
 dup_task_struct kernel/fork.c:883 [inline]
 copy_process+0x5c4/0x7250 kernel/fork.c:1998
 kernel_clone+0xe7/0xab0 kernel/fork.c:2565
 __do_sys_clone+0xc8/0x110 kernel/fork.c:2682
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae

Freed by task 3734:
 kasan_save_stack+0x1e/0x40 mm/kasan/common.c:38
 kasan_set_track+0x21/0x30 mm/kasan/common.c:45
 kasan_set_free_info+0x20/0x30 mm/kasan/generic.c:370
 ____kasan_slab_free mm/kasan/common.c:366 [inline]
 ____kasan_slab_free+0x126/0x160 mm/kasan/common.c:328
 kasan_slab_free include/linux/kasan.h:236 [inline]
 slab_free_hook mm/slub.c:1728 [inline]
 slab_free_freelist_hook+0x8b/0x1c0 mm/slub.c:1754
 slab_free mm/slub.c:3509 [inline]
 kmem_cache_free+0xd7/0x370 mm/slub.c:3526
 put_task_struct_many include/linux/sched/task.h:121 [inline]
 io_put_task fs/io_uring.c:1841 [inline]
 __io_req_complete_post+0x656/0x770 fs/io_uring.c:1960
 io_req_complete_post+0x59/0x170 fs/io_uring.c:1972
 io_req_complete_failed fs/io_uring.c:2003 [inline]
 io_apoll_task_func+0x1c7/0x250 fs/io_uring.c:5603
 io_fallback_req_func+0xf9/0x1ae fs/io_uring.c:1400
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

Last potentially related work creation:
 kasan_save_stack+0x1e/0x40 mm/kasan/common.c:38
 __kasan_record_aux_stack+0xbe/0xd0 mm/kasan/generic.c:348
 __call_rcu kernel/rcu/tree.c:3026 [inline]
 call_rcu+0xb1/0x740 kernel/rcu/tree.c:3106
 put_task_struct_rcu_user+0x7f/0xb0 kernel/exit.c:180
 context_switch kernel/sched/core.c:4998 [inline]
 __schedule+0xa9c/0x4910 kernel/sched/core.c:6304
 preempt_schedule_common+0x45/0xc0 kernel/sched/core.c:6470
 preempt_schedule_thunk+0x16/0x18 arch/x86/entry/thunk_64.S:35
 __raw_spin_unlock_irq include/linux/spinlock_api_smp.h:160 [inline]
 _raw_spin_unlock_irq+0x3c/0x40 kernel/locking/spinlock.c:202
 spin_unlock_irq include/linux/spinlock.h:399 [inline]
 do_group_exit+0x202/0x2f0 kernel/exit.c:932
 __do_sys_exit_group kernel/exit.c:946 [inline]
 __se_sys_exit_group kernel/exit.c:944 [inline]
 __x64_sys_exit_group+0x3a/0x50 kernel/exit.c:944
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae

Second to last potentially related work creation:
 kasan_save_stack+0x1e/0x40 mm/kasan/common.c:38
 __kasan_record_aux_stack+0xbe/0xd0 mm/kasan/generic.c:348
 __call_rcu kernel/rcu/tree.c:3026 [inline]
 call_rcu+0xb1/0x740 kernel/rcu/tree.c:3106
 put_task_struct_rcu_user+0x7f/0xb0 kernel/exit.c:180
 context_switch kernel/sched/core.c:4998 [inline]
 __schedule+0xa9c/0x4910 kernel/sched/core.c:6304
 schedule+0xd2/0x260 kernel/sched/core.c:6377
 freezable_schedule include/linux/freezer.h:172 [inline]
 futex_wait_queue+0x144/0x3b0 kernel/futex/waitwake.c:355
 futex_wait+0x2c9/0x670 kernel/futex/waitwake.c:656
 do_futex+0x1af/0x300 kernel/futex/syscalls.c:106
 __do_sys_futex kernel/futex/syscalls.c:183 [inline]
 __se_sys_futex kernel/futex/syscalls.c:164 [inline]
 __x64_sys_futex+0x1b0/0x4a0 kernel/futex/syscalls.c:164
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae

The buggy address belongs to the object at ffff888042188000
 which belongs to the cache task_struct of size 7168
The buggy address is located 44 bytes inside of
 7168-byte region [ffff888042188000, ffff888042189c00)
The buggy address belongs to the page:
page:ffffea0001086200 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x42188
head:ffffea0001086200 order:3 compound_mapcount:0 compound_pincount:0
memcg:ffff8880785abb01
flags: 0xfff00000010200(slab|head|node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000010200 0000000000000000 dead000000000001 ffff888140006280
raw: 0000000000000000 0000000000040004 00000001ffffffff ffff8880785abb01
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Unmovable, gfp_mask 0x1d20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC|__GFP_HARDWALL), pid 12439, ts 414982459090, free_ts 414975964227
 prep_new_page mm/page_alloc.c:2434 [inline]
 get_page_from_freelist+0xa72/0x2f50 mm/page_alloc.c:4165
 __alloc_pages+0x1b2/0x500 mm/page_alloc.c:5389
 alloc_pages+0x1aa/0x310 mm/mempolicy.c:2271
 alloc_slab_page mm/slub.c:1799 [inline]
 allocate_slab+0x27f/0x3c0 mm/slub.c:1944
 new_slab mm/slub.c:2004 [inline]
 ___slab_alloc+0xbe1/0x12b0 mm/slub.c:3018
 __slab_alloc.constprop.0+0x4d/0xa0 mm/slub.c:3105
 slab_alloc_node mm/slub.c:3196 [inline]
 kmem_cache_alloc_node+0x190/0x4f0 mm/slub.c:3266
 alloc_task_struct_node kernel/fork.c:171 [inline]
 dup_task_struct kernel/fork.c:883 [inline]
 copy_process+0x5c4/0x7250 kernel/fork.c:1998
 kernel_clone+0xe7/0xab0 kernel/fork.c:2565
 __do_sys_clone+0xc8/0x110 kernel/fork.c:2682
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae
page last free stack trace:
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1352 [inline]
 free_pcp_prepare+0x374/0x870 mm/page_alloc.c:1404
 free_unref_page_prepare mm/page_alloc.c:3325 [inline]
 free_unref_page+0x19/0x690 mm/page_alloc.c:3404
 __unfreeze_partials+0x320/0x340 mm/slub.c:2536
 qlink_free mm/kasan/quarantine.c:157 [inline]
 qlist_free_all+0x6d/0x160 mm/kasan/quarantine.c:176
 kasan_quarantine_reduce+0x180/0x200 mm/kasan/quarantine.c:283
 __kasan_slab_alloc+0xa2/0xc0 mm/kasan/common.c:446
 kasan_slab_alloc include/linux/kasan.h:260 [inline]
 slab_post_alloc_hook mm/slab.h:732 [inline]
 slab_alloc_node mm/slub.c:3230 [inline]
 slab_alloc mm/slub.c:3238 [inline]
 kmem_cache_alloc+0x271/0x4b0 mm/slub.c:3243
 getname_flags.part.0+0x50/0x4f0 fs/namei.c:138
 getname_flags+0x9a/0xe0 include/linux/audit.h:323
 user_path_at_empty+0x2b/0x60 fs/namei.c:2850
 do_readlinkat+0xcd/0x2f0 fs/stat.c:443
 __do_sys_readlink fs/stat.c:476 [inline]
 __se_sys_readlink fs/stat.c:473 [inline]
 __x64_sys_readlink+0x74/0xb0 fs/stat.c:473
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae

Memory state around the buggy address:
 ffff888042187f00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff888042187f80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff888042188000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                  ^
 ffff888042188080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff888042188100: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================