==================================================================
BUG: KASAN: use-after-free in io_wq_worker_running+0xfe/0x130 io_uring/io-wq.c:674
Read of size 4 at addr ffff88802a70f404 by task iou-wrk-26577/26579

CPU: 1 PID: 26579 Comm: iou-wrk-26577 Not tainted 6.2.0-rc3-syzkaller-00008-g1fe4fd6f5cad #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2d0 lib/dump_stack.c:106
 print_address_description+0x74/0x340 mm/kasan/report.c:306
 print_report+0x107/0x220 mm/kasan/report.c:417
 kasan_report+0x139/0x170 mm/kasan/report.c:517
 io_wq_worker_running+0xfe/0x130 io_uring/io-wq.c:674
 rwsem_down_write_slowpath+0xfdc/0x14a0 kernel/locking/rwsem.c:1190
 __down_write_common kernel/locking/rwsem.c:1305 [inline]
 __down_write_killable kernel/locking/rwsem.c:1319 [inline]
 down_write_killable+0x235/0x290 kernel/locking/rwsem.c:1575
 mmap_write_lock_killable include/linux/mmap_lock.h:87 [inline]
 do_madvise+0x2e1/0x1f40 mm/madvise.c:1437
 io_madvise+0xb7/0x160 io_uring/advise.c:57
 io_issue_sqe+0x494/0xcd0 io_uring/io_uring.c:1856
 io_wq_submit_work+0x44a/0x9c0 io_uring/io_uring.c:1932
 io_worker_handle_work+0x8e1/0xee0 io_uring/io-wq.c:587
 io_wqe_worker+0x36c/0xde0 io_uring/io-wq.c:632
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308

Allocated by task 26577:
 kasan_save_stack mm/kasan/common.c:45 [inline]
 kasan_set_track+0x4c/0x70 mm/kasan/common.c:52
 ____kasan_kmalloc mm/kasan/common.c:371 [inline]
 __kasan_kmalloc+0x97/0xb0 mm/kasan/common.c:380
 kmalloc_node include/linux/slab.h:606 [inline]
 kzalloc_node include/linux/slab.h:731 [inline]
 create_io_worker+0xef/0x630 io_uring/io-wq.c:801
 io_wqe_create_worker io_uring/io-wq.c:310 [inline]
 io_wqe_enqueue+0x9f3/0xd10 io_uring/io-wq.c:936
 io_queue_iowq+0x2ac/0x3a0 io_uring/io_uring.c:475
 io_queue_async+0x42b/0x600 io_uring/io_uring.c:2013
 io_queue_sqe io_uring/io_uring.c:2037 [inline]
 io_submit_sqe io_uring/io_uring.c:2286 [inline]
 io_submit_sqes+0xe69/0x1ba0 io_uring/io_uring.c:2397
 __do_sys_io_uring_enter io_uring/io_uring.c:3345 [inline]
 __se_sys_io_uring_enter+0x335/0x2540 io_uring/io_uring.c:3277
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd

Freed by task 26579:
 kasan_save_stack mm/kasan/common.c:45 [inline]
 kasan_set_track+0x4c/0x70 mm/kasan/common.c:52
 kasan_save_free_info+0x27/0x40 mm/kasan/generic.c:518
 ____kasan_slab_free+0xd6/0x120 mm/kasan/common.c:236
 kasan_slab_free include/linux/kasan.h:177 [inline]
 slab_free_hook mm/slub.c:1781 [inline]
 slab_free_freelist_hook+0x12e/0x1a0 mm/slub.c:1807
 slab_free mm/slub.c:3787 [inline]
 __kmem_cache_free+0x71/0x110 mm/slub.c:3800
 io_wq_cancel_tw_create+0x77/0xd0 io_uring/io-wq.c:1233
 io_queue_worker_create+0x384/0x430 io_uring/io-wq.c:381
 sched_submit_work kernel/sched/core.c:6597 [inline]
 schedule+0x63/0x190 kernel/sched/core.c:6628
 rwsem_down_write_slowpath+0xfdc/0x14a0 kernel/locking/rwsem.c:1190
 __down_write_common kernel/locking/rwsem.c:1305 [inline]
 __down_write_killable kernel/locking/rwsem.c:1319 [inline]
 down_write_killable+0x235/0x290 kernel/locking/rwsem.c:1575
 mmap_write_lock_killable include/linux/mmap_lock.h:87 [inline]
 do_madvise+0x2e1/0x1f40 mm/madvise.c:1437
 io_madvise+0xb7/0x160 io_uring/advise.c:57
 io_issue_sqe+0x494/0xcd0 io_uring/io_uring.c:1856
 io_wq_submit_work+0x44a/0x9c0 io_uring/io_uring.c:1932
 io_worker_handle_work+0x8e1/0xee0 io_uring/io-wq.c:587
 io_wqe_worker+0x36c/0xde0 io_uring/io-wq.c:632
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308

Last potentially related work creation:
 kasan_save_stack+0x3b/0x60 mm/kasan/common.c:45
 __kasan_record_aux_stack+0xb0/0xc0 mm/kasan/generic.c:488
 task_work_add+0x87/0x340 kernel/task_work.c:48
 io_queue_worker_create+0x1e2/0x430 io_uring/io-wq.c:373
 sched_submit_work kernel/sched/core.c:6597 [inline]
 schedule+0x63/0x190 kernel/sched/core.c:6628
 rwsem_down_write_slowpath+0xfdc/0x14a0 kernel/locking/rwsem.c:1190
 __down_write_common kernel/locking/rwsem.c:1305 [inline]
 __down_write_killable kernel/locking/rwsem.c:1319 [inline]
 down_write_killable+0x235/0x290 kernel/locking/rwsem.c:1575
 mmap_write_lock_killable include/linux/mmap_lock.h:87 [inline]
 do_madvise+0x2e1/0x1f40 mm/madvise.c:1437
 io_madvise+0xb7/0x160 io_uring/advise.c:57
 io_issue_sqe+0x494/0xcd0 io_uring/io_uring.c:1856
 io_wq_submit_work+0x44a/0x9c0 io_uring/io_uring.c:1932
 io_worker_handle_work+0x8e1/0xee0 io_uring/io-wq.c:587
 io_wqe_worker+0x36c/0xde0 io_uring/io-wq.c:632
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308

The buggy address belongs to the object at ffff88802a70f400
 which belongs to the cache kmalloc-512 of size 512
The buggy address is located 4 bytes inside of
 512-byte region [ffff88802a70f400, ffff88802a70f600)

The buggy address belongs to the physical page:
page:ffffea0000a9c300 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x2a70c
head:ffffea0000a9c300 order:2 compound_mapcount:0 subpages_mapcount:0 compound_pincount:0
anon flags: 0xfff00000010200(slab|head|node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000010200 ffff888012841c80 0000000000000000 0000000000000001
raw: 0000000000000000 0000000000100010 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 2, migratetype Unmovable, gfp_mask 0x1d20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC|__GFP_HARDWALL), pid 26480, tgid 26477 (syz-executor.4), ts 1320725475361, free_ts 1311111812869
 prep_new_page mm/page_alloc.c:2531 [inline]
 get_page_from_freelist+0x72b/0x7a0 mm/page_alloc.c:4283
 __alloc_pages+0x259/0x560 mm/page_alloc.c:5549
 __alloc_pages_node include/linux/gfp.h:237 [inline]
 alloc_slab_page+0x61/0x190 mm/slub.c:1853
 allocate_slab+0x5e/0x3c0 mm/slub.c:1998
 new_slab mm/slub.c:2051 [inline]
 ___slab_alloc+0x7f4/0xeb0 mm/slub.c:3193
 __slab_alloc mm/slub.c:3292 [inline]
 __slab_alloc_node mm/slub.c:3345 [inline]
 slab_alloc_node mm/slub.c:3442 [inline]
 __kmem_cache_alloc_node+0x25b/0x340 mm/slub.c:3491
 kmalloc_node_trace+0x23/0x60 mm/slab_common.c:1075
 kmalloc_node include/linux/slab.h:606 [inline]
 kzalloc_node include/linux/slab.h:731 [inline]
 create_io_worker+0xef/0x630 io_uring/io-wq.c:801
 io_wqe_create_worker io_uring/io-wq.c:310 [inline]
 io_wqe_enqueue+0x9f3/0xd10 io_uring/io-wq.c:936
 io_queue_iowq+0x2ac/0x3a0 io_uring/io_uring.c:475
 io_queue_async+0x42b/0x600 io_uring/io_uring.c:2013
 io_queue_sqe io_uring/io_uring.c:2037 [inline]
 io_submit_sqe io_uring/io_uring.c:2286 [inline]
 io_submit_sqes+0xe69/0x1ba0 io_uring/io_uring.c:2397
 __do_sys_io_uring_enter io_uring/io_uring.c:3345 [inline]
 __se_sys_io_uring_enter+0x335/0x2540 io_uring/io_uring.c:3277
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
page last free stack trace:
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1446 [inline]
 free_pcp_prepare+0x751/0x780 mm/page_alloc.c:1496
 free_unref_page_prepare mm/page_alloc.c:3369 [inline]
 free_unref_page+0x19/0x4c0 mm/page_alloc.c:3464
 discard_slab mm/slub.c:2098 [inline]
 __unfreeze_partials+0x1a5/0x1e0 mm/slub.c:2637
 put_cpu_partial+0x116/0x180 mm/slub.c:2713
 qlist_free_all+0x2b/0x70 mm/kasan/quarantine.c:187
 kasan_quarantine_reduce+0x156/0x170 mm/kasan/quarantine.c:294
 __kasan_slab_alloc+0x1f/0x70 mm/kasan/common.c:302
 kasan_slab_alloc include/linux/kasan.h:201 [inline]
 slab_post_alloc_hook mm/slab.h:761 [inline]
 slab_alloc_node mm/slub.c:3452 [inline]
 slab_alloc mm/slub.c:3460 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3467 [inline]
 kmem_cache_alloc+0x1b3/0x350 mm/slub.c:3476
 kmem_cache_zalloc include/linux/slab.h:710 [inline]
 taskstats_tgid_alloc kernel/taskstats.c:583 [inline]
 taskstats_exit+0x16d/0x910 kernel/taskstats.c:622
 do_exit+0x5c2/0x2150 kernel/exit.c:852
 do_group_exit+0x1fd/0x2b0 kernel/exit.c:1012
 get_signal+0x1755/0x1820 kernel/signal.c:2859
 arch_do_signal_or_restart+0x8d/0x5f0 arch/x86/kernel/signal.c:306
 exit_to_user_mode_loop+0x74/0x160 kernel/entry/common.c:168
 exit_to_user_mode_prepare+0xad/0x110 kernel/entry/common.c:203
 __syscall_exit_to_user_mode_work kernel/entry/common.c:285 [inline]
 syscall_exit_to_user_mode+0x2e/0x60 kernel/entry/common.c:296

Memory state around the buggy address:
 ffff88802a70f300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff88802a70f380: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff88802a70f400: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                   ^
 ffff88802a70f480: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff88802a70f500: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
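
For readers less familiar with KASAN output: the report combines three stacks. The "Freed by task 26579" stack shows the io_worker object being kfree'd from io_wq_cancel_tw_create() while the task sleeps in schedule(), and the "BUG" stack shows io_wq_worker_running() reading 4 bytes of that same object after the task wakes up. Below is a minimal user-space sketch of that lifetime pattern, written only to illustrate the class of bug; it is not the kernel code, and the struct layout and function names are illustrative stand-ins, not io_uring internals.

/*
 * Illustrative only: one path frees the object through a still-live raw
 * pointer (like io_wq_cancel_tw_create in the "Freed by" stack), and a
 * later path reads a 4-byte field through that stale pointer (like
 * io_wq_worker_running in the "BUG" stack). Building with
 * -fsanitize=address reports a heap-use-after-free analogous to the
 * KASAN splat above.
 */
#include <stdio.h>
#include <stdlib.h>

struct fake_worker {
	int ref;
	unsigned int flags;	/* a 4-byte field, matching "Read of size 4" */
};

static struct fake_worker *current_worker;

/* Stand-in for the cancellation path: frees the object ... */
static void cancel_tw_create(void)
{
	free(current_worker);
	/* ... but current_worker is left pointing at freed memory */
}

/* Stand-in for the wake-up path that runs after the free. */
static void worker_running(void)
{
	/* use-after-free: reads freed memory through the stale pointer */
	if (current_worker->flags)
		puts("worker was marked running");
}

int main(void)
{
	current_worker = calloc(1, sizeof(*current_worker));
	if (!current_worker)
		return 1;
	cancel_tw_create();	/* free races with the still-held pointer */
	worker_running();	/* KASAN/ASan flags this read */
	return 0;
}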