==================================================================
BUG: KASAN: use-after-free in io_wq_worker_running+0x114/0x130 io_uring/io-wq.c:674
Read of size 4 at addr ffff8880732dac04 by task iou-wrk-3766/3767

CPU: 1 PID: 3767 Comm: iou-wrk-3766 Not tainted 6.2.0-rc2-syzkaller-00256-ga689b938df39 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd1/0x138 lib/dump_stack.c:106
 print_address_description mm/kasan/report.c:306 [inline]
 print_report+0x15e/0x461 mm/kasan/report.c:417
 kasan_report+0xbf/0x1f0 mm/kasan/report.c:517
 io_wq_worker_running+0x114/0x130 io_uring/io-wq.c:674
 schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6690
 __mutex_lock_common kernel/locking/mutex.c:679 [inline]
 __mutex_lock+0xa48/0x1360 kernel/locking/mutex.c:747
 io_ring_submit_lock io_uring/io_uring.h:215 [inline]
 io_buffer_select+0x8e5/0xbe0 io_uring/kbuf.c:178
 io_recv+0x851/0x1140 io_uring/net.c:860
 io_issue_sqe+0x156/0x1220 io_uring/io_uring.c:1856
 io_wq_submit_work+0x29c/0xdc0 io_uring/io_uring.c:1932
 io_worker_handle_work+0xc41/0x1c60 io_uring/io-wq.c:587
 io_wqe_worker+0xa5b/0xe40 io_uring/io-wq.c:632
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308

Allocated by task 3766:
 kasan_save_stack+0x22/0x40 mm/kasan/common.c:45
 kasan_set_track+0x25/0x30 mm/kasan/common.c:52
 ____kasan_kmalloc mm/kasan/common.c:371 [inline]
 ____kasan_kmalloc mm/kasan/common.c:330 [inline]
 __kasan_kmalloc+0xa3/0xb0 mm/kasan/common.c:380
 kmalloc_node include/linux/slab.h:606 [inline]
 kzalloc_node include/linux/slab.h:731 [inline]
 create_io_worker+0x10c/0x630 io_uring/io-wq.c:801
 io_wqe_create_worker io_uring/io-wq.c:310 [inline]
 io_wqe_enqueue+0x6c3/0xbc0 io_uring/io-wq.c:936
 io_queue_iowq+0x282/0x5c0 io_uring/io_uring.c:475
 io_queue_sqe_fallback+0xf3/0x190 io_uring/io_uring.c:2059
 io_submit_sqe io_uring/io_uring.c:2281 [inline]
 io_submit_sqes+0x11db/0x1e60 io_uring/io_uring.c:2397
 __do_sys_io_uring_enter+0xc1d/0x2540 io_uring/io_uring.c:3345
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd

Freed by task 3767:
 kasan_save_stack+0x22/0x40 mm/kasan/common.c:45
 kasan_set_track+0x25/0x30 mm/kasan/common.c:52
 kasan_save_free_info+0x2b/0x40 mm/kasan/generic.c:518
 ____kasan_slab_free mm/kasan/common.c:236 [inline]
 ____kasan_slab_free+0x13b/0x1a0 mm/kasan/common.c:200
 kasan_slab_free include/linux/kasan.h:177 [inline]
 __cache_free mm/slab.c:3394 [inline]
 __do_kmem_cache_free mm/slab.c:3580 [inline]
 __kmem_cache_free+0xcd/0x3b0 mm/slab.c:3587
 io_wq_cancel_tw_create io_uring/io-wq.c:1233 [inline]
 io_queue_worker_create+0x567/0x660 io_uring/io-wq.c:381
 io_wqe_dec_running+0x1e4/0x240 io_uring/io-wq.c:410
 io_wq_worker_sleeping+0xa6/0xc0 io_uring/io-wq.c:698
 sched_submit_work kernel/sched/core.c:6597 [inline]
 schedule+0x16e/0x1b0 kernel/sched/core.c:6628
 schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6690
 __mutex_lock_common kernel/locking/mutex.c:679 [inline]
 __mutex_lock+0xa48/0x1360 kernel/locking/mutex.c:747
 io_ring_submit_lock io_uring/io_uring.h:215 [inline]
 io_buffer_select+0x8e5/0xbe0 io_uring/kbuf.c:178
 io_recv+0x851/0x1140 io_uring/net.c:860
 io_issue_sqe+0x156/0x1220 io_uring/io_uring.c:1856
 io_wq_submit_work+0x29c/0xdc0 io_uring/io_uring.c:1932
 io_worker_handle_work+0xc41/0x1c60 io_uring/io-wq.c:587
 io_wqe_worker+0xa5b/0xe40 io_uring/io-wq.c:632
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308

Last potentially related work creation:
 kasan_save_stack+0x22/0x40 mm/kasan/common.c:45
 __kasan_record_aux_stack+0x7b/0x90 mm/kasan/generic.c:488
 task_work_add+0x7f/0x2c0 kernel/task_work.c:48
 io_queue_worker_create+0x41d/0x660 io_uring/io-wq.c:373
 io_wqe_dec_running+0x1e4/0x240 io_uring/io-wq.c:410
 io_wq_worker_sleeping+0xa6/0xc0 io_uring/io-wq.c:698
 sched_submit_work kernel/sched/core.c:6597 [inline]
 schedule+0x16e/0x1b0 kernel/sched/core.c:6628
 schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6690
 __mutex_lock_common kernel/locking/mutex.c:679 [inline]
 __mutex_lock+0xa48/0x1360 kernel/locking/mutex.c:747
 io_ring_submit_lock io_uring/io_uring.h:215 [inline]
 io_buffer_select+0x8e5/0xbe0 io_uring/kbuf.c:178
 io_recv+0x851/0x1140 io_uring/net.c:860
 io_issue_sqe+0x156/0x1220 io_uring/io_uring.c:1856
 io_wq_submit_work+0x29c/0xdc0 io_uring/io_uring.c:1932
 io_worker_handle_work+0xc41/0x1c60 io_uring/io-wq.c:587
 io_wqe_worker+0xa5b/0xe40 io_uring/io-wq.c:632
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308

Second to last potentially related work creation:
 kasan_save_stack+0x22/0x40 mm/kasan/common.c:45
 __kasan_record_aux_stack+0x7b/0x90 mm/kasan/generic.c:488
 insert_work+0x48/0x350 kernel/workqueue.c:1358
 __queue_work+0x693/0x13b0 kernel/workqueue.c:1517
 call_timer_fn+0x1da/0x7c0 kernel/time/timer.c:1700
 expire_timers+0xbb/0x5c0 kernel/time/timer.c:1746
 __run_timers kernel/time/timer.c:2022 [inline]
 __run_timers kernel/time/timer.c:1995 [inline]
 run_timer_softirq+0x326/0x910 kernel/time/timer.c:2035
 __do_softirq+0x1fb/0xadc kernel/softirq.c:571

The buggy address belongs to the object at ffff8880732dac00
 which belongs to the cache kmalloc-512 of size 512
The buggy address is located 4 bytes inside of
 512-byte region [ffff8880732dac00, ffff8880732dae00)

The buggy address belongs to the physical page:
page:ffffea0001ccb680 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x732da
flags: 0xfff00000000200(slab|node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000000200 ffff888012440600 ffffea0001b10610 ffffea0001e535d0
raw: 0000000000000000 ffff8880732da000 0000000100000004 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x342220(__GFP_HIGH|__GFP_ATOMIC|__GFP_NOWARN|__GFP_COMP|__GFP_HARDWALL|__GFP_THISNODE), pid 24573, tgid 24573 (kworker/u4:43), ts 2744217260612, free_ts 2744122204553
 prep_new_page mm/page_alloc.c:2531 [inline]
 get_page_from_freelist+0x119c/0x2ce0 mm/page_alloc.c:4283
 __alloc_pages+0x1cb/0x5b0 mm/page_alloc.c:5549
 __alloc_pages_node include/linux/gfp.h:237 [inline]
 kmem_getpages mm/slab.c:1363 [inline]
 cache_grow_begin+0x94/0x390 mm/slab.c:2574
 cache_alloc_refill+0x27f/0x380 mm/slab.c:2947
 ____cache_alloc mm/slab.c:3023 [inline]
 ____cache_alloc mm/slab.c:3006 [inline]
 __do_cache_alloc mm/slab.c:3206 [inline]
 slab_alloc_node mm/slab.c:3254 [inline]
 __kmem_cache_alloc_node+0x44f/0x510 mm/slab.c:3544
 kmalloc_trace+0x26/0x60 mm/slab_common.c:1062
 kmalloc include/linux/slab.h:580 [inline]
 batadv_forw_packet_alloc+0x3b0/0x4d0 net/batman-adv/send.c:519
 batadv_iv_ogm_aggregate_new+0x134/0x4e0 net/batman-adv/bat_iv_ogm.c:563
 batadv_iv_ogm_queue_add net/batman-adv/bat_iv_ogm.c:671 [inline]
 batadv_iv_ogm_schedule_buff+0xe6b/0x1450 net/batman-adv/bat_iv_ogm.c:850
 batadv_iv_ogm_schedule net/batman-adv/bat_iv_ogm.c:869 [inline]
 batadv_iv_ogm_schedule net/batman-adv/bat_iv_ogm.c:862 [inline]
 batadv_iv_send_outstanding_bat_ogm_packet+0x744/0x910 net/batman-adv/bat_iv_ogm.c:1713
 process_one_work+0x9bf/0x1710 kernel/workqueue.c:2289
 worker_thread+0x669/0x1090 kernel/workqueue.c:2436
 kthread+0x2e8/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
page last free stack trace:
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1446 [inline]
 free_pcp_prepare+0x65c/0xc00 mm/page_alloc.c:1496
 free_unref_page_prepare mm/page_alloc.c:3369 [inline]
 free_unref_page_list+0x176/0xcd0 mm/page_alloc.c:3510
 release_pages+0xcb1/0x1330 mm/swap.c:1076
 tlb_batch_pages_flush+0xa8/0x1a0 mm/mmu_gather.c:97
 tlb_flush_mmu_free mm/mmu_gather.c:292 [inline]
 tlb_flush_mmu mm/mmu_gather.c:299 [inline]
 tlb_finish_mmu+0x14b/0x7e0 mm/mmu_gather.c:391
 exit_mmap+0x202/0x7b0 mm/mmap.c:3096
 __mmput+0x128/0x4c0 kernel/fork.c:1207
 mmput+0x60/0x70 kernel/fork.c:1229
 exit_mm kernel/exit.c:563 [inline]
 do_exit+0x9ac/0x2950 kernel/exit.c:854
 do_group_exit+0xd4/0x2a0 kernel/exit.c:1012
 get_signal+0x21c3/0x2450 kernel/signal.c:2859
 arch_do_signal_or_restart+0x79/0x5c0 arch/x86/kernel/signal.c:306
 exit_to_user_mode_loop kernel/entry/common.c:168 [inline]
 exit_to_user_mode_prepare+0x15f/0x250 kernel/entry/common.c:203
 __syscall_exit_to_user_mode_work kernel/entry/common.c:285 [inline]
 syscall_exit_to_user_mode+0x1d/0x50 kernel/entry/common.c:296
 do_syscall_64+0x46/0xb0 arch/x86/entry/common.c:86
 entry_SYSCALL_64_after_hwframe+0x63/0xcd

Memory state around the buggy address:
 ffff8880732dab00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff8880732dab80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff8880732dac00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                   ^
 ffff8880732dac80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8880732dad00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
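For illustration, here is a minimal userspace sketch of the access pattern the stacks above describe: the same task (3767) frees a worker object on its way to sleep (io_wq_worker_sleeping -> io_queue_worker_create -> io_wq_cancel_tw_create) and then reads a 4-byte field at offset 4 of that object when it is woken (io_wq_worker_running). The struct layout and field names below are hypothetical stand-ins loosely modeled on io_uring/io-wq.c, not the kernel's definitions; this is a demo of the bug pattern, not kernel code.

/*
 * Sketch of the use-after-free pattern from the report above.
 * Assumption: the freed 512-byte object is the struct io_worker
 * allocated in create_io_worker(), and the "Read of size 4 at
 * addr ...c04" (4 bytes into the object) is a 32-bit field such
 * as the worker's flags word. Layout here is illustrative only.
 */
#include <stdio.h>
#include <stdlib.h>

struct io_worker {
	int ref;		/* offset 0 */
	unsigned int flags;	/* offset 4: the reported 4-byte read */
};

int main(void)
{
	struct io_worker *worker = calloc(1, sizeof(*worker));
	if (!worker)
		return 1;

	/* Sleep path: io_wq_cancel_tw_create() frees the worker ... */
	free(worker);

	/* ... wakeup path: io_wq_worker_running() still dereferences
	 * the dangling pointer. This read is the use-after-free. */
	printf("flags = %u\n", worker->flags);
	return 0;
}

Built with gcc -g -fsanitize=address, the final read is reported as a heap-use-after-free READ of size 4 located 4 bytes inside the freed region, which is the userspace analogue of the KASAN report above (in the shadow dump, fa/fb bytes mark the freed object and fc the surrounding slab redzones).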