==================================================================
BUG: KASAN: slab-use-after-free in __unix_gc+0xe0f/0xf70
Read of size 8 at addr ffff8880111a7e40 by task kworker/u4:9/2905

CPU: 0 PID: 2905 Comm: kworker/u4:9 Not tainted 6.8.0-rc3-syzkaller-00831-g4ec1d5fd384e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Workqueue: events_unbound __unix_gc
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e7/0x2e0 lib/dump_stack.c:106
 print_address_description mm/kasan/report.c:377 [inline]
 print_report+0x167/0x540 mm/kasan/report.c:488
 kasan_report+0x142/0x180 mm/kasan/report.c:601
 __unix_gc+0xe0f/0xf70
 process_one_work kernel/workqueue.c:2633 [inline]
 process_scheduled_works+0x913/0x1420 kernel/workqueue.c:2706
 worker_thread+0xa5f/0x1000 kernel/workqueue.c:2787
 kthread+0x2ef/0x390 kernel/kthread.c:388
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1b/0x30 arch/x86/entry/entry_64.S:242

Allocated by task 5071:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 unpoison_slab_object mm/kasan/common.c:314 [inline]
 __kasan_slab_alloc+0x66/0x80 mm/kasan/common.c:340
 kasan_slab_alloc include/linux/kasan.h:201 [inline]
 slab_post_alloc_hook mm/slub.c:3813 [inline]
 slab_alloc_node mm/slub.c:3860 [inline]
 kmem_cache_alloc+0x16f/0x340 mm/slub.c:3867
 sk_prot_alloc+0x58/0x210 net/core/sock.c:2073
 sk_alloc+0x38/0x370 net/core/sock.c:2132
 unix_create1+0xb4/0x7f0
 unix_create+0x14e/0x200 net/unix/af_unix.c:1047
 __sock_create+0x48f/0x920 net/socket.c:1571
 sock_create net/socket.c:1622 [inline]
 __sys_socketpair+0x2c9/0x720 net/socket.c:1769
 __do_sys_socketpair net/socket.c:1822 [inline]
 __se_sys_socketpair net/socket.c:1819 [inline]
 __x64_sys_socketpair+0x9b/0xb0 net/socket.c:1819
 do_syscall_64+0xf9/0x240
 entry_SYSCALL_64_after_hwframe+0x6f/0x77

Freed by task 1469:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 kasan_save_free_info+0x4e/0x60 mm/kasan/generic.c:640
 poison_slab_object+0xa6/0xe0 mm/kasan/common.c:241
 __kasan_slab_free+0x34/0x70 mm/kasan/common.c:257
 kasan_slab_free include/linux/kasan.h:184 [inline]
 slab_free_hook mm/slub.c:2121 [inline]
 slab_free mm/slub.c:4299 [inline]
 kmem_cache_free+0x102/0x2a0 mm/slub.c:4363
 sk_prot_free net/core/sock.c:2113 [inline]
 __sk_destruct+0x470/0x5f0 net/core/sock.c:2207
 sock_put include/net/sock.h:1961 [inline]
 unix_release_sock+0xa90/0xd20 net/unix/af_unix.c:665
 unix_release+0x91/0xc0 net/unix/af_unix.c:1062
 __sock_release net/socket.c:659 [inline]
 sock_close+0xbc/0x240 net/socket.c:1421
 __fput+0x429/0x8a0 fs/file_table.c:376
 delayed_fput+0x59/0x80 fs/file_table.c:399
 process_one_work kernel/workqueue.c:2633 [inline]
 process_scheduled_works+0x913/0x1420 kernel/workqueue.c:2706
 worker_thread+0xa5f/0x1000 kernel/workqueue.c:2787
 kthread+0x2ef/0x390 kernel/kthread.c:388
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1b/0x30 arch/x86/entry/entry_64.S:242

The buggy address belongs to the object at ffff8880111a7800
 which belongs to the cache UNIX-STREAM of size 1920
The buggy address is located 1600 bytes inside of
 freed 1920-byte region [ffff8880111a7800, ffff8880111a7f80)

The buggy address belongs to the physical page:
page:ffffea0000446800 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x111a0
head:ffffea0000446800 order:3 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0xfff00000000840(slab|head|node=0|zone=1|lastcpupid=0x7ff)
page_type: 0xffffffff()
raw: 00fff00000000840 ffff8880177c0a00 dead000000000122 0000000000000000
raw: 0000000000000000 0000000000100010 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Unmovable, gfp_mask 0xd20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 4977, tgid 4977 (sftp-server), ts 45565091277, free_ts 43432095712
 set_page_owner include/linux/page_owner.h:31 [inline]
 post_alloc_hook+0x1ea/0x210 mm/page_alloc.c:1533
 prep_new_page mm/page_alloc.c:1540 [inline]
 get_page_from_freelist+0x33ea/0x3580 mm/page_alloc.c:3311
 __alloc_pages+0x255/0x680 mm/page_alloc.c:4567
 __alloc_pages_node include/linux/gfp.h:238 [inline]
 alloc_pages_node include/linux/gfp.h:261 [inline]
 alloc_slab_page+0x5f/0x160 mm/slub.c:2190
 allocate_slab mm/slub.c:2354 [inline]
 new_slab+0x84/0x2f0 mm/slub.c:2407
 ___slab_alloc+0xd17/0x13e0 mm/slub.c:3540
 __slab_alloc mm/slub.c:3625 [inline]
 __slab_alloc_node mm/slub.c:3678 [inline]
 slab_alloc_node mm/slub.c:3850 [inline]
 kmem_cache_alloc+0x24d/0x340 mm/slub.c:3867
 sk_prot_alloc+0x58/0x210 net/core/sock.c:2073
 sk_alloc+0x38/0x370 net/core/sock.c:2132
 unix_create1+0xb4/0x7f0
 unix_stream_connect+0x348/0x1110 net/unix/af_unix.c:1511
 __sys_connect_file net/socket.c:2048 [inline]
 __sys_connect+0x2df/0x310 net/socket.c:2065
 __do_sys_connect net/socket.c:2075 [inline]
 __se_sys_connect net/socket.c:2072 [inline]
 __x64_sys_connect+0x7a/0x90 net/socket.c:2072
 do_syscall_64+0xf9/0x240
 entry_SYSCALL_64_after_hwframe+0x6f/0x77
page last free pid 4919 tgid 4919 stack trace:
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1140 [inline]
 free_unref_page_prepare+0x968/0xa90 mm/page_alloc.c:2346
 free_unref_page+0x37/0x3f0 mm/page_alloc.c:2486
 pipe_buf_release include/linux/pipe_fs_i.h:219 [inline]
 pipe_update_tail fs/pipe.c:234 [inline]
 pipe_read+0x6f5/0x13f0 fs/pipe.c:354
 call_read_iter include/linux/fs.h:2079 [inline]
 new_sync_read fs/read_write.c:395 [inline]
 vfs_read+0x978/0xb70 fs/read_write.c:476
 ksys_read+0x1a0/0x2c0 fs/read_write.c:619
 do_syscall_64+0xf9/0x240
 entry_SYSCALL_64_after_hwframe+0x6f/0x77

Memory state around the buggy address:
 ffff8880111a7d00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8880111a7d80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff8880111a7e00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                           ^
 ffff8880111a7e80: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff8880111a7f00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================