loop0: detected capacity change from 0 to 32768
gfs2: fsid=syz:syz: Trying to join cluster "lock_nolock", "syz:syz"
gfs2: fsid=syz:syz: Now mounting FS (format 1801)...
gfs2: fsid=syz:syz.0: journal 0 mapped with 3 extents in 0ms
gfs2: fsid=syz:syz.0: first mount done, others may mount
gfs2: fsid=syz:syz.0: found 1 quota changes
loop0: detected capacity change from 32768 to 0
==================================================================
BUG: KASAN: slab-use-after-free in list_empty include/linux/list.h:381 [inline]
BUG: KASAN: slab-use-after-free in gfs2_discard fs/gfs2/aops.c:589 [inline]
BUG: KASAN: slab-use-after-free in gfs2_invalidate_folio+0x40b/0x750 fs/gfs2/aops.c:627
Read of size 8 at addr ffff88801243a248 by task syz.0.0/5339

CPU: 0 UID: 0 PID: 5339 Comm: syz.0.0 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:378 [inline]
 print_report+0xca/0x240 mm/kasan/report.c:482
 kasan_report+0x118/0x150 mm/kasan/report.c:595
 list_empty include/linux/list.h:381 [inline]
 gfs2_discard fs/gfs2/aops.c:589 [inline]
 gfs2_invalidate_folio+0x40b/0x750 fs/gfs2/aops.c:627
 folio_invalidate mm/truncate.c:140 [inline]
 truncate_cleanup_folio+0x2d8/0x430 mm/truncate.c:160
 truncate_inode_pages_range+0x233/0xd90 mm/truncate.c:404
 gfs2_evict_inode+0x87a/0x1000 fs/gfs2/super.c:1426
 evict+0x5f4/0xae0 fs/inode.c:837
 gfs2_put_super+0x355/0x860 fs/gfs2/super.c:617
 generic_shutdown_super+0x135/0x2c0 fs/super.c:643
 kill_block_super+0x44/0x90 fs/super.c:1722
 deactivate_locked_super+0xbc/0x130 fs/super.c:474
 cleanup_mnt+0x425/0x4c0 fs/namespace.c:1318
 task_work_run+0x1d4/0x260 kernel/task_work.c:233
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 __exit_to_user_mode_loop kernel/entry/common.c:44 [inline]
 exit_to_user_mode_loop+0xef/0x4e0 kernel/entry/common.c:75
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
 syscall_exit_to_user_mode_work include/linux/entry-common.h:159 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:194 [inline]
 do_syscall_64+0x2c1/0xf80 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fab2d38f7c9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fab2e2ac038 EFLAGS: 00000246 ORIG_RAX: 00000000000001b4
RAX: 0000000000000000 RBX: 00007fab2d5e5fa0 RCX: 00007fab2d38f7c9
RDX: 0000000000000000 RSI: ffffffffffffffff RDI: 0000000000000000
RBP: 00007fab2d413f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fab2d5e6038 R14: 00007fab2d5e5fa0 R15: 00007ffe2b353408

Allocated by task 5345:
 kasan_save_stack mm/kasan/common.c:57 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:78
 unpoison_slab_object mm/kasan/common.c:340 [inline]
 __kasan_slab_alloc+0x6c/0x80 mm/kasan/common.c:366
 kasan_slab_alloc include/linux/kasan.h:253 [inline]
 slab_post_alloc_hook mm/slub.c:4953 [inline]
 slab_alloc_node mm/slub.c:5263 [inline]
 kmem_cache_alloc_noprof+0x37d/0x710 mm/slub.c:5270
 gfs2_alloc_bufdata fs/gfs2/trans.c:173 [inline]
 gfs2_trans_add_data+0x200/0x620 fs/gfs2/trans.c:214
 gfs2_unstuffer_folio fs/gfs2/bmap.c:81 [inline]
 __gfs2_unstuff_inode fs/gfs2/bmap.c:119 [inline]
 gfs2_unstuff_dinode+0xb38/0x1320 fs/gfs2/bmap.c:166
 gfs2_adjust_quota+0x219/0x800 fs/gfs2/quota.c:847
 do_sync+0x847/0xc60 fs/gfs2/quota.c:961
 gfs2_quota_sync+0x359/0x460 fs/gfs2/quota.c:1357
 gfs2_quotad+0x3d5/0x930 fs/gfs2/quota.c:1607
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x510/0xa50 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246

Freed by task 5345:
 kasan_save_stack mm/kasan/common.c:57 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:78
 kasan_save_free_info+0x46/0x50 mm/kasan/generic.c:584
 poison_slab_object mm/kasan/common.c:253 [inline]
 __kasan_slab_free+0x5c/0x80 mm/kasan/common.c:285
 kasan_slab_free include/linux/kasan.h:235 [inline]
 slab_free_hook mm/slub.c:2540 [inline]
 slab_free mm/slub.c:6670 [inline]
 kmem_cache_free+0x197/0x620 mm/slub.c:6781
 trans_drain fs/gfs2/log.c:1015 [inline]
 gfs2_log_flush+0x17a2/0x24c0 fs/gfs2/log.c:1153
 do_sync+0xa1d/0xc60 fs/gfs2/quota.c:981
 gfs2_quota_sync+0x359/0x460 fs/gfs2/quota.c:1357
 gfs2_quotad+0x3d5/0x930 fs/gfs2/quota.c:1607
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x510/0xa50 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246

The buggy address belongs to the object at ffff88801243a230
 which belongs to the cache gfs2_bufdata of size 80
The buggy address is located 24 bytes inside of
 freed 80-byte region [ffff88801243a230, ffff88801243a280)

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1243a
flags: 0xfff00000000000(node=0|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 00fff00000000000 ffff88801c4daa00 dead000000000122 0000000000000000
raw: 0000000000000000 0000000080240024 00000000f5000000 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x52c40(GFP_NOFS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP), pid 5339, tgid 5338 (syz.0.0), ts 86274251700, free_ts 86260667939
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x234/0x290 mm/page_alloc.c:1857
 prep_new_page mm/page_alloc.c:1865 [inline]
 get_page_from_freelist+0x24e0/0x2580 mm/page_alloc.c:3915
 __alloc_frozen_pages_noprof+0x181/0x370 mm/page_alloc.c:5210
 alloc_pages_mpol+0x232/0x4a0 mm/mempolicy.c:2486
 alloc_slab_page mm/slub.c:3075 [inline]
 allocate_slab+0x86/0x3b0 mm/slub.c:3248
 new_slab mm/slub.c:3302 [inline]
 ___slab_alloc+0xe53/0x1820 mm/slub.c:4656
 __slab_alloc+0x65/0x100 mm/slub.c:4779
 __slab_alloc_node mm/slub.c:4855 [inline]
 slab_alloc_node mm/slub.c:5251 [inline]
 kmem_cache_alloc_noprof+0x40f/0x710 mm/slub.c:5270
 gfs2_alloc_bufdata fs/gfs2/trans.c:173 [inline]
 gfs2_trans_add_meta+0x2cf/0x960 fs/gfs2/trans.c:276
 gfs2_alloc_extent fs/gfs2/rgrp.c:2237 [inline]
 gfs2_alloc_blocks+0x7a0/0x2080 fs/gfs2/rgrp.c:2447
 alloc_dinode+0x258/0x550 fs/gfs2/inode.c:436
 gfs2_create_inode+0xbc8/0x15b0 fs/gfs2/inode.c:822
 gfs2_atomic_open+0x116/0x200 fs/gfs2/inode.c:1402
 atomic_open fs/namei.c:4295 [inline]
 lookup_open fs/namei.c:4406 [inline]
 open_last_lookups fs/namei.c:4540 [inline]
 path_openat+0x11f8/0x3dd0 fs/namei.c:4784
 do_filp_open+0x1fa/0x410 fs/namei.c:4814
 do_sys_openat2+0x121/0x200 fs/open.c:1430
page last free pid 78 tgid 78 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1406 [inline]
 free_unref_folios+0xdb3/0x14f0 mm/page_alloc.c:3000
 shrink_folio_list+0x4785/0x4f90 mm/vmscan.c:1603
 evict_folios+0x473e/0x57f0 mm/vmscan.c:4711
 try_to_shrink_lruvec+0x8a3/0xb50 mm/vmscan.c:4874
 shrink_one+0x25c/0x720 mm/vmscan.c:4919
 shrink_many mm/vmscan.c:4982 [inline]
 lru_gen_shrink_node mm/vmscan.c:5060 [inline]
 shrink_node+0x2f7d/0x35b0 mm/vmscan.c:6047
 kswapd_shrink_node mm/vmscan.c:6901 [inline]
 balance_pgdat mm/vmscan.c:7084 [inline]
 kswapd+0x145a/0x2820 mm/vmscan.c:7354
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x510/0xa50 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246

Memory state around the buggy address:
 ffff88801243a100: fb fb fb fb fb fb fc fc fc fc fa fb fb fb fb fb
 ffff88801243a180: fb fb fb fb fc fc fc fc fa fb fb fb fb fb fb fb
>ffff88801243a200: fb fb fc fc fc fc fa fb fb fb fb fb fb fb fb fb
                                              ^
 ffff88801243a280: fc fc fc fc fa fb fb fb fb fb fb fb fb fb fc fc
 ffff88801243a300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
==================================================================