BUG: spinlock bad magic on CPU#1, jfsCommit/133
==================================================================
BUG: KASAN: use-after-free in string_nocheck lib/vsprintf.c:643 [inline]
BUG: KASAN: use-after-free in string+0x218/0x2b0 lib/vsprintf.c:725
Read of size 1 at addr ffff8880579512a8 by task jfsCommit/133

CPU: 1 PID: 133 Comm: jfsCommit Not tainted 6.1.112-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 print_address_description mm/kasan/report.c:284 [inline]
 print_report+0x15f/0x4f0 mm/kasan/report.c:395
 kasan_report+0x136/0x160 mm/kasan/report.c:495
 string_nocheck lib/vsprintf.c:643 [inline]
 string+0x218/0x2b0 lib/vsprintf.c:725
 vsnprintf+0x11a4/0x1c70 lib/vsprintf.c:2805
 vprintk_store+0x448/0x1110 kernel/printk/printk.c:2187
 vprintk_emit+0x115/0x740 kernel/printk/printk.c:2284
 _printk+0xd1/0x111 kernel/printk/printk.c:2328
 spin_dump kernel/locking/spinlock_debug.c:63 [inline]
 spin_bug+0x136/0x1d0 kernel/locking/spinlock_debug.c:77
 debug_spin_lock_before kernel/locking/spinlock_debug.c:85 [inline]
 do_raw_spin_lock+0x200/0x370 kernel/locking/spinlock_debug.c:114
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
 _raw_spin_lock_irqsave+0xdd/0x120 kernel/locking/spinlock.c:162
 __wake_up_common_lock kernel/sched/wait.c:137 [inline]
 __wake_up+0xfd/0x1c0 kernel/sched/wait.c:160
 unlock_metapage fs/jfs/jfs_metapage.c:38 [inline]
 release_metapage+0xb7/0x9b0 fs/jfs/jfs_metapage.c:736
 xtTruncate+0xff9/0x3260
 jfs_free_zero_link+0x46a/0x6e0 fs/jfs/namei.c:758
 jfs_evict_inode+0x35b/0x440 fs/jfs/inode.c:153
 evict+0x529/0x930 fs/inode.c:701
 txUpdateMap+0x825/0x9e0 fs/jfs/jfs_txnmgr.c:2367
 txLazyCommit fs/jfs/jfs_txnmgr.c:2664 [inline]
 jfs_lazycommit+0x476/0xb60 fs/jfs/jfs_txnmgr.c:2732
 kthread+0x28d/0x320 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

Allocated by task 4295:
 kasan_save_stack mm/kasan/common.c:45 [inline]
 kasan_set_track+0x4b/0x70 mm/kasan/common.c:52
 __kasan_slab_alloc+0x65/0x70 mm/kasan/common.c:328
 kasan_slab_alloc include/linux/kasan.h:201 [inline]
 slab_post_alloc_hook+0x52/0x3a0 mm/slab.h:737
 slab_alloc_node mm/slub.c:3398 [inline]
 slab_alloc mm/slub.c:3406 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3413 [inline]
 kmem_cache_alloc_lru+0x10c/0x2d0 mm/slub.c:3429
 alloc_inode_sb include/linux/fs.h:3198 [inline]
 jfs_alloc_inode+0x24/0x60 fs/jfs/super.c:105
 alloc_inode fs/inode.c:261 [inline]
 new_inode_pseudo+0x61/0x1d0 fs/inode.c:1055
 new_inode+0x25/0x1d0 fs/inode.c:1083
 diReadSpecial+0x4e/0x680 fs/jfs/jfs_imap.c:426
 jfs_mount+0x3ab/0x820 fs/jfs/jfs_mount.c:166
 jfs_fill_super+0x598/0xc40 fs/jfs/super.c:556
 mount_bdev+0x2c9/0x3f0 fs/super.c:1443
 legacy_get_tree+0xeb/0x180 fs/fs_context.c:632
 vfs_get_tree+0x88/0x270 fs/super.c:1573
 do_new_mount+0x2ba/0xb40 fs/namespace.c:3051
 do_mount fs/namespace.c:3394 [inline]
 __do_sys_mount fs/namespace.c:3602 [inline]
 __se_sys_mount+0x2d5/0x3c0 fs/namespace.c:3579
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2

Last potentially related work creation:
 kasan_save_stack+0x3b/0x60 mm/kasan/common.c:45
 __kasan_record_aux_stack+0xb0/0xc0 mm/kasan/generic.c:486
 call_rcu+0x163/0xa10 kernel/rcu/tree.c:2845
 destroy_inode fs/inode.c:316 [inline]
 evict+0x87d/0x930 fs/inode.c:716
 jfs_umount+0x111/0x370 fs/jfs/jfs_umount.c:65
 jfs_put_super+0x86/0x180 fs/jfs/super.c:194
 generic_shutdown_super+0x130/0x340 fs/super.c:501
 kill_block_super+0x7a/0xe0 fs/super.c:1470
 deactivate_locked_super+0xa0/0x110 fs/super.c:332
 cleanup_mnt+0x490/0x520 fs/namespace.c:1186
 task_work_run+0x246/0x300 kernel/task_work.c:203
 resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
 exit_to_user_mode_loop+0xde/0x100 kernel/entry/common.c:177
 exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:210
 __syscall_exit_to_user_mode_work kernel/entry/common.c:292 [inline]
 syscall_exit_to_user_mode+0x60/0x270 kernel/entry/common.c:303
 do_syscall_64+0x47/0xb0 arch/x86/entry/common.c:87
 entry_SYSCALL_64_after_hwframe+0x68/0xd2

The buggy address belongs to the object at ffff888057951280
 which belongs to the cache jfs_ip of size 2240
The buggy address is located 40 bytes inside of
 2240-byte region [ffff888057951280, ffff888057951b40)

The buggy address belongs to the physical page:
page:ffffea00015e5400 refcount:1 mapcount:0 mapping:0000000000000000 index:0xffff888057951280 pfn:0x57950
head:ffffea00015e5400 order:3 compound_mapcount:0 compound_pincount:0
memcg:ffff8880738cb701
flags: 0xfff00000010200(slab|head|node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000010200 0000000000000000 dead000000000122 ffff888147ada140
raw: ffff888057951280 00000000800d0009 00000001ffffffff ffff8880738cb701
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Reclaimable, gfp_mask 0x1d2050(__GFP_IO|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC|__GFP_HARDWALL|__GFP_RECLAIMABLE), pid 4295, tgid 4293 (syz.3.133), ts 101647501169, free_ts 16757874021
 set_page_owner include/linux/page_owner.h:31 [inline]
 post_alloc_hook+0x18d/0x1b0 mm/page_alloc.c:2517
 prep_new_page mm/page_alloc.c:2524 [inline]
 get_page_from_freelist+0x322e/0x33b0 mm/page_alloc.c:4290
 __alloc_pages+0x28d/0x770 mm/page_alloc.c:5558
 alloc_slab_page+0x6a/0x150 mm/slub.c:1794
 allocate_slab mm/slub.c:1939 [inline]
 new_slab+0x84/0x2d0 mm/slub.c:1992
 ___slab_alloc+0xc20/0x1270 mm/slub.c:3180
 __slab_alloc mm/slub.c:3279 [inline]
 slab_alloc_node mm/slub.c:3364 [inline]
 slab_alloc mm/slub.c:3406 [inline]
 __kmem_cache_alloc_lru mm/slub.c:3413 [inline]
 kmem_cache_alloc_lru+0x1a5/0x2d0 mm/slub.c:3429
 alloc_inode_sb include/linux/fs.h:3198 [inline]
 jfs_alloc_inode+0x24/0x60 fs/jfs/super.c:105
 alloc_inode fs/inode.c:261 [inline]
 new_inode_pseudo+0x61/0x1d0 fs/inode.c:1055
 new_inode+0x25/0x1d0 fs/inode.c:1083
 diReadSpecial+0x4e/0x680 fs/jfs/jfs_imap.c:426
 jfs_mount+0x171/0x820 fs/jfs/jfs_mount.c:108
 jfs_fill_super+0x598/0xc40 fs/jfs/super.c:556
 mount_bdev+0x2c9/0x3f0 fs/super.c:1443
 legacy_get_tree+0xeb/0x180 fs/fs_context.c:632
 vfs_get_tree+0x88/0x270 fs/super.c:1573
page last free stack trace:
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1444 [inline]
 free_pcp_prepare mm/page_alloc.c:1494 [inline]
 free_unref_page_prepare+0xf63/0x1120 mm/page_alloc.c:3369
 free_unref_page+0x33/0x3e0 mm/page_alloc.c:3464
 free_contig_range+0x9a/0x150 mm/page_alloc.c:9518
 destroy_args+0xfe/0x997 mm/debug_vm_pgtable.c:1031
 debug_vm_pgtable+0x416/0x46b mm/debug_vm_pgtable.c:1354
 do_one_initcall+0x265/0x8f0 init/main.c:1298
 do_initcall_level+0x157/0x207 init/main.c:1371
 do_initcalls+0x49/0x86 init/main.c:1387
 kernel_init_freeable+0x45c/0x60f init/main.c:1626
 kernel_init+0x19/0x290 init/main.c:1514
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

Memory state around the buggy address:
 ffff888057951180: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 ffff888057951200: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff888057951280: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                  ^
 ffff888057951300: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff888057951380: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================