BUG: spinlock bad magic on CPU#0, jfsCommit/276
==================================================================
BUG: KASAN: use-after-free in string_nocheck lib/vsprintf.c:642 [inline]
BUG: KASAN: use-after-free in string+0x218/0x2b0 lib/vsprintf.c:724
Read of size 1 at addr ffff888058de92b8 by task jfsCommit/276

CPU: 0 PID: 276 Comm: jfsCommit Not tainted 5.15.169-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2d0 lib/dump_stack.c:106
 print_address_description+0x63/0x3b0 mm/kasan/report.c:248
 __kasan_report mm/kasan/report.c:434 [inline]
 kasan_report+0x16b/0x1c0 mm/kasan/report.c:451
 string_nocheck lib/vsprintf.c:642 [inline]
 string+0x218/0x2b0 lib/vsprintf.c:724
 vsnprintf+0x11a4/0x1c70 lib/vsprintf.c:2811
 vprintk_store+0x3ba/0x1300 kernel/printk/printk.c:2164
 vprintk_emit+0x83/0x150 kernel/printk/printk.c:2258
 _printk+0xd1/0x120 kernel/printk/printk.c:2299
 spin_dump kernel/locking/spinlock_debug.c:63 [inline]
 spin_bug+0x136/0x1d0 kernel/locking/spinlock_debug.c:77
 debug_spin_lock_before kernel/locking/spinlock_debug.c:85 [inline]
 do_raw_spin_lock+0x200/0x370 kernel/locking/spinlock_debug.c:114
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:117 [inline]
 _raw_spin_lock_irqsave+0xdd/0x120 kernel/locking/spinlock.c:162
 __wake_up_common_lock kernel/sched/wait.c:137 [inline]
 __wake_up+0xf5/0x1c0 kernel/sched/wait.c:157
 unlock_metapage fs/jfs/jfs_metapage.c:37 [inline]
 release_metapage+0x155/0xe00 fs/jfs/jfs_metapage.c:737
 xtTruncate+0xff9/0x3260
 jfs_free_zero_link+0x46a/0x6e0 fs/jfs/namei.c:758
 jfs_evict_inode+0x35b/0x440 fs/jfs/inode.c:153
 evict+0x529/0x930 fs/inode.c:622
 txUpdateMap+0x825/0x9e0 fs/jfs/jfs_txnmgr.c:2401
 txLazyCommit fs/jfs/jfs_txnmgr.c:2698 [inline]
 jfs_lazycommit+0x470/0xc30 fs/jfs/jfs_txnmgr.c:2766
 kthread+0x3f6/0x4f0 kernel/kthread.c:334
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287

Allocated by task 11644:
 kasan_save_stack mm/kasan/common.c:38 [inline]
 kasan_set_track mm/kasan/common.c:46 [inline]
 set_alloc_info mm/kasan/common.c:434 [inline]
 __kasan_slab_alloc+0x8e/0xc0 mm/kasan/common.c:467
 kasan_slab_alloc include/linux/kasan.h:254 [inline]
 slab_post_alloc_hook+0x53/0x380 mm/slab.h:519
 slab_alloc_node mm/slub.c:3220 [inline]
 slab_alloc mm/slub.c:3228 [inline]
 kmem_cache_alloc+0xf3/0x280 mm/slub.c:3233
 jfs_alloc_inode+0x17/0x50 fs/jfs/super.c:105
 alloc_inode fs/inode.c:236 [inline]
 new_inode_pseudo+0x60/0x210 fs/inode.c:976
 new_inode+0x25/0x1d0 fs/inode.c:1005
 diReadSpecial+0x4e/0x680 fs/jfs/jfs_imap.c:426
 jfs_mount+0x71/0x820 fs/jfs/jfs_mount.c:87
 jfs_fill_super+0x5ba/0xc70 fs/jfs/super.c:561
 mount_bdev+0x2c9/0x3f0 fs/super.c:1400
 legacy_get_tree+0xeb/0x180 fs/fs_context.c:611
 vfs_get_tree+0x88/0x270 fs/super.c:1530
 do_new_mount+0x2ba/0xb40 fs/namespace.c:3012
 do_mount fs/namespace.c:3355 [inline]
 __do_sys_mount fs/namespace.c:3563 [inline]
 __se_sys_mount+0x2d5/0x3c0 fs/namespace.c:3540
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x66/0xd0

Last potentially related work creation:
 kasan_save_stack+0x36/0x60 mm/kasan/common.c:38
 kasan_record_aux_stack+0xba/0x100 mm/kasan/generic.c:348
 __call_rcu kernel/rcu/tree.c:3007 [inline]
 call_rcu+0x1c4/0xa70 kernel/rcu/tree.c:3087
 destroy_inode fs/inode.c:291 [inline]
 evict+0x87d/0x930 fs/inode.c:637
 jfs_umount+0x1d8/0x370 fs/jfs/jfs_umount.c:83
 jfs_put_super+0x86/0x180 fs/jfs/super.c:194
 generic_shutdown_super+0x130/0x310 fs/super.c:475
 kill_block_super+0x7a/0xe0 fs/super.c:1427
 deactivate_locked_super+0xa0/0x110 fs/super.c:335
 cleanup_mnt+0x44e/0x500 fs/namespace.c:1143
 task_work_run+0x129/0x1a0 kernel/task_work.c:188
 tracehook_notify_resume include/linux/tracehook.h:189 [inline]
 exit_to_user_mode_loop+0x106/0x130 kernel/entry/common.c:181
 exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:214
 __syscall_exit_to_user_mode_work kernel/entry/common.c:296 [inline]
 syscall_exit_to_user_mode+0x5d/0x240 kernel/entry/common.c:307
 do_syscall_64+0x47/0xb0 arch/x86/entry/common.c:86
 entry_SYSCALL_64_after_hwframe+0x66/0xd0

Second to last potentially related work creation:
 kasan_save_stack+0x36/0x60 mm/kasan/common.c:38
 kasan_record_aux_stack+0xba/0x100 mm/kasan/generic.c:348
 __call_rcu kernel/rcu/tree.c:3007 [inline]
 call_rcu+0x1c4/0xa70 kernel/rcu/tree.c:3087
 destroy_inode fs/inode.c:291 [inline]
 evict+0x87d/0x930 fs/inode.c:637
 jfs_mount+0x51c/0x820 fs/jfs/jfs_mount.c:203
 jfs_fill_super+0x5ba/0xc70 fs/jfs/super.c:561
 mount_bdev+0x2c9/0x3f0 fs/super.c:1400
 legacy_get_tree+0xeb/0x180 fs/fs_context.c:611
 vfs_get_tree+0x88/0x270 fs/super.c:1530
 do_new_mount+0x2ba/0xb40 fs/namespace.c:3012
 do_mount fs/namespace.c:3355 [inline]
 __do_sys_mount fs/namespace.c:3563 [inline]
 __se_sys_mount+0x2d5/0x3c0 fs/namespace.c:3540
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x66/0xd0

The buggy address belongs to the object at ffff888058de9280
 which belongs to the cache jfs_ip of size 2240
The buggy address is located 56 bytes inside of
 2240-byte region [ffff888058de9280, ffff888058de9b40)
The buggy address belongs to the page:
page:ffffea0001637a00 refcount:1 mapcount:0 mapping:0000000000000000 index:0xffff888058dea500 pfn:0x58de8
head:ffffea0001637a00 order:3 compound_mapcount:0 compound_pincount:0
memcg:ffff888065e9a001
flags: 0xfff00000010200(slab|head|node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000010200 0000000000000000 dead000000000122 ffff8881463c73c0
raw: ffff888058dea500 00000000800d0009 00000001ffffffff ffff888065e9a001
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Reclaimable, gfp_mask 0x1d2050(__GFP_IO|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC|__GFP_HARDWALL|__GFP_RECLAIMABLE), pid 7099, ts 203516138059, free_ts 106579261389
 prep_new_page mm/page_alloc.c:2426 [inline]
 get_page_from_freelist+0x322a/0x33c0 mm/page_alloc.c:4159
 __alloc_pages+0x272/0x700 mm/page_alloc.c:5423
 alloc_slab_page mm/slub.c:1775 [inline]
 allocate_slab mm/slub.c:1912 [inline]
 new_slab+0xbb/0x4b0 mm/slub.c:1975
 ___slab_alloc+0x6f6/0xe10 mm/slub.c:3008
 __slab_alloc mm/slub.c:3095 [inline]
 slab_alloc_node mm/slub.c:3186 [inline]
 slab_alloc mm/slub.c:3228 [inline]
 kmem_cache_alloc+0x18e/0x280 mm/slub.c:3233
 jfs_alloc_inode+0x17/0x50 fs/jfs/super.c:105
 alloc_inode fs/inode.c:236 [inline]
 new_inode_pseudo+0x60/0x210 fs/inode.c:976
 new_inode+0x25/0x1d0 fs/inode.c:1005
 ialloc+0x48/0x970 fs/jfs/jfs_inode.c:48
 jfs_create+0x1ba/0xbb0 fs/jfs/namei.c:92
 lookup_open fs/namei.c:3462 [inline]
 open_last_lookups fs/namei.c:3532 [inline]
 path_openat+0x130a/0x2f20 fs/namei.c:3739
 do_filp_open+0x21c/0x460 fs/namei.c:3769
 do_sys_openat2+0x13b/0x4f0 fs/open.c:1253
 do_sys_open fs/open.c:1269 [inline]
 __do_sys_openat fs/open.c:1285 [inline]
 __se_sys_openat fs/open.c:1280 [inline]
 __x64_sys_openat+0x243/0x290 fs/open.c:1280
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x66/0xd0
page last free stack trace:
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1340 [inline]
 free_pcp_prepare mm/page_alloc.c:1391 [inline]
 free_unref_page_prepare+0xc34/0xcf0 mm/page_alloc.c:3317
 free_unref_page+0x95/0x2d0 mm/page_alloc.c:3396
 io_ring_ctx_free io_uring/io_uring.c:9409 [inline]
 io_ring_exit_work+0xc9a/0x12f0 io_uring/io_uring.c:9573
 process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
 worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
 kthread+0x3f6/0x4f0 kernel/kthread.c:334
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287

Memory state around the buggy address:
 ffff888058de9180: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 ffff888058de9200: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff888058de9280: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                        ^
 ffff888058de9300: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff888058de9380: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================