==================================================================
BUG: KASAN: slab-use-after-free in __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
BUG: KASAN: slab-use-after-free in _raw_spin_lock_irqsave+0xa7/0xf0 kernel/locking/spinlock.c:162
Read of size 1 at addr ffff888032befa68 by task kworker/u4:1/13

CPU: 0 UID: 0 PID: 13 Comm: kworker/u4:1 Not tainted 6.16.0-rc7-syzkaller #0 PREEMPT(full)
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Workqueue: loop0 loop_workfn
Call Trace:
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:378 [inline]
 print_report+0xca/0x230 mm/kasan/report.c:480
 kasan_report+0x118/0x150 mm/kasan/report.c:593
 __kasan_check_byte+0x2a/0x40 mm/kasan/common.c:557
 kasan_check_byte include/linux/kasan.h:399 [inline]
 lock_acquire+0x8d/0x360 kernel/locking/lockdep.c:5845
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0xa7/0xf0 kernel/locking/spinlock.c:162
 __wake_up_common_lock+0x2f/0x1f0 kernel/sched/wait.c:105
 blk_update_request+0x5eb/0xe70 block/blk-mq.c:987
 blk_mq_end_request+0x3e/0x70 block/blk-mq.c:1149
 lo_rw_aio_complete drivers/block/loop.c:327 [inline]
 lo_rw_aio+0xd75/0xfa0 drivers/block/loop.c:401
 do_req_filebacked drivers/block/loop.c:-1 [inline]
 loop_handle_cmd drivers/block/loop.c:1888 [inline]
 loop_process_work+0x810/0xf40 drivers/block/loop.c:1923
 process_one_work kernel/workqueue.c:3238 [inline]
 process_scheduled_works+0xae1/0x17b0 kernel/workqueue.c:3321
 worker_thread+0x8a0/0xda0 kernel/workqueue.c:3402
 kthread+0x70e/0x8a0 kernel/kthread.c:464
 ret_from_fork+0x3fc/0x770 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

Allocated by task 5341:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:68
 poison_kmalloc_redzone mm/kasan/common.c:377 [inline]
 __kasan_kmalloc+0x93/0xb0 mm/kasan/common.c:394
 kasan_kmalloc include/linux/kasan.h:260 [inline]
 __kmalloc_cache_noprof+0x230/0x3d0 mm/slub.c:4359
 kmalloc_noprof include/linux/slab.h:905 [inline]
 lbmLogInit fs/jfs/jfs_logmgr.c:1822 [inline]
 lmLogInit+0x3c0/0x19e0 fs/jfs/jfs_logmgr.c:1270
 open_inline_log fs/jfs/jfs_logmgr.c:1175 [inline]
 lmLogOpen+0x4e1/0xfb0 fs/jfs/jfs_logmgr.c:1069
 jfs_mount_rw+0xe9/0x670 fs/jfs/jfs_mount.c:257
 jfs_fill_super+0x754/0xd90 fs/jfs/super.c:532
 get_tree_bdev_flags+0x40b/0x4d0 fs/super.c:1681
 vfs_get_tree+0x92/0x2b0 fs/super.c:1804
 do_new_mount+0x24a/0xa40 fs/namespace.c:3902
 do_mount fs/namespace.c:4239 [inline]
 __do_sys_mount fs/namespace.c:4450 [inline]
 __se_sys_mount+0x317/0x410 fs/namespace.c:4427
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Freed by task 5341:
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:68
 kasan_save_free_info+0x46/0x50 mm/kasan/generic.c:576
 poison_slab_object mm/kasan/common.c:247 [inline]
 __kasan_slab_free+0x62/0x70 mm/kasan/common.c:264
 kasan_slab_free include/linux/kasan.h:233 [inline]
 slab_free_hook mm/slub.c:2381 [inline]
 slab_free mm/slub.c:4643 [inline]
 kfree+0x18e/0x440 mm/slub.c:4842
 lbmLogShutdown fs/jfs/jfs_logmgr.c:1865 [inline]
 lmLogInit+0x1133/0x19e0 fs/jfs/jfs_logmgr.c:1416
 open_inline_log fs/jfs/jfs_logmgr.c:1175 [inline]
 lmLogOpen+0x4e1/0xfb0 fs/jfs/jfs_logmgr.c:1069
 jfs_mount_rw+0xe9/0x670 fs/jfs/jfs_mount.c:257
 jfs_fill_super+0x754/0xd90 fs/jfs/super.c:532
 get_tree_bdev_flags+0x40b/0x4d0 fs/super.c:1681
 vfs_get_tree+0x92/0x2b0 fs/super.c:1804
 do_new_mount+0x24a/0xa40 fs/namespace.c:3902
 do_mount fs/namespace.c:4239 [inline]
 __do_sys_mount fs/namespace.c:4450 [inline]
 __se_sys_mount+0x317/0x410 fs/namespace.c:4427
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

The buggy address belongs to the object at ffff888032befa00
 which belongs to the cache kmalloc-192 of size 192
The buggy address is located 104 bytes inside of
 freed 192-byte region [ffff888032befa00, ffff888032befac0)

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x32bef
flags: 0x4fff00000000000(node=1|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 04fff00000000000 ffff88801a4413c0 ffffea0000d82740 dead000000000002
raw: 0000000000000000 0000000080100010 00000000f5000000 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x52cc0(GFP_KERNEL|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP), pid 1, tgid 1 (swapper/0), ts 10848846090, free_ts 10634963879
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x240/0x2a0 mm/page_alloc.c:1704
 prep_new_page mm/page_alloc.c:1712 [inline]
 get_page_from_freelist+0x21e4/0x22c0 mm/page_alloc.c:3669
 __alloc_frozen_pages_noprof+0x181/0x370 mm/page_alloc.c:4959
 alloc_pages_mpol+0x232/0x4a0 mm/mempolicy.c:2419
 alloc_slab_page mm/slub.c:2451 [inline]
 allocate_slab+0x8a/0x3b0 mm/slub.c:2619
 new_slab mm/slub.c:2673 [inline]
 ___slab_alloc+0xbfc/0x1480 mm/slub.c:3859
 __slab_alloc mm/slub.c:3949 [inline]
 __slab_alloc_node mm/slub.c:4024 [inline]
 slab_alloc_node mm/slub.c:4185 [inline]
 __kmalloc_cache_noprof+0x296/0x3d0 mm/slub.c:4354
 kmalloc_noprof include/linux/slab.h:905 [inline]
 kzalloc_noprof include/linux/slab.h:1039 [inline]
 drm_atomic_state_alloc+0xa9/0x100 drivers/gpu/drm/drm_atomic.c:173
 drm_client_modeset_commit_atomic+0xe2/0x760 drivers/gpu/drm/drm_client_modeset.c:1042
 drm_client_modeset_commit_locked+0xcb/0x4d0 drivers/gpu/drm/drm_client_modeset.c:1204
 pan_display_atomic drivers/gpu/drm/drm_fb_helper.c:1387 [inline]
 drm_fb_helper_pan_display+0x3e7/0xbd0 drivers/gpu/drm/drm_fb_helper.c:1447
 fb_pan_display+0x39b/0x680 drivers/video/fbdev/core/fbmem.c:193
 bit_update_start+0x4d/0x1e0 drivers/video/fbdev/core/bitblit.c:380
 fbcon_switch+0x1568/0x2040 drivers/video/fbdev/core/fbcon.c:2192
 redraw_screen+0x56a/0xe90 drivers/tty/vt/vt.c:965
 con2fb_init_display drivers/video/fbdev/core/fbcon.c:828 [inline]
 set_con2fb_map+0xd9b/0x13c0 drivers/video/fbdev/core/fbcon.c:889
page last free pid 10 tgid 10 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1248 [inline]
 __free_frozen_pages+0xc71/0xe70 mm/page_alloc.c:2706
 discard_slab mm/slub.c:2717 [inline]
 __put_partials+0x161/0x1c0 mm/slub.c:3186
 put_cpu_partial+0x17c/0x250 mm/slub.c:3261
 __slab_free+0x2f7/0x400 mm/slub.c:4513
 qlink_free mm/kasan/quarantine.c:163 [inline]
 qlist_free_all+0x97/0x140 mm/kasan/quarantine.c:179
 kasan_quarantine_reduce+0x148/0x160 mm/kasan/quarantine.c:286
 __kasan_slab_alloc+0x22/0x80 mm/kasan/common.c:329
 kasan_slab_alloc include/linux/kasan.h:250 [inline]
 slab_post_alloc_hook mm/slub.c:4148 [inline]
 slab_alloc_node mm/slub.c:4197 [inline]
 __kmalloc_cache_noprof+0x1be/0x3d0 mm/slub.c:4354
 kmalloc_noprof include/linux/slab.h:905 [inline]
 kzalloc_noprof include/linux/slab.h:1039 [inline]
 drm_atomic_state_alloc+0xa9/0x100 drivers/gpu/drm/drm_atomic.c:173
 drm_atomic_helper_dirtyfb+0xed/0xee0 drivers/gpu/drm/drm_damage_helper.c:126
 drm_fbdev_shmem_helper_fb_dirty+0x160/0x2f0 drivers/gpu/drm/drm_fbdev_shmem.c:117
 drm_fb_helper_fb_dirty drivers/gpu/drm/drm_fb_helper.c:379 [inline]
 drm_fb_helper_damage_work+0x224/0x710 drivers/gpu/drm/drm_fb_helper.c:402
 process_one_work kernel/workqueue.c:3238 [inline]
 process_scheduled_works+0xae1/0x17b0 kernel/workqueue.c:3321
 worker_thread+0x8a0/0xda0 kernel/workqueue.c:3402
 kthread+0x70e/0x8a0 kernel/kthread.c:464
 ret_from_fork+0x3fc/0x770 arch/x86/kernel/process.c:148

Memory state around the buggy address:
 ffff888032bef900: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff888032bef980: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
>ffff888032befa00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                                          ^
 ffff888032befa80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
 ffff888032befb00: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================