==================================================================
BUG: KASAN: use-after-free in lbmIODone+0xcbf/0xf40 fs/jfs/jfs_logmgr.c:2220
Read of size 4 at addr ffff888097a6be08 by task loop4/21076

CPU: 1 PID: 21076 Comm: loop4 Not tainted 4.19.211-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x1fc/0x2ef lib/dump_stack.c:118
 print_address_description.cold+0x54/0x219 mm/kasan/report.c:256
 kasan_report_error.cold+0x8a/0x1b9 mm/kasan/report.c:354
 kasan_report mm/kasan/report.c:412 [inline]
 __asan_report_load4_noabort+0x88/0x90 mm/kasan/report.c:432
 lbmIODone+0xcbf/0xf40 fs/jfs/jfs_logmgr.c:2220
 bio_endio+0x488/0x830 block/bio.c:1780
 req_bio_endio block/blk-core.c:278 [inline]
 blk_update_request+0x30f/0xaf0 block/blk-core.c:3112
 blk_mq_end_request+0x4a/0x340 block/blk-mq.c:544
 lo_complete_rq+0x201/0x2d0 drivers/block/loop.c:487
 __blk_mq_complete_request block/blk-mq.c:583 [inline]
 blk_mq_complete_request+0x472/0x660 block/blk-mq.c:620
 loop_handle_cmd drivers/block/loop.c:1931 [inline]
 loop_queue_work+0x274/0x20c0 drivers/block/loop.c:1940
 kthread_worker_fn+0x292/0x730 kernel/kthread.c:700
 kthread+0x33f/0x460 kernel/kthread.c:259
 ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:415

Allocated by task 21074:
 kmem_cache_alloc_trace+0x12f/0x380 mm/slab.c:3625
 kmalloc include/linux/slab.h:515 [inline]
 lbmLogInit fs/jfs/jfs_logmgr.c:1843 [inline]
 lmLogInit+0x301/0x13e0 fs/jfs/jfs_logmgr.c:1292
 open_inline_log fs/jfs/jfs_logmgr.c:1197 [inline]
 lmLogOpen+0x718/0x11e0 fs/jfs/jfs_logmgr.c:1090
 jfs_mount_rw+0x286/0x4b0 fs/jfs/jfs_mount.c:272
 jfs_fill_super+0x814/0xb50 fs/jfs/super.c:598
 mount_bdev+0x2fc/0x3b0 fs/super.c:1158
 mount_fs+0xa3/0x310 fs/super.c:1261
 vfs_kern_mount.part.0+0x68/0x470 fs/namespace.c:961
 vfs_kern_mount fs/namespace.c:951 [inline]
 do_new_mount fs/namespace.c:2492 [inline]
 do_mount+0x115c/0x2f50 fs/namespace.c:2822
 ksys_mount+0xcf/0x130 fs/namespace.c:3038
 __do_sys_mount fs/namespace.c:3052 [inline]
 __se_sys_mount fs/namespace.c:3049 [inline]
 __x64_sys_mount+0xba/0x150 fs/namespace.c:3049
 do_syscall_64+0xf9/0x620 arch/x86/entry/common.c:293
 entry_SYSCALL_64_after_hwframe+0x49/0xbe

Freed by task 8133:
 __cache_free mm/slab.c:3503 [inline]
 kfree+0xcc/0x210 mm/slab.c:3822
 lbmLogShutdown fs/jfs/jfs_logmgr.c:1886 [inline]
 lmLogShutdown+0x2c6/0x580 fs/jfs/jfs_logmgr.c:1706
 lmLogClose+0x4a1/0x610 fs/jfs/jfs_logmgr.c:1482
 jfs_umount+0x25f/0x310 fs/jfs/jfs_umount.c:129
 jfs_put_super+0x61/0x140 fs/jfs/super.c:223
 generic_shutdown_super+0x144/0x370 fs/super.c:456
 kill_block_super+0x97/0xf0 fs/super.c:1185
 deactivate_locked_super+0x94/0x160 fs/super.c:329
 deactivate_super+0x174/0x1a0 fs/super.c:360
 cleanup_mnt+0x1a8/0x290 fs/namespace.c:1098
 task_work_run+0x148/0x1c0 kernel/task_work.c:113
 tracehook_notify_resume include/linux/tracehook.h:193 [inline]
 exit_to_usermode_loop+0x251/0x2a0 arch/x86/entry/common.c:167
 prepare_exit_to_usermode arch/x86/entry/common.c:198 [inline]
 syscall_return_slowpath arch/x86/entry/common.c:271 [inline]
 do_syscall_64+0x538/0x620 arch/x86/entry/common.c:296
 entry_SYSCALL_64_after_hwframe+0x49/0xbe

The buggy address belongs to the object at ffff888097a6be00
 which belongs to the cache kmalloc-192 of size 192
The buggy address is located 8 bytes inside of
 192-byte region [ffff888097a6be00, ffff888097a6bec0)
The buggy address belongs to the page:
page:ffffea00025e9ac0 count:1 mapcount:0 mapping:ffff88813bff0040 index:0x0
flags: 0xfff00000000100(slab)
raw: 00fff00000000100 ffffea00024df2c8 ffffea00024abd88 ffff88813bff0040
raw: 0000000000000000 ffff888097a6b000 0000000100000010 0000000000000000
page dumped because: kasan: bad access detected

Memory state around the buggy address:
 ffff888097a6bd00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff888097a6bd80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
>ffff888097a6be00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                      ^
 ffff888097a6be80: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
 ffff888097a6bf00: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================