syzbot


possible deadlock in __ocfs2_journal_access

Status: auto-obsoleted due to no activity on 2025/03/23 08:13
Subsystems: ocfs2
Reported-by: syzbot+e3b76b437ea8b580e4d0@syzkaller.appspotmail.com
First crash: 126d, last: 75d
Discussions (1)
Title: [syzbot] [ocfs2?] possible deadlock in __ocfs2_journal_access
Replies (including bot): 0 (1)
Last reply: 2024/11/26 07:58

Sample crash report:
ocfs2: Mounting device (7,8) on (node local, slot 0) with ordered data mode.
======================================================
WARNING: possible circular locking dependency detected
6.13.0-rc6-syzkaller-00262-gb62cef9a5c67 #0 Not tainted
------------------------------------------------------
syz.8.9555/29574 is trying to acquire lock:
ffff8880554b5d68 (&oi->ip_io_mutex){+.+.}-{4:4}, at: __ocfs2_journal_access+0x4a1/0x8b0 fs/ocfs2/journal.c:684

but task is already holding lock:
ffff8880597b0958 (jbd2_handle){++++}-{0:0}, at: start_this_handle+0x1e94/0x2110 fs/jbd2/transaction.c:448

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #6 (jbd2_handle){++++}-{0:0}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       start_this_handle+0x1eb4/0x2110 fs/jbd2/transaction.c:448
       jbd2__journal_start+0x2da/0x5d0 fs/jbd2/transaction.c:505
       __ext4_journal_start_sb+0x239/0x600 fs/ext4/ext4_jbd2.c:112
       __ext4_journal_start fs/ext4/ext4_jbd2.h:326 [inline]
       ext4_dirty_inode+0x92/0x110 fs/ext4/inode.c:6038
       __mark_inode_dirty+0x2f0/0xe90 fs/fs-writeback.c:2515
       generic_update_time fs/inode.c:2112 [inline]
       inode_update_time fs/inode.c:2125 [inline]
       touch_atime+0x413/0x690 fs/inode.c:2197
       file_accessed include/linux/fs.h:2539 [inline]
       ext4_file_mmap+0x18c/0x540 fs/ext4/file.c:816
       call_mmap include/linux/fs.h:2183 [inline]
       mmap_file mm/internal.h:124 [inline]
       __mmap_new_file_vma mm/vma.c:2291 [inline]
       __mmap_new_vma mm/vma.c:2355 [inline]
       __mmap_region+0x2250/0x2d30 mm/vma.c:2456
       mmap_region+0x226/0x2c0 mm/mmap.c:1352
       do_mmap+0x97a/0x10d0 mm/mmap.c:500
       vm_mmap_pgoff+0x1dd/0x3d0 mm/util.c:575
       ksys_mmap_pgoff+0x4eb/0x720 mm/mmap.c:546
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #5 (&mm->mmap_lock){++++}-{4:4}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __might_fault+0xc6/0x120 mm/memory.c:6751
       _inline_copy_from_user include/linux/uaccess.h:162 [inline]
       _copy_from_user+0x2a/0xc0 lib/usercopy.c:18
       copy_from_user include/linux/uaccess.h:212 [inline]
       __blk_trace_setup kernel/trace/blktrace.c:626 [inline]
       blk_trace_ioctl+0x1ad/0x9a0 kernel/trace/blktrace.c:740
       blkdev_ioctl+0x40c/0x6a0 block/ioctl.c:682
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl+0xf7/0x170 fs/ioctl.c:892
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #4 (&q->debugfs_mutex){+.+.}-{4:4}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __mutex_lock_common kernel/locking/mutex.c:585 [inline]
       __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
       blk_mq_init_sched+0x3fa/0x830 block/blk-mq-sched.c:473
       elevator_init_mq+0x20e/0x320 block/elevator.c:610
       add_disk_fwnode+0x10d/0xf80 block/genhd.c:413
       sd_probe+0xba6/0x1100 drivers/scsi/sd.c:4024
       really_probe+0x2ba/0xad0 drivers/base/dd.c:658
       __driver_probe_device+0x1a2/0x390 drivers/base/dd.c:800
       driver_probe_device+0x50/0x430 drivers/base/dd.c:830
       __device_attach_driver+0x2d6/0x530 drivers/base/dd.c:958
       bus_for_each_drv+0x250/0x2e0 drivers/base/bus.c:459
       __device_attach_async_helper+0x22d/0x300 drivers/base/dd.c:987
       async_run_entry_fn+0xaa/0x420 kernel/async.c:129
       process_one_work kernel/workqueue.c:3236 [inline]
       process_scheduled_works+0xa68/0x1840 kernel/workqueue.c:3317
       worker_thread+0x870/0xd30 kernel/workqueue.c:3398
       kthread+0x2f2/0x390 kernel/kthread.c:389
       ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

-> #3 (&q->q_usage_counter(queue)#50){++++}-{0:0}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       blk_queue_enter+0xe1/0x600 block/blk-core.c:328
       blk_mq_alloc_request+0x4fa/0xaa0 block/blk-mq.c:652
       scsi_alloc_request drivers/scsi/scsi_lib.c:1222 [inline]
       scsi_execute_cmd+0x177/0x1090 drivers/scsi/scsi_lib.c:304
       read_capacity_16+0x2b4/0x1450 drivers/scsi/sd.c:2655
       sd_read_capacity drivers/scsi/sd.c:2824 [inline]
       sd_revalidate_disk+0x1013/0xbce0 drivers/scsi/sd.c:3734
       sd_probe+0x9fa/0x1100 drivers/scsi/sd.c:4010
       really_probe+0x2ba/0xad0 drivers/base/dd.c:658
       __driver_probe_device+0x1a2/0x390 drivers/base/dd.c:800
       driver_probe_device+0x50/0x430 drivers/base/dd.c:830
       __device_attach_driver+0x2d6/0x530 drivers/base/dd.c:958
       bus_for_each_drv+0x250/0x2e0 drivers/base/bus.c:459
       __device_attach_async_helper+0x22d/0x300 drivers/base/dd.c:987
       async_run_entry_fn+0xaa/0x420 kernel/async.c:129
       process_one_work kernel/workqueue.c:3236 [inline]
       process_scheduled_works+0xa68/0x1840 kernel/workqueue.c:3317
       worker_thread+0x870/0xd30 kernel/workqueue.c:3398
       kthread+0x2f2/0x390 kernel/kthread.c:389
       ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

-> #2 (&q->limits_lock){+.+.}-{4:4}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __mutex_lock_common kernel/locking/mutex.c:585 [inline]
       __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
       queue_limits_start_update include/linux/blkdev.h:947 [inline]
       loop_reconfigure_limits+0x43f/0x900 drivers/block/loop.c:998
       loop_set_block_size drivers/block/loop.c:1473 [inline]
       lo_simple_ioctl drivers/block/loop.c:1496 [inline]
       lo_ioctl+0x1351/0x1f50 drivers/block/loop.c:1559
       blkdev_ioctl+0x57f/0x6a0 block/ioctl.c:693
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl+0xf7/0x170 fs/ioctl.c:892
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&q->q_usage_counter(io)#26){++++}-{0:0}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       bio_queue_enter block/blk.h:75 [inline]
       blk_mq_submit_bio+0x1536/0x2390 block/blk-mq.c:3090
       __submit_bio+0x2c6/0x560 block/blk-core.c:629
       __submit_bio_noacct_mq block/blk-core.c:710 [inline]
       submit_bio_noacct_nocheck+0x4d3/0xe30 block/blk-core.c:739
       ocfs2_read_blocks+0x8d9/0x1600 fs/ocfs2/buffer_head_io.c:330
       ocfs2_read_inode_block_full fs/ocfs2/inode.c:1593 [inline]
       ocfs2_read_inode_block+0x106/0x1e0 fs/ocfs2/inode.c:1605
       ocfs2_get_clusters+0x3d2/0xbd0 fs/ocfs2/extent_map.c:615
       ocfs2_extent_map_get_blocks+0x24c/0x7d0 fs/ocfs2/extent_map.c:668
       ocfs2_read_virt_blocks+0x313/0xb10 fs/ocfs2/extent_map.c:983
       ocfs2_read_dir_block fs/ocfs2/dir.c:508 [inline]
       ocfs2_find_entry_el fs/ocfs2/dir.c:715 [inline]
       ocfs2_find_entry+0x43b/0x2730 fs/ocfs2/dir.c:1080
       ocfs2_find_files_on_disk+0xff/0x360 fs/ocfs2/dir.c:1981
       ocfs2_lookup_ino_from_name+0xb1/0x1e0 fs/ocfs2/dir.c:2003
       _ocfs2_get_system_file_inode fs/ocfs2/sysfile.c:136 [inline]
       ocfs2_get_system_file_inode+0x305/0x7b0 fs/ocfs2/sysfile.c:112
       ocfs2_init_global_system_inodes+0x32c/0x730 fs/ocfs2/super.c:457
       ocfs2_initialize_super fs/ocfs2/super.c:2248 [inline]
       ocfs2_fill_super+0x2f5b/0x5760 fs/ocfs2/super.c:994
       mount_bdev+0x20c/0x2d0 fs/super.c:1693
       legacy_get_tree+0xf0/0x190 fs/fs_context.c:662
       vfs_get_tree+0x92/0x2b0 fs/super.c:1814
       do_new_mount+0x2be/0xb40 fs/namespace.c:3511
       do_mount fs/namespace.c:3851 [inline]
       __do_sys_mount fs/namespace.c:4061 [inline]
       __se_sys_mount+0x2d6/0x3c0 fs/namespace.c:4038
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&oi->ip_io_mutex){+.+.}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3161 [inline]
       check_prevs_add kernel/locking/lockdep.c:3280 [inline]
       validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
       __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __mutex_lock_common kernel/locking/mutex.c:585 [inline]
       __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
       __ocfs2_journal_access+0x4a1/0x8b0 fs/ocfs2/journal.c:684
       ocfs2_local_alloc_slide_window fs/ocfs2/localalloc.c:1276 [inline]
       ocfs2_reserve_local_alloc_bits+0xd99/0x2840 fs/ocfs2/localalloc.c:669
       ocfs2_reserve_clusters_with_limit+0x1b8/0xb60 fs/ocfs2/suballoc.c:1166
       ocfs2_symlink+0x13a9/0x2d80 fs/ocfs2/namei.c:1921
       vfs_symlink+0x139/0x2e0 fs/namei.c:4669
       do_symlinkat+0x222/0x3a0 fs/namei.c:4695
       __do_sys_symlink fs/namei.c:4716 [inline]
       __se_sys_symlink fs/namei.c:4714 [inline]
       __x64_sys_symlink+0x7a/0x90 fs/namei.c:4714
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  &oi->ip_io_mutex --> &mm->mmap_lock --> jbd2_handle

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  rlock(jbd2_handle);
                               lock(&mm->mmap_lock);
                               lock(jbd2_handle);
  lock(&oi->ip_io_mutex);

 *** DEADLOCK ***

8 locks held by syz.8.9555/29574:
 #0: ffff888048266420 (sb_writers#21){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:516
 #1: ffff88805544a640 (&type->i_mutex_dir_key#16/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:853 [inline]
 #1: ffff88805544a640 (&type->i_mutex_dir_key#16/1){+.+.}-{4:4}, at: filename_create+0x260/0x540 fs/namei.c:4080
 #2: ffff8880554b6d80 (&ocfs2_sysfile_lock_key[args->fi_sysfile_type]#6){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:818 [inline]
 #2: ffff8880554b6d80 (&ocfs2_sysfile_lock_key[args->fi_sysfile_type]#6){+.+.}-{4:4}, at: ocfs2_reserve_suballoc_bits+0x192/0x4e70 fs/ocfs2/suballoc.c:786
 #3: ffff8880554b5f40 (&ocfs2_sysfile_lock_key[args->fi_sysfile_type]#5){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:818 [inline]
 #3: ffff8880554b5f40 (&ocfs2_sysfile_lock_key[args->fi_sysfile_type]#5){+.+.}-{4:4}, at: ocfs2_reserve_local_alloc_bits+0x132/0x2840 fs/ocfs2/localalloc.c:636
 #4: ffff88805544ed80 (&ocfs2_sysfile_lock_key[args->fi_sysfile_type]#3){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:818 [inline]
 #4: ffff88805544ed80 (&ocfs2_sysfile_lock_key[args->fi_sysfile_type]#3){+.+.}-{4:4}, at: ocfs2_reserve_suballoc_bits+0x192/0x4e70 fs/ocfs2/suballoc.c:786
 #5: ffff888048266610 (sb_internal#5){.+.+}-{0:0}, at: ocfs2_local_alloc_slide_window fs/ocfs2/localalloc.c:1254 [inline]
 #5: ffff888048266610 (sb_internal#5){.+.+}-{0:0}, at: ocfs2_reserve_local_alloc_bits+0xc16/0x2840 fs/ocfs2/localalloc.c:669
 #6: ffff8880726428e8 (&journal->j_trans_barrier){.+.+}-{4:4}, at: ocfs2_start_trans+0x3be/0x700 fs/ocfs2/journal.c:350
 #7: ffff8880597b0958 (jbd2_handle){++++}-{0:0}, at: start_this_handle+0x1e94/0x2110 fs/jbd2/transaction.c:448

stack backtrace:
CPU: 1 UID: 0 PID: 29574 Comm: syz.8.9555 Not tainted 6.13.0-rc6-syzkaller-00262-gb62cef9a5c67 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_circular_bug+0x13a/0x1b0 kernel/locking/lockdep.c:2074
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2206
 check_prev_add kernel/locking/lockdep.c:3161 [inline]
 check_prevs_add kernel/locking/lockdep.c:3280 [inline]
 validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
 __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
 __mutex_lock_common kernel/locking/mutex.c:585 [inline]
 __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
 __ocfs2_journal_access+0x4a1/0x8b0 fs/ocfs2/journal.c:684
 ocfs2_local_alloc_slide_window fs/ocfs2/localalloc.c:1276 [inline]
 ocfs2_reserve_local_alloc_bits+0xd99/0x2840 fs/ocfs2/localalloc.c:669
 ocfs2_reserve_clusters_with_limit+0x1b8/0xb60 fs/ocfs2/suballoc.c:1166
 ocfs2_symlink+0x13a9/0x2d80 fs/ocfs2/namei.c:1921
 vfs_symlink+0x139/0x2e0 fs/namei.c:4669
 do_symlinkat+0x222/0x3a0 fs/namei.c:4695
 __do_sys_symlink fs/namei.c:4716 [inline]
 __se_sys_symlink fs/namei.c:4714 [inline]
 __x64_sys_symlink+0x7a/0x90 fs/namei.c:4714
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fe158785d29
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fe1565f6038 EFLAGS: 00000246 ORIG_RAX: 0000000000000058
RAX: ffffffffffffffda RBX: 00007fe158975fa0 RCX: 00007fe158785d29
RDX: 0000000000000000 RSI: 00000000200059c0 RDI: 00000000200049c0
RBP: 00007fe158801b08 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007fe158975fa0 R15: 00007fff01ba02e8
 </TASK>
OCFS2: ERROR (device loop8): int ocfs2_validate_gd_self(struct super_block *, struct buffer_head *, int): Group descriptor #32 has bit count 1024 but claims that 1707 are free
On-disk corruption discovered. Please run fsck.ocfs2 once the filesystem is unmounted.
OCFS2: File system is now read-only.
(syz.8.9555,29574,1):ocfs2_search_chain:1814 ERROR: status = -30
(syz.8.9555,29574,1):ocfs2_search_chain:1926 ERROR: status = -30
(syz.8.9555,29574,1):ocfs2_claim_suballoc_bits:1995 ERROR: status = -30
(syz.8.9555,29574,1):ocfs2_claim_suballoc_bits:2038 ERROR: status = -30
(syz.8.9555,29574,1):__ocfs2_claim_clusters:2412 ERROR: status = -30
(syz.8.9555,29574,1):__ocfs2_claim_clusters:2420 ERROR: status = -30
(syz.8.9555,29574,1):ocfs2_local_alloc_new_window:1199 ERROR: status = -30
(syz.8.9555,29574,1):ocfs2_local_alloc_new_window:1224 ERROR: status = -30
(syz.8.9555,29574,1):ocfs2_local_alloc_slide_window:1298 ERROR: status = -30
(syz.8.9555,29574,1):ocfs2_local_alloc_slide_window:1317 ERROR: status = -30
(syz.8.9555,29574,1):ocfs2_reserve_local_alloc_bits:672 ERROR: status = -30
(syz.8.9555,29574,1):ocfs2_reserve_local_alloc_bits:710 ERROR: status = -30
(syz.8.9555,29574,1):ocfs2_reserve_clusters_with_limit:1170 ERROR: status = -30
(syz.8.9555,29574,1):ocfs2_reserve_clusters_with_limit:1219 ERROR: status = -30
(syz.8.9555,29574,1):ocfs2_symlink:1924 ERROR: status = -30
(syz.8.9555,29574,1):ocfs2_symlink:2078 ERROR: status = -30
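
Note: the "Possible unsafe locking scenario" above is the classic ABBA lock-order inversion. One path holds the jbd2_handle and then wants &oi->ip_io_mutex, while the dependency chain recorded earlier (through &mm->mmap_lock and the block layer) establishes the opposite ordering. The sketch below is a hypothetical userspace analogy, not code from ocfs2 or jbd2: two pthread mutexes (lock_a and lock_b, invented names) acquired in opposite orders by two threads. In the kernel, lockdep flags the inconsistent ordering as soon as both orders have been observed, without the deadlock having to actually happen.

/*
 * abba_deadlock.c - userspace illustration of the lock-order inversion
 * pattern reported above. Hypothetical example, not kernel code:
 * thread_a takes lock_a then lock_b (analogous to the path that holds
 * the journal handle and then wants ip_io_mutex); thread_b takes them
 * in the opposite order (analogous to the chain that reaches the block
 * layer with ip_io_mutex held). If both threads win their first
 * acquisition, neither can make progress.
 *
 * Build: cc -pthread abba_deadlock.c -o abba_deadlock
 * The program is expected to hang; that is the point.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *thread_a(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock_a);          /* A */
	printf("thread_a: holds A, wants B\n");
	sleep(1);                             /* widen the race window */
	pthread_mutex_lock(&lock_b);          /* A -> B */
	printf("thread_a: holds A and B\n");
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

static void *thread_b(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock_b);          /* B */
	printf("thread_b: holds B, wants A\n");
	sleep(1);
	pthread_mutex_lock(&lock_a);          /* B -> A: inverted order */
	printf("thread_b: holds B and A\n");
	pthread_mutex_unlock(&lock_a);
	pthread_mutex_unlock(&lock_b);
	return NULL;
}

int main(void)
{
	pthread_t ta, tb;

	pthread_create(&ta, NULL, thread_a, NULL);
	pthread_create(&tb, NULL, thread_b, NULL);
	pthread_join(ta, NULL);
	pthread_join(tb, NULL);
	printf("done (only reached if the race was not hit)\n");
	return 0;
}

The usual remedy for this class of report is to establish a single acquisition order between the two locks, or to drop one lock before taking the other.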

Crashes (14):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/01/12 08:12 upstream b62cef9a5c67 6dbc6a9b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in __ocfs2_journal_access
2025/01/08 19:51 upstream 0b7958fa05d5 f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in __ocfs2_journal_access
2025/01/04 00:43 upstream 63676eefb7a0 f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in __ocfs2_journal_access
2024/12/30 23:37 upstream ccb98ccef0e5 d3ccff63 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in __ocfs2_journal_access
2024/12/15 12:07 upstream 2d8308bf5b67 7cbfbb3a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in __ocfs2_journal_access
2024/12/15 06:49 upstream a0e3919a2df2 7cbfbb3a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-badwrites-root possible deadlock in __ocfs2_journal_access
2024/12/15 00:55 upstream a0e3919a2df2 7cbfbb3a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-badwrites-root possible deadlock in __ocfs2_journal_access
2024/12/06 16:30 upstream b8f52214c61a 9ac0fdc6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in __ocfs2_journal_access
2024/12/05 22:31 upstream 5076001689e4 6e50d07b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in __ocfs2_journal_access
2024/12/03 07:15 upstream cdd30ebb1b9f 578925bc .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in __ocfs2_journal_access
2024/11/24 17:18 upstream 9f16d5e6f220 68da6d95 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in __ocfs2_journal_access
2024/11/22 17:24 upstream 28eb75e178d3 68da6d95 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-badwrites-root possible deadlock in __ocfs2_journal_access
2024/11/22 10:29 upstream 28eb75e178d3 4b25d554 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-badwrites-root possible deadlock in __ocfs2_journal_access
2024/11/22 07:46 upstream 28eb75e178d3 4b25d554 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in __ocfs2_journal_access
* Struck through repros no longer work on HEAD.