syzbot


possible deadlock in ocfs2_lock_global_qf

Status: upstream: reported C repro on 2024/10/03 18:26
Subsystems: ocfs2
Reported-by: syzbot+b53d753ae8fb473e2397@syzkaller.appspotmail.com
First crash: 473d, last: 1d16h
Discussions (8)
Title Replies (including bot) Last reply
[syzbot] Monthly ocfs2 report (Jan 2026) 0 (1) 2026/01/05 08:20
[syzbot] Monthly ocfs2 report (Dec 2025) 0 (1) 2025/12/04 23:06
[syzbot] Monthly ocfs2 report (Nov 2025) 0 (1) 2025/11/03 13:10
[syzbot] [ocfs2?] possible deadlock in ocfs2_lock_global_qf 0 (2) 2025/09/28 18:52
[syzbot] Monthly ocfs2 report (Sep 2025) 0 (1) 2025/09/02 09:20
[syzbot] Monthly ocfs2 report (Aug 2025) 0 (1) 2025/08/01 13:49
[syzbot] Monthly ocfs2 report (Jul 2025) 0 (1) 2025/07/01 10:01
[syzbot] Monthly ocfs2 report (May 2025) 0 (1) 2025/06/03 11:11
Similar bugs (3)
Kernel Title Rank Repro Cause bisect Fix bisect Count Last Reported Patched Status
linux-6.6 possible deadlock in ocfs2_lock_global_qf 4 1 175d 175d 0/2 auto-obsoleted due to no activity on 2025/11/01 13:37
linux-5.15 possible deadlock in ocfs2_lock_global_qf origin:lts-only 4 C done 299 2d09h 473d 0/3 upstream: reported C repro on 2024/09/29 22:07
linux-6.1 possible deadlock in ocfs2_lock_global_qf 4 264 6d23h 471d 0/3 upstream: reported on 2024/10/01 11:31
Last patch testing requests (1)
Created Duration User Patch Repo Result
2025/12/29 01:33 30m retest repro git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci OK log

Sample crash report:
ocfs2: Mounting device (7,7) on (node local, slot 0) with ordered data mode.
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.7.162/7496 is trying to acquire lock:
ffff88805b732950 (&ocfs2_quota_ip_alloc_sem_key){++++}-{4:4}, at: ocfs2_lock_global_qf+0x1e8/0x270 fs/ocfs2/quota_global.c:314

but task is already holding lock:
ffff88805b732d00 (&ocfs2_sysfile_lock_key[USER_QUOTA_SYSTEM_INODE]){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
ffff88805b732d00 (&ocfs2_sysfile_lock_key[USER_QUOTA_SYSTEM_INODE]){+.+.}-{4:4}, at: ocfs2_lock_global_qf+0x1ca/0x270 fs/ocfs2/quota_global.c:313

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #4 (&ocfs2_sysfile_lock_key[USER_QUOTA_SYSTEM_INODE]){+.+.}-{4:4}:
       down_write+0x3a/0x50 kernel/locking/rwsem.c:1590
       inode_lock include/linux/fs.h:1027 [inline]
       ocfs2_lock_global_qf+0x1ca/0x270 fs/ocfs2/quota_global.c:313
       ocfs2_acquire_dquot+0x2a0/0xb10 fs/ocfs2/quota_global.c:828
       dqget+0x7b6/0xf10 fs/quota/dquot.c:980
       __dquot_initialize+0x3b3/0xcb0 fs/quota/dquot.c:1508
       ocfs2_get_init_inode+0x13b/0x1b0 fs/ocfs2/namei.c:206
       ocfs2_mknod+0x858/0x2030 fs/ocfs2/namei.c:314
       ocfs2_mkdir+0x181/0x420 fs/ocfs2/namei.c:660
       vfs_mkdir+0x52d/0x5d0 fs/namei.c:5139
       do_mkdirat+0x27a/0x4b0 fs/namei.c:5173
       __do_sys_mkdirat fs/namei.c:5195 [inline]
       __se_sys_mkdirat fs/namei.c:5193 [inline]
       __x64_sys_mkdirat+0x87/0xa0 fs/namei.c:5193
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xec/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #3 (&dquot->dq_lock){+.+.}-{4:4}:
       __mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
       mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
       wait_on_dquot fs/quota/dquot.c:357 [inline]
       dqget+0x72f/0xf10 fs/quota/dquot.c:975
       __dquot_initialize+0x3b3/0xcb0 fs/quota/dquot.c:1508
       ocfs2_get_init_inode+0x13b/0x1b0 fs/ocfs2/namei.c:206
       ocfs2_mknod+0x858/0x2030 fs/ocfs2/namei.c:314
       ocfs2_mkdir+0x181/0x420 fs/ocfs2/namei.c:660
       vfs_mkdir+0x52d/0x5d0 fs/namei.c:5139
       do_mkdirat+0x27a/0x4b0 fs/namei.c:5173
       __do_sys_mkdirat fs/namei.c:5195 [inline]
       __se_sys_mkdirat fs/namei.c:5193 [inline]
       __x64_sys_mkdirat+0x87/0xa0 fs/namei.c:5193
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xec/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #2 (&ocfs2_sysfile_lock_key[INODE_ALLOC_SYSTEM_INODE]){+.+.}-{4:4}:
       down_write+0x3a/0x50 kernel/locking/rwsem.c:1590
       inode_lock include/linux/fs.h:1027 [inline]
       ocfs2_remove_inode fs/ocfs2/inode.c:733 [inline]
       ocfs2_wipe_inode fs/ocfs2/inode.c:896 [inline]
       ocfs2_delete_inode fs/ocfs2/inode.c:1157 [inline]
       ocfs2_evict_inode+0x1507/0x4020 fs/ocfs2/inode.c:1299
       evict+0x5f4/0xae0 fs/inode.c:837
       ocfs2_dentry_iput+0x247/0x370 fs/ocfs2/dcache.c:407
       __dentry_kill+0x209/0x660 fs/dcache.c:670
       finish_dput+0xc9/0x480 fs/dcache.c:879
       end_renaming fs/namei.c:4061 [inline]
       do_renameat2+0x604/0x8f0 fs/namei.c:6058
       __do_sys_rename fs/namei.c:6099 [inline]
       __se_sys_rename fs/namei.c:6097 [inline]
       __x64_sys_rename+0x82/0x90 fs/namei.c:6097
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xec/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&ocfs2_sysfile_lock_key[ORPHAN_DIR_SYSTEM_INODE]){+.+.}-{4:4}:
       down_write+0x3a/0x50 kernel/locking/rwsem.c:1590
       inode_lock include/linux/fs.h:1027 [inline]
       ocfs2_del_inode_from_orphan+0x134/0x740 fs/ocfs2/namei.c:2731
       ocfs2_dio_end_io_write fs/ocfs2/aops.c:2306 [inline]
       ocfs2_dio_end_io+0x47b/0x1100 fs/ocfs2/aops.c:2404
       dio_complete+0x25e/0x790 fs/direct-io.c:281
       __blockdev_direct_IO+0x2bc0/0x31f0 fs/direct-io.c:1303
       ocfs2_direct_IO+0x260/0x2d0 fs/ocfs2/aops.c:2441
       generic_file_direct_write+0x1dc/0x3e0 mm/filemap.c:4248
       __generic_file_write_iter+0x120/0x240 mm/filemap.c:4417
       ocfs2_file_write_iter+0x1585/0x1d00 fs/ocfs2/file.c:2475
       iter_file_splice_write+0x977/0x10b0 fs/splice.c:738
       do_splice_from fs/splice.c:938 [inline]
       direct_splice_actor+0x104/0x160 fs/splice.c:1161
       splice_direct_to_actor+0x5b3/0xcd0 fs/splice.c:1105
       do_splice_direct_actor fs/splice.c:1204 [inline]
       do_splice_direct+0x187/0x270 fs/splice.c:1230
       do_sendfile+0x4ec/0x7f0 fs/read_write.c:1370
       __do_sys_sendfile64 fs/read_write.c:1431 [inline]
       __se_sys_sendfile64+0x13e/0x190 fs/read_write.c:1417
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xec/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&ocfs2_quota_ip_alloc_sem_key){++++}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3165 [inline]
       check_prevs_add kernel/locking/lockdep.c:3284 [inline]
       validate_chain kernel/locking/lockdep.c:3908 [inline]
       __lock_acquire+0x15a6/0x2cf0 kernel/locking/lockdep.c:5237
       lock_acquire+0x107/0x340 kernel/locking/lockdep.c:5868
       down_write+0x3a/0x50 kernel/locking/rwsem.c:1590
       ocfs2_lock_global_qf+0x1e8/0x270 fs/ocfs2/quota_global.c:314
       ocfs2_acquire_dquot+0x2a0/0xb10 fs/ocfs2/quota_global.c:828
       dqget+0x7b6/0xf10 fs/quota/dquot.c:980
       __dquot_initialize+0x3b3/0xcb0 fs/quota/dquot.c:1508
       ocfs2_get_init_inode+0x13b/0x1b0 fs/ocfs2/namei.c:206
       ocfs2_mknod+0x858/0x2030 fs/ocfs2/namei.c:314
       ocfs2_mkdir+0x181/0x420 fs/ocfs2/namei.c:660
       vfs_mkdir+0x52d/0x5d0 fs/namei.c:5139
       do_mkdirat+0x27a/0x4b0 fs/namei.c:5173
       __do_sys_mkdirat fs/namei.c:5195 [inline]
       __se_sys_mkdirat fs/namei.c:5193 [inline]
       __x64_sys_mkdirat+0x87/0xa0 fs/namei.c:5193
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xec/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  &ocfs2_quota_ip_alloc_sem_key --> &dquot->dq_lock --> &ocfs2_sysfile_lock_key[USER_QUOTA_SYSTEM_INODE]

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&ocfs2_sysfile_lock_key[USER_QUOTA_SYSTEM_INODE]);
                               lock(&dquot->dq_lock);
                               lock(&ocfs2_sysfile_lock_key[USER_QUOTA_SYSTEM_INODE]);
  lock(&ocfs2_quota_ip_alloc_sem_key);

 *** DEADLOCK ***
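The scenario above is a classic lock-order inversion: one path takes the USER_QUOTA sysfile inode lock before the quota ip_alloc_sem, while the recorded dependency chain already orders them the other way round (via dq_lock). Lockdep catches this by recording "held -> wanted" edges and rejecting any acquisition that would close a cycle. A minimal user-space sketch of that check (not kernel code; lock names are abbreviated from the report, and the tracker itself is purely illustrative):

```python
# Toy model of lockdep's circular-dependency check: record which locks
# are taken while others are held, and flag the first acquisition that
# would close a cycle in that order graph.
from collections import defaultdict

class LockOrderTracker:
    def __init__(self):
        # after[a] = set of locks ever acquired while `a` was held
        self.after = defaultdict(set)

    def acquire(self, held, wanted):
        """Record taking `wanted` while `held` are held.
        Returns True if this would invert an already-recorded order."""
        for h in held:
            # Cycle: `wanted` already (transitively) precedes `h`.
            if self._reaches(wanted, h):
                return True
            self.after[h].add(wanted)
        return False

    def _reaches(self, src, dst, seen=None):
        seen = seen or set()
        if src == dst:
            return True
        seen.add(src)
        return any(self._reaches(n, dst, seen)
                   for n in self.after[src] - seen)

t = LockOrderTracker()
# Edges from the report's dependency chain (abbreviated):
t.acquire(["dq_lock"], "USER_QUOTA_sysfile")       # dq_lock -> sysfile
t.acquire(["quota_ip_alloc_sem"], "dq_lock")       # ip_alloc_sem -> dq_lock
# The acquisition lockdep flags: ip_alloc_sem wanted with sysfile held,
# which closes the cycle sysfile -> ip_alloc_sem -> dq_lock -> sysfile.
bad = t.acquire(["USER_QUOTA_sysfile"], "quota_ip_alloc_sem")
print(bad)  # True
```

The real lockdep works on lock *classes* (note the per-key classes like `ocfs2_sysfile_lock_key[...]` above), so a single inverted ordering anywhere in the system is enough to trigger the report even if the two paths never race in practice.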

5 locks held by syz.7.162/7496:
 #0: ffff8880284ec480 (sb_writers#22){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff88804c227000 (&type->i_mutex_dir_key#16/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff88804c227000 (&type->i_mutex_dir_key#16/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2873 [inline]
 #1: ffff88804c227000 (&type->i_mutex_dir_key#16/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2884 [inline]
 #1: ffff88804c227000 (&type->i_mutex_dir_key#16/1){+.+.}-{4:4}, at: filename_create+0x1fb/0x360 fs/namei.c:4888
 #2: ffff88805b7adf40 (&ocfs2_sysfile_lock_key[INODE_ALLOC_SYSTEM_INODE]){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #2: ffff88805b7adf40 (&ocfs2_sysfile_lock_key[INODE_ALLOC_SYSTEM_INODE]){+.+.}-{4:4}, at: ocfs2_reserve_suballoc_bits+0x164/0x4600 fs/ocfs2/suballoc.c:789
 #3: ffff88804c36a098 (&dquot->dq_lock){+.+.}-{4:4}, at: ocfs2_acquire_dquot+0x293/0xb10 fs/ocfs2/quota_global.c:823
 #4: ffff88805b732d00 (&ocfs2_sysfile_lock_key[USER_QUOTA_SYSTEM_INODE]){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #4: ffff88805b732d00 (&ocfs2_sysfile_lock_key[USER_QUOTA_SYSTEM_INODE]){+.+.}-{4:4}, at: ocfs2_lock_global_qf+0x1ca/0x270 fs/ocfs2/quota_global.c:313

stack backtrace:
CPU: 0 UID: 0 PID: 7496 Comm: syz.7.162 Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 print_circular_bug+0x2e2/0x300 kernel/locking/lockdep.c:2043
 check_noncircular+0x12e/0x150 kernel/locking/lockdep.c:2175
 check_prev_add kernel/locking/lockdep.c:3165 [inline]
 check_prevs_add kernel/locking/lockdep.c:3284 [inline]
 validate_chain kernel/locking/lockdep.c:3908 [inline]
 __lock_acquire+0x15a6/0x2cf0 kernel/locking/lockdep.c:5237
 lock_acquire+0x107/0x340 kernel/locking/lockdep.c:5868
 down_write+0x3a/0x50 kernel/locking/rwsem.c:1590
 ocfs2_lock_global_qf+0x1e8/0x270 fs/ocfs2/quota_global.c:314
 ocfs2_acquire_dquot+0x2a0/0xb10 fs/ocfs2/quota_global.c:828
 dqget+0x7b6/0xf10 fs/quota/dquot.c:980
 __dquot_initialize+0x3b3/0xcb0 fs/quota/dquot.c:1508
 ocfs2_get_init_inode+0x13b/0x1b0 fs/ocfs2/namei.c:206
 ocfs2_mknod+0x858/0x2030 fs/ocfs2/namei.c:314
 ocfs2_mkdir+0x181/0x420 fs/ocfs2/namei.c:660
 vfs_mkdir+0x52d/0x5d0 fs/namei.c:5139
 do_mkdirat+0x27a/0x4b0 fs/namei.c:5173
 __do_sys_mkdirat fs/namei.c:5195 [inline]
 __se_sys_mkdirat fs/namei.c:5193 [inline]
 __x64_sys_mkdirat+0x87/0xa0 fs/namei.c:5193
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xec/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f40c116de97
Code: 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 b8 02 01 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f40bf3d5e68 EFLAGS: 00000246 ORIG_RAX: 0000000000000102
RAX: ffffffffffffffda RBX: 00007f40bf3d5ef0 RCX: 00007f40c116de97
RDX: 00000000000001ff RSI: 0000200000000000 RDI: 00000000ffffff9c
RBP: 0000200000000080 R08: 0000200000000000 R09: 0000000000000000
R10: 0000200000000080 R11: 0000000000000246 R12: 0000200000000000
R13: 00007f40bf3d5eb0 R14: 0000000000000000 R15: 0000000000000000
 </TASK>
(syz.7.162,7496,0):ocfs2_get_suballoc_slot_bit:2830 ERROR: invalid inode 1 requested
(syz.7.162,7496,0):ocfs2_get_suballoc_slot_bit:2855 ERROR: status = -22
(syz.7.162,7496,0):ocfs2_test_inode_bit:2937 ERROR: get alloc slot and bit failed -22
(syz.7.162,7496,0):ocfs2_test_inode_bit:2978 ERROR: status = -22

Crashes (3991):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/01/14 09:10 upstream c537e12daeec d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2026/01/13 06:44 upstream b71e635feefc d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/12/06 00:22 upstream d1d36025a617 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/12/04 01:57 upstream 3f9f0252130e d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/12/03 22:28 upstream 3f9f0252130e d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/12/03 20:44 upstream 3f9f0252130e d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/12/03 17:34 upstream 3f9f0252130e d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/12/03 15:55 upstream 3f9f0252130e d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/12/03 08:33 upstream d61f1cc5db79 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/12/03 07:28 upstream d61f1cc5db79 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/12/03 04:44 upstream d61f1cc5db79 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/12/02 21:31 upstream 4a26e7032d7d d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/12/02 18:32 upstream 4a26e7032d7d d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/12/02 16:30 upstream 4a26e7032d7d d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/12/02 13:40 upstream 4a26e7032d7d d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/12/02 04:40 upstream 1d18101a644e d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/12/02 02:43 upstream 1d18101a644e d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/12/02 00:12 upstream 1d18101a644e d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/12/01 20:06 upstream 7d0a66e4bb90 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/12/01 17:57 upstream 7d0a66e4bb90 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/12/01 16:03 upstream 7d0a66e4bb90 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/11/30 22:45 upstream e69c7c175115 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/11/30 12:07 upstream 6bda50f4333f d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/11/30 11:55 upstream 6bda50f4333f d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/11/30 09:10 upstream 6bda50f4333f d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/11/30 08:08 upstream 6bda50f4333f d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/11/30 06:39 upstream 6bda50f4333f d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2025/11/28 19:14 upstream 4331989728da d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in ocfs2_lock_global_qf
2025/11/01 07:55 upstream ba36dd5ee6fd 2c50b6a9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-badwrites-root possible deadlock in ocfs2_lock_global_qf
2024/10/03 17:15 upstream 7ec462100ef9 d7906eff .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2024/09/29 18:22 upstream e7ed34365879 ba29ff75 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in ocfs2_lock_global_qf
2026/01/04 04:34 linux-next cc3aa43b44bd d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in ocfs2_lock_global_qf
2025/12/15 01:23 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 05c93f3395ed d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in ocfs2_lock_global_qf
2025/12/14 03:18 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 05c93f3395ed d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in ocfs2_lock_global_qf
2025/12/13 10:10 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 05c93f3395ed d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in ocfs2_lock_global_qf
2025/12/10 22:40 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 05c93f3395ed d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in ocfs2_lock_global_qf
2025/12/10 00:11 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 05c93f3395ed d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in ocfs2_lock_global_qf
2025/12/09 16:09 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 05c93f3395ed d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in ocfs2_lock_global_qf
2025/12/09 04:52 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 05c93f3395ed d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in ocfs2_lock_global_qf
2025/12/09 02:03 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 05c93f3395ed d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in ocfs2_lock_global_qf
2025/12/08 21:50 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 05c93f3395ed d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in ocfs2_lock_global_qf
2025/12/08 19:54 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 05c93f3395ed d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in ocfs2_lock_global_qf
2025/12/08 16:47 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 05c93f3395ed d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in ocfs2_lock_global_qf
2025/12/07 02:52 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 05c93f3395ed d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in ocfs2_lock_global_qf
2025/12/05 20:55 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 05c93f3395ed d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in ocfs2_lock_global_qf
2025/12/05 02:05 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 05c93f3395ed d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in ocfs2_lock_global_qf
2025/12/03 05:16 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 05c93f3395ed d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in ocfs2_lock_global_qf
2025/11/30 10:26 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 05c93f3395ed d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in ocfs2_lock_global_qf
2025/09/28 18:51 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 2213e57a69f0 001c9061 .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro #1 (clean fs)] [mounted in repro #2 (clean fs)] ci-upstream-gce-arm64 possible deadlock in ocfs2_lock_global_qf
* Struck through repros no longer work on HEAD.