syzbot

possible deadlock in ocfs2_finish_quota_recovery

Status: upstream: reported on 2025/02/02 09:01
Subsystems: ocfs2
Reported-by: syzbot+f59a1ae7b7227c859b8f@syzkaller.appspotmail.com
First crash: 6d21h, last: 3h55m
Discussions (1)
Title                                                              | Replies (incl. bot) | Last reply
[syzbot] [ocfs2?] possible deadlock in ocfs2_finish_quota_recovery | 0 (1)               | 2025/02/02 09:01
Similar bugs (2)
Kernel     | Title                                            | Repro | Cause bisect | Fix bisect | Count | Last  | Reported | Patched | Status
linux-6.1  | possible deadlock in ocfs2_finish_quota_recovery | C     |              |            | 2     | 4d00h | 4d14h    | 0/3     | upstream: reported C repro on 2025/01/31 16:27
linux-5.15 | possible deadlock in ocfs2_finish_quota_recovery |       |              |            | 2     | 3d11h | 6d04h    | 0/3     | upstream: reported on 2025/01/30 02:34

Sample crash report:
ocfs2: Finishing quota recovery on device (7,5) for slot 0
======================================================
WARNING: possible circular locking dependency detected
6.14.0-rc1-syzkaller-00026-gd009de7d5428 #0 Not tainted
------------------------------------------------------
kworker/u8:6/1143 is trying to acquire lock:
ffff888140e820e0 (&type->s_umount_key#64){++++}-{4:4}, at: ocfs2_finish_quota_recovery+0x15c/0x22a0 fs/ocfs2/quota_local.c:603

but task is already holding lock:
ffffc90003eafc60 ((work_completion)(&journal->j_recovery_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
ffffc90003eafc60 ((work_completion)(&journal->j_recovery_work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 ((work_completion)(&journal->j_recovery_work)){+.+.}-{0:0}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5851
       process_one_work kernel/workqueue.c:3212 [inline]
       process_scheduled_works+0x994/0x1840 kernel/workqueue.c:3317
       worker_thread+0x870/0xd30 kernel/workqueue.c:3398
       kthread+0x7a9/0x920 kernel/kthread.c:464
       ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

-> #1 ((wq_completion)ocfs2_wq#2){+.+.}-{0:0}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5851
       touch_wq_lockdep_map+0xc7/0x170 kernel/workqueue.c:3905
       __flush_workqueue+0x14a/0x1280 kernel/workqueue.c:3947
       ocfs2_shutdown_local_alloc+0x109/0xa90 fs/ocfs2/localalloc.c:380
       ocfs2_dismount_volume+0x202/0x910 fs/ocfs2/super.c:1822
       generic_shutdown_super+0x139/0x2d0 fs/super.c:642
       kill_block_super+0x44/0x90 fs/super.c:1710
       deactivate_locked_super+0xc4/0x130 fs/super.c:473
       cleanup_mnt+0x41f/0x4b0 fs/namespace.c:1413
       task_work_run+0x24f/0x310 kernel/task_work.c:227
       resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
       exit_to_user_mode_loop kernel/entry/common.c:114 [inline]
       exit_to_user_mode_prepare include/linux/entry-common.h:329 [inline]
       __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
       syscall_exit_to_user_mode+0x13f/0x340 kernel/entry/common.c:218
       do_syscall_64+0x100/0x230 arch/x86/entry/common.c:89
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&type->s_umount_key#64){++++}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3163 [inline]
       check_prevs_add kernel/locking/lockdep.c:3282 [inline]
       validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3906
       __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5228
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5851
       down_read+0xb1/0xa40 kernel/locking/rwsem.c:1524
       ocfs2_finish_quota_recovery+0x15c/0x22a0 fs/ocfs2/quota_local.c:603
       ocfs2_complete_recovery+0x17c1/0x25c0 fs/ocfs2/journal.c:1357
       process_one_work kernel/workqueue.c:3236 [inline]
       process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3317
       worker_thread+0x870/0xd30 kernel/workqueue.c:3398
       kthread+0x7a9/0x920 kernel/kthread.c:464
       ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

other info that might help us debug this:

Chain exists of:
  &type->s_umount_key#64 --> (wq_completion)ocfs2_wq#2 --> (work_completion)(&journal->j_recovery_work)

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock((work_completion)(&journal->j_recovery_work));
                               lock((wq_completion)ocfs2_wq#2);
                               lock((work_completion)(&journal->j_recovery_work));
  rlock(&type->s_umount_key#64);

 *** DEADLOCK ***

2 locks held by kworker/u8:6/1143:
 #0: ffff888025ab0148 ((wq_completion)ocfs2_wq#2){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
 #0: ffff888025ab0148 ((wq_completion)ocfs2_wq#2){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3317
 #1: ffffc90003eafc60 ((work_completion)(&journal->j_recovery_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
 #1: ffffc90003eafc60 ((work_completion)(&journal->j_recovery_work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317

stack backtrace:
CPU: 0 UID: 0 PID: 1143 Comm: kworker/u8:6 Not tainted 6.14.0-rc1-syzkaller-00026-gd009de7d5428 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Workqueue: ocfs2_wq ocfs2_complete_recovery
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_circular_bug+0x13a/0x1b0 kernel/locking/lockdep.c:2076
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2208
 check_prev_add kernel/locking/lockdep.c:3163 [inline]
 check_prevs_add kernel/locking/lockdep.c:3282 [inline]
 validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3906
 __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5228
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5851
 down_read+0xb1/0xa40 kernel/locking/rwsem.c:1524
 ocfs2_finish_quota_recovery+0x15c/0x22a0 fs/ocfs2/quota_local.c:603
 ocfs2_complete_recovery+0x17c1/0x25c0 fs/ocfs2/journal.c:1357
 process_one_work kernel/workqueue.c:3236 [inline]
 process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3317
 worker_thread+0x870/0xd30 kernel/workqueue.c:3398
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
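
The cycle lockdep reports reduces to two paths: the ocfs2_wq worker runs j_recovery_work and, inside ocfs2_finish_quota_recovery, takes s_umount for reading; meanwhile the umount path already holds s_umount for write and flushes ocfs2_wq, i.e. waits for that same work item to finish. The following is a minimal userspace pthreads model of that interleaving, not kernel code: the lock names mirror the report purely for illustration, and the mutex/condvar pair stands in for the work-completion dependency that lockdep tracks as a pseudo-lock.

/* Userspace sketch of the reported cycle (illustrative assumption:
 * a mutex+condvar models "flush waits for the work to complete"). */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t s_umount = PTHREAD_RWLOCK_INITIALIZER; /* &type->s_umount_key */
static pthread_mutex_t  lk  = PTHREAD_MUTEX_INITIALIZER;       /* work-completion state */
static pthread_cond_t   cv  = PTHREAD_COND_INITIALIZER;
static int work_finished;

/* ocfs2_wq worker: runs journal->j_recovery_work, then
 * ocfs2_finish_quota_recovery() takes s_umount for reading. */
static void *recovery_worker(void *arg)
{
    pthread_rwlock_rdlock(&s_umount);   /* blocks: umount holds it for write */
    pthread_rwlock_unlock(&s_umount);

    pthread_mutex_lock(&lk);            /* never reached in this interleaving */
    work_finished = 1;
    pthread_cond_signal(&cv);
    pthread_mutex_unlock(&lk);
    return NULL;
}

int main(void)
{
    pthread_t t;

    /* umount path: deactivate_locked_super() holds s_umount for write... */
    pthread_rwlock_wrlock(&s_umount);

    pthread_create(&t, NULL, recovery_worker, NULL);

    /* ...then ocfs2_shutdown_local_alloc() -> __flush_workqueue():
     * wait for the recovery work while still holding s_umount. */
    pthread_mutex_lock(&lk);
    while (!work_finished)
        pthread_cond_wait(&cv, &lk);    /* deadlock: worker waits on us */
    pthread_mutex_unlock(&lk);

    pthread_rwlock_unlock(&s_umount);
    pthread_join(t, NULL);
    puts("flush returned (unreachable in the bad interleaving)");
    return 0;
}

Compiled with cc -pthread, this hangs deterministically, mirroring the splat. Breaking any edge of the cycle would avoid it, e.g. completing or cancelling the recovery work before s_umount is taken on the umount path, or not acquiring s_umount from the workqueue context; which edge the eventual fix breaks is up to the maintainers.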

Crashes (14):
Time             | Kernel                                                                      | Commit       | Syzkaller | Manager                   | Title
2025/02/05 02:47 | upstream                                                                    | d009de7d5428 | 5896748e  | ci2-upstream-fs           | possible deadlock in ocfs2_finish_quota_recovery
2025/02/02 19:05 | upstream                                                                    | 69b8923f5003 | 568559e4  | ci2-upstream-fs           | possible deadlock in ocfs2_finish_quota_recovery
2025/02/02 12:50 | upstream                                                                    | 69b8923f5003 | 568559e4  | ci2-upstream-fs           | possible deadlock in ocfs2_finish_quota_recovery
2025/02/01 21:55 | upstream                                                                    | 69b8923f5003 | 568559e4  | ci2-upstream-fs           | possible deadlock in ocfs2_finish_quota_recovery
2025/02/01 15:38 | upstream                                                                    | 69b8923f5003 | aa47157c  | ci2-upstream-fs           | possible deadlock in ocfs2_finish_quota_recovery
2025/02/01 13:28 | upstream                                                                    | 69b8923f5003 | aa47157c  | ci2-upstream-fs           | possible deadlock in ocfs2_finish_quota_recovery
2025/02/01 12:45 | upstream                                                                    | 69b8923f5003 | aa47157c  | ci2-upstream-fs           | possible deadlock in ocfs2_finish_quota_recovery
2025/02/01 05:07 | upstream                                                                    | 69b8923f5003 | aa47157c  | ci2-upstream-fs           | possible deadlock in ocfs2_finish_quota_recovery
2025/01/30 13:16 | upstream                                                                    | 805ba04cb7cc | afe4eff5  | ci2-upstream-fs           | possible deadlock in ocfs2_finish_quota_recovery
2025/01/30 12:33 | upstream                                                                    | 805ba04cb7cc | afe4eff5  | ci2-upstream-fs           | possible deadlock in ocfs2_finish_quota_recovery
2025/02/04 06:19 | upstream                                                                    | 0de63bb7d919 | 8f267cef  | ci-snapshot-upstream-root | possible deadlock in ocfs2_finish_quota_recovery
2025/02/03 02:55 | upstream                                                                    | 69e858e0b8b2 | 568559e4  | ci-snapshot-upstream-root | possible deadlock in ocfs2_finish_quota_recovery
2025/01/29 08:56 | upstream                                                                    | 805ba04cb7cc | 865ef71e  | ci-snapshot-upstream-root | possible deadlock in ocfs2_finish_quota_recovery
2025/01/29 13:27 | git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci | 1950a0af2d55 | 865ef71e  | ci-upstream-gce-arm64     | possible deadlock in ocfs2_finish_quota_recovery