ocfs2: Finishing quota recovery on device (7,7) for slot 0
======================================================
WARNING: possible circular locking dependency detected
6.14.0-rc2-syzkaller-00185-g128c8f96eb86 #0 Not tainted
------------------------------------------------------
kworker/u8:23/32536 is trying to acquire lock:
ffff8880579620e0 (&type->s_umount_key#77){++++}-{4:4}, at: ocfs2_finish_quota_recovery+0xeb/0xcd0 fs/ocfs2/quota_local.c:603

but task is already holding lock:
ffffc9000bb7fd18 ((work_completion)(&journal->j_recovery_work)){+.+.}-{0:0}, at: process_one_work+0x921/0x1ba0 kernel/workqueue.c:3212

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 ((work_completion)(&journal->j_recovery_work)){+.+.}-{0:0}:
       process_one_work+0x927/0x1ba0 kernel/workqueue.c:3212
       process_scheduled_works kernel/workqueue.c:3317 [inline]
       worker_thread+0x6c8/0xf00 kernel/workqueue.c:3398
       kthread+0x3b2/0x750 kernel/kthread.c:464
       ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:148
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

-> #1 ((wq_completion)ocfs2_wq){+.+.}-{0:0}:
       touch_wq_lockdep_map+0xad/0x1c0 kernel/workqueue.c:3905
       __flush_workqueue+0x129/0x1260 kernel/workqueue.c:3947
       ocfs2_shutdown_local_alloc+0xbe/0xa10 fs/ocfs2/localalloc.c:380
       ocfs2_dismount_volume+0x1f1/0xa00 fs/ocfs2/super.c:1822
       generic_shutdown_super+0x156/0x390 fs/super.c:642
       kill_block_super+0x3b/0x90 fs/super.c:1710
       deactivate_locked_super+0xc1/0x1a0 fs/super.c:473
       deactivate_super+0xde/0x100 fs/super.c:506
       cleanup_mnt+0x222/0x450 fs/namespace.c:1413
       task_work_run+0x151/0x250 kernel/task_work.c:227
       resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
       exit_to_user_mode_loop kernel/entry/common.c:114 [inline]
       exit_to_user_mode_prepare include/linux/entry-common.h:329 [inline]
       __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
       syscall_exit_to_user_mode+0x27b/0x2a0 kernel/entry/common.c:218
       do_syscall_64+0xda/0x250 arch/x86/entry/common.c:89
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&type->s_umount_key#77){++++}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3163 [inline]
       check_prevs_add kernel/locking/lockdep.c:3282 [inline]
       validate_chain kernel/locking/lockdep.c:3906 [inline]
       __lock_acquire+0x249e/0x3c40 kernel/locking/lockdep.c:5228
       lock_acquire.part.0+0x11b/0x380 kernel/locking/lockdep.c:5851
       down_read+0x9a/0x330 kernel/locking/rwsem.c:1524
       ocfs2_finish_quota_recovery+0xeb/0xcd0 fs/ocfs2/quota_local.c:603
       ocfs2_complete_recovery+0x302/0xf90 fs/ocfs2/journal.c:1357
       process_one_work+0x9c8/0x1ba0 kernel/workqueue.c:3236
       process_scheduled_works kernel/workqueue.c:3317 [inline]
       worker_thread+0x6c8/0xf00 kernel/workqueue.c:3398
       kthread+0x3b2/0x750 kernel/kthread.c:464
       ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:148
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

other info that might help us debug this:

Chain exists of:
  &type->s_umount_key#77 --> (wq_completion)ocfs2_wq --> (work_completion)(&journal->j_recovery_work)

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock((work_completion)(&journal->j_recovery_work));
                               lock((wq_completion)ocfs2_wq);
                               lock((work_completion)(&journal->j_recovery_work));
  rlock(&type->s_umount_key#77);

 *** DEADLOCK ***

2 locks held by kworker/u8:23/32536:
 #0: ffff888027a08148 ((wq_completion)ocfs2_wq#2){+.+.}-{0:0}, at: process_one_work+0x1293/0x1ba0 kernel/workqueue.c:3211
 #1: ffffc9000bb7fd18 ((work_completion)(&journal->j_recovery_work)){+.+.}-{0:0}, at: process_one_work+0x921/0x1ba0 kernel/workqueue.c:3212

stack backtrace:
CPU: 1 UID: 0 PID: 32536 Comm: kworker/u8:23 Not tainted 6.14.0-rc2-syzkaller-00185-g128c8f96eb86 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Workqueue: ocfs2_wq ocfs2_complete_recovery
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 print_circular_bug+0x490/0x760 kernel/locking/lockdep.c:2076
 check_noncircular+0x31a/0x400 kernel/locking/lockdep.c:2208
 check_prev_add kernel/locking/lockdep.c:3163 [inline]
 check_prevs_add kernel/locking/lockdep.c:3282 [inline]
 validate_chain kernel/locking/lockdep.c:3906 [inline]
 __lock_acquire+0x249e/0x3c40 kernel/locking/lockdep.c:5228
 lock_acquire.part.0+0x11b/0x380 kernel/locking/lockdep.c:5851
 down_read+0x9a/0x330 kernel/locking/rwsem.c:1524
 ocfs2_finish_quota_recovery+0xeb/0xcd0 fs/ocfs2/quota_local.c:603
 ocfs2_complete_recovery+0x302/0xf90 fs/ocfs2/journal.c:1357
 process_one_work+0x9c8/0x1ba0 kernel/workqueue.c:3236
 process_scheduled_works kernel/workqueue.c:3317 [inline]
 worker_thread+0x6c8/0xf00 kernel/workqueue.c:3398
 kthread+0x3b2/0x750 kernel/kthread.c:464
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
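The cycle lockdep reports here is: the umount path takes s_umount and then flushes ocfs2_wq (s_umount --> ocfs2_wq); flushing the workqueue waits for j_recovery_work to complete (ocfs2_wq --> j_recovery_work); and the recovery work itself takes s_umount in ocfs2_finish_quota_recovery (j_recovery_work --> s_umount), closing the loop. As a minimal userspace sketch (not kernel code), the three edges from the report can be modeled as a directed graph and the cycle found with a plain DFS, which is the same idea behind lockdep's check_noncircular(); the lock names are taken from the report, the helper is illustrative only.

```python
# Model the three lock-order edges from the lockdep report and detect the
# cycle. "A -> B" means: a task held A while acquiring (or waiting on) B.
S_UMOUNT = "&type->s_umount_key#77"
WQ = "(wq_completion)ocfs2_wq"
WORK = "(work_completion)(&journal->j_recovery_work)"

edges = {
    S_UMOUNT: [WQ],    # ocfs2_dismount_volume: holds s_umount, flushes ocfs2_wq
    WQ: [WORK],        # __flush_workqueue: waits on j_recovery_work
    WORK: [S_UMOUNT],  # ocfs2_finish_quota_recovery: takes s_umount in the work
}

def find_cycle(graph):
    """Return one lock-dependency cycle as a list of nodes, or None."""
    def dfs(node, path):
        if node in path:
            return path[path.index(node):]  # closed the loop
        for nxt in graph.get(node, []):
            cycle = dfs(nxt, path + [node])
            if cycle:
                return cycle
        return None
    for start in graph:
        cycle = dfs(start, [])
        if cycle:
            return cycle
    return None

print(" --> ".join(find_cycle(edges)))
```

Walking the graph from s_umount reproduces the "Chain exists of:" line in the report; breaking any one of the three edges (e.g. not taking s_umount from inside the recovery work) makes `find_cycle` return None, which is the shape a fix for this report would take.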