======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
kworker/u8:5/988 is trying to acquire lock:
ffff88805eedd828 (&HFS_I(tree->inode)->extents_lock){+.+.}-{4:4}, at: hfs_extend_file+0xf2/0x15e0 fs/hfs/extent.c:397

but task is already holding lock:
ffff888063fb40a0 (&tree->tree_lock#2/1){+.+.}-{4:4}, at: hfs_find_init+0x18e/0x300 fs/hfs/bfind.c:-1

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&tree->tree_lock#2/1){+.+.}-{4:4}:
       __mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
       mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
       hfs_find_init+0x18e/0x300 fs/hfs/bfind.c:-1
       hfs_ext_read_extent fs/hfs/extent.c:200 [inline]
       hfs_extend_file+0x35c/0x15e0 fs/hfs/extent.c:401
       hfs_bmap_reserve+0x107/0x430 fs/hfs/btree.c:269
       hfs_cat_create+0x20f/0x800 fs/hfs/catalog.c:104
       hfs_create+0x75/0xe0 fs/hfs/dir.c:202
       lookup_open fs/namei.c:4483 [inline]
       open_last_lookups fs/namei.c:4583 [inline]
       path_openat+0x13b4/0x38a0 fs/namei.c:4827
       do_file_open+0x23e/0x4a0 fs/namei.c:4859
       do_sys_openat2+0x113/0x200 fs/open.c:1366
       do_sys_open fs/open.c:1372 [inline]
       __do_sys_openat fs/open.c:1388 [inline]
       __se_sys_openat fs/open.c:1383 [inline]
       __x64_sys_openat+0x138/0x170 fs/open.c:1383
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&HFS_I(tree->inode)->extents_lock){+.+.}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3165 [inline]
       check_prevs_add kernel/locking/lockdep.c:3284 [inline]
       validate_chain kernel/locking/lockdep.c:3908 [inline]
       __lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
       lock_acquire+0x106/0x330 kernel/locking/lockdep.c:5868
       __mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
       mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
       hfs_extend_file+0xf2/0x15e0 fs/hfs/extent.c:397
       hfs_bmap_reserve+0x107/0x430 fs/hfs/btree.c:269
       __hfs_ext_write_extent+0x1fa/0x470 fs/hfs/extent.c:121
       hfs_ext_write_extent+0x17e/0x210 fs/hfs/extent.c:144
       hfs_write_inode+0x117/0x960 fs/hfs/inode.c:459
       write_inode fs/fs-writeback.c:1582 [inline]
       __writeback_single_inode+0x75d/0x1060 fs/fs-writeback.c:1813
       writeback_sb_inodes+0x92e/0x1910 fs/fs-writeback.c:2041
       wb_writeback+0x445/0xad0 fs/fs-writeback.c:2227
       wb_do_writeback fs/fs-writeback.c:2374 [inline]
       wb_workfn+0x3fd/0xf00 fs/fs-writeback.c:2414
       process_one_work kernel/workqueue.c:3275 [inline]
       process_scheduled_works+0xaec/0x17a0 kernel/workqueue.c:3358
       worker_thread+0xa50/0xfc0 kernel/workqueue.c:3439
       kthread+0x388/0x470 kernel/kthread.c:467
       ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&tree->tree_lock#2/1);
                               lock(&HFS_I(tree->inode)->extents_lock);
                               lock(&tree->tree_lock#2/1);
  lock(&HFS_I(tree->inode)->extents_lock);

 *** DEADLOCK ***

3 locks held by kworker/u8:5/988:
 #0: ffff88801f2f0138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3250 [inline]
 #0: ffff88801f2f0138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x9d4/0x17a0 kernel/workqueue.c:3358
 #1: ffffc9000440fc40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
 #1: ffffc9000440fc40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa0f/0x17a0 kernel/workqueue.c:3358
 #2: ffff888063fb40a0 (&tree->tree_lock#2/1){+.+.}-{4:4}, at: hfs_find_init+0x18e/0x300 fs/hfs/bfind.c:-1

stack backtrace:
CPU: 0 UID: 0 PID: 988 Comm: kworker/u8:5 Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Workqueue: writeback wb_workfn (flush-7:6)
Call Trace:
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 print_circular_bug+0x2e1/0x300 kernel/locking/lockdep.c:2043
 check_noncircular+0x12e/0x150 kernel/locking/lockdep.c:2175
 check_prev_add kernel/locking/lockdep.c:3165 [inline]
 check_prevs_add kernel/locking/lockdep.c:3284 [inline]
 validate_chain kernel/locking/lockdep.c:3908 [inline]
 __lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
 lock_acquire+0x106/0x330 kernel/locking/lockdep.c:5868
 __mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
 mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
 hfs_extend_file+0xf2/0x15e0 fs/hfs/extent.c:397
 hfs_bmap_reserve+0x107/0x430 fs/hfs/btree.c:269
 __hfs_ext_write_extent+0x1fa/0x470 fs/hfs/extent.c:121
 hfs_ext_write_extent+0x17e/0x210 fs/hfs/extent.c:144
 hfs_write_inode+0x117/0x960 fs/hfs/inode.c:459
 write_inode fs/fs-writeback.c:1582 [inline]
 __writeback_single_inode+0x75d/0x1060 fs/fs-writeback.c:1813
 writeback_sb_inodes+0x92e/0x1910 fs/fs-writeback.c:2041
 wb_writeback+0x445/0xad0 fs/fs-writeback.c:2227
 wb_do_writeback fs/fs-writeback.c:2374 [inline]
 wb_workfn+0x3fd/0xf00 fs/fs-writeback.c:2414
 process_one_work kernel/workqueue.c:3275 [inline]
 process_scheduled_works+0xaec/0x17a0 kernel/workqueue.c:3358
 worker_thread+0xa50/0xfc0 kernel/workqueue.c:3439
 kthread+0x388/0x470 kernel/kthread.c:467
 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245