======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
kworker/u8:7/363 is trying to acquire lock:
ffff88803d94e6a8 (&HFS_I(tree->inode)->extents_lock){+.+.}-{4:4}, at: hfs_extend_file+0xf2/0x15e0 fs/hfs/extent.c:397

but task is already holding lock:
ffff88805c9940a0 (&tree->tree_lock/1){+.+.}-{4:4}, at: hfs_find_init+0x18e/0x300 fs/hfs/bfind.c:-1

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&tree->tree_lock/1){+.+.}-{4:4}:
       __mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
       mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
       hfs_find_init+0x18e/0x300 fs/hfs/bfind.c:-1
       hfs_ext_read_extent fs/hfs/extent.c:200 [inline]
       hfs_extend_file+0x35c/0x15e0 fs/hfs/extent.c:401
       hfs_bmap_reserve+0x107/0x430 fs/hfs/btree.c:269
       hfs_cat_create+0x20f/0x800 fs/hfs/catalog.c:104
       hfs_create+0x75/0xe0 fs/hfs/dir.c:202
       lookup_open fs/namei.c:4483 [inline]
       open_last_lookups fs/namei.c:4583 [inline]
       path_openat+0x13b4/0x38a0 fs/namei.c:4827
       do_file_open+0x23e/0x4a0 fs/namei.c:4859
       do_sys_openat2+0x113/0x200 fs/open.c:1366
       do_sys_open fs/open.c:1372 [inline]
       __do_sys_openat fs/open.c:1388 [inline]
       __se_sys_openat fs/open.c:1383 [inline]
       __x64_sys_openat+0x138/0x170 fs/open.c:1383
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&HFS_I(tree->inode)->extents_lock){+.+.}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3165 [inline]
       check_prevs_add kernel/locking/lockdep.c:3284 [inline]
       validate_chain kernel/locking/lockdep.c:3908 [inline]
       __lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
       lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
       __mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
       mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
       hfs_extend_file+0xf2/0x15e0 fs/hfs/extent.c:397
       hfs_bmap_reserve+0x107/0x430 fs/hfs/btree.c:269
       __hfs_ext_write_extent+0x1fa/0x470 fs/hfs/extent.c:121
       hfs_ext_write_extent+0x17e/0x210 fs/hfs/extent.c:144
       hfs_write_inode+0x117/0x960 fs/hfs/inode.c:459
       write_inode fs/fs-writeback.c:1582 [inline]
       __writeback_single_inode+0x75d/0x11a0 fs/fs-writeback.c:1813
       writeback_sb_inodes+0x995/0x19d0 fs/fs-writeback.c:2042
       wb_writeback+0x456/0xb70 fs/fs-writeback.c:2227
       wb_do_writeback fs/fs-writeback.c:2374 [inline]
       wb_workfn+0x41a/0xf60 fs/fs-writeback.c:2414
       process_one_work kernel/workqueue.c:3276 [inline]
       process_scheduled_works+0xb6e/0x18c0 kernel/workqueue.c:3359
       worker_thread+0xa53/0xfc0 kernel/workqueue.c:3440
       kthread+0x388/0x470 kernel/kthread.c:436
       ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&tree->tree_lock/1);
                               lock(&HFS_I(tree->inode)->extents_lock);
                               lock(&tree->tree_lock/1);
  lock(&HFS_I(tree->inode)->extents_lock);

 *** DEADLOCK ***

3 locks held by kworker/u8:7/363:
 #0: ffff88801eae7938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3251 [inline]
 #0: ffff88801eae7938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3359
 #1: ffffc900043f7c40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3252 [inline]
 #1: ffffc900043f7c40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3359
 #2: ffff88805c9940a0 (&tree->tree_lock/1){+.+.}-{4:4}, at: hfs_find_init+0x18e/0x300 fs/hfs/bfind.c:-1

stack backtrace:
CPU: 1 UID: 0 PID: 363 Comm: kworker/u8:7 Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Workqueue: writeback wb_workfn (flush-7:8)
Call Trace:
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 print_circular_bug+0x2e1/0x300 kernel/locking/lockdep.c:2043
 check_noncircular+0x12e/0x150 kernel/locking/lockdep.c:2175
 check_prev_add kernel/locking/lockdep.c:3165 [inline]
 check_prevs_add kernel/locking/lockdep.c:3284 [inline]
 validate_chain kernel/locking/lockdep.c:3908 [inline]
 __lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
 lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
 __mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
 mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
 hfs_extend_file+0xf2/0x15e0 fs/hfs/extent.c:397
 hfs_bmap_reserve+0x107/0x430 fs/hfs/btree.c:269
 __hfs_ext_write_extent+0x1fa/0x470 fs/hfs/extent.c:121
 hfs_ext_write_extent+0x17e/0x210 fs/hfs/extent.c:144
 hfs_write_inode+0x117/0x960 fs/hfs/inode.c:459
 write_inode fs/fs-writeback.c:1582 [inline]
 __writeback_single_inode+0x75d/0x11a0 fs/fs-writeback.c:1813
 writeback_sb_inodes+0x995/0x19d0 fs/fs-writeback.c:2042
 wb_writeback+0x456/0xb70 fs/fs-writeback.c:2227
 wb_do_writeback fs/fs-writeback.c:2374 [inline]
 wb_workfn+0x41a/0xf60 fs/fs-writeback.c:2414
 process_one_work kernel/workqueue.c:3276 [inline]
 process_scheduled_works+0xb6e/0x18c0 kernel/workqueue.c:3359
 worker_thread+0xa53/0xfc0 kernel/workqueue.c:3440
 kthread+0x388/0x470 kernel/kthread.c:436
 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
kworker/u8:7: attempt to access beyond end of device
loop8: rw=8388609, sector=76, nr_sectors = 1 limit=64
Buffer I/O error on dev loop8, logical block 76, lost async page write
kworker/u8:7: attempt to access beyond end of device
loop8: rw=8388609, sector=77, nr_sectors = 1 limit=64
Buffer I/O error on dev loop8, logical block 77, lost async page write
kworker/u8:7: attempt to access beyond end of device
loop8: rw=8388609, sector=78, nr_sectors = 1 limit=64
Buffer I/O error on dev loop8, logical block 78, lost async page write