======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
kworker/u8:1/10645 is trying to acquire lock:
ffff888062850128 (&HFS_I(tree->inode)->extents_lock){+.+.}-{4:4}, at: hfs_extend_file+0xf2/0x15e0 fs/hfs/extent.c:397

but task is already holding lock:
ffff88803af5c0a0 (&tree->tree_lock#2/1){+.+.}-{4:4}, at: hfs_find_init+0x18e/0x300 fs/hfs/bfind.c:-1

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&tree->tree_lock#2/1){+.+.}-{4:4}:
       __mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
       mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
       hfs_find_init+0x18e/0x300 fs/hfs/bfind.c:-1
       hfs_ext_read_extent fs/hfs/extent.c:200 [inline]
       hfs_get_block+0x556/0xc50 fs/hfs/extent.c:366
       block_read_full_folio+0x29f/0x830 fs/buffer.c:2417
       filemap_read_folio+0x137/0x3b0 mm/filemap.c:2501
       do_read_cache_folio+0x2bf/0x560 mm/filemap.c:4106
       do_read_cache_page mm/filemap.c:4172 [inline]
       read_cache_page+0x5d/0x170 mm/filemap.c:4181
       read_mapping_page include/linux/pagemap.h:1011 [inline]
       __hfs_bnode_create+0x4b9/0x980 fs/hfs/bnode.c:388
       hfs_bnode_find+0x211/0xd40 fs/hfs/bnode.c:433
       hfs_brec_find+0x17b/0x510 fs/hfs/bfind.c:135
       hfs_brec_read+0x24/0x110 fs/hfs/bfind.c:174
       hfs_cat_find_brec+0x177/0x3f0 fs/hfs/catalog.c:194
       hfs_fill_super+0x507/0x750 fs/hfs/super.c:357
       get_tree_bdev_flags+0x431/0x4f0 fs/super.c:1694
       vfs_get_tree+0x92/0x2a0 fs/super.c:1754
       fc_mount fs/namespace.c:1193 [inline]
       do_new_mount_fc fs/namespace.c:3763 [inline]
       do_new_mount+0x341/0xd30 fs/namespace.c:3839
       do_mount fs/namespace.c:4172 [inline]
       __do_sys_mount fs/namespace.c:4361 [inline]
       __se_sys_mount+0x31d/0x420 fs/namespace.c:4338
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&HFS_I(tree->inode)->extents_lock){+.+.}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3165 [inline]
       check_prevs_add kernel/locking/lockdep.c:3284 [inline]
       validate_chain kernel/locking/lockdep.c:3908 [inline]
       __lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
       lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
       __mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
       mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
       hfs_extend_file+0xf2/0x15e0 fs/hfs/extent.c:397
       hfs_bmap_reserve+0x107/0x430 fs/hfs/btree.c:269
       __hfs_ext_write_extent+0x1fa/0x470 fs/hfs/extent.c:121
       hfs_ext_write_extent+0x17e/0x210 fs/hfs/extent.c:144
       hfs_write_inode+0x117/0x960 fs/hfs/inode.c:459
       write_inode fs/fs-writeback.c:1582 [inline]
       __writeback_single_inode+0x75d/0x11a0 fs/fs-writeback.c:1825
       writeback_sb_inodes+0x995/0x19d0 fs/fs-writeback.c:2054
       wb_writeback+0x456/0xb70 fs/fs-writeback.c:2239
       wb_do_writeback fs/fs-writeback.c:2386 [inline]
       wb_workfn+0x41a/0xf60 fs/fs-writeback.c:2426
       process_one_work kernel/workqueue.c:3288 [inline]
       process_scheduled_works+0xb6e/0x18c0 kernel/workqueue.c:3371
       worker_thread+0xa53/0xfc0 kernel/workqueue.c:3452
       kthread+0x388/0x470 kernel/kthread.c:436
       ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&tree->tree_lock#2/1);
                               lock(&HFS_I(tree->inode)->extents_lock);
                               lock(&tree->tree_lock#2/1);
  lock(&HFS_I(tree->inode)->extents_lock);

 *** DEADLOCK ***

3 locks held by kworker/u8:1/10645:
 #0: ffff88801eecc938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3263 [inline]
 #0: ffff88801eecc938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0xa52/0x18c0 kernel/workqueue.c:3371
 #1: ffffc9000575fc40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3264 [inline]
 #1: ffffc9000575fc40 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0xa8d/0x18c0 kernel/workqueue.c:3371
 #2: ffff88803af5c0a0 (&tree->tree_lock#2/1){+.+.}-{4:4}, at: hfs_find_init+0x18e/0x300 fs/hfs/bfind.c:-1

stack backtrace:
CPU: 1 UID: 0 PID: 10645 Comm: kworker/u8:1 Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Workqueue: writeback wb_workfn (flush-7:6)
Call Trace:
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 print_circular_bug+0x2e1/0x300 kernel/locking/lockdep.c:2043
 check_noncircular+0x12e/0x150 kernel/locking/lockdep.c:2175
 check_prev_add kernel/locking/lockdep.c:3165 [inline]
 check_prevs_add kernel/locking/lockdep.c:3284 [inline]
 validate_chain kernel/locking/lockdep.c:3908 [inline]
 __lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
 lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
 __mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
 mutex_lock_nested+0x5a/0x1d0 kernel/locking/rtmutex_api.c:552
 hfs_extend_file+0xf2/0x15e0 fs/hfs/extent.c:397
 hfs_bmap_reserve+0x107/0x430 fs/hfs/btree.c:269
 __hfs_ext_write_extent+0x1fa/0x470 fs/hfs/extent.c:121
 hfs_ext_write_extent+0x17e/0x210 fs/hfs/extent.c:144
 hfs_write_inode+0x117/0x960 fs/hfs/inode.c:459
 write_inode fs/fs-writeback.c:1582 [inline]
 __writeback_single_inode+0x75d/0x11a0 fs/fs-writeback.c:1825
 writeback_sb_inodes+0x995/0x19d0 fs/fs-writeback.c:2054
 wb_writeback+0x456/0xb70 fs/fs-writeback.c:2239
 wb_do_writeback fs/fs-writeback.c:2386 [inline]
 wb_workfn+0x41a/0xf60 fs/fs-writeback.c:2426
 process_one_work kernel/workqueue.c:3288 [inline]
 process_scheduled_works+0xb6e/0x18c0 kernel/workqueue.c:3371
 worker_thread+0xa53/0xfc0 kernel/workqueue.c:3452
 kthread+0x388/0x470 kernel/kthread.c:436
 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
bridge0: port 2(bridge_slave_1) entered blocking state
bridge0: port 2(bridge_slave_1) entered forwarding state
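The report above is a classic AB-BA inversion: the mount-time read path (chain #1) takes the inode's extents_lock and then the B-tree's tree_lock, while the writeback path (chain #0) holds tree_lock and then tries to take extents_lock. As a rough illustration only (this is not kernel code, and the lock names are just strings borrowed from the report), a lockdep-style checker can be sketched as a graph of observed "held -> taken" edges, where a two-lock cycle means the two orders contradict:

```python
# Minimal sketch of a lockdep-style lock-order checker (illustrative,
# not the kernel's lockdep): record each "lock B taken while lock A
# held" edge, then look for a pair of locks recorded in both orders.
from collections import defaultdict

class LockOrderChecker:
    def __init__(self):
        # held_lock -> set of locks acquired while it was held
        self.edges = defaultdict(set)

    def acquire(self, held, taking):
        """Record that `taking` was acquired while `held` was held."""
        self.edges[held].add(taking)

    def find_cycle(self):
        """Return a pair of locks with contradictory ordering, or None."""
        for a, succs in self.edges.items():
            for b in succs:
                if a in self.edges.get(b, ()):  # edge b -> a also seen
                    return (a, b)
        return None

checker = LockOrderChecker()
# Chain #1 (hfs_get_block -> hfs_ext_read_extent -> hfs_find_init):
# extents_lock is held when tree_lock is taken.
checker.acquire("extents_lock", "tree_lock")
# Chain #0 (hfs_ext_write_extent -> hfs_extend_file):
# tree_lock is held when extents_lock is taken.
checker.acquire("tree_lock", "extents_lock")

print(checker.find_cycle())  # -> ('extents_lock', 'tree_lock')
```

With only one of the two edges recorded, `find_cycle()` returns None; it is the second, reversed edge that closes the cycle, which mirrors how lockdep flags this report only once both call paths have actually run.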