======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
kworker/u8:3/50 is trying to acquire lock:
ffff888050196f78 (&HFS_I(tree->inode)->extents_lock){+.+.}-{4:4}, at: hfs_extend_file+0xa5/0xcd0 fs/hfs/extent.c:397
but task is already holding lock:
ffff888033dcc0b0 (&tree->tree_lock/1){+.+.}-{4:4}, at: hfs_find_init+0x19c/0x310 fs/hfs/bfind.c:36
which lock already depends on the new lock.
the existing dependency chain (in reverse order) is:
-> #1 (&tree->tree_lock/1){+.+.}-{4:4}:
__mutex_lock_common kernel/locking/mutex.c:598 [inline]
__mutex_lock+0x193/0x1060 kernel/locking/mutex.c:760
hfs_find_init+0x19c/0x310 fs/hfs/bfind.c:36
hfs_ext_read_extent+0x19b/0x9e0 fs/hfs/extent.c:200
hfs_get_block+0x568/0x830 fs/hfs/extent.c:366
block_read_full_folio+0x457/0x850 fs/buffer.c:2420
filemap_read_folio+0xc8/0x2a0 mm/filemap.c:2444
do_read_cache_folio+0x263/0x5c0 mm/filemap.c:4024
do_read_cache_page mm/filemap.c:4090 [inline]
read_cache_page+0x5b/0x160 mm/filemap.c:4099
read_mapping_page include/linux/pagemap.h:993 [inline]
__hfs_bnode_create+0x70b/0x9b0 fs/hfs/bnode.c:388
hfs_bnode_find+0x2cc/0xd40 fs/hfs/bnode.c:433
hfs_brec_find+0x3a2/0x650 fs/hfs/bfind.c:135
hfs_brec_read+0x26/0x120 fs/hfs/bfind.c:174
hfs_cat_find_brec+0xd8/0x2c0 fs/hfs/catalog.c:194
hfs_fill_super+0x524/0x800 fs/hfs/super.c:354
get_tree_bdev_flags+0x38c/0x620 fs/super.c:1691
vfs_get_tree+0x8e/0x340 fs/super.c:1751
fc_mount fs/namespace.c:1208 [inline]
do_new_mount_fc fs/namespace.c:3651 [inline]
do_new_mount fs/namespace.c:3727 [inline]
path_mount+0x7b9/0x23a0 fs/namespace.c:4037
do_mount fs/namespace.c:4050 [inline]
__do_sys_mount fs/namespace.c:4238 [inline]
__se_sys_mount fs/namespace.c:4215 [inline]
__x64_sys_mount+0x293/0x310 fs/namespace.c:4215
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xcd/0xfa0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
-> #0 (&HFS_I(tree->inode)->extents_lock){+.+.}-{4:4}:
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain kernel/locking/lockdep.c:3908 [inline]
__lock_acquire+0x12a6/0x1ce0 kernel/locking/lockdep.c:5237
lock_acquire kernel/locking/lockdep.c:5868 [inline]
lock_acquire+0x179/0x350 kernel/locking/lockdep.c:5825
__mutex_lock_common kernel/locking/mutex.c:598 [inline]
__mutex_lock+0x193/0x1060 kernel/locking/mutex.c:760
hfs_extend_file+0xa5/0xcd0 fs/hfs/extent.c:397
hfs_bmap_reserve+0x2ab/0x3a0 fs/hfs/btree.c:269
__hfs_ext_write_extent+0x3cf/0x520 fs/hfs/extent.c:121
hfs_ext_write_extent+0x1b5/0x1f0 fs/hfs/extent.c:144
hfs_write_inode+0xcc/0xab0 fs/hfs/inode.c:440
write_inode fs/fs-writeback.c:1564 [inline]
__writeback_single_inode+0xb3e/0xfb0 fs/fs-writeback.c:1784
writeback_sb_inodes+0x60d/0xfa0 fs/fs-writeback.c:2015
wb_writeback+0x419/0xb70 fs/fs-writeback.c:2195
wb_do_writeback fs/fs-writeback.c:2342 [inline]
wb_workfn+0x14d/0xbe0 fs/fs-writeback.c:2382
process_one_work+0x9cf/0x1b70 kernel/workqueue.c:3263
process_scheduled_works kernel/workqueue.c:3346 [inline]
worker_thread+0x6c8/0xf10 kernel/workqueue.c:3427
kthread+0x3c5/0x780 kernel/kthread.c:463
ret_from_fork+0x675/0x7d0 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&tree->tree_lock/1);
                               lock(&HFS_I(tree->inode)->extents_lock);
                               lock(&tree->tree_lock/1);
  lock(&HFS_I(tree->inode)->extents_lock);

  *** DEADLOCK ***
3 locks held by kworker/u8:3/50:
#0: ffff888021689948 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3238
#1: ffffc90000bb7d00 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3239
#2: ffff888033dcc0b0 (&tree->tree_lock/1){+.+.}-{4:4}, at: hfs_find_init+0x19c/0x310 fs/hfs/bfind.c:36
stack backtrace:
CPU: 0 UID: 0 PID: 50 Comm: kworker/u8:3 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Workqueue: writeback wb_workfn (flush-7:4)
Call Trace:
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
print_circular_bug+0x275/0x350 kernel/locking/lockdep.c:2043
check_noncircular+0x14c/0x170 kernel/locking/lockdep.c:2175
check_prev_add kernel/locking/lockdep.c:3165 [inline]
check_prevs_add kernel/locking/lockdep.c:3284 [inline]
validate_chain kernel/locking/lockdep.c:3908 [inline]
__lock_acquire+0x12a6/0x1ce0 kernel/locking/lockdep.c:5237
lock_acquire kernel/locking/lockdep.c:5868 [inline]
lock_acquire+0x179/0x350 kernel/locking/lockdep.c:5825
__mutex_lock_common kernel/locking/mutex.c:598 [inline]
__mutex_lock+0x193/0x1060 kernel/locking/mutex.c:760
hfs_extend_file+0xa5/0xcd0 fs/hfs/extent.c:397
hfs_bmap_reserve+0x2ab/0x3a0 fs/hfs/btree.c:269
__hfs_ext_write_extent+0x3cf/0x520 fs/hfs/extent.c:121
hfs_ext_write_extent+0x1b5/0x1f0 fs/hfs/extent.c:144
hfs_write_inode+0xcc/0xab0 fs/hfs/inode.c:440
write_inode fs/fs-writeback.c:1564 [inline]
__writeback_single_inode+0xb3e/0xfb0 fs/fs-writeback.c:1784
writeback_sb_inodes+0x60d/0xfa0 fs/fs-writeback.c:2015
wb_writeback+0x419/0xb70 fs/fs-writeback.c:2195
wb_do_writeback fs/fs-writeback.c:2342 [inline]
wb_workfn+0x14d/0xbe0 fs/fs-writeback.c:2382
process_one_work+0x9cf/0x1b70 kernel/workqueue.c:3263
process_scheduled_works kernel/workqueue.c:3346 [inline]
worker_thread+0x6c8/0xf10 kernel/workqueue.c:3427
kthread+0x3c5/0x780 kernel/kthread.c:463
ret_from_fork+0x675/0x7d0 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
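Read together, the two chains above say the same pair of mutexes is taken in opposite order on two paths: at mount time the B-tree inode's extents_lock is already held when hfs_ext_read_extent() -> hfs_find_init() takes the tree's tree_lock, while during writeback hfs_find_init() already holds tree_lock when hfs_bmap_reserve() -> hfs_extend_file() takes extents_lock. A minimal stand-alone sketch of that inversion (the mutexes and functions below are simplified stand-ins for illustration, not the actual fs/hfs code):

#include <linux/mutex.h>

static DEFINE_MUTEX(tree_lock);     /* stand-in for tree->tree_lock */
static DEFINE_MUTEX(extents_lock);  /* stand-in for HFS_I(tree->inode)->extents_lock */

/* Dependency #1 (recorded at mount): extents_lock is already held when the
 * B-tree lookup takes tree_lock. */
static void mount_order(void)
{
	mutex_lock(&extents_lock);
	mutex_lock(&tree_lock);
	mutex_unlock(&tree_lock);
	mutex_unlock(&extents_lock);
}

/* Dependency #0 (attempted during writeback): tree_lock is already held when
 * hfs_extend_file() tries to take extents_lock. Run concurrently with
 * mount_order(), each path can end up waiting on the lock the other holds. */
static void writeback_order(void)
{
	mutex_lock(&tree_lock);
	mutex_lock(&extents_lock);
	mutex_unlock(&extents_lock);
	mutex_unlock(&tree_lock);
}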
hfs: new node 0 already hashed?
------------[ cut here ]------------
WARNING: CPU: 0 PID: 50 at fs/hfs/bnode.c:520 hfs_bnode_create+0x14c/0x5e0 fs/hfs/bnode.c:520
Modules linked in:
CPU: 0 UID: 0 PID: 50 Comm: kworker/u8:3 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Workqueue: writeback wb_workfn (flush-7:4)
RIP: 0010:hfs_bnode_create+0x14c/0x5e0 fs/hfs/bnode.c:520
Code: e9 18 ff 45 39 fc 75 9e e8 91 ee 18 ff 4c 89 f7 e8 b9 73 c2 08 e8 84 ee 18 ff 44 89 e6 48 c7 c7 60 d9 aa 8b e8 c5 42 f7 fe 90 <0f> 0b 90 e8 6c ee 18 ff 48 89 d8 48 83 c4 28 5b 5d 41 5c 41 5d 41
RSP: 0018:ffffc90000bb7070 EFLAGS: 00010286
RAX: 000000000000001f RBX: ffff888027708300 RCX: ffffffff819acb09
RDX: 0000000000000000 RSI: ffffffff819b4636 RDI: 0000000000000005
RBP: ffff888033dcc000 R08: 0000000000000005 R09: 0000000000000000
R10: 0000000080000000 R11: 77656e203a736668 R12: 0000000000000000
R13: dffffc0000000000 R14: ffff888033dcc0e0 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffff8881249e0000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000110c2e3f0c CR3: 0000000029493000 CR4: 0000000000350ef0
Call Trace:
hfs_bmap_alloc+0x7d7/0x960 fs/hfs/btree.c:326
hfs_btree_inc_height.isra.0+0xff/0x820 fs/hfs/brec.c:490
hfs_brec_insert+0x8b1/0xc40 fs/hfs/brec.c:148
__hfs_ext_write_extent+0x3fa/0x520 fs/hfs/extent.c:124
hfs_ext_write_extent+0x1b5/0x1f0 fs/hfs/extent.c:144
hfs_write_inode+0xcc/0xab0 fs/hfs/inode.c:440
write_inode fs/fs-writeback.c:1564 [inline]
__writeback_single_inode+0xb3e/0xfb0 fs/fs-writeback.c:1784
writeback_sb_inodes+0x60d/0xfa0 fs/fs-writeback.c:2015
wb_writeback+0x419/0xb70 fs/fs-writeback.c:2195
wb_do_writeback fs/fs-writeback.c:2342 [inline]
wb_workfn+0x14d/0xbe0 fs/fs-writeback.c:2382
process_one_work+0x9cf/0x1b70 kernel/workqueue.c:3263
process_scheduled_works kernel/workqueue.c:3346 [inline]
worker_thread+0x6c8/0xf10 kernel/workqueue.c:3427
kthread+0x3c5/0x780 kernel/kthread.c:463
ret_from_fork+0x675/0x7d0 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
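The follow-on WARNING comes from a sanity check in hfs_bnode_create() (fs/hfs/bnode.c:520): before constructing a new in-memory node it looks the node number up in the tree's node hash and complains if a node with that number is already cached. Roughly paraphrased, not a verbatim quote of the kernel source:

/* Approximate shape of the check that prints "new node %u already hashed?". */
struct hfs_bnode *hfs_bnode_create(struct hfs_btree *tree, u32 num)
{
	struct hfs_bnode *node;

	spin_lock(&tree->hash_lock);
	node = hfs_bnode_findhash(tree, num);	/* NULL expected for a brand-new node */
	spin_unlock(&tree->hash_lock);
	if (node) {
		pr_crit("new node %u already hashed?\n", num);	/* message seen above */
		WARN_ON(1);
		return node;
	}
	/* ... normal node construction continues here ... */
}

Here num is 0, the B-tree header node, which is always allocated; hfs_bmap_alloc() handing it out again suggests the allocation bitmap or header of the fuzzed image is corrupted, independently of the lock-ordering report above.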