hfs: request for non-existent node 10 in B*Tree
hfs: request for non-existent node 10 in B*Tree
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
kworker/u4:18/4576 is trying to acquire lock:
ffff888054381af8 (&HFS_I(tree->inode)->extents_lock){+.+.}-{3:3}, at: hfs_extend_file+0xd7/0x1280 fs/hfs/extent.c:397

but task is already holding lock:
ffff88807d6a60b0 (&tree->tree_lock#2/1){+.+.}-{3:3}, at: hfs_find_init+0x15b/0x1d0 fs/hfs/bfind.c:-1

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&tree->tree_lock#2/1){+.+.}-{3:3}:
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x120/0xaf0 kernel/locking/mutex.c:747
       hfs_find_init+0x15b/0x1d0 fs/hfs/bfind.c:-1
       hfs_ext_read_extent fs/hfs/extent.c:200 [inline]
       hfs_extend_file+0x2eb/0x1280 fs/hfs/extent.c:401
       hfs_bmap_reserve+0x103/0x420 fs/hfs/btree.c:234
       hfs_cat_create+0x1c0/0x8d0 fs/hfs/catalog.c:104
       hfs_mkdir+0x68/0xe0 fs/hfs/dir.c:232
       vfs_mkdir+0x387/0x570 fs/namei.c:4106
       do_mkdirat+0x1d0/0x430 fs/namei.c:4131
       __do_sys_mkdirat fs/namei.c:4146 [inline]
       __se_sys_mkdirat fs/namei.c:4144 [inline]
       __x64_sys_mkdirat+0x85/0x90 fs/namei.c:4144
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #0 (&HFS_I(tree->inode)->extents_lock){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3090 [inline]
       check_prevs_add kernel/locking/lockdep.c:3209 [inline]
       validate_chain kernel/locking/lockdep.c:3825 [inline]
       __lock_acquire+0x2cf8/0x7c50 kernel/locking/lockdep.c:5049
       lock_acquire+0x1b4/0x490 kernel/locking/lockdep.c:5662
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x120/0xaf0 kernel/locking/mutex.c:747
       hfs_extend_file+0xd7/0x1280 fs/hfs/extent.c:397
       hfs_bmap_reserve+0x103/0x420 fs/hfs/btree.c:234
       hfs_bmap_alloc+0x7b/0x5c0 fs/hfs/btree.c:261
       hfs_bnode_split+0xc9/0xee0 fs/hfsplus/brec.c:245
       hfs_brec_insert+0x374/0xbc0 fs/hfs/brec.c:102
       __hfs_ext_write_extent+0x2a1/0x470 fs/hfs/extent.c:124
       hfs_ext_write_extent+0x15e/0x1e0 fs/hfs/extent.c:144
       hfs_write_inode+0x8e/0x970 fs/hfs/inode.c:434
       write_inode fs/fs-writeback.c:1460 [inline]
       __writeback_single_inode+0x75b/0x1160 fs/fs-writeback.c:1677
       writeback_sb_inodes+0xad8/0x17d0 fs/fs-writeback.c:1903
       wb_writeback+0x468/0xd00 fs/fs-writeback.c:2077
       wb_do_writeback fs/fs-writeback.c:2220 [inline]
       wb_workfn+0x435/0xec0 fs/fs-writeback.c:2260
       process_one_work+0x898/0x1160 kernel/workqueue.c:2292
       worker_thread+0xaa2/0x1250 kernel/workqueue.c:2439
       kthread+0x29d/0x330 kernel/kthread.c:376
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&tree->tree_lock#2/1);
                               lock(&HFS_I(tree->inode)->extents_lock);
                               lock(&tree->tree_lock#2/1);
  lock(&HFS_I(tree->inode)->extents_lock);

 *** DEADLOCK ***

3 locks held by kworker/u4:18/4576:
 #0: ffff888141266138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc900051cfd00 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #2: ffff88807d6a60b0 (&tree->tree_lock#2/1){+.+.}-{3:3}, at: hfs_find_init+0x15b/0x1d0 fs/hfs/bfind.c:-1

stack backtrace:
CPU: 0 PID: 4576 Comm: kworker/u4:18 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Workqueue: writeback wb_workfn (flush-7:0)
Call Trace:
 dump_stack_lvl+0x168/0x22e lib/dump_stack.c:106
 check_noncircular+0x274/0x310 kernel/locking/lockdep.c:2170
 check_prev_add kernel/locking/lockdep.c:3090 [inline]
 check_prevs_add kernel/locking/lockdep.c:3209 [inline]
 validate_chain kernel/locking/lockdep.c:3825 [inline]
 __lock_acquire+0x2cf8/0x7c50 kernel/locking/lockdep.c:5049
 lock_acquire+0x1b4/0x490 kernel/locking/lockdep.c:5662
 __mutex_lock_common kernel/locking/mutex.c:603 [inline]
 __mutex_lock+0x120/0xaf0 kernel/locking/mutex.c:747
 hfs_extend_file+0xd7/0x1280 fs/hfs/extent.c:397
 hfs_bmap_reserve+0x103/0x420 fs/hfs/btree.c:234
 hfs_bmap_alloc+0x7b/0x5c0 fs/hfs/btree.c:261
 hfs_bnode_split+0xc9/0xee0 fs/hfsplus/brec.c:245
 hfs_brec_insert+0x374/0xbc0 fs/hfs/brec.c:102
 __hfs_ext_write_extent+0x2a1/0x470 fs/hfs/extent.c:124
 hfs_ext_write_extent+0x15e/0x1e0 fs/hfs/extent.c:144
 hfs_write_inode+0x8e/0x970 fs/hfs/inode.c:434
 write_inode fs/fs-writeback.c:1460 [inline]
 __writeback_single_inode+0x75b/0x1160 fs/fs-writeback.c:1677
 writeback_sb_inodes+0xad8/0x17d0 fs/fs-writeback.c:1903
 wb_writeback+0x468/0xd00 fs/fs-writeback.c:2077
 wb_do_writeback fs/fs-writeback.c:2220 [inline]
 wb_workfn+0x435/0xec0 fs/fs-writeback.c:2260
 process_one_work+0x898/0x1160 kernel/workqueue.c:2292
 worker_thread+0xaa2/0x1250 kernel/workqueue.c:2439
 kthread+0x29d/0x330 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
hfs: request for non-existent node 11 in B*Tree
hfs: request for non-existent node 11 in B*Tree
kworker/u4:18: attempt to access beyond end of device
loop0: rw=1, sector=171, nr_sectors = 1 limit=64
Buffer I/O error on dev loop0, logical block 171, lost async page write
kworker/u4:18: attempt to access beyond end of device
loop0: rw=1, sector=172, nr_sectors = 1 limit=64
Buffer I/O error on dev loop0, logical block 172, lost async page write
kworker/u4:18: attempt to access beyond end of device
loop0: rw=1, sector=173, nr_sectors = 1 limit=64
Buffer I/O error on dev loop0, logical block 173, lost async page write
kworker/u4:18: attempt to access beyond end of device
loop0: rw=1, sector=174, nr_sectors = 1 limit=64
Buffer I/O error on dev loop0, logical block 174, lost async page write
EXT4-fs (loop7): Delayed block allocation failed for inode 15 at logical offset 2050 with max blocks 1 with error 28
EXT4-fs (loop7): This should not happen!! Data will be lost
EXT4-fs (loop7): Total free blocks count 0
EXT4-fs (loop7): Free/Dirty block details
EXT4-fs (loop7): free_blocks=4293918720
EXT4-fs (loop7): dirty_blocks=16
EXT4-fs (loop7): Block reservation details