======================================================
WARNING: possible circular locking dependency detected
6.6.0-syzkaller-16176-g1b907d050735 #0 Not tainted
------------------------------------------------------
syz-executor.1/17545 is trying to acquire lock:
ffff8880297328f8 (&sbi->alloc_mutex){+.+.}-{3:3}, at: hfsplus_block_free+0xbb/0x4d0 fs/hfsplus/bitmap.c:182

but task is already holding lock:
ffff88804e51b048 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_truncate+0x2da/0xb40 fs/hfsplus/extents.c:576

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}:
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x136/0xd60 kernel/locking/mutex.c:747
       hfsplus_get_block+0x383/0x14e0 fs/hfsplus/extents.c:260
       block_read_full_folio+0x474/0xe90 fs/buffer.c:2399
       filemap_read_folio+0x19c/0x770 mm/filemap.c:2323
       do_read_cache_folio+0x134/0x810 mm/filemap.c:3691
       do_read_cache_page+0x30/0x1f0 mm/filemap.c:3757
       read_mapping_page include/linux/pagemap.h:854 [inline]
       hfsplus_block_allocate+0xee/0x8b0 fs/hfsplus/bitmap.c:37
       hfsplus_file_extend+0xade/0x1b70 fs/hfsplus/extents.c:468
       hfsplus_get_block+0x406/0x14e0 fs/hfsplus/extents.c:245
       get_more_blocks fs/direct-io.c:647 [inline]
       do_direct_IO fs/direct-io.c:935 [inline]
       __blockdev_direct_IO+0x1d65/0x49a0 fs/direct-io.c:1248
       blockdev_direct_IO include/linux/fs.h:3038 [inline]
       hfsplus_direct_IO+0xf8/0x1e0 fs/hfsplus/inode.c:135
       generic_file_direct_write+0x1e3/0x3f0 mm/filemap.c:3843
       __generic_file_write_iter+0x125/0x230 mm/filemap.c:3999
       generic_file_write_iter+0xaf/0x310 mm/filemap.c:4039
       call_write_iter include/linux/fs.h:2020 [inline]
       new_sync_write fs/read_write.c:491 [inline]
       vfs_write+0x792/0xb20 fs/read_write.c:584
       ksys_write+0x1a0/0x2c0 fs/read_write.c:637
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x44/0x110 arch/x86/entry/common.c:82
       entry_SYSCALL_64_after_hwframe+0x63/0x6b

-> #0 (&sbi->alloc_mutex){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain kernel/locking/lockdep.c:3868 [inline]
       __lock_acquire+0x39ff/0x7f70 kernel/locking/lockdep.c:5136
       lock_acquire+0x1e3/0x520 kernel/locking/lockdep.c:5753
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x136/0xd60 kernel/locking/mutex.c:747
       hfsplus_block_free+0xbb/0x4d0 fs/hfsplus/bitmap.c:182
       hfsplus_free_extents+0x17a/0xae0 fs/hfsplus/extents.c:363
       hfsplus_file_truncate+0x7d0/0xb40 fs/hfsplus/extents.c:591
       hfsplus_delete_inode+0x174/0x220
       hfsplus_unlink+0x512/0x790 fs/hfsplus/dir.c:405
       vfs_unlink+0x35d/0x5f0 fs/namei.c:4318
       do_unlinkat+0x4ae/0x830 fs/namei.c:4382
       __do_sys_unlink fs/namei.c:4430 [inline]
       __se_sys_unlink fs/namei.c:4428 [inline]
       __x64_sys_unlink+0x49/0x50 fs/namei.c:4428
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x44/0x110 arch/x86/entry/common.c:82
       entry_SYSCALL_64_after_hwframe+0x63/0x6b

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&HFSPLUS_I(inode)->extents_lock);
                               lock(&sbi->alloc_mutex);
                               lock(&HFSPLUS_I(inode)->extents_lock);
  lock(&sbi->alloc_mutex);

 *** DEADLOCK ***

5 locks held by syz-executor.1/17545:
 #0: ffff88804b27e418 (sb_writers#27){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:404
 #1: ffff88804e519080 (&type->i_mutex_dir_key#12/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:837 [inline]
 #1: ffff88804e519080 (&type->i_mutex_dir_key#12/1){+.+.}-{3:3}, at: do_unlinkat+0x26a/0x830 fs/namei.c:4369
 #2: ffff88804e51b240 (&sb->s_type->i_mutex_key#33){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:802 [inline]
 #2: ffff88804e51b240 (&sb->s_type->i_mutex_key#33){+.+.}-{3:3}, at: vfs_unlink+0xe4/0x5f0 fs/namei.c:4307
 #3: ffff888029732998 (&sbi->vh_mutex){+.+.}-{3:3}, at: hfsplus_unlink+0x161/0x790 fs/hfsplus/dir.c:370
 #4: ffff88804e51b048 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_truncate+0x2da/0xb40 fs/hfsplus/extents.c:576

stack backtrace:
CPU: 1 PID: 17545 Comm: syz-executor.1 Not tainted 6.6.0-syzkaller-16176-g1b907d050735 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/09/2023
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
 check_noncircular+0x375/0x4a0 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain kernel/locking/lockdep.c:3868 [inline]
 __lock_acquire+0x39ff/0x7f70 kernel/locking/lockdep.c:5136
 lock_acquire+0x1e3/0x520 kernel/locking/lockdep.c:5753
 __mutex_lock_common kernel/locking/mutex.c:603 [inline]
 __mutex_lock+0x136/0xd60 kernel/locking/mutex.c:747
 hfsplus_block_free+0xbb/0x4d0 fs/hfsplus/bitmap.c:182
 hfsplus_free_extents+0x17a/0xae0 fs/hfsplus/extents.c:363
 hfsplus_file_truncate+0x7d0/0xb40 fs/hfsplus/extents.c:591
 hfsplus_delete_inode+0x174/0x220
 hfsplus_unlink+0x512/0x790 fs/hfsplus/dir.c:405
 vfs_unlink+0x35d/0x5f0 fs/namei.c:4318
 do_unlinkat+0x4ae/0x830 fs/namei.c:4382
 __do_sys_unlink fs/namei.c:4430 [inline]
 __se_sys_unlink fs/namei.c:4428 [inline]
 __x64_sys_unlink+0x49/0x50 fs/namei.c:4428
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x44/0x110 arch/x86/entry/common.c:82
 entry_SYSCALL_64_after_hwframe+0x63/0x6b
RIP: 0033:0x7f6ed8c7cae9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f6ed99350c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000057
RAX: ffffffffffffffda RBX: 00007f6ed8d9bf80 RCX: 00007f6ed8c7cae9
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00000000200000c0
RBP: 00007f6ed8cc847a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f6ed8d9bf80 R15: 00007ffd304e7a18