======================================================
WARNING: possible circular locking dependency detected
6.6.0-rc3-next-20230927-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.3/5360 is trying to acquire lock:
ffff8880288958f8 (&sbi->alloc_mutex){+.+.}-{3:3}, at: hfsplus_block_free+0xd2/0x530 fs/hfsplus/bitmap.c:182

but task is already holding lock:
ffff888028dda988 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_truncate+0x226/0x1120 fs/hfsplus/extents.c:576

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}:
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x181/0x1340 kernel/locking/mutex.c:747
       hfsplus_get_block+0x277/0x9e0 fs/hfsplus/extents.c:260
       block_read_full_folio+0x3df/0xae0 fs/buffer.c:2398
       filemap_read_folio+0xe9/0x2c0 mm/filemap.c:2368
       do_read_cache_folio+0x205/0x540 mm/filemap.c:3728
       do_read_cache_page mm/filemap.c:3794 [inline]
       read_cache_page+0x5b/0x160 mm/filemap.c:3803
       read_mapping_page include/linux/pagemap.h:854 [inline]
       hfsplus_block_allocate+0x144/0x990 fs/hfsplus/bitmap.c:37
       hfsplus_file_extend+0x440/0x1090 fs/hfsplus/extents.c:468
       hfsplus_get_block+0x1ae/0x9e0 fs/hfsplus/extents.c:245
       __block_write_begin_int+0x3c0/0x14d0 fs/buffer.c:2118
       __block_write_begin fs/buffer.c:2167 [inline]
       block_write_begin+0xb1/0x490 fs/buffer.c:2226
       cont_write_begin+0x52f/0x730 fs/buffer.c:2583
       hfsplus_write_begin+0x87/0x140 fs/hfsplus/inode.c:52
       cont_expand_zero fs/buffer.c:2543 [inline]
       cont_write_begin+0x5fd/0x730 fs/buffer.c:2573
       hfsplus_write_begin+0x87/0x140 fs/hfsplus/inode.c:52
       generic_cont_expand_simple+0x11f/0x200 fs/buffer.c:2474
       hfsplus_setattr+0x193/0x2d0 fs/hfsplus/inode.c:263
       notify_change+0x742/0x11c0 fs/attr.c:499
       do_truncate+0x15c/0x220 fs/open.c:66
       do_sys_ftruncate+0x6a2/0x790 fs/open.c:194
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&sbi->alloc_mutex){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain kernel/locking/lockdep.c:3868 [inline]
       __lock_acquire+0x2e3d/0x5de0 kernel/locking/lockdep.c:5136
       lock_acquire kernel/locking/lockdep.c:5753 [inline]
       lock_acquire+0x1ae/0x510 kernel/locking/lockdep.c:5718
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x181/0x1340 kernel/locking/mutex.c:747
       hfsplus_block_free+0xd2/0x530 fs/hfsplus/bitmap.c:182
       hfsplus_free_extents+0x3a2/0x510 fs/hfsplus/extents.c:363
       hfsplus_file_truncate+0xe7f/0x1120 fs/hfsplus/extents.c:591
       hfsplus_setattr+0x1eb/0x2d0 fs/hfsplus/inode.c:269
       notify_change+0x742/0x11c0 fs/attr.c:499
       do_truncate+0x15c/0x220 fs/open.c:66
       vfs_truncate+0x3eb/0x4d0 fs/open.c:112
       do_sys_truncate+0x153/0x190 fs/open.c:135
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:81
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&HFSPLUS_I(inode)->extents_lock);
                               lock(&sbi->alloc_mutex);
                               lock(&HFSPLUS_I(inode)->extents_lock);
  lock(&sbi->alloc_mutex);

 *** DEADLOCK ***

3 locks held by syz-executor.3/5360:
 #0: ffff8880296be410 (sb_writers#13){.+.+}-{0:0}, at: vfs_truncate+0xea/0x4d0 fs/open.c:85
 #1: ffff888028ddab80 (&sb->s_type->i_mutex_key#22){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:802 [inline]
 #1: ffff888028ddab80 (&sb->s_type->i_mutex_key#22){+.+.}-{3:3}, at: do_truncate+0x14b/0x220 fs/open.c:64
 #2: ffff888028dda988 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_truncate+0x226/0x1120 fs/hfsplus/extents.c:576

stack backtrace:
CPU: 1 PID: 5360 Comm: syz-executor.3 Not tainted 6.6.0-rc3-next-20230927-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/06/2023
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd9/0x1b0 lib/dump_stack.c:106
 check_noncircular+0x311/0x3f0 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain kernel/locking/lockdep.c:3868 [inline]
 __lock_acquire+0x2e3d/0x5de0 kernel/locking/lockdep.c:5136
 lock_acquire kernel/locking/lockdep.c:5753 [inline]
 lock_acquire+0x1ae/0x510 kernel/locking/lockdep.c:5718
 __mutex_lock_common kernel/locking/mutex.c:603 [inline]
 __mutex_lock+0x181/0x1340 kernel/locking/mutex.c:747
 hfsplus_block_free+0xd2/0x530 fs/hfsplus/bitmap.c:182
 hfsplus_free_extents+0x3a2/0x510 fs/hfsplus/extents.c:363
 hfsplus_file_truncate+0xe7f/0x1120 fs/hfsplus/extents.c:591
 hfsplus_setattr+0x1eb/0x2d0 fs/hfsplus/inode.c:269
 notify_change+0x742/0x11c0 fs/attr.c:499
 do_truncate+0x15c/0x220 fs/open.c:66
 vfs_truncate+0x3eb/0x4d0 fs/open.c:112
 do_sys_truncate+0x153/0x190 fs/open.c:135
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x38/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f547827cae9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f5478f2a0c8 EFLAGS: 00000246 ORIG_RAX: 000000000000004c
RAX: ffffffffffffffda RBX: 00007f547839bf80 RCX: 00007f547827cae9
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000020000040
RBP: 00007f54782c847a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f547839bf80 R15: 00007ffc3b47fb98
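
The "Possible unsafe locking scenario" above is the classic AB-BA inversion: the truncate path takes HFSPLUS_I(inode)->extents_lock and then wants sbi->alloc_mutex in hfsplus_block_free(), while the allocation path (hfsplus_block_allocate() reading the allocation file) takes alloc_mutex and then an extents_lock. Below is a minimal userspace sketch, not kernel code, that reproduces the same pattern with pthread mutexes; the mutex names are hypothetical stand-ins for the two hfsplus locks, and the sleeps only widen the race window. Lockdep flags the inconsistent ordering even on runs that happen not to deadlock.

/* Build with: gcc -Wall -pthread abba.c -o abba */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t extents_lock = PTHREAD_MUTEX_INITIALIZER; /* stands in for HFSPLUS_I(inode)->extents_lock */
static pthread_mutex_t alloc_mutex  = PTHREAD_MUTEX_INITIALIZER; /* stands in for sbi->alloc_mutex */

/* Truncate-like path: extents_lock first, then alloc_mutex. */
static void *truncate_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&extents_lock);
	usleep(1000);				/* widen the race window */
	pthread_mutex_lock(&alloc_mutex);	/* blocks forever if extend_path owns it */
	pthread_mutex_unlock(&alloc_mutex);
	pthread_mutex_unlock(&extents_lock);
	return NULL;
}

/* Extend/allocate-like path: alloc_mutex first, then extents_lock. */
static void *extend_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&alloc_mutex);
	usleep(1000);
	pthread_mutex_lock(&extents_lock);	/* blocks forever if truncate_path owns it */
	pthread_mutex_unlock(&extents_lock);
	pthread_mutex_unlock(&alloc_mutex);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, truncate_path, NULL);
	pthread_create(&b, NULL, extend_path, NULL);
	pthread_join(a, NULL);			/* hangs once both threads are blocked on each other */
	pthread_join(b, NULL);
	puts("no deadlock this run (the two paths did not interleave)");
	return 0;
}

The usual way out of this class of report is to make both paths acquire the two locks in one agreed order, or to drop one lock across the call that takes the other; which of those is safe for hfsplus depends on what extents_lock actually protects during truncate, so the sketch above only illustrates the inversion, not a fix.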