syzbot


possible deadlock in hfsplus_block_allocate

Status: upstream: reported on 2022/11/29 13:38
Subsystems: hfs
Reported-by: syzbot+b6ccd31787585244a855@syzkaller.appspotmail.com
First crash: 366d, last: 2d20h
Discussions (1)
Title                                                  Replies (including bot)  Last reply
[syzbot] possible deadlock in hfsplus_block_allocate   0 (1)                    2022/11/29 13:38
Similar bugs (4)
Kernel      Title                                                Repro  Cause bisect  Fix bisect  Count  Last    Reported  Patched  Status
linux-6.1   possible deadlock in hfsplus_block_allocate          -      -             -           18     23h16m  245d      0/3      upstream: reported on 2023/03/30 21:16
linux-4.14  possible deadlock in hfsplus_block_allocate hfsplus  -      -             -           2      316d    332d      0/1      upstream: reported on 2023/01/02 02:59
linux-4.19  possible deadlock in hfsplus_block_allocate hfsplus  -      -             -           2      335d    343d      0/1      upstream: reported on 2022/12/22 13:15
linux-5.15  possible deadlock in hfsplus_block_allocate          -      -             -           12     7d13h   251d      0/3      upstream: reported on 2023/03/24 22:08

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.7.0-rc3-syzkaller-00014-gdf60cee26a2e #0 Not tainted
------------------------------------------------------
syz-executor.5/19427 is trying to acquire lock:
ffff88807d37b8f8 (&sbi->alloc_mutex){+.+.}-{3:3}, at: hfsplus_block_allocate+0x9e/0x8b0 fs/hfsplus/bitmap.c:35

but task is already holding lock:
ffff88807db2d208 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_extend+0x21b/0x1b70 fs/hfsplus/extents.c:457

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}:
       lock_acquire+0x1e3/0x530 kernel/locking/lockdep.c:5754
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x136/0xd60 kernel/locking/mutex.c:747
       hfsplus_get_block+0x383/0x14e0 fs/hfsplus/extents.c:260
       block_read_full_folio+0x474/0xea0 fs/buffer.c:2399
       filemap_read_folio+0x19c/0x780 mm/filemap.c:2323
       do_read_cache_folio+0x134/0x810 mm/filemap.c:3691
       do_read_cache_page+0x30/0x200 mm/filemap.c:3757
       read_mapping_page include/linux/pagemap.h:871 [inline]
       hfsplus_block_allocate+0xee/0x8b0 fs/hfsplus/bitmap.c:37
       hfsplus_file_extend+0xade/0x1b70 fs/hfsplus/extents.c:468
       hfsplus_get_block+0x406/0x14e0 fs/hfsplus/extents.c:245
       __block_write_begin_int+0x54d/0x1ad0 fs/buffer.c:2119
       __block_write_begin fs/buffer.c:2168 [inline]
       block_write_begin+0x9b/0x1e0 fs/buffer.c:2227
       cont_write_begin+0x643/0x880 fs/buffer.c:2582
       hfsplus_write_begin+0x8a/0xd0 fs/hfsplus/inode.c:52
       generic_perform_write+0x31b/0x630 mm/filemap.c:3918
       generic_file_write_iter+0xaf/0x310 mm/filemap.c:4039
       call_write_iter include/linux/fs.h:2020 [inline]
       new_sync_write fs/read_write.c:491 [inline]
       vfs_write+0x792/0xb20 fs/read_write.c:584
       ksys_write+0x1a0/0x2c0 fs/read_write.c:637
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x45/0x110 arch/x86/entry/common.c:82
       entry_SYSCALL_64_after_hwframe+0x63/0x6b

-> #0 (&sbi->alloc_mutex){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3134 [inline]
       check_prevs_add kernel/locking/lockdep.c:3253 [inline]
       validate_chain+0x1909/0x5ab0 kernel/locking/lockdep.c:3869
       __lock_acquire+0x1345/0x1fd0 kernel/locking/lockdep.c:5137
       lock_acquire+0x1e3/0x530 kernel/locking/lockdep.c:5754
       __mutex_lock_common kernel/locking/mutex.c:603 [inline]
       __mutex_lock+0x136/0xd60 kernel/locking/mutex.c:747
       hfsplus_block_allocate+0x9e/0x8b0 fs/hfsplus/bitmap.c:35
       hfsplus_file_extend+0xade/0x1b70 fs/hfsplus/extents.c:468
       hfsplus_bmap_reserve+0x105/0x4e0 fs/hfsplus/btree.c:358
       hfsplus_rename_cat+0x1d0/0x1050 fs/hfsplus/catalog.c:456
       hfsplus_unlink+0x308/0x790 fs/hfsplus/dir.c:376
       hfsplus_rename+0xc8/0x1c0 fs/hfsplus/dir.c:547
       vfs_rename+0xaba/0xde0 fs/namei.c:4844
       do_renameat2+0xd5a/0x1390 fs/namei.c:4996
       __do_sys_rename fs/namei.c:5042 [inline]
       __se_sys_rename fs/namei.c:5040 [inline]
       __x64_sys_rename+0x86/0x90 fs/namei.c:5040
       do_syscall_x64 arch/x86/entry/common.c:51 [inline]
       do_syscall_64+0x45/0x110 arch/x86/entry/common.c:82
       entry_SYSCALL_64_after_hwframe+0x63/0x6b

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&HFSPLUS_I(inode)->extents_lock);
                               lock(&sbi->alloc_mutex);
                               lock(&HFSPLUS_I(inode)->extents_lock);
  lock(&sbi->alloc_mutex);

 *** DEADLOCK ***

7 locks held by syz-executor.5/19427:
 #0: ffff888053652418 (sb_writers#14){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:404
 #1: ffff88807db29080 (&type->i_mutex_dir_key#9/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:837 [inline]
 #1: ffff88807db29080 (&type->i_mutex_dir_key#9/1){+.+.}-{3:3}, at: lock_rename fs/namei.c:3046 [inline]
 #1: ffff88807db29080 (&type->i_mutex_dir_key#9/1){+.+.}-{3:3}, at: do_renameat2+0x601/0x1390 fs/namei.c:4935
 #2: ffff88807db289c0 (&sb->s_type->i_mutex_key#21){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:837 [inline]
 #2: ffff88807db289c0 (&sb->s_type->i_mutex_key#21){+.+.}-{3:3}, at: lock_two_inodes+0x100/0x180 fs/inode.c:1129
 #3: ffff88807db2ab80 (&sb->s_type->i_mutex_key#21/4){+.+.}-{3:3}, at: vfs_rename+0x5eb/0xde0 fs/namei.c:4816
 #4: ffff88807d37b998 (&sbi->vh_mutex){+.+.}-{3:3}, at: hfsplus_unlink+0x161/0x790 fs/hfsplus/dir.c:370
 #5: ffff8880183aa0b0 (&tree->tree_lock){+.+.}-{3:3}, at: hfsplus_find_init+0x14a/0x1c0
 #6: ffff88807db2d208 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_extend+0x21b/0x1b70 fs/hfsplus/extents.c:457

stack backtrace:
CPU: 0 PID: 19427 Comm: syz-executor.5 Not tainted 6.7.0-rc3-syzkaller-00014-gdf60cee26a2e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/10/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
 check_noncircular+0x366/0x490 kernel/locking/lockdep.c:2187
 check_prev_add kernel/locking/lockdep.c:3134 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain+0x1909/0x5ab0 kernel/locking/lockdep.c:3869
 __lock_acquire+0x1345/0x1fd0 kernel/locking/lockdep.c:5137
 lock_acquire+0x1e3/0x530 kernel/locking/lockdep.c:5754
 __mutex_lock_common kernel/locking/mutex.c:603 [inline]
 __mutex_lock+0x136/0xd60 kernel/locking/mutex.c:747
 hfsplus_block_allocate+0x9e/0x8b0 fs/hfsplus/bitmap.c:35
 hfsplus_file_extend+0xade/0x1b70 fs/hfsplus/extents.c:468
 hfsplus_bmap_reserve+0x105/0x4e0 fs/hfsplus/btree.c:358
 hfsplus_rename_cat+0x1d0/0x1050 fs/hfsplus/catalog.c:456
 hfsplus_unlink+0x308/0x790 fs/hfsplus/dir.c:376
 hfsplus_rename+0xc8/0x1c0 fs/hfsplus/dir.c:547
 vfs_rename+0xaba/0xde0 fs/namei.c:4844
 do_renameat2+0xd5a/0x1390 fs/namei.c:4996
 __do_sys_rename fs/namei.c:5042 [inline]
 __se_sys_rename fs/namei.c:5040 [inline]
 __x64_sys_rename+0x86/0x90 fs/namei.c:5040
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x45/0x110 arch/x86/entry/common.c:82
 entry_SYSCALL_64_after_hwframe+0x63/0x6b
RIP: 0033:0x7f119127cae9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f1191fa30c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
RAX: ffffffffffffffda RBX: 00007f119139c050 RCX: 00007f119127cae9
RDX: 0000000000000000 RSI: 0000000020000040 RDI: 0000000020000000
RBP: 00007f11912c847a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000006e R14: 00007f119139c050 R15: 00007ffd86763308
 </TASK>
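
The report boils down to an ABBA inversion between two hfsplus locks. In the #0 chain, hfsplus_file_extend() takes HFSPLUS_I(inode)->extents_lock (fs/hfsplus/extents.c:457) and then calls hfsplus_block_allocate(), which takes sbi->alloc_mutex (fs/hfsplus/bitmap.c:35). In the #1 chain, hfsplus_block_allocate() already holds alloc_mutex when it reads an allocation-bitmap page via read_mapping_page() (fs/hfsplus/bitmap.c:37); that read ends up in hfsplus_get_block() (fs/hfsplus/extents.c:260), which takes the extents_lock of the inode backing the bitmap. Lockdep tracks every extents_lock under one class, so the two chains form the cycle extents_lock -> alloc_mutex -> extents_lock shown above. The program below is a minimal user-space sketch of that inversion, not hfsplus code: two pthread mutexes stand in for the kernel locks and the thread bodies only mirror the call chains in the report.

/*
 * Minimal user-space sketch of the lock-order inversion reported above.
 * extents_lock and alloc_mutex stand in for &HFSPLUS_I(inode)->extents_lock
 * and &sbi->alloc_mutex; nothing here is actual hfsplus code.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t extents_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t alloc_mutex  = PTHREAD_MUTEX_INITIALIZER;

/* Mirrors chain #0: hfsplus_file_extend() holds extents_lock, then
 * hfsplus_block_allocate() takes alloc_mutex. */
static void *extend_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&extents_lock);
    usleep(1000);                      /* widen the race window */
    pthread_mutex_lock(&alloc_mutex);  /* blocks if the other thread holds it */
    pthread_mutex_unlock(&alloc_mutex);
    pthread_mutex_unlock(&extents_lock);
    return NULL;
}

/* Mirrors chain #1: hfsplus_block_allocate() holds alloc_mutex, then the
 * bitmap read through hfsplus_get_block() takes an extents_lock. */
static void *allocate_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&alloc_mutex);
    usleep(1000);
    pthread_mutex_lock(&extents_lock); /* reverse order: the other half of the ABBA */
    pthread_mutex_unlock(&extents_lock);
    pthread_mutex_unlock(&alloc_mutex);
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, extend_path, NULL);
    pthread_create(&b, NULL, allocate_path, NULL);
    pthread_join(a, NULL);   /* with unlucky timing both threads block here forever */
    pthread_join(b, NULL);
    puts("no deadlock on this run");
    return 0;
}

Built with cc -pthread, the program usually exits, but the ordering it encodes is exactly what lockdep flags here before a real deadlock can occur; a kernel-side fix would typically establish a single acquisition order for the two locks or avoid taking extents_lock for the bitmap inode while alloc_mutex is held.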

Crashes (152):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2023/11/28 05:15 upstream df60cee26a2e 7ec6c044 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in hfsplus_block_allocate
2023/11/28 03:28 upstream 2cc14f52aeb7 7ec6c044 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in hfsplus_block_allocate
2023/11/27 22:48 upstream 2cc14f52aeb7 7ec6c044 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in hfsplus_block_allocate
2023/11/27 11:11 upstream 2cc14f52aeb7 5b429f39 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in hfsplus_block_allocate
2023/11/24 00:43 upstream d3fa86b1a7b4 fc59b78e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in hfsplus_block_allocate
2023/11/23 16:20 upstream 9b6de136b5f0 fc59b78e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root possible deadlock in hfsplus_block_allocate
2023/11/23 15:14 upstream 9b6de136b5f0 fc59b78e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in hfsplus_block_allocate
2023/11/23 00:11 upstream 9b6de136b5f0 03e12510 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-badwrites-root possible deadlock in hfsplus_block_allocate
2023/11/22 17:27 upstream c2d5304e6c64 03e12510 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root possible deadlock in hfsplus_block_allocate
2023/11/22 16:06 upstream c2d5304e6c64 03e12510 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in hfsplus_block_allocate
2023/11/21 23:50 upstream c2d5304e6c64 cb976f63 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in hfsplus_block_allocate
2023/11/21 22:35 upstream c2d5304e6c64 cb976f63 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in hfsplus_block_allocate
2023/11/21 19:48 upstream 98b1cc82c4af cb976f63 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in hfsplus_block_allocate
2023/11/21 13:03 upstream 98b1cc82c4af cb976f63 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in hfsplus_block_allocate
2023/11/17 19:46 upstream 6bc40e44f1dd cb976f63 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root possible deadlock in hfsplus_block_allocate
2023/11/16 18:00 upstream 7475e51b8796 cb976f63 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root possible deadlock in hfsplus_block_allocate
2023/11/15 20:00 upstream c42d9eeef8e5 cb976f63 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root possible deadlock in hfsplus_block_allocate
2023/11/15 10:03 upstream 86d11b0e20c0 cb976f63 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root possible deadlock in hfsplus_block_allocate
2023/11/15 05:36 upstream c42d9eeef8e5 cb976f63 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in hfsplus_block_allocate
2023/11/13 11:09 upstream b85ea95d0864 6d6dbf8a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in hfsplus_block_allocate
2023/11/04 07:50 upstream 8f6f76a6a29f 500bfdc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root possible deadlock in hfsplus_block_allocate
2023/10/28 09:29 upstream 888cf78c29e2 3c418d72 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root possible deadlock in hfsplus_block_allocate
2023/10/26 12:18 upstream 611da07b89fd 23afc60f .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in hfsplus_block_allocate
2023/10/15 01:07 upstream 70f8c6f8f880 6388bc36 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in hfsplus_block_allocate
2023/09/21 22:36 upstream b5cbe7c00aa0 0b6a67ac .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root possible deadlock in hfsplus_block_allocate
2023/09/21 14:48 upstream 42dc814987c1 0b6a67ac .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in hfsplus_block_allocate
2023/10/26 10:39 upstream 611da07b89fd b67a3ce3 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in hfsplus_block_allocate
2023/10/05 17:19 upstream 3006adf3be79 becbb1de .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in hfsplus_block_allocate
2023/10/04 21:53 upstream cbf3a2cb156a b7d7ff54 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in hfsplus_block_allocate
2023/10/04 11:43 upstream cbf3a2cb156a b7d7ff54 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in hfsplus_block_allocate
2023/08/09 03:39 upstream 13b937206866 8ad1a287 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in hfsplus_block_allocate
2023/08/07 11:00 upstream 52a93d39b17d 0ef3dfda .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in hfsplus_block_allocate
2023/07/30 12:35 upstream d31e3792919e 92476829 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in hfsplus_block_allocate
2023/07/29 16:36 upstream ffabf7c73176 92476829 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in hfsplus_block_allocate
2023/07/13 22:32 upstream eb26cbb1a754 55eda22f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root possible deadlock in hfsplus_block_allocate
2023/07/04 06:56 upstream 24be4d0b46bb 6e553898 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root possible deadlock in hfsplus_block_allocate
2023/07/03 20:26 upstream a901a3568fd2 6e553898 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root possible deadlock in hfsplus_block_allocate
2023/06/26 04:33 upstream 547cc9be86f4 79782afc .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in hfsplus_block_allocate
2023/06/17 10:45 upstream 1639fae5132b f3921d4d .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in hfsplus_block_allocate
2023/06/13 01:28 upstream fd37b884003c 749afb64 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in hfsplus_block_allocate
2023/06/12 13:42 upstream 858fd168a95c 7086cdb9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in hfsplus_block_allocate
2023/06/11 02:40 upstream 022ce8862dff 7086cdb9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in hfsplus_block_allocate
2023/06/10 04:20 upstream 33f2b5785a2b 9018a337 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in hfsplus_block_allocate
2023/06/08 22:24 upstream 25041a4c02c7 058b3a5a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in hfsplus_block_allocate
2023/07/14 23:42 upstream 2772d7df3c93 35d9ecc5 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in hfsplus_block_allocate
2023/07/13 11:05 upstream eb26cbb1a754 bfb20202 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in hfsplus_block_allocate
2023/10/23 12:41 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 78124b0c1d10 989a3687 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in hfsplus_block_allocate
2023/06/26 12:51 linux-next 60e7c4a25da6 09ffe269 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in hfsplus_block_allocate
2023/08/03 22:06 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 2642b8a18760 74621247 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in hfsplus_block_allocate
2023/06/28 13:29 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci e40939bbfc68 8064cb02 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in hfsplus_block_allocate
2023/06/19 13:35 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 177239177378 d521bc56 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in hfsplus_block_allocate
2022/11/29 13:13 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 6d464646530f 05dc7993 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in hfsplus_block_allocate
* Struck through repros no longer work on HEAD.