syzbot


possible deadlock in hfsplus_block_allocate

Status: upstream: reported on 2023/03/30 21:16
Reported-by: syzbot+7cba265d6c8566d8a3c9@syzkaller.appspotmail.com
First crash: 335d, last: 55d
Similar bugs (4)
Kernel      | Title (subsystem)                                     | Count | Last  | Reported | Patched | Status
linux-4.14  | possible deadlock in hfsplus_block_allocate (hfsplus) | 2     | 407d  | 423d     | 0/1     | upstream: reported on 2023/01/02 02:59
upstream    | possible deadlock in hfsplus_block_allocate (hfs)     | 176   | 8d01h | 457d     | 0/26    | upstream: reported on 2022/11/29 13:38
linux-4.19  | possible deadlock in hfsplus_block_allocate (hfsplus) | 2     | 426d  | 434d     | 0/1     | upstream: reported on 2022/12/22 13:15
linux-5.15  | possible deadlock in hfsplus_block_allocate           | 17    | 6d21h | 341d     | 0/3     | upstream: reported on 2023/03/24 22:08

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.1.54-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.5/17627 is trying to acquire lock:
ffff88807df640f8 (&sbi->alloc_mutex){+.+.}-{3:3}, at: hfsplus_block_allocate+0x9a/0x8b0 fs/hfsplus/bitmap.c:35

but task is already holding lock:
ffff8880729fd8c8 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_extend+0x1d2/0x1b10 fs/hfsplus/extents.c:457

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}:
       lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5661
       __mutex_lock_common+0x1d4/0x2520 kernel/locking/mutex.c:603
       __mutex_lock kernel/locking/mutex.c:747 [inline]
       mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:799
       hfsplus_get_block+0x37f/0x14e0 fs/hfsplus/extents.c:260
       block_read_full_folio+0x403/0xf60 fs/buffer.c:2271
       filemap_read_folio+0x199/0x780 mm/filemap.c:2407
       do_read_cache_folio+0x2ee/0x810 mm/filemap.c:3535
       do_read_cache_page+0x32/0x220 mm/filemap.c:3577
       read_mapping_page include/linux/pagemap.h:756 [inline]
       hfsplus_block_allocate+0xea/0x8b0 fs/hfsplus/bitmap.c:37
       hfsplus_file_extend+0xa4c/0x1b10 fs/hfsplus/extents.c:468
       hfsplus_get_block+0x402/0x14e0 fs/hfsplus/extents.c:245
       __block_write_begin_int+0x544/0x1a30 fs/buffer.c:1991
       __block_write_begin fs/buffer.c:2041 [inline]
       block_write_begin+0x98/0x1f0 fs/buffer.c:2102
       cont_write_begin+0x63f/0x880 fs/buffer.c:2456
       hfsplus_write_begin+0x86/0xd0 fs/hfsplus/inode.c:52
       generic_perform_write+0x2fc/0x5e0 mm/filemap.c:3754
       __generic_file_write_iter+0x176/0x400 mm/filemap.c:3882
       generic_file_write_iter+0xab/0x310 mm/filemap.c:3914
       call_write_iter include/linux/fs.h:2205 [inline]
       new_sync_write fs/read_write.c:491 [inline]
       vfs_write+0x7ae/0xba0 fs/read_write.c:584
       ksys_write+0x19c/0x2c0 fs/read_write.c:637
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&sbi->alloc_mutex){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3090 [inline]
       check_prevs_add kernel/locking/lockdep.c:3209 [inline]
       validate_chain+0x1667/0x58e0 kernel/locking/lockdep.c:3824
       __lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5048
       lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5661
       __mutex_lock_common+0x1d4/0x2520 kernel/locking/mutex.c:603
       __mutex_lock kernel/locking/mutex.c:747 [inline]
       mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:799
       hfsplus_block_allocate+0x9a/0x8b0 fs/hfsplus/bitmap.c:35
       hfsplus_file_extend+0xa4c/0x1b10 fs/hfsplus/extents.c:468
       hfsplus_get_block+0x402/0x14e0 fs/hfsplus/extents.c:245
       __block_write_begin_int+0x544/0x1a30 fs/buffer.c:1991
       __block_write_begin fs/buffer.c:2041 [inline]
       block_write_begin+0x98/0x1f0 fs/buffer.c:2102
       cont_write_begin+0x63f/0x880 fs/buffer.c:2456
       hfsplus_write_begin+0x86/0xd0 fs/hfsplus/inode.c:52
       cont_expand_zero fs/buffer.c:2416 [inline]
       cont_write_begin+0x6e1/0x880 fs/buffer.c:2446
       hfsplus_write_begin+0x86/0xd0 fs/hfsplus/inode.c:52
       generic_cont_expand_simple+0x187/0x2a0 fs/buffer.c:2347
       hfsplus_setattr+0x169/0x280 fs/hfsplus/inode.c:263
       notify_change+0xdcd/0x1080 fs/attr.c:483
       do_truncate+0x21c/0x300 fs/open.c:65
       vfs_truncate+0x2dd/0x3a0 fs/open.c:111
       do_sys_truncate+0xda/0x190 fs/open.c:134
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&HFSPLUS_I(inode)->extents_lock);
                               lock(&sbi->alloc_mutex);
                               lock(&HFSPLUS_I(inode)->extents_lock);
  lock(&sbi->alloc_mutex);

 *** DEADLOCK ***

3 locks held by syz-executor.5/17627:
 #0: ffff888079c5e460 (sb_writers#22){.+.+}-{0:0}, at: mnt_want_write+0x3b/0x80 fs/namespace.c:393
 #1: ffff8880729fdac0 (&sb->s_type->i_mutex_key#27){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
 #1: ffff8880729fdac0 (&sb->s_type->i_mutex_key#27){+.+.}-{3:3}, at: do_truncate+0x208/0x300 fs/open.c:63
 #2: ffff8880729fd8c8 (&HFSPLUS_I(inode)->extents_lock){+.+.}-{3:3}, at: hfsplus_file_extend+0x1d2/0x1b10 fs/hfsplus/extents.c:457

stack backtrace:
CPU: 1 PID: 17627 Comm: syz-executor.5 Not tainted 6.1.54-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/04/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 check_noncircular+0x2fa/0x3b0 kernel/locking/lockdep.c:2170
 check_prev_add kernel/locking/lockdep.c:3090 [inline]
 check_prevs_add kernel/locking/lockdep.c:3209 [inline]
 validate_chain+0x1667/0x58e0 kernel/locking/lockdep.c:3824
 __lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5048
 lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5661
 __mutex_lock_common+0x1d4/0x2520 kernel/locking/mutex.c:603
 __mutex_lock kernel/locking/mutex.c:747 [inline]
 mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:799
 hfsplus_block_allocate+0x9a/0x8b0 fs/hfsplus/bitmap.c:35
 hfsplus_file_extend+0xa4c/0x1b10 fs/hfsplus/extents.c:468
 hfsplus_get_block+0x402/0x14e0 fs/hfsplus/extents.c:245
 __block_write_begin_int+0x544/0x1a30 fs/buffer.c:1991
 __block_write_begin fs/buffer.c:2041 [inline]
 block_write_begin+0x98/0x1f0 fs/buffer.c:2102
 cont_write_begin+0x63f/0x880 fs/buffer.c:2456
 hfsplus_write_begin+0x86/0xd0 fs/hfsplus/inode.c:52
 cont_expand_zero fs/buffer.c:2416 [inline]
 cont_write_begin+0x6e1/0x880 fs/buffer.c:2446
 hfsplus_write_begin+0x86/0xd0 fs/hfsplus/inode.c:52
 generic_cont_expand_simple+0x187/0x2a0 fs/buffer.c:2347
 hfsplus_setattr+0x169/0x280 fs/hfsplus/inode.c:263
 notify_change+0xdcd/0x1080 fs/attr.c:483
 do_truncate+0x21c/0x300 fs/open.c:65
 vfs_truncate+0x2dd/0x3a0 fs/open.c:111
 do_sys_truncate+0xda/0x190 fs/open.c:134
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f9fee07cae9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f9feedb60c8 EFLAGS: 00000246 ORIG_RAX: 000000000000004c
RAX: ffffffffffffffda RBX: 00007f9fee19bf80 RCX: 00007f9fee07cae9
RDX: 0000000000000000 RSI: 0000000000002823 RDI: 0000000020000000
RBP: 00007f9fee0c847a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f9fee19bf80 R15: 00007ffd4a8b55d8
 </TASK>

Crashes (19):
Time             | Kernel      | Commit       | Syzkaller | Manager                   | Title
2023/09/19 19:06 | linux-6.1.y | a356197db198 | 0b6a67ac  | ci2-linux-6-1-kasan       | possible deadlock in hfsplus_block_allocate
2024/01/05 15:50 | linux-6.1.y | 38fb82ecd144 | 28c42cff  | ci2-linux-6-1-kasan-arm64 | possible deadlock in hfsplus_block_allocate
2023/11/30 02:31 | linux-6.1.y | 6ac30d748bb0 | f819d6f7  | ci2-linux-6-1-kasan-arm64 | possible deadlock in hfsplus_block_allocate
2023/09/16 12:02 | linux-6.1.y | 09045dae0d90 | 0b6a67ac  | ci2-linux-6-1-kasan-arm64 | possible deadlock in hfsplus_block_allocate
2023/07/17 17:03 | linux-6.1.y | 61fd484b2cf6 | 20f8b3c2  | ci2-linux-6-1-kasan       | possible deadlock in hfsplus_block_allocate
2023/06/13 22:47 | linux-6.1.y | 2f3918bc53fb | d2ee9228  | ci2-linux-6-1-kasan       | possible deadlock in hfsplus_block_allocate
2023/06/05 19:10 | linux-6.1.y | 76ba310227d2 | a4ae4f42  | ci2-linux-6-1-kasan       | possible deadlock in hfsplus_block_allocate
2023/05/12 09:01 | linux-6.1.y | bf4ad6fa4e53 | adb9a3cd  | ci2-linux-6-1-kasan       | possible deadlock in hfsplus_block_allocate
2023/03/30 21:16 | linux-6.1.y | 3b29299e5f60 | f325deb0  | ci2-linux-6-1-kasan       | possible deadlock in hfsplus_block_allocate
2023/08/26 16:11 | linux-6.1.y | cd363bb9548e | 7ba13a15  | ci2-linux-6-1-kasan-arm64 | possible deadlock in hfsplus_block_allocate
2023/08/22 14:57 | linux-6.1.y | 6c44e13dc284 | b81ca3f6  | ci2-linux-6-1-kasan-arm64 | possible deadlock in hfsplus_block_allocate
2023/06/08 01:12 | linux-6.1.y | 76ba310227d2 | 058b3a5a  | ci2-linux-6-1-kasan-arm64 | possible deadlock in hfsplus_block_allocate
2023/05/02 13:33 | linux-6.1.y | ca48fc16c493 | 52d40fd2  | ci2-linux-6-1-kasan-arm64 | possible deadlock in hfsplus_block_allocate
2023/04/28 12:40 | linux-6.1.y | ca1c9012c941 | 70a605de  | ci2-linux-6-1-kasan-arm64 | possible deadlock in hfsplus_block_allocate
2023/04/16 03:27 | linux-6.1.y | 0102425ac76b | ec410564  | ci2-linux-6-1-kasan-arm64 | possible deadlock in hfsplus_block_allocate
2023/04/12 08:28 | linux-6.1.y | 543aff194ab6 | 1a1596b6  | ci2-linux-6-1-kasan-arm64 | possible deadlock in hfsplus_block_allocate
2023/04/10 13:24 | linux-6.1.y | 543aff194ab6 | 71147e29  | ci2-linux-6-1-kasan-arm64 | possible deadlock in hfsplus_block_allocate
2023/04/09 10:23 | linux-6.1.y | 543aff194ab6 | 71147e29  | ci2-linux-6-1-kasan-arm64 | possible deadlock in hfsplus_block_allocate
2023/04/07 21:46 | linux-6.1.y | 543aff194ab6 | 71147e29  | ci2-linux-6-1-kasan-arm64 | possible deadlock in hfsplus_block_allocate