syzbot


possible deadlock in hfsplus_find_init

Status: upstream: reported C repro on 2022/12/03 13:19
Subsystems: hfsplus
Reported-by: syzbot+777d200bb1b1fd2b12f7@syzkaller.appspotmail.com
First crash: 561d, last: 523d
Fix bisection: failed (error log, bisect log)
  
Similar bugs (4)
| Kernel     | Title                                  | Labels          | Repro | Cause bisect | Fix bisect | Count | Last  | Reported | Patched | Status                                         |
|------------|----------------------------------------|-----------------|-------|--------------|------------|-------|-------|----------|---------|------------------------------------------------|
| linux-4.14 | possible deadlock in hfsplus_find_init | hfsplus         | C     |              |            | 4     | 468d  | 537d     | 0/1     | upstream: reported C repro on 2022/12/27 19:37 |
| linux-6.1  | possible deadlock in hfsplus_find_init | origin:upstream | C     |              |            | 73    | 1d10h | 459d     | 0/3     | upstream: reported C repro on 2023/03/15 11:24 |
| linux-5.15 | possible deadlock in hfsplus_find_init | origin:upstream | C     |              |            | 44    | 9d01h | 437d     | 0/3     | upstream: reported C repro on 2023/04/06 17:26 |
| upstream   | possible deadlock in hfsplus_find_init | hfs             | C     | error        | error      | 455   | 6d10h | 542d     | 0/27    | upstream: reported C repro on 2022/12/22 07:31 |

Sample crash report:
         will be removed in v5.15!
======================================================
audit: type=1800 audit(1672578591.604:2): pid=8106 uid=0 auid=4294967295 ses=4294967295 subj==unconfined op=collect_data cause=failed(directio) comm="syz-executor950" name="bus" dev="loop0" ino=25 res=0
======================================================
audit: type=1800 audit(1672578591.634:3): pid=8106 uid=0 auid=4294967295 ses=4294967295 subj==unconfined op=collect_data cause=failed(directio) comm="syz-executor950" name="file1" dev="loop0" ino=20 res=0
WARNING: possible circular locking dependency detected
4.19.211-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor950/8106 is trying to acquire lock:
0000000014babfec (&tree->tree_lock/1){+.+.}, at: hfsplus_find_init+0x170/0x220 fs/hfsplus/bfind.c:33

but task is already holding lock:
00000000a7fe7b0f (&HFSPLUS_I(inode)->extents_lock){+.+.}, at: hfsplus_file_truncate+0x1e2/0x1040 fs/hfsplus/extents.c:576

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&HFSPLUS_I(inode)->extents_lock){+.+.}:
       hfsplus_file_extend+0x1bb/0xf40 fs/hfsplus/extents.c:457
       hfsplus_bmap_reserve+0x298/0x440 fs/hfsplus/btree.c:357
       __hfsplus_ext_write_extent+0x45b/0x5a0 fs/hfsplus/extents.c:104
       __hfsplus_ext_cache_extent fs/hfsplus/extents.c:186 [inline]
       hfsplus_ext_read_extent+0x910/0xab0 fs/hfsplus/extents.c:218
       hfsplus_file_extend+0x672/0xf40 fs/hfsplus/extents.c:461
       hfsplus_get_block+0x196/0x960 fs/hfsplus/extents.c:245
       __block_write_begin_int+0x46c/0x17b0 fs/buffer.c:1978
       __block_write_begin fs/buffer.c:2028 [inline]
       block_write_begin+0x58/0x2e0 fs/buffer.c:2087
       cont_write_begin+0x55a/0x820 fs/buffer.c:2440
       hfsplus_write_begin+0x87/0x150 fs/hfsplus/inode.c:52
       cont_expand_zero fs/buffer.c:2367 [inline]
       cont_write_begin+0x2ee/0x820 fs/buffer.c:2430
       hfsplus_write_begin+0x87/0x150 fs/hfsplus/inode.c:52
       generic_cont_expand_simple+0x106/0x170 fs/buffer.c:2331
       hfsplus_setattr+0x18b/0x310 fs/hfsplus/inode.c:257
       notify_change+0x70b/0xfc0 fs/attr.c:334
       do_truncate+0x134/0x1f0 fs/open.c:63
       do_sys_ftruncate+0x492/0x560 fs/open.c:194
       do_syscall_64+0xf9/0x620 arch/x86/entry/common.c:293
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #0 (&tree->tree_lock/1){+.+.}:
       __mutex_lock_common kernel/locking/mutex.c:937 [inline]
       __mutex_lock+0xd7/0x1190 kernel/locking/mutex.c:1078
       hfsplus_find_init+0x170/0x220 fs/hfsplus/bfind.c:33
       hfsplus_file_truncate+0x297/0x1040 fs/hfsplus/extents.c:582
       hfsplus_setattr+0x1e7/0x310 fs/hfsplus/inode.c:263
       notify_change+0x70b/0xfc0 fs/attr.c:334
       do_truncate+0x134/0x1f0 fs/open.c:63
       do_sys_ftruncate+0x492/0x560 fs/open.c:194
       do_syscall_64+0xf9/0x620 arch/x86/entry/common.c:293
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&HFSPLUS_I(inode)->extents_lock);
                               lock(&tree->tree_lock/1);
                               lock(&HFSPLUS_I(inode)->extents_lock);
  lock(&tree->tree_lock/1);

 *** DEADLOCK ***

3 locks held by syz-executor950/8106:
 #0: 000000006bb158e5 (sb_writers#11){.+.+}, at: sb_start_write include/linux/fs.h:1579 [inline]
 #0: 000000006bb158e5 (sb_writers#11){.+.+}, at: do_sys_ftruncate+0x297/0x560 fs/open.c:189
 #1: 00000000a653ec8e (&sb->s_type->i_mutex_key#17){+.+.}, at: inode_lock include/linux/fs.h:748 [inline]
 #1: 00000000a653ec8e (&sb->s_type->i_mutex_key#17){+.+.}, at: do_truncate+0x125/0x1f0 fs/open.c:61
 #2: 00000000a7fe7b0f (&HFSPLUS_I(inode)->extents_lock){+.+.}, at: hfsplus_file_truncate+0x1e2/0x1040 fs/hfsplus/extents.c:576

stack backtrace:
CPU: 0 PID: 8106 Comm: syz-executor950 Not tainted 4.19.211-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x1fc/0x2ef lib/dump_stack.c:118
 print_circular_bug.constprop.0.cold+0x2d7/0x41e kernel/locking/lockdep.c:1222
 check_prev_add kernel/locking/lockdep.c:1866 [inline]
 check_prevs_add kernel/locking/lockdep.c:1979 [inline]
 validate_chain kernel/locking/lockdep.c:2420 [inline]
 __lock_acquire+0x30c9/0x3ff0 kernel/locking/lockdep.c:3416
 lock_acquire+0x170/0x3c0 kernel/locking/lockdep.c:3908
 __mutex_lock_common kernel/locking/mutex.c:937 [inline]
 __mutex_lock+0xd7/0x1190 kernel/locking/mutex.c:1078
 hfsplus_find_init+0x170/0x220 fs/hfsplus/bfind.c:33
 hfsplus_file_truncate+0x297/0x1040 fs/hfsplus/extents.c:582
 hfsplus_setattr+0x1e7/0x310 fs/hfsplus/inode.c:263
 notify_change+0x70b/0xfc0 fs/attr.c:334
 do_truncate+0x134/0x1f0 fs/open.c:63
 do_sys_ftruncate+0x492/0x560 fs/open.c:194
 do_syscall_64+0xf9/0x620 arch/x86/entry/common.c:293
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x7fce51aec7e9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 51 14 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 c0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffed9e28f78 EFLAGS: 00000246 ORIG_RAX: 000000000000004d
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fce51aec7e9
RDX: 00007fce51aec7e9 RSI: 0000000000000000 RDI: 0000000000000005
RBP: 00007fce51aac080 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007fce51aac110
R13: 0000000000000000 R14: 00000000000000

Crashes (9):
| Time             | Kernel       | Commit       | Syzkaller | Config  | Log         | Report | Syz repro | C repro | VM info | Assets                                    | Manager        | Title                                  |
|------------------|--------------|--------------|-----------|---------|-------------|--------|-----------|---------|---------|-------------------------------------------|----------------|----------------------------------------|
| 2023/01/01 13:11 | linux-4.19.y | 3f8a27f9e27b | ab32d508  | .config | console log | report | syz       | C       |         | [disk image] [vmlinux] [mounted in repro] | ci2-linux-4-19 | possible deadlock in hfsplus_find_init |
| 2023/01/10 22:29 | linux-4.19.y | 3f8a27f9e27b | 48bc529a  | .config | console log | report |           |         | info    | [disk image] [vmlinux]                    | ci2-linux-4-19 | possible deadlock in hfsplus_find_init |
| 2023/01/10 15:04 | linux-4.19.y | 3f8a27f9e27b | 48bc529a  | .config | console log | report |           |         | info    | [disk image] [vmlinux]                    | ci2-linux-4-19 | possible deadlock in hfsplus_find_init |
| 2023/01/07 07:34 | linux-4.19.y | 3f8a27f9e27b | 1dac8c7a  | .config | console log | report |           |         | info    | [disk image] [vmlinux]                    | ci2-linux-4-19 | possible deadlock in hfsplus_find_init |
| 2022/12/30 08:50 | linux-4.19.y | 3f8a27f9e27b | 44712fbc  | .config | console log | report |           |         | info    | [disk image] [vmlinux]                    | ci2-linux-4-19 | possible deadlock in hfsplus_find_init |
| 2022/12/26 20:03 | linux-4.19.y | 3f8a27f9e27b | 9da18ae8  | .config | console log | report |           |         | info    | [disk image] [vmlinux]                    | ci2-linux-4-19 | possible deadlock in hfsplus_find_init |
| 2022/12/26 14:55 | linux-4.19.y | 3f8a27f9e27b | 9da18ae8  | .config | console log | report |           |         | info    | [disk image] [vmlinux]                    | ci2-linux-4-19 | possible deadlock in hfsplus_find_init |
| 2022/12/15 22:00 | linux-4.19.y | 3f8a27f9e27b | 6f9c033e  | .config | console log | report |           |         | info    | [disk image] [vmlinux]                    | ci2-linux-4-19 | possible deadlock in hfsplus_find_init |
| 2022/12/03 13:19 | linux-4.19.y | 3f8a27f9e27b | e080de16  | .config | console log | report |           |         | info    | [disk image] [vmlinux]                    | ci2-linux-4-19 | possible deadlock in hfsplus_find_init |
* Struck through repros no longer work on HEAD.