syzbot


possible deadlock in hfsplus_file_extend

Status: upstream: reported C repro on 2022/12/29 06:07
Subsystems: hfsplus
Reported-by: syzbot+7a343c73c11c99d582b3@syzkaller.appspotmail.com
First crash: 484d, last: 418d
Similar bugs (4)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-6.1 | possible deadlock in hfsplus_file_extend (origin:upstream) | C | | | 1465 | 10m | 410d | 0/3 | upstream: reported C repro on 2023/03/13 14:24
linux-5.15 | possible deadlock in hfsplus_file_extend (missing-backport, origin:lts-only) | C | | | 1471 | 1h35m | 412d | 0/3 | upstream: reported C repro on 2023/03/11 17:12
linux-4.19 | possible deadlock in hfsplus_file_extend (hfsplus) | C | | | 258 | 417d | 517d | 0/1 | upstream: reported C repro on 2022/11/26 10:00
upstream | possible deadlock in hfsplus_file_extend (hfs) | C | error | | 17471 | now | 517d | 0/26 | upstream: reported C repro on 2022/11/26 08:07
Fix bisection attempts (2)
Created | Duration | User | Patch | Repo | Result
2023/03/05 07:43 | 31m | bisect fix | | linux-4.14.y | job log (0), log
2023/02/03 06:28 | 34m | bisect fix | | linux-4.14.y | job log (0), log

Sample crash report:
WARNING: the mand mount option is being deprecated and
         will be removed in v5.15!
======================================================
============================================
WARNING: possible recursive locking detected
4.14.302-syzkaller #0 Not tainted
--------------------------------------------
syz-executor729/7983 is trying to acquire lock:
 (&HFSPLUS_I(inode)->extents_lock){+.+.}, at: [<ffffffff81d2d0a8>] hfsplus_file_extend+0x188/0xef0 fs/hfsplus/extents.c:452

but task is already holding lock:
 (&HFSPLUS_I(inode)->extents_lock){+.+.}, at: [<ffffffff81d2d0a8>] hfsplus_file_extend+0x188/0xef0 fs/hfsplus/extents.c:452

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&HFSPLUS_I(inode)->extents_lock);
  lock(&HFSPLUS_I(inode)->extents_lock);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

4 locks held by syz-executor729/7983:
 #0:  (sb_writers#10){.+.+}, at: [<ffffffff81867ecb>] sb_start_write include/linux/fs.h:1551 [inline]
 #0:  (sb_writers#10){.+.+}, at: [<ffffffff81867ecb>] do_sys_ftruncate.constprop.0+0x1fb/0x480 fs/open.c:200
 #1:  (&sb->s_type->i_mutex_key#17){+.+.}, at: [<ffffffff818674b0>] inode_lock include/linux/fs.h:719 [inline]
 #1:  (&sb->s_type->i_mutex_key#17){+.+.}, at: [<ffffffff818674b0>] do_truncate+0xf0/0x1a0 fs/open.c:61
 #2:  (&HFSPLUS_I(inode)->extents_lock){+.+.}, at: [<ffffffff81d2d0a8>] hfsplus_file_extend+0x188/0xef0 fs/hfsplus/extents.c:452
 #3:  (&tree->tree_lock/1){+.+.}, at: [<ffffffff81d40441>] hfsplus_find_init+0x161/0x220 fs/hfsplus/bfind.c:33

stack backtrace:
CPU: 1 PID: 7983 Comm: syz-executor729 Not tainted 4.14.302-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Call Trace:
 __dump_stack lib/dump_stack.c:17 [inline]
 dump_stack+0x1b2/0x281 lib/dump_stack.c:58
 print_deadlock_bug kernel/locking/lockdep.c:1800 [inline]
 check_deadlock kernel/locking/lockdep.c:1847 [inline]
 validate_chain kernel/locking/lockdep.c:2448 [inline]
 __lock_acquire.cold+0x180/0x97c kernel/locking/lockdep.c:3491
 lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3998
 __mutex_lock_common kernel/locking/mutex.c:756 [inline]
 __mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
 hfsplus_file_extend+0x188/0xef0 fs/hfsplus/extents.c:452
 hfsplus_bmap_reserve+0x26e/0x410 fs/hfsplus/btree.c:357
 __hfsplus_ext_write_extent+0x415/0x560 fs/hfsplus/extents.c:104
 __hfsplus_ext_cache_extent fs/hfsplus/extents.c:186 [inline]
 hfsplus_ext_read_extent+0x81a/0x9e0 fs/hfsplus/extents.c:218
 hfsplus_file_extend+0x616/0xef0 fs/hfsplus/extents.c:456
 hfsplus_get_block+0x15b/0x820 fs/hfsplus/extents.c:245
 __block_write_begin_int+0x35c/0x11d0 fs/buffer.c:2038
 __block_write_begin fs/buffer.c:2088 [inline]
 block_write_begin+0x58/0x270 fs/buffer.c:2147
 cont_write_begin+0x4a3/0x740 fs/buffer.c:2497
 hfsplus_write_begin+0x87/0x130 fs/hfsplus/inode.c:53
 cont_expand_zero fs/buffer.c:2424 [inline]
 cont_write_begin+0x296/0x740 fs/buffer.c:2487
 hfsplus_write_begin+0x87/0x130 fs/hfsplus/inode.c:53
 generic_cont_expand_simple+0xe1/0x130 fs/buffer.c:2388
 hfsplus_setattr+0x139/0x310 fs/hfsplus/inode.c:258
 notify_change+0x56b/0xd10 fs/attr.c:315
 do_truncate+0xff/0x1a0 fs/open.c:63
 do_sys_ftruncate.constprop.0+0x3a3/0x480 fs/open.c:205
 do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
 entry_SYSCALL_64_after_hwframe+0x5e/0xd3
RIP: 0033:0x7f273fdce799
RSP: 002b:00007ffd44f0db68 EFLAGS: 00000246 ORIG_RAX: 000000000000004d
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f273fdce799
RDX: 0000000000000000 RSI: 0000000002007ffb RDI: 0000000000000004
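
Reading the trace bottom-up: ftruncate() reaches hfsplus_file_extend() for the file being truncated (fs/hfsplus/extents.c:456) with that inode's extents_lock already held (lock #2 above); writing back the cached extent record takes it through __hfsplus_ext_write_extent() into hfsplus_bmap_reserve(), and reserving B-tree nodes calls hfsplus_file_extend() again (fs/hfsplus/extents.c:452), this time apparently for the B-tree's backing inode. Both extents_lock mutexes belong to the same lock class, so lockdep reports possible recursive locking; "missing lock nesting notation" is its hint that the nesting may be intentional but unannotated. The sketch below is a minimal, self-contained userspace model of that shape only; the struct and function names are simplified stand-ins for the hfsplus code, not the actual kernel source.

    /*
     * Userspace sketch of the re-entry path in the report above.
     * "hfsplus_inode" and "hfs_btree" are simplified stand-ins, not the
     * kernel's definitions; the two pthread mutexes model the two
     * HFSPLUS_I(inode)->extents_lock instances that share one lockdep class.
     */
    #include <pthread.h>
    #include <stdio.h>

    struct hfsplus_inode {
        const char *name;
        pthread_mutex_t extents_lock;   /* one lock class in the kernel */
    };

    struct hfs_btree {
        struct hfsplus_inode *inode;    /* backing metadata inode */
    };

    static struct hfsplus_inode tree_inode = {
        .name = "extents B-tree backing inode",
        .extents_lock = PTHREAD_MUTEX_INITIALIZER,
    };
    static struct hfs_btree extents_tree = { .inode = &tree_inode };

    static void file_extend(struct hfsplus_inode *hip, int depth);

    /* Models hfsplus_bmap_reserve(): growing the tree extends its inode. */
    static void bmap_reserve(struct hfs_btree *tree, int depth)
    {
        file_extend(tree->inode, depth);
    }

    /*
     * Models hfsplus_file_extend(): takes extents_lock and, while holding it,
     * may end up reserving B-tree nodes, re-entering file_extend() for the
     * tree's backing inode (a different inode, same lock class).
     */
    static void file_extend(struct hfsplus_inode *hip, int depth)
    {
        pthread_mutex_lock(&hip->extents_lock);
        printf("lock(%s->extents_lock)\n", hip->name);

        if (depth == 0)                 /* recurse once, like the report */
            bmap_reserve(&extents_tree, depth + 1);

        pthread_mutex_unlock(&hip->extents_lock);
        printf("unlock(%s->extents_lock)\n", hip->name);
    }

    int main(void)
    {
        struct hfsplus_inode file = {
            .name = "file being truncated",
            .extents_lock = PTHREAD_MUTEX_INITIALIZER,
        };

        /* ftruncate() -> hfsplus_setattr() -> ... -> hfsplus_file_extend() */
        file_extend(&file, 0);
        return 0;
    }

Compile with gcc -pthread. In userspace the nested acquisition of two distinct mutexes is harmless, which is the point: the kernel-side issue is that both extents_lock instances live in one lockdep class, so either the nesting needs an annotation (for example mutex_lock_nested() with a distinct subclass) or the locking needs restructuring; which approach the eventual fix takes is not stated in this report.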

Crashes (3):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2022/12/29 06:07 | linux-4.14.y | c4215ee4771b | 44712fbc | .config | console log | report | syz | C | | disk image, vmlinux, kernel image, mounted in repro | ci2-linux-4-14 | possible deadlock in hfsplus_file_extend
2023/01/03 10:57 | linux-4.14.y | c4215ee4771b | f0036e18 | .config | console log | report | | | info | disk image, vmlinux, kernel image | ci2-linux-4-14 | possible deadlock in hfsplus_file_extend
2023/01/02 12:30 | linux-4.14.y | c4215ee4771b | ab32d508 | .config | console log | report | | | info | disk image, vmlinux, kernel image | ci2-linux-4-14 | possible deadlock in hfsplus_file_extend
* Struck through repros no longer work on HEAD.