syzbot


possible deadlock in hfsplus_find_init

Status: upstream: reported C repro on 2022/12/27 19:37
Subsystems: hfsplus
Reported-by: syzbot+0a5f47bad7259db05d4d@syzkaller.appspotmail.com
First crash: 456d, last: 388d
Similar bugs (4)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-4.19 | possible deadlock in hfsplus_find_init (hfsplus) | C | error | | 9 | 442d | 480d | 0/1 | upstream: reported C repro on 2022/12/03 13:19
linux-6.1 | possible deadlock in hfsplus_find_init (origin:upstream) | C | | | 50 | 5d12h | 378d | 0/3 | upstream: reported C repro on 2023/03/15 11:24
linux-5.15 | possible deadlock in hfsplus_find_init (origin:upstream) | C | | | 28 | 27d | 356d | 0/3 | upstream: reported C repro on 2023/04/06 17:26
upstream | possible deadlock in hfsplus_find_init (hfs) | C | error | error | 378 | 5h55m | 462d | 0/26 | upstream: reported C repro on 2022/12/22 07:31
Fix bisection attempts (1)
Created | Duration | User | Patch | Repo | Result
2023/02/17 03:50 | 27m | bisect fix | | linux-4.14.y | job log (0) log

Sample crash report:
======================================================
audit: type=1800 audit(1672362127.158:2): pid=7994 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor377" name="bus" dev="loop0" ino=25 res=0
audit: type=1800 audit(1672362127.188:3): pid=7994 uid=0 auid=4294967295 ses=4294967295 op="collect_data" cause="failed(directio)" comm="syz-executor377" name="file1" dev="loop0" ino=20 res=0
======================================================
WARNING: possible circular locking dependency detected
4.14.302-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor377/7994 is trying to acquire lock:
 (&tree->tree_lock/1){+.+.}, at: [<ffffffff81d40441>] hfsplus_find_init+0x161/0x220 fs/hfsplus/bfind.c:33

but task is already holding lock:
 (&HFSPLUS_I(inode)->extents_lock){+.+.}, at: [<ffffffff81d2e7ea>] hfsplus_file_truncate+0x1ba/0xe80 fs/hfsplus/extents.c:571

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&HFSPLUS_I(inode)->extents_lock){+.+.}:
       __mutex_lock_common kernel/locking/mutex.c:756 [inline]
       __mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
       hfsplus_file_extend+0x188/0xef0 fs/hfsplus/extents.c:452
       hfsplus_bmap_reserve+0x26e/0x410 fs/hfsplus/btree.c:357
       __hfsplus_ext_write_extent+0x415/0x560 fs/hfsplus/extents.c:104
       __hfsplus_ext_cache_extent fs/hfsplus/extents.c:186 [inline]
       hfsplus_ext_read_extent+0x81a/0x9e0 fs/hfsplus/extents.c:218
       hfsplus_file_extend+0x616/0xef0 fs/hfsplus/extents.c:456
       hfsplus_get_block+0x15b/0x820 fs/hfsplus/extents.c:245
       __block_write_begin_int+0x35c/0x11d0 fs/buffer.c:2038
       __block_write_begin fs/buffer.c:2088 [inline]
       block_write_begin+0x58/0x270 fs/buffer.c:2147
       cont_write_begin+0x4a3/0x740 fs/buffer.c:2497
       hfsplus_write_begin+0x87/0x130 fs/hfsplus/inode.c:53
       cont_expand_zero fs/buffer.c:2424 [inline]
       cont_write_begin+0x296/0x740 fs/buffer.c:2487
       hfsplus_write_begin+0x87/0x130 fs/hfsplus/inode.c:53
       generic_cont_expand_simple+0xe1/0x130 fs/buffer.c:2388
       hfsplus_setattr+0x139/0x310 fs/hfsplus/inode.c:258
       notify_change+0x56b/0xd10 fs/attr.c:315
       do_truncate+0xff/0x1a0 fs/open.c:63
       do_sys_ftruncate.constprop.0+0x3a3/0x480 fs/open.c:205
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x5e/0xd3

-> #0 (&tree->tree_lock/1){+.+.}:
       lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3998
       __mutex_lock_common kernel/locking/mutex.c:756 [inline]
       __mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
       hfsplus_find_init+0x161/0x220 fs/hfsplus/bfind.c:33
       hfsplus_file_truncate+0x25b/0xe80 fs/hfsplus/extents.c:577
       hfsplus_setattr+0x182/0x310 fs/hfsplus/inode.c:264
       notify_change+0x56b/0xd10 fs/attr.c:315
       do_truncate+0xff/0x1a0 fs/open.c:63
       do_sys_ftruncate.constprop.0+0x3a3/0x480 fs/open.c:205
       do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
       entry_SYSCALL_64_after_hwframe+0x5e/0xd3

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&HFSPLUS_I(inode)->extents_lock);
                               lock(&tree->tree_lock/1);
                               lock(&HFSPLUS_I(inode)->extents_lock);
  lock(&tree->tree_lock/1);

 *** DEADLOCK ***

3 locks held by syz-executor377/7994:
 #0:  (sb_writers#10){.+.+}, at: [<ffffffff81867ecb>] sb_start_write include/linux/fs.h:1551 [inline]
 #0:  (sb_writers#10){.+.+}, at: [<ffffffff81867ecb>] do_sys_ftruncate.constprop.0+0x1fb/0x480 fs/open.c:200
 #1:  (&sb->s_type->i_mutex_key#17){+.+.}, at: [<ffffffff818674b0>] inode_lock include/linux/fs.h:719 [inline]
 #1:  (&sb->s_type->i_mutex_key#17){+.+.}, at: [<ffffffff818674b0>] do_truncate+0xf0/0x1a0 fs/open.c:61
 #2:  (&HFSPLUS_I(inode)->extents_lock){+.+.}, at: [<ffffffff81d2e7ea>] hfsplus_file_truncate+0x1ba/0xe80 fs/hfsplus/extents.c:571

stack backtrace:
CPU: 1 PID: 7994 Comm: syz-executor377 Not tainted 4.14.302-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Call Trace:
 __dump_stack lib/dump_stack.c:17 [inline]
 dump_stack+0x1b2/0x281 lib/dump_stack.c:58
 print_circular_bug.constprop.0.cold+0x2d7/0x41e kernel/locking/lockdep.c:1258
 check_prev_add kernel/locking/lockdep.c:1905 [inline]
 check_prevs_add kernel/locking/lockdep.c:2022 [inline]
 validate_chain kernel/locking/lockdep.c:2464 [inline]
 __lock_acquire+0x2e0e/0x3f20 kernel/locking/lockdep.c:3491
 lock_acquire+0x170/0x3f0 kernel/locking/lockdep.c:3998
 __mutex_lock_common kernel/locking/mutex.c:756 [inline]
 __mutex_lock+0xc4/0x1310 kernel/locking/mutex.c:893
 hfsplus_find_init+0x161/0x220 fs/hfsplus/bfind.c:33
 hfsplus_file_truncate+0x25b/0xe80 fs/hfsplus/extents.c:577
 hfsplus_setattr+0x182/0x310 fs/hfsplus/inode.c:264
 notify_change+0x56b/0xd10 fs/attr.c:315
 do_truncate+0xff/0x1a0 fs/open.c:63
 do_sys_ftruncate.constprop.0+0x3a3/0x480 fs/open.c:205
 do_syscall_64+0x1d5/0x640 arch/x86/entry/common.c:292
 entry_SYSCALL_64_after_hwframe+0x5e/0xd3
RIP: 0033:0x7f87deec67e9
RSP: 002b:00007ffd1fa83b08 EFLAGS: 00000246 ORIG_RAX: 000000000000004d
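
The lockdep report above boils down to an ABBA lock-order inversion: the truncate path (hfsplus_file_truncate -> hfsplus_find_init) holds HFSPLUS_I(inode)->extents_lock and then takes the B-tree's tree_lock, while the extent-write path recorded as dependency #1 (hfsplus_ext_read_extent -> __hfsplus_ext_write_extent -> hfsplus_bmap_reserve -> hfsplus_file_extend) takes an extents_lock while the B-tree's tree_lock is already held. The "/1" suffix on tree_lock only denotes a lockdep nesting subclass. The user-space sketch below is a minimal illustration of that inversion, not the kernel code: the mutex names and comments merely mirror the locks, functions, and source lines named in the stack traces.

/*
 * Minimal user-space sketch of the lock inversion reported above.
 * NOT the kernel code: the mutexes and comments only mirror the
 * locks and functions named in the lockdep stack traces.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t extents_lock = PTHREAD_MUTEX_INITIALIZER; /* HFSPLUS_I(inode)->extents_lock */
static pthread_mutex_t tree_lock    = PTHREAD_MUTEX_INITIALIZER; /* tree->tree_lock */

/* Dependency #0: hfsplus_file_truncate() holds extents_lock, then
 * hfsplus_find_init() takes the B-tree lock (fs/hfsplus/bfind.c:33). */
static void *truncate_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&extents_lock);
    pthread_mutex_lock(&tree_lock);
    /* ... shrink the file's extent records ... */
    pthread_mutex_unlock(&tree_lock);
    pthread_mutex_unlock(&extents_lock);
    return NULL;
}

/* Dependency #1: a B-tree operation already holds tree_lock when
 * hfsplus_bmap_reserve() -> hfsplus_file_extend() takes an extents_lock
 * (fs/hfsplus/extents.c:452). */
static void *extend_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&tree_lock);
    pthread_mutex_lock(&extents_lock);
    /* ... grow the B-tree's backing file ... */
    pthread_mutex_unlock(&extents_lock);
    pthread_mutex_unlock(&tree_lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    /* Run both paths concurrently; with unlucky timing each thread ends
     * up holding one mutex and waiting forever for the other. */
    pthread_create(&a, NULL, truncate_path, NULL);
    pthread_create(&b, NULL, extend_path, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    puts("no deadlock on this run");
    return 0;
}

Built with "cc -pthread", this usually exits cleanly, but with unlucky scheduling each thread ends up holding one mutex while waiting for the other and the process hangs; lockdep flags exactly this hazard in the kernel before it is ever hit at runtime.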

Crashes (4):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2022/12/30 01:03 | linux-4.14.y | c4215ee4771b | 44712fbc | .config | console log | report | syz | C | | [disk image] [vmlinux] [kernel image] [mounted in repro] | ci2-linux-4-14 | possible deadlock in hfsplus_find_init
2023/03/06 00:17 | linux-4.14.y | 7878a41b6cc1 | f8902b57 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-4-14 | possible deadlock in hfsplus_find_init
2023/01/18 03:49 | linux-4.14.y | c4215ee4771b | 42660d9e | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-4-14 | possible deadlock in hfsplus_find_init
2022/12/27 19:37 | linux-4.14.y | c4215ee4771b | 44712fbc | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-4-14 | possible deadlock in hfsplus_find_init
* Struck through repros no longer work on HEAD.