syzbot


possible deadlock in __submit_bio

Status: upstream: reported C repro on 2024/11/03 22:19
Subsystems: block
Reported-by: syzbot+949ae54e95a2fab4cbb4@syzkaller.appspotmail.com
First crash: 7d03h, last: 1d14h
Cause bisection: introduced by (bisect log):
commit f1be1788a32e8fa63416ad4518bbd1a85a825c9d
Author: Ming Lei <ming.lei@redhat.com>
Date: Fri Oct 25 00:37:20 2024 +0000

  block: model freeze & enter queue as lock for supporting lockdep

Crash: possible deadlock in __submit_bio (log)
Repro: C syz .config
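
The bisected commit makes queue freezing visible to lockdep: entering a request queue for I/O (bio_queue_enter()) is modeled as a shared acquire of q_usage_counter, and freezing the queue (blk_mq_freeze_queue(), as loop_set_status() does) as an exclusive acquire. That is why the report below names a lock &q->q_usage_counter(io) even though the underlying kernel object is a percpu refcount, not a mutex. A minimal userspace sketch of that modeling, with a pthread rwlock standing in for the refcount (all names illustrative, not kernel API):

/*
 * Sketch of the commit's idea: treat "enter queue" as a shared lock
 * acquire and "freeze queue" as an exclusive one, so an ordering
 * checker (lockdep in the kernel) sees freeze/enter in dependency
 * chains. Build: gcc -pthread model.c
 */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t q_usage = PTHREAD_RWLOCK_INITIALIZER;

static void queue_enter(void)    { pthread_rwlock_rdlock(&q_usage); } /* ~bio_queue_enter() */
static void queue_exit(void)     { pthread_rwlock_unlock(&q_usage); }
static void queue_freeze(void)   { pthread_rwlock_wrlock(&q_usage); } /* ~blk_mq_freeze_queue(): waits out all shared holders */
static void queue_unfreeze(void) { pthread_rwlock_unlock(&q_usage); }

int main(void)
{
    queue_enter();                  /* I/O submission holds q_usage shared */
    printf("bio submitted with queue entered\n");
    queue_exit();

    queue_freeze();                 /* exclusive: no new I/O can enter */
    printf("queue frozen\n");
    queue_unfreeze();
    return 0;
}

Once freeze and enter participate in the dependency graph this way, lockdep can flag cycles like the one in the sample report without a real deadlock ever having to occur at runtime.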
  
Discussions (1)
  Title: [syzbot] [block?] possible deadlock in __submit_bio
  Replies (including bot): 3 (5)
  Last reply: 2024/11/05 21:23
Last patch testing requests (1)
  Created: 2024/11/05 11:35
  Duration: 26m
  User: hdanton@sina.com
  Patch: patch
  Repo: https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git f9f24ca362a4
  Result: OK (log)

Sample crash report:
loop0: detected capacity change from 0 to 1024
======================================================
WARNING: possible circular locking dependency detected
6.12.0-rc5-next-20241104-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor699/5942 is trying to acquire lock:
ffff888025049db8 (&q->q_usage_counter(io)#17){++++}-{0:0}, at: __submit_bio+0x2c2/0x560 block/blk-core.c:629

but task is already holding lock:
ffff888022fdc0b0 (&tree->tree_lock){+.+.}-{4:4}, at: hfsplus_find_init+0x14a/0x1c0 fs/hfsplus/bfind.c:28

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&tree->tree_lock){+.+.}-{4:4}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __mutex_lock_common kernel/locking/mutex.c:585 [inline]
       __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
       hfsplus_find_init+0x14a/0x1c0 fs/hfsplus/bfind.c:28
       hfsplus_cat_write_inode+0x1df/0x1070 fs/hfsplus/inode.c:589
       write_inode fs/fs-writeback.c:1501 [inline]
       __writeback_single_inode+0x711/0x10d0 fs/fs-writeback.c:1721
       writeback_single_inode+0x1f3/0x660 fs/fs-writeback.c:1777
       sync_inode_metadata+0xc4/0x120 fs/fs-writeback.c:2847
       hfsplus_file_fsync+0xf5/0x4d0 fs/hfsplus/inode.c:316
       __loop_update_dio+0x1a4/0x500 drivers/block/loop.c:204
       loop_set_status+0x62b/0x8f0 drivers/block/loop.c:1289
       lo_ioctl+0xcbc/0x1f50
       blkdev_ioctl+0x57d/0x6a0 block/ioctl.c:693
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:907 [inline]
       __se_sys_ioctl+0xf9/0x170 fs/ioctl.c:893
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&sb->s_type->i_mutex_key#15){+.+.}-{4:4}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       down_write+0x99/0x220 kernel/locking/rwsem.c:1577
       inode_lock include/linux/fs.h:817 [inline]
       hfsplus_file_fsync+0xe8/0x4d0 fs/hfsplus/inode.c:311
       __loop_update_dio+0x1a4/0x500 drivers/block/loop.c:204
       loop_set_status+0x62b/0x8f0 drivers/block/loop.c:1289
       lo_ioctl+0xcbc/0x1f50
       blkdev_ioctl+0x57d/0x6a0 block/ioctl.c:693
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:907 [inline]
       __se_sys_ioctl+0xf9/0x170 fs/ioctl.c:893
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&q->q_usage_counter(io)#17){++++}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3161 [inline]
       check_prevs_add kernel/locking/lockdep.c:3280 [inline]
       validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
       __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       bio_queue_enter block/blk.h:75 [inline]
       blk_mq_submit_bio+0x1510/0x2490 block/blk-mq.c:3069
       __submit_bio+0x2c2/0x560 block/blk-core.c:629
       __submit_bio_noacct_mq block/blk-core.c:710 [inline]
       submit_bio_noacct_nocheck+0x4d3/0xe30 block/blk-core.c:739
       submit_bh fs/buffer.c:2819 [inline]
       block_read_full_folio+0x93b/0xcd0 fs/buffer.c:2446
       filemap_read_folio+0x14b/0x630 mm/filemap.c:2366
       do_read_cache_folio+0x3f5/0x850 mm/filemap.c:3826
       do_read_cache_page+0x30/0x200 mm/filemap.c:3892
       read_mapping_page include/linux/pagemap.h:1005 [inline]
       __hfs_bnode_create+0x487/0x770 fs/hfsplus/bnode.c:440
       hfsplus_bnode_find+0x237/0x10c0 fs/hfsplus/bnode.c:486
       hfsplus_brec_find+0x183/0x570 fs/hfsplus/bfind.c:172
       hfsplus_brec_read+0x2b/0x110 fs/hfsplus/bfind.c:211
       hfsplus_find_cat+0x17f/0x5d0 fs/hfsplus/catalog.c:202
       hfsplus_iget+0x483/0x680 fs/hfsplus/super.c:83
       hfsplus_fill_super+0xc4d/0x1be0 fs/hfsplus/super.c:504
       get_tree_bdev_flags+0x48c/0x5c0 fs/super.c:1636
       vfs_get_tree+0x90/0x2b0 fs/super.c:1814
       do_new_mount+0x2be/0xb40 fs/namespace.c:3507
       do_mount fs/namespace.c:3847 [inline]
       __do_sys_mount fs/namespace.c:4057 [inline]
       __se_sys_mount+0x2d6/0x3c0 fs/namespace.c:4034
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  &q->q_usage_counter(io)#17 --> &sb->s_type->i_mutex_key#15 --> &tree->tree_lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&tree->tree_lock);
                               lock(&sb->s_type->i_mutex_key#15);
                               lock(&tree->tree_lock);
  rlock(&q->q_usage_counter(io)#17);

 *** DEADLOCK ***

2 locks held by syz-executor699/5942:
 #0: ffff8880771b80e0 (&type->s_umount_key#41/1){+.+.}-{4:4}, at: alloc_super+0x221/0x9d0 fs/super.c:344
 #1: ffff888022fdc0b0 (&tree->tree_lock){+.+.}-{4:4}, at: hfsplus_find_init+0x14a/0x1c0 fs/hfsplus/bfind.c:28

stack backtrace:
CPU: 1 UID: 0 PID: 5942 Comm: syz-executor699 Not tainted 6.12.0-rc5-next-20241104-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_circular_bug+0x13a/0x1b0 kernel/locking/lockdep.c:2074
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2206
 check_prev_add kernel/locking/lockdep.c:3161 [inline]
 check_prevs_add kernel/locking/lockdep.c:3280 [inline]
 validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
 __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
 bio_queue_enter block/blk.h:75 [inline]
 blk_mq_submit_bio+0x1510/0x2490 block/blk-mq.c:3069
 __submit_bio+0x2c2/0x560 block/blk-core.c:629
 __submit_bio_noacct_mq block/blk-core.c:710 [inline]
 submit_bio_noacct_nocheck+0x4d3/0xe30 block/blk-core.c:739
 submit_bh fs/buffer.c:2819 [inline]
 block_read_full_folio+0x93b/0xcd0 fs/buffer.c:2446
 filemap_read_folio+0x14b/0x630 mm/filemap.c:2366
 do_read_cache_folio+0x3f5/0x850 mm/filemap.c:3826
 do_read_cache_page+0x30/0x200 mm/filemap.c:3892
 read_mapping_page include/linux/pagemap.h:1005 [inline]
 __hfs_bnode_create+0x487/0x770 fs/hfsplus/bnode.c:440
 hfsplus_bnode_find+0x237/0x10c0 fs/hfsplus/bnode.c:486
 hfsplus_brec_find+0x183/0x570 fs/hfsplus/bfind.c:172
 hfsplus_brec_read+0x2b/0x110 fs/hfsplus/bfind.c:211
 hfsplus_find_cat+0x17f/0x5d0 fs/hfsplus/catalog.c:202
 hfsplus_iget+0x483/0x680 fs/hfsplus/super.c:83
 hfsplus_fill_super+0xc4d/0x1be0 fs/hfsplus/super.c:504
 get_tree_bdev_flags+0x48c/0x5c0 fs/super.c:1636
 vfs_get_tree+0x90/0x2b0 fs/super.c:1814
 do_new_mount+0x2be/0xb40 fs/namespace.c:3507
 do_mount fs/namespace.c:3847 [inline]
 __do_sys_mount fs/namespace.c:4057 [inline]
 __se_sys_mount+0x2d6/0x3c0 fs/namespace.c:4034
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f1999b93b4a
Code: d8 64 89 02 48 c7 c0 ff ff ff ff eb a6 e8 ee 08 00 00 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 49 89 ca b8 a5 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ffebe1a8bb8 EFLAGS: 00000286 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007ffebe1a8bd0 RCX: 00007f1999b93b4a
RDX: 0000000020000140 RSI: 0000000020000040 RDI: 00007ffebe1a8bd0
RBP: 0000000000000004 R08: 00007ffebe1a8c10 R09: 000000000000069d
R10: 0000000000014018 R11: 0000000000000286 R12: 0000000000014018
R13: 00007ffebe1a8c10 R14: 0000000000000003 R15: 0000000000080000
 </TASK>
syz-executor699: attempt to access beyond end of device
loop0: rw=0, sector=86, nr_sectors = 2 limit=3
Buffer I/O error on dev loop0, logical block 43, async page read
syz-executor699: attempt to access beyond end of device
loop0: rw=0, sector=88, nr_sectors = 2 limit=3
Buffer I/O error on dev loop0, logical block 44, async page read
syz-executor699: attempt to access beyond end of device
loop0: rw=0, sector=90, nr_sectors = 2 limit=3
Buffer I/O error on dev loop0, logical block 45, async page read
syz-executor699: attempt to access beyond end of device
loop0: rw=0, sector=92, nr_sectors = 2 limit=3
Buffer I/O error on dev loop0, logical block 46, async page read
hfsplus: xattr searching failed
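
Reading the chain above: loop_set_status() freezes the loop device's queue and then, via __loop_update_dio(), calls hfsplus_file_fsync() on the backing file, which takes the inode lock and then the hfsplus btree's tree_lock; that records q_usage_counter(io) -> i_mutex -> tree_lock. The mount path holds tree_lock in hfsplus_find_init() and then reads a btree node, submitting a bio that enters the same queue: tree_lock -> q_usage_counter(io), closing the cycle. A self-contained C demo of the same ABBA inversion, with plain pthread mutexes standing in for the two end-points of the cycle (the intermediate inode lock is elided, names are illustrative, and a two-second timed lock substitutes for lockdep so the program reports the inversion instead of hanging):

/*
 * Minimal userspace reproduction of the lock inversion pattern
 * lockdep is reporting. "tree_lock" plays the hfsplus btree mutex,
 * "q_usage" plays the request queue's q_usage_counter, which the
 * bisected commit teaches lockdep to treat as a lock.
 * Build: gcc -pthread abba.c
 */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t q_usage   = PTHREAD_MUTEX_INITIALIZER;

/* Try to take @m within two seconds; report a suspected deadlock on timeout. */
static int timed_lock(pthread_mutex_t *m, const char *who, const char *name)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);
    ts.tv_sec += 2;
    int rc = pthread_mutex_timedlock(m, &ts);
    if (rc == ETIMEDOUT)
        fprintf(stderr, "%s: timed out on %s -- ABBA deadlock\n", who, name);
    return rc;
}

/* Mount path: holds tree_lock, then submits I/O (enters the queue). */
static void *mount_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&tree_lock);
    usleep(100 * 1000);                /* widen the race window */
    if (timed_lock(&q_usage, "mount", "q_usage") == 0)
        pthread_mutex_unlock(&q_usage);
    pthread_mutex_unlock(&tree_lock);
    return NULL;
}

/* loop ioctl path: freezes the queue, then fsync reaches tree_lock. */
static void *ioctl_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&q_usage);
    usleep(100 * 1000);
    if (timed_lock(&tree_lock, "ioctl", "tree_lock") == 0)
        pthread_mutex_unlock(&tree_lock);
    pthread_mutex_unlock(&q_usage);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, mount_path, NULL);
    pthread_create(&b, NULL, ioctl_path, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}

Run with both threads started together and each typically times out while holding the lock the other wants, mirroring the two CPUs in the "Possible unsafe locking scenario" table above.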

Crashes (7):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2024/11/05 06:36 linux-next 1ffec08567f4 509da429 .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root possible deadlock in __submit_bio
2024/11/03 10:32 linux-next c88416ba074a f00eed24 .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root possible deadlock in __submit_bio
2024/11/01 01:51 linux-next f9f24ca362a4 96eb609f .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root possible deadlock in __submit_bio
2024/11/05 11:34 linux-next 850f22c42f4b 509da429 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in __submit_bio
2024/11/05 09:26 linux-next 850f22c42f4b 509da429 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in __submit_bio
2024/11/01 09:29 linux-next c88416ba074a 96eb609f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in __submit_bio
2024/10/30 22:16 linux-next 86e3904dcdc7 fb888278 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in __submit_bio
* Struck through repros no longer work on HEAD.