syzbot


possible deadlock in submit_bio_noacct_nocheck

Status: upstream: reported on 2024/12/28 02:52
Subsystems: block
Reported-by: syzbot+33bb23065b02ca58a5a3@syzkaller.appspotmail.com
First crash: 53d, last: 25d
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [block?] possible deadlock in submit_bio_noacct_nocheck | 0 (1) | 2024/12/28 02:52

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.13.0-syzkaller-00603-g3d3a9c8b89d4 #0 Not tainted
------------------------------------------------------
syz.1.1759/14749 is trying to acquire lock:
ffff888142b81e00 (&q->q_usage_counter(io)#4){++++}-{0:0}, at: __submit_bio_noacct block/blk-core.c:678 [inline]
ffff888142b81e00 (&q->q_usage_counter(io)#4){++++}-{0:0}, at: submit_bio_noacct_nocheck+0x892/0xd70 block/blk-core.c:741

but task is already holding lock:
ffff888148d6abc0 (mapping.invalidate_lock#2){.+.+}-{4:4}, at: filemap_invalidate_lock_shared include/linux/fs.h:886 [inline]
ffff888148d6abc0 (mapping.invalidate_lock#2){.+.+}-{4:4}, at: page_cache_ra_unbounded+0x173/0x750 mm/readahead.c:226

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #4 (mapping.invalidate_lock#2){.+.+}-{4:4}:
       down_read+0x9a/0x330 kernel/locking/rwsem.c:1524
       filemap_invalidate_lock_shared include/linux/fs.h:886 [inline]
       filemap_fault+0x2e0/0x2820 mm/filemap.c:3323
       __do_fault+0x10a/0x490 mm/memory.c:4907
       do_read_fault mm/memory.c:5322 [inline]
       do_fault mm/memory.c:5456 [inline]
       do_pte_missing+0xebd/0x3e00 mm/memory.c:3979
       handle_pte_fault mm/memory.c:5801 [inline]
       __handle_mm_fault+0x103c/0x2a40 mm/memory.c:5944
       handle_mm_fault+0x3fa/0xaa0 mm/memory.c:6112
       faultin_page mm/gup.c:1196 [inline]
       __get_user_pages+0x8d9/0x3b50 mm/gup.c:1494
       __get_user_pages_locked mm/gup.c:1760 [inline]
       __gup_longterm_locked+0x211/0x1870 mm/gup.c:2532
       gup_fast_fallback+0x1802/0x2690 mm/gup.c:3434
       pin_user_pages_fast+0xa8/0x100 mm/gup.c:3540
       iov_iter_extract_user_pages lib/iov_iter.c:1844 [inline]
       iov_iter_extract_pages+0x3a5/0x2010 lib/iov_iter.c:1907
       __bio_iov_iter_get_pages block/bio.c:1278 [inline]
       bio_iov_iter_get_pages+0x37c/0x1100 block/bio.c:1360
       __blkdev_direct_IO block/fops.c:208 [inline]
       blkdev_direct_IO+0x1054/0x1ad0 block/fops.c:385
       blkdev_direct_write block/fops.c:652 [inline]
       blkdev_write_iter+0x6f9/0xd40 block/fops.c:719
       new_sync_write fs/read_write.c:586 [inline]
       vfs_write+0x5ae/0x1150 fs/read_write.c:679
       ksys_write+0x12b/0x250 fs/read_write.c:731
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #3 (&mm->mmap_lock){++++}-{4:4}:
       __might_fault mm/memory.c:6751 [inline]
       __might_fault+0x11b/0x190 mm/memory.c:6744
       _inline_copy_from_user include/linux/uaccess.h:162 [inline]
       _copy_from_user+0x29/0xd0 lib/usercopy.c:18
       copy_from_user include/linux/uaccess.h:212 [inline]
       __blk_trace_setup+0xa8/0x180 kernel/trace/blktrace.c:626
       blk_trace_setup+0x47/0x70 kernel/trace/blktrace.c:648
       sg_ioctl_common drivers/scsi/sg.c:1114 [inline]
       sg_ioctl+0x7a3/0x26b0 drivers/scsi/sg.c:1156
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl fs/ioctl.c:892 [inline]
       __x64_sys_ioctl+0x190/0x200 fs/ioctl.c:892
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #2 (&q->debugfs_mutex){+.+.}-{4:4}:
       __mutex_lock_common kernel/locking/mutex.c:585 [inline]
       __mutex_lock+0x19b/0xa60 kernel/locking/mutex.c:735
       blk_register_queue+0x13c/0x4f0 block/blk-sysfs.c:774
       add_disk_fwnode+0x785/0x1300 block/genhd.c:493
       add_disk include/linux/blkdev.h:753 [inline]
       brd_alloc.isra.0+0x50a/0x7c0 drivers/block/brd.c:401
       brd_init+0x12b/0x1d0 drivers/block/brd.c:481
       do_one_initcall+0x128/0x630 init/main.c:1266
       do_initcall_level init/main.c:1328 [inline]
       do_initcalls init/main.c:1344 [inline]
       do_basic_setup init/main.c:1363 [inline]
       kernel_init_freeable+0x58f/0x8b0 init/main.c:1577
       kernel_init+0x1c/0x2b0 init/main.c:1466
       ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

-> #1 (&q->sysfs_lock){+.+.}-{4:4}:
       __mutex_lock_common kernel/locking/mutex.c:585 [inline]
       __mutex_lock+0x19b/0xa60 kernel/locking/mutex.c:735
       queue_attr_store+0xe2/0x170 block/blk-sysfs.c:710
       sysfs_kf_write+0x117/0x170 fs/sysfs/file.c:139
       kernfs_fop_write_iter+0x33d/0x500 fs/kernfs/file.c:334
       iter_file_splice_write+0x90f/0x10b0 fs/splice.c:743
       do_splice_from fs/splice.c:941 [inline]
       direct_splice_actor+0x18f/0x6c0 fs/splice.c:1164
       splice_direct_to_actor+0x346/0xa40 fs/splice.c:1108
       do_splice_direct_actor fs/splice.c:1207 [inline]
       do_splice_direct+0x178/0x250 fs/splice.c:1233
       do_sendfile+0xaed/0xe30 fs/read_write.c:1363
       __do_sys_sendfile64 fs/read_write.c:1424 [inline]
       __se_sys_sendfile64 fs/read_write.c:1410 [inline]
       __x64_sys_sendfile64+0x1da/0x220 fs/read_write.c:1410
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&q->q_usage_counter(io)#4){++++}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3161 [inline]
       check_prevs_add kernel/locking/lockdep.c:3280 [inline]
       validate_chain kernel/locking/lockdep.c:3904 [inline]
       __lock_acquire+0x249e/0x3c40 kernel/locking/lockdep.c:5226
       lock_acquire.part.0+0x11b/0x380 kernel/locking/lockdep.c:5849
       bio_queue_enter block/blk.h:75 [inline]
       __submit_bio+0x49c/0x540 block/blk-core.c:630
       __submit_bio_noacct block/blk-core.c:678 [inline]
       submit_bio_noacct_nocheck+0x892/0xd70 block/blk-core.c:741
       submit_bio_noacct+0x93a/0x1e20 block/blk-core.c:868
       mpage_bio_submit_read fs/mpage.c:75 [inline]
       mpage_readahead+0x41d/0x590 fs/mpage.c:377
       read_pages+0x1a8/0xdc0 mm/readahead.c:160
       page_cache_ra_unbounded+0x3dc/0x750 mm/readahead.c:295
       do_page_cache_ra mm/readahead.c:325 [inline]
       page_cache_ra_order+0x8f2/0xc80 mm/readahead.c:524
       do_sync_mmap_readahead mm/filemap.c:3194 [inline]
       filemap_fault+0x14a5/0x2820 mm/filemap.c:3335
       __do_fault+0x10a/0x490 mm/memory.c:4907
       do_read_fault mm/memory.c:5322 [inline]
       do_fault mm/memory.c:5456 [inline]
       do_pte_missing+0xebd/0x3e00 mm/memory.c:3979
       handle_pte_fault mm/memory.c:5801 [inline]
       __handle_mm_fault+0x103c/0x2a40 mm/memory.c:5944
       handle_mm_fault+0x3fa/0xaa0 mm/memory.c:6112
       do_user_addr_fault+0x60d/0x13f0 arch/x86/mm/fault.c:1338
       handle_page_fault arch/x86/mm/fault.c:1481 [inline]
       exc_page_fault+0x5c/0xc0 arch/x86/mm/fault.c:1539
       asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623

other info that might help us debug this:

Chain exists of:
  &q->q_usage_counter(io)#4 --> &mm->mmap_lock --> mapping.invalidate_lock#2

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  rlock(mapping.invalidate_lock#2);
                               lock(&mm->mmap_lock);
                               lock(mapping.invalidate_lock#2);
  rlock(&q->q_usage_counter(io)#4);

 *** DEADLOCK ***

1 lock held by syz.1.1759/14749:
 #0: ffff888148d6abc0 (mapping.invalidate_lock#2){.+.+}-{4:4}, at: filemap_invalidate_lock_shared include/linux/fs.h:886 [inline]
 #0: ffff888148d6abc0 (mapping.invalidate_lock#2){.+.+}-{4:4}, at: page_cache_ra_unbounded+0x173/0x750 mm/readahead.c:226

stack backtrace:
CPU: 1 UID: 0 PID: 14749 Comm: syz.1.1759 Not tainted 6.13.0-syzkaller-00603-g3d3a9c8b89d4 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 print_circular_bug+0x41c/0x610 kernel/locking/lockdep.c:2074
 check_noncircular+0x31a/0x400 kernel/locking/lockdep.c:2206
 check_prev_add kernel/locking/lockdep.c:3161 [inline]
 check_prevs_add kernel/locking/lockdep.c:3280 [inline]
 validate_chain kernel/locking/lockdep.c:3904 [inline]
 __lock_acquire+0x249e/0x3c40 kernel/locking/lockdep.c:5226
 lock_acquire.part.0+0x11b/0x380 kernel/locking/lockdep.c:5849
 bio_queue_enter block/blk.h:75 [inline]
 __submit_bio+0x49c/0x540 block/blk-core.c:630
 __submit_bio_noacct block/blk-core.c:678 [inline]
 submit_bio_noacct_nocheck+0x892/0xd70 block/blk-core.c:741
 submit_bio_noacct+0x93a/0x1e20 block/blk-core.c:868
 mpage_bio_submit_read fs/mpage.c:75 [inline]
 mpage_readahead+0x41d/0x590 fs/mpage.c:377
 read_pages+0x1a8/0xdc0 mm/readahead.c:160
 page_cache_ra_unbounded+0x3dc/0x750 mm/readahead.c:295
 do_page_cache_ra mm/readahead.c:325 [inline]
 page_cache_ra_order+0x8f2/0xc80 mm/readahead.c:524
 do_sync_mmap_readahead mm/filemap.c:3194 [inline]
 filemap_fault+0x14a5/0x2820 mm/filemap.c:3335
 __do_fault+0x10a/0x490 mm/memory.c:4907
 do_read_fault mm/memory.c:5322 [inline]
 do_fault mm/memory.c:5456 [inline]
 do_pte_missing+0xebd/0x3e00 mm/memory.c:3979
 handle_pte_fault mm/memory.c:5801 [inline]
 __handle_mm_fault+0x103c/0x2a40 mm/memory.c:5944
 handle_mm_fault+0x3fa/0xaa0 mm/memory.c:6112
 do_user_addr_fault+0x60d/0x13f0 arch/x86/mm/fault.c:1338
 handle_page_fault arch/x86/mm/fault.c:1481 [inline]
 exc_page_fault+0x5c/0xc0 arch/x86/mm/fault.c:1539
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623
RIP: 0033:0x7f603296b801
Code: 00 0f 1f 84 00 00 00 00 00 48 85 f6 74 37 49 89 f0 89 f8 48 89 fa c5 f9 ef c0 25 ff 0f 00 00 3d e0 0f 00 00 0f 87 5f 02 00 00 <c5> fd 74 0f c5 fd d7 c1 48 83 fe 20 76 11 85 c0 74 6d f3 0f bc c0
RSP: 002b:00007f6033806f58 EFLAGS: 00010283
RAX: 0000000000000000 RBX: 00007f6033807024 RCX: 0000000000000000
RDX: 0000000000000000 RSI: 000000000000000f RDI: 0000000000000000
RBP: 0000000000000005 R08: 000000000000000f R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000286 R12: 0000000000000000
R13: 00007f6033806fc0 R14: 00007f6032b76080 R15: 0000000000000000
 </TASK>
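
Notes on the dependency chain:

The report describes a five-lock cycle. Read bottom-up from the chain above: a sysfs queue-attribute write freezes the queue and then takes q->sysfs_lock (#1); blk_register_queue() takes q->debugfs_mutex while holding q->sysfs_lock (#2); BLKTRACESETUP's copy_from_user() under q->debugfs_mutex means mm->mmap_lock can be taken under it (#3); a page fault while pinning pages for block-device direct I/O takes mapping.invalidate_lock under mm->mmap_lock (#4); and readahead holds mapping.invalidate_lock while the bio it submits enters q->q_usage_counter(io) in bio_queue_enter() (#0), closing the loop.

The following is a minimal, hypothetical userspace sketch of the two closing edges only (#4 and #0): a faulting reader and an O_DIRECT writer racing on the same device. It is not a verified reproducer; the device and file paths are placeholders, and actually deadlocking would additionally require the sysfs/debugfs/blktrace edges (#1-#3) to be exercised concurrently.

#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define BLKDEV  "/dev/loop0"     /* placeholder: block device backing the fs */
#define FS_FILE "/mnt/loop/file" /* placeholder: file on that fs, >= LEN bytes */
#define LEN     (1 << 20)

/* Edge #0: faulting in a file mapping runs filemap_fault() -> readahead,
 * which holds mapping.invalidate_lock while the resulting bio enters the
 * queue (q->q_usage_counter(io) in bio_queue_enter()). */
static void *faulting_reader(void *arg)
{
	(void)arg;
	int fd = open(FS_FILE, O_RDONLY);
	if (fd < 0) { perror("open " FS_FILE); return NULL; }
	char *p = mmap(NULL, LEN, PROT_READ, MAP_PRIVATE, fd, 0);
	if (p == MAP_FAILED) { perror("mmap"); close(fd); return NULL; }
	volatile char sink = 0;
	for (size_t i = 0; i < LEN; i += 4096)
		sink += p[i];	/* each fault may trigger readahead */
	munmap(p, LEN);
	close(fd);
	return NULL;
}

/* Edge #4: O_DIRECT write to the block device from a not-yet-faulted file
 * mapping.  bio_iov_iter_get_pages() pins the buffer via GUP, which faults
 * it in under mm->mmap_lock and takes mapping.invalidate_lock in
 * filemap_fault().  WARNING: a real run scribbles on the device under a
 * mounted filesystem; illustration only. */
static void *direct_writer(void *arg)
{
	(void)arg;
	int bfd = open(BLKDEV, O_WRONLY | O_DIRECT);
	if (bfd < 0) { perror("open " BLKDEV); return NULL; }
	int ffd = open(FS_FILE, O_RDONLY);
	if (ffd < 0) { perror("open " FS_FILE); close(bfd); return NULL; }
	/* fresh mapping: pages are not resident, so GUP must fault them in */
	char *buf = mmap(NULL, LEN, PROT_READ, MAP_PRIVATE, ffd, 0);
	if (buf == MAP_FAILED) { perror("mmap"); close(ffd); close(bfd); return NULL; }
	if (write(bfd, buf, LEN) < 0)	/* mmap is page-aligned, as O_DIRECT needs */
		perror("write");
	munmap(buf, LEN);
	close(ffd);
	close(bfd);
	return NULL;
}

int main(void)	/* build with: cc -pthread sketch.c */
{
	pthread_t a, b;
	pthread_create(&a, NULL, faulting_reader, NULL);
	pthread_create(&b, NULL, direct_writer, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}

Note that lockdep flags the cycle as soon as it has observed each ordering once, not when the system actually hangs; turning the splat above into a real deadlock would also need a concurrent queue freeze via the sysfs/blktrace paths in edges #1-#3.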

Crashes (30):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/01/21 15:27 upstream 3d3a9c8b89d4 6e87cfa2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/21 02:14 upstream ffd294d346d1 6e87cfa2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/21 02:01 upstream ffd294d346d1 6e87cfa2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/20 19:36 upstream ffd294d346d1 6e87cfa2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/18 10:48 upstream ad26fc09dabf f2cb035c .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/18 10:42 upstream ad26fc09dabf f2cb035c .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/18 10:01 upstream ad26fc09dabf f2cb035c .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/18 01:56 upstream ad26fc09dabf f2cb035c .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/17 22:10 upstream 9bffa1ad25b8 bb91bdd4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/17 20:16 upstream 9bffa1ad25b8 bb91bdd4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/17 17:45 upstream 9bffa1ad25b8 bb91bdd4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/17 14:36 upstream 9bffa1ad25b8 bb91bdd4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/14 18:17 upstream c45323b7560e 0dce2409 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/14 18:16 upstream c45323b7560e 0dce2409 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/14 06:08 upstream c45323b7560e b1f1cd88 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/14 02:53 upstream c45323b7560e b1f1cd88 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/11 12:50 upstream 77a903cd8e5a 6dbc6a9b .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/11 11:31 upstream 77a903cd8e5a 6dbc6a9b .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/09 17:54 upstream eea6e4b4dfb8 9220929f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/07 23:39 upstream fbfd64d25c7a f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/06 23:20 upstream 5428dc1906dd f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/06 00:11 upstream ab75170520d4 f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/04 19:17 upstream 63676eefb7a0 f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2025/01/04 10:43 upstream 63676eefb7a0 f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2024/12/29 05:14 upstream 059dd502b263 d3ccff63 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2024/12/28 03:09 upstream 8379578b11d5 d3ccff63 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2024/12/26 08:26 upstream 9b2ffa6148b1 444551c4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2024/12/24 11:52 upstream f07044dd0df0 444551c4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2024/12/24 07:08 upstream f07044dd0df0 444551c4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck
2024/12/24 02:42 upstream f07044dd0df0 444551c4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto possible deadlock in submit_bio_noacct_nocheck