syzbot


possible deadlock in blk_mq_update_nr_hw_queues

Status: upstream: reported on 2024/11/24 23:13
Subsystems: block
Reported-by: syzbot+6279b273d888c2017726@syzkaller.appspotmail.com
First crash: 6d00h ago, last: 18h02m ago
Discussions (1)
Title: [syzbot] [block?] possible deadlock in blk_mq_update_nr_hw_queues
Replies (including bot): 0 (1)
Last reply: 2024/11/24 23:13

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.12.0-syzkaller-09435-g2c22dc1ee3a1 #0 Not tainted
------------------------------------------------------
syz.5.2309/15857 is trying to acquire lock:
ffff88814433d418 (&q->sysfs_lock){+.+.}-{4:4}, at: blk_mq_elv_switch_none block/blk-mq.c:4847 [inline]
ffff88814433d418 (&q->sysfs_lock){+.+.}-{4:4}, at: __blk_mq_update_nr_hw_queues block/blk-mq.c:4925 [inline]
ffff88814433d418 (&q->sysfs_lock){+.+.}-{4:4}, at: blk_mq_update_nr_hw_queues+0x3fa/0x1ae0 block/blk-mq.c:4985

but task is already holding lock:
ffff88814433cee8 (&q->q_usage_counter(io)#54){++++}-{0:0}, at: nbd_start_device+0x16c/0xaa0 drivers/block/nbd.c:1413

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #4 (&q->q_usage_counter(io)#54){++++}-{0:0}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       bio_queue_enter block/blk.h:75 [inline]
       blk_mq_submit_bio+0x1536/0x23a0 block/blk-mq.c:3092
       __submit_bio+0x2c6/0x560 block/blk-core.c:629
       __submit_bio_noacct_mq block/blk-core.c:710 [inline]
       submit_bio_noacct_nocheck+0x4d3/0xe30 block/blk-core.c:739
       submit_bh fs/buffer.c:2819 [inline]
       block_read_full_folio+0x93b/0xcd0 fs/buffer.c:2446
       filemap_read_folio+0x14b/0x630 mm/filemap.c:2366
       filemap_update_page mm/filemap.c:2450 [inline]
       filemap_get_pages+0x17af/0x2540 mm/filemap.c:2571
       filemap_read+0x45c/0xf50 mm/filemap.c:2646
       blkdev_read_iter+0x2d8/0x430 block/fops.c:767
       new_sync_read fs/read_write.c:484 [inline]
       vfs_read+0x991/0xb70 fs/read_write.c:565
       ksys_read+0x18f/0x2b0 fs/read_write.c:708
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #3 (mapping.invalidate_lock#2){++++}-{4:4}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       down_read+0xb1/0xa40 kernel/locking/rwsem.c:1524
       filemap_invalidate_lock_shared include/linux/fs.h:873 [inline]
       page_cache_ra_unbounded+0x143/0x8c0 mm/readahead.c:226
       do_sync_mmap_readahead+0x499/0x970
       filemap_fault+0x8c5/0x1950 mm/filemap.c:3344
       __do_fault+0x135/0x460 mm/memory.c:4907
       do_read_fault mm/memory.c:5322 [inline]
       do_fault mm/memory.c:5456 [inline]
       do_pte_missing mm/memory.c:3979 [inline]
       handle_pte_fault+0x335a/0x68a0 mm/memory.c:5801
       __handle_mm_fault mm/memory.c:5944 [inline]
       handle_mm_fault+0x1053/0x1ad0 mm/memory.c:6112
       faultin_page mm/gup.c:1187 [inline]
       __get_user_pages+0x1c82/0x49e0 mm/gup.c:1485
       __get_user_pages_locked mm/gup.c:1751 [inline]
       get_user_pages_unlocked+0x2a8/0x9d0 mm/gup.c:2728
       hva_to_pfn_slow virt/kvm/kvm_main.c:2820 [inline]
       hva_to_pfn+0x445/0xfe0 virt/kvm/kvm_main.c:2916
       kvm_follow_pfn virt/kvm/kvm_main.c:2963 [inline]
       __kvm_faultin_pfn+0x497/0x580 virt/kvm/kvm_main.c:2984
       __kvm_mmu_faultin_pfn arch/x86/kvm/mmu/mmu.c:4354 [inline]
       kvm_mmu_faultin_pfn+0x6c3/0x1580 arch/x86/kvm/mmu/mmu.c:4474
       kvm_tdp_mmu_page_fault arch/x86/kvm/mmu/mmu.c:4642 [inline]
       kvm_tdp_page_fault+0x215/0x300 arch/x86/kvm/mmu/mmu.c:4678
       kvm_mmu_do_page_fault+0x583/0xca0 arch/x86/kvm/mmu/mmu_internal.h:325
       kvm_mmu_page_fault+0x2db/0xc30 arch/x86/kvm/mmu/mmu.c:6090
       __vmx_handle_exit arch/x86/kvm/vmx/vmx.c:6620 [inline]
       vmx_handle_exit+0x10e4/0x1e90 arch/x86/kvm/vmx/vmx.c:6637
       vcpu_enter_guest arch/x86/kvm/x86.c:11081 [inline]
       vcpu_run+0x587e/0x8a70 arch/x86/kvm/x86.c:11242
       kvm_arch_vcpu_ioctl_run+0xa76/0x19d0 arch/x86/kvm/x86.c:11560
       kvm_vcpu_ioctl+0x920/0xea0 virt/kvm/kvm_main.c:4340
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl+0xf5/0x170 fs/ioctl.c:892
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #2 (&mm->mmap_lock){++++}-{4:4}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __might_fault+0xc6/0x120 mm/memory.c:6751
       _inline_copy_from_user include/linux/uaccess.h:162 [inline]
       _copy_from_user+0x2a/0xc0 lib/usercopy.c:18
       copy_from_user include/linux/uaccess.h:212 [inline]
       __blk_trace_setup kernel/trace/blktrace.c:626 [inline]
       blk_trace_ioctl+0x1ad/0x9a0 kernel/trace/blktrace.c:740
       blkdev_ioctl+0x40c/0x6a0 block/ioctl.c:682
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl+0xf5/0x170 fs/ioctl.c:892
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&q->debugfs_mutex){+.+.}-{4:4}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __mutex_lock_common kernel/locking/mutex.c:585 [inline]
       __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
       blk_register_queue+0x156/0x460 block/blk-sysfs.c:774
       add_disk_fwnode+0x648/0xf80 block/genhd.c:493
       add_disk include/linux/blkdev.h:751 [inline]
       brd_alloc+0x547/0x790 drivers/block/brd.c:399
       brd_init+0x126/0x1b0 drivers/block/brd.c:479
       do_one_initcall+0x248/0x880 init/main.c:1266
       do_initcall_level+0x157/0x210 init/main.c:1328
       do_initcalls+0x3f/0x80 init/main.c:1344
       kernel_init_freeable+0x435/0x5d0 init/main.c:1577
       kernel_init+0x1d/0x2b0 init/main.c:1466
       ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

-> #0 (&q->sysfs_lock){+.+.}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3161 [inline]
       check_prevs_add kernel/locking/lockdep.c:3280 [inline]
       validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
       __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __mutex_lock_common kernel/locking/mutex.c:585 [inline]
       __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
       blk_mq_elv_switch_none block/blk-mq.c:4847 [inline]
       __blk_mq_update_nr_hw_queues block/blk-mq.c:4925 [inline]
       blk_mq_update_nr_hw_queues+0x3fa/0x1ae0 block/blk-mq.c:4985
       nbd_start_device+0x16c/0xaa0 drivers/block/nbd.c:1413
       nbd_start_device_ioctl drivers/block/nbd.c:1464 [inline]
       __nbd_ioctl drivers/block/nbd.c:1539 [inline]
       nbd_ioctl+0x5dc/0xf40 drivers/block/nbd.c:1579
       blkdev_ioctl+0x57d/0x6a0 block/ioctl.c:693
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl+0xf5/0x170 fs/ioctl.c:892
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  &q->sysfs_lock --> mapping.invalidate_lock#2 --> &q->q_usage_counter(io)#54

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&q->q_usage_counter(io)#54);
                               lock(mapping.invalidate_lock#2);
                               lock(&q->q_usage_counter(io)#54);
  lock(&q->sysfs_lock);

 *** DEADLOCK ***

4 locks held by syz.5.2309/15857:
 #0: ffff888025ac1998 (&nbd->config_lock){+.+.}-{4:4}, at: nbd_ioctl+0x13c/0xf40 drivers/block/nbd.c:1572
 #1: ffff888025ac18d8 (&set->tag_list_lock){+.+.}-{4:4}, at: blk_mq_update_nr_hw_queues+0xc2/0x1ae0 block/blk-mq.c:4984
 #2: ffff88814433cee8 (&q->q_usage_counter(io)#54){++++}-{0:0}, at: nbd_start_device+0x16c/0xaa0 drivers/block/nbd.c:1413
 #3: ffff88814433cf20 (&q->q_usage_counter(queue)#38){+.+.}-{0:0}, at: nbd_start_device+0x16c/0xaa0 drivers/block/nbd.c:1413

stack backtrace:
CPU: 1 UID: 0 PID: 15857 Comm: syz.5.2309 Not tainted 6.12.0-syzkaller-09435-g2c22dc1ee3a1 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_circular_bug+0x13a/0x1b0 kernel/locking/lockdep.c:2074
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2206
 check_prev_add kernel/locking/lockdep.c:3161 [inline]
 check_prevs_add kernel/locking/lockdep.c:3280 [inline]
 validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
 __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
 __mutex_lock_common kernel/locking/mutex.c:585 [inline]
 __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
 blk_mq_elv_switch_none block/blk-mq.c:4847 [inline]
 __blk_mq_update_nr_hw_queues block/blk-mq.c:4925 [inline]
 blk_mq_update_nr_hw_queues+0x3fa/0x1ae0 block/blk-mq.c:4985
 nbd_start_device+0x16c/0xaa0 drivers/block/nbd.c:1413
 nbd_start_device_ioctl drivers/block/nbd.c:1464 [inline]
 __nbd_ioctl drivers/block/nbd.c:1539 [inline]
 nbd_ioctl+0x5dc/0xf40 drivers/block/nbd.c:1579
 blkdev_ioctl+0x57d/0x6a0 block/ioctl.c:693
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:906 [inline]
 __se_sys_ioctl+0xf5/0x170 fs/ioctl.c:892
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fe831f7e819
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fe832d3a038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007fe832135fa0 RCX: 00007fe831f7e819
RDX: 0000000000000000 RSI: 000000000000ab03 RDI: 0000000000000003
RBP: 00007fe831ff175e R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007fe832135fa0 R15: 00007ffe9ded79c8
 </TASK>
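
The two-CPU diagram above is lockdep shorthand for a classic ABBA inversion. On one side, the nbd path freezes the queue (taking q->q_usage_counter(io) in nbd_start_device()) and then wants q->sysfs_lock inside blk_mq_elv_switch_none(). On the other side, the already-recorded #1..#4 chain orders q->sysfs_lock ahead of q->q_usage_counter(io), via q->debugfs_mutex, mm->mmap_lock and the mapping's invalidate_lock. Below is a minimal userspace sketch of that inversion, not the kernel code itself: the names are borrowed for readability, the intermediate #1..#3 locks are collapsed to the two endpoints, and plain pthread mutexes stand in for the queue freeze refcount.

/*
 * deadlock.c: hypothetical userspace analogue of the inversion above.
 * Plain pthread mutexes stand in for q->q_usage_counter(io) (the queue
 * freeze) and q->sysfs_lock; the intermediate locks are collapsed away.
 * Build: cc -pthread -o deadlock deadlock.c
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t usage_counter = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t sysfs_lock    = PTHREAD_MUTEX_INITIALIZER;

/* CPU0 in the report: nbd_start_device() freezes the queue, then
 * blk_mq_update_nr_hw_queues() wants sysfs_lock.
 * Order: usage_counter -> sysfs_lock. */
static void *update_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&usage_counter);
	sleep(1);				/* widen the race window */
	pthread_mutex_lock(&sysfs_lock);	/* blocks forever */
	puts("update path acquired both locks");
	pthread_mutex_unlock(&sysfs_lock);
	pthread_mutex_unlock(&usage_counter);
	return NULL;
}

/* CPU1 in the report: the established #1..#4 chain, endpoints only
 * (sysfs_lock ... bio_queue_enter()).
 * Order: sysfs_lock -> usage_counter. */
static void *io_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&sysfs_lock);
	sleep(1);
	pthread_mutex_lock(&usage_counter);	/* blocks forever */
	puts("io path acquired both locks");
	pthread_mutex_unlock(&usage_counter);
	pthread_mutex_unlock(&sysfs_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, update_path, NULL);
	pthread_create(&b, NULL, io_path, NULL);
	pthread_join(a, NULL);	/* never returns: both threads are stuck */
	pthread_join(b, NULL);
	return 0;
}

Run it and both threads park inside their second lock while the other holds it, so the program never exits. In the kernel, lockdep flags the ordering as soon as it observes the closing edge of the cycle, which is why this surfaces as a "possible deadlock" warning rather than an actual hang.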

Crashes (8):
Time              Kernel    Commit        Syzkaller  Manager                           Title
2024/11/26 05:39  upstream  2c22dc1ee3a1  a84878fc   ci-upstream-kasan-gce-smack-root  possible deadlock in blk_mq_update_nr_hw_queues
2024/11/26 05:38  upstream  2c22dc1ee3a1  a84878fc   ci-upstream-kasan-gce-smack-root  possible deadlock in blk_mq_update_nr_hw_queues
2024/11/25 03:06  upstream  9f16d5e6f220  68da6d95   ci-upstream-kasan-gce-smack-root  possible deadlock in blk_mq_update_nr_hw_queues
2024/11/23 02:44  upstream  06afb0f36106  68da6d95   ci-upstream-kasan-gce-root        possible deadlock in blk_mq_update_nr_hw_queues
2024/11/22 09:21  upstream  28eb75e178d3  4b25d554   ci-upstream-kasan-gce-root        possible deadlock in blk_mq_update_nr_hw_queues
2024/11/22 03:06  upstream  fcc79e1714e8  4b25d554   ci-upstream-kasan-gce-smack-root  possible deadlock in blk_mq_update_nr_hw_queues
2024/11/21 19:53  upstream  fcc79e1714e8  4b25d554   ci-upstream-kasan-gce-smack-root  possible deadlock in blk_mq_update_nr_hw_queues
2024/11/20 23:08  upstream  bf9aa14fc523  4fca1650   ci-upstream-kasan-gce-smack-root  possible deadlock in blk_mq_update_nr_hw_queues