syzbot


possible deadlock in elevator_disable

Status: auto-obsoleted due to no activity on 2025/03/26 07:18
Subsystems: block
Reported-by: syzbot+dbad16606916438a362a@syzkaller.appspotmail.com
First crash: 121d, last: 71d
Discussions (1):
  Title: [syzbot] [block?] possible deadlock in elevator_disable
  Replies (including bot): 0 (1)
  Last reply: 2024/11/30 19:22

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.13.0-rc7-syzkaller-00019-gc45323b7560e #0 Not tainted
------------------------------------------------------
syz.2.4355/19322 is trying to acquire lock:
ffff888025a818b8 (&eq->sysfs_lock){+.+.}-{4:4}, at: elevator_exit block/elevator.c:158 [inline]
ffff888025a818b8 (&eq->sysfs_lock){+.+.}-{4:4}, at: elevator_disable+0xd3/0x3f0 block/elevator.c:674

but task is already holding lock:
ffff888143f0d418 (&q->sysfs_lock){+.+.}-{4:4}, at: blk_mq_elv_switch_none block/blk-mq.c:4932 [inline]
ffff888143f0d418 (&q->sysfs_lock){+.+.}-{4:4}, at: __blk_mq_update_nr_hw_queues block/blk-mq.c:5010 [inline]
ffff888143f0d418 (&q->sysfs_lock){+.+.}-{4:4}, at: blk_mq_update_nr_hw_queues+0x3fa/0x1ae0 block/blk-mq.c:5070

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #5 (&q->sysfs_lock){+.+.}-{4:4}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __mutex_lock_common kernel/locking/mutex.c:585 [inline]
       __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
       blk_mq_elv_switch_none block/blk-mq.c:4932 [inline]
       __blk_mq_update_nr_hw_queues block/blk-mq.c:5010 [inline]
       blk_mq_update_nr_hw_queues+0x3fa/0x1ae0 block/blk-mq.c:5070
       nbd_start_device+0x16c/0xaa0 drivers/block/nbd.c:1413
       nbd_start_device_ioctl drivers/block/nbd.c:1464 [inline]
       __nbd_ioctl drivers/block/nbd.c:1539 [inline]
       nbd_ioctl+0x5dc/0xf40 drivers/block/nbd.c:1579
       blkdev_ioctl+0x57d/0x6a0 block/ioctl.c:693
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl+0xf5/0x170 fs/ioctl.c:892
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #4 (&q->q_usage_counter(io)#49){++++}-{0:0}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       bio_queue_enter block/blk.h:75 [inline]
       blk_mq_submit_bio+0x1536/0x2390 block/blk-mq.c:3090
       __submit_bio+0x2c6/0x560 block/blk-core.c:629
       __submit_bio_noacct_mq block/blk-core.c:710 [inline]
       submit_bio_noacct_nocheck+0x4d3/0xe30 block/blk-core.c:739
       submit_bh fs/buffer.c:2819 [inline]
       block_read_full_folio+0x9b3/0xae0 fs/buffer.c:2446
       filemap_read_folio+0x148/0x3b0 mm/filemap.c:2357
       filemap_update_page mm/filemap.c:2441 [inline]
       filemap_get_pages+0x18ca/0x2080 mm/filemap.c:2562
       filemap_read+0x452/0xf50 mm/filemap.c:2637
       blkdev_read_iter+0x2d8/0x430 block/fops.c:770
       new_sync_read fs/read_write.c:484 [inline]
       vfs_read+0x991/0xb70 fs/read_write.c:565
       ksys_read+0x18f/0x2b0 fs/read_write.c:708
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #3 (mapping.invalidate_lock#2){++++}-{4:4}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       down_read+0xb1/0xa40 kernel/locking/rwsem.c:1524
       filemap_invalidate_lock_shared include/linux/fs.h:873 [inline]
       filemap_fault+0xb3e/0x1490 mm/filemap.c:3342
       __do_fault+0x135/0x390 mm/memory.c:4907
       do_read_fault mm/memory.c:5322 [inline]
       do_fault mm/memory.c:5456 [inline]
       do_pte_missing mm/memory.c:3979 [inline]
       handle_pte_fault+0x39eb/0x5ed0 mm/memory.c:5801
       __handle_mm_fault mm/memory.c:5944 [inline]
       handle_mm_fault+0x1053/0x1ad0 mm/memory.c:6112
       faultin_page mm/gup.c:1196 [inline]
       __get_user_pages+0x1c82/0x49e0 mm/gup.c:1494
       populate_vma_page_range+0x264/0x330 mm/gup.c:1932
       __mm_populate+0x27a/0x460 mm/gup.c:2035
       do_mlock+0x61f/0x7e0 mm/mlock.c:653
       __do_sys_mlock mm/mlock.c:661 [inline]
       __se_sys_mlock mm/mlock.c:659 [inline]
       __x64_sys_mlock+0x60/0x70 mm/mlock.c:659
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #2 (&mm->mmap_lock){++++}-{4:4}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __might_fault+0xc6/0x120 mm/memory.c:6751
       _inline_copy_from_user include/linux/uaccess.h:162 [inline]
       _copy_from_user+0x2a/0xc0 lib/usercopy.c:18
       copy_from_user include/linux/uaccess.h:212 [inline]
       __blk_trace_setup kernel/trace/blktrace.c:626 [inline]
       blk_trace_ioctl+0x1ad/0x9a0 kernel/trace/blktrace.c:740
       blkdev_ioctl+0x40c/0x6a0 block/ioctl.c:682
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl+0xf5/0x170 fs/ioctl.c:892
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&q->debugfs_mutex){+.+.}-{4:4}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __mutex_lock_common kernel/locking/mutex.c:585 [inline]
       __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
       blk_mq_exit_sched+0x106/0x4a0 block/blk-mq-sched.c:531
       elevator_exit block/elevator.c:159 [inline]
       elevator_disable+0xde/0x3f0 block/elevator.c:674
       blk_mq_elv_switch_none block/blk-mq.c:4946 [inline]
       __blk_mq_update_nr_hw_queues block/blk-mq.c:5010 [inline]
       blk_mq_update_nr_hw_queues+0x646/0x1ae0 block/blk-mq.c:5070
       nbd_start_device+0x16c/0xaa0 drivers/block/nbd.c:1413
       nbd_start_device_ioctl drivers/block/nbd.c:1464 [inline]
       __nbd_ioctl drivers/block/nbd.c:1539 [inline]
       nbd_ioctl+0x5dc/0xf40 drivers/block/nbd.c:1579
       blkdev_ioctl+0x57d/0x6a0 block/ioctl.c:693
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl+0xf5/0x170 fs/ioctl.c:892
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&eq->sysfs_lock){+.+.}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3161 [inline]
       check_prevs_add kernel/locking/lockdep.c:3280 [inline]
       validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
       __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __mutex_lock_common kernel/locking/mutex.c:585 [inline]
       __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
       elevator_exit block/elevator.c:158 [inline]
       elevator_disable+0xd3/0x3f0 block/elevator.c:674
       blk_mq_elv_switch_none block/blk-mq.c:4946 [inline]
       __blk_mq_update_nr_hw_queues block/blk-mq.c:5010 [inline]
       blk_mq_update_nr_hw_queues+0x646/0x1ae0 block/blk-mq.c:5070
       nbd_start_device+0x16c/0xaa0 drivers/block/nbd.c:1413
       nbd_start_device_ioctl drivers/block/nbd.c:1464 [inline]
       __nbd_ioctl drivers/block/nbd.c:1539 [inline]
       nbd_ioctl+0x5dc/0xf40 drivers/block/nbd.c:1579
       blkdev_ioctl+0x57d/0x6a0 block/ioctl.c:693
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl+0xf5/0x170 fs/ioctl.c:892
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  &eq->sysfs_lock --> &q->q_usage_counter(io)#49 --> &q->sysfs_lock
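The chain lockdep prints is a cycle in its directed graph of lock classes, where an edge A --> B means "A was held while B was acquired". A minimal sketch of that detection (illustrative only, not syzbot or kernel code; the three-node graph below collapses the real six-link chain from the report into the summary cycle shown above):

```python
# Simplified model of the cycle lockdep closed in this report.
# An edge A -> B means "A was held while acquiring B".
edges = {
    "eq->sysfs_lock": ["q->q_usage_counter(io)"],  # links #4..#1, summarized
    "q->q_usage_counter(io)": ["q->sysfs_lock"],   # link #5 (nbd_start_device path)
    "q->sysfs_lock": ["eq->sysfs_lock"],           # the new edge (#0) that closes the cycle
}

def find_cycle(edges):
    """Depth-first search; returns one cycle path (first == last node) or None."""
    WHITE, GRAY, BLACK = 0, 1, 2       # unvisited / on current path / done
    color = {n: WHITE for n in edges}
    stack = []

    def dfs(node):
        color[node] = GRAY
        stack.append(node)
        for nxt in edges.get(node, []):
            if color.get(nxt, WHITE) == GRAY:          # back edge: cycle found
                return stack[stack.index(nxt):] + [nxt]
            if color.get(nxt, WHITE) == WHITE:
                cycle = dfs(nxt)
                if cycle:
                    return cycle
        stack.pop()
        color[node] = BLACK
        return None

    for node in list(edges):
        if color[node] == WHITE:
            cycle = dfs(node)
            if cycle:
                return cycle
    return None

print(find_cycle(edges))
```

The real lockdep check (check_noncircular() in kernel/locking/lockdep.c, visible in the backtrace below) does the equivalent search over lock classes before admitting the new &q->sysfs_lock --> &eq->sysfs_lock dependency.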

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&q->sysfs_lock);
                               lock(&q->q_usage_counter(io)#49);
                               lock(&q->sysfs_lock);
  lock(&eq->sysfs_lock);

 *** DEADLOCK ***

5 locks held by syz.2.4355/19322:
 #0: ffff88802597c998 (&nbd->config_lock){+.+.}-{4:4}, at: nbd_ioctl+0x13c/0xf40 drivers/block/nbd.c:1572
 #1: ffff88802597c8d8 (&set->tag_list_lock){+.+.}-{4:4}, at: blk_mq_update_nr_hw_queues+0xc2/0x1ae0 block/blk-mq.c:5069
 #2: ffff888143f0cee8 (&q->q_usage_counter(io)#51){+.+.}-{0:0}, at: nbd_start_device+0x16c/0xaa0 drivers/block/nbd.c:1413
 #3: ffff888143f0cf20 (&q->q_usage_counter(queue)#35){+.+.}-{0:0}, at: nbd_start_device+0x16c/0xaa0 drivers/block/nbd.c:1413
 #4: ffff888143f0d418 (&q->sysfs_lock){+.+.}-{4:4}, at: blk_mq_elv_switch_none block/blk-mq.c:4932 [inline]
 #4: ffff888143f0d418 (&q->sysfs_lock){+.+.}-{4:4}, at: __blk_mq_update_nr_hw_queues block/blk-mq.c:5010 [inline]
 #4: ffff888143f0d418 (&q->sysfs_lock){+.+.}-{4:4}, at: blk_mq_update_nr_hw_queues+0x3fa/0x1ae0 block/blk-mq.c:5070

stack backtrace:
CPU: 1 UID: 0 PID: 19322 Comm: syz.2.4355 Not tainted 6.13.0-rc7-syzkaller-00019-gc45323b7560e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_circular_bug+0x13a/0x1b0 kernel/locking/lockdep.c:2074
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2206
 check_prev_add kernel/locking/lockdep.c:3161 [inline]
 check_prevs_add kernel/locking/lockdep.c:3280 [inline]
 validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
 __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
 __mutex_lock_common kernel/locking/mutex.c:585 [inline]
 __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
 elevator_exit block/elevator.c:158 [inline]
 elevator_disable+0xd3/0x3f0 block/elevator.c:674
 blk_mq_elv_switch_none block/blk-mq.c:4946 [inline]
 __blk_mq_update_nr_hw_queues block/blk-mq.c:5010 [inline]
 blk_mq_update_nr_hw_queues+0x646/0x1ae0 block/blk-mq.c:5070
 nbd_start_device+0x16c/0xaa0 drivers/block/nbd.c:1413
 nbd_start_device_ioctl drivers/block/nbd.c:1464 [inline]
 __nbd_ioctl drivers/block/nbd.c:1539 [inline]
 nbd_ioctl+0x5dc/0xf40 drivers/block/nbd.c:1579
 blkdev_ioctl+0x57d/0x6a0 block/ioctl.c:693
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:906 [inline]
 __se_sys_ioctl+0xf5/0x170 fs/ioctl.c:892
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f96e7385d29
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f96e8183038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f96e7575fa0 RCX: 00007f96e7385d29
RDX: 0000000000000000 RSI: 000000000000ab03 RDI: 0000000000000003
RBP: 00007f96e7401b08 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f96e7575fa0 R15: 00007ffe28589d48
 </TASK>
block nbd2: shutting down sockets

Crashes (31):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/01/13 23:56 upstream c45323b7560e b1f1cd88 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root possible deadlock in elevator_disable
2024/12/18 17:32 upstream aef25be35d23 1432fc84 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root possible deadlock in elevator_disable
2024/12/14 20:36 upstream a446e965a188 7cbfbb3a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root possible deadlock in elevator_disable
2024/12/12 12:42 upstream 231825b2e1ff 941924eb .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root possible deadlock in elevator_disable
2024/12/07 03:47 upstream 9a6e8c7c3a02 9ac0fdc6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root possible deadlock in elevator_disable
2024/11/27 02:49 upstream 7eef7e306d3c 52b38cc1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root possible deadlock in elevator_disable
2024/12/22 22:55 upstream bcde95ce32b6 b4fbdbd4 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in elevator_disable
2024/12/16 20:53 upstream 78d4f34e2115 eec85da6 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in elevator_disable
2024/12/08 14:52 upstream 7503345ac5f5 9ac0fdc6 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in elevator_disable
2024/12/07 20:59 upstream 7503345ac5f5 9ac0fdc6 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in elevator_disable
2024/12/04 11:19 upstream ceb8bf2ceaa7 b50eb251 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream possible deadlock in elevator_disable
2025/01/15 07:18 upstream c3812b15000c 7315a7cf .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in elevator_disable
2025/01/12 06:52 upstream b62cef9a5c67 6dbc6a9b .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in elevator_disable
2024/12/30 23:07 upstream ccb98ccef0e5 d3ccff63 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in elevator_disable
2024/12/24 03:49 upstream f07044dd0df0 444551c4 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in elevator_disable
2024/12/18 14:45 upstream aef25be35d23 1432fc84 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in elevator_disable
2024/12/18 14:45 upstream aef25be35d23 1432fc84 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in elevator_disable
2024/12/18 14:25 upstream aef25be35d23 1432fc84 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in elevator_disable
2024/12/18 14:25 upstream aef25be35d23 1432fc84 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in elevator_disable
2024/12/18 04:33 upstream 59dbb9d81adf a0626d3a .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in elevator_disable
2024/12/17 19:18 upstream 59dbb9d81adf a0626d3a .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in elevator_disable
2024/12/17 16:49 upstream f44d154d6e3d a0626d3a .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in elevator_disable
2024/12/17 16:49 upstream f44d154d6e3d a0626d3a .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in elevator_disable
2024/12/17 13:24 upstream f44d154d6e3d bc1a1b50 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in elevator_disable
2024/12/16 12:04 upstream 78d4f34e2115 eec85da6 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in elevator_disable
2024/12/15 13:40 upstream 2d8308bf5b67 7cbfbb3a .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in elevator_disable
2024/12/14 17:11 upstream a446e965a188 7cbfbb3a .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in elevator_disable
2024/12/14 13:41 upstream a446e965a188 7cbfbb3a .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in elevator_disable
2024/12/07 17:34 upstream b5f217084ab3 9ac0fdc6 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in elevator_disable
2024/12/01 06:27 upstream d8b78066f4c9 68914665 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in elevator_disable
2024/11/26 19:18 upstream 7eef7e306d3c e9a9a9f2 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu-upstream-386 possible deadlock in elevator_disable