syzbot


possible deadlock in blk_mq_exit_sched

Status: upstream: reported on 2024/12/19 02:01
Subsystems: block
Reported-by: syzbot+d8caa4d9cdee21b5e671@syzkaller.appspotmail.com
First crash: 6d12h ago, last: 8h40m ago
Discussions (1)
Title                                                      Replies (including bot)  Last reply
[syzbot] [block?] possible deadlock in blk_mq_exit_sched  0 (1)                    2024/12/19 02:01

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.13.0-rc3-syzkaller-00073-geabcdba3ad40 #0 Not tainted
------------------------------------------------------
syz.0.2140/14023 is trying to acquire lock:
ffff888144740918 (&q->debugfs_mutex){+.+.}-{4:4}, at: blk_mq_exit_sched+0x106/0x4a0 block/blk-mq-sched.c:531

but task is already holding lock:
ffff8880253298b8 (&eq->sysfs_lock){+.+.}-{4:4}, at: elevator_exit block/elevator.c:158 [inline]
ffff8880253298b8 (&eq->sysfs_lock){+.+.}-{4:4}, at: elevator_disable+0xd3/0x3f0 block/elevator.c:674

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #4 (&eq->sysfs_lock){+.+.}-{4:4}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __mutex_lock_common kernel/locking/mutex.c:585 [inline]
       __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
       elevator_exit block/elevator.c:158 [inline]
       elevator_disable+0xd3/0x3f0 block/elevator.c:674
       blk_mq_elv_switch_none block/blk-mq.c:4942 [inline]
       __blk_mq_update_nr_hw_queues block/blk-mq.c:5005 [inline]
       blk_mq_update_nr_hw_queues+0x683/0x1b20 block/blk-mq.c:5068
       nbd_start_device+0x16c/0xaa0 drivers/block/nbd.c:1413
       nbd_start_device_ioctl drivers/block/nbd.c:1464 [inline]
       __nbd_ioctl drivers/block/nbd.c:1539 [inline]
       nbd_ioctl+0x5dc/0xf40 drivers/block/nbd.c:1579
       blkdev_ioctl+0x57d/0x6a0 block/ioctl.c:693
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl+0xf5/0x170 fs/ioctl.c:892
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #3 (&q->q_usage_counter(io)#49){++++}-{0:0}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       bio_queue_enter block/blk.h:75 [inline]
       blk_mq_submit_bio+0x1536/0x2390 block/blk-mq.c:3090
       __submit_bio+0x2c6/0x560 block/blk-core.c:629
       __submit_bio_noacct_mq block/blk-core.c:710 [inline]
       submit_bio_noacct_nocheck+0x4d3/0xe30 block/blk-core.c:739
       mpage_bio_submit_read fs/mpage.c:75 [inline]
       mpage_readahead+0x630/0x780 fs/mpage.c:377
       read_pages+0x176/0x750 mm/readahead.c:160
       page_cache_ra_unbounded+0x606/0x720 mm/readahead.c:295
       do_page_cache_ra mm/readahead.c:325 [inline]
       force_page_cache_ra mm/readahead.c:354 [inline]
       page_cache_sync_ra+0x3c5/0xad0 mm/readahead.c:566
       page_cache_sync_readahead include/linux/pagemap.h:1397 [inline]
       filemap_get_pages+0x605/0x2080 mm/filemap.c:2546
       filemap_read+0x452/0xf50 mm/filemap.c:2646
       blkdev_read_iter+0x2d8/0x430 block/fops.c:770
       new_sync_read fs/read_write.c:484 [inline]
       vfs_read+0x991/0xb70 fs/read_write.c:565
       ksys_read+0x18f/0x2b0 fs/read_write.c:708
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #2 (mapping.invalidate_lock#2){++++}-{4:4}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       down_read+0xb1/0xa40 kernel/locking/rwsem.c:1524
       filemap_invalidate_lock_shared include/linux/fs.h:873 [inline]
       filemap_fault+0x615/0x1490 mm/filemap.c:3332
       __do_fault+0x135/0x390 mm/memory.c:4907
       do_cow_fault mm/memory.c:5352 [inline]
       do_fault mm/memory.c:5458 [inline]
       do_pte_missing mm/memory.c:3979 [inline]
       handle_pte_fault+0xcab/0x5ed0 mm/memory.c:5801
       __handle_mm_fault mm/memory.c:5944 [inline]
       handle_mm_fault+0x1053/0x1ad0 mm/memory.c:6112
       faultin_page mm/gup.c:1196 [inline]
       __get_user_pages+0x1c82/0x49e0 mm/gup.c:1494
       populate_vma_page_range+0x264/0x330 mm/gup.c:1932
       __mm_populate+0x27a/0x460 mm/gup.c:2035
       do_mlock+0x61f/0x7e0 mm/mlock.c:653
       __do_sys_mlock mm/mlock.c:661 [inline]
       __se_sys_mlock mm/mlock.c:659 [inline]
       __x64_sys_mlock+0x60/0x70 mm/mlock.c:659
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&mm->mmap_lock){++++}-{4:4}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __might_fault+0xc6/0x120 mm/memory.c:6751
       _inline_copy_from_user include/linux/uaccess.h:162 [inline]
       _copy_from_user+0x2a/0xc0 lib/usercopy.c:18
       copy_from_user include/linux/uaccess.h:212 [inline]
       __blk_trace_setup kernel/trace/blktrace.c:626 [inline]
       blk_trace_ioctl+0x1ad/0x9a0 kernel/trace/blktrace.c:740
       blkdev_ioctl+0x40c/0x6a0 block/ioctl.c:682
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl+0xf5/0x170 fs/ioctl.c:892
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&q->debugfs_mutex){+.+.}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3161 [inline]
       check_prevs_add kernel/locking/lockdep.c:3280 [inline]
       validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
       __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __mutex_lock_common kernel/locking/mutex.c:585 [inline]
       __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
       blk_mq_exit_sched+0x106/0x4a0 block/blk-mq-sched.c:531
       elevator_exit block/elevator.c:159 [inline]
       elevator_disable+0xde/0x3f0 block/elevator.c:674
       blk_mq_elv_switch_none block/blk-mq.c:4942 [inline]
       __blk_mq_update_nr_hw_queues block/blk-mq.c:5005 [inline]
       blk_mq_update_nr_hw_queues+0x683/0x1b20 block/blk-mq.c:5068
       nbd_start_device+0x16c/0xaa0 drivers/block/nbd.c:1413
       nbd_start_device_ioctl drivers/block/nbd.c:1464 [inline]
       __nbd_ioctl drivers/block/nbd.c:1539 [inline]
       nbd_ioctl+0x5dc/0xf40 drivers/block/nbd.c:1579
       blkdev_ioctl+0x57d/0x6a0 block/ioctl.c:693
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl+0xf5/0x170 fs/ioctl.c:892
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  &q->debugfs_mutex --> &q->q_usage_counter(io)#49 --> &eq->sysfs_lock

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&eq->sysfs_lock);
                               lock(&q->q_usage_counter(io)#49);
                               lock(&eq->sysfs_lock);
  lock(&q->debugfs_mutex);

 *** DEADLOCK ***

7 locks held by syz.0.2140/14023:
 #0: ffff888144738198 (&nbd->config_lock){+.+.}-{4:4}, at: nbd_ioctl+0x13c/0xf40 drivers/block/nbd.c:1572
 #1: ffff8881447380d8 (&set->tag_list_lock){+.+.}-{4:4}, at: blk_mq_update_nr_hw_queues+0xc2/0x1b20 block/blk-mq.c:5067
 #2: ffff888144740668 (&q->sysfs_dir_lock){+.+.}-{4:4}, at: __blk_mq_update_nr_hw_queues block/blk-mq.c:4995 [inline]
 #2: ffff888144740668 (&q->sysfs_dir_lock){+.+.}-{4:4}, at: blk_mq_update_nr_hw_queues+0x2c1/0x1b20 block/blk-mq.c:5068
 #3: ffff8881447405d8 (&q->sysfs_lock){+.+.}-{4:4}, at: __blk_mq_update_nr_hw_queues block/blk-mq.c:4996 [inline]
 #3: ffff8881447405d8 (&q->sysfs_lock){+.+.}-{4:4}, at: blk_mq_update_nr_hw_queues+0x2cf/0x1b20 block/blk-mq.c:5068
 #4: ffff8881447400a8 (&q->q_usage_counter(io)#49){++++}-{0:0}, at: nbd_start_device+0x16c/0xaa0 drivers/block/nbd.c:1413
 #5: ffff8881447400e0 (&q->q_usage_counter(queue)#33){+.+.}-{0:0}, at: nbd_start_device+0x16c/0xaa0 drivers/block/nbd.c:1413
 #6: ffff8880253298b8 (&eq->sysfs_lock){+.+.}-{4:4}, at: elevator_exit block/elevator.c:158 [inline]
 #6: ffff8880253298b8 (&eq->sysfs_lock){+.+.}-{4:4}, at: elevator_disable+0xd3/0x3f0 block/elevator.c:674

stack backtrace:
CPU: 0 UID: 0 PID: 14023 Comm: syz.0.2140 Not tainted 6.13.0-rc3-syzkaller-00073-geabcdba3ad40 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/25/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_circular_bug+0x13a/0x1b0 kernel/locking/lockdep.c:2074
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2206
 check_prev_add kernel/locking/lockdep.c:3161 [inline]
 check_prevs_add kernel/locking/lockdep.c:3280 [inline]
 validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
 __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
 __mutex_lock_common kernel/locking/mutex.c:585 [inline]
 __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
 blk_mq_exit_sched+0x106/0x4a0 block/blk-mq-sched.c:531
 elevator_exit block/elevator.c:159 [inline]
 elevator_disable+0xde/0x3f0 block/elevator.c:674
 blk_mq_elv_switch_none block/blk-mq.c:4942 [inline]
 __blk_mq_update_nr_hw_queues block/blk-mq.c:5005 [inline]
 blk_mq_update_nr_hw_queues+0x683/0x1b20 block/blk-mq.c:5068
 nbd_start_device+0x16c/0xaa0 drivers/block/nbd.c:1413
 nbd_start_device_ioctl drivers/block/nbd.c:1464 [inline]
 __nbd_ioctl drivers/block/nbd.c:1539 [inline]
 nbd_ioctl+0x5dc/0xf40 drivers/block/nbd.c:1579
 blkdev_ioctl+0x57d/0x6a0 block/ioctl.c:693
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:906 [inline]
 __se_sys_ioctl+0xf5/0x170 fs/ioctl.c:892
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f1716385d29
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f17171ea038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f1716575fa0 RCX: 00007f1716385d29
RDX: 0000000000000000 RSI: 000000000000ab03 RDI: 0000000000000006
RBP: 00007f1716401aa8 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f1716575fa0 R15: 00007ffc16a9d138
 </TASK>

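For readers unfamiliar with lockdep's report format: the "possible unsafe locking scenario" above is a classic AB-BA lock-order inversion. Below is a minimal userspace sketch of that inversion, not kernel code; the mutex names are stand-ins for eq->sysfs_lock and q->debugfs_mutex, and the report's longer dependency chain (debugfs_mutex -> mmap_lock -> invalidate_lock -> q_usage_counter(io) -> sysfs_lock) is collapsed to a single direct edge for clarity.

/*
 * Minimal userspace analogue (NOT kernel code) of the AB-BA inversion
 * lockdep reports above. Hypothetical names: "sysfs_lock" stands in
 * for eq->sysfs_lock, "debugfs_mutex" for q->debugfs_mutex.
 * Build with: gcc -pthread deadlock_sketch.c
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t sysfs_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t debugfs_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Models the new #0 edge: elevator_disable() holds eq->sysfs_lock
 * while blk_mq_exit_sched() tries to take q->debugfs_mutex. */
static void *teardown_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&sysfs_lock);
	pthread_mutex_lock(&debugfs_mutex);   /* waits if io_path holds it */
	pthread_mutex_unlock(&debugfs_mutex);
	pthread_mutex_unlock(&sysfs_lock);
	return NULL;
}

/* Models the existing #4..#1 chain, which already orders
 * debugfs_mutex (transitively) before eq->sysfs_lock. */
static void *io_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&debugfs_mutex);
	pthread_mutex_lock(&sysfs_lock);      /* waits if teardown_path holds it */
	pthread_mutex_unlock(&sysfs_lock);
	pthread_mutex_unlock(&debugfs_mutex);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	/* With unlucky interleaving, each thread ends up holding one
	 * mutex and blocking forever on the other. Lockdep flags the
	 * kernel equivalent of this cycle from the lock-acquisition
	 * order alone, without needing the hang to actually occur. */
	pthread_create(&a, NULL, teardown_path, NULL);
	pthread_create(&b, NULL, io_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	puts("no deadlock on this interleaving");
	return 0;
}

This is why the report is only a "possible" deadlock: the cycle was observed as an ordering violation across different call paths, not as an actual hang on this run.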
Crashes (11):
Time              Kernel    Commit        Syzkaller  Assets                                            Manager                           Title
2024/12/19 19:06  upstream  eabcdba3ad40  1d58202c   disk image, vmlinux, kernel image                 ci-upstream-kasan-gce-smack-root  possible deadlock in blk_mq_exit_sched
2024/12/16 03:25  upstream  78d4f34e2115  7cbfbb3a   disk image, vmlinux, kernel image                 ci-upstream-kasan-gce-smack-root  possible deadlock in blk_mq_exit_sched
2024/12/21 05:33  upstream  e9b8ffafd20a  d7f584ee   disk image (non-bootable), vmlinux, kernel image  ci-qemu-upstream                  possible deadlock in blk_mq_exit_sched
2024/12/20 18:26  upstream  8faabc041a00  49cfeac8   disk image (non-bootable), vmlinux, kernel image  ci-qemu-upstream                  possible deadlock in blk_mq_exit_sched
2024/12/20 08:22  upstream  8faabc041a00  c87fa8a3   disk image (non-bootable), vmlinux, kernel image  ci-qemu-upstream                  possible deadlock in blk_mq_exit_sched
2024/12/16 23:30  upstream  f44d154d6e3d  f93b2b55   disk image (non-bootable), vmlinux, kernel image  ci-qemu-upstream-386              possible deadlock in blk_mq_exit_sched
2024/12/16 07:51  upstream  dccbe2047a5b  7cbfbb3a   disk image (non-bootable), vmlinux, kernel image  ci-qemu-upstream-386              possible deadlock in blk_mq_exit_sched
2024/12/15 14:53  upstream  2d8308bf5b67  7cbfbb3a   disk image (non-bootable), vmlinux, kernel image  ci-qemu-upstream-386              possible deadlock in blk_mq_exit_sched
2024/12/15 02:33  upstream  a0e3919a2df2  7cbfbb3a   disk image (non-bootable), vmlinux, kernel image  ci-qemu-upstream-386              possible deadlock in blk_mq_exit_sched
2024/12/15 02:33  upstream  a0e3919a2df2  7cbfbb3a   disk image (non-bootable), vmlinux, kernel image  ci-qemu-upstream-386              possible deadlock in blk_mq_exit_sched
2024/12/15 01:57  upstream  a0e3919a2df2  7cbfbb3a   disk image (non-bootable), vmlinux, kernel image  ci-qemu-upstream-386              possible deadlock in blk_mq_exit_sched