syzbot


possible deadlock in elevator_disable

Status: upstream: reported on 2024/11/30 19:22
Subsystems: block
Reported-by: syzbot+dbad16606916438a362a@syzkaller.appspotmail.com
First crash: 24d, last: 2d20h
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [block?] possible deadlock in elevator_disable | 0 (1) | 2024/11/30 19:22

Sample crash report:
netlink: 4 bytes leftover after parsing attributes in process `syz.5.7116'.
netlink: 32 bytes leftover after parsing attributes in process `syz.5.7116'.
======================================================
WARNING: possible circular locking dependency detected
6.13.0-rc3-syzkaller-00044-gaef25be35d23 #0 Not tainted
------------------------------------------------------
syz.5.7116/510 is trying to acquire lock:
ffff88807a2678b8 (&eq->sysfs_lock){+.+.}-{4:4}, at: elevator_exit block/elevator.c:158 [inline]
ffff88807a2678b8 (&eq->sysfs_lock){+.+.}-{4:4}, at: elevator_disable+0xd3/0x3f0 block/elevator.c:674

but task is already holding lock:
ffff88814331d8b0 (&q->q_usage_counter(io)#49){++++}-{0:0}, at: nbd_start_device+0x16c/0xaa0 drivers/block/nbd.c:1413

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #4 (&q->q_usage_counter(io)#49){++++}-{0:0}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       bio_queue_enter block/blk.h:75 [inline]
       blk_mq_submit_bio+0x1536/0x2390 block/blk-mq.c:3090
       __submit_bio+0x2c6/0x560 block/blk-core.c:629
       __submit_bio_noacct_mq block/blk-core.c:710 [inline]
       submit_bio_noacct_nocheck+0x4d3/0xe30 block/blk-core.c:739
       submit_bh fs/buffer.c:2819 [inline]
       block_read_full_folio+0x9b3/0xae0 fs/buffer.c:2446
       filemap_read_folio+0x148/0x3b0 mm/filemap.c:2366
       filemap_update_page mm/filemap.c:2450 [inline]
       filemap_get_pages+0x18ca/0x2080 mm/filemap.c:2571
       filemap_read+0x452/0xf50 mm/filemap.c:2646
       blkdev_read_iter+0x2d8/0x430 block/fops.c:770
       new_sync_read fs/read_write.c:484 [inline]
       vfs_read+0x991/0xb70 fs/read_write.c:565
       ksys_read+0x18f/0x2b0 fs/read_write.c:708
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #3 (mapping.invalidate_lock#2){++++}-{4:4}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       down_read+0xb1/0xa40 kernel/locking/rwsem.c:1524
       filemap_invalidate_lock_shared include/linux/fs.h:873 [inline]
       page_cache_ra_unbounded+0x142/0x720 mm/readahead.c:226
       do_sync_mmap_readahead+0x499/0x970
       filemap_fault+0x8a9/0x1490 mm/filemap.c:3344
       __do_fault+0x135/0x390 mm/memory.c:4907
       do_read_fault mm/memory.c:5322 [inline]
       do_fault mm/memory.c:5456 [inline]
       do_pte_missing mm/memory.c:3979 [inline]
       handle_pte_fault+0x39eb/0x5ed0 mm/memory.c:5801
       __handle_mm_fault mm/memory.c:5944 [inline]
       handle_mm_fault+0x1053/0x1ad0 mm/memory.c:6112
       faultin_page mm/gup.c:1196 [inline]
       __get_user_pages+0x1c82/0x49e0 mm/gup.c:1494
       __get_user_pages_locked mm/gup.c:1760 [inline]
       get_user_pages_unlocked+0x2a8/0x9d0 mm/gup.c:2737
       hva_to_pfn_slow virt/kvm/kvm_main.c:2820 [inline]
       hva_to_pfn+0x445/0xfe0 virt/kvm/kvm_main.c:2916
       kvm_follow_pfn virt/kvm/kvm_main.c:2963 [inline]
       __kvm_faultin_pfn+0x497/0x580 virt/kvm/kvm_main.c:2984
       __kvm_mmu_faultin_pfn arch/x86/kvm/mmu/mmu.c:4354 [inline]
       kvm_mmu_faultin_pfn+0x6c3/0x1580 arch/x86/kvm/mmu/mmu.c:4474
       kvm_tdp_mmu_page_fault arch/x86/kvm/mmu/mmu.c:4642 [inline]
       kvm_tdp_page_fault+0x215/0x300 arch/x86/kvm/mmu/mmu.c:4678
       kvm_mmu_do_page_fault+0x583/0xca0 arch/x86/kvm/mmu/mmu_internal.h:325
       kvm_mmu_page_fault+0x2db/0xc30 arch/x86/kvm/mmu/mmu.c:6090
       __vmx_handle_exit arch/x86/kvm/vmx/vmx.c:6620 [inline]
       vmx_handle_exit+0x10e4/0x1e90 arch/x86/kvm/vmx/vmx.c:6637
       vcpu_enter_guest arch/x86/kvm/x86.c:11081 [inline]
       vcpu_run+0x586a/0x8a90 arch/x86/kvm/x86.c:11242
       kvm_arch_vcpu_ioctl_run+0xa76/0x19d0 arch/x86/kvm/x86.c:11560
       kvm_vcpu_ioctl+0x920/0xea0 virt/kvm/kvm_main.c:4340
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl+0xf5/0x170 fs/ioctl.c:892
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #2 (&mm->mmap_lock){++++}-{4:4}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __might_fault+0xc6/0x120 mm/memory.c:6751
       _inline_copy_from_user include/linux/uaccess.h:162 [inline]
       _copy_from_user+0x2a/0xc0 lib/usercopy.c:18
       copy_from_user include/linux/uaccess.h:212 [inline]
       __blk_trace_setup kernel/trace/blktrace.c:626 [inline]
       blk_trace_ioctl+0x1ad/0x9a0 kernel/trace/blktrace.c:740
       blkdev_ioctl+0x40c/0x6a0 block/ioctl.c:682
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl+0xf5/0x170 fs/ioctl.c:892
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&q->debugfs_mutex){+.+.}-{4:4}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __mutex_lock_common kernel/locking/mutex.c:585 [inline]
       __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
       blk_mq_exit_sched+0x106/0x4a0 block/blk-mq-sched.c:531
       elevator_exit+0x5e/0x80 block/elevator.c:159
       del_gendisk+0x7a8/0x920 block/genhd.c:735
       nbd_dev_remove drivers/block/nbd.c:264 [inline]
       nbd_dev_remove_work+0x47/0xe0 drivers/block/nbd.c:280
       process_one_work kernel/workqueue.c:3229 [inline]
       process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3310
       worker_thread+0x870/0xd30 kernel/workqueue.c:3391
       kthread+0x2f0/0x390 kernel/kthread.c:389
       ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

-> #0 (&eq->sysfs_lock){+.+.}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3161 [inline]
       check_prevs_add kernel/locking/lockdep.c:3280 [inline]
       validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
       __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __mutex_lock_common kernel/locking/mutex.c:585 [inline]
       __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
       elevator_exit block/elevator.c:158 [inline]
       elevator_disable+0xd3/0x3f0 block/elevator.c:674
       blk_mq_elv_switch_none block/blk-mq.c:4942 [inline]
       __blk_mq_update_nr_hw_queues block/blk-mq.c:5005 [inline]
       blk_mq_update_nr_hw_queues+0x683/0x1b20 block/blk-mq.c:5068
       nbd_start_device+0x16c/0xaa0 drivers/block/nbd.c:1413
       nbd_genl_connect+0x157c/0x1c80 drivers/block/nbd.c:2139
       genl_family_rcv_msg_doit net/netlink/genetlink.c:1115 [inline]
       genl_family_rcv_msg net/netlink/genetlink.c:1195 [inline]
       genl_rcv_msg+0xb14/0xec0 net/netlink/genetlink.c:1210
       netlink_rcv_skb+0x1e3/0x430 net/netlink/af_netlink.c:2542
       genl_rcv+0x28/0x40 net/netlink/genetlink.c:1219
       netlink_unicast_kernel net/netlink/af_netlink.c:1321 [inline]
       netlink_unicast+0x7f6/0x990 net/netlink/af_netlink.c:1347
       netlink_sendmsg+0x8e4/0xcb0 net/netlink/af_netlink.c:1891
       sock_sendmsg_nosec net/socket.c:711 [inline]
       __sock_sendmsg+0x221/0x270 net/socket.c:726
       ____sys_sendmsg+0x52a/0x7e0 net/socket.c:2583
       ___sys_sendmsg net/socket.c:2637 [inline]
       __sys_sendmsg+0x269/0x350 net/socket.c:2669
       do_syscall_x64 arch/x86/entry/common.c:52 [inline]
       do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  &eq->sysfs_lock --> mapping.invalidate_lock#2 --> &q->q_usage_counter(io)#49

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&q->q_usage_counter(io)#49);
                               lock(mapping.invalidate_lock#2);
                               lock(&q->q_usage_counter(io)#49);
  lock(&eq->sysfs_lock);

 *** DEADLOCK ***
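
Reading the scenario: q->q_usage_counter(io) is not a mutex; it is the queue's
freeze/enter reference, which current kernels annotate as a lock for lockdep
exactly so that freeze-vs-I/O inversions like this one can be detected.
Collapsing the intermediate #1..#4 hops, the cycle reduces to a classic ABBA
ordering between the queue freeze and eq->sysfs_lock. Below is a minimal
userspace sketch with pthread mutexes standing in for the two endpoint locks;
it only illustrates the ordering in the scenario above, it is not kernel code.

/*
 * abba.c - toy reduction of the cycle lockdep reports above.
 * Build: cc -pthread -o abba abba.c
 * Whether this actually hangs depends on thread interleaving, which is
 * exactly why lockdep flags the ordering instead of waiting for a hang.
 */
#include <pthread.h>

static pthread_mutex_t q_usage_counter_io = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t eq_sysfs_lock      = PTHREAD_MUTEX_INITIALIZER;

static void *cpu0(void *arg)	/* nbd_start_device() path (stack #0) */
{
	(void)arg;
	pthread_mutex_lock(&q_usage_counter_io);	/* blk_mq_freeze_queue() */
	pthread_mutex_lock(&eq_sysfs_lock);		/* elevator_exit()       */
	pthread_mutex_unlock(&eq_sysfs_lock);
	pthread_mutex_unlock(&q_usage_counter_io);
	return NULL;
}

static void *cpu1(void *arg)	/* stacks #1..#4, collapsed to the endpoints */
{
	(void)arg;
	pthread_mutex_lock(&eq_sysfs_lock);		/* head of the old chain */
	pthread_mutex_lock(&q_usage_counter_io);	/* bio_queue_enter()     */
	pthread_mutex_unlock(&q_usage_counter_io);
	pthread_mutex_unlock(&eq_sysfs_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, cpu0, NULL);
	pthread_create(&b, NULL, cpu1, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}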

8 locks held by syz.5.7116/510:
 #0: ffffffff8fd02fd0 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8fd02e88 (genl_mutex){+.+.}-{4:4}, at: genl_lock net/netlink/genetlink.c:35 [inline]
 #1: ffffffff8fd02e88 (genl_mutex){+.+.}-{4:4}, at: genl_op_lock net/netlink/genetlink.c:60 [inline]
 #1: ffffffff8fd02e88 (genl_mutex){+.+.}-{4:4}, at: genl_rcv_msg+0x121/0xec0 net/netlink/genetlink.c:1209
 #2: ffff888026187198 (&nbd->config_lock){+.+.}-{4:4}, at: nbd_genl_connect+0xc26/0x1c80 drivers/block/nbd.c:2049
 #3: ffff8880261870d8 (&set->tag_list_lock){+.+.}-{4:4}, at: blk_mq_update_nr_hw_queues+0xc2/0x1b20 block/blk-mq.c:5067
 #4: ffff88814331de70 (&q->sysfs_dir_lock){+.+.}-{4:4}, at: __blk_mq_update_nr_hw_queues block/blk-mq.c:4995 [inline]
 #4: ffff88814331de70 (&q->sysfs_dir_lock){+.+.}-{4:4}, at: blk_mq_update_nr_hw_queues+0x2c1/0x1b20 block/blk-mq.c:5068
 #5: ffff88814331dde0 (&q->sysfs_lock){+.+.}-{4:4}, at: __blk_mq_update_nr_hw_queues block/blk-mq.c:4996 [inline]
 #5: ffff88814331dde0 (&q->sysfs_lock){+.+.}-{4:4}, at: blk_mq_update_nr_hw_queues+0x2cf/0x1b20 block/blk-mq.c:5068
 #6: ffff88814331d8b0 (&q->q_usage_counter(io)#49){++++}-{0:0}, at: nbd_start_device+0x16c/0xaa0 drivers/block/nbd.c:1413
 #7: ffff88814331d8e8 (&q->q_usage_counter(queue)#33){+.+.}-{0:0}, at: nbd_start_device+0x16c/0xaa0 drivers/block/nbd.c:1413

stack backtrace:
CPU: 1 UID: 0 PID: 510 Comm: syz.5.7116 Not tainted 6.13.0-rc3-syzkaller-00044-gaef25be35d23 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/25/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_circular_bug+0x13a/0x1b0 kernel/locking/lockdep.c:2074
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2206
 check_prev_add kernel/locking/lockdep.c:3161 [inline]
 check_prevs_add kernel/locking/lockdep.c:3280 [inline]
 validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
 __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
 __mutex_lock_common kernel/locking/mutex.c:585 [inline]
 __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
 elevator_exit block/elevator.c:158 [inline]
 elevator_disable+0xd3/0x3f0 block/elevator.c:674
 blk_mq_elv_switch_none block/blk-mq.c:4942 [inline]
 __blk_mq_update_nr_hw_queues block/blk-mq.c:5005 [inline]
 blk_mq_update_nr_hw_queues+0x683/0x1b20 block/blk-mq.c:5068
 nbd_start_device+0x16c/0xaa0 drivers/block/nbd.c:1413
 nbd_genl_connect+0x157c/0x1c80 drivers/block/nbd.c:2139
 genl_family_rcv_msg_doit net/netlink/genetlink.c:1115 [inline]
 genl_family_rcv_msg net/netlink/genetlink.c:1195 [inline]
 genl_rcv_msg+0xb14/0xec0 net/netlink/genetlink.c:1210
 netlink_rcv_skb+0x1e3/0x430 net/netlink/af_netlink.c:2542
 genl_rcv+0x28/0x40 net/netlink/genetlink.c:1219
 netlink_unicast_kernel net/netlink/af_netlink.c:1321 [inline]
 netlink_unicast+0x7f6/0x990 net/netlink/af_netlink.c:1347
 netlink_sendmsg+0x8e4/0xcb0 net/netlink/af_netlink.c:1891
 sock_sendmsg_nosec net/socket.c:711 [inline]
 __sock_sendmsg+0x221/0x270 net/socket.c:726
 ____sys_sendmsg+0x52a/0x7e0 net/socket.c:2583
 ___sys_sendmsg net/socket.c:2637 [inline]
 __sys_sendmsg+0x269/0x350 net/socket.c:2669
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ff68cb85d29
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ff68d928038 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007ff68cd75fa0 RCX: 00007ff68cb85d29
RDX: 0000000000000000 RSI: 00000000200002c0 RDI: 000000000000000a
RBP: 00007ff68cc01a20 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007ff68cd75fa0 R15: 00007ffcc2ee1cb8
 </TASK>
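
The decision behind this report is visible in the backtrace: check_noncircular()
(called from validate_chain(), see the stack above) tests whether the lock about
to be recorded as "acquired after" already reaches the held lock through
previously recorded dependency edges. The following is a toy version of that
test over the five lock classes in this chain; it is a self-contained sketch
only, the real implementation in kernel/locking/lockdep.c is a breadth-first
walk over per-class dependency lists, not this naive recursion.

/*
 * cycle.c - toy version of the reachability check behind this report.
 * Build: cc -o cycle cycle.c
 */
#include <stdio.h>

enum {
	Q_USAGE_COUNTER_IO,	/* &q->q_usage_counter(io) */
	INVALIDATE_LOCK,	/* mapping.invalidate_lock */
	MMAP_LOCK,		/* &mm->mmap_lock          */
	DEBUGFS_MUTEX,		/* &q->debugfs_mutex       */
	EQ_SYSFS_LOCK,		/* &eq->sysfs_lock         */
	NR_CLASSES
};

static const char *name[NR_CLASSES] = {
	"&q->q_usage_counter(io)", "mapping.invalidate_lock",
	"&mm->mmap_lock", "&q->debugfs_mutex", "&eq->sysfs_lock",
};

/* edge[a][b] != 0 means "b was acquired while a was held" */
static int edge[NR_CLASSES][NR_CLASSES];

static int reaches(int from, int to)
{
	if (from == to)
		return 1;
	for (int next = 0; next < NR_CLASSES; next++)
		if (edge[from][next] && reaches(next, to))
			return 1;
	return 0;
}

int main(void)
{
	/* Existing dependencies, one per stack #1..#4 above. */
	edge[EQ_SYSFS_LOCK][DEBUGFS_MUTEX] = 1;		/* #1 blk_mq_exit_sched */
	edge[DEBUGFS_MUTEX][MMAP_LOCK] = 1;		/* #2 blk_trace_ioctl   */
	edge[MMAP_LOCK][INVALIDATE_LOCK] = 1;		/* #3 filemap_fault     */
	edge[INVALIDATE_LOCK][Q_USAGE_COUNTER_IO] = 1;	/* #4 bio_queue_enter   */

	/*
	 * Stack #0 wants to record q_usage_counter(io) -> eq->sysfs_lock.
	 * That closes a cycle iff eq->sysfs_lock already reaches
	 * q_usage_counter(io) -- which it does, via #1..#4.
	 */
	if (reaches(EQ_SYSFS_LOCK, Q_USAGE_COUNTER_IO))
		printf("adding %s -> %s would close a cycle\n",
		       name[Q_USAGE_COUNTER_IO], name[EQ_SYSFS_LOCK]);
	return 0;
}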

Crashes (25):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2024/12/18 17:32 | upstream | aef25be35d23 | 1432fc84 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-smack-root | possible deadlock in elevator_disable
2024/12/14 20:36 | upstream | a446e965a188 | 7cbfbb3a | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-smack-root | possible deadlock in elevator_disable
2024/12/12 12:42 | upstream | 231825b2e1ff | 941924eb | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-smack-root | possible deadlock in elevator_disable
2024/12/07 03:47 | upstream | 9a6e8c7c3a02 | 9ac0fdc6 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-smack-root | possible deadlock in elevator_disable
2024/11/27 02:49 | upstream | 7eef7e306d3c | 52b38cc1 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-smack-root | possible deadlock in elevator_disable
2024/12/16 20:53 | upstream | 78d4f34e2115 | eec85da6 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream | possible deadlock in elevator_disable
2024/12/08 14:52 | upstream | 7503345ac5f5 | 9ac0fdc6 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream | possible deadlock in elevator_disable
2024/12/07 20:59 | upstream | 7503345ac5f5 | 9ac0fdc6 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream | possible deadlock in elevator_disable
2024/12/04 11:19 | upstream | ceb8bf2ceaa7 | b50eb251 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream | possible deadlock in elevator_disable
2024/12/18 14:45 | upstream | aef25be35d23 | 1432fc84 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in elevator_disable
2024/12/18 14:45 | upstream | aef25be35d23 | 1432fc84 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in elevator_disable
2024/12/18 14:25 | upstream | aef25be35d23 | 1432fc84 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in elevator_disable
2024/12/18 14:25 | upstream | aef25be35d23 | 1432fc84 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in elevator_disable
2024/12/18 04:33 | upstream | 59dbb9d81adf | a0626d3a | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in elevator_disable
2024/12/17 19:18 | upstream | 59dbb9d81adf | a0626d3a | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in elevator_disable
2024/12/17 16:49 | upstream | f44d154d6e3d | a0626d3a | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in elevator_disable
2024/12/17 16:49 | upstream | f44d154d6e3d | a0626d3a | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in elevator_disable
2024/12/17 13:24 | upstream | f44d154d6e3d | bc1a1b50 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in elevator_disable
2024/12/16 12:04 | upstream | 78d4f34e2115 | eec85da6 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in elevator_disable
2024/12/15 13:40 | upstream | 2d8308bf5b67 | 7cbfbb3a | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in elevator_disable
2024/12/14 17:11 | upstream | a446e965a188 | 7cbfbb3a | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in elevator_disable
2024/12/14 13:41 | upstream | a446e965a188 | 7cbfbb3a | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in elevator_disable
2024/12/07 17:34 | upstream | b5f217084ab3 | 9ac0fdc6 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in elevator_disable
2024/12/01 06:27 | upstream | d8b78066f4c9 | 68914665 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in elevator_disable
2024/11/26 19:18 | upstream | 7eef7e306d3c | e9a9a9f2 | .config | console log | report | - | - | info | [disk image (non-bootable)] [vmlinux] [kernel image] | ci-qemu-upstream-386 | possible deadlock in elevator_disable