======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Tainted: G L
------------------------------------------------------
syz.2.6827/25572 is trying to acquire lock:
ffffffff8d69a778 (pcpu_alloc_mutex){+.+.}-{4:4}, at: pcpu_alloc_noprof+0x26c/0x16d0 mm/percpu.c:1782

but task is already holding lock:
ffff888142fb0060 (&q->q_usage_counter(io)#51){++++}-{0:0}, at: nbd_start_device+0x17f/0xb20 drivers/block/nbd.c:1489

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&q->q_usage_counter(io)#51){++++}-{0:0}:
       blk_alloc_queue+0x537/0x620 block/blk-core.c:461
       blk_mq_alloc_queue block/blk-mq.c:4415 [inline]
       __blk_mq_alloc_disk+0x15c/0x340 block/blk-mq.c:4462
       nbd_dev_add+0x46c/0xae0 drivers/block/nbd.c:1954
       nbd_init+0x168/0x1f0 drivers/block/nbd.c:2692
       do_one_initcall+0x1f1/0x800 init/main.c:1378
       do_initcall_level+0x104/0x190 init/main.c:1440
       do_initcalls+0x59/0xa0 init/main.c:1456
       kernel_init_freeable+0x2a7/0x3d0 init/main.c:1688
       kernel_init+0x1d/0x1d0 init/main.c:1578
       ret_from_fork+0x510/0xa50 arch/x86/kernel/process.c:158
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246

-> #1 (fs_reclaim){+.+.}-{0:0}:
       __fs_reclaim_acquire mm/page_alloc.c:4301 [inline]
       fs_reclaim_acquire+0x72/0x100 mm/page_alloc.c:4315
       might_alloc include/linux/sched/mm.h:317 [inline]
       slab_pre_alloc_hook mm/slub.c:4904 [inline]
       slab_alloc_node mm/slub.c:5239 [inline]
       __do_kmalloc_node mm/slub.c:5656 [inline]
       __kmalloc_noprof+0x9d/0x7e0 mm/slub.c:5669
       kmalloc_noprof include/linux/slab.h:961 [inline]
       kzalloc_noprof include/linux/slab.h:1094 [inline]
       pcpu_mem_zalloc mm/percpu.c:510 [inline]
       pcpu_alloc_chunk mm/percpu.c:1430 [inline]
       pcpu_create_chunk+0x54/0xbe0 mm/percpu-vm.c:338
       pcpu_alloc_noprof+0x77d/0x16d0 mm/percpu.c:1838
       crash_notes_memory_init+0x1d/0x50 kernel/crash_core.c:491
       do_one_initcall+0x1f1/0x800 init/main.c:1378
       do_initcall_level+0x104/0x190 init/main.c:1440
       do_initcalls+0x59/0xa0 init/main.c:1456
       kernel_init_freeable+0x2a7/0x3d0 init/main.c:1688
       kernel_init+0x1d/0x1d0 init/main.c:1578
       ret_from_fork+0x510/0xa50 arch/x86/kernel/process.c:158
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246

-> #0 (pcpu_alloc_mutex){+.+.}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3165 [inline]
       check_prevs_add kernel/locking/lockdep.c:3284 [inline]
       validate_chain kernel/locking/lockdep.c:3908 [inline]
       __lock_acquire+0x15a6/0x2cf0 kernel/locking/lockdep.c:5237
       lock_acquire+0x107/0x340 kernel/locking/lockdep.c:5868
       __mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
       _mutex_lock_killable+0x63/0x1d0 kernel/locking/rtmutex_api.c:573
       pcpu_alloc_noprof+0x26c/0x16d0 mm/percpu.c:1782
       init_alloc_hint lib/sbitmap.c:16 [inline]
       sbitmap_init_node+0x1e1/0x640 lib/sbitmap.c:126
       sbitmap_queue_init_node+0x3e/0x4d0 lib/sbitmap.c:454
       bt_alloc block/blk-mq-tag.c:546 [inline]
       blk_mq_init_tags+0x164/0x2d0 block/blk-mq-tag.c:571
       blk_mq_alloc_rq_map block/blk-mq.c:3546 [inline]
       blk_mq_alloc_map_and_rqs+0xbb/0x9c0 block/blk-mq.c:4114
       __blk_mq_alloc_map_and_rqs block/blk-mq.c:4136 [inline]
       blk_mq_realloc_tag_set_tags block/blk-mq.c:4803 [inline]
       __blk_mq_update_nr_hw_queues block/blk-mq.c:5134 [inline]
       blk_mq_update_nr_hw_queues+0xa3a/0x1a90 block/blk-mq.c:5186
       nbd_start_device+0x17f/0xb20 drivers/block/nbd.c:1489
       nbd_start_device_ioctl drivers/block/nbd.c:1548 [inline]
       __nbd_ioctl drivers/block/nbd.c:1623 [inline]
       nbd_ioctl+0x570/0xe10 drivers/block/nbd.c:1663
       blkdev_ioctl+0x611/0x710 block/ioctl.c:792
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:597 [inline]
       __se_sys_ioctl+0xff/0x170 fs/ioctl.c:583
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xec/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  pcpu_alloc_mutex --> fs_reclaim --> &q->q_usage_counter(io)#51

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&q->q_usage_counter(io)#51);
                               lock(fs_reclaim);
                               lock(&q->q_usage_counter(io)#51);
  lock(pcpu_alloc_mutex);

 *** DEADLOCK ***

4 locks held by syz.2.6827/25572:
 #0: ffff8880245521b0 (&set->update_nr_hwq_lock){++++}-{4:4}, at: blk_mq_update_nr_hw_queues+0xac/0x1a90 block/blk-mq.c:5184
 #1: ffff8880245520c8 (&set->tag_list_lock){+.+.}-{4:4}, at: blk_mq_update_nr_hw_queues+0xbf/0x1a90 block/blk-mq.c:5185
 #2: ffff888142fb0060 (&q->q_usage_counter(io)#51){++++}-{0:0}, at: nbd_start_device+0x17f/0xb20 drivers/block/nbd.c:1489
 #3: ffff888142fb0098 (&q->q_usage_counter(queue)#35){+.+.}-{0:0}, at: nbd_start_device+0x17f/0xb20 drivers/block/nbd.c:1489

stack backtrace:
CPU: 1 UID: 0 PID: 25572 Comm: syz.2.6827 Tainted: G L syzkaller #0 PREEMPT_{RT,(full)}
Tainted: [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 print_circular_bug+0x2e2/0x300 kernel/locking/lockdep.c:2043
 check_noncircular+0x12e/0x150 kernel/locking/lockdep.c:2175
 check_prev_add kernel/locking/lockdep.c:3165 [inline]
 check_prevs_add kernel/locking/lockdep.c:3284 [inline]
 validate_chain kernel/locking/lockdep.c:3908 [inline]
 __lock_acquire+0x15a6/0x2cf0 kernel/locking/lockdep.c:5237
 lock_acquire+0x107/0x340 kernel/locking/lockdep.c:5868
 __mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
 _mutex_lock_killable+0x63/0x1d0 kernel/locking/rtmutex_api.c:573
 pcpu_alloc_noprof+0x26c/0x16d0 mm/percpu.c:1782
 init_alloc_hint lib/sbitmap.c:16 [inline]
 sbitmap_init_node+0x1e1/0x640 lib/sbitmap.c:126
 sbitmap_queue_init_node+0x3e/0x4d0 lib/sbitmap.c:454
 bt_alloc block/blk-mq-tag.c:546 [inline]
 blk_mq_init_tags+0x164/0x2d0 block/blk-mq-tag.c:571
 blk_mq_alloc_rq_map block/blk-mq.c:3546 [inline]
 blk_mq_alloc_map_and_rqs+0xbb/0x9c0 block/blk-mq.c:4114
 __blk_mq_alloc_map_and_rqs block/blk-mq.c:4136 [inline]
 blk_mq_realloc_tag_set_tags block/blk-mq.c:4803 [inline]
 __blk_mq_update_nr_hw_queues block/blk-mq.c:5134 [inline]
 blk_mq_update_nr_hw_queues+0xa3a/0x1a90 block/blk-mq.c:5186
 nbd_start_device+0x17f/0xb20 drivers/block/nbd.c:1489
 nbd_start_device_ioctl drivers/block/nbd.c:1548 [inline]
 __nbd_ioctl drivers/block/nbd.c:1623 [inline]
 nbd_ioctl+0x570/0xe10 drivers/block/nbd.c:1663
 blkdev_ioctl+0x611/0x710 block/ioctl.c:792
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:597 [inline]
 __se_sys_ioctl+0xff/0x170 fs/ioctl.c:583
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xec/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f72cb78f749
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f72c99f6038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f72cb9e5fa0 RCX: 00007f72cb78f749
RDX: 0000000000000000 RSI: 000000000000ab03 RDI: 0000000000000004
RBP: 00007f72cb813f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f72cb9e6038 R14: 00007f72cb9e5fa0 R15: 00007ffc786776c8
block nbd2: shutting down sockets
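
Analysis note: per the traces above, nbd_start_device() freezes the queue (acquiring the &q->q_usage_counter(io) annotation) and then calls blk_mq_update_nr_hw_queues(), whose sbitmap tag allocation reaches pcpu_alloc_mutex; lockdep had already recorded pcpu_alloc_mutex --> fs_reclaim --> &q->q_usage_counter(io) from the #1 and #2 traces, so the new acquisition closes the cycle. The userspace sketch below is purely illustrative and is not kernel code, not the nbd/blk-mq API, and not a fix: it collapses the three lock classes into two plain pthread mutexes (lock_a standing in for the queue-freeze annotation, lock_b for the allocator side) just to show the ordering inversion that the "Possible unsafe locking scenario" table describes.

/*
 * Illustrative reduction of the reported inversion to a two-mutex
 * ABBA pattern. The names and the usleep() race window are invented
 * for demonstration; only the lock ordering mirrors the report.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* ~ q_usage_counter(io) */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* ~ pcpu_alloc_mutex/fs_reclaim */

/* Mirrors the #0 path: queue already frozen, then allocate (A -> B). */
static void *path_update_nr_hw_queues(void *arg)
{
	pthread_mutex_lock(&lock_a);
	usleep(1000);                 /* widen the race window */
	pthread_mutex_lock(&lock_b);  /* can block forever if the other path holds B */
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

/* Mirrors the recorded #1/#2 ordering: allocator/reclaim first, then the queue (B -> A). */
static void *path_reclaim(void *arg)
{
	pthread_mutex_lock(&lock_b);
	usleep(1000);
	pthread_mutex_lock(&lock_a);  /* can block forever if the other path holds A */
	pthread_mutex_unlock(&lock_a);
	pthread_mutex_unlock(&lock_b);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, path_update_nr_hw_queues, NULL);
	pthread_create(&t2, NULL, path_reclaim, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	puts("finished without deadlocking (the race did not trigger this run)");
	return 0;
}

With the sleeps in place the two threads usually stop making progress, each waiting on the mutex the other holds; lockdep flags the kernel equivalent before it ever has to happen, which is what the report above shows.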