======================================================
WARNING: possible circular locking dependency detected
6.15.0-rc3-syzkaller-00342-g5bc1018675ec #0 Not tainted
------------------------------------------------------
syz-executor.0/6136 is trying to acquire lock:
ffff888026382318 (&q->elevator_lock){+.+.}-{4:4}, at: blk_mq_elv_switch_none block/blk-mq.c:4951 [inline]
ffff888026382318 (&q->elevator_lock){+.+.}-{4:4}, at: __blk_mq_update_nr_hw_queues block/blk-mq.c:5031 [inline]
ffff888026382318 (&q->elevator_lock){+.+.}-{4:4}, at: blk_mq_update_nr_hw_queues+0x4a9/0x1370 block/blk-mq.c:5083

but task is already holding lock:
ffff888026381de8 (&q->q_usage_counter(io)#49){++++}-{0:0}, at: blk_mq_freeze_queue_nomemsave block/blk-mq.c:215 [inline]
ffff888026381de8 (&q->q_usage_counter(io)#49){++++}-{0:0}, at: __blk_mq_update_nr_hw_queues block/blk-mq.c:5023 [inline]
ffff888026381de8 (&q->q_usage_counter(io)#49){++++}-{0:0}, at: blk_mq_update_nr_hw_queues+0x263/0x1370 block/blk-mq.c:5083

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&q->q_usage_counter(io)#49){++++}-{0:0}:
       blk_alloc_queue+0x619/0x760 block/blk-core.c:461
       blk_mq_alloc_queue+0x179/0x290 block/blk-mq.c:4348
       __blk_mq_alloc_disk+0x29/0x120 block/blk-mq.c:4395
       nbd_dev_add+0x49d/0xbb0 drivers/block/nbd.c:1933
       nbd_init+0x181/0x320 drivers/block/nbd.c:2670
       do_one_initcall+0x120/0x6e0 init/main.c:1257
       do_initcall_level init/main.c:1319 [inline]
       do_initcalls init/main.c:1335 [inline]
       do_basic_setup init/main.c:1354 [inline]
       kernel_init_freeable+0x5c2/0x900 init/main.c:1567
       kernel_init+0x1c/0x2b0 init/main.c:1457
       ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:153
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

-> #1 (fs_reclaim){+.+.}-{0:0}:
       __fs_reclaim_acquire mm/page_alloc.c:4064 [inline]
       fs_reclaim_acquire+0x102/0x150 mm/page_alloc.c:4078
       might_alloc include/linux/sched/mm.h:318 [inline]
       slab_pre_alloc_hook mm/slub.c:4112 [inline]
       slab_alloc_node mm/slub.c:4190 [inline]
       kmem_cache_alloc_noprof+0x53/0x3b0 mm/slub.c:4217
       __kernfs_new_node+0xd2/0x8a0 fs/kernfs/dir.c:637
       kernfs_new_node+0x13c/0x1e0 fs/kernfs/dir.c:713
       kernfs_create_dir_ns+0x4c/0x1a0 fs/kernfs/dir.c:1081
       sysfs_create_dir_ns+0x13a/0x2b0 fs/sysfs/dir.c:59
       create_dir lib/kobject.c:73 [inline]
       kobject_add_internal+0x2c4/0x9b0 lib/kobject.c:240
       kobject_add_varg lib/kobject.c:374 [inline]
       kobject_add+0x16e/0x240 lib/kobject.c:426
       elv_register_queue+0xd3/0x2a0 block/elevator.c:462
       blk_register_queue+0x3c4/0x560 block/blk-sysfs.c:874
       add_disk_fwnode+0x911/0x13a0 block/genhd.c:505
       add_disk include/linux/blkdev.h:757 [inline]
       nbd_dev_add+0x78e/0xbb0 drivers/block/nbd.c:1963
       nbd_init+0x181/0x320 drivers/block/nbd.c:2670
       do_one_initcall+0x120/0x6e0 init/main.c:1257
       do_initcall_level init/main.c:1319 [inline]
       do_initcalls init/main.c:1335 [inline]
       do_basic_setup init/main.c:1354 [inline]
       kernel_init_freeable+0x5c2/0x900 init/main.c:1567
       kernel_init+0x1c/0x2b0 init/main.c:1457
       ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:153
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

-> #0 (&q->elevator_lock){+.+.}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3166 [inline]
       check_prevs_add kernel/locking/lockdep.c:3285 [inline]
       validate_chain kernel/locking/lockdep.c:3909 [inline]
       __lock_acquire+0x1173/0x1ba0 kernel/locking/lockdep.c:5235
       lock_acquire kernel/locking/lockdep.c:5866 [inline]
       lock_acquire+0x179/0x350 kernel/locking/lockdep.c:5823
       __mutex_lock_common kernel/locking/mutex.c:601 [inline]
       __mutex_lock+0x199/0xb90 kernel/locking/mutex.c:746
       blk_mq_elv_switch_none block/blk-mq.c:4951 [inline]
       __blk_mq_update_nr_hw_queues block/blk-mq.c:5031 [inline]
       blk_mq_update_nr_hw_queues+0x4a9/0x1370 block/blk-mq.c:5083
       nbd_start_device+0x172/0xcd0 drivers/block/nbd.c:1476
       nbd_start_device_ioctl drivers/block/nbd.c:1527 [inline]
       __nbd_ioctl drivers/block/nbd.c:1602 [inline]
       nbd_ioctl+0x219/0xda0 drivers/block/nbd.c:1642
       blkdev_ioctl+0x274/0x6d0 block/ioctl.c:704
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:906 [inline]
       __se_sys_ioctl fs/ioctl.c:892 [inline]
       __x64_sys_ioctl+0x190/0x200 fs/ioctl.c:892
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xcd/0x260 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  &q->elevator_lock --> fs_reclaim --> &q->q_usage_counter(io)#49

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&q->q_usage_counter(io)#49);
                               lock(fs_reclaim);
                               lock(&q->q_usage_counter(io)#49);
  lock(&q->elevator_lock);

 *** DEADLOCK ***

4 locks held by syz-executor.0/6136:
 #0: ffff888026578998 (&nbd->config_lock){+.+.}-{4:4}, at: nbd_ioctl+0x150/0xda0 drivers/block/nbd.c:1635
 #1: ffff8880265788d8 (&set->tag_list_lock){+.+.}-{4:4}, at: blk_mq_update_nr_hw_queues+0x8e/0x1370 block/blk-mq.c:5082
 #2: ffff888026381de8 (&q->q_usage_counter(io)#49){++++}-{0:0}, at: blk_mq_freeze_queue_nomemsave block/blk-mq.c:215 [inline]
 #2: ffff888026381de8 (&q->q_usage_counter(io)#49){++++}-{0:0}, at: __blk_mq_update_nr_hw_queues block/blk-mq.c:5023 [inline]
 #2: ffff888026381de8 (&q->q_usage_counter(io)#49){++++}-{0:0}, at: blk_mq_update_nr_hw_queues+0x263/0x1370 block/blk-mq.c:5083
 #3: ffff888026381e20 (&q->q_usage_counter(queue)){+.+.}-{0:0}, at: blk_mq_freeze_queue_nomemsave block/blk-mq.c:215 [inline]
 #3: ffff888026381e20 (&q->q_usage_counter(queue)){+.+.}-{0:0}, at: __blk_mq_update_nr_hw_queues block/blk-mq.c:5023 [inline]
 #3: ffff888026381e20 (&q->q_usage_counter(queue)){+.+.}-{0:0}, at: blk_mq_update_nr_hw_queues+0x263/0x1370 block/blk-mq.c:5083

stack backtrace:
CPU: 1 UID: 0 PID: 6136 Comm: syz-executor.0 Not tainted 6.15.0-rc3-syzkaller-00342-g5bc1018675ec #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/19/2025
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 print_circular_bug+0x275/0x350 kernel/locking/lockdep.c:2079
 check_noncircular+0x14c/0x170 kernel/locking/lockdep.c:2211
 check_prev_add kernel/locking/lockdep.c:3166 [inline]
 check_prevs_add kernel/locking/lockdep.c:3285 [inline]
 validate_chain kernel/locking/lockdep.c:3909 [inline]
 __lock_acquire+0x1173/0x1ba0 kernel/locking/lockdep.c:5235
 lock_acquire kernel/locking/lockdep.c:5866 [inline]
 lock_acquire+0x179/0x350 kernel/locking/lockdep.c:5823
 __mutex_lock_common kernel/locking/mutex.c:601 [inline]
 __mutex_lock+0x199/0xb90 kernel/locking/mutex.c:746
 blk_mq_elv_switch_none block/blk-mq.c:4951 [inline]
 __blk_mq_update_nr_hw_queues block/blk-mq.c:5031 [inline]
 blk_mq_update_nr_hw_queues+0x4a9/0x1370 block/blk-mq.c:5083
 nbd_start_device+0x172/0xcd0 drivers/block/nbd.c:1476
 nbd_start_device_ioctl drivers/block/nbd.c:1527 [inline]
 __nbd_ioctl drivers/block/nbd.c:1602 [inline]
 nbd_ioctl+0x219/0xda0 drivers/block/nbd.c:1642
 blkdev_ioctl+0x274/0x6d0 block/ioctl.c:704
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:906 [inline]
 __se_sys_ioctl fs/ioctl.c:892 [inline]
 __x64_sys_ioctl+0x190/0x200 fs/ioctl.c:892
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0x260 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fe9772799e9
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fe9784ad0c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007fe97738bf60 RCX: 00007fe9772799e9
RDX: 0000000000000000 RSI: 000000000000ab03 RDI: 0000000000000007
RBP: 00007fe9784ad120 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000002
R13: 000000000000000b R14: 00007fe97738bf60 R15: 00007ffdafb49678
block nbd0: shutting down sockets
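The cycle lockdep reports reduces to a three-edge dependency graph. The following is a minimal Python sketch (not kernel code; all names are illustrative) of the ordering check that check_noncircular() effectively performs over the lock classes in this report: an edge A -> B means "B was acquired while A was held", the first two edges are the existing chain (#1 and #2 above), and the acquisition in blk_mq_update_nr_hw_queues() supplies the edge that closes the circle.

```python
# Existing dependency edges recorded by lockdep, taken from the report:
#   &q->elevator_lock --> fs_reclaim --> &q->q_usage_counter(io)
deps = {
    "&q->elevator_lock": ["fs_reclaim"],
    "fs_reclaim": ["&q->q_usage_counter(io)"],
}

def creates_cycle(deps, held, wanted):
    """Would recording the new edge held -> wanted close a cycle, i.e.
    is `held` already reachable from `wanted` via existing edges?"""
    stack, seen = [wanted], set()
    while stack:
        node = stack.pop()
        if node == held:
            return True          # wanted -> ... -> held exists: deadlock
        if node in seen:
            continue
        seen.add(node)
        stack.extend(deps.get(node, []))
    return False

# blk_mq_update_nr_hw_queues() holds q_usage_counter(io) (the queue is
# frozen) and then asks for elevator_lock -- the edge that completes
# the circle, so lockdep prints the splat above.
print(creates_cycle(deps, "&q->q_usage_counter(io)", "&q->elevator_lock"))  # True
```

Read this way, the "Possible unsafe locking scenario" section is just one concrete interleaving of that cycle: CPU1 can block on the frozen q_usage_counter while inside fs_reclaim with elevator_lock's dependency chain behind it, while CPU0 holds the freeze and waits for elevator_lock.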