======================================================
WARNING: possible circular locking dependency detected
6.12.0-rc5-next-20241101-syzkaller #0 Not tainted
------------------------------------------------------
kswapd0/89 is trying to acquire lock:
ffff888143f33ac8 (&q->q_usage_counter(io)#66){++++}-{0:0}, at: __submit_bio+0x2c2/0x560 block/blk-core.c:629

but task is already holding lock:
ffffffff8ea3cca0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6862 [inline]
ffffffff8ea3cca0 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xbf1/0x3700 mm/vmscan.c:7244

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}-{0:0}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __fs_reclaim_acquire mm/page_alloc.c:3872 [inline]
       fs_reclaim_acquire+0x88/0x130 mm/page_alloc.c:3886
       might_alloc include/linux/sched/mm.h:318 [inline]
       slab_pre_alloc_hook mm/slub.c:4055 [inline]
       slab_alloc_node mm/slub.c:4133 [inline]
       __kmalloc_cache_node_noprof+0x40/0x3a0 mm/slub.c:4322
       kmalloc_node_noprof include/linux/slab.h:924 [inline]
       blk_mq_init_tags+0x73/0x270 block/blk-mq-tag.c:578
       blk_mq_alloc_rq_map block/blk-mq.c:3457 [inline]
       blk_mq_alloc_map_and_rqs+0xc5/0x970 block/blk-mq.c:3941
       blk_mq_sched_alloc_map_and_rqs block/blk-mq-sched.c:389 [inline]
       blk_mq_init_sched+0x2cf/0x830 block/blk-mq-sched.c:464
       elevator_init_mq+0x1d8/0x2d0 block/elevator.c:605
       add_disk_fwnode+0x10d/0xf80 block/genhd.c:413
       sd_probe+0xba6/0x1100 drivers/scsi/sd.c:4024
       really_probe+0x2b8/0xad0 drivers/base/dd.c:658
       __driver_probe_device+0x1a2/0x390 drivers/base/dd.c:800
       driver_probe_device+0x50/0x430 drivers/base/dd.c:830
       __device_attach_driver+0x2d6/0x530 drivers/base/dd.c:958
       bus_for_each_drv+0x24e/0x2e0 drivers/base/bus.c:459
       __device_attach_async_helper+0x22d/0x300 drivers/base/dd.c:987
       async_run_entry_fn+0xa8/0x420 kernel/async.c:129
       process_one_work kernel/workqueue.c:3229 [inline]
       process_scheduled_works+0xa63/0x1850 kernel/workqueue.c:3310
       worker_thread+0x870/0xd30 kernel/workqueue.c:3391
       kthread+0x2f0/0x390 kernel/kthread.c:389
       ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

-> #0 (&q->q_usage_counter(io)#66){++++}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3161 [inline]
       check_prevs_add kernel/locking/lockdep.c:3280 [inline]
       validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
       __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       bio_queue_enter block/blk.h:75 [inline]
       blk_mq_submit_bio+0x1510/0x2490 block/blk-mq.c:3069
       __submit_bio+0x2c2/0x560 block/blk-core.c:629
       __submit_bio_noacct_mq block/blk-core.c:710 [inline]
       submit_bio_noacct_nocheck+0x4d3/0xe30 block/blk-core.c:739
       swap_writepage_bdev_async mm/page_io.c:443 [inline]
       __swap_writepage+0x5fc/0x1400 mm/page_io.c:466
       swap_writepage+0x8f4/0xf70 mm/page_io.c:281
       shmem_writepage+0x14d0/0x1f40 mm/shmem.c:1561
       pageout mm/vmscan.c:689 [inline]
       shrink_folio_list+0x3c0e/0x8cc0 mm/vmscan.c:1367
       evict_folios+0x5568/0x7be0 mm/vmscan.c:4591
       try_to_shrink_lruvec+0x9a6/0xc70 mm/vmscan.c:4787
       shrink_one+0x3b9/0x850 mm/vmscan.c:4832
       shrink_many mm/vmscan.c:4895 [inline]
       lru_gen_shrink_node mm/vmscan.c:4973 [inline]
       shrink_node+0x37cd/0x3e60 mm/vmscan.c:5954
       kswapd_shrink_node mm/vmscan.c:6783 [inline]
       balance_pgdat mm/vmscan.c:6975 [inline]
       kswapd+0x1ca9/0x3700 mm/vmscan.c:7244
       kthread+0x2f0/0x390 kernel/kthread.c:389
       ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&q->q_usage_counter(io)#66);
                               lock(fs_reclaim);
  rlock(&q->q_usage_counter(io)#66);

 *** DEADLOCK ***

1 lock held by kswapd0/89:
 #0: ffffffff8ea3cca0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6862 [inline]
 #0: ffffffff8ea3cca0 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0xbf1/0x3700 mm/vmscan.c:7244

stack backtrace:
CPU: 1 UID: 0 PID: 89 Comm: kswapd0 Not tainted 6.12.0-rc5-next-20241101-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_circular_bug+0x13a/0x1b0 kernel/locking/lockdep.c:2074
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2206
 check_prev_add kernel/locking/lockdep.c:3161 [inline]
 check_prevs_add kernel/locking/lockdep.c:3280 [inline]
 validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
 __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
 bio_queue_enter block/blk.h:75 [inline]
 blk_mq_submit_bio+0x1510/0x2490 block/blk-mq.c:3069
 __submit_bio+0x2c2/0x560 block/blk-core.c:629
 __submit_bio_noacct_mq block/blk-core.c:710 [inline]
 submit_bio_noacct_nocheck+0x4d3/0xe30 block/blk-core.c:739
 swap_writepage_bdev_async mm/page_io.c:443 [inline]
 __swap_writepage+0x5fc/0x1400 mm/page_io.c:466
 swap_writepage+0x8f4/0xf70 mm/page_io.c:281
 shmem_writepage+0x14d0/0x1f40 mm/shmem.c:1561
 pageout mm/vmscan.c:689 [inline]
 shrink_folio_list+0x3c0e/0x8cc0 mm/vmscan.c:1367
 evict_folios+0x5568/0x7be0 mm/vmscan.c:4591
 try_to_shrink_lruvec+0x9a6/0xc70 mm/vmscan.c:4787
 shrink_one+0x3b9/0x850 mm/vmscan.c:4832
 shrink_many mm/vmscan.c:4895 [inline]
 lru_gen_shrink_node mm/vmscan.c:4973 [inline]
 shrink_node+0x37cd/0x3e60 mm/vmscan.c:5954
 kswapd_shrink_node mm/vmscan.c:6783 [inline]
 balance_pgdat mm/vmscan.c:6975 [inline]
 kswapd+0x1ca9/0x3700 mm/vmscan.c:7244
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
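
For readers less familiar with lockdep output, the "Possible unsafe locking scenario" above is a classic ABBA ordering inversion: the disk-probe path takes q_usage_counter(io) and then may enter fs_reclaim via a GFP_KERNEL allocation, while kswapd holds fs_reclaim and then takes q_usage_counter(io) when writing a swap bio. The user-space sketch below models only that ordering with pthread mutexes as stand-ins; the function and variable names are invented for illustration and are not kernel code.

/*
 * Illustrative user-space model of the lock-order inversion reported
 * above. Mutexes stand in for fs_reclaim ("A") and
 * q->q_usage_counter(io) ("B"); all names are made up for this sketch.
 * Build with: gcc -pthread abba.c
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t fs_reclaim_model = PTHREAD_MUTEX_INITIALIZER;      /* "A" */
static pthread_mutex_t q_usage_counter_model = PTHREAD_MUTEX_INITIALIZER; /* "B" */

/* Models dependency #1: disk probe enters the queue ("B") and then a
 * GFP_KERNEL allocation may enter reclaim ("A"). */
static void *disk_probe_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&q_usage_counter_model); /* B */
	usleep(1000);                               /* widen the race window */
	pthread_mutex_lock(&fs_reclaim_model);      /* then A */
	puts("probe path: got B then A");
	pthread_mutex_unlock(&fs_reclaim_model);
	pthread_mutex_unlock(&q_usage_counter_model);
	return NULL;
}

/* Models dependency #0: kswapd holds reclaim ("A") and then submits a
 * swap bio, which needs the queue usage counter ("B"). */
static void *kswapd_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&fs_reclaim_model);      /* A */
	usleep(1000);
	pthread_mutex_lock(&q_usage_counter_model); /* then B -> ABBA deadlock */
	puts("kswapd path: got A then B");
	pthread_mutex_unlock(&q_usage_counter_model);
	pthread_mutex_unlock(&fs_reclaim_model);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, disk_probe_path, NULL);
	pthread_create(&t2, NULL, kswapd_path, NULL);
	pthread_join(t1, NULL); /* with unlucky timing, this never returns */
	pthread_join(t2, NULL);
	return 0;
}

Run a few times; when both threads grab their first mutex before either grabs its second, the program hangs, which is the same inversion lockdep is flagging before it can ever happen at runtime.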