======================================================
WARNING: possible circular locking dependency detected
6.13.0-rc4-syzkaller-00004-gf07044dd0df0 #0 Not tainted
------------------------------------------------------
syz.1.45/6266 is trying to acquire lock:
ffff8880238c6c40 (&q->q_usage_counter(io)#68){++++}-{0:0}, at: bio_queue_enter block/blk.h:79 [inline]
ffff8880238c6c40 (&q->q_usage_counter(io)#68){++++}-{0:0}, at: blk_mq_submit_bio+0x7ca/0x24c0 block/blk-mq.c:3090

but task is already holding lock:
ffffffff8df4ef60 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
ffffffff8df4ef60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
ffffffff8df4ef60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath mm/page_alloc.c:4382 [inline]
ffffffff8df4ef60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_noprof+0xa8b/0x25b0 mm/page_alloc.c:4766

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}-{0:0}:
       __fs_reclaim_acquire mm/page_alloc.c:3853 [inline]
       fs_reclaim_acquire+0x102/0x150 mm/page_alloc.c:3867
       might_alloc include/linux/sched/mm.h:318 [inline]
       slab_pre_alloc_hook mm/slub.c:4070 [inline]
       slab_alloc_node mm/slub.c:4148 [inline]
       __do_kmalloc_node mm/slub.c:4297 [inline]
       __kmalloc_node_noprof+0xb7/0x520 mm/slub.c:4304
       __kvmalloc_node_noprof+0xad/0x1a0 mm/util.c:650
       sbitmap_init_node+0x1ca/0x770 lib/sbitmap.c:132
       scsi_realloc_sdev_budget_map+0x2c7/0x610 drivers/scsi/scsi_scan.c:246
       scsi_add_lun+0x11b4/0x1fd0 drivers/scsi/scsi_scan.c:1106
       scsi_probe_and_add_lun+0x4fa/0xda0 drivers/scsi/scsi_scan.c:1287
       __scsi_add_device+0x24b/0x290 drivers/scsi/scsi_scan.c:1622
       ata_scsi_scan_host+0x215/0x780 drivers/ata/libata-scsi.c:4575
       async_run_entry_fn+0x9c/0x530 kernel/async.c:129
       process_one_work+0x958/0x1b30 kernel/workqueue.c:3229
       process_scheduled_works kernel/workqueue.c:3310 [inline]
       worker_thread+0x6c8/0xf00 kernel/workqueue.c:3391
       kthread+0x2c1/0x3a0 kernel/kthread.c:389
       ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

-> #0 (&q->q_usage_counter(io)#68){++++}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3161 [inline]
       check_prevs_add kernel/locking/lockdep.c:3280 [inline]
       validate_chain kernel/locking/lockdep.c:3904 [inline]
       __lock_acquire+0x249e/0x3c40 kernel/locking/lockdep.c:5226
       lock_acquire.part.0+0x11b/0x380 kernel/locking/lockdep.c:5849
       __bio_queue_enter+0x4c6/0x740 block/blk-core.c:361
       bio_queue_enter block/blk.h:79 [inline]
       blk_mq_submit_bio+0x7ca/0x24c0 block/blk-mq.c:3090
       __submit_bio+0x384/0x540 block/blk-core.c:629
       __submit_bio_noacct_mq block/blk-core.c:710 [inline]
       submit_bio_noacct_nocheck+0x698/0xd70 block/blk-core.c:739
       submit_bio_noacct+0x93a/0x1e20 block/blk-core.c:868
       swap_writepage_bdev_async mm/page_io.c:451 [inline]
       __swap_writepage+0x3a3/0xf50 mm/page_io.c:474
       swap_writepage+0x403/0x1120 mm/page_io.c:289
       shmem_writepage+0xf76/0x1490 mm/shmem.c:1579
       pageout+0x3b2/0xaa0 mm/vmscan.c:689
       shrink_folio_list+0x3025/0x42d0 mm/vmscan.c:1367
       evict_folios+0x6e3/0x19c0 mm/vmscan.c:4593
       try_to_shrink_lruvec+0x61e/0xa80 mm/vmscan.c:4789
       shrink_one+0x3e3/0x7b0 mm/vmscan.c:4834
       shrink_many mm/vmscan.c:4897 [inline]
       lru_gen_shrink_node mm/vmscan.c:4975 [inline]
       shrink_node+0xbf0/0x3f20 mm/vmscan.c:5956
       shrink_zones mm/vmscan.c:6215 [inline]
       do_try_to_free_pages+0x35f/0x1a30 mm/vmscan.c:6277
       try_to_free_pages+0x2ae/0x6b0 mm/vmscan.c:6527
       __perform_reclaim mm/page_alloc.c:3929 [inline]
       __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
       __alloc_pages_slowpath mm/page_alloc.c:4382 [inline]
       __alloc_pages_noprof+0xb0c/0x25b0 mm/page_alloc.c:4766
       alloc_pages_mpol_noprof+0x2c9/0x610 mm/mempolicy.c:2269
       folio_alloc_mpol_noprof+0x36/0xd0 mm/mempolicy.c:2287
       shmem_alloc_folio+0x135/0x160 mm/shmem.c:1798
       shmem_alloc_and_add_folio+0x48b/0xc00 mm/shmem.c:1837
       shmem_get_folio_gfp+0x689/0x1530 mm/shmem.c:2357
       shmem_get_folio mm/shmem.c:2463 [inline]
       shmem_write_begin+0x161/0x300 mm/shmem.c:3119
       generic_perform_write+0x2ba/0x920 mm/filemap.c:4055
       shmem_file_write_iter+0x10e/0x140 mm/shmem.c:3295
       __kernel_write_iter+0x318/0xa80 fs/read_write.c:612
       dump_emit_page fs/coredump.c:884 [inline]
       dump_user_range+0x389/0x8c0 fs/coredump.c:945
       elf_core_dump+0x2baa/0x3df0 fs/binfmt_elf.c:2129
       do_coredump+0x2dd5/0x43e0 fs/coredump.c:758
       get_signal+0x23f3/0x2610 kernel/signal.c:3002
       arch_do_signal_or_restart+0x90/0x7e0 arch/x86/kernel/signal.c:337
       exit_to_user_mode_loop kernel/entry/common.c:111 [inline]
       exit_to_user_mode_prepare include/linux/entry-common.h:329 [inline]
       irqentry_exit_to_user_mode+0x13f/0x280 kernel/entry/common.c:231
       asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&q->q_usage_counter(io)#68);
                               lock(fs_reclaim);
  rlock(&q->q_usage_counter(io)#68);

 *** DEADLOCK ***

3 locks held by syz.1.45/6266:
 #0: ffff88804c3a8420 (sb_writers#5){.+.+}-{0:0}, at: get_signal+0x23f3/0x2610 kernel/signal.c:3002
 #1: ffff88806781de38 (&sb->s_type->i_mutex_key#12){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:818 [inline]
 #1: ffff88806781de38 (&sb->s_type->i_mutex_key#12){+.+.}-{4:4}, at: shmem_file_write_iter+0x86/0x140 mm/shmem.c:3285
 #2: ffffffff8df4ef60 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #2: ffffffff8df4ef60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 #2: ffffffff8df4ef60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath mm/page_alloc.c:4382 [inline]
 #2: ffffffff8df4ef60 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_noprof+0xa8b/0x25b0 mm/page_alloc.c:4766

stack backtrace:
CPU: 2 UID: 0 PID: 6266 Comm: syz.1.45 Not tainted 6.13.0-rc4-syzkaller-00004-gf07044dd0df0 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 print_circular_bug+0x41c/0x610 kernel/locking/lockdep.c:2074
 check_noncircular+0x31a/0x400 kernel/locking/lockdep.c:2206
 check_prev_add kernel/locking/lockdep.c:3161 [inline]
 check_prevs_add kernel/locking/lockdep.c:3280 [inline]
 validate_chain kernel/locking/lockdep.c:3904 [inline]
 __lock_acquire+0x249e/0x3c40 kernel/locking/lockdep.c:5226
 lock_acquire.part.0+0x11b/0x380 kernel/locking/lockdep.c:5849
 __bio_queue_enter+0x4c6/0x740 block/blk-core.c:361
 bio_queue_enter block/blk.h:79 [inline]
 blk_mq_submit_bio+0x7ca/0x24c0 block/blk-mq.c:3090
 __submit_bio+0x384/0x540 block/blk-core.c:629
 __submit_bio_noacct_mq block/blk-core.c:710 [inline]
 submit_bio_noacct_nocheck+0x698/0xd70 block/blk-core.c:739
 submit_bio_noacct+0x93a/0x1e20 block/blk-core.c:868
 swap_writepage_bdev_async mm/page_io.c:451 [inline]
 __swap_writepage+0x3a3/0xf50 mm/page_io.c:474
 swap_writepage+0x403/0x1120 mm/page_io.c:289
 shmem_writepage+0xf76/0x1490 mm/shmem.c:1579
 pageout+0x3b2/0xaa0 mm/vmscan.c:689
 shrink_folio_list+0x3025/0x42d0 mm/vmscan.c:1367
 evict_folios+0x6e3/0x19c0 mm/vmscan.c:4593
 try_to_shrink_lruvec+0x61e/0xa80 mm/vmscan.c:4789
 shrink_one+0x3e3/0x7b0 mm/vmscan.c:4834
 shrink_many mm/vmscan.c:4897 [inline]
 lru_gen_shrink_node mm/vmscan.c:4975 [inline]
 shrink_node+0xbf0/0x3f20 mm/vmscan.c:5956
 shrink_zones mm/vmscan.c:6215 [inline]
 do_try_to_free_pages+0x35f/0x1a30 mm/vmscan.c:6277
 try_to_free_pages+0x2ae/0x6b0 mm/vmscan.c:6527
 __perform_reclaim mm/page_alloc.c:3929 [inline]
 __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 __alloc_pages_slowpath mm/page_alloc.c:4382 [inline]
 __alloc_pages_noprof+0xb0c/0x25b0 mm/page_alloc.c:4766
 alloc_pages_mpol_noprof+0x2c9/0x610 mm/mempolicy.c:2269
 folio_alloc_mpol_noprof+0x36/0xd0 mm/mempolicy.c:2287
 shmem_alloc_folio+0x135/0x160 mm/shmem.c:1798
 shmem_alloc_and_add_folio+0x48b/0xc00 mm/shmem.c:1837
 shmem_get_folio_gfp+0x689/0x1530 mm/shmem.c:2357
 shmem_get_folio mm/shmem.c:2463 [inline]
 shmem_write_begin+0x161/0x300 mm/shmem.c:3119
 generic_perform_write+0x2ba/0x920 mm/filemap.c:4055
 shmem_file_write_iter+0x10e/0x140 mm/shmem.c:3295
 __kernel_write_iter+0x318/0xa80 fs/read_write.c:612
 dump_emit_page fs/coredump.c:884 [inline]
 dump_user_range+0x389/0x8c0 fs/coredump.c:945
 elf_core_dump+0x2baa/0x3df0 fs/binfmt_elf.c:2129
 do_coredump+0x2dd5/0x43e0 fs/coredump.c:758
 get_signal+0x23f3/0x2610 kernel/signal.c:3002
 arch_do_signal_or_restart+0x90/0x7e0 arch/x86/kernel/signal.c:337
 exit_to_user_mode_loop kernel/entry/common.c:111 [inline]
 exit_to_user_mode_prepare include/linux/entry-common.h:329 [inline]
 irqentry_exit_to_user_mode+0x13f/0x280 kernel/entry/common.c:231
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623
RIP: 0023:0xf7f86579
Code: Unable to access opcode bytes at 0xf7f8654f.
RSP: 002b:00000000f50b542c EFLAGS: 00010286
RAX: 0000000000000000 RBX: 00000000f50b5460 RCX: 0000000000000058
RDX: 0000000000000000 RSI: 0000000001840000 RDI: 0000000000000001
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000296 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
syz.1.45 (6266) used greatest stack depth: 18040 bytes left