syzbot


possible deadlock in mempool_free

Status: upstream: reported on 2024/03/05 11:02
Subsystems: block
Reported-by: syzbot+03a410b5470dc0d57748@syzkaller.appspotmail.com
First crash: 182d, last: 98d
Discussions (2)
Title                                               | Replies (incl. bot) | Last reply
[syzbot] Monthly block report (Mar 2024)            | 0 (1)               | 2024/03/12 09:29
[syzbot] [block?] possible deadlock in mempool_free | 0 (1)               | 2024/03/05 11:02

Sample crash report:
=====================================================
WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected
6.9.0-syzkaller-12277-g56fb6f92854f #0 Not tainted
-----------------------------------------------------
syz-executor.1/5126 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
ffffffff8e43d740 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}, at: fs_reclaim_acquire+0x93/0x140 mm/page_alloc.c:3800

and this task is already holding:
ffff88801a6a0118 (&pool->lock#5){..-.}-{2:2}, at: mempool_alloc_noprof+0x286/0x5a0 mm/mempool.c:406
which would create a new lock dependency:
 (&pool->lock#5){..-.}-{2:2} -> (mmu_notifier_invalidate_range_start){+.+.}-{0:0}

but this new dependency connects a SOFTIRQ-irq-safe lock:
 (&pool->lock#5){..-.}-{2:2}

... which became SOFTIRQ-irq-safe at:
  lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
  __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
  _raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
  mempool_free+0x115/0x390 mm/mempool.c:539
  blk_update_request+0x5e7/0x10d0 block/blk-mq.c:929
  scsi_end_request+0x80/0x880 drivers/scsi/scsi_lib.c:631
  scsi_io_completion+0x1bd/0x430 drivers/scsi/scsi_lib.c:1068
  blk_complete_reqs block/blk-mq.c:1132 [inline]
  blk_done_softirq+0x102/0x150 block/blk-mq.c:1137
  handle_softirqs+0x2c6/0x970 kernel/softirq.c:554
  __do_softirq kernel/softirq.c:588 [inline]
  invoke_softirq kernel/softirq.c:428 [inline]
  __irq_exit_rcu+0xf4/0x1c0 kernel/softirq.c:637
  irq_exit_rcu+0x9/0x30 kernel/softirq.c:649
  common_interrupt+0xaa/0xd0 arch/x86/kernel/irq.c:278
  asm_common_interrupt+0x26/0x40 arch/x86/include/asm/idtentry.h:693
  __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:152 [inline]
  _raw_spin_unlock_irqrestore+0xd8/0x140 kernel/locking/spinlock.c:194
  free_unref_page_commit+0x57d/0x1140 mm/page_alloc.c:2540
  free_unref_folios+0x15dc/0x19e0 mm/page_alloc.c:2685
  shrink_folio_list+0x33cd/0x8f70 mm/vmscan.c:1446
  evict_folios+0xb2e/0x2710 mm/vmscan.c:4553
  try_to_shrink_lruvec+0xb6b/0xe90 mm/vmscan.c:4749
  shrink_one+0x3cf/0x880 mm/vmscan.c:4788
  shrink_many mm/vmscan.c:4851 [inline]
  lru_gen_shrink_node mm/vmscan.c:4951 [inline]
  shrink_node+0x37eb/0x3fe0 mm/vmscan.c:5910
  shrink_zones mm/vmscan.c:6168 [inline]
  do_try_to_free_pages+0x77d/0x1c40 mm/vmscan.c:6230
  try_to_free_pages+0x9f6/0x10b0 mm/vmscan.c:6465
  __perform_reclaim mm/page_alloc.c:3859 [inline]
  __alloc_pages_direct_reclaim mm/page_alloc.c:3881 [inline]
  __alloc_pages_slowpath+0xdc3/0x23d0 mm/page_alloc.c:4287
  __alloc_pages_noprof+0x43e/0x6c0 mm/page_alloc.c:4673
  alloc_pages_mpol_noprof+0x3e8/0x680 mm/mempolicy.c:2265
  shmem_alloc_folio mm/shmem.c:1628 [inline]
  shmem_alloc_and_add_folio+0x24d/0xdb0 mm/shmem.c:1668
  shmem_get_folio_gfp+0x82d/0x1f50 mm/shmem.c:2055
  shmem_get_folio mm/shmem.c:2160 [inline]
  shmem_write_begin+0x170/0x4d0 mm/shmem.c:2743
  generic_perform_write+0x324/0x640 mm/filemap.c:4015
  shmem_file_write_iter+0xfc/0x120 mm/shmem.c:2919
  new_sync_write fs/read_write.c:497 [inline]
  vfs_write+0xa74/0xc90 fs/read_write.c:590
  ksys_write+0x1a0/0x2c0 fs/read_write.c:643
  do_syscall_x64 arch/x86/entry/common.c:52 [inline]
  do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
  entry_SYSCALL_64_after_hwframe+0x77/0x7f

to a SOFTIRQ-irq-unsafe lock:
 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}

... which became SOFTIRQ-irq-unsafe at:
...
  lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
  fs_reclaim_acquire+0xaf/0x140 mm/page_alloc.c:3800
  might_alloc include/linux/sched/mm.h:334 [inline]
  slab_pre_alloc_hook mm/slub.c:3890 [inline]
  slab_alloc_node mm/slub.c:3980 [inline]
  kmalloc_trace_noprof+0x3d/0x2c0 mm/slub.c:4147
  kmalloc_noprof include/linux/slab.h:660 [inline]
  kzalloc_noprof include/linux/slab.h:778 [inline]
  __kthread_create_worker+0x5c/0x3e0 kernel/kthread.c:865
  kthread_create_worker+0xda/0x120 kernel/kthread.c:908
  wq_cpu_intensive_thresh_init+0x18/0x160 kernel/workqueue.c:7775
  workqueue_init+0x26/0x8a0 kernel/workqueue.c:7824
  kernel_init_freeable+0x3fe/0x5d0 init/main.c:1562
  kernel_init+0x1d/0x2b0 init/main.c:1467
  ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
  ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

other info that might help us debug this:

 Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(mmu_notifier_invalidate_range_start);
                               local_irq_disable();
                               lock(&pool->lock#5);
                               lock(mmu_notifier_invalidate_range_start);
  <Interrupt>
    lock(&pool->lock#5);

 *** DEADLOCK ***

3 locks held by syz-executor.1/5126:
 #0: ffff88807f6f41a0 (mapping.invalidate_lock){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:846 [inline]
 #0: ffff88807f6f41a0 (mapping.invalidate_lock){++++}-{3:3}, at: page_cache_ra_unbounded+0xf7/0x7f0 mm/readahead.c:225
 #1: ffffffff8e333e60 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:329 [inline]
 #1: ffffffff8e333e60 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:781 [inline]
 #1: ffffffff8e333e60 (rcu_read_lock){....}-{1:2}, at: blk_mq_run_hw_queue+0x40c/0xae0 block/blk-mq.c:2250
 #2: ffff88801a6a0118 (&pool->lock#5){..-.}-{2:2}, at: mempool_alloc_noprof+0x286/0x5a0 mm/mempool.c:406

the dependencies between SOFTIRQ-irq-safe lock and the holding lock:
-> (&pool->lock#5){..-.}-{2:2} {
   IN-SOFTIRQ-W at:
                    lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
                    __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
                    _raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
                    mempool_free+0x115/0x390 mm/mempool.c:539
                    blk_update_request+0x5e7/0x10d0 block/blk-mq.c:929
                    scsi_end_request+0x80/0x880 drivers/scsi/scsi_lib.c:631
                    scsi_io_completion+0x1bd/0x430 drivers/scsi/scsi_lib.c:1068
                    blk_complete_reqs block/blk-mq.c:1132 [inline]
                    blk_done_softirq+0x102/0x150 block/blk-mq.c:1137
                    handle_softirqs+0x2c6/0x970 kernel/softirq.c:554
                    __do_softirq kernel/softirq.c:588 [inline]
                    invoke_softirq kernel/softirq.c:428 [inline]
                    __irq_exit_rcu+0xf4/0x1c0 kernel/softirq.c:637
                    irq_exit_rcu+0x9/0x30 kernel/softirq.c:649
                    common_interrupt+0xaa/0xd0 arch/x86/kernel/irq.c:278
                    asm_common_interrupt+0x26/0x40 arch/x86/include/asm/idtentry.h:693
                    __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:152 [inline]
                    _raw_spin_unlock_irqrestore+0xd8/0x140 kernel/locking/spinlock.c:194
                    free_unref_page_commit+0x57d/0x1140 mm/page_alloc.c:2540
                    free_unref_folios+0x15dc/0x19e0 mm/page_alloc.c:2685
                    shrink_folio_list+0x33cd/0x8f70 mm/vmscan.c:1446
                    evict_folios+0xb2e/0x2710 mm/vmscan.c:4553
                    try_to_shrink_lruvec+0xb6b/0xe90 mm/vmscan.c:4749
                    shrink_one+0x3cf/0x880 mm/vmscan.c:4788
                    shrink_many mm/vmscan.c:4851 [inline]
                    lru_gen_shrink_node mm/vmscan.c:4951 [inline]
                    shrink_node+0x37eb/0x3fe0 mm/vmscan.c:5910
                    shrink_zones mm/vmscan.c:6168 [inline]
                    do_try_to_free_pages+0x77d/0x1c40 mm/vmscan.c:6230
                    try_to_free_pages+0x9f6/0x10b0 mm/vmscan.c:6465
                    __perform_reclaim mm/page_alloc.c:3859 [inline]
                    __alloc_pages_direct_reclaim mm/page_alloc.c:3881 [inline]
                    __alloc_pages_slowpath+0xdc3/0x23d0 mm/page_alloc.c:4287
                    __alloc_pages_noprof+0x43e/0x6c0 mm/page_alloc.c:4673
                    alloc_pages_mpol_noprof+0x3e8/0x680 mm/mempolicy.c:2265
                    shmem_alloc_folio mm/shmem.c:1628 [inline]
                    shmem_alloc_and_add_folio+0x24d/0xdb0 mm/shmem.c:1668
                    shmem_get_folio_gfp+0x82d/0x1f50 mm/shmem.c:2055
                    shmem_get_folio mm/shmem.c:2160 [inline]
                    shmem_write_begin+0x170/0x4d0 mm/shmem.c:2743
                    generic_perform_write+0x324/0x640 mm/filemap.c:4015
                    shmem_file_write_iter+0xfc/0x120 mm/shmem.c:2919
                    new_sync_write fs/read_write.c:497 [inline]
                    vfs_write+0xa74/0xc90 fs/read_write.c:590
                    ksys_write+0x1a0/0x2c0 fs/read_write.c:643
                    do_syscall_x64 arch/x86/entry/common.c:52 [inline]
                    do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
                    entry_SYSCALL_64_after_hwframe+0x77/0x7f
   INITIAL USE at:
                   lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
                   __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
                   _raw_spin_lock_irqsave+0xd5/0x120 kernel/locking/spinlock.c:162
                   mempool_alloc_noprof+0x286/0x5a0 mm/mempool.c:406
                   bio_alloc_bioset+0x26f/0x1130 block/bio.c:554
                   bio_alloc include/linux/bio.h:437 [inline]
                   swap_writepage_bdev_async mm/page_io.c:361 [inline]
                   __swap_writepage+0x534/0x13e0 mm/page_io.c:390
                   swap_writepage+0xd5/0x1d0 mm/page_io.c:209
                   pageout mm/vmscan.c:660 [inline]
                   shrink_folio_list+0x3782/0x8f70 mm/vmscan.c:1341
                   evict_folios+0xb2e/0x2710 mm/vmscan.c:4553
                   try_to_shrink_lruvec+0xb6b/0xe90 mm/vmscan.c:4749
                   shrink_one+0x3cf/0x880 mm/vmscan.c:4788
                   shrink_many mm/vmscan.c:4851 [inline]
                   lru_gen_shrink_node mm/vmscan.c:4951 [inline]
                   shrink_node+0x37eb/0x3fe0 mm/vmscan.c:5910
                   shrink_zones mm/vmscan.c:6168 [inline]
                   do_try_to_free_pages+0x77d/0x1c40 mm/vmscan.c:6230
                   try_to_free_pages+0x9f6/0x10b0 mm/vmscan.c:6465
                   __perform_reclaim mm/page_alloc.c:3859 [inline]
                   __alloc_pages_direct_reclaim mm/page_alloc.c:3881 [inline]
                   __alloc_pages_slowpath+0xdc3/0x23d0 mm/page_alloc.c:4287
                   __alloc_pages_noprof+0x43e/0x6c0 mm/page_alloc.c:4673
                   __alloc_pages_node_noprof include/linux/gfp.h:269 [inline]
                   alloc_pages_node_noprof include/linux/gfp.h:296 [inline]
                   alloc_slab_page+0x5f/0x120 mm/slub.c:2264
                   allocate_slab+0x1bc/0x2e0 mm/slub.c:2435
                   new_slab mm/slub.c:2480 [inline]
                   ___slab_alloc+0xcd1/0x14b0 mm/slub.c:3666
                   __slab_alloc+0x58/0xa0 mm/slub.c:3756
                   __slab_alloc_node mm/slub.c:3809 [inline]
                   slab_alloc_node mm/slub.c:3988 [inline]
                   kmalloc_trace_noprof+0x1d5/0x2c0 mm/slub.c:4147
                   kmalloc_noprof include/linux/slab.h:660 [inline]
                   kzalloc_noprof include/linux/slab.h:778 [inline]
                   snmp6_alloc_dev net/ipv6/addrconf.c:359 [inline]
                   ipv6_add_dev+0x570/0x1220 net/ipv6/addrconf.c:409
                   addrconf_notify+0x6a7/0x1020 net/ipv6/addrconf.c:3652
                   notifier_call_chain+0x1a1/0x3e0 kernel/notifier.c:93
                   call_netdevice_notifiers_extack net/core/dev.c:2030 [inline]
                   call_netdevice_notifiers net/core/dev.c:2044 [inline]
                   register_netdevice+0x1570/0x19e0 net/core/dev.c:10407
                   geneve_configure+0x6dd/0xa60 drivers/net/geneve.c:1381
                   geneve_newlink+0x109/0x1b0 drivers/net/geneve.c:1632
                   rtnl_newlink_create net/core/rtnetlink.c:3510 [inline]
                   __rtnl_newlink net/core/rtnetlink.c:3730 [inline]
                   rtnl_newlink+0x1591/0x20a0 net/core/rtnetlink.c:3743
                   rtnetlink_rcv_msg+0x89d/0x10d0 net/core/rtnetlink.c:6595
                   netlink_rcv_skb+0x1e5/0x430 net/netlink/af_netlink.c:2564
                   netlink_unicast_kernel net/netlink/af_netlink.c:1335 [inline]
                   netlink_unicast+0x7ec/0x980 net/netlink/af_netlink.c:1361
                   netlink_sendmsg+0x8db/0xcb0 net/netlink/af_netlink.c:1905
                   sock_sendmsg_nosec net/socket.c:730 [inline]
                   __sock_sendmsg+0x223/0x270 net/socket.c:745
                   __sys_sendto+0x3a4/0x4f0 net/socket.c:2192
                   __do_sys_sendto net/socket.c:2204 [inline]
                   __se_sys_sendto net/socket.c:2200 [inline]
                   __x64_sys_sendto+0xde/0x100 net/socket.c:2200
                   do_syscall_x64 arch/x86/entry/common.c:52 [inline]
                   do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
                   entry_SYSCALL_64_after_hwframe+0x77/0x7f
 }
 ... key      at: [<ffffffff947f3fa0>] mempool_init_node.__key+0x0/0x20

the dependencies between the lock to be acquired
 and SOFTIRQ-irq-unsafe lock:
-> (mmu_notifier_invalidate_range_start){+.+.}-{0:0} {
   HARDIRQ-ON-W at:
                    lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
                    fs_reclaim_acquire+0xaf/0x140 mm/page_alloc.c:3800
                    might_alloc include/linux/sched/mm.h:334 [inline]
                    slab_pre_alloc_hook mm/slub.c:3890 [inline]
                    slab_alloc_node mm/slub.c:3980 [inline]
                    kmalloc_trace_noprof+0x3d/0x2c0 mm/slub.c:4147
                    kmalloc_noprof include/linux/slab.h:660 [inline]
                    kzalloc_noprof include/linux/slab.h:778 [inline]
                    __kthread_create_worker+0x5c/0x3e0 kernel/kthread.c:865
                    kthread_create_worker+0xda/0x120 kernel/kthread.c:908
                    wq_cpu_intensive_thresh_init+0x18/0x160 kernel/workqueue.c:7775
                    workqueue_init+0x26/0x8a0 kernel/workqueue.c:7824
                    kernel_init_freeable+0x3fe/0x5d0 init/main.c:1562
                    kernel_init+0x1d/0x2b0 init/main.c:1467
                    ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
                    ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
   SOFTIRQ-ON-W at:
                    lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
                    fs_reclaim_acquire+0xaf/0x140 mm/page_alloc.c:3800
                    might_alloc include/linux/sched/mm.h:334 [inline]
                    slab_pre_alloc_hook mm/slub.c:3890 [inline]
                    slab_alloc_node mm/slub.c:3980 [inline]
                    kmalloc_trace_noprof+0x3d/0x2c0 mm/slub.c:4147
                    kmalloc_noprof include/linux/slab.h:660 [inline]
                    kzalloc_noprof include/linux/slab.h:778 [inline]
                    __kthread_create_worker+0x5c/0x3e0 kernel/kthread.c:865
                    kthread_create_worker+0xda/0x120 kernel/kthread.c:908
                    wq_cpu_intensive_thresh_init+0x18/0x160 kernel/workqueue.c:7775
                    workqueue_init+0x26/0x8a0 kernel/workqueue.c:7824
                    kernel_init_freeable+0x3fe/0x5d0 init/main.c:1562
                    kernel_init+0x1d/0x2b0 init/main.c:1467
                    ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
                    ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
   INITIAL USE at:
                   lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
                   fs_reclaim_acquire+0xaf/0x140 mm/page_alloc.c:3800
                   might_alloc include/linux/sched/mm.h:334 [inline]
                   slab_pre_alloc_hook mm/slub.c:3890 [inline]
                   slab_alloc_node mm/slub.c:3980 [inline]
                   kmalloc_trace_noprof+0x3d/0x2c0 mm/slub.c:4147
                   kmalloc_noprof include/linux/slab.h:660 [inline]
                   kzalloc_noprof include/linux/slab.h:778 [inline]
                   __kthread_create_worker+0x5c/0x3e0 kernel/kthread.c:865
                   kthread_create_worker+0xda/0x120 kernel/kthread.c:908
                   wq_cpu_intensive_thresh_init+0x18/0x160 kernel/workqueue.c:7775
                   workqueue_init+0x26/0x8a0 kernel/workqueue.c:7824
                   kernel_init_freeable+0x3fe/0x5d0 init/main.c:1562
                   kernel_init+0x1d/0x2b0 init/main.c:1467
                   ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
                   ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 }
 ... key      at: [<ffffffff8e43d740>] __mmu_notifier_invalidate_range_start_map+0x0/0x40
 ... acquired at:
   lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
   fs_reclaim_acquire+0xaf/0x140 mm/page_alloc.c:3800
   might_alloc include/linux/sched/mm.h:334 [inline]
   prepare_alloc_pages+0x147/0x5d0 mm/page_alloc.c:4431
   __alloc_pages_noprof+0x166/0x6c0 mm/page_alloc.c:4649
   alloc_pages_mpol_noprof+0x3e8/0x680 mm/mempolicy.c:2265
   stack_depot_save_flags+0x666/0x830 lib/stackdepot.c:627
   kasan_save_stack mm/kasan/common.c:48 [inline]
   kasan_save_track+0x51/0x80 mm/kasan/common.c:68
   unpoison_slab_object mm/kasan/common.c:312 [inline]
   __kasan_mempool_unpoison_object+0xa0/0x170 mm/kasan/common.c:535
   remove_element+0x129/0x1a0 mm/mempool.c:150
   mempool_alloc_noprof+0x54e/0x5a0 mm/mempool.c:408
   __sg_alloc_table+0xce/0x3c0 lib/scatterlist.c:321
   sg_alloc_table_chained+0xe6/0x1c0 lib/sg_pool.c:133
   scsi_alloc_sgtables+0x290/0xcb0 drivers/scsi/scsi_lib.c:1133
   sd_setup_read_write_cmnd drivers/scsi/sd.c:1227 [inline]
   sd_init_command+0x531/0x2100 drivers/scsi/sd.c:1345
   scsi_prepare_cmd drivers/scsi/scsi_lib.c:1698 [inline]
   scsi_queue_rq+0x18cf/0x2f70 drivers/scsi/scsi_lib.c:1832
   blk_mq_dispatch_rq_list+0xb8b/0x1b30 block/blk-mq.c:2037
   __blk_mq_do_dispatch_sched block/blk-mq-sched.c:170 [inline]
   blk_mq_do_dispatch_sched block/blk-mq-sched.c:184 [inline]
   __blk_mq_sched_dispatch_requests+0xb8a/0x1840 block/blk-mq-sched.c:309
   blk_mq_sched_dispatch_requests+0xcb/0x140 block/blk-mq-sched.c:331
   blk_mq_run_hw_queue+0x9a5/0xae0 block/blk-mq.c:2250
   blk_mq_flush_plug_list+0x1115/0x1880 block/blk-mq.c:2799
   __blk_flush_plug+0x420/0x500 block/blk-core.c:1194
   blk_finish_plug+0x5e/0x80 block/blk-core.c:1221
   read_pages+0x644/0x840 mm/readahead.c:183
   page_cache_ra_unbounded+0x6ce/0x7f0 mm/readahead.c:273
   do_async_mmap_readahead mm/filemap.c:3203 [inline]
   filemap_fault+0x78d/0x1760 mm/filemap.c:3299
   __do_fault+0x137/0x460 mm/memory.c:4562
   do_read_fault mm/memory.c:4926 [inline]
   do_fault mm/memory.c:5056 [inline]
   do_pte_missing mm/memory.c:3903 [inline]
   handle_pte_fault+0x3d8d/0x7130 mm/memory.c:5380
   __handle_mm_fault mm/memory.c:5523 [inline]
   handle_mm_fault+0x10df/0x1ba0 mm/memory.c:5688
   do_user_addr_fault arch/x86/mm/fault.c:1338 [inline]
   handle_page_fault arch/x86/mm/fault.c:1481 [inline]
   exc_page_fault+0x459/0x8c0 arch/x86/mm/fault.c:1539
   asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623


stack backtrace:
CPU: 0 PID: 5126 Comm: syz-executor.1 Not tainted 6.9.0-syzkaller-12277-g56fb6f92854f #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 print_bad_irq_dependency kernel/locking/lockdep.c:2626 [inline]
 check_irq_usage kernel/locking/lockdep.c:2865 [inline]
 check_prev_add kernel/locking/lockdep.c:3138 [inline]
 check_prevs_add kernel/locking/lockdep.c:3253 [inline]
 validate_chain+0x4de0/0x5900 kernel/locking/lockdep.c:3869
 __lock_acquire+0x1346/0x1fd0 kernel/locking/lockdep.c:5137
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
 fs_reclaim_acquire+0xaf/0x140 mm/page_alloc.c:3800
 might_alloc include/linux/sched/mm.h:334 [inline]
 prepare_alloc_pages+0x147/0x5d0 mm/page_alloc.c:4431
 __alloc_pages_noprof+0x166/0x6c0 mm/page_alloc.c:4649
 alloc_pages_mpol_noprof+0x3e8/0x680 mm/mempolicy.c:2265
 stack_depot_save_flags+0x666/0x830 lib/stackdepot.c:627
 kasan_save_stack mm/kasan/common.c:48 [inline]
 kasan_save_track+0x51/0x80 mm/kasan/common.c:68
 unpoison_slab_object mm/kasan/common.c:312 [inline]
 __kasan_mempool_unpoison_object+0xa0/0x170 mm/kasan/common.c:535
 remove_element+0x129/0x1a0 mm/mempool.c:150
 mempool_alloc_noprof+0x54e/0x5a0 mm/mempool.c:408
 __sg_alloc_table+0xce/0x3c0 lib/scatterlist.c:321
 sg_alloc_table_chained+0xe6/0x1c0 lib/sg_pool.c:133
 scsi_alloc_sgtables+0x290/0xcb0 drivers/scsi/scsi_lib.c:1133
 sd_setup_read_write_cmnd drivers/scsi/sd.c:1227 [inline]
 sd_init_command+0x531/0x2100 drivers/scsi/sd.c:1345
 scsi_prepare_cmd drivers/scsi/scsi_lib.c:1698 [inline]
 scsi_queue_rq+0x18cf/0x2f70 drivers/scsi/scsi_lib.c:1832
 blk_mq_dispatch_rq_list+0xb8b/0x1b30 block/blk-mq.c:2037
 __blk_mq_do_dispatch_sched block/blk-mq-sched.c:170 [inline]
 blk_mq_do_dispatch_sched block/blk-mq-sched.c:184 [inline]
 __blk_mq_sched_dispatch_requests+0xb8a/0x1840 block/blk-mq-sched.c:309
 blk_mq_sched_dispatch_requests+0xcb/0x140 block/blk-mq-sched.c:331
 blk_mq_run_hw_queue+0x9a5/0xae0 block/blk-mq.c:2250
 blk_mq_flush_plug_list+0x1115/0x1880 block/blk-mq.c:2799
 __blk_flush_plug+0x420/0x500 block/blk-core.c:1194
 blk_finish_plug+0x5e/0x80 block/blk-core.c:1221
 read_pages+0x644/0x840 mm/readahead.c:183
 page_cache_ra_unbounded+0x6ce/0x7f0 mm/readahead.c:273
 do_async_mmap_readahead mm/filemap.c:3203 [inline]
 filemap_fault+0x78d/0x1760 mm/filemap.c:3299
 __do_fault+0x137/0x460 mm/memory.c:4562
 do_read_fault mm/memory.c:4926 [inline]
 do_fault mm/memory.c:5056 [inline]
 do_pte_missing mm/memory.c:3903 [inline]
 handle_pte_fault+0x3d8d/0x7130 mm/memory.c:5380
 __handle_mm_fault mm/memory.c:5523 [inline]
 handle_mm_fault+0x10df/0x1ba0 mm/memory.c:5688
 do_user_addr_fault arch/x86/mm/fault.c:1338 [inline]
 handle_page_fault arch/x86/mm/fault.c:1481 [inline]
 exc_page_fault+0x459/0x8c0 arch/x86/mm/fault.c:1539
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623
RIP: 0033:0x7f31690794a0
Code: Unable to access opcode bytes at 0x7f3169079476.
RSP: 002b:00007ffc087a2dd8 EFLAGS: 00010216
RAX: 0000000000000bb8 RBX: 00000000000000aa RCX: 0000000000000000
RDX: 0000000000000bb8 RSI: 00007ffc087a2ea0 RDI: 0000000000000001
RBP: 00007ffc087a2e3c R08: 0000000000000000 R09: 7fffffffffffffff
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000032
R13: 000000000002a6dd R14: 000000000002a6dd R15: 0000000000000000
 </TASK>
BUG: sleeping function called from invalid context at include/linux/sched/mm.h:337
in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 5126, name: syz-executor.1
preempt_count: 1, expected: 0
RCU nest depth: 1, expected: 0
INFO: lockdep is turned off.
irq event stamp: 987588
hardirqs last  enabled at (987587): [<ffffffff81efb7ab>] seqcount_lockdep_reader_access+0x13b/0x1e0 include/linux/seqlock.h:74
hardirqs last disabled at (987588): [<ffffffff8b90bfc0>] __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:108 [inline]
hardirqs last disabled at (987588): [<ffffffff8b90bfc0>] _raw_spin_lock_irqsave+0xb0/0x120 kernel/locking/spinlock.c:162
softirqs last  enabled at (987108): [<ffffffff8159fb84>] __do_softirq kernel/softirq.c:588 [inline]
softirqs last  enabled at (987108): [<ffffffff8159fb84>] invoke_softirq kernel/softirq.c:428 [inline]
softirqs last  enabled at (987108): [<ffffffff8159fb84>] __irq_exit_rcu+0xf4/0x1c0 kernel/softirq.c:637
softirqs last disabled at (987081): [<ffffffff8159fb84>] __do_softirq kernel/softirq.c:588 [inline]
softirqs last disabled at (987081): [<ffffffff8159fb84>] invoke_softirq kernel/softirq.c:428 [inline]
softirqs last disabled at (987081): [<ffffffff8159fb84>] __irq_exit_rcu+0xf4/0x1c0 kernel/softirq.c:637
Preemption disabled at:
[<0000000000000000>] 0x0
CPU: 0 PID: 5126 Comm: syz-executor.1 Not tainted 6.9.0-syzkaller-12277-g56fb6f92854f #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 __might_resched+0x5d4/0x780 kernel/sched/core.c:10196
 might_alloc include/linux/sched/mm.h:337 [inline]
 prepare_alloc_pages+0x1c9/0x5d0 mm/page_alloc.c:4431
 __alloc_pages_noprof+0x166/0x6c0 mm/page_alloc.c:4649
 alloc_pages_mpol_noprof+0x3e8/0x680 mm/mempolicy.c:2265
 stack_depot_save_flags+0x666/0x830 lib/stackdepot.c:627
 kasan_save_stack mm/kasan/common.c:48 [inline]
 kasan_save_track+0x51/0x80 mm/kasan/common.c:68
 unpoison_slab_object mm/kasan/common.c:312 [inline]
 __kasan_mempool_unpoison_object+0xa0/0x170 mm/kasan/common.c:535
 remove_element+0x129/0x1a0 mm/mempool.c:150
 mempool_alloc_noprof+0x54e/0x5a0 mm/mempool.c:408
 __sg_alloc_table+0xce/0x3c0 lib/scatterlist.c:321
 sg_alloc_table_chained+0xe6/0x1c0 lib/sg_pool.c:133
 scsi_alloc_sgtables+0x290/0xcb0 drivers/scsi/scsi_lib.c:1133
 sd_setup_read_write_cmnd drivers/scsi/sd.c:1227 [inline]
 sd_init_command+0x531/0x2100 drivers/scsi/sd.c:1345
 scsi_prepare_cmd drivers/scsi/scsi_lib.c:1698 [inline]
 scsi_queue_rq+0x18cf/0x2f70 drivers/scsi/scsi_lib.c:1832
 blk_mq_dispatch_rq_list+0xb8b/0x1b30 block/blk-mq.c:2037
 __blk_mq_do_dispatch_sched block/blk-mq-sched.c:170 [inline]
 blk_mq_do_dispatch_sched block/blk-mq-sched.c:184 [inline]
 __blk_mq_sched_dispatch_requests+0xb8a/0x1840 block/blk-mq-sched.c:309
 blk_mq_sched_dispatch_requests+0xcb/0x140 block/blk-mq-sched.c:331
 blk_mq_run_hw_queue+0x9a5/0xae0 block/blk-mq.c:2250
 blk_mq_flush_plug_list+0x1115/0x1880 block/blk-mq.c:2799
 __blk_flush_plug+0x420/0x500 block/blk-core.c:1194
 blk_finish_plug+0x5e/0x80 block/blk-core.c:1221
 read_pages+0x644/0x840 mm/readahead.c:183
 page_cache_ra_unbounded+0x6ce/0x7f0 mm/readahead.c:273
 do_async_mmap_readahead mm/filemap.c:3203 [inline]
 filemap_fault+0x78d/0x1760 mm/filemap.c:3299
 __do_fault+0x137/0x460 mm/memory.c:4562
 do_read_fault mm/memory.c:4926 [inline]
 do_fault mm/memory.c:5056 [inline]
 do_pte_missing mm/memory.c:3903 [inline]
 handle_pte_fault+0x3d8d/0x7130 mm/memory.c:5380
 __handle_mm_fault mm/memory.c:5523 [inline]
 handle_mm_fault+0x10df/0x1ba0 mm/memory.c:5688
 do_user_addr_fault arch/x86/mm/fault.c:1338 [inline]
 handle_page_fault arch/x86/mm/fault.c:1481 [inline]
 exc_page_fault+0x459/0x8c0 arch/x86/mm/fault.c:1539
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623
RIP: 0033:0x7f31690794a0
Code: Unable to access opcode bytes at 0x7f3169079476.
RSP: 002b:00007ffc087a2dd8 EFLAGS: 00010216
RAX: 0000000000000bb8 RBX: 00000000000000aa RCX: 0000000000000000
RDX: 0000000000000bb8 RSI: 00007ffc087a2ea0 RDI: 0000000000000001
RBP: 00007ffc087a2e3c R08: 0000000000000000 R09: 7fffffffffffffff
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000032
R13: 000000000002a6dd R14: 000000000002a6dd R15: 0000000000000000
 </TASK>
BUG: scheduling while atomic: syz-executor.1/5126/0x00000002
INFO: lockdep is turned off.
Modules linked in:
Preemption disabled at:
[<0000000000000000>] 0x0
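
Stripped of the stack traces, the lockdep complaint above reduces to one rule: a lock that is ever taken from softirq context (here `pool->lock`, via `mempool_free()` in `blk_done_softirq`) must never come to depend on a lock that is ever taken with softirqs enabled (here the `fs_reclaim` pseudo-lock, entered by any `GFP_KERNEL` allocation). The toy checker below is a simplified sketch of that rule, not kernel code; the lock names mirror the report, while `acquire()` and `bad_dependencies()` are invented helpers for illustration. It replays the three acquisitions lockdep pieced together and flags the new `pool->lock -> fs_reclaim` edge:

```python
# Toy version of lockdep's SOFTIRQ-safe -> SOFTIRQ-unsafe check.
# Lock names mirror the report above; acquire() and bad_dependencies()
# are invented helpers for illustration, not real kernel interfaces.

irq_safe = set()    # locks ever acquired from softirq context
irq_unsafe = set()  # locks ever acquired with softirqs enabled
deps = set()        # observed (held_lock, acquired_lock) dependency edges

def acquire(lock, held=(), in_softirq=False, irqs_on=True):
    """Record one lock acquisition the way lockdep classifies it."""
    if in_softirq:
        irq_safe.add(lock)      # used from softirq => must stay irq-safe
    elif irqs_on:
        irq_unsafe.add(lock)    # taken with softirqs enabled => irq-unsafe
    for h in held:
        deps.add((h, lock))     # new dependency: held lock -> acquired lock

def bad_dependencies():
    # A softirq-safe lock must never wait on a softirq-unsafe one: the
    # softirq could fire on a CPU already holding the unsafe lock and
    # then spin on a lock that can only be released by the code below it.
    return {(a, b) for (a, b) in deps if a in irq_safe and b in irq_unsafe}

# The three acquisitions lockdep pieced together in this report:
acquire("pool->lock", in_softirq=True)        # mempool_free via blk_done_softirq
acquire("fs_reclaim", irqs_on=True)           # GFP_KERNEL allocation at boot
acquire("fs_reclaim", held=["pool->lock"],    # mempool_alloc doing direct
        irqs_on=False)                        # reclaim under pool->lock

print(bad_dependencies())  # -> {('pool->lock', 'fs_reclaim')}
```

The follow-on "BUG: sleeping function called from invalid context" splats are the same root cause seen by a different checker: the reclaim-capable allocation is reached while `pool->lock` is held with interrupts disabled.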

Crashes (8):
Time             | Kernel     | Commit       | Syzkaller | Manager                               | Title
2024/05/25 18:53 | upstream   | 56fb6f92854f | a10a183e  | ci-upstream-kasan-gce-root            | possible deadlock in mempool_free
2024/05/25 18:00 | upstream   | 56fb6f92854f | a10a183e  | ci2-upstream-fs                       | possible deadlock in mempool_free
2024/05/25 17:18 | upstream   | 56fb6f92854f | a10a183e  | ci2-upstream-fs                       | possible deadlock in mempool_free
2024/05/25 17:13 | upstream   | 56fb6f92854f | a10a183e  | ci2-upstream-fs                       | possible deadlock in mempool_free
2024/05/25 17:10 | upstream   | 56fb6f92854f | a10a183e  | ci2-upstream-fs                       | possible deadlock in mempool_free
2024/05/25 12:01 | upstream   | 56fb6f92854f | a10a183e  | ci2-upstream-fs                       | possible deadlock in mempool_free
2024/03/02 11:17 | upstream   | 17ba56605bfd | 25905f5d  | ci-qemu-upstream                      | possible deadlock in mempool_free
2024/04/13 18:40 | linux-next | 9ed46da14b9b | c8349e48  | ci-upstream-linux-next-kasan-gce-root | possible deadlock in mempool_free

No reproducer (syz or C) is available for any of these crashes.