======================================================
WARNING: possible circular locking dependency detected
6.13.0-rc7-syzkaller-00102-gce69b4019001 #0 Not tainted
------------------------------------------------------
kthreadd/2 is trying to acquire lock:
ffffe8ffffd20c90 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}, at: acomp_ctx_get_cpu_lock mm/zswap.c:899 [inline]
ffffe8ffffd20c90 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}, at: zswap_compress mm/zswap.c:931 [inline]
ffffe8ffffd20c90 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}, at: zswap_store_page mm/zswap.c:1456 [inline]
ffffe8ffffd20c90 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}, at: zswap_store+0x908/0x26c0 mm/zswap.c:1563

but task is already holding lock:
ffffffff8e34fea0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
ffffffff8e34fea0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
ffffffff8e34fea0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath mm/page_alloc.c:4382 [inline]
ffffffff8e34fea0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_noprof+0xa8b/0x25b0 mm/page_alloc.c:4766

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}-{0:0}:
       __fs_reclaim_acquire mm/page_alloc.c:3853 [inline]
       fs_reclaim_acquire+0x102/0x150 mm/page_alloc.c:3867
       might_alloc include/linux/sched/mm.h:318 [inline]
       slab_pre_alloc_hook mm/slub.c:4070 [inline]
       slab_alloc_node mm/slub.c:4148 [inline]
       __kmalloc_cache_node_noprof+0x54/0x420 mm/slub.c:4337
       kmalloc_node_noprof include/linux/slab.h:924 [inline]
       zswap_cpu_comp_prepare+0xc9/0x470 mm/zswap.c:828
       cpuhp_invoke_callback+0x20f/0xa10 kernel/cpu.c:204
       cpuhp_issue_call+0x1c0/0x980 kernel/cpu.c:2375
       __cpuhp_state_add_instance_cpuslocked+0x1a4/0x3c0 kernel/cpu.c:2437
       __cpuhp_state_add_instance+0xd7/0x2e0 kernel/cpu.c:2458
       cpuhp_state_add_instance include/linux/cpuhotplug.h:386 [inline]
       zswap_pool_create+0x59a/0x7b0 mm/zswap.c:291
       __zswap_pool_create_fallback mm/zswap.c:359 [inline]
       zswap_setup+0x402/0x810 mm/zswap.c:1811
       zswap_init+0x2c/0x40 mm/zswap.c:1847
       do_one_initcall+0x12b/0x700 init/main.c:1266
       do_initcall_level init/main.c:1328 [inline]
       do_initcalls init/main.c:1344 [inline]
       do_basic_setup init/main.c:1363 [inline]
       kernel_init_freeable+0x5c7/0x900 init/main.c:1577
       kernel_init+0x1c/0x2b0 init/main.c:1466
       ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

-> #0 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3161 [inline]
       check_prevs_add kernel/locking/lockdep.c:3280 [inline]
       validate_chain kernel/locking/lockdep.c:3904 [inline]
       __lock_acquire+0x249e/0x3c40 kernel/locking/lockdep.c:5226
       lock_acquire.part.0+0x11b/0x380 kernel/locking/lockdep.c:5849
       __mutex_lock_common kernel/locking/mutex.c:585 [inline]
       __mutex_lock+0x19b/0xa60 kernel/locking/mutex.c:735
       acomp_ctx_get_cpu_lock mm/zswap.c:899 [inline]
       zswap_compress mm/zswap.c:931 [inline]
       zswap_store_page mm/zswap.c:1456 [inline]
       zswap_store+0x908/0x26c0 mm/zswap.c:1563
       swap_writepage+0x3b6/0x1120 mm/page_io.c:279
       shmem_writepage+0xf7b/0x1490 mm/shmem.c:1579
       pageout+0x3b5/0xaa0 mm/vmscan.c:696
       shrink_folio_list+0x3025/0x42d0 mm/vmscan.c:1374
       evict_folios+0x6e7/0x1a50 mm/vmscan.c:4600
       try_to_shrink_lruvec+0x61e/0xa80 mm/vmscan.c:4799
       shrink_one+0x3e3/0x7b0 mm/vmscan.c:4844
       shrink_many mm/vmscan.c:4907 [inline]
       lru_gen_shrink_node mm/vmscan.c:4985 [inline]
       shrink_node+0x2763/0x3e60 mm/vmscan.c:5966
       shrink_zones mm/vmscan.c:6225 [inline]
       do_try_to_free_pages+0x35f/0x1a30 mm/vmscan.c:6287
       try_to_free_pages+0x2ae/0x6b0 mm/vmscan.c:6537
       __perform_reclaim mm/page_alloc.c:3929 [inline]
       __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
       __alloc_pages_slowpath mm/page_alloc.c:4382 [inline]
       __alloc_pages_noprof+0xb0c/0x25b0 mm/page_alloc.c:4766
       alloc_pages_mpol_noprof+0x2c8/0x620 mm/mempolicy.c:2269
       get_free_pages_noprof+0xc/0x40 mm/page_alloc.c:4800
       kasan_populate_vmalloc_pte+0x2d/0x160 mm/kasan/shadow.c:304
       apply_to_pte_range mm/memory.c:2831 [inline]
       apply_to_pmd_range mm/memory.c:2875 [inline]
       apply_to_pud_range mm/memory.c:2911 [inline]
       apply_to_p4d_range mm/memory.c:2947 [inline]
       __apply_to_page_range+0x600/0xd30 mm/memory.c:2981
       alloc_vmap_area+0x93e/0x2a70 mm/vmalloc.c:2035
       __get_vm_area_node+0x19e/0x2f0 mm/vmalloc.c:3137
       __vmalloc_node_range_noprof+0x26a/0x1530 mm/vmalloc.c:3806
       alloc_thread_stack_node kernel/fork.c:314 [inline]
       dup_task_struct kernel/fork.c:1116 [inline]
       copy_process+0x2f06/0x8e50 kernel/fork.c:2224
       kernel_clone+0xfd/0x960 kernel/fork.c:2806
       kernel_thread+0xc0/0x100 kernel/fork.c:2868
       create_kthread kernel/kthread.c:412 [inline]
       kthreadd+0x4ef/0x7d0 kernel/kthread.c:767
       ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex);
                               lock(fs_reclaim);
  lock(&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex);

 *** DEADLOCK ***

1 lock held by kthreadd/2:
 #0: ffffffff8e34fea0 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #0: ffffffff8e34fea0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 #0: ffffffff8e34fea0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath mm/page_alloc.c:4382 [inline]
 #0: ffffffff8e34fea0 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_noprof+0xa8b/0x25b0 mm/page_alloc.c:4766

stack backtrace:
CPU: 1 UID: 0 PID: 2 Comm: kthreadd Not tainted 6.13.0-rc7-syzkaller-00102-gce69b4019001 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 print_circular_bug+0x419/0x5d0 kernel/locking/lockdep.c:2074
 check_noncircular+0x31a/0x400 kernel/locking/lockdep.c:2206
 check_prev_add kernel/locking/lockdep.c:3161 [inline]
 check_prevs_add kernel/locking/lockdep.c:3280 [inline]
 validate_chain kernel/locking/lockdep.c:3904 [inline]
 __lock_acquire+0x249e/0x3c40 kernel/locking/lockdep.c:5226
 lock_acquire.part.0+0x11b/0x380 kernel/locking/lockdep.c:5849
 __mutex_lock_common kernel/locking/mutex.c:585 [inline]
 __mutex_lock+0x19b/0xa60 kernel/locking/mutex.c:735
 acomp_ctx_get_cpu_lock mm/zswap.c:899 [inline]
 zswap_compress mm/zswap.c:931 [inline]
 zswap_store_page mm/zswap.c:1456 [inline]
 zswap_store+0x908/0x26c0 mm/zswap.c:1563
 swap_writepage+0x3b6/0x1120 mm/page_io.c:279
 shmem_writepage+0xf7b/0x1490 mm/shmem.c:1579
 pageout+0x3b5/0xaa0 mm/vmscan.c:696
 shrink_folio_list+0x3025/0x42d0 mm/vmscan.c:1374
 evict_folios+0x6e7/0x1a50 mm/vmscan.c:4600
 try_to_shrink_lruvec+0x61e/0xa80 mm/vmscan.c:4799
 shrink_one+0x3e3/0x7b0 mm/vmscan.c:4844
 shrink_many mm/vmscan.c:4907 [inline]
 lru_gen_shrink_node mm/vmscan.c:4985 [inline]
 shrink_node+0x2763/0x3e60 mm/vmscan.c:5966
 shrink_zones mm/vmscan.c:6225 [inline]
 do_try_to_free_pages+0x35f/0x1a30 mm/vmscan.c:6287
 try_to_free_pages+0x2ae/0x6b0 mm/vmscan.c:6537
 __perform_reclaim mm/page_alloc.c:3929 [inline]
 __alloc_pages_direct_reclaim mm/page_alloc.c:3951 [inline]
 __alloc_pages_slowpath mm/page_alloc.c:4382 [inline]
 __alloc_pages_noprof+0xb0c/0x25b0 mm/page_alloc.c:4766
 alloc_pages_mpol_noprof+0x2c8/0x620 mm/mempolicy.c:2269
 get_free_pages_noprof+0xc/0x40 mm/page_alloc.c:4800
 kasan_populate_vmalloc_pte+0x2d/0x160 mm/kasan/shadow.c:304
 apply_to_pte_range mm/memory.c:2831 [inline]
 apply_to_pmd_range mm/memory.c:2875 [inline]
 apply_to_pud_range mm/memory.c:2911 [inline]
 apply_to_p4d_range mm/memory.c:2947 [inline]
 __apply_to_page_range+0x600/0xd30 mm/memory.c:2981
 alloc_vmap_area+0x93e/0x2a70 mm/vmalloc.c:2035
 __get_vm_area_node+0x19e/0x2f0 mm/vmalloc.c:3137
 __vmalloc_node_range_noprof+0x26a/0x1530 mm/vmalloc.c:3806
 alloc_thread_stack_node kernel/fork.c:314 [inline]
 dup_task_struct kernel/fork.c:1116 [inline]
 copy_process+0x2f06/0x8e50 kernel/fork.c:2224
 kernel_clone+0xfd/0x960 kernel/fork.c:2806
 kernel_thread+0xc0/0x100 kernel/fork.c:2868
 create_kthread kernel/kthread.c:412 [inline]
 kthreadd+0x4ef/0x7d0 kernel/kthread.c:767
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
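
For illustration, the cycle lockdep reports reduces to the classic AB/BA ordering problem in the "Possible unsafe locking scenario" table above: the CPU-hotplug prepare path (zswap_cpu_comp_prepare) allocates with GFP_KERNEL while holding the per-CPU acomp_ctx mutex, so the allocation can enter reclaim (mutex -> fs_reclaim), while direct reclaim calls zswap_store() and takes the same mutex under fs_reclaim (fs_reclaim -> mutex). The sketch below is a hypothetical userspace reduction, not kernel code: lock_a stands in for fs_reclaim (which in the kernel is a lockdep-only annotation, not a real mutex) and lock_b for the per-CPU pool->acomp_ctx mutex.

/*
 * Hypothetical userspace reduction of the inversion above; NOT kernel
 * code. Build with: cc -pthread deadlock.c
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* "fs_reclaim" */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* "acomp_ctx->mutex" */

/* CPU1 in the scenario table: zswap_cpu_comp_prepare() holds the
 * acomp_ctx mutex, then allocates with GFP_KERNEL, which may take
 * fs_reclaim: B then A. */
static void *hotplug_path(void *unused)
{
	pthread_mutex_lock(&lock_b);
	sleep(1);			/* widen the race window */
	pthread_mutex_lock(&lock_a);	/* blocks while reclaim_path holds A */
	pthread_mutex_unlock(&lock_a);
	pthread_mutex_unlock(&lock_b);
	return NULL;
}

/* CPU0: direct reclaim holds fs_reclaim for the whole reclaim pass,
 * then zswap_store() takes the acomp_ctx mutex: A then B. */
static void *reclaim_path(void *unused)
{
	pthread_mutex_lock(&lock_a);
	sleep(1);
	pthread_mutex_lock(&lock_b);	/* blocks while hotplug_path holds B */
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, hotplug_path, NULL);
	pthread_create(&t2, NULL, reclaim_path, NULL);
	pthread_join(t1, NULL);		/* with the sleeps above, never returns */
	pthread_join(t2, NULL);
	puts("unreachable in practice: the two paths deadlock");
	return 0;
}

Each thread takes its first lock, sleeps long enough for the other thread to do the same, and then blocks forever on its second lock, which is the *** DEADLOCK *** lockdep predicts. The usual remedies are to avoid allocating (or anything else that can enter reclaim) while holding the mutex, or to avoid taking the mutex from within reclaim; which of these the actual fix should use is for the zswap maintainers to decide.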