======================================================
WARNING: possible circular locking dependency detected
6.13.0-rc7-syzkaller-00149-g9bffa1ad25b8 #0 Not tainted
------------------------------------------------------
kswapd0/113 is trying to acquire lock:
ffffe8ffac438ff0 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}, at: acomp_ctx_get_cpu_lock mm/zswap.c:899 [inline]
ffffe8ffac438ff0 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}, at: zswap_compress mm/zswap.c:931 [inline]
ffffe8ffac438ff0 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}, at: zswap_store_page mm/zswap.c:1456 [inline]
ffffe8ffac438ff0 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}, at: zswap_store+0x910/0x2600 mm/zswap.c:1563

but task is already holding lock:
ffffffff8df4f160 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x16e/0x18f0 mm/vmscan.c:6874

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}-{0:0}:
       __fs_reclaim_acquire mm/page_alloc.c:3853 [inline]
       fs_reclaim_acquire+0x102/0x150 mm/page_alloc.c:3867
       might_alloc include/linux/sched/mm.h:318 [inline]
       slab_pre_alloc_hook mm/slub.c:4070 [inline]
       slab_alloc_node mm/slub.c:4148 [inline]
       __kmalloc_cache_node_noprof+0x55/0x3f0 mm/slub.c:4337
       kmalloc_node_noprof include/linux/slab.h:924 [inline]
       zswap_cpu_comp_prepare+0xc9/0x470 mm/zswap.c:828
       cpuhp_invoke_callback+0x20c/0xa10 kernel/cpu.c:204
       cpuhp_issue_call+0x1c0/0x980 kernel/cpu.c:2375
       __cpuhp_state_add_instance_cpuslocked+0x1a4/0x3c0 kernel/cpu.c:2437
       __cpuhp_state_add_instance+0xd7/0x2e0 kernel/cpu.c:2458
       cpuhp_state_add_instance include/linux/cpuhotplug.h:386 [inline]
       zswap_pool_create+0x41c/0x710 mm/zswap.c:291
       __zswap_pool_create_fallback mm/zswap.c:359 [inline]
       zswap_setup+0x402/0x810 mm/zswap.c:1811
       zswap_init+0x2c/0x40 mm/zswap.c:1847
       do_one_initcall+0x128/0x630 init/main.c:1266
       do_initcall_level init/main.c:1328 [inline]
       do_initcalls init/main.c:1344 [inline]
       do_basic_setup init/main.c:1363 [inline]
       kernel_init_freeable+0x58f/0x8b0 init/main.c:1577
       kernel_init+0x1c/0x2b0 init/main.c:1466
       ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

-> #0 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3161 [inline]
       check_prevs_add kernel/locking/lockdep.c:3280 [inline]
       validate_chain kernel/locking/lockdep.c:3904 [inline]
       __lock_acquire+0x249e/0x3c40 kernel/locking/lockdep.c:5226
       lock_acquire.part.0+0x11b/0x380 kernel/locking/lockdep.c:5849
       __mutex_lock_common kernel/locking/mutex.c:585 [inline]
       __mutex_lock+0x19b/0xa60 kernel/locking/mutex.c:735
       acomp_ctx_get_cpu_lock mm/zswap.c:899 [inline]
       zswap_compress mm/zswap.c:931 [inline]
       zswap_store_page mm/zswap.c:1456 [inline]
       zswap_store+0x910/0x2600 mm/zswap.c:1563
       swap_writepage+0x3b6/0x1120 mm/page_io.c:279
       shmem_writepage+0xf7b/0x1490 mm/shmem.c:1579
       pageout+0x3b2/0xaa0 mm/vmscan.c:696
       shrink_folio_list+0x3025/0x42d0 mm/vmscan.c:1374
       evict_folios+0x6e7/0x1a50 mm/vmscan.c:4600
       try_to_shrink_lruvec+0x61e/0xa80 mm/vmscan.c:4799
       shrink_one+0x3e3/0x7b0 mm/vmscan.c:4844
       shrink_many mm/vmscan.c:4907 [inline]
       lru_gen_shrink_node mm/vmscan.c:4985 [inline]
       shrink_node+0xbf0/0x3f20 mm/vmscan.c:5966
       kswapd_shrink_node mm/vmscan.c:6795 [inline]
       balance_pgdat+0xc1f/0x18f0 mm/vmscan.c:6987
       kswapd+0x605/0xc00 mm/vmscan.c:7256
       kthread+0x2c1/0x3a0 kernel/kthread.c:389
       ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex);
                               lock(fs_reclaim);
  lock(&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex);

 *** DEADLOCK ***

1 lock held by kswapd0/113:
 #0: ffffffff8df4f160 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat+0x16e/0x18f0 mm/vmscan.c:6874

stack backtrace:
CPU: 2 UID: 0 PID: 113 Comm: kswapd0 Not tainted 6.13.0-rc7-syzkaller-00149-g9bffa1ad25b8 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 print_circular_bug+0x41c/0x610 kernel/locking/lockdep.c:2074
 check_noncircular+0x31a/0x400 kernel/locking/lockdep.c:2206
 check_prev_add kernel/locking/lockdep.c:3161 [inline]
 check_prevs_add kernel/locking/lockdep.c:3280 [inline]
 validate_chain kernel/locking/lockdep.c:3904 [inline]
 __lock_acquire+0x249e/0x3c40 kernel/locking/lockdep.c:5226
 lock_acquire.part.0+0x11b/0x380 kernel/locking/lockdep.c:5849
 __mutex_lock_common kernel/locking/mutex.c:585 [inline]
 __mutex_lock+0x19b/0xa60 kernel/locking/mutex.c:735
 acomp_ctx_get_cpu_lock mm/zswap.c:899 [inline]
 zswap_compress mm/zswap.c:931 [inline]
 zswap_store_page mm/zswap.c:1456 [inline]
 zswap_store+0x910/0x2600 mm/zswap.c:1563
 swap_writepage+0x3b6/0x1120 mm/page_io.c:279
 shmem_writepage+0xf7b/0x1490 mm/shmem.c:1579
 pageout+0x3b2/0xaa0 mm/vmscan.c:696
 shrink_folio_list+0x3025/0x42d0 mm/vmscan.c:1374
 evict_folios+0x6e7/0x1a50 mm/vmscan.c:4600
 try_to_shrink_lruvec+0x61e/0xa80 mm/vmscan.c:4799
 shrink_one+0x3e3/0x7b0 mm/vmscan.c:4844
 shrink_many mm/vmscan.c:4907 [inline]
 lru_gen_shrink_node mm/vmscan.c:4985 [inline]
 shrink_node+0xbf0/0x3f20 mm/vmscan.c:5966
 kswapd_shrink_node mm/vmscan.c:6795 [inline]
 balance_pgdat+0xc1f/0x18f0 mm/vmscan.c:6987
 kswapd+0x605/0xc00 mm/vmscan.c:7256
 kthread+0x2c1/0x3a0 kernel/kthread.c:389
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
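
The inversion lockdep is reporting is the classic AB-BA pattern: the cpuhp prepare path (#1) allocates with GFP_KERNEL while holding the per-CPU acomp_ctx mutex, which records mutex -> fs_reclaim, while kswapd (#0) is already inside fs_reclaim when zswap_store() takes the same mutex. Below is a minimal standalone userspace sketch of that ordering, using pthread mutexes as stand-ins for fs_reclaim (which in the kernel is only a lockdep annotation, not a real lock) and the acomp_ctx mutex; it is illustrative only, not the kernel code paths themselves.

/*
 * Illustrative sketch of the AB-BA ordering in the report above.
 * "reclaim" stands in for the fs_reclaim annotation, "acomp" for
 * per_cpu_ptr(pool->acomp_ctx, cpu)->mutex. Thread names are hypothetical.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t reclaim = PTHREAD_MUTEX_INITIALIZER; /* ~ fs_reclaim */
static pthread_mutex_t acomp   = PTHREAD_MUTEX_INITIALIZER; /* ~ acomp_ctx->mutex */

/* models the cpuhp path (#1): allocation under the acomp mutex may reclaim */
static void *cpuhp_path(void *arg)
{
	pthread_mutex_lock(&acomp);    /* B */
	pthread_mutex_lock(&reclaim);  /* then A: B -> A dependency */
	pthread_mutex_unlock(&reclaim);
	pthread_mutex_unlock(&acomp);
	return NULL;
}

/* models kswapd (#0): already in reclaim, then takes the acomp mutex */
static void *kswapd_path(void *arg)
{
	pthread_mutex_lock(&reclaim);  /* A */
	pthread_mutex_lock(&acomp);    /* then B: A -> B, opposite order */
	pthread_mutex_unlock(&acomp);
	pthread_mutex_unlock(&reclaim);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, cpuhp_path, NULL);
	pthread_create(&t2, NULL, kswapd_path, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	/* If each thread grabs its first lock before the other's second,
	 * both block forever; a lock-order checker (e.g. helgrind) flags
	 * the inversion even when the run happens to complete. */
	puts("done");
	return 0;
}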