======================================================
WARNING: possible circular locking dependency detected
6.13.0-rc7-syzkaller-00149-g9bffa1ad25b8 #0 Not tainted
------------------------------------------------------
kworker/u4:2/30 is trying to acquire lock:
ffffe8ffffc37f50 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}, at: acomp_ctx_get_cpu_lock mm/zswap.c:899 [inline]
ffffe8ffffc37f50 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}, at: zswap_compress mm/zswap.c:931 [inline]
ffffe8ffffc37f50 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}, at: zswap_store_page mm/zswap.c:1456 [inline]
ffffe8ffffc37f50 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}, at: zswap_store+0xa3b/0x1c30 mm/zswap.c:1563

but task is already holding lock:
ffffffff8ea36f00 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
ffffffff8ea36f00 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim+0xd4/0x3c0 mm/page_alloc.c:3951

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}-{0:0}:
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __fs_reclaim_acquire mm/page_alloc.c:3853 [inline]
       fs_reclaim_acquire+0x88/0x130 mm/page_alloc.c:3867
       might_alloc include/linux/sched/mm.h:318 [inline]
       slab_pre_alloc_hook mm/slub.c:4070 [inline]
       slab_alloc_node mm/slub.c:4148 [inline]
       __kmalloc_cache_node_noprof+0x40/0x3a0 mm/slub.c:4337
       kmalloc_node_noprof include/linux/slab.h:924 [inline]
       zswap_cpu_comp_prepare+0xdc/0x400 mm/zswap.c:828
       cpuhp_invoke_callback+0x415/0x830 kernel/cpu.c:204
       cpuhp_issue_call+0x46f/0x7e0
       __cpuhp_state_add_instance_cpuslocked+0x1ed/0x500 kernel/cpu.c:2437
       __cpuhp_state_add_instance+0x27/0x40 kernel/cpu.c:2458
       cpuhp_state_add_instance include/linux/cpuhotplug.h:386 [inline]
       zswap_pool_create+0x38c/0x680 mm/zswap.c:291
       zswap_setup+0x32a/0x4b0 mm/zswap.c:1811
       do_one_initcall+0x248/0x870 init/main.c:1266
       do_initcall_level+0x157/0x210 init/main.c:1328
       do_initcalls+0x3f/0x80 init/main.c:1344
       kernel_init_freeable+0x435/0x5d0 init/main.c:1577
       kernel_init+0x1d/0x2b0 init/main.c:1466
       ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

-> #0 (&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex){+.+.}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3161 [inline]
       check_prevs_add kernel/locking/lockdep.c:3280 [inline]
       validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
       __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
       lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
       __mutex_lock_common kernel/locking/mutex.c:585 [inline]
       __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
       acomp_ctx_get_cpu_lock mm/zswap.c:899 [inline]
       zswap_compress mm/zswap.c:931 [inline]
       zswap_store_page mm/zswap.c:1456 [inline]
       zswap_store+0xa3b/0x1c30 mm/zswap.c:1563
       swap_writepage+0x647/0xce0 mm/page_io.c:279
       shmem_writepage+0x1248/0x1610 mm/shmem.c:1579
       pageout mm/vmscan.c:696 [inline]
       shrink_folio_list+0x3b68/0x5ca0 mm/vmscan.c:1374
       evict_folios+0x3c92/0x58c0 mm/vmscan.c:4600
       try_to_shrink_lruvec+0x9a6/0xc70 mm/vmscan.c:4799
       shrink_one+0x3b9/0x850 mm/vmscan.c:4844
       shrink_many mm/vmscan.c:4907 [inline]
       lru_gen_shrink_node mm/vmscan.c:4985 [inline]
       shrink_node+0x37c5/0x3e50 mm/vmscan.c:5966
       shrink_zones mm/vmscan.c:6225 [inline]
       do_try_to_free_pages+0x78c/0x1cf0 mm/vmscan.c:6287
       try_to_free_pages+0x47c/0x1050 mm/vmscan.c:6537
       __perform_reclaim mm/page_alloc.c:3929 [inline]
       __alloc_pages_direct_reclaim+0x178/0x3c0 mm/page_alloc.c:3951
       __alloc_pages_slowpath+0x764/0x1020 mm/page_alloc.c:4382
       __alloc_pages_noprof+0x49b/0x710 mm/page_alloc.c:4766
       alloc_pages_mpol_noprof+0x3e1/0x780 mm/mempolicy.c:2269
       alloc_slab_page+0x6a/0x110 mm/slub.c:2423
       allocate_slab+0x1c0/0x2b0 mm/slub.c:2597
       new_slab mm/slub.c:2642 [inline]
       ___slab_alloc+0xc27/0x14a0 mm/slub.c:3830
       __slab_alloc+0x58/0xa0 mm/slub.c:3920
       __slab_alloc_node mm/slub.c:3995 [inline]
       slab_alloc_node mm/slub.c:4156 [inline]
       __do_kmalloc_node mm/slub.c:4297 [inline]
       __kmalloc_node_track_caller_noprof+0x2e9/0x4c0 mm/slub.c:4317
       kmalloc_reserve+0x111/0x2a0 net/core/skbuff.c:609
       __alloc_skb+0x1f3/0x440 net/core/skbuff.c:678
       alloc_skb include/linux/skbuff.h:1323 [inline]
       nlmsg_new include/net/netlink.h:1018 [inline]
       rtmsg_ifinfo_build_skb+0x84/0x260 net/core/rtnetlink.c:4347
       unregister_netdevice_many_notify+0xf71/0x1da0 net/core/dev.c:11549
       cleanup_net+0x75d/0xd50 net/core/net_namespace.c:643
       process_one_work kernel/workqueue.c:3236 [inline]
       process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3317
       worker_thread+0x870/0xd30 kernel/workqueue.c:3398
       kthread+0x2f0/0x390 kernel/kthread.c:389
       ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex);
                               lock(fs_reclaim);
  lock(&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex);

 *** DEADLOCK ***

5 locks held by kworker/u4:2/30:
 #0: ffff88801baf3148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
 #0: ffff88801baf3148 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3317
 #1: ffffc90000517d00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
 #1: ffffc90000517d00 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317
 #2: ffffffff8fca6b50 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0x16a/0xd50 net/core/net_namespace.c:602
 #3: ffffffff8fcb3008 (rtnl_mutex){+.+.}-{4:4}, at: cleanup_net+0x6af/0xd50 net/core/net_namespace.c:638
 #4: ffffffff8ea36f00 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3926 [inline]
 #4: ffffffff8ea36f00 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim+0xd4/0x3c0 mm/page_alloc.c:3951

stack backtrace:
CPU: 0 UID: 0 PID: 30 Comm: kworker/u4:2 Not tainted 6.13.0-rc7-syzkaller-00149-g9bffa1ad25b8 #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014
Workqueue: netns cleanup_net
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_circular_bug+0x13a/0x1b0 kernel/locking/lockdep.c:2074
 check_noncircular+0x36a/0x4a0 kernel/locking/lockdep.c:2206
 check_prev_add kernel/locking/lockdep.c:3161 [inline]
 check_prevs_add kernel/locking/lockdep.c:3280 [inline]
 validate_chain+0x18ef/0x5920 kernel/locking/lockdep.c:3904
 __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5226
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5849
 __mutex_lock_common kernel/locking/mutex.c:585 [inline]
 __mutex_lock+0x1ac/0xee0 kernel/locking/mutex.c:735
 acomp_ctx_get_cpu_lock mm/zswap.c:899 [inline]
 zswap_compress mm/zswap.c:931 [inline]
 zswap_store_page mm/zswap.c:1456 [inline]
 zswap_store+0xa3b/0x1c30 mm/zswap.c:1563
 swap_writepage+0x647/0xce0 mm/page_io.c:279
 shmem_writepage+0x1248/0x1610 mm/shmem.c:1579
 pageout mm/vmscan.c:696 [inline]
 shrink_folio_list+0x3b68/0x5ca0 mm/vmscan.c:1374
 evict_folios+0x3c92/0x58c0 mm/vmscan.c:4600
 try_to_shrink_lruvec+0x9a6/0xc70 mm/vmscan.c:4799
 shrink_one+0x3b9/0x850 mm/vmscan.c:4844
 shrink_many mm/vmscan.c:4907 [inline]
 lru_gen_shrink_node mm/vmscan.c:4985 [inline]
 shrink_node+0x37c5/0x3e50 mm/vmscan.c:5966
 shrink_zones mm/vmscan.c:6225 [inline]
 do_try_to_free_pages+0x78c/0x1cf0 mm/vmscan.c:6287
 try_to_free_pages+0x47c/0x1050 mm/vmscan.c:6537
 __perform_reclaim mm/page_alloc.c:3929 [inline]
 __alloc_pages_direct_reclaim+0x178/0x3c0 mm/page_alloc.c:3951
 __alloc_pages_slowpath+0x764/0x1020 mm/page_alloc.c:4382
 __alloc_pages_noprof+0x49b/0x710 mm/page_alloc.c:4766
 alloc_pages_mpol_noprof+0x3e1/0x780 mm/mempolicy.c:2269
 alloc_slab_page+0x6a/0x110 mm/slub.c:2423
 allocate_slab+0x1c0/0x2b0 mm/slub.c:2597
 new_slab mm/slub.c:2642 [inline]
 ___slab_alloc+0xc27/0x14a0 mm/slub.c:3830
 __slab_alloc+0x58/0xa0 mm/slub.c:3920
 __slab_alloc_node mm/slub.c:3995 [inline]
 slab_alloc_node mm/slub.c:4156 [inline]
 __do_kmalloc_node mm/slub.c:4297 [inline]
 __kmalloc_node_track_caller_noprof+0x2e9/0x4c0 mm/slub.c:4317
 kmalloc_reserve+0x111/0x2a0 net/core/skbuff.c:609
 __alloc_skb+0x1f3/0x440 net/core/skbuff.c:678
 alloc_skb include/linux/skbuff.h:1323 [inline]
 nlmsg_new include/net/netlink.h:1018 [inline]
 rtmsg_ifinfo_build_skb+0x84/0x260 net/core/rtnetlink.c:4347
 unregister_netdevice_many_notify+0xf71/0x1da0 net/core/dev.c:11549
 cleanup_net+0x75d/0xd50 net/core/net_namespace.c:643
 process_one_work kernel/workqueue.c:3236 [inline]
 process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3317
 worker_thread+0x870/0xd30 kernel/workqueue.c:3398
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
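
Note on the ordering lockdep is reporting: chain #1 records fs_reclaim being entered (via the kmalloc in zswap_cpu_comp_prepare()) in a context where the per-CPU acomp_ctx mutex is already held, while chain #0 records the same mutex being taken by zswap_store() on the direct-reclaim path, where fs_reclaim is already held. The sketch below is a purely illustrative userspace stand-in for that AB-BA inversion; the lock names and pthread code are hypothetical and are not the kernel implementation.

/* Illustrative only: compile with `cc -pthread deadlock_sketch.c`. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t reclaim_lock   = PTHREAD_MUTEX_INITIALIZER; /* stands in for fs_reclaim */
static pthread_mutex_t acomp_ctx_lock = PTHREAD_MUTEX_INITIALIZER; /* stands in for pool->acomp_ctx->mutex */

/* Ordering from chain #1: mutex held, then an allocation that may enter reclaim. */
static void *cpuhp_prepare_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&acomp_ctx_lock);
	pthread_mutex_lock(&reclaim_lock);     /* kmalloc -> fs_reclaim_acquire */
	pthread_mutex_unlock(&reclaim_lock);
	pthread_mutex_unlock(&acomp_ctx_lock);
	return NULL;
}

/* Ordering from chain #0: reclaim already entered, then the mutex is taken. */
static void *reclaim_store_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&reclaim_lock);
	pthread_mutex_lock(&acomp_ctx_lock);   /* zswap_store -> zswap_compress */
	pthread_mutex_unlock(&acomp_ctx_lock);
	pthread_mutex_unlock(&reclaim_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	/* Run the two paths concurrently; with unlucky timing each thread ends
	 * up holding one lock while waiting for the other, which is the
	 * "Possible unsafe locking scenario" table above. */
	pthread_create(&a, NULL, cpuhp_prepare_path, NULL);
	pthread_create(&b, NULL, reclaim_store_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	puts("finished; the ordering is still unsafe even when a run does not hang");
	return 0;
}

Lockdep flags the inverted acquisition order as soon as both dependencies have been observed, independently of whether any particular run actually deadlocks.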