======================================================
WARNING: possible circular locking dependency detected
5.14.0-rc1-syzkaller #0 Not tainted
------------------------------------------------------
kworker/0:4/9761 is trying to acquire lock:
ffffffff8ba97aa0 (fs_reclaim){+.+.}-{0:0}, at: fs_reclaim_acquire+0xf7/0x160 mm/page_alloc.c:4574

but task is already holding lock:
ffff8880b9c4d660 (lock#2){-.-.}-{2:2}, at: __alloc_pages_bulk+0x4ad/0x1870 mm/page_alloc.c:5279

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (lock#2){-.-.}-{2:2}:
       local_lock_acquire include/linux/local_lock_internal.h:42 [inline]
       free_unref_page+0x1bf/0x690 mm/page_alloc.c:3427
       mm_free_pgd kernel/fork.c:636 [inline]
       __mmdrop+0xcb/0x3f0 kernel/fork.c:687
       mmdrop include/linux/sched/mm.h:49 [inline]
       finish_task_switch.isra.0+0x6da/0xa50 kernel/sched/core.c:4582
       context_switch kernel/sched/core.c:4686 [inline]
       __schedule+0x942/0x26f0 kernel/sched/core.c:5940
       preempt_schedule_irq+0x4e/0x90 kernel/sched/core.c:6328
       irqentry_exit+0x31/0x80 kernel/entry/common.c:427
       asm_sysvec_apic_timer_interrupt+0x12/0x20 arch/x86/include/asm/idtentry.h:638
       lock_acquire+0x1ef/0x510 kernel/locking/lockdep.c:5593
       fs_reclaim_acquire mm/page_alloc.c:4569 [inline]
       fs_reclaim_acquire+0xd2/0x160 mm/page_alloc.c:4560
       might_alloc include/linux/sched/mm.h:198 [inline]
       slab_pre_alloc_hook mm/slab.h:485 [inline]
       slab_alloc mm/slab.c:3306 [inline]
       kmem_cache_alloc+0x3a/0x540 mm/slab.c:3507
       anon_vma_alloc mm/rmap.c:89 [inline]
       anon_vma_fork+0xed/0x630 mm/rmap.c:354
       dup_mmap kernel/fork.c:554 [inline]
       dup_mm+0x9a0/0x1380 kernel/fork.c:1379
       copy_mm kernel/fork.c:1431 [inline]
       copy_process+0x71ec/0x74d0 kernel/fork.c:2119
       kernel_clone+0xe7/0xac0 kernel/fork.c:2509
       __do_sys_clone+0xc8/0x110 kernel/fork.c:2626
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x44/0xae

-> #1 (mmu_notifier_invalidate_range_start){+.+.}-{0:0}:
       fs_reclaim_acquire mm/page_alloc.c:4569 [inline]
       fs_reclaim_acquire+0xd2/0x160 mm/page_alloc.c:4560
       might_alloc include/linux/sched/mm.h:198 [inline]
       slab_pre_alloc_hook mm/slab.h:485 [inline]
       slab_alloc mm/slab.c:3306 [inline]
       kmem_cache_alloc_trace+0x39/0x480 mm/slab.c:3573
       kmalloc include/linux/slab.h:591 [inline]
       kzalloc include/linux/slab.h:721 [inline]
       alloc_workqueue_attrs+0x38/0x80 kernel/workqueue.c:3365
       wq_numa_init kernel/workqueue.c:5899 [inline]
       workqueue_init+0x94/0x979 kernel/workqueue.c:6031
       kernel_init_freeable+0x3fb/0x741 init/main.c:1577
       kernel_init+0x1a/0x1d0 init/main.c:1485
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

-> #0 (fs_reclaim){+.+.}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3051 [inline]
       check_prevs_add kernel/locking/lockdep.c:3174 [inline]
       validate_chain kernel/locking/lockdep.c:3789 [inline]
       __lock_acquire+0x2a07/0x54a0 kernel/locking/lockdep.c:5015
       lock_acquire kernel/locking/lockdep.c:5625 [inline]
       lock_acquire+0x1ab/0x510 kernel/locking/lockdep.c:5590
       __fs_reclaim_acquire mm/page_alloc.c:4552 [inline]
       fs_reclaim_acquire+0x117/0x160 mm/page_alloc.c:4566
       prepare_alloc_pages+0x15c/0x580 mm/page_alloc.c:5164
       __alloc_pages+0x12f/0x500 mm/page_alloc.c:5363
       alloc_pages+0x18c/0x2a0 mm/mempolicy.c:2244
       stack_depot_save+0x39d/0x4e0 lib/stackdepot.c:303
       save_stack+0x15e/0x1e0 mm/page_owner.c:120
       __set_page_owner+0x50/0x290 mm/page_owner.c:181
       prep_new_page mm/page_alloc.c:2433 [inline]
       __alloc_pages_bulk+0x8b9/0x1870 mm/page_alloc.c:5301
       alloc_pages_bulk_array_node include/linux/gfp.h:557 [inline]
       vm_area_alloc_pages mm/vmalloc.c:2793 [inline]
       __vmalloc_area_node mm/vmalloc.c:2863 [inline]
       __vmalloc_node_range+0x39d/0x960 mm/vmalloc.c:2966
       __vmalloc_node mm/vmalloc.c:3015 [inline]
       __vmalloc+0x69/0x80 mm/vmalloc.c:3029
       pcpu_mem_zalloc mm/percpu.c:517 [inline]
       pcpu_mem_zalloc+0x51/0xa0 mm/percpu.c:509
       pcpu_alloc_chunk mm/percpu.c:1460 [inline]
       pcpu_create_chunk+0x123/0x720 mm/percpu-vm.c:338
       pcpu_balance_populated mm/percpu.c:2114 [inline]
       pcpu_balance_workfn+0xab4/0xe10 mm/percpu.c:2252
       process_one_work+0x98d/0x1630 kernel/workqueue.c:2276
       worker_thread+0x658/0x11f0 kernel/workqueue.c:2422
       kthread+0x3e5/0x4d0 kernel/kthread.c:319
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

other info that might help us debug this:

Chain exists of:
  fs_reclaim --> mmu_notifier_invalidate_range_start --> lock#2

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(lock#2);
                               lock(mmu_notifier_invalidate_range_start);
                               lock(lock#2);
  lock(fs_reclaim);

 *** DEADLOCK ***

4 locks held by kworker/0:4/9761:
 #0: ffff888010867d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010867d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:620 [inline]
 #0: ffff888010867d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010867d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:617 [inline]
 #0: ffff888010867d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:644 [inline]
 #0: ffff888010867d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x871/0x1630 kernel/workqueue.c:2247
 #1: ffffc9000b2afdb0 (pcpu_balance_work){+.+.}-{0:0}, at: process_one_work+0x8a5/0x1630 kernel/workqueue.c:2251
 #2: ffffffff8ba784e8 (pcpu_alloc_mutex){+.+.}-{3:3}, at: pcpu_balance_workfn+0x21/0xe10 mm/percpu.c:2247
 #3: ffff8880b9c4d660 (lock#2){-.-.}-{2:2}, at: __alloc_pages_bulk+0x4ad/0x1870 mm/page_alloc.c:5279

stack backtrace:
CPU: 0 PID: 9761 Comm: kworker/0:4 Not tainted 5.14.0-rc1-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: events pcpu_balance_workfn
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:105
 check_noncircular+0x25f/0x2e0 kernel/locking/lockdep.c:2131
 check_prev_add kernel/locking/lockdep.c:3051 [inline]
 check_prevs_add kernel/locking/lockdep.c:3174 [inline]
 validate_chain kernel/locking/lockdep.c:3789 [inline]
 __lock_acquire+0x2a07/0x54a0 kernel/locking/lockdep.c:5015
 lock_acquire kernel/locking/lockdep.c:5625 [inline]
 lock_acquire+0x1ab/0x510 kernel/locking/lockdep.c:5590
 __fs_reclaim_acquire mm/page_alloc.c:4552 [inline]
 fs_reclaim_acquire+0x117/0x160 mm/page_alloc.c:4566
 prepare_alloc_pages+0x15c/0x580 mm/page_alloc.c:5164
 __alloc_pages+0x12f/0x500 mm/page_alloc.c:5363
 alloc_pages+0x18c/0x2a0 mm/mempolicy.c:2244
 stack_depot_save+0x39d/0x4e0 lib/stackdepot.c:303
 save_stack+0x15e/0x1e0 mm/page_owner.c:120
 __set_page_owner+0x50/0x290 mm/page_owner.c:181
 prep_new_page mm/page_alloc.c:2433 [inline]
 __alloc_pages_bulk+0x8b9/0x1870 mm/page_alloc.c:5301
 alloc_pages_bulk_array_node include/linux/gfp.h:557 [inline]
 vm_area_alloc_pages mm/vmalloc.c:2793 [inline]
 __vmalloc_area_node mm/vmalloc.c:2863 [inline]
 __vmalloc_node_range+0x39d/0x960 mm/vmalloc.c:2966
 __vmalloc_node mm/vmalloc.c:3015 [inline]
 __vmalloc+0x69/0x80 mm/vmalloc.c:3029
 pcpu_mem_zalloc mm/percpu.c:517 [inline]
 pcpu_mem_zalloc+0x51/0xa0 mm/percpu.c:509
 pcpu_alloc_chunk mm/percpu.c:1460 [inline]
 pcpu_create_chunk+0x123/0x720 mm/percpu-vm.c:338
 pcpu_balance_populated mm/percpu.c:2114 [inline]
 pcpu_balance_workfn+0xab4/0xe10 mm/percpu.c:2252
 process_one_work+0x98d/0x1630 kernel/workqueue.c:2276
 worker_thread+0x658/0x11f0 kernel/workqueue.c:2422
 kthread+0x3e5/0x4d0 kernel/kthread.c:319
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
BUG: sleeping function called from invalid context at mm/page_alloc.c:5167
in_atomic(): 0, irqs_disabled(): 1, non_block: 0, pid: 9761, name: kworker/0:4
INFO: lockdep is turned off.
irq event stamp: 40250
hardirqs last  enabled at (40249): [<ffffffff892c9b90>] __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:160 [inline]
hardirqs last  enabled at (40249): [<ffffffff892c9b90>] _raw_spin_unlock_irqrestore+0x50/0x70 kernel/locking/spinlock.c:191
hardirqs last disabled at (40250): [<ffffffff81b17567>] __alloc_pages_bulk+0x1017/0x1870 mm/page_alloc.c:5279
softirqs last  enabled at (40142): [<ffffffff81455b1e>] do_softirq.part.0+0xde/0x130 kernel/softirq.c:459
softirqs last disabled at (40127): [<ffffffff81455b1e>] do_softirq.part.0+0xde/0x130 kernel/softirq.c:459
CPU: 0 PID: 9761 Comm: kworker/0:4 Not tainted 5.14.0-rc1-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: events pcpu_balance_workfn
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:105
 ___might_sleep.cold+0x1f1/0x237 kernel/sched/core.c:9154
 prepare_alloc_pages+0x3da/0x580 mm/page_alloc.c:5167
 __alloc_pages+0x12f/0x500 mm/page_alloc.c:5363
 alloc_pages+0x18c/0x2a0 mm/mempolicy.c:2244
 stack_depot_save+0x39d/0x4e0 lib/stackdepot.c:303
 save_stack+0x15e/0x1e0 mm/page_owner.c:120
 __set_page_owner+0x50/0x290 mm/page_owner.c:181
 prep_new_page mm/page_alloc.c:2433 [inline]
 __alloc_pages_bulk+0x8b9/0x1870 mm/page_alloc.c:5301
 alloc_pages_bulk_array_node include/linux/gfp.h:557 [inline]
 vm_area_alloc_pages mm/vmalloc.c:2793 [inline]
 __vmalloc_area_node mm/vmalloc.c:2863 [inline]
 __vmalloc_node_range+0x39d/0x960 mm/vmalloc.c:2966
 __vmalloc_node mm/vmalloc.c:3015 [inline]
 __vmalloc+0x69/0x80 mm/vmalloc.c:3029
 pcpu_mem_zalloc mm/percpu.c:517 [inline]
 pcpu_mem_zalloc+0x51/0xa0 mm/percpu.c:509
 pcpu_alloc_chunk mm/percpu.c:1460 [inline]
 pcpu_create_chunk+0x123/0x720 mm/percpu-vm.c:338
 pcpu_balance_populated mm/percpu.c:2114 [inline]
 pcpu_balance_workfn+0xab4/0xe10 mm/percpu.c:2252
 process_one_work+0x98d/0x1630 kernel/workqueue.c:2276
 worker_thread+0x658/0x11f0 kernel/workqueue.c:2422
 kthread+0x3e5/0x4d0 kernel/kthread.c:319
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295