======================================================
WARNING: possible circular locking dependency detected
5.14.0-rc1-syzkaller #0 Not tainted
------------------------------------------------------
kworker/1:8/9849 is trying to acquire lock:
ffffffff8ba97aa0 (fs_reclaim){+.+.}-{0:0}, at: fs_reclaim_acquire+0xf7/0x160 mm/page_alloc.c:4574

but task is already holding lock:
ffff8880b9d4d660 (lock#2){-.-.}-{2:2}, at: __alloc_pages_bulk+0x4ad/0x1870 mm/page_alloc.c:5279

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (lock#2){-.-.}-{2:2}:
       local_lock_acquire include/linux/local_lock_internal.h:42 [inline]
       free_unref_page+0x1bf/0x690 mm/page_alloc.c:3427
       mm_free_pgd kernel/fork.c:636 [inline]
       __mmdrop+0xcb/0x3f0 kernel/fork.c:687
       mmdrop include/linux/sched/mm.h:49 [inline]
       finish_task_switch.isra.0+0x6da/0xa50 kernel/sched/core.c:4582
       context_switch kernel/sched/core.c:4686 [inline]
       __schedule+0x942/0x26f0 kernel/sched/core.c:5940
       preempt_schedule_irq+0x4e/0x90 kernel/sched/core.c:6328
       irqentry_exit+0x31/0x80 kernel/entry/common.c:427
       asm_sysvec_apic_timer_interrupt+0x12/0x20 arch/x86/include/asm/idtentry.h:638
       lock_acquire+0x1ef/0x510 kernel/locking/lockdep.c:5593
       down_write+0x92/0x150 kernel/locking/rwsem.c:1406
       i_mmap_lock_write include/linux/fs.h:494 [inline]
       __vma_adjust+0x237/0x2680 mm/mmap.c:850
       vma_adjust include/linux/mm.h:2546 [inline]
       __split_vma+0x467/0x550 mm/mmap.c:2767
       __do_munmap+0xcc1/0x11c0 mm/mmap.c:2877
       do_munmap mm/mmap.c:2922 [inline]
       munmap_vma_range mm/mmap.c:604 [inline]
       mmap_region+0x85a/0x1760 mm/mmap.c:1753
       do_mmap+0x86e/0x1180 mm/mmap.c:1584
       vm_mmap_pgoff+0x1b7/0x290 mm/util.c:519
       ksys_mmap_pgoff+0x4a8/0x620 mm/mmap.c:1635
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x44/0xae

-> #1 (&mapping->i_mmap_rwsem){+.+.}-{3:3}:
       down_write+0x92/0x150 kernel/locking/rwsem.c:1406
       i_mmap_lock_write include/linux/fs.h:494 [inline]
       dma_resv_lockdep+0x341/0x536 drivers/dma-buf/dma-resv.c:689
       do_one_initcall+0x103/0x650 init/main.c:1282
       do_initcall_level init/main.c:1355 [inline]
       do_initcalls init/main.c:1371 [inline]
       do_basic_setup init/main.c:1391 [inline]
       kernel_init_freeable+0x6b8/0x741 init/main.c:1593
       kernel_init+0x1a/0x1d0 init/main.c:1485
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

-> #0 (fs_reclaim){+.+.}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3051 [inline]
       check_prevs_add kernel/locking/lockdep.c:3174 [inline]
       validate_chain kernel/locking/lockdep.c:3789 [inline]
       __lock_acquire+0x2a07/0x54a0 kernel/locking/lockdep.c:5015
       lock_acquire kernel/locking/lockdep.c:5625 [inline]
       lock_acquire+0x1ab/0x510 kernel/locking/lockdep.c:5590
       __fs_reclaim_acquire mm/page_alloc.c:4552 [inline]
       fs_reclaim_acquire+0x117/0x160 mm/page_alloc.c:4566
       prepare_alloc_pages+0x15c/0x580 mm/page_alloc.c:5164
       __alloc_pages+0x12f/0x500 mm/page_alloc.c:5363
       alloc_pages+0x18c/0x2a0 mm/mempolicy.c:2244
       stack_depot_save+0x39d/0x4e0 lib/stackdepot.c:303
       save_stack+0x15e/0x1e0 mm/page_owner.c:120
       __set_page_owner+0x50/0x290 mm/page_owner.c:181
       prep_new_page mm/page_alloc.c:2433 [inline]
       __alloc_pages_bulk+0x8b9/0x1870 mm/page_alloc.c:5301
       alloc_pages_bulk_array_node include/linux/gfp.h:557 [inline]
       vm_area_alloc_pages mm/vmalloc.c:2793 [inline]
       __vmalloc_area_node mm/vmalloc.c:2863 [inline]
       __vmalloc_node_range+0x39d/0x960 mm/vmalloc.c:2966
       __vmalloc_node mm/vmalloc.c:3015 [inline]
       __vmalloc+0x69/0x80 mm/vmalloc.c:3029
       pcpu_mem_zalloc mm/percpu.c:517 [inline]
       pcpu_mem_zalloc+0x51/0xa0 mm/percpu.c:509
       pcpu_alloc_chunk mm/percpu.c:1465 [inline]
       pcpu_create_chunk+0x18a/0x720 mm/percpu-vm.c:338
       pcpu_balance_populated mm/percpu.c:2114 [inline]
       pcpu_balance_workfn+0xab4/0xe10 mm/percpu.c:2252
       process_one_work+0x98d/0x1630 kernel/workqueue.c:2276
       worker_thread+0x658/0x11f0 kernel/workqueue.c:2422
       kthread+0x3e5/0x4d0 kernel/kthread.c:319
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

other info that might help us debug this:

Chain exists of:
  fs_reclaim --> &mapping->i_mmap_rwsem --> lock#2

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(lock#2);
                               lock(&mapping->i_mmap_rwsem);
                               lock(lock#2);
  lock(fs_reclaim);

 *** DEADLOCK ***

4 locks held by kworker/1:8/9849:
 #0: ffff888010867d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010867d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:620 [inline]
 #0: ffff888010867d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010867d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:617 [inline]
 #0: ffff888010867d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:644 [inline]
 #0: ffff888010867d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x871/0x1630 kernel/workqueue.c:2247
 #1: ffffc9000a83fdb0 (pcpu_balance_work){+.+.}-{0:0}, at: process_one_work+0x8a5/0x1630 kernel/workqueue.c:2251
 #2: ffffffff8ba784e8 (pcpu_alloc_mutex){+.+.}-{3:3}, at: pcpu_balance_workfn+0x21/0xe10 mm/percpu.c:2247
 #3: ffff8880b9d4d660 (lock#2){-.-.}-{2:2}, at: __alloc_pages_bulk+0x4ad/0x1870 mm/page_alloc.c:5279

stack backtrace:
CPU: 1 PID: 9849 Comm: kworker/1:8 Not tainted 5.14.0-rc1-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: events pcpu_balance_workfn
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:105
 check_noncircular+0x25f/0x2e0 kernel/locking/lockdep.c:2131
 check_prev_add kernel/locking/lockdep.c:3051 [inline]
 check_prevs_add kernel/locking/lockdep.c:3174 [inline]
 validate_chain kernel/locking/lockdep.c:3789 [inline]
 __lock_acquire+0x2a07/0x54a0 kernel/locking/lockdep.c:5015
 lock_acquire kernel/locking/lockdep.c:5625 [inline]
 lock_acquire+0x1ab/0x510 kernel/locking/lockdep.c:5590
 __fs_reclaim_acquire mm/page_alloc.c:4552 [inline]
 fs_reclaim_acquire+0x117/0x160 mm/page_alloc.c:4566
 prepare_alloc_pages+0x15c/0x580 mm/page_alloc.c:5164
 __alloc_pages+0x12f/0x500 mm/page_alloc.c:5363
 alloc_pages+0x18c/0x2a0 mm/mempolicy.c:2244
 stack_depot_save+0x39d/0x4e0 lib/stackdepot.c:303
 save_stack+0x15e/0x1e0 mm/page_owner.c:120
 __set_page_owner+0x50/0x290 mm/page_owner.c:181
 prep_new_page mm/page_alloc.c:2433 [inline]
 __alloc_pages_bulk+0x8b9/0x1870 mm/page_alloc.c:5301
 alloc_pages_bulk_array_node include/linux/gfp.h:557 [inline]
 vm_area_alloc_pages mm/vmalloc.c:2793 [inline]
 __vmalloc_area_node mm/vmalloc.c:2863 [inline]
 __vmalloc_node_range+0x39d/0x960 mm/vmalloc.c:2966
 __vmalloc_node mm/vmalloc.c:3015 [inline]
 __vmalloc+0x69/0x80 mm/vmalloc.c:3029
 pcpu_mem_zalloc mm/percpu.c:517 [inline]
 pcpu_mem_zalloc+0x51/0xa0 mm/percpu.c:509
 pcpu_alloc_chunk mm/percpu.c:1465 [inline]
 pcpu_create_chunk+0x18a/0x720 mm/percpu-vm.c:338
 pcpu_balance_populated mm/percpu.c:2114 [inline]
 pcpu_balance_workfn+0xab4/0xe10 mm/percpu.c:2252
 process_one_work+0x98d/0x1630 kernel/workqueue.c:2276
 worker_thread+0x658/0x11f0 kernel/workqueue.c:2422
 kthread+0x3e5/0x4d0 kernel/kthread.c:319
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
BUG: sleeping function called from invalid context at mm/page_alloc.c:5167
in_atomic(): 0, irqs_disabled(): 1, non_block: 0, pid: 9849, name: kworker/1:8
INFO: lockdep is turned off.
irq event stamp: 566882
hardirqs last enabled at (566881): [] __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:160 [inline]
hardirqs last enabled at (566881): [] _raw_spin_unlock_irqrestore+0x50/0x70 kernel/locking/spinlock.c:191
hardirqs last disabled at (566882): [] __alloc_pages_bulk+0x1017/0x1870 mm/page_alloc.c:5279
softirqs last enabled at (566778): [] do_softirq.part.0+0xde/0x130 kernel/softirq.c:459
softirqs last disabled at (566765): [] do_softirq.part.0+0xde/0x130 kernel/softirq.c:459
CPU: 1 PID: 9849 Comm: kworker/1:8 Not tainted 5.14.0-rc1-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: events pcpu_balance_workfn
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:105
 ___might_sleep.cold+0x1f1/0x237 kernel/sched/core.c:9154
 prepare_alloc_pages+0x3da/0x580 mm/page_alloc.c:5167
 __alloc_pages+0x12f/0x500 mm/page_alloc.c:5363
 alloc_pages+0x18c/0x2a0 mm/mempolicy.c:2244
 stack_depot_save+0x39d/0x4e0 lib/stackdepot.c:303
 save_stack+0x15e/0x1e0 mm/page_owner.c:120
 __set_page_owner+0x50/0x290 mm/page_owner.c:181
 prep_new_page mm/page_alloc.c:2433 [inline]
 __alloc_pages_bulk+0x8b9/0x1870 mm/page_alloc.c:5301
 alloc_pages_bulk_array_node include/linux/gfp.h:557 [inline]
 vm_area_alloc_pages mm/vmalloc.c:2793 [inline]
 __vmalloc_area_node mm/vmalloc.c:2863 [inline]
 __vmalloc_node_range+0x39d/0x960 mm/vmalloc.c:2966
 __vmalloc_node mm/vmalloc.c:3015 [inline]
 __vmalloc+0x69/0x80 mm/vmalloc.c:3029
 pcpu_mem_zalloc mm/percpu.c:517 [inline]
 pcpu_mem_zalloc+0x51/0xa0 mm/percpu.c:509
 pcpu_alloc_chunk mm/percpu.c:1465 [inline]
 pcpu_create_chunk+0x18a/0x720 mm/percpu-vm.c:338
 pcpu_balance_populated mm/percpu.c:2114 [inline]
 pcpu_balance_workfn+0xab4/0xe10 mm/percpu.c:2252
 process_one_work+0x98d/0x1630 kernel/workqueue.c:2276
 worker_thread+0x658/0x11f0 kernel/workqueue.c:2422
 kthread+0x3e5/0x4d0 kernel/kthread.c:319
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
IPv6: ADDRCONF(NETDEV_CHANGE): wlan1: link becomes ready
IPv6: ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready
IPv6: ADDRCONF(NETDEV_CHANGE): wlan1: link becomes ready