======================================================
WARNING: possible circular locking dependency detected
5.14.0-rc1-syzkaller #0 Not tainted
------------------------------------------------------
kworker/u4:2/47 is trying to acquire lock:
ffffffff87750940 (fs_reclaim){+.+.}-{0:0}, at: fs_reclaim_acquire+0xf7/0x160 mm/page_alloc.c:4574

but task is already holding lock:
ffff8881f694bee0 (lock#2){..-.}-{2:2}, at: __alloc_pages_bulk+0x406/0x1600 mm/page_alloc.c:5279

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #3 (lock#2){..-.}-{2:2}:
       local_lock_acquire include/linux/local_lock_internal.h:42 [inline]
       rmqueue_pcplist mm/page_alloc.c:3663 [inline]
       rmqueue mm/page_alloc.c:3701 [inline]
       get_page_from_freelist+0xc9b/0x28b0 mm/page_alloc.c:4163
       __alloc_pages+0x1b2/0x4e0 mm/page_alloc.c:5374
       alloc_pages+0x18c/0x2a0 mm/mempolicy.c:2244
       alloc_slab_page mm/slub.c:1713 [inline]
       allocate_slab+0x32b/0x4c0 mm/slub.c:1853
       new_slab mm/slub.c:1916 [inline]
       new_slab_objects mm/slub.c:2662 [inline]
       ___slab_alloc+0x4ba/0x820 mm/slub.c:2825
       __slab_alloc+0x68/0x80 mm/slub.c:2865
       slab_alloc_node mm/slub.c:2947 [inline]
       slab_alloc mm/slub.c:2989 [inline]
       kmem_cache_alloc+0x339/0x360 mm/slub.c:2994
       anon_vma_chain_alloc mm/rmap.c:136 [inline]
       anon_vma_clone+0xe0/0x5f0 mm/rmap.c:282
       anon_vma_fork+0x82/0x630 mm/rmap.c:345
       dup_mmap kernel/fork.c:554 [inline]
       dup_mm+0x8a6/0x11e0 kernel/fork.c:1379
       copy_mm kernel/fork.c:1431 [inline]
       copy_process+0x5ec0/0x7040 kernel/fork.c:2119
       kernel_clone+0xe7/0xa70 kernel/fork.c:2509
       __do_sys_clone+0xc8/0x110 kernel/fork.c:2626
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x44/0xae

-> #2 (&anon_vma->rwsem){++++}-{3:3}:
       down_write+0x92/0x150 kernel/locking/rwsem.c:1406
       anon_vma_lock_write include/linux/rmap.h:116 [inline]
       __vma_adjust+0x2f5/0x26b0 mm/mmap.c:868
       vma_adjust include/linux/mm.h:2545 [inline]
       __split_vma+0x2b3/0x550 mm/mmap.c:2770
       split_vma+0x95/0xd0 mm/mmap.c:2799
       mprotect_fixup+0x6eb/0x8e0 mm/mprotect.c:483
       do_mprotect_pkey+0x558/0x9a0 mm/mprotect.c:636
       __do_sys_mprotect mm/mprotect.c:662 [inline]
       __se_sys_mprotect mm/mprotect.c:659 [inline]
       __x64_sys_mprotect+0x74/0xb0 mm/mprotect.c:659
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x44/0xae

-> #1 (&mapping->i_mmap_rwsem){+.+.}-{3:3}:
       down_write+0x92/0x150 kernel/locking/rwsem.c:1406
       i_mmap_lock_write include/linux/fs.h:494 [inline]
       dma_resv_lockdep+0x348/0x540 drivers/dma-buf/dma-resv.c:689
       do_one_initcall+0x103/0x5d0 init/main.c:1282
       do_initcall_level init/main.c:1355 [inline]
       do_initcalls init/main.c:1371 [inline]
       do_basic_setup init/main.c:1391 [inline]
       kernel_init_freeable+0x6ae/0x737 init/main.c:1593
       kernel_init+0x1a/0x1d0 init/main.c:1485
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

-> #0 (fs_reclaim){+.+.}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3051 [inline]
       check_prevs_add kernel/locking/lockdep.c:3174 [inline]
       validate_chain kernel/locking/lockdep.c:3789 [inline]
       __lock_acquire+0x2a07/0x54a0 kernel/locking/lockdep.c:5015
       lock_acquire kernel/locking/lockdep.c:5625 [inline]
       lock_acquire+0x19d/0x4d0 kernel/locking/lockdep.c:5590
       __fs_reclaim_acquire mm/page_alloc.c:4552 [inline]
       fs_reclaim_acquire+0x117/0x160 mm/page_alloc.c:4566
       prepare_alloc_pages+0x155/0x4f0 mm/page_alloc.c:5164
       __alloc_pages+0x12f/0x4e0 mm/page_alloc.c:5363
       alloc_pages+0x18c/0x2a0 mm/mempolicy.c:2244
       stack_depot_save+0x39d/0x4e0 lib/stackdepot.c:303
       save_stack+0x102/0x1d0 mm/page_owner.c:120
       __set_page_owner+0x50/0x290 mm/page_owner.c:181
       prep_new_page mm/page_alloc.c:2433 [inline]
       __alloc_pages_bulk+0x7ed/0x1600 mm/page_alloc.c:5301
       alloc_pages_bulk_array_node include/linux/gfp.h:557 [inline]
       vm_area_alloc_pages mm/vmalloc.c:2793 [inline]
       __vmalloc_area_node mm/vmalloc.c:2863 [inline]
       __vmalloc_node_range+0x39d/0x960 mm/vmalloc.c:2966
       alloc_thread_stack_node kernel/fork.c:245 [inline]
       dup_task_struct kernel/fork.c:875 [inline]
       copy_process+0x8db/0x7040 kernel/fork.c:1952
       kernel_clone+0xe7/0xa70 kernel/fork.c:2509
       kernel_thread+0xb5/0xf0 kernel/fork.c:2561
       call_usermodehelper_exec_sync kernel/umh.c:135 [inline]
       call_usermodehelper_exec_work+0x69/0x180 kernel/umh.c:166
       process_one_work+0x98d/0x15b0 kernel/workqueue.c:2276
       worker_thread+0x658/0x11f0 kernel/workqueue.c:2422
       kthread+0x3c0/0x4a0 kernel/kthread.c:319
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

other info that might help us debug this:

Chain exists of:
  fs_reclaim --> &anon_vma->rwsem --> lock#2

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(lock#2);
                               lock(&anon_vma->rwsem);
                               lock(lock#2);
  lock(fs_reclaim);

 *** DEADLOCK ***

3 locks held by kworker/u4:2/47:
 #0: ffff888100069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888100069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:620 [inline]
 #0: ffff888100069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888100069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:617 [inline]
 #0: ffff888100069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:644 [inline]
 #0: ffff888100069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x871/0x15b0 kernel/workqueue.c:2247
 #1: ffffc900002b7db0 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work+0x8a5/0x15b0 kernel/workqueue.c:2251
 #2: ffff8881f694bee0 (lock#2){..-.}-{2:2}, at: __alloc_pages_bulk+0x406/0x1600 mm/page_alloc.c:5279

stack backtrace:
CPU: 1 PID: 47 Comm: kworker/u4:2 Not tainted 5.14.0-rc1-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: events_unbound call_usermodehelper_exec_work
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:105
 check_noncircular+0x25f/0x2e0 kernel/locking/lockdep.c:2131
 check_prev_add kernel/locking/lockdep.c:3051 [inline]
 check_prevs_add kernel/locking/lockdep.c:3174 [inline]
 validate_chain kernel/locking/lockdep.c:3789 [inline]
 __lock_acquire+0x2a07/0x54a0 kernel/locking/lockdep.c:5015
 lock_acquire kernel/locking/lockdep.c:5625 [inline]
 lock_acquire+0x19d/0x4d0 kernel/locking/lockdep.c:5590
 __fs_reclaim_acquire mm/page_alloc.c:4552 [inline]
 fs_reclaim_acquire+0x117/0x160 mm/page_alloc.c:4566
 prepare_alloc_pages+0x155/0x4f0 mm/page_alloc.c:5164
 __alloc_pages+0x12f/0x4e0 mm/page_alloc.c:5363
 alloc_pages+0x18c/0x2a0 mm/mempolicy.c:2244
 stack_depot_save+0x39d/0x4e0 lib/stackdepot.c:303
 save_stack+0x102/0x1d0 mm/page_owner.c:120
 __set_page_owner+0x50/0x290 mm/page_owner.c:181
 prep_new_page mm/page_alloc.c:2433 [inline]
 __alloc_pages_bulk+0x7ed/0x1600 mm/page_alloc.c:5301
 alloc_pages_bulk_array_node include/linux/gfp.h:557 [inline]
 vm_area_alloc_pages mm/vmalloc.c:2793 [inline]
 __vmalloc_area_node mm/vmalloc.c:2863 [inline]
 __vmalloc_node_range+0x39d/0x960 mm/vmalloc.c:2966
 alloc_thread_stack_node kernel/fork.c:245 [inline]
 dup_task_struct kernel/fork.c:875 [inline]
 copy_process+0x8db/0x7040 kernel/fork.c:1952
 kernel_clone+0xe7/0xa70 kernel/fork.c:2509
 kernel_thread+0xb5/0xf0 kernel/fork.c:2561
 call_usermodehelper_exec_sync kernel/umh.c:135 [inline]
 call_usermodehelper_exec_work+0x69/0x180 kernel/umh.c:166
 process_one_work+0x98d/0x15b0 kernel/workqueue.c:2276
 worker_thread+0x658/0x11f0 kernel/workqueue.c:2422
 kthread+0x3c0/0x4a0 kernel/kthread.c:319
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
BUG: sleeping function called from invalid context at mm/page_alloc.c:5167
in_atomic(): 0, irqs_disabled(): 1, non_block: 0, pid: 47, name: kworker/u4:2
INFO: lockdep is turned off.
irq event stamp: 106572
hardirqs last enabled at (106571): [] __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:160 [inline]
hardirqs last enabled at (106571): [] _raw_spin_unlock_irqrestore+0x42/0x50 kernel/locking/spinlock.c:191
hardirqs last disabled at (106572): [] __alloc_pages_bulk+0xebb/0x1600 mm/page_alloc.c:5279
softirqs last enabled at (105528): [] wb_workfn+0xb83/0x10b0 fs/fs-writeback.c:2251
softirqs last disabled at (105524): [] spin_lock_bh include/linux/spinlock.h:359 [inline]
softirqs last disabled at (105524): [] wb_wakeup_delayed+0x62/0xf0 mm/backing-dev.c:268
CPU: 1 PID: 47 Comm: kworker/u4:2 Not tainted 5.14.0-rc1-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: events_unbound call_usermodehelper_exec_work
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:105
 ___might_sleep.cold+0x141/0x16f kernel/sched/core.c:9154
 prepare_alloc_pages+0x32d/0x4f0 mm/page_alloc.c:5167
 __alloc_pages+0x12f/0x4e0 mm/page_alloc.c:5363
 alloc_pages+0x18c/0x2a0 mm/mempolicy.c:2244
 stack_depot_save+0x39d/0x4e0 lib/stackdepot.c:303
 save_stack+0x102/0x1d0 mm/page_owner.c:120
 __set_page_owner+0x50/0x290 mm/page_owner.c:181
 prep_new_page mm/page_alloc.c:2433 [inline]
 __alloc_pages_bulk+0x7ed/0x1600 mm/page_alloc.c:5301
 alloc_pages_bulk_array_node include/linux/gfp.h:557 [inline]
 vm_area_alloc_pages mm/vmalloc.c:2793 [inline]
 __vmalloc_area_node mm/vmalloc.c:2863 [inline]
 __vmalloc_node_range+0x39d/0x960 mm/vmalloc.c:2966
 alloc_thread_stack_node kernel/fork.c:245 [inline]
 dup_task_struct kernel/fork.c:875 [inline]
 copy_process+0x8db/0x7040 kernel/fork.c:1952
 kernel_clone+0xe7/0xa70 kernel/fork.c:2509
 kernel_thread+0xb5/0xf0 kernel/fork.c:2561
 call_usermodehelper_exec_sync kernel/umh.c:135 [inline]
 call_usermodehelper_exec_work+0x69/0x180 kernel/umh.c:166
 process_one_work+0x98d/0x15b0 kernel/workqueue.c:2276
 worker_thread+0x658/0x11f0 kernel/workqueue.c:2422
 kthread+0x3c0/0x4a0 kernel/kthread.c:319
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295