WARNING: possible circular locking dependency detected
5.14.0-rc1-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.0/17773 is trying to acquire lock:
ffffffff8ba97a20 (fs_reclaim){+.+.}-{0:0}, at: fs_reclaim_acquire+0xf7/0x160 mm/page_alloc.c:4574

but task is already holding lock:
ffff88802cb4d660 (lock#2){-.-.}-{2:2}, at: __alloc_pages_bulk+0x4ad/0x1870 mm/page_alloc.c:5279

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (lock#2){-.-.}-{2:2}:
       local_lock_acquire include/linux/local_lock_internal.h:42 [inline]
       rmqueue_pcplist mm/page_alloc.c:3663 [inline]
       rmqueue mm/page_alloc.c:3701 [inline]
       get_page_from_freelist+0x4aa/0x2f80 mm/page_alloc.c:4163
       __alloc_pages+0x1b2/0x500 mm/page_alloc.c:5374
       __alloc_pages_node include/linux/gfp.h:570 [inline]
       kmem_getpages mm/slab.c:1377 [inline]
       cache_grow_begin+0x75/0x460 mm/slab.c:2593
       cache_alloc_refill+0x27f/0x380 mm/slab.c:2965
       ____cache_alloc mm/slab.c:3048 [inline]
       ____cache_alloc mm/slab.c:3031 [inline]
       __do_cache_alloc mm/slab.c:3275 [inline]
       slab_alloc mm/slab.c:3316 [inline]
       kmem_cache_alloc+0x454/0x540 mm/slab.c:3507
       kmem_cache_zalloc include/linux/slab.h:711 [inline]
       fill_pool+0x264/0x5c0 lib/debugobjects.c:171
       __debug_object_init+0x7a/0xd10 lib/debugobjects.c:560
       debug_object_init lib/debugobjects.c:615 [inline]
       debug_object_activate+0x32c/0x3e0 lib/debugobjects.c:701
       debug_rcu_head_queue kernel/rcu/rcu.h:176 [inline]
       __call_rcu kernel/rcu/tree.c:3013 [inline]
       call_rcu+0x2c/0x750 kernel/rcu/tree.c:3109
       destroy_inode+0x129/0x1b0 fs/inode.c:288
       iput_final fs/inode.c:1660 [inline]
       iput.part.0+0x539/0x850 fs/inode.c:1686
       iput+0x58/0x70 fs/inode.c:1676
       dentry_unlink_inode+0x2b1/0x3d0 fs/dcache.c:376
       __dentry_kill+0x3c0/0x640 fs/dcache.c:582
       shrink_dentry_list+0x128/0x490 fs/dcache.c:1176
       prune_dcache_sb+0xe7/0x140 fs/dcache.c:1257
       super_cache_scan+0x336/0x590 fs/super.c:105
       do_shrink_slab+0x42d/0xbd0 mm/vmscan.c:709
       shrink_slab+0x17c/0x6e0 mm/vmscan.c:869
       shrink_node_memcgs mm/vmscan.c:2868 [inline]
       shrink_node+0x8d1/0x1df0 mm/vmscan.c:2983
       kswapd_shrink_node mm/vmscan.c:3726 [inline]
       balance_pgdat+0x7ce/0x13b0 mm/vmscan.c:3917
       kswapd+0x5b6/0xdb0 mm/vmscan.c:4176
       kthread+0x3e5/0x4d0 kernel/kthread.c:319
       ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

-> #0 (fs_reclaim){+.+.}-{0:0}:
       check_prev_add kernel/locking/lockdep.c:3051 [inline]
       check_prevs_add kernel/locking/lockdep.c:3174 [inline]
       validate_chain kernel/locking/lockdep.c:3789 [inline]
       __lock_acquire+0x2a07/0x54a0 kernel/locking/lockdep.c:5015
       lock_acquire kernel/locking/lockdep.c:5625 [inline]
       lock_acquire+0x1ab/0x510 kernel/locking/lockdep.c:5590
       __fs_reclaim_acquire mm/page_alloc.c:4552 [inline]
       fs_reclaim_acquire+0x117/0x160 mm/page_alloc.c:4566
       prepare_alloc_pages+0x15c/0x580 mm/page_alloc.c:5164
       __alloc_pages+0x12f/0x500 mm/page_alloc.c:5363
       alloc_pages+0x18c/0x2a0 mm/mempolicy.c:2244
       stack_depot_save+0x39d/0x4e0 lib/stackdepot.c:303
       save_stack+0x15e/0x1e0 mm/page_owner.c:120
       __set_page_owner+0x50/0x290 mm/page_owner.c:181
       prep_new_page mm/page_alloc.c:2433 [inline]
       __alloc_pages_bulk+0x8b9/0x1870 mm/page_alloc.c:5301
       alloc_pages_bulk_array_node include/linux/gfp.h:557 [inline]
       vm_area_alloc_pages mm/vmalloc.c:2793 [inline]
       __vmalloc_area_node mm/vmalloc.c:2863 [inline]
       __vmalloc_node_range+0x39d/0x960 mm/vmalloc.c:2966
       __vmalloc_node mm/vmalloc.c:3015 [inline]
       vzalloc+0x67/0x80 mm/vmalloc.c:3085
       kvm_dev_ioctl_get_cpuid+0x12a/0x660 arch/x86/kvm/cpuid.c:1083
       kvm_arch_dev_ioctl+0x19d/0x4d0 arch/x86/kvm/x86.c:4153
       kvm_dev_ioctl+0xce/0x1770 arch/x86/kvm/../../../virt/kvm/kvm_main.c:4520
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:1069 [inline]
       __se_sys_ioctl fs/ioctl.c:1055 [inline]
       __x64_sys_ioctl+0x193/0x200 fs/ioctl.c:1055
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x44/0xae

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(lock#2);
                               lock(fs_reclaim);
                               lock(lock#2);
  lock(fs_reclaim);

 *** DEADLOCK ***

1 lock held by syz-executor.0/17773:
 #0: ffff88802cb4d660 (lock#2){-.-.}-{2:2}, at: __alloc_pages_bulk+0x4ad/0x1870 mm/page_alloc.c:5279

stack backtrace:
CPU: 1 PID: 17773 Comm: syz-executor.0 Not tainted 5.14.0-rc1-syzkaller #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.14.0-2 04/01/2014
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:105
 check_noncircular+0x25f/0x2e0 kernel/locking/lockdep.c:2131
 check_prev_add kernel/locking/lockdep.c:3051 [inline]
 check_prevs_add kernel/locking/lockdep.c:3174 [inline]
 validate_chain kernel/locking/lockdep.c:3789 [inline]
 __lock_acquire+0x2a07/0x54a0 kernel/locking/lockdep.c:5015
 lock_acquire kernel/locking/lockdep.c:5625 [inline]
 lock_acquire+0x1ab/0x510 kernel/locking/lockdep.c:5590
 __fs_reclaim_acquire mm/page_alloc.c:4552 [inline]
 fs_reclaim_acquire+0x117/0x160 mm/page_alloc.c:4566
 prepare_alloc_pages+0x15c/0x580 mm/page_alloc.c:5164
 __alloc_pages+0x12f/0x500 mm/page_alloc.c:5363
 alloc_pages+0x18c/0x2a0 mm/mempolicy.c:2244
 stack_depot_save+0x39d/0x4e0 lib/stackdepot.c:303
 save_stack+0x15e/0x1e0 mm/page_owner.c:120
 __set_page_owner+0x50/0x290 mm/page_owner.c:181
 prep_new_page mm/page_alloc.c:2433 [inline]
 __alloc_pages_bulk+0x8b9/0x1870 mm/page_alloc.c:5301
 alloc_pages_bulk_array_node include/linux/gfp.h:557 [inline]
 vm_area_alloc_pages mm/vmalloc.c:2793 [inline]
 __vmalloc_area_node mm/vmalloc.c:2863 [inline]
 __vmalloc_node_range+0x39d/0x960 mm/vmalloc.c:2966
 __vmalloc_node mm/vmalloc.c:3015 [inline]
 vzalloc+0x67/0x80 mm/vmalloc.c:3085
 kvm_dev_ioctl_get_cpuid+0x12a/0x660 arch/x86/kvm/cpuid.c:1083
 kvm_arch_dev_ioctl+0x19d/0x4d0 arch/x86/kvm/x86.c:4153
 kvm_dev_ioctl+0xce/0x1770 arch/x86/kvm/../../../virt/kvm/kvm_main.c:4520
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:1069 [inline]
 __se_sys_ioctl fs/ioctl.c:1055 [inline]
 __x64_sys_ioctl+0x193/0x200 fs/ioctl.c:1055
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x466397
Code: 3c 1c 48 f7 d8 49 39 c4 72 b8 e8 a4 48 02 00 85 c0 78 bd 48 83 c4 08 4c 89 e0 5b 41 5c c3 0f 1f 44 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 bc ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f56a9fe25f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 0000000020000000 RCX: 0000000000466397
RDX: 00007f56a9fe2d30 RSI: 00000000c008ae05 RDI: 0000000000000009
RBP: 0000000020001000 R08: 0000000000000000 R09: 00000000000000b3
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000020001800
R13: 00007f56a9fe2d30 R14: 0000000000000009 R15: 0000000000000000
BUG: sleeping function called from invalid context at mm/page_alloc.c:5167
in_atomic(): 0, irqs_disabled(): 1, non_block: 0, pid: 17773, name: syz-executor.0
INFO: lockdep is turned off.
irq event stamp: 3478
hardirqs last  enabled at (3477): [] slab_alloc_node mm/slab.c:3256 [inline]
hardirqs last  enabled at (3477): [] kmem_cache_alloc_node_trace+0x412/0x5d0 mm/slab.c:3617
hardirqs last disabled at (3478): [] __alloc_pages_bulk+0x1017/0x1870 mm/page_alloc.c:5279
softirqs last  enabled at (3406): [] invoke_softirq kernel/softirq.c:432 [inline]
softirqs last  enabled at (3406): [] __irq_exit_rcu+0x16e/0x1c0 kernel/softirq.c:636
softirqs last disabled at (3395): [] invoke_softirq kernel/softirq.c:432 [inline]
softirqs last disabled at (3395): [] __irq_exit_rcu+0x16e/0x1c0 kernel/softirq.c:636
CPU: 1 PID: 17773 Comm: syz-executor.0 Not tainted 5.14.0-rc1-syzkaller #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.14.0-2 04/01/2014
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:105
 ___might_sleep.cold+0x1f1/0x237 kernel/sched/core.c:9154
 prepare_alloc_pages+0x3da/0x580 mm/page_alloc.c:5167
 __alloc_pages+0x12f/0x500 mm/page_alloc.c:5363
 alloc_pages+0x18c/0x2a0 mm/mempolicy.c:2244
 stack_depot_save+0x39d/0x4e0 lib/stackdepot.c:303
 save_stack+0x15e/0x1e0 mm/page_owner.c:120
 __set_page_owner+0x50/0x290 mm/page_owner.c:181
 prep_new_page mm/page_alloc.c:2433 [inline]
 __alloc_pages_bulk+0x8b9/0x1870 mm/page_alloc.c:5301
 alloc_pages_bulk_array_node include/linux/gfp.h:557 [inline]
 vm_area_alloc_pages mm/vmalloc.c:2793 [inline]
 __vmalloc_area_node mm/vmalloc.c:2863 [inline]
 __vmalloc_node_range+0x39d/0x960 mm/vmalloc.c:2966
 __vmalloc_node mm/vmalloc.c:3015 [inline]
 vzalloc+0x67/0x80 mm/vmalloc.c:3085
 kvm_dev_ioctl_get_cpuid+0x12a/0x660 arch/x86/kvm/cpuid.c:1083
 kvm_arch_dev_ioctl+0x19d/0x4d0 arch/x86/kvm/x86.c:4153
 kvm_dev_ioctl+0xce/0x1770 arch/x86/kvm/../../../virt/kvm/kvm_main.c:4520
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:1069 [inline]
 __se_sys_ioctl fs/ioctl.c:1055 [inline]
 __x64_sys_ioctl+0x193/0x200 fs/ioctl.c:1055
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x466397
Code: 3c 1c 48 f7 d8 49 39 c4 72 b8 e8 a4 48 02 00 85 c0 78 bd 48 83 c4 08 4c 89 e0 5b 41 5c c3 0f 1f 44 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 bc ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f56a9fe25f8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 0000000020000000 RCX: 0000000000466397
RDX: 00007f56a9fe2d30 RSI: 00000000c008ae05 RDI: 0000000000000009
RBP: 0000000020001000 R08: 0000000000000000 R09: 00000000000000b3
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000020001800
R13: 00007f56a9fe2d30 R14: 0000000000000009 R15: 0000000000000000