======================================================
WARNING: possible circular locking dependency detected
5.8.0-rc3-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.0/4198 is trying to acquire lock:
ffff8881df537390 (&sb->s_type->i_mutex_key#16){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:799 [inline]
ffff8881df537390 (&sb->s_type->i_mutex_key#16){+.+.}-{3:3}, at: shmem_fallocate+0x153/0xd90 mm/shmem.c:2707

but task is already holding lock:
ffffffff89c6c340 (fs_reclaim){+.+.}-{0:0}, at: fs_reclaim_release mm/page_alloc.c:4202 [inline]
ffffffff89c6c340 (fs_reclaim){+.+.}-{0:0}, at: fs_reclaim_release mm/page_alloc.c:4198 [inline]
ffffffff89c6c340 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4227 [inline]
ffffffff89c6c340 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4244 [inline]
ffffffff89c6c340 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x1554/0x2780 mm/page_alloc.c:4650

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #1 (fs_reclaim){+.+.}-{0:0}:
       __fs_reclaim_acquire mm/page_alloc.c:4183 [inline]
       fs_reclaim_acquire mm/page_alloc.c:4194 [inline]
       prepare_alloc_pages mm/page_alloc.c:4780 [inline]
       __alloc_pages_nodemask+0x3d1/0x930 mm/page_alloc.c:4832
       alloc_pages_vma+0xdd/0x720 mm/mempolicy.c:2255
       shmem_alloc_page+0x11f/0x1f0 mm/shmem.c:1502
       shmem_alloc_and_acct_page+0x161/0x8a0 mm/shmem.c:1527
       shmem_getpage_gfp+0x511/0x2450 mm/shmem.c:1823
       shmem_getpage mm/shmem.c:153 [inline]
       shmem_write_begin+0xf9/0x1d0 mm/shmem.c:2459
       generic_perform_write+0x20a/0x4f0 mm/filemap.c:3299
       __generic_file_write_iter+0x24b/0x610 mm/filemap.c:3428
       generic_file_write_iter+0x3a6/0x5c0 mm/filemap.c:3460
       call_write_iter include/linux/fs.h:1907 [inline]
       new_sync_write+0x422/0x650 fs/read_write.c:484
       __vfs_write+0xc9/0x100 fs/read_write.c:497
       vfs_write+0x268/0x5d0 fs/read_write.c:559
       ksys_write+0x12d/0x250 fs/read_write.c:612
       do_syscall_64+0x60/0xe0 arch/x86/entry/common.c:359
       entry_SYSCALL_64_after_hwframe+0x44/0xa9

-> #0 (&sb->s_type->i_mutex_key#16){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:2496 [inline]
       check_prevs_add kernel/locking/lockdep.c:2601 [inline]
       validate_chain kernel/locking/lockdep.c:3218 [inline]
       __lock_acquire+0x2acb/0x56e0 kernel/locking/lockdep.c:4380
       lock_acquire+0x1f1/0xad0 kernel/locking/lockdep.c:4959
       down_write+0x8d/0x150 kernel/locking/rwsem.c:1531
       inode_lock include/linux/fs.h:799 [inline]
       shmem_fallocate+0x153/0xd90 mm/shmem.c:2707
       ashmem_shrink_scan.part.0+0x2e9/0x490 drivers/staging/android/ashmem.c:490
       ashmem_shrink_scan+0x6c/0xa0 drivers/staging/android/ashmem.c:473
       do_shrink_slab+0x3c6/0xab0 mm/vmscan.c:518
       shrink_slab+0x16f/0x5c0 mm/vmscan.c:679
       shrink_node_memcgs mm/vmscan.c:2658 [inline]
       shrink_node+0x519/0x1b60 mm/vmscan.c:2770
       shrink_zones mm/vmscan.c:2973 [inline]
       do_try_to_free_pages+0x38b/0x1340 mm/vmscan.c:3026
       try_to_free_pages+0x29a/0x8b0 mm/vmscan.c:3265
       __perform_reclaim mm/page_alloc.c:4223 [inline]
       __alloc_pages_direct_reclaim mm/page_alloc.c:4244 [inline]
       __alloc_pages_slowpath.constprop.0+0x949/0x2780 mm/page_alloc.c:4650
       __alloc_pages_nodemask+0x68f/0x930 mm/page_alloc.c:4863
       alloc_pages_current+0x187/0x280 mm/mempolicy.c:2292
       alloc_pages include/linux/gfp.h:545 [inline]
       alloc_mmu_pages+0x7f/0x170 arch/x86/kvm/mmu/mmu.c:5671
       kvm_mmu_create+0x3cb/0x560 arch/x86/kvm/mmu/mmu.c:5704
       kvm_arch_vcpu_create+0x16d/0xb70 arch/x86/kvm/x86.c:9428
       kvm_vm_ioctl_create_vcpu arch/x86/kvm/../../../virt/kvm/kvm_main.c:3060 [inline]
       kvm_vm_ioctl+0x1547/0x23c0 arch/x86/kvm/../../../virt/kvm/kvm_main.c:3620
       vfs_ioctl fs/ioctl.c:48 [inline]
       ksys_ioctl+0x11a/0x180 fs/ioctl.c:753
       __do_sys_ioctl fs/ioctl.c:762 [inline]
       __se_sys_ioctl fs/ioctl.c:760 [inline]
       __x64_sys_ioctl+0x6f/0xb0 fs/ioctl.c:760
       do_syscall_64+0x60/0xe0 arch/x86/entry/common.c:359
       entry_SYSCALL_64_after_hwframe+0x44/0xa9

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&sb->s_type->i_mutex_key#16);
                               lock(fs_reclaim);
  lock(&sb->s_type->i_mutex_key#16);

 *** DEADLOCK ***

2 locks held by syz-executor.0/4198:
 #0: ffffffff89c6c340 (fs_reclaim){+.+.}-{0:0}, at: fs_reclaim_release mm/page_alloc.c:4202 [inline]
 #0: ffffffff89c6c340 (fs_reclaim){+.+.}-{0:0}, at: fs_reclaim_release mm/page_alloc.c:4198 [inline]
 #0: ffffffff89c6c340 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:4227 [inline]
 #0: ffffffff89c6c340 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:4244 [inline]
 #0: ffffffff89c6c340 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath.constprop.0+0x1554/0x2780 mm/page_alloc.c:4650
 #1: ffffffff89c46b70 (shrinker_rwsem){++++}-{3:3}, at: shrink_slab+0xc7/0x5c0 mm/vmscan.c:669

stack backtrace:
CPU: 1 PID: 4198 Comm: syz-executor.0 Not tainted 5.8.0-rc3-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x18f/0x20d lib/dump_stack.c:118
 check_noncircular+0x324/0x3e0 kernel/locking/lockdep.c:1827
 check_prev_add kernel/locking/lockdep.c:2496 [inline]
 check_prevs_add kernel/locking/lockdep.c:2601 [inline]
 validate_chain kernel/locking/lockdep.c:3218 [inline]
 __lock_acquire+0x2acb/0x56e0 kernel/locking/lockdep.c:4380
 lock_acquire+0x1f1/0xad0 kernel/locking/lockdep.c:4959
 down_write+0x8d/0x150 kernel/locking/rwsem.c:1531
 inode_lock include/linux/fs.h:799 [inline]
 shmem_fallocate+0x153/0xd90 mm/shmem.c:2707
 ashmem_shrink_scan.part.0+0x2e9/0x490 drivers/staging/android/ashmem.c:490
 ashmem_shrink_scan+0x6c/0xa0 drivers/staging/android/ashmem.c:473
 do_shrink_slab+0x3c6/0xab0 mm/vmscan.c:518
 shrink_slab+0x16f/0x5c0 mm/vmscan.c:679
 shrink_node_memcgs mm/vmscan.c:2658 [inline]
 shrink_node+0x519/0x1b60 mm/vmscan.c:2770
 shrink_zones mm/vmscan.c:2973 [inline]
 do_try_to_free_pages+0x38b/0x1340 mm/vmscan.c:3026
 try_to_free_pages+0x29a/0x8b0 mm/vmscan.c:3265
 __perform_reclaim mm/page_alloc.c:4223 [inline]
 __alloc_pages_direct_reclaim mm/page_alloc.c:4244 [inline]
 __alloc_pages_slowpath.constprop.0+0x949/0x2780 mm/page_alloc.c:4650
 __alloc_pages_nodemask+0x68f/0x930 mm/page_alloc.c:4863
 alloc_pages_current+0x187/0x280 mm/mempolicy.c:2292
 alloc_pages include/linux/gfp.h:545 [inline]
 alloc_mmu_pages+0x7f/0x170 arch/x86/kvm/mmu/mmu.c:5671
 kvm_mmu_create+0x3cb/0x560 arch/x86/kvm/mmu/mmu.c:5704
 kvm_arch_vcpu_create+0x16d/0xb70 arch/x86/kvm/x86.c:9428
 kvm_vm_ioctl_create_vcpu arch/x86/kvm/../../../virt/kvm/kvm_main.c:3060 [inline]
 kvm_vm_ioctl+0x1547/0x23c0 arch/x86/kvm/../../../virt/kvm/kvm_main.c:3620
 vfs_ioctl fs/ioctl.c:48 [inline]
 ksys_ioctl+0x11a/0x180 fs/ioctl.c:753
 __do_sys_ioctl fs/ioctl.c:762 [inline]
 __se_sys_ioctl fs/ioctl.c:760 [inline]
 __x64_sys_ioctl+0x6f/0xb0 fs/ioctl.c:760
 do_syscall_64+0x60/0xe0 arch/x86/entry/common.c:359
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x45cb29
Code: Bad RIP value.
RSP: 002b:00007fe3e99b3c78 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00000000004e7ee0 RCX: 000000000045cb29
RDX: 0000000000000000 RSI: 000000000000ae41 RDI: 0000000000000005
RBP: 000000000078bf00 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000ffffffff
R13: 00000000000003a2 R14: 00000000004c64a3 R15: 00007fe3e99b46d4