F2FS-fs (loop5): Can't find valid F2FS filesystem in 1th superblock
F2FS-fs (loop5): Invalid Fs Meta Ino: node(0) meta(0) root(0)
F2FS-fs (loop5): Can't find valid F2FS filesystem in 2th superblock
======================================================
WARNING: possible circular locking dependency detected
kobject: 'kvm' (00000000c741e568): kobject_uevent_env
4.20.0-rc3+ #345 Not tainted
------------------------------------------------------
kworker/0:1/12 is trying to acquire lock:
00000000e360c65c (&sb->s_type->i_mutex_key#10){++++}, at: inode_lock include/linux/fs.h:757 [inline]
00000000e360c65c (&sb->s_type->i_mutex_key#10){++++}, at: __generic_file_fsync+0xb5/0x200 fs/libfs.c:981

but task is already holding lock:
kobject: 'kvm' (00000000c741e568): fill_kobj_path: path = '/devices/virtual/misc/kvm'
000000008742d2a2 ((work_completion)(&dio->complete_work)){+.+.}, at: process_one_work+0xb9a/0x1c40 kernel/workqueue.c:2128

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 ((work_completion)(&dio->complete_work)){+.+.}:
       process_one_work+0xc0a/0x1c40 kernel/workqueue.c:2129
       worker_thread+0x17f/0x1390 kernel/workqueue.c:2296
       kthread+0x35a/0x440 kernel/kthread.c:246
       ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:352

-> #1 ((wq_completion)"dio/%s"sb->s_id){+.+.}:
       flush_workqueue+0x30a/0x1e10 kernel/workqueue.c:2655
       drain_workqueue+0x2a9/0x640 kernel/workqueue.c:2820
       destroy_workqueue+0xc6/0x9c0 kernel/workqueue.c:4155
       __alloc_workqueue_key+0xe56/0x10a0 kernel/workqueue.c:4138
       sb_init_dio_done_wq+0x37/0x90 fs/direct-io.c:623
       do_blockdev_direct_IO+0x1325/0x9db0 fs/direct-io.c:1283
       __blockdev_direct_IO+0x9d/0xc6 fs/direct-io.c:1417
       ext4_direct_IO_write fs/ext4/inode.c:3774 [inline]
       ext4_direct_IO+0xbe6/0x2230 fs/ext4/inode.c:3901
       generic_file_direct_write+0x275/0x4b0 mm/filemap.c:3043
       __generic_file_write_iter+0x2ff/0x630 mm/filemap.c:3222
       ext4_file_write_iter+0x390/0x1420 fs/ext4/file.c:266
       call_write_iter include/linux/fs.h:1857 [inline]
       aio_write+0x3b1/0x610 fs/aio.c:1561
       io_submit_one+0xaa1/0xf80 fs/aio.c:1835
       __do_sys_io_submit fs/aio.c:1916 [inline]
       __se_sys_io_submit fs/aio.c:1887 [inline]
       __x64_sys_io_submit+0x1b7/0x580 fs/aio.c:1887
       do_syscall_64+0x1b9/0x820 arch/x86/entry/common.c:290
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #0 (&sb->s_type->i_mutex_key#10){++++}:
       lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
       down_write+0x8a/0x130 kernel/locking/rwsem.c:70
       inode_lock include/linux/fs.h:757 [inline]
       __generic_file_fsync+0xb5/0x200 fs/libfs.c:981
       ext4_sync_file+0xa4f/0x1510 fs/ext4/fsync.c:120
       vfs_fsync_range+0x140/0x220 fs/sync.c:197
       generic_write_sync include/linux/fs.h:2781 [inline]
       dio_complete+0x75c/0x9e0 fs/direct-io.c:329
       dio_aio_complete_work+0x20/0x30 fs/direct-io.c:341
       process_one_work+0xc90/0x1c40 kernel/workqueue.c:2153
       worker_thread+0x17f/0x1390 kernel/workqueue.c:2296
       kthread+0x35a/0x440 kernel/kthread.c:246
       ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:352

other info that might help us debug this:

Chain exists of:
  &sb->s_type->i_mutex_key#10 --> (wq_completion)"dio/%s"sb->s_id --> (work_completion)(&dio->complete_work)

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock((work_completion)(&dio->complete_work));
                               lock((wq_completion)"dio/%s"sb->s_id);
                               lock((work_completion)(&dio->complete_work));
  lock(&sb->s_type->i_mutex_key#10);

 *** DEADLOCK ***

2 locks held by kworker/0:1/12:
 #0: 0000000027ea4724 ((wq_completion)"dio/%s"sb->s_id){+.+.}, at: __write_once_size include/linux/compiler.h:209 [inline]
 #0: 0000000027ea4724 ((wq_completion)"dio/%s"sb->s_id){+.+.}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: 0000000027ea4724 ((wq_completion)"dio/%s"sb->s_id){+.+.}, at: atomic64_set include/asm-generic/atomic-instrumented.h:40 [inline]
 #0: 0000000027ea4724 ((wq_completion)"dio/%s"sb->s_id){+.+.}, at: atomic_long_set include/asm-generic/atomic-long.h:59 [inline]
 #0: 0000000027ea4724 ((wq_completion)"dio/%s"sb->s_id){+.+.}, at: set_work_data kernel/workqueue.c:617 [inline]
 #0: 0000000027ea4724 ((wq_completion)"dio/%s"sb->s_id){+.+.}, at: set_work_pool_and_clear_pending kernel/workqueue.c:644 [inline]
 #0: 0000000027ea4724 ((wq_completion)"dio/%s"sb->s_id){+.+.}, at: process_one_work+0xb43/0x1c40 kernel/workqueue.c:2124
 #1: 000000008742d2a2 ((work_completion)(&dio->complete_work)){+.+.}, at: process_one_work+0xb9a/0x1c40 kernel/workqueue.c:2128

stack backtrace:
CPU: 0 PID: 12 Comm: kworker/0:1 Not tainted 4.20.0-rc3+ #345
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: dio/sda1 dio_aio_complete_work
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x244/0x39d lib/dump_stack.c:113
 print_circular_bug.isra.35.cold.54+0x1bd/0x27d kernel/locking/lockdep.c:1221
 check_prev_add kernel/locking/lockdep.c:1863 [inline]
 check_prevs_add kernel/locking/lockdep.c:1976 [inline]
 validate_chain kernel/locking/lockdep.c:2347 [inline]
 __lock_acquire+0x3399/0x4c20 kernel/locking/lockdep.c:3341
 lock_acquire+0x1ed/0x520 kernel/locking/lockdep.c:3844
 down_write+0x8a/0x130 kernel/locking/rwsem.c:70
 inode_lock include/linux/fs.h:757 [inline]
 __generic_file_fsync+0xb5/0x200 fs/libfs.c:981
 ext4_sync_file+0xa4f/0x1510 fs/ext4/fsync.c:120
 vfs_fsync_range+0x140/0x220 fs/sync.c:197
 generic_write_sync include/linux/fs.h:2781 [inline]
 dio_complete+0x75c/0x9e0 fs/direct-io.c:329
 dio_aio_complete_work+0x20/0x30 fs/direct-io.c:341
 process_one_work+0xc90/0x1c40 kernel/workqueue.c:2153
 worker_thread+0x17f/0x1390 kernel/workqueue.c:2296
 kthread+0x35a/0x440 kernel/kthread.c:246
 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:352
------------[ cut here ]------------
kobject: 'loop0' (00000000902ef9f4): kobject_uevent_env
downgrading a read lock
WARNING: CPU: 0 PID: 24480 at kernel/locking/lockdep.c:3556 __lock_downgrade kernel/locking/lockdep.c:3556 [inline]
WARNING: CPU: 0 PID: 24480 at kernel/locking/lockdep.c:3556 lock_downgrade+0x4d7/0x900 kernel/locking/lockdep.c:3819
kobject: 'loop0' (00000000902ef9f4): fill_kobj_path: path = '/devices/virtual/block/loop0'
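What lockdep is reporting above is a cycle in its lock-dependency graph: the fsync path takes the inode lock and then drains the dio workqueue, while the dio completion work runs under the workqueue "locks" and then takes the same inode lock. The cycle check can be sketched in miniature as a graph search over recorded acquire-while-held edges; this is an illustrative toy, not lockdep's actual data structures or algorithm, and the `record`/`has_cycle` names are made up:

```python
# Toy sketch of lockdep-style cycle detection over lock-ordering edges.
# The lock names mirror the report above; the graph code is illustrative only.
from collections import defaultdict

deps = defaultdict(set)  # edge A -> B means: B was acquired while A was held


def record(held, new):
    """Record that lock `new` was taken while lock `held` was held."""
    deps[held].add(new)


def has_cycle(start):
    """DFS from `start`'s successors, looking for a path back to `start`."""
    stack, seen = list(deps[start]), set()
    while stack:
        node = stack.pop()
        if node == start:
            return True
        if node in seen:
            continue
        seen.add(node)
        stack.extend(deps[node])
    return False


# The three edges from the report's dependency chain (#2, #1, #0):
record('(wq_completion)"dio/%s"', "(work_completion)(&dio->complete_work)")   # flush waits for the work
record("&sb->s_type->i_mutex_key", '(wq_completion)"dio/%s"')                 # fsync path drains the wq
record("(work_completion)(&dio->complete_work)", "&sb->s_type->i_mutex_key")  # work takes the inode lock

print(has_cycle("&sb->s_type->i_mutex_key"))  # True: the circular dependency lockdep warns about
```

Any one of the three locks serves as a starting point, since all three sit on the same cycle; a lock with no recorded dependencies reports no cycle.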