======================================================
WARNING: possible circular locking dependency detected
5.0.0-rc8-next-20190228 #45 Not tainted
------------------------------------------------------
kworker/1:3/10706 is trying to acquire lock:
000000003b3fa6d7 (&sb->s_type->i_mutex_key#10){++++}, at: inode_lock include/linux/fs.h:763 [inline]
000000003b3fa6d7 (&sb->s_type->i_mutex_key#10){++++}, at: __generic_file_fsync+0xb5/0x200 fs/libfs.c:983

but task is already holding lock:
000000002dccfa19 ((work_completion)(&dio->complete_work)){+.+.}, at: process_one_work+0x8b4/0x1790 kernel/workqueue.c:2242

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 ((work_completion)(&dio->complete_work)){+.+.}:
       process_one_work+0x90f/0x1790 kernel/workqueue.c:2243
       worker_thread+0x98/0xe40 kernel/workqueue.c:2413
       kthread+0x357/0x430 kernel/kthread.c:253
       ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:352

-> #1 ((wq_completion)"dio/%s"sb->s_id){+.+.}:
       flush_workqueue+0x126/0x14c0 kernel/workqueue.c:2772
       drain_workqueue+0x1b4/0x470 kernel/workqueue.c:2937
       destroy_workqueue+0x21/0x6f0 kernel/workqueue.c:4275
       sb_init_dio_done_wq+0x77/0x90 fs/direct-io.c:636
       dio_set_defer_completion fs/direct-io.c:648 [inline]
       get_more_blocks fs/direct-io.c:726 [inline]
       do_direct_IO fs/direct-io.c:1004 [inline]
       do_blockdev_direct_IO+0x27b7/0x8db0 fs/direct-io.c:1336
       __blockdev_direct_IO+0xa1/0xca fs/direct-io.c:1422
       ext4_direct_IO_write fs/ext4/inode.c:3768 [inline]
       ext4_direct_IO+0xa60/0x1cf0 fs/ext4/inode.c:3895
       generic_file_direct_write+0x20f/0x4b0 mm/filemap.c:3197
       __generic_file_write_iter+0x2ee/0x630 mm/filemap.c:3380
       ext4_file_write_iter+0x346/0x11a0 fs/ext4/file.c:266
       call_write_iter include/linux/fs.h:1857 [inline]
       aio_write+0x358/0x570 fs/aio.c:1582
       __io_submit_one fs/aio.c:1861 [inline]
       io_submit_one+0x10ea/0x1cf0 fs/aio.c:1909
       __do_sys_io_submit fs/aio.c:1954 [inline]
       __se_sys_io_submit fs/aio.c:1924 [inline]
       __x64_sys_io_submit+0x1bd/0x580 fs/aio.c:1924
       do_syscall_64+0x103/0x610 arch/x86/entry/common.c:290
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #0 (&sb->s_type->i_mutex_key#10){++++}:
       lock_acquire+0x16f/0x3f0 kernel/locking/lockdep.c:3833
       down_write+0x38/0x90 kernel/locking/rwsem.c:70
       inode_lock include/linux/fs.h:763 [inline]
       __generic_file_fsync+0xb5/0x200 fs/libfs.c:983
       ext4_sync_file+0x867/0x14c0 fs/ext4/fsync.c:120
       vfs_fsync_range+0x144/0x230 fs/sync.c:197
       generic_write_sync include/linux/fs.h:2787 [inline]
       dio_complete+0x498/0x9f0 fs/direct-io.c:329
       dio_aio_complete_work+0x20/0x30 fs/direct-io.c:341
       process_one_work+0x98e/0x1790 kernel/workqueue.c:2267
       worker_thread+0x98/0xe40 kernel/workqueue.c:2413
       kthread+0x357/0x430 kernel/kthread.c:253
       ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:352

other info that might help us debug this:

Chain exists of:
  &sb->s_type->i_mutex_key#10 --> (wq_completion)"dio/%s"sb->s_id --> (work_completion)(&dio->complete_work)

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock((work_completion)(&dio->complete_work));
                               lock((wq_completion)"dio/%s"sb->s_id);
                               lock((work_completion)(&dio->complete_work));
  lock(&sb->s_type->i_mutex_key#10);

 *** DEADLOCK ***

2 locks held by kworker/1:3/10706:
 #0: 00000000a12dec43 ((wq_completion)"dio/%s"sb->s_id){+.+.}, at: __write_once_size include/linux/compiler.h:224 [inline]
 #0: 00000000a12dec43 ((wq_completion)"dio/%s"sb->s_id){+.+.}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: 00000000a12dec43 ((wq_completion)"dio/%s"sb->s_id){+.+.}, at: atomic64_set include/asm-generic/atomic-instrumented.h:855 [inline]
 #0: 00000000a12dec43 ((wq_completion)"dio/%s"sb->s_id){+.+.}, at: atomic_long_set include/asm-generic/atomic-long.h:40 [inline]
 #0: 00000000a12dec43 ((wq_completion)"dio/%s"sb->s_id){+.+.}, at: set_work_data kernel/workqueue.c:617 [inline]
 #0: 00000000a12dec43 ((wq_completion)"dio/%s"sb->s_id){+.+.}, at: set_work_pool_and_clear_pending kernel/workqueue.c:644 [inline]
 #0: 00000000a12dec43 ((wq_completion)"dio/%s"sb->s_id){+.+.}, at: process_one_work+0x87e/0x1790 kernel/workqueue.c:2238
 #1: 000000002dccfa19 ((work_completion)(&dio->complete_work)){+.+.}, at: process_one_work+0x8b4/0x1790 kernel/workqueue.c:2242

stack backtrace:
CPU: 1 PID: 10706 Comm: kworker/1:3 Not tainted 5.0.0-rc8-next-20190228 #45
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: dio/sda1 dio_aio_complete_work
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x172/0x1f0 lib/dump_stack.c:113
 print_circular_bug.isra.0.cold+0x1cc/0x28f kernel/locking/lockdep.c:1225
 check_prev_add kernel/locking/lockdep.c:1856 [inline]
 check_prevs_add kernel/locking/lockdep.c:1969 [inline]
 validate_chain kernel/locking/lockdep.c:2340 [inline]
 __lock_acquire+0x2fca/0x4710 kernel/locking/lockdep.c:3323
 lock_acquire+0x16f/0x3f0 kernel/locking/lockdep.c:3833
 down_write+0x38/0x90 kernel/locking/rwsem.c:70
 inode_lock include/linux/fs.h:763 [inline]
 __generic_file_fsync+0xb5/0x200 fs/libfs.c:983
 ext4_sync_file+0x867/0x14c0 fs/ext4/fsync.c:120
 vfs_fsync_range+0x144/0x230 fs/sync.c:197
 generic_write_sync include/linux/fs.h:2787 [inline]
 dio_complete+0x498/0x9f0 fs/direct-io.c:329
 dio_aio_complete_work+0x20/0x30 fs/direct-io.c:341
 process_one_work+0x98e/0x1790 kernel/workqueue.c:2267
 worker_thread+0x98/0xe40 kernel/workqueue.c:2413
 kthread+0x357/0x430 kernel/kthread.c:253
 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:352
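
For context: the #1 edge above is recorded where sb_init_dio_done_wq() (fs/direct-io.c:636 in the trace) loses the race to install the per-superblock "dio/%s" workqueue and tears down its own freshly allocated copy with destroy_workqueue(), which drains/flushes that workqueue class while the caller is still deep in the write path. A simplified sketch of that pattern, based on fs/direct-io.c from around this kernel version (treat as illustrative, not a verbatim quote), looks like this:

    /* Lazily create the per-superblock workqueue used for deferred
     * direct-IO completions (the "dio/%s" class in the report). */
    int sb_init_dio_done_wq(struct super_block *sb)
    {
            struct workqueue_struct *old;
            struct workqueue_struct *wq = alloc_workqueue("dio/%s",
                                                          WQ_MEM_RECLAIM, 0,
                                                          sb->s_id);
            if (!wq)
                    return -ENOMEM;
            /* Several DIOs can race here, so install the pointer atomically. */
            old = cmpxchg(&sb->s_dio_done_wq, NULL, wq);
            /* Lost the race: destroy our copy. destroy_workqueue() drains the
             * workqueue, which is what creates the
             * (wq_completion)"dio/%s" dependency seen in the #1 chain while the
             * write path (and on some filesystems the inode lock) is held. */
            if (old)
                    destroy_workqueue(wq);
            return 0;
    }

The #0 edge comes from the completion side: dio_aio_complete_work() runs on that same workqueue and ends up in __generic_file_fsync(), which takes the inode lock, closing the cycle lockdep reports.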