======================================================
WARNING: possible circular locking dependency detected
4.13.0-rc6-next-20170824+ #8 Not tainted
------------------------------------------------------
kworker/0:1/24 is trying to acquire lock:
 ((&irqfd->shutdown)){+.+.}, at: [] process_one_work+0xb2c/0x1be0 kernel/workqueue.c:2094

but now in release context of a crosslock acquired at the following:
 ((complete)&rcu.completion){+.+.}, at: [] __synchronize_srcu+0x1b5/0x250 kernel/rcu/srcutree.c:898

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 ((complete)&rcu.completion){+.+.}:
       check_prevs_add kernel/locking/lockdep.c:2020 [inline]
       validate_chain kernel/locking/lockdep.c:2469 [inline]
       __lock_acquire+0x3286/0x4620 kernel/locking/lockdep.c:3498
       lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4002
       complete_acquire include/linux/completion.h:39 [inline]
       __wait_for_common kernel/sched/completion.c:108 [inline]
       wait_for_common kernel/sched/completion.c:122 [inline]
       wait_for_completion+0xc8/0x770 kernel/sched/completion.c:143
       __synchronize_srcu+0x1b5/0x250 kernel/rcu/srcutree.c:898
       synchronize_srcu_expedited kernel/rcu/srcutree.c:923 [inline]
       synchronize_srcu+0x1a3/0x560 kernel/rcu/srcutree.c:974
       kvm_irqfd_assign arch/x86/kvm/../../../virt/kvm/eventfd.c:364 [inline]
       kvm_irqfd+0x994/0x1d50 arch/x86/kvm/../../../virt/kvm/eventfd.c:572
       kvm_vm_ioctl+0x1079/0x1c40 arch/x86/kvm/../../../virt/kvm/kvm_main.c:3032
       vfs_ioctl fs/ioctl.c:45 [inline]
       do_vfs_ioctl+0x1b1/0x1530 fs/ioctl.c:685
       SYSC_ioctl fs/ioctl.c:700 [inline]
       SyS_ioctl+0x8f/0xc0 fs/ioctl.c:691
       entry_SYSCALL_64_fastpath+0x1f/0xbe

-> #1 (&kvm->irqfds.resampler_lock){+.+.}:
       check_prevs_add kernel/locking/lockdep.c:2020 [inline]
       validate_chain kernel/locking/lockdep.c:2469 [inline]
       __lock_acquire+0x3286/0x4620 kernel/locking/lockdep.c:3498
       lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4002
       __mutex_lock_common kernel/locking/mutex.c:756 [inline]
       __mutex_lock+0x16f/0x1870 kernel/locking/mutex.c:893
       mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:908
       irqfd_resampler_shutdown+0xe3/0x6b0 arch/x86/kvm/../../../virt/kvm/eventfd.c:98
       irqfd_shutdown+0xd8/0x1a0 arch/x86/kvm/../../../virt/kvm/eventfd.c:137
       process_one_work+0xbfd/0x1be0 kernel/workqueue.c:2098
       worker_thread+0x223/0x1860 kernel/workqueue.c:2233
       kthread+0x39c/0x470 kernel/kthread.c:231
       ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:431

-> #0 ((&irqfd->shutdown)){+.+.}:
       process_one_work+0xba5/0x1be0 kernel/workqueue.c:2095
       worker_thread+0x223/0x1860 kernel/workqueue.c:2233
       kthread+0x39c/0x470 kernel/kthread.c:231
       ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:431
       0xffffffffffffffff

other info that might help us debug this:

Chain exists of:
  (&irqfd->shutdown) --> &kvm->irqfds.resampler_lock --> (complete)&rcu.completion

 Possible unsafe locking scenario by crosslock:

       CPU0                    CPU1
       ----                    ----
  lock(&kvm->irqfds.resampler_lock);
  lock((complete)&rcu.completion);
                               lock((&irqfd->shutdown));
                               unlock((complete)&rcu.completion);

 *** DEADLOCK ***

3 locks held by kworker/0:1/24:
 #0:  ("events_power_efficient"){.+.+}, at: [] __write_once_size include/linux/compiler.h:305 [inline]
 #0:  ("events_power_efficient"){.+.+}, at: [] atomic64_set arch/x86/include/asm/atomic64_64.h:33 [inline]
 #0:  ("events_power_efficient"){.+.+}, at: [] atomic_long_set include/asm-generic/atomic-long.h:56 [inline]
 #0:  ("events_power_efficient"){.+.+}, at: [] set_work_data kernel/workqueue.c:617 [inline]
 #0:  ("events_power_efficient"){.+.+}, at: [] set_work_pool_and_clear_pending kernel/workqueue.c:644 [inline]
 #0:  ("events_power_efficient"){.+.+}, at: [] process_one_work+0xad4/0x1be0 kernel/workqueue.c:2090
 #1:  ((&(&sdp->work)->work)){+.+.}, at: [] process_one_work+0xb2c/0x1be0 kernel/workqueue.c:2094
 #2:  (&x->wait#5){....}, at: [] complete+0x18/0x80 kernel/sched/completion.c:34

stack backtrace:
CPU: 0 PID: 24 Comm: kworker/0:1 Not tainted 4.13.0-rc6-next-20170824+ #8
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: events_power_efficient srcu_invoke_callbacks
Call Trace:
 __dump_stack lib/dump_stack.c:16 [inline]
 dump_stack+0x194/0x257 lib/dump_stack.c:52
 print_circular_bug+0x503/0x710 kernel/locking/lockdep.c:1259
 check_prev_add+0x865/0x1520 kernel/locking/lockdep.c:1894
 commit_xhlock kernel/locking/lockdep.c:5002 [inline]
 commit_xhlocks kernel/locking/lockdep.c:5046 [inline]
 lock_commit_crosslock+0xe73/0x1d10 kernel/locking/lockdep.c:5085
 complete_release_commit include/linux/completion.h:49 [inline]
 complete+0x24/0x80 kernel/sched/completion.c:39
 wakeme_after_rcu+0xd/0x10 kernel/rcu/update.c:376
 srcu_invoke_callbacks+0x280/0x4d0 kernel/rcu/srcutree.c:1161
 process_one_work+0xbfd/0x1be0 kernel/workqueue.c:2098
 worker_thread+0x223/0x1860 kernel/workqueue.c:2233
 kthread+0x39c/0x470 kernel/kthread.c:231
 ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:431
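
For orientation, one way to read the chain above: kvm_irqfd_assign() waits in synchronize_srcu() while holding kvm->irqfds.resampler_lock (stanza #2 together with the "Chain exists of" line), the irqfd->shutdown work item takes that same resampler_lock via irqfd_resampler_shutdown() (stanza #1), and the completion that synchronize_srcu() is waiting for is only signalled from another work item, srcu_invoke_callbacks() -> wakeme_after_rcu() (the backtrace at the bottom). The cross-release checker ties the completion to the workqueue context that releases it, which closes the cycle. The following user-space sketch is only an analogue of that cycle, not kernel or KVM code; the names merely mirror the report, and the program deliberately hangs when built with -pthread and run.

/*
 * Illustration only: a user-space analogue of the cycle reported above.
 *
 *   main()    plays kvm_irqfd_assign(): it takes resampler_lock and then
 *             waits for a "grace period" completion.
 *   worker()  plays a single kworker running its queued jobs in order:
 *             first the irqfd shutdown work (which needs resampler_lock),
 *             then the SRCU callback work (which signals the completion).
 *
 * With that ordering the worker blocks on the mutex, the completion is
 * never signalled, and main() never releases the mutex.
 */
#include <pthread.h>

static pthread_mutex_t resampler_lock = PTHREAD_MUTEX_INITIALIZER;

/* A bare-bones stand-in for struct completion: mutex + condvar + flag. */
static pthread_mutex_t done_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done_cv   = PTHREAD_COND_INITIALIZER;
static int done;

static void complete(void)
{
    pthread_mutex_lock(&done_lock);
    done = 1;
    pthread_cond_signal(&done_cv);
    pthread_mutex_unlock(&done_lock);
}

static void wait_for_completion(void)
{
    pthread_mutex_lock(&done_lock);
    while (!done)
        pthread_cond_wait(&done_cv, &done_lock);
    pthread_mutex_unlock(&done_lock);
}

/* Job 1: stands in for irqfd_shutdown() -> irqfd_resampler_shutdown(). */
static void shutdown_work(void)
{
    pthread_mutex_lock(&resampler_lock);   /* blocks: main() holds it */
    pthread_mutex_unlock(&resampler_lock);
}

/* Job 2: stands in for srcu_invoke_callbacks() -> wakeme_after_rcu(). */
static void srcu_callback_work(void)
{
    complete();
}

static void *worker(void *arg)
{
    (void)arg;
    shutdown_work();        /* never returns in this ordering ... */
    srcu_callback_work();   /* ... so the completion never fires */
    return NULL;
}

int main(void)
{
    pthread_t kworker;

    pthread_mutex_lock(&resampler_lock);   /* "resampler_lock held" */
    pthread_create(&kworker, NULL, worker, NULL);
    wait_for_completion();                 /* "synchronize_srcu()": hangs */
    pthread_mutex_unlock(&resampler_lock);
    pthread_join(kworker, NULL);
    return 0;
}

Whether the kernel can actually line the two work items up behind one worker in this way is the question such a report leaves open; cross-release lockdep is known to flag orderings that cannot occur in practice, so the sketch only shows the shape of the reported dependency, not a confirmed deadlock.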