======================================================
WARNING: possible circular locking dependency detected
4.13.0-rc6-next-20170824+ #8 Not tainted
------------------------------------------------------
kworker/0:2/1199 is trying to acquire lock:
 ((shepherd).work){+.+.}, at: [] process_one_work+0xb2c/0x1be0 kernel/workqueue.c:2094

but now in release context of a crosslock acquired at the following:
 ((complete)wq_barr::done/1){+.+.}, at: [] flush_work+0x621/0x930 kernel/workqueue.c:2868

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #3 ((complete)wq_barr::done/1){+.+.}:
       check_prevs_add kernel/locking/lockdep.c:2020 [inline]
       validate_chain kernel/locking/lockdep.c:2469 [inline]
       __lock_acquire+0x3286/0x4620 kernel/locking/lockdep.c:3498
       lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4002
       complete_acquire include/linux/completion.h:39 [inline]
       __wait_for_common kernel/sched/completion.c:108 [inline]
       wait_for_common kernel/sched/completion.c:122 [inline]
       wait_for_completion+0xc8/0x770 kernel/sched/completion.c:143
       flush_work+0x621/0x930 kernel/workqueue.c:2868
       lru_add_drain_all_cpuslocked+0x331/0x520 mm/swap.c:722
       lru_add_drain_all+0x13/0x20 mm/swap.c:730
       SYSC_mlockall mm/mlock.c:803 [inline]
       SyS_mlockall+0x2fb/0x670 mm/mlock.c:791
       entry_SYSCALL_64_fastpath+0x1f/0xbe

-> #2 (lock#5){+.+.}:
       check_prevs_add kernel/locking/lockdep.c:2020 [inline]
       validate_chain kernel/locking/lockdep.c:2469 [inline]
       __lock_acquire+0x3286/0x4620 kernel/locking/lockdep.c:3498
       lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4002
       __mutex_lock_common kernel/locking/mutex.c:756 [inline]
       __mutex_lock+0x16f/0x1870 kernel/locking/mutex.c:893
       mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:908
       lru_add_drain_all_cpuslocked+0xb3/0x520 mm/swap.c:704
       lru_add_drain_all+0x13/0x20 mm/swap.c:730
       SYSC_mlockall mm/mlock.c:803 [inline]
       SyS_mlockall+0x2fb/0x670 mm/mlock.c:791
       entry_SYSCALL_64_fastpath+0x1f/0xbe

-> #1 (cpu_hotplug_lock.rw_sem){++++}:
       check_prevs_add kernel/locking/lockdep.c:2020 [inline]
       validate_chain kernel/locking/lockdep.c:2469 [inline]
       __lock_acquire+0x3286/0x4620 kernel/locking/lockdep.c:3498
       lock_acquire+0x1d5/0x580 kernel/locking/lockdep.c:4002
       percpu_down_read_preempt_disable include/linux/percpu-rwsem.h:35 [inline]
       percpu_down_read include/linux/percpu-rwsem.h:58 [inline]
       cpus_read_lock+0x42/0x90 kernel/cpu.c:218
       get_online_cpus include/linux/cpu.h:126 [inline]
       vmstat_shepherd+0x3d/0x1b0 mm/vmstat.c:1707
       process_one_work+0xbfd/0x1be0 kernel/workqueue.c:2098
       worker_thread+0x223/0x1860 kernel/workqueue.c:2233
       kthread+0x39c/0x470 kernel/kthread.c:231
       ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:431

-> #0 ((shepherd).work){+.+.}:
       process_one_work+0xba5/0x1be0 kernel/workqueue.c:2095
       worker_thread+0x223/0x1860 kernel/workqueue.c:2233
       kthread+0x39c/0x470 kernel/kthread.c:231
       ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:431
       0xffffffffffffffff

other info that might help us debug this:

Chain exists of:
  (shepherd).work --> lock#5 --> (complete)wq_barr::done/1

 Possible unsafe locking scenario by crosslock:

       CPU0                    CPU1
       ----                    ----
  lock(lock#5);
  lock((complete)wq_barr::done/1);
                               lock((shepherd).work);
                               unlock((complete)wq_barr::done/1);

 *** DEADLOCK ***

3 locks held by kworker/0:2/1199:
 #0:  ("mm_percpu_wq"){++++}, at: [] __write_once_size include/linux/compiler.h:305 [inline]
 #0:  ("mm_percpu_wq"){++++}, at: [] atomic64_set arch/x86/include/asm/atomic64_64.h:33 [inline]
 #0:  ("mm_percpu_wq"){++++}, at: [] atomic_long_set include/asm-generic/atomic-long.h:56 [inline]
 #0:  ("mm_percpu_wq"){++++}, at: [] set_work_data kernel/workqueue.c:617 [inline]
 #0:  ("mm_percpu_wq"){++++}, at: [] set_work_pool_and_clear_pending kernel/workqueue.c:644 [inline]
 #0:  ("mm_percpu_wq"){++++}, at: [] process_one_work+0xad4/0x1be0 kernel/workqueue.c:2090
 #1:  ((&barr->work)){+.+.}, at: [] process_one_work+0xb2c/0x1be0 kernel/workqueue.c:2094
 #2:  (&x->wait#14){....}, at: [] complete+0x18/0x80 kernel/sched/completion.c:34
stack backtrace:
CPU: 0 PID: 1199 Comm: kworker/0:2 Not tainted 4.13.0-rc6-next-20170824+ #8
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
Workqueue: mm_percpu_wq wq_barrier_func
Call Trace:
 __dump_stack lib/dump_stack.c:16 [inline]
 dump_stack+0x194/0x257 lib/dump_stack.c:52
 print_circular_bug+0x503/0x710 kernel/locking/lockdep.c:1259
 check_prev_add+0x865/0x1520 kernel/locking/lockdep.c:1894
 commit_xhlock kernel/locking/lockdep.c:5002 [inline]
 commit_xhlocks kernel/locking/lockdep.c:5046 [inline]
 lock_commit_crosslock+0xe73/0x1d10 kernel/locking/lockdep.c:5085
 complete_release_commit include/linux/completion.h:49 [inline]
 complete+0x24/0x80 kernel/sched/completion.c:39
 wq_barrier_func+0x16/0x20 kernel/workqueue.c:2437
 process_one_work+0xbfd/0x1be0 kernel/workqueue.c:2098
 process_scheduled_works kernel/workqueue.c:2159 [inline]
 worker_thread+0xa4b/0x1860 kernel/workqueue.c:2238
 kthread+0x39c/0x470 kernel/kthread.c:231
 ret_from_fork+0x2a/0x40 arch/x86/entry/entry_64.S:431
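Lockdep (with the crossrelease extension) reports this because the four lock classes in the trace form a cycle in its dependency graph, where an edge A -> B means "B was acquired while A was held". A minimal sketch of that cycle check follows; this is a plain DFS over the edges taken from the report above, not the kernel's actual implementation, and the helper names are illustrative:

```python
# Dependency edges reconstructed from the lockdep report:
#   (shepherd).work  -> cpu_hotplug_lock    vmstat_shepherd calls get_online_cpus()
#   cpu_hotplug_lock -> lock#5              lru_add_drain_all_cpuslocked takes the
#                                           drain mutex under cpus_read_lock
#   lock#5           -> wq_barr::done/1     flush_work waits on the barrier completion
#   wq_barr::done/1  -> (shepherd).work     crossrelease: the completion is signalled
#                                           from a worker on the same pool
deps = {
    "(shepherd).work": ["cpu_hotplug_lock.rw_sem"],
    "cpu_hotplug_lock.rw_sem": ["lock#5"],
    "lock#5": ["(complete)wq_barr::done/1"],
    "(complete)wq_barr::done/1": ["(shepherd).work"],
}

def find_cycle(graph):
    """Return one dependency cycle as a list of lock names, or None."""
    def dfs(node, path, seen):
        if node in path:
            # Found a back-edge: return the cycle, closed on itself.
            return path[path.index(node):] + [node]
        if node in seen:
            return None
        seen.add(node)
        for nxt in graph.get(node, ()):
            cycle = dfs(nxt, path + [node], seen)
            if cycle:
                return cycle
        return None

    for start in graph:
        cycle = dfs(start, [], set())
        if cycle:
            return cycle
    return None

print(" -> ".join(find_cycle(deps)))
```

Running this prints the same chain the report summarizes: the shepherd work item, the CPU-hotplug rwsem, the drain mutex, and the workqueue-barrier completion close back onto the shepherd work, which is why lockdep declares a possible deadlock.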