INFO: task kworker/0:4:5805 blocked for more than 120 seconds.
      Not tainted 4.16.0-rc6+ #366
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kworker/0:4     D21016  5805      2 0x80000000
Workqueue: events cgwb_release_workfn
Call Trace:
 context_switch kernel/sched/core.c:2862 [inline]
 __schedule+0x8fb/0x1ec0 kernel/sched/core.c:3440
 schedule+0xf5/0x430 kernel/sched/core.c:3499
 bit_wait+0x18/0x90 kernel/sched/wait_bit.c:250
 __wait_on_bit+0x88/0x130 kernel/sched/wait_bit.c:51
 out_of_line_wait_on_bit+0x204/0x3a0 kernel/sched/wait_bit.c:64
 wait_on_bit include/linux/wait_bit.h:84 [inline]
 wb_shutdown+0x335/0x430 mm/backing-dev.c:377
 cgwb_release_workfn+0x8b/0x61d mm/backing-dev.c:520
 process_one_work+0xc47/0x1bb0 kernel/workqueue.c:2113
 worker_thread+0x223/0x1990 kernel/workqueue.c:2247
 kthread+0x33c/0x400 kernel/kthread.c:238
 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:406

Showing all locks held in the system:
4 locks held by kworker/0:0/3:
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<000000007a1f7864>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<000000007a1f7864>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<000000007a1f7864>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<000000007a1f7864>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&cw->work)){+.+.}, at: [<00000000dcf6dbd2>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (cpu_hotplug_lock.rw_sem){++++}, at: [<0000000070f2e66a>] get_online_cpus include/linux/cpu.h:124 [inline]
 #2:  (cpu_hotplug_lock.rw_sem){++++}, at: [<0000000070f2e66a>] memcg_create_kmem_cache+0x16/0x170 mm/slab_common.c:619
 #3:  (slab_mutex){+.+.}, at: [<00000000f52b0f8d>] memcg_create_kmem_cache+0x24/0x170 mm/slab_common.c:622
4 locks held by kworker/1:1/23:
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<000000007a1f7864>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<000000007a1f7864>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<000000007a1f7864>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<000000007a1f7864>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&css->destroy_work)#3){+.+.}, at: [<00000000dcf6dbd2>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (cpu_hotplug_lock.rw_sem){++++}, at: [<0000000055ddc49b>] get_online_cpus include/linux/cpu.h:124 [inline]
 #2:  (cpu_hotplug_lock.rw_sem){++++}, at: [<0000000055ddc49b>] memcg_destroy_kmem_caches+0xf/0x80 mm/slab_common.c:771
 #3:  (slab_mutex){+.+.}, at: [<000000006f52a8e4>] memcg_destroy_kmem_caches+0x24/0x80 mm/slab_common.c:774
2 locks held by khungtaskd/801:
 #0:  (rcu_read_lock){....}, at: [<000000006fe6ff28>] check_hung_uninterruptible_tasks kernel/hung_task.c:175 [inline]
 #0:  (rcu_read_lock){....}, at: [<000000006fe6ff28>] watchdog+0x1c5/0xd60 kernel/hung_task.c:249
 #1:  (tasklist_lock){.+.+}, at: [<00000000a9c071eb>] debug_show_all_locks+0xd3/0x3d0 kernel/locking/lockdep.c:4470
3 locks held by kworker/1:2/1784:
 #0:  ((wq_completion)"events"){+.+.}, at: [<000000007a1f7864>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<000000007a1f7864>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<000000007a1f7864>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<000000007a1f7864>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  (slab_caches_to_rcu_destroy_work){+.+.}, at: [<00000000dcf6dbd2>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (slab_mutex){+.+.}, at: [<00000000daf0f76e>] slab_caches_to_rcu_destroy_workfn+0x25/0xc0 mm/slab_common.c:556
1 lock held by rsyslogd/4057:
 #0:  (&f->f_pos_lock){+.+.}, at: [<00000000625fc0b7>] __fdget_pos+0x12b/0x190 fs/file.c:765
2 locks held by getty/4147:
 #0:  (&tty->ldisc_sem){++++}, at: [<0000000046e91db5>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<000000008b2be4fa>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4148:
 #0:  (&tty->ldisc_sem){++++}, at: [<0000000046e91db5>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<000000008b2be4fa>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4149:
 #0:  (&tty->ldisc_sem){++++}, at: [<0000000046e91db5>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<000000008b2be4fa>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4150:
 #0:  (&tty->ldisc_sem){++++}, at: [<0000000046e91db5>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<000000008b2be4fa>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4151:
 #0:  (&tty->ldisc_sem){++++}, at: [<0000000046e91db5>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<000000008b2be4fa>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4152:
 #0:  (&tty->ldisc_sem){++++}, at: [<0000000046e91db5>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<000000008b2be4fa>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4153:
 #0:  (&tty->ldisc_sem){++++}, at: [<0000000046e91db5>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<000000008b2be4fa>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by kworker/0:4/5805:
 #0:  ((wq_completion)"events"){+.+.}, at: [<000000007a1f7864>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<000000007a1f7864>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<000000007a1f7864>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<000000007a1f7864>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&wb->release_work)){+.+.}, at: [<00000000dcf6dbd2>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
4 locks held by kworker/1:7/8582:
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<000000007a1f7864>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<000000007a1f7864>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<000000007a1f7864>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<000000007a1f7864>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&cw->work)){+.+.}, at: [<00000000dcf6dbd2>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (cpu_hotplug_lock.rw_sem){++++}, at: [<0000000070f2e66a>] get_online_cpus include/linux/cpu.h:124 [inline]
 #2:  (cpu_hotplug_lock.rw_sem){++++}, at: [<0000000070f2e66a>] memcg_create_kmem_cache+0x16/0x170 mm/slab_common.c:619
 #3:  (slab_mutex){+.+.}, at: [<00000000f52b0f8d>] memcg_create_kmem_cache+0x24/0x170 mm/slab_common.c:622
3 locks held by kworker/u4:12/22028:
 #0:  ((wq_completion)"%s""netns"){+.+.}, at: [<000000007a1f7864>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"%s""netns"){+.+.}, at: [<000000007a1f7864>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"%s""netns"){+.+.}, at: [<000000007a1f7864>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"%s""netns"){+.+.}, at: [<000000007a1f7864>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  (net_cleanup_work){+.+.}, at: [<00000000dcf6dbd2>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (net_mutex){+.+.}, at: [<0000000060c6b004>] cleanup_net+0x242/0xcb0 net/core/net_namespace.c:484

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 801 Comm: khungtaskd Not tainted 4.16.0-rc6+ #366
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:17 [inline]
 dump_stack+0x194/0x24d lib/dump_stack.c:53
 nmi_cpu_backtrace+0x1d2/0x210 lib/nmi_backtrace.c:103
 nmi_trigger_cpumask_backtrace+0x123/0x180 lib/nmi_backtrace.c:62
 arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
 trigger_all_cpu_backtrace include/linux/nmi.h:138 [inline]
 check_hung_task kernel/hung_task.c:132 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:190 [inline]
 watchdog+0x90c/0xd60 kernel/hung_task.c:249
 kthread+0x33c/0x400 kernel/kthread.c:238
 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:406
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 5452 Comm: kworker/1:4 Not tainted 4.16.0-rc6+ #366
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: events rht_deferred_worker
RIP: 0010:lock_is_held_type+0xef/0x210 kernel/locking/lockdep.c:3957
RSP: 0018:ffff8801ca586e58 EFLAGS: 00000807
RAX: 0000000000000000 RBX: 0000000000000082 RCX: ffffffff81480720
RDX: 1ffff10037b0117e RSI: 00000000ffffffff RDI: ffff8801bd808bf4
RBP: ffff8801ca586e78 R08: 1ffff100394b0d43 R09: ffff8801db32bd90
R10: ffff8801ca586ed8 R11: 0000000000000000 R12: ffff8801bd808380
R13: ffff8801db32bd58 R14: 1ffff100394b0ded R15: ffff8801db32bd40
FS:  0000000000000000(0000) GS:ffff8801db300000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000000072c000 CR3: 0000000006e22004 CR4: 00000000001606e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 lock_is_held include/linux/lockdep.h:344 [inline]
 get_pwq.isra.14+0xe9/0x140 kernel/workqueue.c:1065
 insert_work+0x292/0x5f0 kernel/workqueue.c:1303
 __queue_work+0x591/0x1230 kernel/workqueue.c:1463
 queue_work_on+0x16a/0x1c0 kernel/workqueue.c:1488
 queue_work include/linux/workqueue.h:488 [inline]
 schedule_work include/linux/workqueue.h:546 [inline]
 rht_deferred_worker+0x2ba/0x1cd0 lib/rhashtable.c:436
 process_one_work+0xc47/0x1bb0 kernel/workqueue.c:2113
 worker_thread+0x223/0x1990 kernel/workqueue.c:2247
 kthread+0x33c/0x400 kernel/kthread.c:238
 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:406
Code: 0f 1f 44 00 00 65 4c 8b 24 25 c0 ed 01 00 49 8d bc 24 74 08 00 00 48 b8 00 00 00 00 00 fc ff df 48 89 fa 48 c1 ea 03 0f b6 04 02 <48> 89 fa 83 e2 07 83 c2 03 38 c2 7c 08 84 c0 0f 85 cb 00 00 00