bond0 (unregistering): Released all slaves
INFO: task kworker/0:5:6412 blocked for more than 120 seconds.
      Not tainted 4.16.0-rc7+ #367
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kworker/0:5     D21352  6412      2 0x80000000
Workqueue: events cgwb_release_workfn
Call Trace:
 context_switch kernel/sched/core.c:2862 [inline]
 __schedule+0x8fb/0x1ec0 kernel/sched/core.c:3440
 schedule+0xf5/0x430 kernel/sched/core.c:3499
 bit_wait+0x18/0x90 kernel/sched/wait_bit.c:250
 __wait_on_bit+0x88/0x130 kernel/sched/wait_bit.c:51
 out_of_line_wait_on_bit+0x204/0x3a0 kernel/sched/wait_bit.c:64
 wait_on_bit include/linux/wait_bit.h:84 [inline]
 wb_shutdown+0x335/0x430 mm/backing-dev.c:377
 cgwb_release_workfn+0x8b/0x61d mm/backing-dev.c:520
 process_one_work+0xc47/0x1bb0 kernel/workqueue.c:2113
 worker_thread+0x223/0x1990 kernel/workqueue.c:2247
 kthread+0x33c/0x400 kernel/kthread.c:238
 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:406

Showing all locks held in the system:
5 locks held by kworker/1:1/24:
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<00000000ee47169e>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<00000000ee47169e>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<00000000ee47169e>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<00000000ee47169e>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&css->destroy_work)){+.+.}, at: [<000000009f302b8e>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (cgroup_mutex){+.+.}, at: [<00000000e5d72dc2>] css_killed_work_fn+0x93/0x5c0 kernel/cgroup/cgroup.c:4967
 #3:  (cpu_hotplug_lock.rw_sem){++++}, at: [<00000000b0efa5dc>] get_online_cpus include/linux/cpu.h:124 [inline]
 #3:  (cpu_hotplug_lock.rw_sem){++++}, at: [<00000000b0efa5dc>] memcg_deactivate_kmem_caches+0x21/0xf0 mm/slab_common.c:747
 #4:  (slab_mutex){+.+.}, at: [<0000000066fbfb7e>] memcg_deactivate_kmem_caches+0x2f/0xf0 mm/slab_common.c:750
3 locks held by kworker/u4:3/516:
 #0:  ((wq_completion)"%s""netns"){+.+.}, at: [<00000000ee47169e>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"%s""netns"){+.+.}, at: [<00000000ee47169e>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"%s""netns"){+.+.}, at: [<00000000ee47169e>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"%s""netns"){+.+.}, at: [<00000000ee47169e>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  (net_cleanup_work){+.+.}, at: [<000000009f302b8e>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (net_mutex){+.+.}, at: [<00000000879c5d71>] cleanup_net+0x242/0xcb0 net/core/net_namespace.c:484
2 locks held by khungtaskd/801:
 #0:  (rcu_read_lock){....}, at: [<00000000230c9a72>] check_hung_uninterruptible_tasks kernel/hung_task.c:175 [inline]
 #0:  (rcu_read_lock){....}, at: [<00000000230c9a72>] watchdog+0x1c5/0xd60 kernel/hung_task.c:249
 #1:  (tasklist_lock){.+.+}, at: [<00000000acfba6a2>] debug_show_all_locks+0xd3/0x3d0 kernel/locking/lockdep.c:4470
2 locks held by getty/4160:
 #0:  (&tty->ldisc_sem){++++}, at: [<00000000e55bb947>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<0000000074f4f5ae>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4161:
 #0:  (&tty->ldisc_sem){++++}, at: [<00000000e55bb947>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<0000000074f4f5ae>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4162:
 #0:  (&tty->ldisc_sem){++++}, at: [<00000000e55bb947>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<0000000074f4f5ae>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4163:
 #0:  (&tty->ldisc_sem){++++}, at: [<00000000e55bb947>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<0000000074f4f5ae>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4164:
 #0:  (&tty->ldisc_sem){++++}, at: [<00000000e55bb947>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<0000000074f4f5ae>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4165:
 #0:  (&tty->ldisc_sem){++++}, at: [<00000000e55bb947>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<0000000074f4f5ae>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4166:
 #0:  (&tty->ldisc_sem){++++}, at: [<00000000e55bb947>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<0000000074f4f5ae>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by kworker/0:5/6412:
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000ee47169e>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000ee47169e>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000ee47169e>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000ee47169e>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&wb->release_work)){+.+.}, at: [<000000009f302b8e>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
2 locks held by kworker/0:6/6638:
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000ee47169e>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000ee47169e>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000ee47169e>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000ee47169e>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&wb->release_work)){+.+.}, at: [<000000009f302b8e>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
2 locks held by kworker/0:7/8145:
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000ee47169e>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000ee47169e>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000ee47169e>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000ee47169e>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&wb->release_work)){+.+.}, at: [<000000009f302b8e>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
4 locks held by kworker/1:8/9938:
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<00000000ee47169e>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<00000000ee47169e>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<00000000ee47169e>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<00000000ee47169e>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&cw->work)){+.+.}, at: [<000000009f302b8e>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (cpu_hotplug_lock.rw_sem){++++}, at: [<00000000f2bd69ec>] get_online_cpus include/linux/cpu.h:124 [inline]
 #2:  (cpu_hotplug_lock.rw_sem){++++}, at: [<00000000f2bd69ec>] memcg_create_kmem_cache+0x16/0x170 mm/slab_common.c:619
 #3:  (slab_mutex){+.+.}, at: [<00000000e383d433>] memcg_create_kmem_cache+0x24/0x170 mm/slab_common.c:622
2 locks held by kworker/0:10/17889:
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000ee47169e>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000ee47169e>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000ee47169e>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000ee47169e>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&wb->release_work)){+.+.}, at: [<000000009f302b8e>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
4 locks held by kworker/0:11/17894:
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<00000000ee47169e>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<00000000ee47169e>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<00000000ee47169e>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<00000000ee47169e>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&cw->work)){+.+.}, at: [<000000009f302b8e>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (cpu_hotplug_lock.rw_sem){++++}, at: [<00000000f2bd69ec>] get_online_cpus include/linux/cpu.h:124 [inline]
 #2:  (cpu_hotplug_lock.rw_sem){++++}, at: [<00000000f2bd69ec>] memcg_create_kmem_cache+0x16/0x170 mm/slab_common.c:619
 #3:  (slab_mutex){+.+.}, at: [<00000000e383d433>] memcg_create_kmem_cache+0x24/0x170 mm/slab_common.c:622
3 locks held by kworker/0:12/20000:
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000ee47169e>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000ee47169e>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000ee47169e>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000ee47169e>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  (slab_caches_to_rcu_destroy_work){+.+.}, at: [<000000009f302b8e>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (slab_mutex){+.+.}, at: [<0000000086326cce>] slab_caches_to_rcu_destroy_workfn+0x25/0xc0 mm/slab_common.c:556
3 locks held by syz-executor4/25422:
 #0:  (sb_writers#10){.+.+}, at: [<00000000e21fd76d>] file_start_write include/linux/fs.h:2709 [inline]
 #0:  (sb_writers#10){.+.+}, at: [<00000000e21fd76d>] vfs_write+0x407/0x510 fs/read_write.c:543
 #1:  (&of->mutex){+.+.}, at: [<00000000ed634077>] kernfs_fop_write+0x208/0x440 fs/kernfs/file.c:307
 #2:  (cgroup_mutex){+.+.}, at: [<0000000066111e39>] cgroup_lock_and_drain_offline+0x282/0x670 kernel/cgroup/cgroup.c:2793

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 801 Comm: khungtaskd Not tainted 4.16.0-rc7+ #367
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:17 [inline]
 dump_stack+0x194/0x24d lib/dump_stack.c:53
 nmi_cpu_backtrace+0x1d2/0x210 lib/nmi_backtrace.c:103
 nmi_trigger_cpumask_backtrace+0x123/0x180 lib/nmi_backtrace.c:62
 arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
 trigger_all_cpu_backtrace include/linux/nmi.h:138 [inline]
 check_hung_task kernel/hung_task.c:132 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:190 [inline]
 watchdog+0x90c/0xd60 kernel/hung_task.c:249
 kthread+0x33c/0x400 kernel/kthread.c:238
 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:406
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1 skipped: idling at native_safe_halt+0x6/0x10 arch/x86/include/asm/irqflags.h:54