device bridge0 left promiscuous mode
unregister_netdevice: waiting for lo to become free. Usage count = 3
unregister_netdevice: waiting for lo to become free. Usage count = 3
unregister_netdevice: waiting for lo to become free. Usage count = 3
unregister_netdevice: waiting for lo to become free. Usage count = 3
INFO: task kworker/0:4:5558 blocked for more than 120 seconds.
      Not tainted 4.16.0-rc7+ #367
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kworker/0:4     D20688  5558      2 0x80000000
Workqueue: events cgwb_release_workfn
Call Trace:
 context_switch kernel/sched/core.c:2862 [inline]
 __schedule+0x8fb/0x1ec0 kernel/sched/core.c:3440
 schedule+0xf5/0x430 kernel/sched/core.c:3499
 bit_wait+0x18/0x90 kernel/sched/wait_bit.c:250
 __wait_on_bit+0x88/0x130 kernel/sched/wait_bit.c:51
 out_of_line_wait_on_bit+0x204/0x3a0 kernel/sched/wait_bit.c:64
 wait_on_bit include/linux/wait_bit.h:84 [inline]
 wb_shutdown+0x335/0x430 mm/backing-dev.c:377
 cgwb_release_workfn+0x8b/0x61d mm/backing-dev.c:520
 process_one_work+0xc47/0x1bb0 kernel/workqueue.c:2113
 worker_thread+0x223/0x1990 kernel/workqueue.c:2247
 kthread+0x33c/0x400 kernel/kthread.c:238
 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:406

Showing all locks held in the system:
2 locks held by khungtaskd/801:
 #0:  (rcu_read_lock){....}, at: [<000000005c9f3f8e>] check_hung_uninterruptible_tasks kernel/hung_task.c:175 [inline]
 #0:  (rcu_read_lock){....}, at: [<000000005c9f3f8e>] watchdog+0x1c5/0xd60 kernel/hung_task.c:249
 #1:  (tasklist_lock){.+.+}, at: [<00000000ac982510>] debug_show_all_locks+0xd3/0x3d0 kernel/locking/lockdep.c:4470
5 locks held by kworker/0:2/1784:
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<0000000028671a7b>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<0000000028671a7b>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<0000000028671a7b>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<0000000028671a7b>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&css->destroy_work)){+.+.}, at: [<00000000e1c7b65b>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (cgroup_mutex){+.+.}, at: [<00000000a4fe52c2>] css_killed_work_fn+0x93/0x5c0 kernel/cgroup/cgroup.c:4967
 #3:  (cpu_hotplug_lock.rw_sem){++++}, at: [<000000004a2be277>] get_online_cpus include/linux/cpu.h:124 [inline]
 #3:  (cpu_hotplug_lock.rw_sem){++++}, at: [<000000004a2be277>] memcg_deactivate_kmem_caches+0x21/0xf0 mm/slab_common.c:747
 #4:  (slab_mutex){+.+.}, at: [<0000000032fffee8>] memcg_deactivate_kmem_caches+0x2f/0xf0 mm/slab_common.c:750
1 lock held by rsyslogd/4064:
 #0:  (&f->f_pos_lock){+.+.}, at: [<000000007596412f>] __fdget_pos+0x12b/0x190 fs/file.c:765
2 locks held by getty/4156:
 #0:  (&tty->ldisc_sem){++++}, at: [<00000000cf6dfcb9>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<00000000906a99d8>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4157:
 #0:  (&tty->ldisc_sem){++++}, at: [<00000000cf6dfcb9>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<00000000906a99d8>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4158:
 #0:  (&tty->ldisc_sem){++++}, at: [<00000000cf6dfcb9>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<00000000906a99d8>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4159:
 #0:  (&tty->ldisc_sem){++++}, at: [<00000000cf6dfcb9>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<00000000906a99d8>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4160:
 #0:  (&tty->ldisc_sem){++++}, at: [<00000000cf6dfcb9>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<00000000906a99d8>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4161:
 #0:  (&tty->ldisc_sem){++++}, at: [<00000000cf6dfcb9>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<00000000906a99d8>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
4 locks held by kworker/0:3/5508:
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<0000000028671a7b>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<0000000028671a7b>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<0000000028671a7b>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<0000000028671a7b>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&cw->work)){+.+.}, at: [<00000000e1c7b65b>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (cpu_hotplug_lock.rw_sem){++++}, at: [<000000003db4413e>] get_online_cpus include/linux/cpu.h:124 [inline]
 #2:  (cpu_hotplug_lock.rw_sem){++++}, at: [<000000003db4413e>] memcg_create_kmem_cache+0x16/0x170 mm/slab_common.c:619
 #3:  (slab_mutex){+.+.}, at: [<00000000d189c087>] memcg_create_kmem_cache+0x24/0x170 mm/slab_common.c:622
2 locks held by kworker/0:4/5558:
 #0:  ((wq_completion)"events"){+.+.}, at: [<0000000028671a7b>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<0000000028671a7b>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<0000000028671a7b>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<0000000028671a7b>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&wb->release_work)){+.+.}, at: [<00000000e1c7b65b>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
4 locks held by kworker/1:11/14971:
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<0000000028671a7b>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<0000000028671a7b>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<0000000028671a7b>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"memcg_kmem_cache"){+.+.}, at: [<0000000028671a7b>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&cw->work)){+.+.}, at: [<00000000e1c7b65b>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (cpu_hotplug_lock.rw_sem){++++}, at: [<000000003db4413e>] get_online_cpus include/linux/cpu.h:124 [inline]
 #2:  (cpu_hotplug_lock.rw_sem){++++}, at: [<000000003db4413e>] memcg_create_kmem_cache+0x16/0x170 mm/slab_common.c:619
 #3:  (slab_mutex){+.+.}, at: [<00000000d189c087>] memcg_create_kmem_cache+0x24/0x170 mm/slab_common.c:622
4 locks held by kworker/u4:16/7646:
 #0:  ((wq_completion)"%s""netns"){+.+.}, at: [<0000000028671a7b>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"%s""netns"){+.+.}, at: [<0000000028671a7b>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"%s""netns"){+.+.}, at: [<0000000028671a7b>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"%s""netns"){+.+.}, at: [<0000000028671a7b>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  (net_cleanup_work){+.+.}, at: [<00000000e1c7b65b>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (net_mutex){+.+.}, at: [<00000000bfa23b77>] cleanup_net+0x242/0xcb0 net/core/net_namespace.c:484
 #3:  (rcu_sched_state.barrier_mutex){+.+.}, at: [<000000004fa03517>] _rcu_barrier+0x142/0x750 kernel/rcu/tree.c:3517
2 locks held by getty/9914:
 #0:  (&tty->ldisc_sem){++++}, at: [<00000000cf6dfcb9>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<00000000906a99d8>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
1 lock held by syz-executor3/10867:
 #0:  (net_mutex){+.+.}, at: [<00000000c964e6bb>] copy_net_ns+0x1f5/0x580 net/core/net_namespace.c:417

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 801 Comm: khungtaskd Not tainted 4.16.0-rc7+ #367
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:17 [inline]
 dump_stack+0x194/0x24d lib/dump_stack.c:53
 nmi_cpu_backtrace+0x1d2/0x210 lib/nmi_backtrace.c:103
 nmi_trigger_cpumask_backtrace+0x123/0x180 lib/nmi_backtrace.c:62
 arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
 trigger_all_cpu_backtrace include/linux/nmi.h:138 [inline]
 check_hung_task kernel/hung_task.c:132 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:190 [inline]
 watchdog+0x90c/0xd60 kernel/hung_task.c:249
 kthread+0x33c/0x400 kernel/kthread.c:238
 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:406
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0 skipped: idling at native_safe_halt+0x6/0x10 arch/x86/include/asm/irqflags.h:54