IPVS: stopping backup sync thread 3342 ...
IPVS: sync thread started: state = BACKUP, mcast_ifn = sit0, syncid = 0, id = 0
INFO: task kworker/1:3:5816 blocked for more than 120 seconds.
      Not tainted 4.16.0-rc7+ #368
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kworker/1:3     D20944  5816      2 0x80000000
Workqueue: events cgwb_release_workfn
Call Trace:
 context_switch kernel/sched/core.c:2862 [inline]
 __schedule+0x8fb/0x1ec0 kernel/sched/core.c:3440
 schedule+0xf5/0x430 kernel/sched/core.c:3499
 bit_wait+0x18/0x90 kernel/sched/wait_bit.c:250
 __wait_on_bit+0x88/0x130 kernel/sched/wait_bit.c:51
 out_of_line_wait_on_bit+0x204/0x3a0 kernel/sched/wait_bit.c:64
 wait_on_bit include/linux/wait_bit.h:84 [inline]
 wb_shutdown+0x335/0x430 mm/backing-dev.c:377
 cgwb_release_workfn+0x8b/0x61d mm/backing-dev.c:520
 process_one_work+0xc47/0x1bb0 kernel/workqueue.c:2113
 worker_thread+0x223/0x1990 kernel/workqueue.c:2247
 kthread+0x33c/0x400 kernel/kthread.c:238
 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:406

Showing all locks held in the system:
2 locks held by khungtaskd/868:
 #0:  (rcu_read_lock){....}, at: [<00000000060ea7fa>] check_hung_uninterruptible_tasks kernel/hung_task.c:175 [inline]
 #0:  (rcu_read_lock){....}, at: [<00000000060ea7fa>] watchdog+0x1c5/0xd60 kernel/hung_task.c:249
 #1:  (tasklist_lock){.+.+}, at: [<00000000fb402a71>] debug_show_all_locks+0xd3/0x3d0 kernel/locking/lockdep.c:4470
3 locks held by kworker/0:2/1926:
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<00000000773f166b>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<00000000773f166b>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<00000000773f166b>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<00000000773f166b>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&css->destroy_work)){+.+.}, at: [<000000004df92a77>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (cgroup_mutex){+.+.}, at: [<00000000bcf83e46>] css_killed_work_fn+0x93/0x5c0 kernel/cgroup/cgroup.c:4967
3 locks held by kworker/1:2/2037:
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<00000000773f166b>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<00000000773f166b>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<00000000773f166b>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<00000000773f166b>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&css->destroy_work)#2){+.+.}, at: [<000000004df92a77>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (cgroup_mutex){+.+.}, at: [<000000009d27968d>] css_release_work_fn+0xea/0x940 kernel/cgroup/cgroup.c:4592
2 locks held by getty/4417:
 #0:  (&tty->ldisc_sem){++++}, at: [<0000000063c2f4e0>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<00000000e4fa60c5>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4418:
 #0:  (&tty->ldisc_sem){++++}, at: [<0000000063c2f4e0>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<00000000e4fa60c5>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4419:
 #0:  (&tty->ldisc_sem){++++}, at: [<0000000063c2f4e0>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<00000000e4fa60c5>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4420:
 #0:  (&tty->ldisc_sem){++++}, at: [<0000000063c2f4e0>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<00000000e4fa60c5>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4421:
 #0:  (&tty->ldisc_sem){++++}, at: [<0000000063c2f4e0>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<00000000e4fa60c5>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4422:
 #0:  (&tty->ldisc_sem){++++}, at: [<0000000063c2f4e0>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<00000000e4fa60c5>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4423:
 #0:  (&tty->ldisc_sem){++++}, at: [<0000000063c2f4e0>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<00000000e4fa60c5>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by syz-executor1/4489:
 #0:  (sb_writers#11){.+.+}, at: [<00000000bb50cf7a>] sb_start_write include/linux/fs.h:1548 [inline]
 #0:  (sb_writers#11){.+.+}, at: [<00000000bb50cf7a>] mnt_want_write+0x3f/0xb0 fs/namespace.c:386
 #1:  (&type->i_mutex_dir_key#4/1){+.+.}, at: [<0000000093d94938>] inode_lock_nested include/linux/fs.h:748 [inline]
 #1:  (&type->i_mutex_dir_key#4/1){+.+.}, at: [<0000000093d94938>] do_rmdir+0x380/0x5f0 fs/namei.c:3907
3 locks held by syz-executor7/4494:
 #0:  (sb_writers#10){.+.+}, at: [<00000000bb50cf7a>] sb_start_write include/linux/fs.h:1548 [inline]
 #0:  (sb_writers#10){.+.+}, at: [<00000000bb50cf7a>] mnt_want_write+0x3f/0xb0 fs/namespace.c:386
 #1:  (&type->i_mutex_dir_key#3/1){+.+.}, at: [<00000000ea56c11e>] inode_lock_nested include/linux/fs.h:748 [inline]
 #1:  (&type->i_mutex_dir_key#3/1){+.+.}, at: [<00000000ea56c11e>] filename_create+0x192/0x520 fs/namei.c:3625
 #2:  (cgroup_mutex){+.+.}, at: [<00000000e4b00f66>] cgroup_kn_lock_live+0x29e/0x580 kernel/cgroup/cgroup.c:1535
2 locks held by kworker/1:3/5816:
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000773f166b>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000773f166b>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000773f166b>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"events"){+.+.}, at: [<00000000773f166b>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&wb->release_work)){+.+.}, at: [<000000004df92a77>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
2 locks held by syz-executor6/19992:
 #0:  (sb_writers#10){.+.+}, at: [<00000000bb50cf7a>] sb_start_write include/linux/fs.h:1548 [inline]
 #0:  (sb_writers#10){.+.+}, at: [<00000000bb50cf7a>] mnt_want_write+0x3f/0xb0 fs/namespace.c:386
 #1:  (&type->i_mutex_dir_key#3/1){+.+.}, at: [<0000000093d94938>] inode_lock_nested include/linux/fs.h:748 [inline]
 #1:  (&type->i_mutex_dir_key#3/1){+.+.}, at: [<0000000093d94938>] do_rmdir+0x380/0x5f0 fs/namei.c:3907
2 locks held by syz-executor3/22128:
 #0:  (sb_writers#10){.+.+}, at: [<00000000bb50cf7a>] sb_start_write include/linux/fs.h:1548 [inline]
 #0:  (sb_writers#10){.+.+}, at: [<00000000bb50cf7a>] mnt_want_write+0x3f/0xb0 fs/namespace.c:386
 #1:  (&type->i_mutex_dir_key#3/1){+.+.}, at: [<0000000093d94938>] inode_lock_nested include/linux/fs.h:748 [inline]
 #1:  (&type->i_mutex_dir_key#3/1){+.+.}, at: [<0000000093d94938>] do_rmdir+0x380/0x5f0 fs/namei.c:3907
4 locks held by syz-executor2/23568:
 #0:  (sb_writers#11){.+.+}, at: [<00000000bb50cf7a>] sb_start_write include/linux/fs.h:1548 [inline]
 #0:  (sb_writers#11){.+.+}, at: [<00000000bb50cf7a>] mnt_want_write+0x3f/0xb0 fs/namespace.c:386
 #1:  (&type->i_mutex_dir_key#4/1){+.+.}, at: [<00000000ea56c11e>] inode_lock_nested include/linux/fs.h:748 [inline]
 #1:  (&type->i_mutex_dir_key#4/1){+.+.}, at: [<00000000ea56c11e>] filename_create+0x192/0x520 fs/namei.c:3625
 #2:  (cgroup_mutex){+.+.}, at: [<00000000e4b00f66>] cgroup_kn_lock_live+0x29e/0x580 kernel/cgroup/cgroup.c:1535
 #3:  (rtnl_mutex){+.+.}, at: [<00000000b2a5ae59>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:74
1 lock held by syz-executor4/3341:
 #0:  (ipvs->sync_mutex){+.+.}, at: [<00000000413c3f40>] do_ip_vs_set_ctl+0x277/0x1cc0 net/netfilter/ipvs/ip_vs_ctl.c:2393
2 locks held by syz-executor4/3343:
 #0:  (rtnl_mutex){+.+.}, at: [<00000000b2a5ae59>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:74
 #1:  (ipvs->sync_mutex){+.+.}, at: [<000000006fa23433>] do_ip_vs_set_ctl+0x10f8/0x1cc0 net/netfilter/ipvs/ip_vs_ctl.c:2388
1 lock held by syz-executor5/3339:
 #0:  (&type->i_mutex_dir_key#4){++++}, at: [<000000004d848ed8>] inode_lock_shared include/linux/fs.h:723 [inline]
 #0:  (&type->i_mutex_dir_key#4){++++}, at: [<000000004d848ed8>] lookup_slow+0x18e/0x4d0 fs/namei.c:1612
1 lock held by ipvs-b:6:0/3342:
 #0:  (rtnl_mutex){+.+.}, at: [<00000000b2a5ae59>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:74
1 lock held by syz-executor0/3345:
 #0:  (&type->i_mutex_dir_key#4){++++}, at: [<000000004d848ed8>] inode_lock_shared include/linux/fs.h:723 [inline]
 #0:  (&type->i_mutex_dir_key#4){++++}, at: [<000000004d848ed8>] lookup_slow+0x18e/0x4d0 fs/namei.c:1612

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 868 Comm: khungtaskd Not tainted 4.16.0-rc7+ #368
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:17 [inline]
 dump_stack+0x194/0x24d lib/dump_stack.c:53
 nmi_cpu_backtrace+0x1d2/0x210 lib/nmi_backtrace.c:103
 nmi_trigger_cpumask_backtrace+0x123/0x180 lib/nmi_backtrace.c:62
 arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
 trigger_all_cpu_backtrace include/linux/nmi.h:138 [inline]
 check_hung_task kernel/hung_task.c:132 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:190 [inline]
 watchdog+0x90c/0xd60 kernel/hung_task.c:249
 kthread+0x33c/0x400 kernel/kthread.c:238
 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:406
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 5940 Comm: kworker/0:5 Not tainted 4.16.0-rc7+ #368
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: events rht_deferred_worker
RIP: 0010:__list_add_valid+0x4c/0xd0 lib/list_debug.c:26
RSP: 0018:ffff8801b04dee80 EFLAGS: 00000046
RAX: dffffc0000000000 RBX: ffff8801db22bd90 RCX: ffffffff883d0a90
RDX: 1ffff1003b6457b2 RSI: ffff8801db22bd90 RDI: ffff8801db22bd98
RBP: ffff8801b04dee98 R08: 1ffff1003609bd43 R09: ffff8801db22bd90
R10: ffff8801b04deed8 R11: 0000000000000000 R12: ffffffff883d0a90
R13: ffff8801db230200 R14: ffffffff883d0a88 R15: ffff8801db22bd40
FS:  0000000000000000(0000) GS:ffff8801db200000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00000000018c8000 CR3: 0000000007a22002 CR4: 00000000001606f0
DR0: 0000000020000000 DR1: 0000000020000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000600
Call Trace:
 __list_add include/linux/list.h:60 [inline]
 list_add_tail include/linux/list.h:93 [inline]
 insert_work+0x1ad/0x5f0 kernel/workqueue.c:1302
 __queue_work+0x591/0x1230 kernel/workqueue.c:1463
 queue_work_on+0x16a/0x1c0 kernel/workqueue.c:1488
 queue_work include/linux/workqueue.h:488 [inline]
 schedule_work include/linux/workqueue.h:546 [inline]
 rht_deferred_worker+0x2ba/0x1cd0 lib/rhashtable.c:436
 process_one_work+0xc47/0x1bb0 kernel/workqueue.c:2113
 worker_thread+0x223/0x1990 kernel/workqueue.c:2247
 kthread+0x33c/0x400 kernel/kthread.c:238
 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:406
Code: 08 48 c1 ea 03 80 3c 02 00 75 7c 48 8b 53 08 48 39 f2 75 37 48 89 f2 48 b8 00 00 00 00 00 fc ff df 48 c1 ea 03 80 3c 02 00 75 6e <48> 8b 16 48 39 da 75 29 49 39 f4 74 38 49 39 dc 74 33 48 83 c4