INFO: task kworker/u4:0:7 blocked for more than 143 seconds.
      Not tainted 5.1.0-rc6+ #88
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kworker/u4:0    D25608     7      2 0x80000000
Workqueue: events_unbound fsnotify_connector_destroy_workfn
Call Trace:
 context_switch kernel/sched/core.c:2877 [inline]
 __schedule+0x813/0x1cc0 kernel/sched/core.c:3518
 schedule+0x92/0x180 kernel/sched/core.c:3562
 schedule_timeout+0x8ca/0xfd0 kernel/time/timer.c:1779
 do_wait_for_common kernel/sched/completion.c:83 [inline]
 __wait_for_common kernel/sched/completion.c:104 [inline]
 wait_for_common kernel/sched/completion.c:115 [inline]
 wait_for_completion+0x29c/0x440 kernel/sched/completion.c:136
 __synchronize_srcu+0x197/0x250 kernel/rcu/srcutree.c:925
 synchronize_srcu+0x2dc/0x3e8 kernel/rcu/srcutree.c:1003
 fsnotify_connector_destroy_workfn+0x4e/0xa0 fs/notify/mark.c:177
 process_one_work+0x98e/0x1790 kernel/workqueue.c:2269
 worker_thread+0x98/0xe40 kernel/workqueue.c:2415
 kthread+0x357/0x430 kernel/kthread.c:253
 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:352
INFO: task kworker/u4:4:2723 blocked for more than 143 seconds.
      Not tainted 5.1.0-rc6+ #88
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
kworker/u4:4    D25560  2723      2 0x80000000
Workqueue: events_unbound fsnotify_mark_destroy_workfn
Call Trace:
 context_switch kernel/sched/core.c:2877 [inline]
 __schedule+0x813/0x1cc0 kernel/sched/core.c:3518
 schedule+0x92/0x180 kernel/sched/core.c:3562
 schedule_timeout+0x8ca/0xfd0 kernel/time/timer.c:1779
 do_wait_for_common kernel/sched/completion.c:83 [inline]
 __wait_for_common kernel/sched/completion.c:104 [inline]
 wait_for_common kernel/sched/completion.c:115 [inline]
 wait_for_completion+0x29c/0x440 kernel/sched/completion.c:136
 __synchronize_srcu+0x197/0x250 kernel/rcu/srcutree.c:925
 synchronize_srcu_expedited kernel/rcu/srcutree.c:950 [inline]
 synchronize_srcu+0x239/0x3e8 kernel/rcu/srcutree.c:1001
 fsnotify_mark_destroy_workfn+0x110/0x3b0 fs/notify/mark.c:827
 process_one_work+0x98e/0x1790 kernel/workqueue.c:2269
 worker_thread+0x98/0xe40 kernel/workqueue.c:2415
 kthread+0x357/0x430 kernel/kthread.c:253
 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:352

Showing all locks held in the system:
2 locks held by kworker/u4:0/7:
 #0: 00000000373ededf ((wq_completion)events_unbound){+.+.}, at: __write_once_size include/linux/compiler.h:220 [inline]
 #0: 00000000373ededf ((wq_completion)events_unbound){+.+.}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: 00000000373ededf ((wq_completion)events_unbound){+.+.}, at: atomic64_set include/asm-generic/atomic-instrumented.h:855 [inline]
 #0: 00000000373ededf ((wq_completion)events_unbound){+.+.}, at: atomic_long_set include/asm-generic/atomic-long.h:40 [inline]
 #0: 00000000373ededf ((wq_completion)events_unbound){+.+.}, at: set_work_data kernel/workqueue.c:619 [inline]
 #0: 00000000373ededf ((wq_completion)events_unbound){+.+.}, at: set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0: 00000000373ededf ((wq_completion)events_unbound){+.+.}, at: process_one_work+0x87e/0x1790 kernel/workqueue.c:2240
 #1: 00000000bda4fab0 (connector_reaper_work){+.+.}, at: process_one_work+0x8b4/0x1790 kernel/workqueue.c:2244
3 locks held by kworker/1:0/17:
 #0: 00000000c44d33a9 ((wq_completion)events){+.+.}, at: __write_once_size include/linux/compiler.h:220 [inline]
 #0: 00000000c44d33a9 ((wq_completion)events){+.+.}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: 00000000c44d33a9 ((wq_completion)events){+.+.}, at: atomic64_set include/asm-generic/atomic-instrumented.h:855 [inline]
 #0: 00000000c44d33a9 ((wq_completion)events){+.+.}, at: atomic_long_set include/asm-generic/atomic-long.h:40 [inline]
 #0: 00000000c44d33a9 ((wq_completion)events){+.+.}, at: set_work_data kernel/workqueue.c:619 [inline]
 #0: 00000000c44d33a9 ((wq_completion)events){+.+.}, at: set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0: 00000000c44d33a9 ((wq_completion)events){+.+.}, at: process_one_work+0x87e/0x1790 kernel/workqueue.c:2240
 #1: 00000000e1ee0cf5 (deferred_process_work){+.+.}, at: process_one_work+0x8b4/0x1790 kernel/workqueue.c:2244
 #2: 0000000072b9e318 (rtnl_mutex){+.+.}, at: rtnl_lock+0x17/0x20 net/core/rtnetlink.c:76
1 lock held by khungtaskd/1041:
 #0: 00000000778fc731 (rcu_read_lock){....}, at: debug_show_all_locks+0x5f/0x27e kernel/locking/lockdep.c:5057
2 locks held by kworker/u4:4/2723:
 #0: 00000000373ededf ((wq_completion)events_unbound){+.+.}, at: __write_once_size include/linux/compiler.h:220 [inline]
 #0: 00000000373ededf ((wq_completion)events_unbound){+.+.}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: 00000000373ededf ((wq_completion)events_unbound){+.+.}, at: atomic64_set include/asm-generic/atomic-instrumented.h:855 [inline]
 #0: 00000000373ededf ((wq_completion)events_unbound){+.+.}, at: atomic_long_set include/asm-generic/atomic-long.h:40 [inline]
 #0: 00000000373ededf ((wq_completion)events_unbound){+.+.}, at: set_work_data kernel/workqueue.c:619 [inline]
 #0: 00000000373ededf ((wq_completion)events_unbound){+.+.}, at: set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0: 00000000373ededf ((wq_completion)events_unbound){+.+.}, at: process_one_work+0x87e/0x1790 kernel/workqueue.c:2240
 #1: 00000000f1cad8ac ((reaper_work).work){+.+.}, at: process_one_work+0x8b4/0x1790 kernel/workqueue.c:2244
3 locks held by rs:main Q:Reg/7978:
 #0: 000000004a9a3698 (&f->f_pos_lock){+.+.}, at: __fdget_pos+0xee/0x110 fs/file.c:801
 #1: 000000008df9371f (sb_writers#4){.+.+}, at: file_start_write include/linux/fs.h:2825 [inline]
 #1: 000000008df9371f (sb_writers#4){.+.+}, at: vfs_write+0x429/0x580 fs/read_write.c:548
 #2: 0000000027c9d587 (&rq->lock){-.-.}, at: rq_lock kernel/sched/sched.h:1168 [inline]
 #2: 0000000027c9d587 (&rq->lock){-.-.}, at: __schedule+0x1f8/0x1cc0 kernel/sched/core.c:3456
1 lock held by rsyslogd/7981:
 #0: 00000000d29364f2 (&f->f_pos_lock){+.+.}, at: __fdget_pos+0xee/0x110 fs/file.c:801
2 locks held by getty/8102:
 #0: 0000000007cf9abe (&tty->ldisc_sem){++++}, at: ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:341
 #1: 00000000feabfe37 (&ldata->atomic_read_lock){+.+.}, at: n_tty_read+0x232/0x1b70 drivers/tty/n_tty.c:2156
2 locks held by getty/8103:
 #0: 000000009fb95b9f (&tty->ldisc_sem){++++}, at: ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:341
 #1: 00000000f8c7852b (&ldata->atomic_read_lock){+.+.}, at: n_tty_read+0x232/0x1b70 drivers/tty/n_tty.c:2156
2 locks held by getty/8104:
 #0: 000000007e6cc067 (&tty->ldisc_sem){++++}, at: ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:341
 #1: 00000000c5388263 (&ldata->atomic_read_lock){+.+.}, at: n_tty_read+0x232/0x1b70 drivers/tty/n_tty.c:2156
2 locks held by getty/8105:
 #0: 000000004eb5d634 (&tty->ldisc_sem){++++}, at: ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:341
 #1: 000000002608fbdd (&ldata->atomic_read_lock){+.+.}, at: n_tty_read+0x232/0x1b70 drivers/tty/n_tty.c:2156
2 locks held by getty/8106:
 #0: 00000000fc93e639 (&tty->ldisc_sem){++++}, at: ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:341
 #1: 000000008f70b17d (&ldata->atomic_read_lock){+.+.}, at: n_tty_read+0x232/0x1b70 drivers/tty/n_tty.c:2156
2 locks held by getty/8107:
 #0: 00000000d3d70431 (&tty->ldisc_sem){++++}, at: ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:341
 #1: 000000002a178bec (&ldata->atomic_read_lock){+.+.}, at: n_tty_read+0x232/0x1b70 drivers/tty/n_tty.c:2156
2 locks held by getty/8108:
 #0: 000000006781ab67 (&tty->ldisc_sem){++++}, at: ldsem_down_read+0x33/0x40 drivers/tty/tty_ldsem.c:341
 #1: 00000000021b63e7 (&ldata->atomic_read_lock){+.+.}, at: n_tty_read+0x232/0x1b70 drivers/tty/n_tty.c:2156
3 locks held by kworker/0:4/8156:
1 lock held by syz-executor.1/10583:
 #0: 000000003fb85c3c (event_mutex){+.+.}, at: perf_trace_destroy+0x28/0x100 kernel/trace/trace_event_perf.c:236
1 lock held by syz-executor.4/10717:
 #0: 000000003fb85c3c (event_mutex){+.+.}, at: perf_trace_destroy+0x28/0x100 kernel/trace/trace_event_perf.c:236
5 locks held by kworker/u4:3/23084:
 #0: 000000000d1f0fba ((wq_completion)netns){+.+.}, at: __write_once_size include/linux/compiler.h:220 [inline]
 #0: 000000000d1f0fba ((wq_completion)netns){+.+.}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: 000000000d1f0fba ((wq_completion)netns){+.+.}, at: atomic64_set include/asm-generic/atomic-instrumented.h:855 [inline]
 #0: 000000000d1f0fba ((wq_completion)netns){+.+.}, at: atomic_long_set include/asm-generic/atomic-long.h:40 [inline]
 #0: 000000000d1f0fba ((wq_completion)netns){+.+.}, at: set_work_data kernel/workqueue.c:619 [inline]
 #0: 000000000d1f0fba ((wq_completion)netns){+.+.}, at: set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0: 000000000d1f0fba ((wq_completion)netns){+.+.}, at: process_one_work+0x87e/0x1790 kernel/workqueue.c:2240
 #1: 000000004cc69f08 (net_cleanup_work){+.+.}, at: process_one_work+0x8b4/0x1790 kernel/workqueue.c:2244
 #2: 00000000e2f2d0ea (pernet_ops_rwsem){++++}, at: cleanup_net+0xae/0x960 net/core/net_namespace.c:519
 #3: 0000000072b9e318 (rtnl_mutex){+.+.}, at: rtnl_lock+0x17/0x20 net/core/rtnetlink.c:76
 #4: 00000000aa1acd0e (rcu_state.exp_mutex){+.+.}, at: exp_funnel_lock kernel/rcu/tree_exp.h:285 [inline]
 #4: 00000000aa1acd0e (rcu_state.exp_mutex){+.+.}, at: synchronize_rcu_expedited+0x4ab/0x5b0 kernel/rcu/tree_exp.h:758
1 lock held by syz-executor.4/12266:
 #0: 000000009e44ada6 (&rtc->ops_lock){+.+.}, at: rtc_dev_ioctl+0xf3/0x9b0 drivers/rtc/dev.c:214
1 lock held by syz-executor.1/13027:
 #0: 0000000072b9e318 (rtnl_mutex){+.+.}, at: rtnl_lock+0x17/0x20 net/core/rtnetlink.c:76
1 lock held by syz-executor.4/13365:
 #0: 0000000072b9e318 (rtnl_mutex){+.+.}, at: rtnl_lock net/core/rtnetlink.c:76 [inline]
 #0: 0000000072b9e318 (rtnl_mutex){+.+.}, at: rtnetlink_rcv_msg+0x40a/0xb00 net/core/rtnetlink.c:5189

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 1041 Comm: khungtaskd Not tainted 5.1.0-rc6+ #88
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x172/0x1f0 lib/dump_stack.c:113
 nmi_cpu_backtrace.cold+0x63/0xa4 lib/nmi_backtrace.c:101
 nmi_trigger_cpumask_backtrace+0x1be/0x236 lib/nmi_backtrace.c:62
 arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
 trigger_all_cpu_backtrace include/linux/nmi.h:146 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:204 [inline]
 watchdog+0x9b7/0xec0 kernel/hung_task.c:288
 kthread+0x357/0x430 kernel/kthread.c:253
 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:352
Sending NMI from CPU 1 to CPUs 0: