INFO: task kworker/1:0:20 blocked for more than 143 seconds.
      Not tainted 5.16.0-rc7-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:0 state:D stack:25768 pid: 20 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4972 [inline]
 __schedule+0xa9a/0x4900 kernel/sched/core.c:6253
 schedule+0xd2/0x260 kernel/sched/core.c:6326
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6385
 __mutex_lock_common kernel/locking/mutex.c:680 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9b2/0x1660 kernel/workqueue.c:2298
 worker_thread+0x65d/0x1130 kernel/workqueue.c:2445
 kthread+0x405/0x4f0 kernel/kthread.c:327
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/1:2:137 blocked for more than 143 seconds.
      Not tainted 5.16.0-rc7-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:2 state:D stack:25584 pid: 137 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4972 [inline]
 __schedule+0xa9a/0x4900 kernel/sched/core.c:6253
 schedule+0xd2/0x260 kernel/sched/core.c:6326
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6385
 __mutex_lock_common kernel/locking/mutex.c:680 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9b2/0x1660 kernel/workqueue.c:2298
 worker_thread+0x65d/0x1130 kernel/workqueue.c:2445
 kthread+0x405/0x4f0 kernel/kthread.c:327
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/0:5:3679 blocked for more than 144 seconds.
      Not tainted 5.16.0-rc7-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:5 state:D stack:26392 pid: 3679 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4972 [inline]
 __schedule+0xa9a/0x4900 kernel/sched/core.c:6253
 schedule+0xd2/0x260 kernel/sched/core.c:6326
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6385
 __mutex_lock_common kernel/locking/mutex.c:680 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9b2/0x1660 kernel/workqueue.c:2298
 worker_thread+0x65d/0x1130 kernel/workqueue.c:2445
 kthread+0x405/0x4f0 kernel/kthread.c:327
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/0:3:18327 blocked for more than 144 seconds.
      Not tainted 5.16.0-rc7-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:3 state:D stack:27256 pid:18327 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4972 [inline]
 __schedule+0xa9a/0x4900 kernel/sched/core.c:6253
 schedule+0xd2/0x260 kernel/sched/core.c:6326
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6385
 __mutex_lock_common kernel/locking/mutex.c:680 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9b2/0x1660 kernel/workqueue.c:2298
 worker_thread+0x65d/0x1130 kernel/workqueue.c:2445
 kthread+0x405/0x4f0 kernel/kthread.c:327
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/1:3:5759 blocked for more than 145 seconds.
      Not tainted 5.16.0-rc7-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:3 state:D stack:26024 pid: 5759 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4972 [inline]
 __schedule+0xa9a/0x4900 kernel/sched/core.c:6253
 schedule+0xd2/0x260 kernel/sched/core.c:6326
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6385
 __mutex_lock_common kernel/locking/mutex.c:680 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9b2/0x1660 kernel/workqueue.c:2298
 worker_thread+0x65d/0x1130 kernel/workqueue.c:2445
 kthread+0x405/0x4f0 kernel/kthread.c:327
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/1:4:1501 blocked for more than 145 seconds.
      Not tainted 5.16.0-rc7-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:4 state:D stack:26976 pid: 1501 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4972 [inline]
 __schedule+0xa9a/0x4900 kernel/sched/core.c:6253
 schedule+0xd2/0x260 kernel/sched/core.c:6326
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6385
 __mutex_lock_common kernel/locking/mutex.c:680 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9b2/0x1660 kernel/workqueue.c:2298
 worker_thread+0x65d/0x1130 kernel/workqueue.c:2445
 kthread+0x405/0x4f0 kernel/kthread.c:327
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/0:4:13214 blocked for more than 145 seconds.
      Not tainted 5.16.0-rc7-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:4 state:D stack:26416 pid:13214 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4972 [inline]
 __schedule+0xa9a/0x4900 kernel/sched/core.c:6253
 schedule+0xd2/0x260 kernel/sched/core.c:6326
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6385
 __mutex_lock_common kernel/locking/mutex.c:680 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9b2/0x1660 kernel/workqueue.c:2298
 worker_thread+0x65d/0x1130 kernel/workqueue.c:2445
 kthread+0x405/0x4f0 kernel/kthread.c:327
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/1:1:22928 blocked for more than 146 seconds.
      Not tainted 5.16.0-rc7-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:1 state:D stack:29112 pid:22928 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4972 [inline]
 __schedule+0xa9a/0x4900 kernel/sched/core.c:6253
 schedule+0xd2/0x260 kernel/sched/core.c:6326
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6385
 __mutex_lock_common kernel/locking/mutex.c:680 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9b2/0x1660 kernel/workqueue.c:2298
 worker_thread+0x65d/0x1130 kernel/workqueue.c:2445
 kthread+0x405/0x4f0 kernel/kthread.c:327
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

Showing all locks held in the system:
3 locks held by kworker/1:0/20:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x896/0x1660 kernel/workqueue.c:2269
 #1: ffffc90000da7db0 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1660 kernel/workqueue.c:2273
 #2: ffffffff8d76e228 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d76e228 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
1 lock held by khungtaskd/27:
 #0: ffffffff8bb83da0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6458
5 locks held by kworker/u4:3/54:
 #0: ffff888144b93138 ((wq_completion)netns){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888144b93138 ((wq_completion)netns){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888144b93138 ((wq_completion)netns){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
 #0: ffff888144b93138 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
 #0: ffff888144b93138 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
 #0: ffff888144b93138 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x896/0x1660 kernel/workqueue.c:2269
 #1: ffffc90001a2fdb0 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1660 kernel/workqueue.c:2273
 #2: ffffffff8d2fb750 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x9b/0xb00 net/core/net_namespace.c:555
 #3: ffffffff8d76e228 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #3: ffffffff8d76e228 (ovs_mutex){+.+.}-{3:3}, at: ovs_exit_net+0x192/0xbc0 net/openvswitch/datapath.c:2606
 #4: ffffffff8bb8d030 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x44/0x430 kernel/rcu/tree.c:3985
3 locks held by kworker/1:2/137:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x896/0x1660 kernel/workqueue.c:2269
 #1: ffffc900027bfdb0 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1660 kernel/workqueue.c:2273
 #2: ffffffff8d76e228 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d76e228 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
2 locks held by getty/3280:
 #0: ffff88814b542098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x22/0x80 drivers/tty/tty_ldisc.c:252
 #1: ffffc90002b962e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0xcf0/0x1230 drivers/tty/n_tty.c:2113
3 locks held by kworker/0:5/3679:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x896/0x1660 kernel/workqueue.c:2269
 #1: ffffc90002b5fdb0 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1660 kernel/workqueue.c:2273
 #2: ffffffff8d76e228 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d76e228 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
3 locks held by kworker/0:3/18327:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x896/0x1660 kernel/workqueue.c:2269
 #1: ffffc90002bf7db0 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1660 kernel/workqueue.c:2273
 #2: ffffffff8d76e228 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d76e228 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
3 locks held by kworker/1:3/5759:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x896/0x1660 kernel/workqueue.c:2269
 #1: ffffc90002b4fdb0 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1660 kernel/workqueue.c:2273
 #2: ffffffff8d76e228 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d76e228 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
3 locks held by kworker/1:4/1501:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x896/0x1660 kernel/workqueue.c:2269
 #1: ffffc90010537db0 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1660 kernel/workqueue.c:2273
 #2: ffffffff8d76e228 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d76e228 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
2 locks held by kworker/u4:5/12892:
3 locks held by kworker/0:4/13214:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x896/0x1660 kernel/workqueue.c:2269
 #1: ffffc90000cc7db0 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1660 kernel/workqueue.c:2273
 #2: ffffffff8d76e228 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d76e228 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
3 locks held by kworker/1:1/22928:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x896/0x1660 kernel/workqueue.c:2269
 #1: ffffc90002bafdb0 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1660 kernel/workqueue.c:2273
 #2: ffffffff8d76e228 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d76e228 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
2 locks held by syz-executor.2/23004:
 #0: ffffffff8d3a1510 (cb_lock){++++}-{3:3}, at: genl_rcv+0x15/0x40 net/netlink/genetlink.c:802
 #1: ffffffff8d76e228 (ovs_mutex){+.+.}-{3:3}, at: ovs_meter_cmd_get+0x142/0x6b0 net/openvswitch/meter.c:504

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 27 Comm: khungtaskd Not tainted 5.16.0-rc7-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
 nmi_cpu_backtrace.cold+0x47/0x144 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1b3/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:146 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:210 [inline]
 watchdog+0xc1d/0xf50 kernel/hung_task.c:295
 kthread+0x405/0x4f0 kernel/kthread.c:327
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 12892 Comm: kworker/u4:5 Not tainted 5.16.0-rc7-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: bat_events batadv_nc_worker
RIP: 0010:hlock_class kernel/locking/lockdep.c:203 [inline]
RIP: 0010:hlock_class kernel/locking/lockdep.c:192 [inline]
RIP: 0010:check_wait_context kernel/locking/lockdep.c:4725 [inline]
RIP: 0010:__lock_acquire+0x630/0x5470 kernel/locking/lockdep.c:4977
Code: 39 fe 0f 8d 30 01 00 00 48 c7 c0 08 8f 91 8d 4c 89 64 24 58 45 89 f4 49 bf 00 00 00 00 00 fc ff df 48 c1 e8 03 4c 8b 74 24 28 <4c> 01 f8 48 89 44 24 60 eb 66 48 8d 04 5b 48 c1 e0 06 48 05 20 6e
RSP: 0018:ffffc9000a88f858 EFLAGS: 00000806
RAX: 1ffffffff1b231e1 RBX: ffff88807102a744 RCX: 0000000000000001
RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff8ff76ed9
RBP: 0000000000000004 R08: 0000000000000000 R09: dffffc0000000000
R10: fffffbfff1feed40 R11: 0000000000000001 R12: 0000000000000000
R13: ffff888071029d00 R14: ffff88807102a760 R15: dffffc0000000000
FS:  0000000000000000(0000) GS:ffff8880b9c00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000555bf7742140 CR3: 000000007f616000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 lock_acquire kernel/locking/lockdep.c:5637 [inline]
 lock_acquire+0x1ab/0x510 kernel/locking/lockdep.c:5602
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0x39/0x50 kernel/locking/spinlock.c:162
 debug_object_activate+0x12e/0x3e0 lib/debugobjects.c:661
 debug_timer_activate kernel/time/timer.c:729 [inline]
 __mod_timer+0x77d/0xe30 kernel/time/timer.c:1050
 __queue_delayed_work+0x1a7/0x270 kernel/workqueue.c:1678
 queue_delayed_work_on+0x105/0x120 kernel/workqueue.c:1703
 process_one_work+0x9b2/0x1660 kernel/workqueue.c:2298
 worker_thread+0x65d/0x1130 kernel/workqueue.c:2445
 kthread+0x405/0x4f0 kernel/kthread.c:327
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
----------------
Code disassembly (best guess):
   0:	39 fe                	cmp    %edi,%esi
   2:	0f 8d 30 01 00 00    	jge    0x138
   8:	48 c7 c0 08 8f 91 8d 	mov    $0xffffffff8d918f08,%rax
   f:	4c 89 64 24 58       	mov    %r12,0x58(%rsp)
  14:	45 89 f4             	mov    %r14d,%r12d
  17:	49 bf 00 00 00 00 00 	movabs $0xdffffc0000000000,%r15
  1e:	fc ff df
  21:	48 c1 e8 03          	shr    $0x3,%rax
  25:	4c 8b 74 24 28       	mov    0x28(%rsp),%r14
* 2a:	4c 01 f8             	add    %r15,%rax		<-- trapping instruction
  2d:	48 89 44 24 60       	mov    %rax,0x60(%rsp)
  32:	eb 66                	jmp    0x9a
  34:	48 8d 04 5b          	lea    (%rbx,%rbx,2),%rax
  38:	48 c1 e0 06          	shl    $0x6,%rax
  3c:	48                   	rex.W
  3d:	05                   	.byte 0x5
  3e:	20                   	.byte 0x20
  3f:	6e                   	outsb  %ds:(%rsi),(%dx)