INFO: task kworker/0:5:9935 blocked for more than 143 seconds.
      Not tainted 5.11.0-rc4-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:5 state:D stack:26584 pid: 9935 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4313 [inline]
 __schedule+0x90c/0x21a0 kernel/sched/core.c:5064
 schedule+0xcf/0x270 kernel/sched/core.c:5143
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5202
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x81a/0x1110 kernel/locking/mutex.c:1103
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
 process_one_work+0x98d/0x15f0 kernel/workqueue.c:2275
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2421
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task kworker/0:7:16775 blocked for more than 143 seconds.
      Not tainted 5.11.0-rc4-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:7 state:D stack:26048 pid:16775 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4313 [inline]
 __schedule+0x90c/0x21a0 kernel/sched/core.c:5064
 schedule+0xcf/0x270 kernel/sched/core.c:5143
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5202
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x81a/0x1110 kernel/locking/mutex.c:1103
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
 process_one_work+0x98d/0x15f0 kernel/workqueue.c:2275
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2421
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task kworker/0:3:15144 blocked for more than 144 seconds.
      Not tainted 5.11.0-rc4-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:3 state:D stack:27056 pid:15144 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4313 [inline]
 __schedule+0x90c/0x21a0 kernel/sched/core.c:5064
 schedule+0xcf/0x270 kernel/sched/core.c:5143
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5202
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x81a/0x1110 kernel/locking/mutex.c:1103
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
 process_one_work+0x98d/0x15f0 kernel/workqueue.c:2275
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2421
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task kworker/0:4:19169 blocked for more than 144 seconds.
      Not tainted 5.11.0-rc4-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:4 state:D stack:27360 pid:19169 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4313 [inline]
 __schedule+0x90c/0x21a0 kernel/sched/core.c:5064
 schedule+0xcf/0x270 kernel/sched/core.c:5143
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5202
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x81a/0x1110 kernel/locking/mutex.c:1103
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
 process_one_work+0x98d/0x15f0 kernel/workqueue.c:2275
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2421
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task kworker/0:8:20384 blocked for more than 145 seconds.
      Not tainted 5.11.0-rc4-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:8 state:D stack:26584 pid:20384 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4313 [inline]
 __schedule+0x90c/0x21a0 kernel/sched/core.c:5064
 schedule+0xcf/0x270 kernel/sched/core.c:5143
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5202
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x81a/0x1110 kernel/locking/mutex.c:1103
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
 process_one_work+0x98d/0x15f0 kernel/workqueue.c:2275
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2421
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task kworker/1:6:12180 blocked for more than 146 seconds.
      Not tainted 5.11.0-rc4-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:6 state:D stack:27664 pid:12180 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4313 [inline]
 __schedule+0x90c/0x21a0 kernel/sched/core.c:5064
 schedule+0xcf/0x270 kernel/sched/core.c:5143
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5202
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x81a/0x1110 kernel/locking/mutex.c:1103
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
 process_one_work+0x98d/0x15f0 kernel/workqueue.c:2275
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2421
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task kworker/0:2:5929 blocked for more than 146 seconds.
      Not tainted 5.11.0-rc4-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:2 state:D stack:28472 pid: 5929 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4313 [inline]
 __schedule+0x90c/0x21a0 kernel/sched/core.c:5064
 schedule+0xcf/0x270 kernel/sched/core.c:5143
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5202
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x81a/0x1110 kernel/locking/mutex.c:1103
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
 process_one_work+0x98d/0x15f0 kernel/workqueue.c:2275
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2421
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task kworker/0:9:5981 blocked for more than 146 seconds.
      Not tainted 5.11.0-rc4-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:9 state:D stack:27480 pid: 5981 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4313 [inline]
 __schedule+0x90c/0x21a0 kernel/sched/core.c:5064
 schedule+0xcf/0x270 kernel/sched/core.c:5143
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5202
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x81a/0x1110 kernel/locking/mutex.c:1103
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
 process_one_work+0x98d/0x15f0 kernel/workqueue.c:2275
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2421
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296

Showing all locks held in the system:
1 lock held by khungtaskd/1660:
 #0: ffffffff8b373960 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6254
1 lock held by in:imklog/8332:
 #0: ffff8880219f7770 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xe9/0x100 fs/file.c:947
1 lock held by syz-executor.2/8468:
 #0: ffff888020bc0308 (&xt[i].mutex){+.+.}-{3:3}, at: xt_find_table_lock+0x41/0x540 net/netfilter/x_tables.c:1206
1 lock held by syz-executor.3/8470:
 #0: ffff888020bc0d88 (&xt[i].mutex){+.+.}-{3:3}, at: xt_find_table_lock+0x41/0x540 net/netfilter/x_tables.c:1206
3 locks held by kworker/0:5/9935:
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x871/0x15f0 kernel/workqueue.c:2246
 #1: ffffc9000162fda8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8a5/0x15f0 kernel/workqueue.c:2250
 #2: ffffffff8cea2de8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8cea2de8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
3 locks held by kworker/0:7/16775:
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x871/0x15f0 kernel/workqueue.c:2246
 #1: ffffc90001d8fda8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8a5/0x15f0 kernel/workqueue.c:2250
 #2: ffffffff8cea2de8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8cea2de8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
1 lock held by syz-executor.5/27226:
 #0: ffff888020bc0308 (&xt[i].mutex){+.+.}-{3:3}, at: xt_find_table_lock+0x41/0x540 net/netfilter/x_tables.c:1206
1 lock held by syz-executor.0/906:
 #0: ffff888020bc0308 (&xt[i].mutex){+.+.}-{3:3}, at: xt_find_table_lock+0x41/0x540 net/netfilter/x_tables.c:1206
3 locks held by kworker/0:3/15144:
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x871/0x15f0 kernel/workqueue.c:2246
 #1: ffffc900032f7da8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8a5/0x15f0 kernel/workqueue.c:2250
 #2: ffffffff8cea2de8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8cea2de8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
3 locks held by kworker/0:4/19169:
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x871/0x15f0 kernel/workqueue.c:2246
 #1: ffffc9000236fda8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8a5/0x15f0 kernel/workqueue.c:2250
 #2: ffffffff8cea2de8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8cea2de8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
3 locks held by kworker/0:8/20384:
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x871/0x15f0 kernel/workqueue.c:2246
 #1: ffffc9000a947da8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8a5/0x15f0 kernel/workqueue.c:2250
 #2: ffffffff8cea2de8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8cea2de8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
8 locks held by kworker/u4:0/29983:
6 locks held by kworker/u4:3/1963:
 #0: ffff8881407a3138 ((wq_completion)netns){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff8881407a3138 ((wq_completion)netns){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff8881407a3138 ((wq_completion)netns){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff8881407a3138 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff8881407a3138 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff8881407a3138 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x871/0x15f0 kernel/workqueue.c:2246
 #1: ffffc9000286fda8 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x8a5/0x15f0 kernel/workqueue.c:2250
 #2: ffffffff8ca46fd0 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x9b/0xb10 net/core/net_namespace.c:566
 #3: ffffffff8cea2de8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #3: ffffffff8cea2de8 (ovs_mutex){+.+.}-{3:3}, at: ovs_exit_net+0x1de/0xba0 net/openvswitch/datapath.c:2530
 #4: ffffffff8ca5a4c8 (rtnl_mutex){+.+.}-{3:3}, at: internal_dev_destroy+0x6f/0x150 net/openvswitch/vport-internal_dev.c:183
 #5: ffffffff8b37c228 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:322 [inline]
 #5: ffffffff8b37c228 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x27e/0x610 kernel/rcu/tree_exp.h:836
3 locks held by kworker/1:6/12180:
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x871/0x15f0 kernel/workqueue.c:2246
 #1: ffffc9000a29fda8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8a5/0x15f0 kernel/workqueue.c:2250
 #2: ffffffff8cea2de8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8cea2de8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
4 locks held by kworker/u4:2/19844:
2 locks held by syz-executor.1/4843:
 #0: ffff888020bc0308 (&xt[i].mutex){+.+.}-{3:3}, at: xt_find_table_lock+0x41/0x540 net/netfilter/x_tables.c:1206
 #1: ffffffff8b37c228 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:322 [inline]
 #1: ffffffff8b37c228 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x27e/0x610 kernel/rcu/tree_exp.h:836
2 locks held by kworker/1:1/5551:
 #0: ffff88801007c538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff88801007c538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff88801007c538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff88801007c538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff88801007c538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff88801007c538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x871/0x15f0 kernel/workqueue.c:2246
 #1: ffffc900022afda8 ((work_completion)(&rew.rew_work)){+.+.}-{0:0}, at: process_one_work+0x8a5/0x15f0 kernel/workqueue.c:2250
3 locks held by kworker/0:2/5929:
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x871/0x15f0 kernel/workqueue.c:2246
 #1: ffffc90001dafda8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8a5/0x15f0 kernel/workqueue.c:2250
 #2: ffffffff8cea2de8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8cea2de8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
1 lock held by syz-executor.4/5977:
 #0: ffff888020bc0308 (&xt[i].mutex){+.+.}-{3:3}, at: xt_find_table_lock+0x41/0x540 net/netfilter/x_tables.c:1206
3 locks held by kworker/0:9/5981:
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010062d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x871/0x15f0 kernel/workqueue.c:2246
 #1: ffffc90001fcfda8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8a5/0x15f0 kernel/workqueue.c:2250
 #2: ffffffff8cea2de8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8cea2de8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 1660 Comm: khungtaskd Not tainted 5.11.0-rc4-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:79 [inline]
 dump_stack+0x107/0x163 lib/dump_stack.c:120
 nmi_cpu_backtrace.cold+0x44/0xd7 lib/nmi_backtrace.c:105
 nmi_trigger_cpumask_backtrace+0x1b3/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:146 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:209 [inline]
 watchdog+0xd43/0xfa0 kernel/hung_task.c:294
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 10807 Comm: syz-executor.1 Not tainted 5.11.0-rc4-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:arch_atomic_read arch/x86/include/asm/atomic.h:29 [inline]
RIP: 0010:rcu_dynticks_curr_cpu_in_eqs kernel/rcu/tree.c:321 [inline]
RIP: 0010:rcu_is_watching+0x70/0xc0 kernel/rcu/tree.c:1113
Code: 8a 48 b8 00 00 00 00 00 fc ff df 48 8d bb 28 01 00 00 48 89 fa 48 c1 ea 03 0f b6 14 02 48 89 f8 83 e0 07 83 c0 03 38 d0 7c 04 <84> d2 75 1f 8b 83 28 01 00 00 d1 e8 83 e0 01 65 ff 0d 9a a0 a2 7e
RSP: 0018:ffffc900017ef5b8 EFLAGS: 00000006
RAX: 0000000000000003 RBX: ffff8880b9e35a80 RCX: ffffffff81acd7cb
RDX: 0000000000000000 RSI: 0000000000000002 RDI: ffff8880b9e35ba8
RBP: 0000000000000000 R08: 0000000000000000 R09: ffffffff8d03b68f
R10: fffffbfff1a076d1 R11: 0000000000000000 R12: 0000000000000001
R13: ffffc900017ef6d0 R14: dffffc0000000000 R15: 000000000002fd64
FS: 0000000000000000(0000) GS:ffff8880b9e00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000000056c000 CR3: 000000006087a000 CR4: 00000000001506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000600
Call Trace:
 rcu_read_lock_held_common kernel/rcu/update.c:106 [inline]
 rcu_read_lock_sched_held+0x1c/0x70 kernel/rcu/update.c:121
 trace_mm_page_free_batched include/trace/events/kmem.h:174 [inline]
 free_unref_page_list+0x552/0x750 mm/page_alloc.c:3276
 release_pages+0x84c/0x1d20 mm/swap.c:934
 tlb_batch_pages_flush mm/mmu_gather.c:49 [inline]
 tlb_flush_mmu_free mm/mmu_gather.c:242 [inline]
 tlb_flush_mmu+0xe9/0x6b0 mm/mmu_gather.c:249
 zap_pte_range mm/memory.c:1330 [inline]
 zap_pmd_range mm/memory.c:1368 [inline]
 zap_pud_range mm/memory.c:1397 [inline]
 zap_p4d_range mm/memory.c:1418 [inline]
 unmap_page_range+0x1a75/0x2640 mm/memory.c:1439
 unmap_single_vma+0x198/0x300 mm/memory.c:1484
 unmap_vmas+0x168/0x2e0 mm/memory.c:1516
 exit_mmap+0x2b1/0x5a0 mm/mmap.c:3220
 __mmput+0x122/0x470 kernel/fork.c:1083
 mmput+0x53/0x60 kernel/fork.c:1104
 exit_mm kernel/exit.c:501 [inline]
 do_exit+0xb6a/0x2ae0 kernel/exit.c:812
 do_group_exit+0x125/0x310 kernel/exit.c:922
 get_signal+0x427/0x20f0 kernel/signal.c:2773
 arch_do_signal_or_restart+0x2a8/0x1eb0 arch/x86/kernel/signal.c:811
 handle_signal_work kernel/entry/common.c:147 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:171 [inline]
 exit_to_user_mode_prepare+0x148/0x250 kernel/entry/common.c:201
 __syscall_exit_to_user_mode_work kernel/entry/common.c:291 [inline]
 syscall_exit_to_user_mode+0x19/0x50 kernel/entry/common.c:302
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x465b09
Code: Unable to access opcode bytes at RIP 0x465adf.
RSP: 002b:00007f47996ee218 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
RAX: fffffffffffffe00 RBX: 000000000056c010 RCX: 0000000000465b09
RDX: 0000000000000000 RSI: 0000000000000080 RDI: 000000000056c010
RBP: 000000000056c008 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 000000000056c014
R13: 00007fff3216471f R14: 00007f47996ee300 R15: 0000000000022000