INFO: task kworker/1:4:9659 blocked for more than 143 seconds.
      Not tainted 5.12.0-rc6-next-20210407-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:4 state:D stack:24944 pid: 9659 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4329 [inline]
 __schedule+0x911/0x2160 kernel/sched/core.c:5079
 schedule+0xcf/0x270 kernel/sched/core.c:5158
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5217
 __mutex_lock_common kernel/locking/mutex.c:1026 [inline]
 __mutex_lock+0x81f/0x1120 kernel/locking/mutex.c:1096
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
 process_one_work+0x98d/0x1600 kernel/workqueue.c:2275
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2421
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
INFO: task kworker/1:7:9745 blocked for more than 143 seconds.
      Not tainted 5.12.0-rc6-next-20210407-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:7 state:D stack:25728 pid: 9745 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4329 [inline]
 __schedule+0x911/0x2160 kernel/sched/core.c:5079
 schedule+0xcf/0x270 kernel/sched/core.c:5158
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5217
 __mutex_lock_common kernel/locking/mutex.c:1026 [inline]
 __mutex_lock+0x81f/0x1120 kernel/locking/mutex.c:1096
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
 process_one_work+0x98d/0x1600 kernel/workqueue.c:2275
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2421
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
INFO: task kworker/0:6:9746 blocked for more than 143 seconds.
      Not tainted 5.12.0-rc6-next-20210407-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:6 state:D stack:25936 pid: 9746 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4329 [inline]
 __schedule+0x911/0x2160 kernel/sched/core.c:5079
 schedule+0xcf/0x270 kernel/sched/core.c:5158
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5217
 __mutex_lock_common kernel/locking/mutex.c:1026 [inline]
 __mutex_lock+0x81f/0x1120 kernel/locking/mutex.c:1096
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
 process_one_work+0x98d/0x1600 kernel/workqueue.c:2275
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2421
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
INFO: task kworker/0:9:26283 blocked for more than 143 seconds.
      Not tainted 5.12.0-rc6-next-20210407-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:9 state:D stack:25936 pid:26283 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4329 [inline]
 __schedule+0x911/0x2160 kernel/sched/core.c:5079
 schedule+0xcf/0x270 kernel/sched/core.c:5158
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5217
 __mutex_lock_common kernel/locking/mutex.c:1026 [inline]
 __mutex_lock+0x81f/0x1120 kernel/locking/mutex.c:1096
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
 process_one_work+0x98d/0x1600 kernel/workqueue.c:2275
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2421
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
INFO: task kworker/1:0:2658 blocked for more than 143 seconds.
      Not tainted 5.12.0-rc6-next-20210407-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:0 state:D stack:26096 pid: 2658 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4329 [inline]
 __schedule+0x911/0x2160 kernel/sched/core.c:5079
 schedule+0xcf/0x270 kernel/sched/core.c:5158
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5217
 __mutex_lock_common kernel/locking/mutex.c:1026 [inline]
 __mutex_lock+0x81f/0x1120 kernel/locking/mutex.c:1096
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
 process_one_work+0x98d/0x1600 kernel/workqueue.c:2275
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2421
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
INFO: task kworker/1:1:7653 blocked for more than 144 seconds.
      Not tainted 5.12.0-rc6-next-20210407-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:1 state:D stack:27664 pid: 7653 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4329 [inline]
 __schedule+0x911/0x2160 kernel/sched/core.c:5079
 schedule+0xcf/0x270 kernel/sched/core.c:5158
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5217
 __mutex_lock_common kernel/locking/mutex.c:1026 [inline]
 __mutex_lock+0x81f/0x1120 kernel/locking/mutex.c:1096
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
 process_one_work+0x98d/0x1600 kernel/workqueue.c:2275
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2421
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
INFO: task kworker/0:2:20678 blocked for more than 144 seconds.
      Not tainted 5.12.0-rc6-next-20210407-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:2 state:D stack:27664 pid:20678 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4329 [inline]
 __schedule+0x911/0x2160 kernel/sched/core.c:5079
 schedule+0xcf/0x270 kernel/sched/core.c:5158
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5217
 __mutex_lock_common kernel/locking/mutex.c:1026 [inline]
 __mutex_lock+0x81f/0x1120 kernel/locking/mutex.c:1096
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
 process_one_work+0x98d/0x1600 kernel/workqueue.c:2275
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2421
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
INFO: task kworker/0:3:21237 blocked for more than 144 seconds.
      Not tainted 5.12.0-rc6-next-20210407-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:3 state:D stack:27368 pid:21237 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4329 [inline]
 __schedule+0x911/0x2160 kernel/sched/core.c:5079
 schedule+0xcf/0x270 kernel/sched/core.c:5158
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5217
 __mutex_lock_common kernel/locking/mutex.c:1026 [inline]
 __mutex_lock+0x81f/0x1120 kernel/locking/mutex.c:1096
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
 process_one_work+0x98d/0x1600 kernel/workqueue.c:2275
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2421
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294

Showing all locks held in the system:
5 locks held by kworker/u4:3/146:
 #0: ffff88814017b138 ((wq_completion)netns){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff88814017b138 ((wq_completion)netns){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff88814017b138 ((wq_completion)netns){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff88814017b138 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff88814017b138 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff88814017b138 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x871/0x1600 kernel/workqueue.c:2246
 #1: ffffc9000109fda8 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x8a5/0x1600 kernel/workqueue.c:2250
 #2: ffffffff8d67e410 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x9b/0xb10 net/core/net_namespace.c:557
 #3: ffffffff8dadfea8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #3: ffffffff8dadfea8 (ovs_mutex){+.+.}-{3:3}, at: ovs_exit_net+0x1de/0xba0 net/openvswitch/datapath.c:2530
 #4: ffffffff8bf7e2f0 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x44/0x430 kernel/rcu/tree.c:3997
1 lock held by khungtaskd/1635:
 #0: ffffffff8bf75220 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6333
2 locks held by in:imklog/8130:
 #0: ffff888028b7dc70 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xe9/0x100 fs/file.c:990
 #1: ffffffff8bf641b8 (syslog_lock){....}-{2:2}, at: is_bpf_text_address+0x0/0x160 kernel/bpf/core.c:691
3 locks held by kworker/1:4/9659:
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x871/0x1600 kernel/workqueue.c:2246
 #1: ffffc90009ca7da8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8a5/0x1600 kernel/workqueue.c:2250
 #2: ffffffff8dadfea8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8dadfea8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
3 locks held by kworker/1:7/9745:
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x871/0x1600 kernel/workqueue.c:2246
 #1: ffffc9000c2b7da8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8a5/0x1600 kernel/workqueue.c:2250
 #2: ffffffff8dadfea8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8dadfea8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
3 locks held by kworker/0:6/9746:
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x871/0x1600 kernel/workqueue.c:2246
 #1: ffffc9000c247da8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8a5/0x1600 kernel/workqueue.c:2250
 #2: ffffffff8dadfea8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8dadfea8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
3 locks held by kworker/0:9/26283:
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x871/0x1600 kernel/workqueue.c:2246
 #1: ffffc9000e4c7da8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8a5/0x1600 kernel/workqueue.c:2250
 #2: ffffffff8dadfea8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8dadfea8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
3 locks held by kworker/1:0/2658:
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x871/0x1600 kernel/workqueue.c:2246
 #1: ffffc90009f07da8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8a5/0x1600 kernel/workqueue.c:2250
 #2: ffffffff8dadfea8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8dadfea8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
3 locks held by kworker/1:1/7653:
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x871/0x1600 kernel/workqueue.c:2246
 #1: ffffc90008d47da8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8a5/0x1600 kernel/workqueue.c:2250
 #2: ffffffff8dadfea8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8dadfea8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
3 locks held by kworker/0:2/20678:
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x871/0x1600 kernel/workqueue.c:2246
 #1: ffffc9000fe77da8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8a5/0x1600 kernel/workqueue.c:2250
 #2: ffffffff8dadfea8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8dadfea8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
3 locks held by kworker/0:3/21237:
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x871/0x1600 kernel/workqueue.c:2246
 #1: ffffc90002b57da8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8a5/0x1600 kernel/workqueue.c:2250
 #2: ffffffff8dadfea8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8dadfea8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2382
2 locks held by kworker/0:4/6951:
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010864d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x871/0x1600 kernel/workqueue.c:2246
 #1: ffffc9000292fda8 ((kfence_timer).work){+.+.}-{0:0}, at: process_one_work+0x8a5/0x1600 kernel/workqueue.c:2250
2 locks held by syz-executor.0/7899:
 #0: ffffffff8d722a50 (cb_lock){++++}-{3:3}, at: genl_rcv+0x15/0x40 net/netlink/genetlink.c:810
 #1: ffffffff8dadfea8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #1: ffffffff8dadfea8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_cmd_new+0x4b3/0xeb0 net/openvswitch/datapath.c:1707

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 1635 Comm: khungtaskd Not tainted 5.12.0-rc6-next-20210407-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:79 [inline]
 dump_stack+0x141/0x1d7 lib/dump_stack.c:120
 nmi_cpu_backtrace.cold+0x44/0xd7 lib/nmi_backtrace.c:105
 nmi_trigger_cpumask_backtrace+0x1b3/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:146 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:253 [inline]
 watchdog+0xd8e/0xf40 kernel/hung_task.c:338
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 89 Comm: kworker/u4:2 Not tainted 5.12.0-rc6-next-20210407-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: phy14 ieee80211_iface_work
RIP: 0010:in_lock_functions+0xb/0x20 kernel/locking/spinlock.c:397
Code: 80 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 c3 cc cc cc cc cc cc cc cc cc cc cc cc cc cc cc 31 c0 48 81 ff b8 3b 11 89 72 0c <31> c0 48 81 ff 01 46 11 89 0f 92 c0 c3 cc cc cc cc cc cc cc cc 41
RSP: 0018:ffffc9000114f260 EFLAGS: 00000002
RAX: 0000000000000000 RBX: ffffffff89113bce RCX: 1ffffffff1fd14e4
RDX: 0000000000000000 RSI: ffff8880108403c0 RDI: ffffffff89113bce
RBP: ffff8880108403c0 R08: 0000000000000000 R09: 0000000000200020
R10: ffffc9000114f420 R11: 0000000000000000 R12: 0000000000000000
R13: ffff8880108403c0 R14: ffff888010841640 R15: ffff888010841640
FS:  0000000000000000(0000) GS:ffff8880b9d00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f08e7752000 CR3: 000000001378f000 CR4: 00000000001506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 get_lock_parent_ip include/linux/ftrace.h:841 [inline]
 preempt_latency_start kernel/sched/core.c:4715 [inline]
 preempt_latency_start kernel/sched/core.c:4712 [inline]
 preempt_count_add+0x74/0x140 kernel/sched/core.c:4740
 __raw_spin_lock include/linux/spinlock_api_smp.h:141 [inline]
 _raw_spin_lock+0xe/0x40 kernel/locking/spinlock.c:151
 spin_lock include/linux/spinlock.h:359 [inline]
 get_partial_node.part.0+0x2c/0x300 mm/slub.c:2005
 get_partial_node mm/slub.c:2002 [inline]
 get_partial mm/slub.c:2110 [inline]
 new_slab_objects mm/slub.c:2600 [inline]
 ___slab_alloc+0x36f/0x810 mm/slub.c:2768
 __slab_alloc.constprop.0+0xa7/0xf0 mm/slub.c:2808
 slab_alloc_node mm/slub.c:2890 [inline]
 slab_alloc mm/slub.c:2932 [inline]
 __kmalloc+0x308/0x320 mm/slub.c:4083
 kmalloc include/linux/slab.h:563 [inline]
 ieee802_11_parse_elems_crc+0x121/0xfe0 net/mac80211/util.c:1473
 ieee802_11_parse_elems net/mac80211/ieee80211_i.h:2041 [inline]
 ieee80211_rx_mgmt_probe_beacon+0x188/0x16b0 net/mac80211/ibss.c:1612
 ieee80211_ibss_rx_queued_mgmt+0xe43/0x1870 net/mac80211/ibss.c:1642
 ieee80211_iface_work+0x761/0x9e0 net/mac80211/iface.c:1440
 process_one_work+0x98d/0x1600 kernel/workqueue.c:2275
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2421
 kthread+0x3b1/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294