INFO: task kworker/0:0:5 blocked for more than 143 seconds.
      Not tainted 5.16.0-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:0 state:D stack:26272 pid: 5 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4e90 kernel/sched/core.c:6296
 schedule+0xd2/0x260 kernel/sched/core.c:6369
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6428
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/1:1:25 blocked for more than 143 seconds.
      Not tainted 5.16.0-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:1 state:D stack:26672 pid: 25 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4e90 kernel/sched/core.c:6296
 schedule+0xd2/0x260 kernel/sched/core.c:6369
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6428
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/0:7:3679 blocked for more than 143 seconds.
      Not tainted 5.16.0-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:7 state:D stack:25672 pid: 3679 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4e90 kernel/sched/core.c:6296
 schedule+0xd2/0x260 kernel/sched/core.c:6369
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6428
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/0:8:3680 blocked for more than 143 seconds.
      Not tainted 5.16.0-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:8 state:D stack:26552 pid: 3680 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4e90 kernel/sched/core.c:6296
 schedule+0xd2/0x260 kernel/sched/core.c:6369
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6428
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/1:15:9291 blocked for more than 144 seconds.
      Not tainted 5.16.0-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:15 state:D stack:25640 pid: 9291 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4e90 kernel/sched/core.c:6296
 schedule+0xd2/0x260 kernel/sched/core.c:6369
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6428
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/0:10:23675 blocked for more than 144 seconds.
      Not tainted 5.16.0-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:10 state:D stack:26072 pid:23675 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4e90 kernel/sched/core.c:6296
 schedule+0xd2/0x260 kernel/sched/core.c:6369
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6428
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/0:1:21835 blocked for more than 144 seconds.
      Not tainted 5.16.0-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:1 state:D stack:25904 pid:21835 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4e90 kernel/sched/core.c:6296
 schedule+0xd2/0x260 kernel/sched/core.c:6369
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6428
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/1:2:30154 blocked for more than 144 seconds.
      Not tainted 5.16.0-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:2 state:D stack:26904 pid:30154 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4e90 kernel/sched/core.c:6296
 schedule+0xd2/0x260 kernel/sched/core.c:6369
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6428
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/0:5:4656 blocked for more than 144 seconds.
      Not tainted 5.16.0-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:5 state:D stack:25976 pid: 4656 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4e90 kernel/sched/core.c:6296
 schedule+0xd2/0x260 kernel/sched/core.c:6369
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6428
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task syz-executor.5:4784 blocked for more than 144 seconds.
      Not tainted 5.16.0-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.5 state:D stack:27232 pid: 4784 ppid: 4763 flags:0x00004004
Call Trace:
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4e90 kernel/sched/core.c:6296
 schedule+0xd2/0x260 kernel/sched/core.c:6369
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6428
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_cmd_set+0x50/0x2c0 net/openvswitch/datapath.c:1912
 genl_family_rcv_msg_doit+0x228/0x320 net/netlink/genetlink.c:731
 genl_family_rcv_msg net/netlink/genetlink.c:775 [inline]
 genl_rcv_msg+0x328/0x580 net/netlink/genetlink.c:792
 netlink_rcv_skb+0x153/0x420 net/netlink/af_netlink.c:2494
 genl_rcv+0x24/0x40 net/netlink/genetlink.c:803
 netlink_unicast_kernel net/netlink/af_netlink.c:1317 [inline]
 netlink_unicast+0x539/0x7e0 net/netlink/af_netlink.c:1343
 netlink_sendmsg+0x904/0xe00 net/netlink/af_netlink.c:1919
 sock_sendmsg_nosec net/socket.c:705 [inline]
 sock_sendmsg+0xcf/0x120 net/socket.c:725
 ____sys_sendmsg+0x6e8/0x810 net/socket.c:2413
 ___sys_sendmsg+0xf3/0x170 net/socket.c:2467
 __sys_sendmsg+0xe5/0x1b0 net/socket.c:2496
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7f8bc1ad2059
RSP: 002b:00007f8bc0447168 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007f8bc1be4f60 RCX: 00007f8bc1ad2059
RDX: 0000000000000000 RSI: 0000000020000180 RDI: 0000000000000003
RBP: 00007f8bc1b2c08d R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fff047270df R14: 00007f8bc0447300 R15: 0000000000022000

Showing all locks held in the system:
3 locks held by kworker/0:0/5:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc90000ca7db8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
3 locks held by kworker/1:1/25:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc90000dffdb8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
1 lock held by khungtaskd/27:
 #0: ffffffff8bb85160 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6460
2 locks held by getty/3280:
 #0: ffff888023853098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x22/0x80 drivers/tty/tty_ldisc.c:244
 #1: ffffc90002b632e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0xcf0/0x1230 drivers/tty/n_tty.c:2077
3 locks held by kworker/0:7/3679:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc90002bcfdb8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
3 locks held by kworker/0:8/3680:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc90002bdfdb8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
2 locks held by kworker/1:12/8983:
 #0: ffff888010c66538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c66538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c66538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c66538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888010c66538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888010c66538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc90002effdb8 ((work_completion)(&rew.rew_work)){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
3 locks held by kworker/1:15/9291:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc90002f6fdb8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
3 locks held by kworker/0:10/23675:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc90004eafdb8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
6 locks held by kworker/u4:12/17026:
 #0: ffff888011a73138 ((wq_completion)netns){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888011a73138 ((wq_completion)netns){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888011a73138 ((wq_completion)netns){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888011a73138 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888011a73138 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888011a73138 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc9000e76fdb8 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d323510 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x9b/0xb00 net/core/net_namespace.c:560
 #3: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #3: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_exit_net+0x192/0xbc0 net/openvswitch/datapath.c:2606
 #4: ffffffff8d338028 (rtnl_mutex){+.+.}-{3:3}, at: internal_dev_destroy+0x6f/0x150 net/openvswitch/vport-internal_dev.c:183
 #5: ffffffff8bb8ed68 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:290 [inline]
 #5: ffffffff8bb8ed68 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x4fa/0x620 kernel/rcu/tree_exp.h:840
3 locks held by kworker/0:1/21835:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc9000bf37db8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
3 locks held by kworker/1:2/30154:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc900030bfdb8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
3 locks held by kworker/0:5/4656:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc9000328fdb8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
2 locks held by syz-executor.5/4784:
 #0: ffffffff8d3cb8b0 (cb_lock){++++}-{3:3}, at: genl_rcv+0x15/0x40 net/netlink/genetlink.c:802
 #1: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #1: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_cmd_set+0x50/0x2c0 net/openvswitch/datapath.c:1912
2 locks held by syz-executor.5/4786:
 #0: ffffffff8d3cb8b0 (cb_lock){++++}-{3:3}, at: genl_rcv+0x15/0x40 net/netlink/genetlink.c:802
 #1: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #1: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_cmd_set+0x50/0x2c0 net/openvswitch/datapath.c:1912
2 locks held by syz-executor.4/4808:
 #0: ffffffff8d3cb8b0 (cb_lock){++++}-{3:3}, at: genl_rcv+0x15/0x40 net/netlink/genetlink.c:802
 #1: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #1: ffffffff8d79cd28 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_cmd_new+0x51b/0xeb0 net/openvswitch/datapath.c:1783

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 27 Comm: khungtaskd Not tainted 5.16.0-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
 nmi_cpu_backtrace.cold+0x47/0x144 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1b3/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:146 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:210 [inline]
 watchdog+0xc1d/0xf50 kernel/hung_task.c:295
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 18148 Comm: kworker/u4:9 Not tainted 5.16.0-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: phy63 ieee80211_iface_work
RIP: 0010:check_kcov_mode+0x31/0x40 kernel/kcov.c:178
Code: 89 c2 81 e2 00 01 00 00 a9 00 01 ff 00 74 10 31 c0 85 d2 74 15 8b 96 a4 15 00 00 85 d2 74 0b 8b 86 80 15 00 00 39 f8 0f 94 c0 <c3> 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 00 31 c0 65 8b 15 f7 20
RSP: 0018:ffffc90011167610 EFLAGS: 00000293
RAX: 0000000000000000 RBX: dffffc0000000000 RCX: ffff88823bdfd000
RDX: 0000000000000000 RSI: ffff888010e45700 RDI: 0000000000000003
RBP: ffff88823bdfd000 R08: ffff88823bdfc922 R09: 0000000000000001
R10: ffffffff81c0081e R11: 0000000000000000 R12: ffff88823bdfc000
R13: 00000000ffffffab R14: ffffffff9058eb28 R15: ffff88823bdfc922
FS:  0000000000000000(0000) GS:ffff8880b9c00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000c00b44e8c0 CR3: 000000000b88e000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000600
Call Trace:
 write_comp_data kernel/kcov.c:221 [inline]
 __sanitizer_cov_trace_cmp8+0x1d/0x70 kernel/kcov.c:267
 for_each_canary mm/kfence/core.c:351 [inline]
 kfence_guarded_alloc mm/kfence/core.c:437 [inline]
 __kfence_alloc+0xf2e/0x1490 mm/kfence/core.c:910
 kfence_alloc include/linux/kfence.h:126 [inline]
 slab_alloc_node mm/slub.c:3148 [inline]
 slab_alloc mm/slub.c:3238 [inline]
 kmem_cache_alloc_trace+0x2a5/0x2c0 mm/slub.c:3255
 kmalloc include/linux/slab.h:581 [inline]
 kzalloc include/linux/slab.h:715 [inline]
 ieee802_11_parse_elems_crc+0xd5/0x1050 net/mac80211/util.c:1489
 ieee802_11_parse_elems net/mac80211/ieee80211_i.h:2228 [inline]
 ieee80211_rx_mgmt_probe_beacon net/mac80211/ibss.c:1605 [inline]
 ieee80211_ibss_rx_queued_mgmt+0xd0e/0x3150 net/mac80211/ibss.c:1639
 ieee80211_iface_process_skb net/mac80211/iface.c:1527 [inline]
 ieee80211_iface_work+0xa69/0xd00 net/mac80211/iface.c:1581
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
----------------
Code disassembly (best guess):
   0:	89 c2                	mov    %eax,%edx
   2:	81 e2 00 01 00 00    	and    $0x100,%edx
   8:	a9 00 01 ff 00       	test   $0xff0100,%eax
   d:	74 10                	je     0x1f
   f:	31 c0                	xor    %eax,%eax
  11:	85 d2                	test   %edx,%edx
  13:	74 15                	je     0x2a
  15:	8b 96 a4 15 00 00    	mov    0x15a4(%rsi),%edx
  1b:	85 d2                	test   %edx,%edx
  1d:	74 0b                	je     0x2a
  1f:	8b 86 80 15 00 00    	mov    0x1580(%rsi),%eax
  25:	39 f8                	cmp    %edi,%eax
  27:	0f 94 c0             	sete   %al
* 2a:	c3                   	retq    <-- trapping instruction
  2b:	66 66 2e 0f 1f 84 00 	data16 nopw %cs:0x0(%rax,%rax,1)
  32:	00 00 00 00
  36:	0f 1f 00             	nopl   (%rax)
  39:	31 c0                	xor    %eax,%eax
  3b:	65                   	gs
  3c:	8b                   	.byte 0x8b
  3d:	15                   	.byte 0x15
  3e:	f7 20                	mull   (%rax)