INFO: task kworker/0:2:136 blocked for more than 143 seconds.
Not tainted 5.16.0-rc2-next-20211125-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:2 state:D stack:24664 pid: 136 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
context_switch kernel/sched/core.c:4983 [inline]
__schedule+0xab2/0x4d90 kernel/sched/core.c:6293
schedule+0xd2/0x260 kernel/sched/core.c:6366
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6425
__mutex_lock_common kernel/locking/mutex.c:680 [inline]
__mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
ovs_lock net/openvswitch/datapath.c:106 [inline]
ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
process_one_work+0x9b2/0x1690 kernel/workqueue.c:2299
worker_thread+0x658/0x11f0 kernel/workqueue.c:2446
kthread+0x405/0x4f0 kernel/kthread.c:345
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/1:2:922 blocked for more than 143 seconds.
Not tainted 5.16.0-rc2-next-20211125-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:2 state:D stack:23152 pid: 922 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
context_switch kernel/sched/core.c:4983 [inline]
__schedule+0xab2/0x4d90 kernel/sched/core.c:6293
schedule+0xd2/0x260 kernel/sched/core.c:6366
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6425
__mutex_lock_common kernel/locking/mutex.c:680 [inline]
__mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
ovs_lock net/openvswitch/datapath.c:106 [inline]
ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
process_one_work+0x9b2/0x1690 kernel/workqueue.c:2299
worker_thread+0x658/0x11f0 kernel/workqueue.c:2446
kthread+0x405/0x4f0 kernel/kthread.c:345
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/1:17:16288 blocked for more than 143 seconds.
Not tainted 5.16.0-rc2-next-20211125-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:17 state:D stack:26248 pid:16288 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
context_switch kernel/sched/core.c:4983 [inline]
__schedule+0xab2/0x4d90 kernel/sched/core.c:6293
schedule+0xd2/0x260 kernel/sched/core.c:6366
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6425
__mutex_lock_common kernel/locking/mutex.c:680 [inline]
__mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
ovs_lock net/openvswitch/datapath.c:106 [inline]
ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
process_one_work+0x9b2/0x1690 kernel/workqueue.c:2299
worker_thread+0x658/0x11f0 kernel/workqueue.c:2446
kthread+0x405/0x4f0 kernel/kthread.c:345
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/1:18:16290 blocked for more than 144 seconds.
Not tainted 5.16.0-rc2-next-20211125-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:18 state:D stack:26176 pid:16290 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
context_switch kernel/sched/core.c:4983 [inline]
__schedule+0xab2/0x4d90 kernel/sched/core.c:6293
schedule+0xd2/0x260 kernel/sched/core.c:6366
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6425
__mutex_lock_common kernel/locking/mutex.c:680 [inline]
__mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
ovs_lock net/openvswitch/datapath.c:106 [inline]
ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
process_one_work+0x9b2/0x1690 kernel/workqueue.c:2299
worker_thread+0x658/0x11f0 kernel/workqueue.c:2446
kthread+0x405/0x4f0 kernel/kthread.c:345
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/1:1:31236 blocked for more than 144 seconds.
Not tainted 5.16.0-rc2-next-20211125-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:1 state:D stack:27664 pid:31236 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
context_switch kernel/sched/core.c:4983 [inline]
__schedule+0xab2/0x4d90 kernel/sched/core.c:6293
schedule+0xd2/0x260 kernel/sched/core.c:6366
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6425
__mutex_lock_common kernel/locking/mutex.c:680 [inline]
__mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
ovs_lock net/openvswitch/datapath.c:106 [inline]
ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
process_one_work+0x9b2/0x1690 kernel/workqueue.c:2299
worker_thread+0x658/0x11f0 kernel/workqueue.c:2446
kthread+0x405/0x4f0 kernel/kthread.c:345
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/0:4:1615 blocked for more than 144 seconds.
Not tainted 5.16.0-rc2-next-20211125-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:4 state:D stack:24224 pid: 1615 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
context_switch kernel/sched/core.c:4983 [inline]
__schedule+0xab2/0x4d90 kernel/sched/core.c:6293
schedule+0xd2/0x260 kernel/sched/core.c:6366
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6425
__mutex_lock_common kernel/locking/mutex.c:680 [inline]
__mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
ovs_lock net/openvswitch/datapath.c:106 [inline]
ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
process_one_work+0x9b2/0x1690 kernel/workqueue.c:2299
worker_thread+0x658/0x11f0 kernel/workqueue.c:2446
kthread+0x405/0x4f0 kernel/kthread.c:345
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/0:3:13865 blocked for more than 145 seconds.
Not tainted 5.16.0-rc2-next-20211125-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:3 state:D stack:25976 pid:13865 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
context_switch kernel/sched/core.c:4983 [inline]
__schedule+0xab2/0x4d90 kernel/sched/core.c:6293
schedule+0xd2/0x260 kernel/sched/core.c:6366
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6425
__mutex_lock_common kernel/locking/mutex.c:680 [inline]
__mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
ovs_lock net/openvswitch/datapath.c:106 [inline]
ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
process_one_work+0x9b2/0x1690 kernel/workqueue.c:2299
worker_thread+0x658/0x11f0 kernel/workqueue.c:2446
kthread+0x405/0x4f0 kernel/kthread.c:345
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/1:3:31039 blocked for more than 145 seconds.
Not tainted 5.16.0-rc2-next-20211125-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:3 state:D stack:27440 pid:31039 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
context_switch kernel/sched/core.c:4983 [inline]
__schedule+0xab2/0x4d90 kernel/sched/core.c:6293
schedule+0xd2/0x260 kernel/sched/core.c:6366
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6425
__mutex_lock_common kernel/locking/mutex.c:680 [inline]
__mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
ovs_lock net/openvswitch/datapath.c:106 [inline]
ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
process_one_work+0x9b2/0x1690 kernel/workqueue.c:2299
worker_thread+0x658/0x11f0 kernel/workqueue.c:2446
kthread+0x405/0x4f0 kernel/kthread.c:345
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/0:6:7762 blocked for more than 145 seconds.
Not tainted 5.16.0-rc2-next-20211125-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:6 state:D stack:28456 pid: 7762 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
context_switch kernel/sched/core.c:4983 [inline]
__schedule+0xab2/0x4d90 kernel/sched/core.c:6293
schedule+0xd2/0x260 kernel/sched/core.c:6366
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6425
__mutex_lock_common kernel/locking/mutex.c:680 [inline]
__mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
ovs_lock net/openvswitch/datapath.c:106 [inline]
ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
process_one_work+0x9b2/0x1690 kernel/workqueue.c:2299
worker_thread+0x658/0x11f0 kernel/workqueue.c:2446
kthread+0x405/0x4f0 kernel/kthread.c:345
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/1:4:11074 blocked for more than 145 seconds.
Not tainted 5.16.0-rc2-next-20211125-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:4 state:D stack:26544 pid:11074 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
context_switch kernel/sched/core.c:4983 [inline]
__schedule+0xab2/0x4d90 kernel/sched/core.c:6293
schedule+0xd2/0x260 kernel/sched/core.c:6366
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6425
__mutex_lock_common kernel/locking/mutex.c:680 [inline]
__mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
ovs_lock net/openvswitch/datapath.c:106 [inline]
ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
process_one_work+0x9b2/0x1690 kernel/workqueue.c:2299
worker_thread+0x658/0x11f0 kernel/workqueue.c:2446
kthread+0x405/0x4f0 kernel/kthread.c:345
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task syz-executor.2:29035 can't die for more than 146 seconds.
task:syz-executor.2 state:D stack:26320 pid:29035 ppid: 17379 flags:0x00004004
Call Trace:
context_switch kernel/sched/core.c:4983 [inline]
__schedule+0xab2/0x4d90 kernel/sched/core.c:6293
schedule+0xd2/0x260 kernel/sched/core.c:6366
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6425
__mutex_lock_common kernel/locking/mutex.c:680 [inline]
__mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
ovs_lock net/openvswitch/datapath.c:106 [inline]
ovs_dp_cmd_new+0x519/0xeb0 net/openvswitch/datapath.c:1783
genl_family_rcv_msg_doit+0x228/0x320 net/netlink/genetlink.c:731
genl_family_rcv_msg net/netlink/genetlink.c:775 [inline]
genl_rcv_msg+0x328/0x580 net/netlink/genetlink.c:792
netlink_rcv_skb+0x153/0x420 net/netlink/af_netlink.c:2487
genl_rcv+0x24/0x40 net/netlink/genetlink.c:803
netlink_unicast_kernel net/netlink/af_netlink.c:1315 [inline]
netlink_unicast+0x533/0x7d0 net/netlink/af_netlink.c:1341
netlink_sendmsg+0x86d/0xda0 net/netlink/af_netlink.c:1912
sock_sendmsg_nosec net/socket.c:704 [inline]
sock_sendmsg+0xcf/0x120 net/socket.c:724
____sys_sendmsg+0x6e8/0x810 net/socket.c:2409
___sys_sendmsg+0xf3/0x170 net/socket.c:2463
__sys_sendmsg+0xe5/0x1b0 net/socket.c:2492
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7fbe6dcc2ae9
RSP: 002b:00007fbe6cc38188 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007fbe6ddd5f60 RCX: 00007fbe6dcc2ae9
RDX: 0000000000000000 RSI: 0000000020000180 RDI: 0000000000000006
RBP: 00007fbe6dd1cff7 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fff21f24dff R14: 00007fbe6cc38300 R15: 0000000000022000
INFO: task syz-executor.2:29058 can't die for more than 146 seconds.
task:syz-executor.2 state:D stack:27600 pid:29058 ppid: 17379 flags:0x00004004
Call Trace:
context_switch kernel/sched/core.c:4983 [inline]
__schedule+0xab2/0x4d90 kernel/sched/core.c:6293
schedule+0xd2/0x260 kernel/sched/core.c:6366
schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6425
__mutex_lock_common kernel/locking/mutex.c:680 [inline]
__mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:740
ovs_lock net/openvswitch/datapath.c:106 [inline]
ovs_dp_cmd_new+0x519/0xeb0 net/openvswitch/datapath.c:1783
genl_family_rcv_msg_doit+0x228/0x320 net/netlink/genetlink.c:731
genl_family_rcv_msg net/netlink/genetlink.c:775 [inline]
genl_rcv_msg+0x328/0x580 net/netlink/genetlink.c:792
netlink_rcv_skb+0x153/0x420 net/netlink/af_netlink.c:2487
genl_rcv+0x24/0x40 net/netlink/genetlink.c:803
netlink_unicast_kernel net/netlink/af_netlink.c:1315 [inline]
netlink_unicast+0x533/0x7d0 net/netlink/af_netlink.c:1341
netlink_sendmsg+0x86d/0xda0 net/netlink/af_netlink.c:1912
sock_sendmsg_nosec net/socket.c:704 [inline]
sock_sendmsg+0xcf/0x120 net/socket.c:724
____sys_sendmsg+0x6e8/0x810 net/socket.c:2409
___sys_sendmsg+0xf3/0x170 net/socket.c:2463
__sys_sendmsg+0xe5/0x1b0 net/socket.c:2492
do_syscall_x64 arch/x86/entry/common.c:50 [inline]
do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7fbe6dcc2ae9
RSP: 002b:00007fbe6cbf6188 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007fbe6ddd60f0 RCX: 00007fbe6dcc2ae9
RDX: 0000000000000000 RSI: 0000000020000180 RDI: 0000000000000006
RBP: 00007fbe6dd1cff7 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fff21f24dff R14: 00007fbe6cbf6300 R15: 0000000000022000
Showing all locks held in the system:
1 lock held by ksoftirqd/0/13:
#0: ffff8880b9c39d58 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2b/0x120 kernel/sched/core.c:489
1 lock held by khungtaskd/27:
#0: ffffffff8bb83220 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6458
3 locks held by kworker/0:2/136:
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x896/0x1690 kernel/workqueue.c:2270
#1: ffffc9000285fdb0 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1690 kernel/workqueue.c:2274
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
3 locks held by kworker/1:2/922:
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x896/0x1690 kernel/workqueue.c:2270
#1: ffffc900048efdb0 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1690 kernel/workqueue.c:2274
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
1 lock held by in:imklog/6224:
#0: ffff888023d680f0 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xe9/0x100 fs/file.c:990
3 locks held by kworker/1:17/16288:
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x896/0x1690 kernel/workqueue.c:2270
#1: ffffc90004b5fdb0 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1690 kernel/workqueue.c:2274
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
3 locks held by kworker/1:18/16290:
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x896/0x1690 kernel/workqueue.c:2270
#1: ffffc900040bfdb0 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1690 kernel/workqueue.c:2274
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
1 lock held by systemd-udevd/21045:
2 locks held by systemd-udevd/21054:
#0: ffff88801a983918 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_put+0x97/0x9e0 block/bdev.c:913
#1: ffff88801a96f360 (&lo->lo_mutex){+.+.}-{3:3}, at: lo_release+0x4d/0x1f0 drivers/block/loop.c:1737
3 locks held by kworker/1:1/31236:
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x896/0x1690 kernel/workqueue.c:2270
#1: ffffc9000b0e7db0 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1690 kernel/workqueue.c:2274
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
3 locks held by kworker/0:4/1615:
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x896/0x1690 kernel/workqueue.c:2270
#1: ffffc90010167db0 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1690 kernel/workqueue.c:2274
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
2 locks held by agetty/13665:
#0: ffff8880b359c098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x22/0x80 drivers/tty/tty_ldisc.c:252
#1: ffffc90011b0b2e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0xcf0/0x1230 drivers/tty/n_tty.c:2113
3 locks held by kworker/0:3/13865:
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x896/0x1690 kernel/workqueue.c:2270
#1: ffffc9001219fdb0 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1690 kernel/workqueue.c:2274
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
5 locks held by kworker/u4:8/25613:
#0: ffff8880139a3138 ((wq_completion)netns){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff8880139a3138 ((wq_completion)netns){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff8880139a3138 ((wq_completion)netns){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
#0: ffff8880139a3138 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
#0: ffff8880139a3138 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
#0: ffff8880139a3138 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x896/0x1690 kernel/workqueue.c:2270
#1: ffffc90004c1fdb0 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1690 kernel/workqueue.c:2274
#2: ffffffff8d2f8850 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x9b/0xb00 net/core/net_namespace.c:555
#3: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
#3: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_exit_net+0x192/0xbc0 net/openvswitch/datapath.c:2606
#4: ffffffff8bb8cb30 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x44/0x440 kernel/rcu/tree.c:4026
3 locks held by kworker/1:3/31039:
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x896/0x1690 kernel/workqueue.c:2270
#1: ffffc9000a67fdb0 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1690 kernel/workqueue.c:2274
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
3 locks held by kworker/0:6/7762:
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x896/0x1690 kernel/workqueue.c:2270
#1: ffffc90010077db0 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1690 kernel/workqueue.c:2274
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
3 locks held by kworker/1:4/11074:
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x896/0x1690 kernel/workqueue.c:2270
#1: ffffc9001243fdb0 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1690 kernel/workqueue.c:2274
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
2 locks held by syz-executor.2/29035:
#0: ffffffff8d39f890 (cb_lock){++++}-{3:3}, at: genl_rcv+0x15/0x40 net/netlink/genetlink.c:802
#1: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
#1: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_cmd_new+0x519/0xeb0 net/openvswitch/datapath.c:1783
2 locks held by syz-executor.2/29058:
#0: ffffffff8d39f890 (cb_lock){++++}-{3:3}, at: genl_rcv+0x15/0x40 net/netlink/genetlink.c:802
#1: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
#1: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_cmd_new+0x519/0xeb0 net/openvswitch/datapath.c:1783
3 locks held by kworker/1:5/29082:
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1198 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:635 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:662 [inline]
#0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x896/0x1690 kernel/workqueue.c:2270
#1: ffffc90013427db0 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ca/0x1690 kernel/workqueue.c:2274
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
#2: ffffffff8d76c528 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
=============================================
NMI backtrace for cpu 0
CPU: 0 PID: 27 Comm: khungtaskd Not tainted 5.16.0-rc2-next-20211125-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:88 [inline]
dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
nmi_cpu_backtrace.cold+0x47/0x144 lib/nmi_backtrace.c:111
nmi_trigger_cpumask_backtrace+0x1b3/0x230 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:146 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:256 [inline]
watchdog+0xcb7/0xed0 kernel/hung_task.c:413
kthread+0x405/0x4f0 kernel/kthread.c:345
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 1257 Comm: kworker/u4:5 Not tainted 5.16.0-rc2-next-20211125-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: phy22 ieee80211_iface_work
RIP: 0010:strlen+0x58/0x90 lib/string.c:487
Code: 00 00 74 39 48 bb 00 00 00 00 00 fc ff df 48 89 e8 48 83 c0 01 48 89 c2 48 89 c1 48 c1 ea 03 83 e1 07 0f b6 14 1a 38 ca 7f 04 <84> d2 75 1f 80 38 00 75 de 48 83 c4 08 48 29 e8 5b 5d c3 48 83 c4
RSP: 0018:ffffc90005837980 EFLAGS: 00000097
RAX: ffffffff89abf0e9 RBX: dffffc0000000000 RCX: 0000000000000001
RDX: 0000000000000000 RSI: ffff8880b9d279c8 RDI: ffffffff89abf0e0
RBP: ffffffff89abf0e0 R08: 0000000000000000 R09: ffffffff8d914ad7
R10: fffffbfff1b2295a R11: 0000000000000001 R12: ffff8880b9d279c8
R13: ffffffff8ba80200 R14: ffff8880b9d279c8 R15: ffffc90005837a50
FS: 0000000000000000(0000) GS:ffff8880b9d00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f8d14043280 CR3: 000000000b88e000 CR4: 00000000003506e0
Call Trace:
strlen include/linux/fortify-string.h:102 [inline]
trace_event_get_offsets_lock include/trace/events/lock.h:39 [inline]
perf_trace_lock+0xb1/0x4d0 include/trace/events/lock.h:39
trace_lock_release include/trace/events/lock.h:58 [inline]
lock_release+0x4a8/0x720 kernel/locking/lockdep.c:5648
do_write_seqcount_end include/linux/seqlock.h:565 [inline]
psi_group_change+0x4e9/0xc70 kernel/sched/psi.c:753
psi_enqueue kernel/sched/stats.h:134 [inline]
enqueue_task+0x1b6/0x3c0 kernel/sched/core.c:2006
activate_task kernel/sched/core.c:2035 [inline]
ttwu_do_activate+0x157/0x330 kernel/sched/core.c:3611
ttwu_queue kernel/sched/core.c:3807 [inline]
try_to_wake_up+0x508/0x1510 kernel/sched/core.c:4130
wake_up_worker kernel/workqueue.c:856 [inline]
process_one_work+0x7cf/0x1690 kernel/workqueue.c:2262
worker_thread+0x658/0x11f0 kernel/workqueue.c:2446
kthread+0x405/0x4f0 kernel/kthread.c:345
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
----------------
Code disassembly (best guess):
0: 00 00 add %al,(%rax)
2: 74 39 je 0x3d
4: 48 bb 00 00 00 00 00 movabs $0xdffffc0000000000,%rbx
b: fc ff df
e: 48 89 e8 mov %rbp,%rax
11: 48 83 c0 01 add $0x1,%rax
15: 48 89 c2 mov %rax,%rdx
18: 48 89 c1 mov %rax,%rcx
1b: 48 c1 ea 03 shr $0x3,%rdx
1f: 83 e1 07 and $0x7,%ecx
22: 0f b6 14 1a movzbl (%rdx,%rbx,1),%edx
26: 38 ca cmp %cl,%dl
28: 7f 04 jg 0x2e
* 2a: 84 d2 test %dl,%dl <-- trapping instruction
2c: 75 1f jne 0x4d
2e: 80 38 00 cmpb $0x0,(%rax)
31: 75 de jne 0x11
33: 48 83 c4 08 add $0x8,%rsp
37: 48 29 e8 sub %rbp,%rax
3a: 5b pop %rbx
3b: 5d pop %rbp
3c: c3 retq
3d: 48 rex.W
3e: 83 .byte 0x83
3f: c4 .byte 0xc4
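
What the traces above consistently show is a convoy on ovs_mutex: kworker/u4:8 (the netns cleanup worker) holds ovs_mutex inside ovs_exit_net() and is itself parked in rcu_barrier(), while every queued ovs_dp_masks_rebalance work item and both ovs_dp_cmd_new() callers sit in __mutex_lock() on that same mutex until the hung-task watchdog fires. The sketch below is only a userspace analogy of that blocking pattern, not kernel code; the thread and function names are invented for illustration, and it simply demonstrates how one lock holder stuck on an unrelated wait starves every other waiter of the same mutex (compile with `cc -pthread`).

/* Userspace analogy of the pattern in the report above (all names invented):
 * one thread takes a mutex and then waits on an event that never arrives,
 * standing in for rcu_barrier() being called while ovs_mutex is held in
 * ovs_exit_net(); the other threads, standing in for the rebalance workers
 * and the netlink callers, block forever on the same mutex.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t ovs_mutex_analog = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t cond_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t never_signalled = PTHREAD_COND_INITIALIZER;

/* Stand-in for the cleanup path: holds the "ovs_mutex" across a wait whose
 * completion depends on outside progress that never happens here. */
static void *cleanup_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&ovs_mutex_analog);
	pthread_mutex_lock(&cond_lock);
	/* Models the rcu_barrier() wait done while the mutex is still held. */
	pthread_cond_wait(&never_signalled, &cond_lock);
	pthread_mutex_unlock(&cond_lock);
	pthread_mutex_unlock(&ovs_mutex_analog);
	return NULL;
}

/* Stand-in for the rebalance workers and netlink handlers: each one just
 * needs the same mutex for a moment, but never gets it. */
static void *lock_waiter(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&ovs_mutex_analog);
	pthread_mutex_unlock(&ovs_mutex_analog);
	return NULL;
}

int main(void)
{
	pthread_t cleaner, waiters[4];

	pthread_create(&cleaner, NULL, cleanup_path, NULL);
	sleep(1); /* let the cleanup thread acquire the mutex first */

	for (int i = 0; i < 4; i++)
		pthread_create(&waiters[i], NULL, lock_waiter, NULL);

	sleep(2);
	/* Crude watchdog: by now every waiter is still queued behind the held
	 * mutex, which is the situation the hung-task messages describe. */
	fprintf(stderr, "waiters still blocked behind the held mutex\n");
	return 0;
}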