INFO: task kworker/1:1:23 blocked for more than 143 seconds.
      Not tainted 5.16.0-next-20220112-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:1 state:D stack:26480 pid: 23 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4e90 kernel/sched/core.c:6296
 schedule+0xd2/0x260 kernel/sched/core.c:6369
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6428
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/0:8:6662 blocked for more than 143 seconds.
      Not tainted 5.16.0-next-20220112-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:8 state:D stack:24616 pid: 6662 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4e90 kernel/sched/core.c:6296
 schedule+0xd2/0x260 kernel/sched/core.c:6369
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6428
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/0:0:30638 blocked for more than 143 seconds.
      Not tainted 5.16.0-next-20220112-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:0 state:D stack:26656 pid:30638 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4e90 kernel/sched/core.c:6296
 schedule+0xd2/0x260 kernel/sched/core.c:6369
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6428
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/1:4:22445 blocked for more than 144 seconds.
      Not tainted 5.16.0-next-20220112-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:4 state:D stack:25104 pid:22445 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4e90 kernel/sched/core.c:6296
 schedule+0xd2/0x260 kernel/sched/core.c:6369
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6428
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/1:6:1436 blocked for more than 144 seconds.
      Not tainted 5.16.0-next-20220112-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:6 state:D stack:21952 pid: 1436 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4e90 kernel/sched/core.c:6296
 schedule+0xd2/0x260 kernel/sched/core.c:6369
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6428
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/1:0:640 blocked for more than 144 seconds.
      Not tainted 5.16.0-next-20220112-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:0 state:D stack:27712 pid: 640 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4e90 kernel/sched/core.c:6296
 schedule+0xd2/0x260 kernel/sched/core.c:6369
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6428
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/1:2:17137 blocked for more than 145 seconds.
      Not tainted 5.16.0-next-20220112-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:2 state:D stack:27800 pid:17137 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4e90 kernel/sched/core.c:6296
 schedule+0xd2/0x260 kernel/sched/core.c:6369
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6428
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task kworker/0:2:25086 blocked for more than 145 seconds.
      Not tainted 5.16.0-next-20220112-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:2 state:D stack:28432 pid:25086 ppid: 2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4e90 kernel/sched/core.c:6296
 schedule+0xd2/0x260 kernel/sched/core.c:6369
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6428
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
INFO: task syz-executor.4:807 can't die for more than 145 seconds.
task:syz-executor.4 state:D stack:27232 pid: 807 ppid: 1 flags:0x00004004
Call Trace:
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4e90 kernel/sched/core.c:6296
 schedule+0xd2/0x260 kernel/sched/core.c:6369
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6428
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_cmd_new+0x51b/0xeb0 net/openvswitch/datapath.c:1783
 genl_family_rcv_msg_doit+0x228/0x320 net/netlink/genetlink.c:731
 genl_family_rcv_msg net/netlink/genetlink.c:775 [inline]
 genl_rcv_msg+0x328/0x580 net/netlink/genetlink.c:792
 netlink_rcv_skb+0x153/0x420 net/netlink/af_netlink.c:2494
 genl_rcv+0x24/0x40 net/netlink/genetlink.c:803
 netlink_unicast_kernel net/netlink/af_netlink.c:1317 [inline]
 netlink_unicast+0x539/0x7e0 net/netlink/af_netlink.c:1343
 netlink_sendmsg+0x904/0xe00 net/netlink/af_netlink.c:1919
 sock_sendmsg_nosec net/socket.c:705 [inline]
 sock_sendmsg+0xcf/0x120 net/socket.c:725
 ____sys_sendmsg+0x6e8/0x810 net/socket.c:2413
 ___sys_sendmsg+0xf3/0x170 net/socket.c:2467
 __sys_sendmsg+0xe5/0x1b0 net/socket.c:2496
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7f9ef94d7eb9
RSP: 002b:00007f9ef7e0b168 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007f9ef95eb100 RCX: 00007f9ef94d7eb9
RDX: 0000000000000000 RSI: 0000000020000180 RDI: 0000000000000003
RBP: 00007f9ef953208d R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffe3ed2136f R14: 00007f9ef7e0b300 R15: 0000000000022000
INFO: task syz-executor.4:807 blocked for more than 146 seconds.
      Not tainted 5.16.0-next-20220112-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.4 state:D stack:27232 pid: 807 ppid: 1 flags:0x00004004
Call Trace:
 context_switch kernel/sched/core.c:4986 [inline]
 __schedule+0xab2/0x4e90 kernel/sched/core.c:6296
 schedule+0xd2/0x260 kernel/sched/core.c:6369
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6428
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 ovs_lock net/openvswitch/datapath.c:106 [inline]
 ovs_dp_cmd_new+0x51b/0xeb0 net/openvswitch/datapath.c:1783
 genl_family_rcv_msg_doit+0x228/0x320 net/netlink/genetlink.c:731
 genl_family_rcv_msg net/netlink/genetlink.c:775 [inline]
 genl_rcv_msg+0x328/0x580 net/netlink/genetlink.c:792
 netlink_rcv_skb+0x153/0x420 net/netlink/af_netlink.c:2494
 genl_rcv+0x24/0x40 net/netlink/genetlink.c:803
 netlink_unicast_kernel net/netlink/af_netlink.c:1317 [inline]
 netlink_unicast+0x539/0x7e0 net/netlink/af_netlink.c:1343
 netlink_sendmsg+0x904/0xe00 net/netlink/af_netlink.c:1919
 sock_sendmsg_nosec net/socket.c:705 [inline]
 sock_sendmsg+0xcf/0x120 net/socket.c:725
 ____sys_sendmsg+0x6e8/0x810 net/socket.c:2413
 ___sys_sendmsg+0xf3/0x170 net/socket.c:2467
 __sys_sendmsg+0xe5/0x1b0 net/socket.c:2496
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae
RIP: 0033:0x7f9ef94d7eb9
RSP: 002b:00007f9ef7e0b168 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007f9ef95eb100 RCX: 00007f9ef94d7eb9
RDX: 0000000000000000 RSI: 0000000020000180 RDI: 0000000000000003
RBP: 00007f9ef953208d R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffe3ed2136f R14: 00007f9ef7e0b300 R15: 0000000000022000

Showing all locks held in the system:
3 locks held by kworker/1:1/23:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc90000ddfdb8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d79e8a8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d79e8a8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
1 lock held by khungtaskd/27:
 #0: ffffffff8bb839e0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6460
1 lock held by udevd/2975:
2 locks held by getty/3285:
 #0: ffff88814b820098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x22/0x80 drivers/tty/tty_ldisc.c:244
 #1: ffffc90002b962e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0xcf0/0x1230 drivers/tty/n_tty.c:2077
3 locks held by kworker/0:8/6662:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc90003f4fdb8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d79e8a8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d79e8a8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
3 locks held by kworker/0:0/30638:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc90011e17db8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d79e8a8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d79e8a8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
3 locks held by kworker/1:4/22445:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc9001278fdb8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d79e8a8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d79e8a8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
2 locks held by kworker/u4:17/28946:
5 locks held by kworker/u4:18/28947:
 #0: ffff88814070a138 ((wq_completion)netns){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff88814070a138 ((wq_completion)netns){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff88814070a138 ((wq_completion)netns){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff88814070a138 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff88814070a138 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff88814070a138 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc9000ca9fdb8 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d325450 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x9b/0xb00 net/core/net_namespace.c:557
 #3: ffffffff8d79e8a8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #3: ffffffff8d79e8a8 (ovs_mutex){+.+.}-{3:3}, at: ovs_exit_net+0x192/0xbc0 net/openvswitch/datapath.c:2606
 #4: ffffffff8bb8d4f0 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x44/0x430 kernel/rcu/tree.c:4026
2 locks held by kworker/u4:22/28951:
3 locks held by kworker/1:6/1436:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc900028afdb8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d79e8a8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d79e8a8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
5 locks held by kworker/u4:0/10176:
3 locks held by kworker/1:0/640:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc90002c5fdb8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d79e8a8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d79e8a8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
3 locks held by kworker/1:2/17137:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc90004aefdb8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d79e8a8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d79e8a8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
3 locks held by kworker/0:2/25086:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:631 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:658 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x890/0x1650 kernel/workqueue.c:2278
 #1: ffffc900076c7db8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8c4/0x1650 kernel/workqueue.c:2282
 #2: ffffffff8d79e8a8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #2: ffffffff8d79e8a8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2458
2 locks held by syz-executor.4/807:
 #0: ffffffff8d3cd8f0 (cb_lock){++++}-{3:3}, at: genl_rcv+0x15/0x40 net/netlink/genetlink.c:802
 #1: ffffffff8d79e8a8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:106 [inline]
 #1: ffffffff8d79e8a8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_cmd_new+0x51b/0xeb0 net/openvswitch/datapath.c:1783

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 27 Comm: khungtaskd Not tainted 5.16.0-next-20220112-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
 nmi_cpu_backtrace.cold+0x47/0x144 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1b3/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:146 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:256 [inline]
 watchdog+0xcb7/0xed0 kernel/hung_task.c:413
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 28951 Comm: kworker/u4:22 Not tainted 5.16.0-next-20220112-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: bat_events batadv_purge_orig
RIP: 0010:mark_lock+0x160/0x17b0 kernel/locking/lockdep.c:4624
Code: 00 0f 84 79 01 00 00 48 b8 00 00 00 00 00 fc ff df 48 01 c3 48 c7 03 00 00 00 00 c7 43 08 00 00 00 00 48 c7 43 10 00 00 00 00 <48> 8b 84 24 10 01 00 00 65 48 2b 04 25 28 00 00 00 0f 85 c3 11 00
RSP: 0018:ffffc9000cae7a28 EFLAGS: 00000082
RAX: dffffc0000000000 RBX: fffff5200195cf4c RCX: 1ffffffff2004eee
RDX: dffffc0000000000 RSI: 0000000000000040 RDI: ffffffff90027770
RBP: 0000000000000040 R08: 0000000000000000 R09: ffffffff8ffd2a27
R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000006
R13: ffff88816a9a6228 R14: 000000000000070d R15: ffff88816a9a6248
FS: 0000000000000000(0000) GS:ffff8880b9c00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f0239f05680 CR3: 00000000226a0000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 mark_held_locks+0x9f/0xe0 kernel/locking/lockdep.c:4206
 __trace_hardirqs_on_caller kernel/locking/lockdep.c:4232 [inline]
 lockdep_hardirqs_on_prepare kernel/locking/lockdep.c:4292 [inline]
 lockdep_hardirqs_on_prepare+0x28b/0x400 kernel/locking/lockdep.c:4244
 trace_hardirqs_on+0x5b/0x1c0 kernel/trace/trace_preemptirq.c:49
 __local_bh_enable_ip+0xa0/0x120 kernel/softirq.c:388
 spin_unlock_bh include/linux/spinlock.h:399 [inline]
 batadv_purge_orig_ref+0xeb7/0x1550 net/batman-adv/originator.c:1259
 batadv_purge_orig+0x17/0x60 net/batman-adv/originator.c:1272
 process_one_work+0x9ac/0x1650 kernel/workqueue.c:2307
 worker_thread+0x657/0x1110 kernel/workqueue.c:2454
 kthread+0x2e9/0x3a0 kernel/kthread.c:377
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
----------------
Code disassembly (best guess):
   0:	00 0f                	add    %cl,(%rdi)
   2:	84 79 01             	test   %bh,0x1(%rcx)
   5:	00 00                	add    %al,(%rax)
   7:	48 b8 00 00 00 00 00 	movabs $0xdffffc0000000000,%rax
   e:	fc ff df
  11:	48 01 c3             	add    %rax,%rbx
  14:	48 c7 03 00 00 00 00 	movq   $0x0,(%rbx)
  1b:	c7 43 08 00 00 00 00 	movl   $0x0,0x8(%rbx)
  22:	48 c7 43 10 00 00 00 	movq   $0x0,0x10(%rbx)
  29:	00
* 2a:	48 8b 84 24 10 01 00 	mov    0x110(%rsp),%rax		<-- trapping instruction
  31:	00
  32:	65 48 2b 04 25 28 00 	sub    %gs:0x28,%rax
  39:	00 00
  3b:	0f                   	.byte 0xf
  3c:	85 c3                	test   %eax,%ebx
  3e:	11 00                	adc    %eax,(%rax)
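Editor's note on the lock pattern in this report: every blocked task above is parked in __mutex_lock() via ovs_lock(), reached from the masks-rebalance worker (datapath.c:2458), from ovs_dp_cmd_new() (datapath.c:1783), or from ovs_exit_net() (datapath.c:2606). Per the lockdep dump, kworker/u4:18/28947 appears to hold ovs_mutex inside ovs_exit_net() while waiting in rcu_barrier(), which would stall every other ovs_mutex acquirer. For orientation only, here is a sketch of the worker at the top of the repeated traces, paraphrased from net/openvswitch/datapath.c around v5.16; details (exact list iteration, interval constant) are approximate, not a verbatim copy of the kernel source:

	/* Sketch of the periodic mask-rebalance worker (approximate). */
	static void ovs_dp_masks_rebalance(struct work_struct *work)
	{
		struct ovs_net *ovs_net = container_of(work, struct ovs_net,
						       masks_rebalance.work);
		struct datapath *dp;

		ovs_lock();	/* datapath.c:106 -- where all traces above block */

		list_for_each_entry(dp, &ovs_net->dps, list_node)
			ovs_flow_masks_rebalance(&dp->table);

		ovs_unlock();

		/* Re-arms itself; there is one such work item per ovs_net
		 * (per network namespace), which is consistent with the many
		 * distinct (work_completion) addresses in the lock dump.
		 */
		schedule_delayed_work(&ovs_net->masks_rebalance,
				      msecs_to_jiffies(DP_MASKS_REBALANCE_INTERVAL));
	}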