syzbot


INFO: task hung in ovs_dp_masks_rebalance

Status: fixed on 2020/09/16 22:51
Subsystems: openvswitch
Reported-by: syzbot+bad6507e5db05017b008@syzkaller.appspotmail.com
Fix commit: a65878d6f00b net: openvswitch: fixes potential deadlock in dp cleanup code
First crash: 1374d, last: 1327d
Discussions (2)
Title | Replies (including bot) | Last reply
[PATCH net-next] net: openvswitch: fixes potential deadlock in dp cleanup code | 2 (2) | 2020/07/24 23:59
INFO: task hung in ovs_dp_masks_rebalance | 1 (2) | 2020/07/23 08:30
Similar bugs (3)
Kernel | Title | Count | Last | Reported | Patched | Status
upstream | INFO: task hung in ovs_dp_masks_rebalance (3) [openvswitch] | 1 | 517d | 517d | 0/26 | auto-obsoleted due to no activity on 2023/04/22 10:20
upstream | INFO: task hung in ovs_dp_masks_rebalance (2) [openvswitch] | 68 | 675d | 962d | 0/26 | auto-obsoleted due to no activity on 2022/10/15 22:59
upstream | INFO: task hung in ovs_dp_masks_rebalance (4) [openvswitch] | 2 | 285d | 305d | 0/26 | auto-obsoleted due to no activity on 2023/10/11 14:44

Sample crash report:
INFO: task kworker/1:4:8126 blocked for more than 143 seconds.
      Not tainted 5.9.0-rc1-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:4     state:D stack:26232 pid: 8126 ppid:     2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:3778 [inline]
 __schedule+0x8e5/0x21e0 kernel/sched/core.c:4527
 schedule+0xd0/0x2a0 kernel/sched/core.c:4602
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:4661
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x3e2/0x10e0 kernel/locking/mutex.c:1103
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2377
 process_one_work+0x94c/0x1670 kernel/workqueue.c:2269
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2415
 kthread+0x3b5/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
INFO: task kworker/1:8:11438 blocked for more than 143 seconds.
      Not tainted 5.9.0-rc1-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:8     state:D stack:26768 pid:11438 ppid:     2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:3778 [inline]
 __schedule+0x8e5/0x21e0 kernel/sched/core.c:4527
 schedule+0xd0/0x2a0 kernel/sched/core.c:4602
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:4661
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x3e2/0x10e0 kernel/locking/mutex.c:1103
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2377
 process_one_work+0x94c/0x1670 kernel/workqueue.c:2269
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2415
 kthread+0x3b5/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
INFO: task kworker/0:3:30434 blocked for more than 143 seconds.
      Not tainted 5.9.0-rc1-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:3     state:D stack:27352 pid:30434 ppid:     2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:3778 [inline]
 __schedule+0x8e5/0x21e0 kernel/sched/core.c:4527
 schedule+0xd0/0x2a0 kernel/sched/core.c:4602
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:4661
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x3e2/0x10e0 kernel/locking/mutex.c:1103
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2377
 process_one_work+0x94c/0x1670 kernel/workqueue.c:2269
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2415
 kthread+0x3b5/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
INFO: task kworker/0:0:20548 blocked for more than 143 seconds.
      Not tainted 5.9.0-rc1-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:0     state:D stack:27640 pid:20548 ppid:     2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:3778 [inline]
 __schedule+0x8e5/0x21e0 kernel/sched/core.c:4527
 schedule+0xd0/0x2a0 kernel/sched/core.c:4602
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:4661
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x3e2/0x10e0 kernel/locking/mutex.c:1103
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2377
 process_one_work+0x94c/0x1670 kernel/workqueue.c:2269
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2415
 kthread+0x3b5/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
INFO: task kworker/1:2:7437 blocked for more than 144 seconds.
      Not tainted 5.9.0-rc1-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:2     state:D stack:27768 pid: 7437 ppid:     2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:3778 [inline]
 __schedule+0x8e5/0x21e0 kernel/sched/core.c:4527
 schedule+0xd0/0x2a0 kernel/sched/core.c:4602
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:4661
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x3e2/0x10e0 kernel/locking/mutex.c:1103
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2377
 process_one_work+0x94c/0x1670 kernel/workqueue.c:2269
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2415
 kthread+0x3b5/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
INFO: task kworker/0:2:18538 blocked for more than 144 seconds.
      Not tainted 5.9.0-rc1-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:2     state:D stack:28112 pid:18538 ppid:     2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:3778 [inline]
 __schedule+0x8e5/0x21e0 kernel/sched/core.c:4527
 schedule+0xd0/0x2a0 kernel/sched/core.c:4602
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:4661
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x3e2/0x10e0 kernel/locking/mutex.c:1103
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2377
 process_one_work+0x94c/0x1670 kernel/workqueue.c:2269
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2415
 kthread+0x3b5/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
INFO: task kworker/1:5:32720 blocked for more than 144 seconds.
      Not tainted 5.9.0-rc1-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:5     state:D stack:29104 pid:32720 ppid:     2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 context_switch kernel/sched/core.c:3778 [inline]
 __schedule+0x8e5/0x21e0 kernel/sched/core.c:4527
 schedule+0xd0/0x2a0 kernel/sched/core.c:4602
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:4661
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x3e2/0x10e0 kernel/locking/mutex.c:1103
 ovs_lock net/openvswitch/datapath.c:105 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2377
 process_one_work+0x94c/0x1670 kernel/workqueue.c:2269
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2415
 kthread+0x3b5/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294

Showing all locks held in the system:
2 locks held by kworker/0:1/12:
 #0: ffff8880ae635e18 (&rq->lock){-.-.}-{2:2}, at: rq_lock kernel/sched/sched.h:1292 [inline]
 #0: ffff8880ae635e18 (&rq->lock){-.-.}-{2:2}, at: __schedule+0x232/0x21e0 kernel/sched/core.c:4445
 #1: ffff8880ae620ec8 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x2fb/0x400 kernel/sched/psi.c:833
3 locks held by kworker/1:0/17:
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x82b/0x1670 kernel/workqueue.c:2240
 #1: ffffc90000d8fda8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x85f/0x1670 kernel/workqueue.c:2244
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2377
5 locks held by kworker/u4:2/34:
 #0: ffff8880a97b5138 ((wq_completion)netns){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff8880a97b5138 ((wq_completion)netns){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff8880a97b5138 ((wq_completion)netns){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff8880a97b5138 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff8880a97b5138 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff8880a97b5138 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x82b/0x1670 kernel/workqueue.c:2240
 #1: ffffc90000de7da8 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x85f/0x1670 kernel/workqueue.c:2244
 #2: ffffffff8a7da670 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x9b/0xa00 net/core/net_namespace.c:565
 #3: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #3: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_exit_net+0x1de/0xba0 net/openvswitch/datapath.c:2519
 #4: ffffffff89bdae70 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x44/0x4a0 kernel/rcu/tree.c:3744
1 lock held by khungtaskd/1169:
 #0: ffffffff89bd6900 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:5825
2 locks held by in:imklog/6530:
 #0: ffff88809e8aa3b0 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xe9/0x100 fs/file.c:930
 #1: ffff8880ae635e18 (&rq->lock){-.-.}-{2:2}, at: rq_lock kernel/sched/sched.h:1292 [inline]
 #1: ffff8880ae635e18 (&rq->lock){-.-.}-{2:2}, at: __schedule+0x232/0x21e0 kernel/sched/core.c:4445
3 locks held by kworker/1:4/8126:
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x82b/0x1670 kernel/workqueue.c:2240
 #1: ffffc90015d7fda8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x85f/0x1670 kernel/workqueue.c:2244
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2377
3 locks held by kworker/0:4/8133:
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x82b/0x1670 kernel/workqueue.c:2240
 #1: ffffc90015cafda8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x85f/0x1670 kernel/workqueue.c:2244
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2377
3 locks held by kworker/1:8/11438:
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x82b/0x1670 kernel/workqueue.c:2240
 #1: ffffc90008357da8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x85f/0x1670 kernel/workqueue.c:2244
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2377
3 locks held by kworker/0:3/30434:
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x82b/0x1670 kernel/workqueue.c:2240
 #1: ffffc900072bfda8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x85f/0x1670 kernel/workqueue.c:2244
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2377
3 locks held by kworker/0:0/20548:
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x82b/0x1670 kernel/workqueue.c:2240
 #1: ffffc900086c7da8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x85f/0x1670 kernel/workqueue.c:2244
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2377
3 locks held by kworker/1:2/7437:
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x82b/0x1670 kernel/workqueue.c:2240
 #1: ffffc90006617da8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x85f/0x1670 kernel/workqueue.c:2244
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2377
3 locks held by kworker/0:2/18538:
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x82b/0x1670 kernel/workqueue.c:2240
 #1: ffffc90015dffda8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x85f/0x1670 kernel/workqueue.c:2244
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2377
3 locks held by kworker/1:3/30420:
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x82b/0x1670 kernel/workqueue.c:2240
 #1: ffffc900060e7da8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x85f/0x1670 kernel/workqueue.c:2244
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2377
3 locks held by kworker/1:5/32720:
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x82b/0x1670 kernel/workqueue.c:2240
 #1: ffffc90006107da8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x85f/0x1670 kernel/workqueue.c:2244
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2377
3 locks held by kworker/0:5/2283:
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x82b/0x1670 kernel/workqueue.c:2240
 #1: ffffc90006007da8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x85f/0x1670 kernel/workqueue.c:2244
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2377
3 locks held by kworker/0:6/2522:
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x82b/0x1670 kernel/workqueue.c:2240
 #1: ffffc9000840fda8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x85f/0x1670 kernel/workqueue.c:2244
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2377
3 locks held by kworker/0:10/3687:
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff8880aa063d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x82b/0x1670 kernel/workqueue.c:2240
 #1: ffffc9000847fda8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x85f/0x1670 kernel/workqueue.c:2244
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #2: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2377
2 locks held by syz-executor.3/4818:
 #0: ffffffff8a845510 (cb_lock){++++}-{3:3}, at: genl_rcv+0x15/0x40 net/netlink/genetlink.c:741
 #1: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #1: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_cmd_new+0x4db/0xea0 net/openvswitch/datapath.c:1707
2 locks held by syz-executor.3/4845:
 #0: ffffffff8a845510 (cb_lock){++++}-{3:3}, at: genl_rcv+0x15/0x40 net/netlink/genetlink.c:741
 #1: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:105 [inline]
 #1: ffffffff8aa8e368 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_cmd_new+0x4db/0xea0 net/openvswitch/datapath.c:1707

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 1169 Comm: khungtaskd Not tainted 5.9.0-rc1-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x18f/0x20d lib/dump_stack.c:118
 nmi_cpu_backtrace.cold+0x70/0xb1 lib/nmi_backtrace.c:101
 nmi_trigger_cpumask_backtrace+0x1b3/0x223 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:146 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:209 [inline]
 watchdog+0xd7d/0x1000 kernel/hung_task.c:295
 kthread+0x3b5/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:294
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 3897 Comm: systemd-journal Not tainted 5.9.0-rc1-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:lookup_chain_cache kernel/locking/lockdep.c:3109 [inline]
RIP: 0010:lookup_chain_cache_add kernel/locking/lockdep.c:3128 [inline]
RIP: 0010:validate_chain kernel/locking/lockdep.c:3183 [inline]
RIP: 0010:__lock_acquire+0x1750/0x5640 kernel/locking/lockdep.c:4426
Code: 74 5d 49 bd 00 00 00 00 00 fc ff df 48 8b 54 24 08 eb 06 49 83 ec 08 74 46 49 8d 7c 24 18 48 89 f8 48 c1 e8 03 42 80 3c 28 00 <0f> 85 43 2a 00 00 49 8b 44 24 18 48 39 c2 0f 84 26 f5 ff ff 49 8d
RSP: 0018:ffffc90000007958 EFLAGS: 00000046
RAX: 1ffffffff18006a3 RBX: 00000000000077c9 RCX: ffffffff815a14fb
RDX: c999a7ca4e5000a8 RSI: 0000000000000008 RDI: ffffffff8c003518
RBP: ffff8880937fcb90 R08: 0000000000000000 R09: ffffffff8c5f39e7
R10: fffffbfff18be73c R11: 0000000000000000 R12: ffffffff8c003500
R13: dffffc0000000000 R14: ffff8880937fc280 R15: 0000000000000000
FS:  00007fe7a896b8c0(0000) GS:ffff8880ae600000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fe7a4868000 CR3: 0000000093b25000 CR4: 00000000001506f0
DR0: 0000000080000001 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000600
Call Trace:
 <IRQ>
 lock_acquire+0x1f1/0xad0 kernel/locking/lockdep.c:5005
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0x8c/0xc0 kernel/locking/spinlock.c:159
 debug_object_activate+0x12e/0x3e0 lib/debugobjects.c:636
 debug_work_activate kernel/workqueue.c:490 [inline]
 __queue_work+0x92/0xf20 kernel/workqueue.c:1409
 call_timer_fn+0x1ac/0x760 kernel/time/timer.c:1413
 expire_timers kernel/time/timer.c:1453 [inline]
 __run_timers.part.0+0x4a6/0xaa0 kernel/time/timer.c:1755
 __run_timers kernel/time/timer.c:1736 [inline]
 run_timer_softirq+0xae/0x1a0 kernel/time/timer.c:1768
 __do_softirq+0x2de/0xa24 kernel/softirq.c:298
 asm_call_on_stack+0xf/0x20 arch/x86/entry/entry_64.S:706
 </IRQ>
 __run_on_irqstack arch/x86/include/asm/irq_stack.h:22 [inline]
 run_on_irqstack_cond arch/x86/include/asm/irq_stack.h:48 [inline]
 do_softirq_own_stack+0x9d/0xd0 arch/x86/kernel/irq_64.c:77
 invoke_softirq kernel/softirq.c:393 [inline]
 __irq_exit_rcu kernel/softirq.c:423 [inline]
 irq_exit_rcu+0x1f3/0x230 kernel/softirq.c:435
 sysvec_apic_timer_interrupt+0x51/0xf0 arch/x86/kernel/apic/apic.c:1091
 asm_sysvec_apic_timer_interrupt+0x12/0x20 arch/x86/include/asm/idtentry.h:581
RIP: 0010:arch_local_irq_restore arch/x86/include/asm/paravirt.h:770 [inline]
RIP: 0010:kmem_cache_free.part.0+0x8c/0x1f0 mm/slab.c:3694
Code: e8 89 25 00 00 84 c0 74 76 41 f7 c4 00 02 00 00 74 4e e8 77 f7 c5 ff 48 83 3d f7 2c 02 08 00 0f 84 2f 01 00 00 4c 89 e7 57 9d <0f> 1f 44 00 00 4c 8b 64 24 20 0f 1f 44 00 00 65 8b 05 6e 84 4d 7e
RSP: 0018:ffffc90005607e20 EFLAGS: 00000286
RAX: 000000000082d619 RBX: ffff88809bf8e1c0 RCX: 1ffffffff1563fd1
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000286
RBP: ffff8880aa241e00 R08: 0000000000000001 R09: 0000000000000001
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000286
R13: ffffffff8358c324 R14: 0000000000000000 R15: ffff8880947a3cd8
 security_file_free+0xa4/0xd0 security/security.c:1474
 file_free fs/file_table.c:55 [inline]
 __fput+0x3d7/0x920 fs/file_table.c:299
 task_work_run+0xdd/0x190 kernel/task_work.c:141
 tracehook_notify_resume include/linux/tracehook.h:188 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:140 [inline]
 exit_to_user_mode_prepare+0x195/0x1c0 kernel/entry/common.c:167
 syscall_exit_to_user_mode+0x59/0x2b0 kernel/entry/common.c:242
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x7fe7a7efb840
Code: 73 01 c3 48 8b 0d 68 77 20 00 f7 d8 64 89 01 48 83 c8 ff c3 66 0f 1f 44 00 00 83 3d 89 bb 20 00 00 75 10 b8 02 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 31 c3 48 83 ec 08 e8 1e f6 ff ff 48 89 04 24
RSP: 002b:00007fff2bd24d28 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
RAX: fffffffffffffffe RBX: 00007fff2bd25030 RCX: 00007fe7a7efb840
RDX: 00000000000001a0 RSI: 0000000000080042 RDI: 00005628e95292b0
RBP: 000000000000000d R08: 0000000000000000 R09: 00000000ffffffff
R10: 0000000000000069 R11: 0000000000000246 R12: 00000000ffffffff
R13: 00005628e951b040 R14: 00007fff2bd24ff0 R15: 00005628e95290d0
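
All of the hung tasks above are events workers serialized on ovs_mutex: each queued ovs_dp_masks_rebalance() blocks in __mutex_lock() while the netns cleanup worker (kworker/u4:2/34) holds ovs_mutex in ovs_exit_net() and waits in rcu_barrier(). The fix commit's subject points at that cleanup path. A classic way such cleanup code can deadlock outright is cancelling a self-requeuing work item synchronously while still holding the mutex the work itself needs. The sketch below illustrates that deadlock class and the conventional fix of cancelling after unlock; the demo_* names are hypothetical stand-ins for ovs_mutex and the masks_rebalance delayed work, not the upstream diff (see commit a65878d6f00b for the actual change).

	/* Hedged sketch of the deadlock class named in the fix commit subject.
	 * All demo_* names are hypothetical; this is not the upstream diff.
	 */
	#include <linux/jiffies.h>
	#include <linux/mutex.h>
	#include <linux/workqueue.h>

	static DEFINE_MUTEX(demo_mutex);		/* stands in for ovs_mutex */

	static void demo_rebalance_fn(struct work_struct *work);
	/* stands in for ovs_net->masks_rebalance */
	static DECLARE_DELAYED_WORK(demo_rebalance, demo_rebalance_fn);

	static void demo_rebalance_fn(struct work_struct *work)
	{
		mutex_lock(&demo_mutex);	/* parks in D state while cleanup holds it */
		/* ... rebalance flow-table masks ... */
		mutex_unlock(&demo_mutex);
		schedule_delayed_work(&demo_rebalance, HZ);	/* self-requeues */
	}

	static void demo_exit_net_buggy(void)
	{
		mutex_lock(&demo_mutex);
		/* ... destroy datapaths, possibly waiting for RCU ... */

		/* DEADLOCK: this waits for a running demo_rebalance_fn() to
		 * finish, but that instance is blocked on demo_mutex, held here.
		 */
		cancel_delayed_work_sync(&demo_rebalance);
		mutex_unlock(&demo_mutex);
	}

	static void demo_exit_net_fixed(void)
	{
		mutex_lock(&demo_mutex);
		/* ... destroy datapaths ... */
		mutex_unlock(&demo_mutex);

		/* Safe: the running instance can now take the mutex and finish,
		 * so cancel_delayed_work_sync() flushes it without a circular wait.
		 */
		cancel_delayed_work_sync(&demo_rebalance);
	}

Because the rebalance work self-requeues, every instance that piles up behind the held mutex ties down another kworker in D state, which is why the hung-task watchdog reports so many distinct workers blocked in the same ovs_dp_masks_rebalance() frame.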

Crashes (771):
Time Kernel Commit Syzkaller Config Log Report Manager
2020/09/03 22:41 bpf 21e9ba5373fc abf9ba4f .config console log report ci-upstream-bpf-kasan-gce
2020/09/03 16:03 bpf-next 1ba5fe2facf7 abf9ba4f .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/26 10:17 bpf-next a890cd860ef2 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/26 09:17 bpf-next a890cd860ef2 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/26 08:12 bpf-next a890cd860ef2 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/26 07:45 bpf-next a890cd860ef2 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/26 06:43 bpf-next a890cd860ef2 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/26 06:35 bpf-next a890cd860ef2 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/26 05:35 bpf-next a890cd860ef2 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/26 05:09 bpf-next a890cd860ef2 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/26 04:09 bpf-next a890cd860ef2 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/26 04:06 bpf-next a890cd860ef2 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/26 03:00 bpf-next a890cd860ef2 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/26 01:55 bpf-next a890cd860ef2 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/26 01:06 bpf-next a890cd860ef2 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/26 00:03 bpf-next a890cd860ef2 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 22:22 bpf-next a890cd860ef2 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 20:19 bpf-next 5ab739372a14 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 19:35 bpf-next 5ab739372a14 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 18:30 bpf-next 5ab739372a14 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 18:06 bpf-next 5ab739372a14 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 17:04 bpf-next 5ab739372a14 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 16:31 bpf-next 5ab739372a14 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 15:30 bpf-next 5ab739372a14 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 14:58 bpf-next 5ab739372a14 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 13:53 bpf-next 5ab739372a14 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 13:38 bpf-next 5ab739372a14 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 12:34 bpf-next 5ab739372a14 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 11:24 bpf-next 5ab739372a14 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 09:59 bpf-next 5ab739372a14 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 08:58 bpf-next 5ab739372a14 1f7cc1ca .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 06:09 bpf-next e0145983a8af 554af388 .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 05:49 bpf-next e0145983a8af 554af388 .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 04:43 bpf-next e0145983a8af 554af388 .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 04:35 bpf-next e0145983a8af 554af388 .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 03:23 bpf-next e0145983a8af 554af388 .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 02:34 bpf-next e0145983a8af 554af388 .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 01:32 bpf-next e0145983a8af 554af388 .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/25 01:04 bpf-next e0145983a8af 554af388 .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/24 23:54 bpf-next e0145983a8af 554af388 .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/24 23:03 bpf-next e0145983a8af 554af388 .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/24 22:37 bpf-next e0145983a8af 554af388 .config console log report ci-upstream-bpf-next-kasan-gce
2020/07/23 17:44 net-next-old 7fc3b978a897 70c104a1 .config console log report ci-upstream-net-kasan-gce
2020/07/22 18:49 net-next-old fa56a987449b 128cd85f .config console log report ci-upstream-net-kasan-gce
2020/07/19 05:57 net-next-old a050d82f5b04 9c812472 .config console log report ci-upstream-net-kasan-gce
2020/07/18 20:57 net-next-old a050d82f5b04 9c812472 .config console log report ci-upstream-net-kasan-gce