syzbot


INFO: task hung in ovs_dp_masks_rebalance (2)

Status: auto-obsoleted due to no activity on 2022/10/15 22:59
Subsystems: openvswitch
Reported-by: syzbot+45a5ef16ab5bc505d42a@syzkaller.appspotmail.com
First crash: 1312d, last: 675d
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] INFO: task hung in ovs_dp_masks_rebalance (2) | 0 (1) | 2021/09/03 21:40
Similar bugs (3)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: task hung in ovs_dp_masks_rebalance (3) (openvswitch) | - | - | - | 1 | 517d | 517d | 0/26 | auto-obsoleted due to no activity on 2023/04/22 10:20
upstream | INFO: task hung in ovs_dp_masks_rebalance (openvswitch) | - | - | - | 771 | 1327d | 1370d | 15/26 | fixed on 2020/09/16 22:51
upstream | INFO: task hung in ovs_dp_masks_rebalance (4) (openvswitch) | - | - | - | 2 | 284d | 304d | 0/26 | auto-obsoleted due to no activity on 2023/10/11 14:44

Sample crash report:
INFO: task kworker/0:0:6 blocked for more than 143 seconds.
      Not tainted 5.18.0-rc2-syzkaller-00219-g028192fea1de #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:0     state:D stack:23992 pid:    6 ppid:     2 flags:0x00004000
Workqueue: events ovs_dp_masks_rebalance
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5073 [inline]
 __schedule+0xa9a/0x4cc0 kernel/sched/core.c:6388
 schedule+0xd2/0x1f0 kernel/sched/core.c:6460
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6519
 __mutex_lock_common kernel/locking/mutex.c:673 [inline]
 __mutex_lock+0xa32/0x12f0 kernel/locking/mutex.c:733
 ovs_lock net/openvswitch/datapath.c:107 [inline]
 ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2462
 process_one_work+0x996/0x1610 kernel/workqueue.c:2289
 worker_thread+0x665/0x1080 kernel/workqueue.c:2436
 kthread+0x2e9/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
 </TASK>

Showing all locks held in the system:
3 locks held by kworker/0:0/6:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
 #1: ffffc900000b7da8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264
 #2: ffffffff8d9c46e8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
 #2: ffffffff8d9c46e8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2462
1 lock held by khungtaskd/28:
 #0: ffffffff8bd81de0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6467
2 locks held by getty/3273:
 #0: ffff88807eb50098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x22/0x80 drivers/tty/tty_ldisc.c:244
 #1: ffffc90002e732e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0xcea/0x1230 drivers/tty/n_tty.c:2075
3 locks held by kworker/1:3/3674:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
 #1: ffffc900041efda8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264
 #2: ffffffff8d9c46e8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
 #2: ffffffff8d9c46e8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2462
3 locks held by kworker/0:6/3680:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
 #1: ffffc9000424fda8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264
 #2: ffffffff8d9c46e8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
 #2: ffffffff8d9c46e8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2462
3 locks held by kworker/0:5/30863:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
 #1: ffffc9000b12fda8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264
 #2: ffffffff8d9c46e8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
 #2: ffffffff8d9c46e8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2462
6 locks held by kworker/u4:6/13969:
 #0: ffff888010dbd938 ((wq_completion)netns){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010dbd938 ((wq_completion)netns){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010dbd938 ((wq_completion)netns){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010dbd938 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
 #0: ffff888010dbd938 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
 #0: ffff888010dbd938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
 #1: ffffc9000a67fda8 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264
 #2: ffffffff8d53c010 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x9b/0xb00 net/core/net_namespace.c:556
 #3: ffffffff8d9c46e8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
 #3: ffffffff8d9c46e8 (ovs_mutex){+.+.}-{3:3}, at: ovs_exit_net+0x192/0xbc0 net/openvswitch/datapath.c:2610
 #4: ffffffff8d54fae8 (rtnl_mutex){+.+.}-{3:3}, at: internal_dev_destroy+0x6f/0x150 net/openvswitch/vport-internal_dev.c:183
 #5: ffffffff8bd8bba0 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:290 [inline]
 #5: ffffffff8bd8bba0 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x4fa/0x620 kernel/rcu/tree_exp.h:841
3 locks held by kworker/1:4/31163:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
 #1: ffffc90017f8fda8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264
 #2: ffffffff8d9c46e8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
 #2: ffffffff8d9c46e8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2462
3 locks held by kworker/1:6/15474:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
 #1: ffffc9000300fda8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264
 #2: ffffffff8d9c46e8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
 #2: ffffffff8d9c46e8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2462
3 locks held by kworker/1:7/15475:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
 #1: ffffc9000301fda8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264
 #2: ffffffff8d9c46e8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
 #2: ffffffff8d9c46e8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2462
3 locks held by kworker/0:3/15548:
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
 #0: ffff888010c64d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
 #1: ffffc9000306fda8 ((work_completion)(&(&ovs_net->masks_rebalance)->work)){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264
 #2: ffffffff8d9c46e8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
 #2: ffffffff8d9c46e8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_masks_rebalance+0x20/0xf0 net/openvswitch/datapath.c:2462
2 locks held by kworker/0:7/16088:
 #0: ffff888010c66538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010c66538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888010c66538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888010c66538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
 #0: ffff888010c66538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
 #0: ffff888010c66538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
 #1: ffffc9000426fda8 ((work_completion)(&rew.rew_work)){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264
2 locks held by syz-executor.5/17176:
 #0: ffffffff8d5e9290 (cb_lock){++++}-{3:3}, at: genl_rcv+0x15/0x40 net/netlink/genetlink.c:802
 #1: ffffffff8d9c46e8 (ovs_mutex){+.+.}-{3:3}, at: ovs_lock net/openvswitch/datapath.c:107 [inline]
 #1: ffffffff8d9c46e8 (ovs_mutex){+.+.}-{3:3}, at: ovs_dp_cmd_new+0x51e/0x1120 net/openvswitch/datapath.c:1784

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 28 Comm: khungtaskd Not tainted 5.18.0-rc2-syzkaller-00219-g028192fea1de #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
 nmi_cpu_backtrace.cold+0x47/0x144 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1e6/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:146 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:212 [inline]
 watchdog+0xc1d/0xf50 kernel/hung_task.c:369
 kthread+0x2e9/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 16016 Comm: kworker/u4:39 Not tainted 5.18.0-rc2-syzkaller-00219-g028192fea1de #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: bat_events batadv_nc_worker
RIP: 0010:lookup_chain_cache kernel/locking/lockdep.c:3697 [inline]
RIP: 0010:lookup_chain_cache_add kernel/locking/lockdep.c:3716 [inline]
RIP: 0010:validate_chain kernel/locking/lockdep.c:3771 [inline]
RIP: 0010:__lock_acquire+0x16ce/0x56c0 kernel/locking/lockdep.c:5029
Code: ba 00 00 00 00 00 fc ff df eb 06 48 83 eb 08 74 40 48 8d 7b 18 48 89 f8 48 c1 e8 03 80 3c 10 00 0f 85 c2 30 00 00 48 8b 43 18 <49> 39 c7 0f 84 db f5 ff ff 48 8d 7b 08 48 89 f8 48 c1 e8 03 80 3c
RSP: 0018:ffffc900001e0a30 EFLAGS: 00000046
RAX: 65080dd34f11b8cc RBX: ffffffff8f3b5f00 RCX: ffffffff815d52de
RDX: dffffc0000000000 RSI: 0000000000000008 RDI: ffffffff8f3b5f18
RBP: 000000000000dc5a R08: 0000000000000000 R09: ffffffff9003297f
R10: fffffbfff200652f R11: 0000000000000001 R12: ffff88802993e200
R13: ffff88802993d700 R14: 0000000000000000 R15: 65080dd34f11b8cc
FS:  0000000000000000(0000) GS:ffff8880b9d00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000c003dea000 CR3: 000000007eabe000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <IRQ>
 lock_acquire kernel/locking/lockdep.c:5641 [inline]
 lock_acquire+0x1ab/0x510 kernel/locking/lockdep.c:5606
 __raw_read_lock_irqsave include/linux/rwlock_api_smp.h:160 [inline]
 _raw_read_lock_irqsave+0x45/0x90 kernel/locking/spinlock.c:236
 mISDN_clock_get+0x14/0x60 drivers/isdn/mISDN/clock.c:187
 dsp_cmx_send+0xe6e/0x1580 drivers/isdn/mISDN/dsp_cmx.c:1650
 call_timer_fn+0x1a5/0x6b0 kernel/time/timer.c:1421
 expire_timers kernel/time/timer.c:1466 [inline]
 __run_timers.part.0+0x67c/0xa30 kernel/time/timer.c:1734
 __run_timers kernel/time/timer.c:1715 [inline]
 run_timer_softirq+0xb3/0x1d0 kernel/time/timer.c:1747
 __do_softirq+0x29b/0x9c2 kernel/softirq.c:558
 invoke_softirq kernel/softirq.c:432 [inline]
 __irq_exit_rcu+0x123/0x180 kernel/softirq.c:637
 irq_exit_rcu+0x5/0x20 kernel/softirq.c:649
 sysvec_apic_timer_interrupt+0x93/0xc0 arch/x86/kernel/apic/apic.c:1097
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x12/0x20 arch/x86/include/asm/idtentry.h:645
RIP: 0010:lock_release+0x3f1/0x720 kernel/locking/lockdep.c:5649
Code: 7e 83 f8 01 0f 85 8d 01 00 00 9c 58 f6 c4 02 0f 85 78 01 00 00 48 f7 04 24 00 02 00 00 74 01 fb 48 b8 00 00 00 00 00 fc ff df <48> 01 c5 48 c7 45 00 00 00 00 00 c7 45 08 00 00 00 00 48 8b 84 24
RSP: 0018:ffffc9001584fbc0 EFLAGS: 00000206
RAX: dffffc0000000000 RBX: d38165a365746623 RCX: ffffc9001584fc10
RDX: 1ffff11005327c2a RSI: 0000000000000000 RDI: 0000000000000000
RBP: 1ffff92002b09f7a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000002
R13: 0000000000000003 R14: ffff88802993e158 R15: ffff88802993d700
 rcu_lock_release include/linux/rcupdate.h:273 [inline]
 rcu_read_unlock include/linux/rcupdate.h:727 [inline]
 batadv_nc_purge_orig_hash net/batman-adv/network-coding.c:412 [inline]
 batadv_nc_worker+0x86b/0xfa0 net/batman-adv/network-coding.c:719
 process_one_work+0x996/0x1610 kernel/workqueue.c:2289
 worker_thread+0x665/0x1080 kernel/workqueue.c:2436
 kthread+0x2e9/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
 </TASK>
----------------
Code disassembly (best guess):
   0:	ba 00 00 00 00       	mov    $0x0,%edx
   5:	00 fc                	add    %bh,%ah
   7:	ff                   	(bad)
   8:	df eb                	fucomip %st(3),%st
   a:	06                   	(bad)
   b:	48 83 eb 08          	sub    $0x8,%rbx
   f:	74 40                	je     0x51
  11:	48 8d 7b 18          	lea    0x18(%rbx),%rdi
  15:	48 89 f8             	mov    %rdi,%rax
  18:	48 c1 e8 03          	shr    $0x3,%rax
  1c:	80 3c 10 00          	cmpb   $0x0,(%rax,%rdx,1)
  20:	0f 85 c2 30 00 00    	jne    0x30e8
  26:	48 8b 43 18          	mov    0x18(%rbx),%rax
* 2a:	49 39 c7             	cmp    %rax,%r15 <-- trapping instruction
  2d:	0f 84 db f5 ff ff    	je     0xfffff60e
  33:	48 8d 7b 08          	lea    0x8(%rbx),%rdi
  37:	48 89 f8             	mov    %rdi,%rax
  3a:	48 c1 e8 03          	shr    $0x3,%rax
  3e:	80                   	.byte 0x80
  3f:	3c                   	.byte 0x3c

Crashes (68):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2022/04/15 16:21 upstream 028192fea1de 8bcc32a6 .config console log report info ci-upstream-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2022/01/15 13:20 upstream 112450df61b7 723cfaf0 .config console log report info ci-upstream-kasan-gce-root INFO: task hung in ovs_dp_masks_rebalance
2022/01/12 04:05 upstream 6f38be8f2ccd 44d1319a .config console log report info ci-upstream-kasan-gce-root INFO: task hung in ovs_dp_masks_rebalance
2021/12/06 07:28 upstream 944207047ca4 a617004c .config console log report info ci-upstream-kasan-gce-root INFO: task hung in ovs_dp_masks_rebalance
2021/11/23 00:45 upstream 136057256686 545ab074 .config console log report info ci-upstream-kasan-gce-root INFO: task hung in ovs_dp_masks_rebalance
2021/11/16 16:59 upstream 8ab774587903 600426bd .config console log report info ci-upstream-kasan-gce-smack-root INFO: task hung in ovs_dp_masks_rebalance
2021/11/04 23:51 upstream 7ddb58cb0eca 4c1be0be .config console log report info ci-upstream-kasan-gce-root INFO: task hung in ovs_dp_masks_rebalance
2021/11/03 23:00 upstream dcd68326d29b 4c1be0be .config console log report info ci-upstream-kasan-gce-selinux-root INFO: task hung in ovs_dp_masks_rebalance
2021/09/27 15:23 upstream 5816b3e6577e 78494d16 .config console log report info ci-upstream-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/09/27 04:41 upstream 996148ee05d0 78494d16 .config console log report info ci-upstream-kasan-gce-smack-root INFO: task hung in ovs_dp_masks_rebalance
2021/08/30 18:54 upstream 7d2a07b76933 8f58a0ef .config console log report info ci-upstream-kasan-gce-selinux-root INFO: task hung in ovs_dp_masks_rebalance
2021/07/06 03:50 upstream 3dbdb38e2869 55aa55c2 .config console log report info ci-upstream-kasan-gce-smack-root INFO: task hung in ovs_dp_masks_rebalance
2022/02/01 13:10 bpf e2bcbd7769ee c1c1631d .config console log report info ci-upstream-bpf-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2022/01/27 07:50 bpf e2bcbd7769ee 2cbffd88 .config console log report info ci-upstream-bpf-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2022/01/04 23:48 bpf d6d86830705f 0a2584dd .config console log report info ci-upstream-bpf-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/12/31 19:58 bpf 819d11507f66 e1768e9c .config console log report info ci-upstream-bpf-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/12/31 04:49 bpf 819d11507f66 36bd2e48 .config console log report info ci-upstream-bpf-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/11/18 03:17 net-old c366ce28750e cafff8b6 .config console log report info ci-upstream-net-this-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/10/26 08:02 bpf 04f8ef5643bc c1132b49 .config console log report info ci-upstream-bpf-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/10/19 09:58 bpf 732b74d64704 24dc29db .config console log report info ci-upstream-bpf-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/09/26 01:21 bpf a3debf177f21 8cac236e .config console log report info ci-upstream-bpf-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/09/06 18:54 bpf 57f780f1c433 6ca60148 .config console log report info ci-upstream-bpf-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/09/02 14:43 bpf 57f780f1c433 15cea0a3 .config console log report info ci-upstream-bpf-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/08/30 21:24 bpf 57f780f1c433 8f58a0ef .config console log report info ci-upstream-bpf-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/08/16 17:53 bpf 3776f3517ed9 33c26cb7 .config console log report info ci-upstream-bpf-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/07/23 14:49 bpf d6371c76e20d bc5f1d88 .config console log report info ci-upstream-bpf-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/07/18 05:09 bpf a6c39de76d70 f115ae98 .config console log report info ci-upstream-bpf-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/07/17 05:25 bpf 20192d9c9f6a f115ae98 .config console log report info ci-upstream-bpf-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/06/26 16:35 bpf 3db6735f2ef4 9d2ab5df .config console log report info ci-upstream-bpf-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/04/03 18:16 bpf a14d273ba159 6a81331a .config console log report info ci-upstream-bpf-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/02/23 08:17 bpf 33ccec5fd740 c26fb06b .config console log report info ci-upstream-bpf-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/02/07 04:00 bpf 6183f4d3a0a2 0655e081 .config console log report info ci-upstream-bpf-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/01/27 11:16 bpf 150a27328b68 a0ebf917 .config console log report info ci-upstream-bpf-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2022/06/17 22:58 net-next-old 4875d94c69d5 cb58b3b2 .config console log report info ci-upstream-net-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2022/03/05 23:54 net-next-old d59e3cbaef70 7bdd8b2c .config console log report info ci-upstream-net-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2022/01/26 22:24 bpf-next c446fdacb10d 2cbffd88 .config console log report info ci-upstream-bpf-next-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/12/16 14:59 bpf-next 4b443bc1785f 8dd6a5e3 .config console log report info ci-upstream-bpf-next-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/12/10 19:36 bpf-next 666af7064562 49ca1f59 .config console log report info ci-upstream-bpf-next-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/12/03 14:30 bpf-next 080a70b21f47 c7c20675 .config console log report info ci-upstream-bpf-next-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/11/30 19:42 bpf-next c291d0a4d169 80270552 .config console log report info ci-upstream-bpf-next-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/11/27 06:46 bpf-next e32cb12ff52a 63eeac02 .config console log report info ci-upstream-bpf-next-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/11/13 10:37 bpf-next 325d956d6717 83f5c9b5 .config console log report info ci-upstream-bpf-next-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/10/05 06:57 bpf-next 0693b27644f0 ce697b49 .config console log report info ci-upstream-bpf-next-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/10/02 19:31 bpf-next d636c8da2d60 db0f5787 .config console log report info ci-upstream-bpf-next-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/10/02 11:01 net-next-old 20ab39d13e2e db0f5787 .config console log report info ci-upstream-net-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/09/15 06:46 net-next-old 339133f6c318 07e953c1 .config console log report info ci-upstream-net-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/08/15 18:49 bpf-next fa183a86eefd 2489ab88 .config console log report info ci-upstream-bpf-next-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/07/27 17:53 bpf-next 793eccae89bb fd511809 .config console log report info ci-upstream-bpf-next-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/07/17 02:47 bpf-next c50524ec4e3a f115ae98 .config console log report info ci-upstream-bpf-next-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/07/09 14:21 bpf-next eff94154cc1a 281e815f .config console log report info ci-upstream-bpf-next-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/07/07 10:33 net-next-old 5e437416ff66 4846d5c1 .config console log report info ci-upstream-net-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/06/29 23:37 bpf-next 84fe73996c2e a4fccb01 .config console log report info ci-upstream-bpf-next-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/05/21 08:45 bpf-next a49e72b3bda7 3c7fef33 .config console log report info ci-upstream-bpf-next-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/03/04 00:33 bpf-next 303dcc25b5c7 06ed56cd .config console log report info ci-upstream-bpf-next-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2021/02/01 21:06 bpf-next 61ca36c8c4eb e6b95f32 .config console log report info ci-upstream-bpf-next-kasan-gce INFO: task hung in ovs_dp_masks_rebalance
2022/01/13 00:16 linux-next 32ce2abb03cf 44d1319a .config console log report info ci-upstream-linux-next-kasan-gce-root INFO: task hung in ovs_dp_masks_rebalance
2021/12/06 03:22 linux-next f81e94e91878 a617004c .config console log report info ci-upstream-linux-next-kasan-gce-root INFO: task hung in ovs_dp_masks_rebalance
2021/12/05 04:34 linux-next f81e94e91878 a617004c .config console log report info ci-upstream-linux-next-kasan-gce-root INFO: task hung in ovs_dp_masks_rebalance
2021/04/08 04:44 linux-next 5103a5be098c 6a81331a .config console log report info ci-upstream-linux-next-kasan-gce-root INFO: task hung in ovs_dp_masks_rebalance
2020/09/26 20:28 bpf 87f92ac4c127 2d5ea0cb .config console log report info ci-upstream-bpf-kasan-gce
2020/09/18 21:38 bpf 642e450b6b59 53ce8104 .config console log report info ci-upstream-bpf-kasan-gce
2020/12/16 05:20 bpf-next 3db1a3fa9880 f213e07e .config console log report info ci-upstream-bpf-next-kasan-gce
2020/12/11 09:58 bpf-next 41003dd0241c f900b48c .config console log report info ci-upstream-bpf-next-kasan-gce
2020/10/28 08:49 bpf-next 3cb12d27ff65 96e03c1c .config console log report info ci-upstream-bpf-next-kasan-gce
2020/10/24 05:30 bpf-next 9ff9b0d392ea 2bb6666c .config console log report info ci-upstream-bpf-next-kasan-gce
2020/10/17 23:36 bpf-next 9ff9b0d392ea fea47c01 .config console log report info ci-upstream-bpf-next-kasan-gce
2020/10/08 12:28 bpf-next 1e9259eca8fd 1880b4a9 .config console log report info ci-upstream-bpf-next-kasan-gce