syzbot


INFO: task hung in affine_move_task

Status: auto-closed as invalid on 2021/01/17 23:07
Subsystems: btrfs
First crash: 1252d, last: 1252d

Sample crash report:
INFO: task rcu_gp:3 blocked for more than 143 seconds.
      Not tainted 5.10.0-rc4-next-20201118-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:rcu_gp          state:D stack:29040 pid:    3 ppid:     2 flags:0x00004000
Workqueue:  0x0 (rcu_gp)
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 schedule_timeout+0x1d8/0x250 kernel/time/timer.c:1847
 do_wait_for_common kernel/sched/completion.c:85 [inline]
 __wait_for_common kernel/sched/completion.c:106 [inline]
 wait_for_common kernel/sched/completion.c:117 [inline]
 wait_for_completion+0x163/0x260 kernel/sched/completion.c:138
 affine_move_task+0x401/0x910 kernel/sched/core.c:2261
 __set_cpus_allowed_ptr+0x2d2/0x3a0 kernel/sched/core.c:2353
 worker_attach_to_pool+0x7c/0x290 kernel/workqueue.c:1852
 rescuer_thread+0x3af/0xd30 kernel/workqueue.c:2506
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task kworker/0:0:5 blocked for more than 143 seconds.
      Not tainted 5.10.0-rc4-next-20201118-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:0     state:D stack:26968 pid:    5 ppid:     2 flags:0x00004000
Workqueue: events pwq_unbound_release_workfn
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 exp_funnel_lock kernel/rcu/tree_exp.h:313 [inline]
 synchronize_rcu_expedited+0x59d/0x610 kernel/rcu/tree_exp.h:836
 synchronize_rcu+0xdf/0x180 kernel/rcu/tree.c:3699
 wq_unregister_lockdep kernel/workqueue.c:3464 [inline]
 pwq_unbound_release_workfn+0x227/0x2d0 kernel/workqueue.c:3696
 process_one_work+0x933/0x15a0 kernel/workqueue.c:2272
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2418
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task kworker/0:1:7 blocked for more than 143 seconds.
      Not tainted 5.10.0-rc4-next-20201118-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:1     state:D stack:24344 pid:    7 ppid:     2 flags:0x00004000
Workqueue: events pwq_unbound_release_workfn
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 exp_funnel_lock kernel/rcu/tree_exp.h:313 [inline]
 synchronize_rcu_expedited+0x59d/0x610 kernel/rcu/tree_exp.h:836
 synchronize_rcu+0xdf/0x180 kernel/rcu/tree.c:3699
 wq_unregister_lockdep kernel/workqueue.c:3464 [inline]
 pwq_unbound_release_workfn+0x227/0x2d0 kernel/workqueue.c:3696
 process_one_work+0x933/0x15a0 kernel/workqueue.c:2272
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2418
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task mm_percpu_wq:9 blocked for more than 144 seconds.
      Not tainted 5.10.0-rc4-next-20201118-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:mm_percpu_wq    state:D stack:29968 pid:    9 ppid:     2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5157
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x81a/0x1110 kernel/locking/mutex.c:1103
 worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
 rescuer_thread+0x3af/0xd30 kernel/workqueue.c:2506
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task kworker/0:2:3002 blocked for more than 144 seconds.
      Not tainted 5.10.0-rc4-next-20201118-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:2     state:D stack:27080 pid: 3002 ppid:     2 flags:0x00004000
Workqueue: events pwq_unbound_release_workfn
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 exp_funnel_lock kernel/rcu/tree_exp.h:313 [inline]
 synchronize_rcu_expedited+0x59d/0x610 kernel/rcu/tree_exp.h:836
 synchronize_rcu+0xdf/0x180 kernel/rcu/tree.c:3699
 wq_unregister_lockdep kernel/workqueue.c:3464 [inline]
 pwq_unbound_release_workfn+0x227/0x2d0 kernel/workqueue.c:3696
 process_one_work+0x933/0x15a0 kernel/workqueue.c:2272
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2418
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task systemd-udevd:4904 blocked for more than 144 seconds.
      Not tainted 5.10.0-rc4-next-20201118-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:systemd-udevd   state:D stack:22688 pid: 4904 ppid:     1 flags:0x00004100
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5157
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x81a/0x1110 kernel/locking/mutex.c:1103
 __blkdev_get+0x457/0x1870 fs/block_dev.c:1472
 blkdev_get+0xd1/0x240 fs/block_dev.c:1627
 blkdev_open+0x21d/0x2b0 fs/block_dev.c:1744
 do_dentry_open+0x4b9/0x11b0 fs/open.c:817
 do_open fs/namei.c:3252 [inline]
 path_openat+0x1b9a/0x2730 fs/namei.c:3369
 do_filp_open+0x17e/0x3c0 fs/namei.c:3396
 do_sys_openat2+0x16d/0x420 fs/open.c:1168
 do_sys_open fs/open.c:1184 [inline]
 __do_sys_open fs/open.c:1192 [inline]
 __se_sys_open fs/open.c:1188 [inline]
 __x64_sys_open+0x119/0x1c0 fs/open.c:1188
 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x7f1d2046e840
Code: Unable to access opcode bytes at RIP 0x7f1d2046e816.
RSP: 002b:00007ffc08b5d278 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
RAX: ffffffffffffffda RBX: 00007ffc08b5d370 RCX: 00007f1d2046e840
RDX: 000055c046b69fe3 RSI: 00000000000a0800 RDI: 000055c047393050
RBP: 00007ffc08b5d800 R08: 000055c046b69670 R09: 0000000000000010
R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffc08b5d420
R13: 000055c047377010 R14: 000055c04738b550 R15: 00007ffc08b5d2f0
INFO: task syz-executor.2:8503 blocked for more than 144 seconds.
      Not tainted 5.10.0-rc4-next-20201118-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.2  state:D stack:23768 pid: 8503 ppid:     1 flags:0x00004004
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 schedule_timeout+0x1d8/0x250 kernel/time/timer.c:1847
 do_wait_for_common kernel/sched/completion.c:85 [inline]
 __wait_for_common kernel/sched/completion.c:106 [inline]
 wait_for_common kernel/sched/completion.c:117 [inline]
 wait_for_completion+0x163/0x260 kernel/sched/completion.c:138
 __flush_work+0x51f/0xac0 kernel/workqueue.c:3049
 lru_add_drain_all+0x41c/0x6f0 mm/swap.c:836
 invalidate_bdev+0x96/0xd0 fs/block_dev.c:96
 btrfs_close_bdev fs/btrfs/volumes.c:1116 [inline]
 btrfs_close_bdev fs/btrfs/volumes.c:1109 [inline]
 btrfs_close_one_device fs/btrfs/volumes.c:1135 [inline]
 close_fs_devices+0x58e/0x930 fs/btrfs/volumes.c:1165
 btrfs_close_devices+0x8e/0x4b0 fs/btrfs/volumes.c:1180
 close_ctree+0x6a0/0x6e3 fs/btrfs/disk-io.c:4231
 generic_shutdown_super+0x144/0x370 fs/super.c:464
 kill_anon_super+0x36/0x60 fs/super.c:1108
 btrfs_kill_super+0x38/0x50 fs/btrfs/super.c:2318
 deactivate_locked_super+0x94/0x160 fs/super.c:335
 deactivate_super+0xad/0xd0 fs/super.c:366
 cleanup_mnt+0x3a3/0x530 fs/namespace.c:1123
 task_work_run+0xdd/0x190 kernel/task_work.c:140
 tracehook_notify_resume include/linux/tracehook.h:188 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:172 [inline]
 exit_to_user_mode_prepare+0x1f0/0x200 kernel/entry/common.c:199
 syscall_exit_to_user_mode+0x38/0x260 kernel/entry/common.c:274
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x4608e7
Code: Unable to access opcode bytes at RIP 0x4608bd.
RSP: 002b:00007fff04b0bb28 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
RAX: 0000000000000000 RBX: 00000000000589f5 RCX: 00000000004608e7
RDX: 00000000004031f8 RSI: 0000000000000002 RDI: 00007fff04b0bbd0
RBP: 0000000000000170 R08: 0000000000000000 R09: 000000000000000b
R10: 0000000000000005 R11: 0000000000000246 R12: 00007fff04b0cc60
R13: 0000000002af1a60 R14: 0000000000000000 R15: 00007fff04b0cc60
INFO: task wg-crypt-wg0:8658 blocked for more than 144 seconds.
      Not tainted 5.10.0-rc4-next-20201118-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:wg-crypt-wg0    state:D stack:30272 pid: 8658 ppid:     2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5157
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x81a/0x1110 kernel/locking/mutex.c:1103
 worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
 rescuer_thread+0x3af/0xd30 kernel/workqueue.c:2506
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task wg-crypt-wg1:8661 blocked for more than 145 seconds.
      Not tainted 5.10.0-rc4-next-20201118-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:wg-crypt-wg1    state:D stack:30240 pid: 8661 ppid:     2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5157
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x81a/0x1110 kernel/locking/mutex.c:1103
 worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
 rescuer_thread+0x3af/0xd30 kernel/workqueue.c:2506
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task wg-crypt-wg0:8804 blocked for more than 145 seconds.
      Not tainted 5.10.0-rc4-next-20201118-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:wg-crypt-wg0    state:D stack:30272 pid: 8804 ppid:     2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5157
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x81a/0x1110 kernel/locking/mutex.c:1103
 worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
 rescuer_thread+0x3af/0xd30 kernel/workqueue.c:2506
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task syz-executor.1:13844 can't die for more than 145 seconds.
task:syz-executor.1  state:D stack:26608 pid:13844 ppid:  8501 flags:0x00004004
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 exp_funnel_lock kernel/rcu/tree_exp.h:313 [inline]
 synchronize_rcu_expedited+0x59d/0x610 kernel/rcu/tree_exp.h:836
 synchronize_rcu+0xdf/0x180 kernel/rcu/tree.c:3699
 blk_mq_quiesce_queue+0x189/0x1d0 block/blk-mq.c:236
 elevator_init_mq+0x2d7/0x400 block/elevator.c:682
 __device_add_disk+0x7d6/0x1250 block/genhd.c:770
 add_disk include/linux/genhd.h:295 [inline]
 loop_add+0x616/0x8b0 drivers/block/loop.c:2171
 loop_control_ioctl drivers/block/loop.c:2266 [inline]
 loop_control_ioctl+0x16c/0x480 drivers/block/loop.c:2248
 vfs_ioctl fs/ioctl.c:48 [inline]
 __do_sys_ioctl fs/ioctl.c:753 [inline]
 __se_sys_ioctl fs/ioctl.c:739 [inline]
 __x64_sys_ioctl+0x193/0x200 fs/ioctl.c:739
 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x45deb9
Code: Unable to access opcode bytes at RIP 0x45de8f.
RSP: 002b:00007f37052fbc78 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 0000000000012900 RCX: 000000000045deb9
RDX: 0000000000000000 RSI: 0000000000004c80 RDI: 0000000000000004
RBP: 000000000118bf60 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 000000000118bf2c
R13: 00007ffc8e562e7f R14: 00007f37052fc9c0 R15: 000000000118bf2c

Showing all locks held in the system:
1 lock held by rcu_gp/3:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
2 locks held by kworker/0:0/5:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc90000ca7da8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
2 locks held by kworker/0:1/7:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc90000cc7da8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
1 lock held by mm_percpu_wq/9:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
3 locks held by kworker/u4:2/27:
 #0: ffff888010e8a938 ((wq_completion)netns){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010e8a938 ((wq_completion)netns){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010e8a938 ((wq_completion)netns){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010e8a938 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010e8a938 ((wq_completion)netns){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010e8a938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc90000e1fda8 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
 #2: ffffffff8c9210d0 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x9b/0xb10 net/core/net_namespace.c:566
4 locks held by kworker/1:1/55:
 #0: ffff8880b9f34f98 (&rq->lock){-.-.}-{2:2}, at: newidle_balance+0x789/0xe50 kernel/sched/fair.c:10622
 #1: ffffffff8b339ca0 (rcu_read_lock){....}-{1:2}, at: cpu_of kernel/sched/sched.h:1085 [inline]
 #1: ffffffff8b339ca0 (rcu_read_lock){....}-{1:2}, at: __update_idle_core+0x39/0x430 kernel/sched/fair.c:6039
 #2: ffff8880b9f24918 (&base->lock){-.-.}-{2:2}, at: lock_timer_base+0x5a/0x1f0 kernel/time/timer.c:944
 #3: ffffffff8f0fedf8 (&obj_hash[i].lock){-.-.}-{2:2}, at: debug_object_activate+0x12e/0x3e0 lib/debugobjects.c:656
1 lock held by khungtaskd/1658:
 #0: ffffffff8b339ca0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6252
1 lock held by khugepaged/1665:
 #0: ffffffff8b409de8 (lock#5){+.+.}-{3:3}, at: lru_add_drain_all+0x5f/0x6f0 mm/swap.c:787
2 locks held by kworker/0:2/3002:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc9000211fda8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
1 lock held by dm_bufio_cache/4361:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by systemd-udevd/4904:
 #0: ffff8880176db480 (&bdev->bd_mutex){+.+.}-{3:3}, at: __blkdev_get+0x457/0x1870 fs/block_dev.c:1472
1 lock held by in:imklog/8189:
 #0: ffff88802168aff0 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xe9/0x100 fs/file.c:932
3 locks held by syz-executor.2/8503:
 #0: ffff88806c0920e0 (&type->s_umount_key#68){+.+.}-{3:3}, at: deactivate_super+0xa5/0xd0 fs/super.c:365
 #1: ffffffff8b92d3c8 (uuid_mutex){+.+.}-{3:3}, at: btrfs_close_devices+0x86/0x4b0 fs/btrfs/volumes.c:1179
 #2: ffffffff8b409de8 (lock#5){+.+.}-{3:3}, at: lru_add_drain_all+0x5f/0x6f0 mm/swap.c:787
1 lock held by wg-crypt-wg0/8658:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg1/8661:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg2/8666:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg0/8804:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg1/8819:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg2/8828:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg0/9015:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
2 locks held by kworker/0:3/9024:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc90001affda8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
1 lock held by wg-crypt-wg1/9027:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg2/9040:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg0/9230:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg1/9235:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg2/9281:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg0/9499:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg1/9510:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg2/9515:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg0/9565:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg1/9572:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg2/9581:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
2 locks held by kworker/0:4/9799:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc9000c3bfda8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
2 locks held by kworker/0:5/9820:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc9000c4dfda8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
2 locks held by systemd-udevd/13424:
 #0: ffff8880176dbb00 (&bdev->bd_mutex){+.+.}-{3:3}, at: __blkdev_get+0x457/0x1870 fs/block_dev.c:1472
 #1: ffffffff8bd23528 (loop_ctl_mutex){+.+.}-{3:3}, at: lo_open+0x19/0xd0 drivers/block/loop.c:1890
2 locks held by systemd-udevd/13479:
 #0: ffff8880176db480 (&bdev->bd_mutex){+.+.}-{3:3}, at: __blkdev_get+0x457/0x1870 fs/block_dev.c:1472
 #1: ffffffff8bd23528 (loop_ctl_mutex){+.+.}-{3:3}, at: lo_open+0x19/0xd0 drivers/block/loop.c:1890
2 locks held by systemd-udevd/13514:
 #0: ffff8880176d8700 (&bdev->bd_mutex){+.+.}-{3:3}, at: __blkdev_put+0xfc/0x890 fs/block_dev.c:1762
 #1: ffffffff8bd23528 (loop_ctl_mutex){+.+.}-{3:3}, at: lo_release+0x1a/0x1f0 drivers/block/loop.c:1909
2 locks held by kworker/0:6/13692:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc90002267da8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
2 locks held by kworker/0:7/13694:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc900024c7da8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
2 locks held by kworker/0:8/13697:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc90002547da8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
2 locks held by kworker/0:9/13700:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc90001d0fda8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
2 locks held by kworker/0:10/13704:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc90002537da8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
2 locks held by kworker/0:11/13705:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc90002427da8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
3 locks held by kworker/0:12/13706:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc900021c7da8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
 #2: ffffffff8b3423e8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:322 [inline]
 #2: ffffffff8b3423e8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x27e/0x610 kernel/rcu/tree_exp.h:836
3 locks held by kworker/0:13/13773:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc90002807da8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
 #2: ffffffff8b3423e8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:290 [inline]
 #2: ffffffff8b3423e8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x4f2/0x610 kernel/rcu/tree_exp.h:836
1 lock held by syz-executor.1/13844:
 #0: ffffffff8bd23528 (loop_ctl_mutex){+.+.}-{3:3}, at: loop_control_ioctl+0x7b/0x480 drivers/block/loop.c:2254
1 lock held by kworker/0:14/13845:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 1658 Comm: khungtaskd Not tainted 5.10.0-rc4-next-20201118-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:79 [inline]
 dump_stack+0x107/0x163 lib/dump_stack.c:120
 nmi_cpu_backtrace.cold+0x44/0xd7 lib/nmi_backtrace.c:105
 nmi_trigger_cpumask_backtrace+0x1b3/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:147 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:253 [inline]
 watchdog+0xd89/0xf30 kernel/hung_task.c:338
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 21 Comm: kworker/u4:1 Not tainted 5.10.0-rc4-next-20201118-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: phy5 ieee80211_iface_work
RIP: 0010:kasan_set_track mm/kasan/common.c:56 [inline]
RIP: 0010:__kasan_kmalloc.constprop.0+0xb2/0xd0 mm/kasan/common.c:480
Code: 83 c4 08 4c 89 e0 5b 5d 41 5c 41 5d 41 5e 41 5f c3 e8 52 22 00 00 eb 81 49 63 9e bc 00 00 00 89 ef 65 48 8b 04 25 00 f0 01 00 <8b> 80 08 05 00 00 4c 01 e3 89 03 e8 1e fd ff ff 89 43 04 eb c0 66
RSP: 0018:ffffc90000dbf168 EFLAGS: 00000202
RAX: ffff888010e23500 RBX: 0000000000000040 RCX: 0000000000000000
RDX: 0000000000000003 RSI: 00000000000000fc RDI: 0000000000000a20
RBP: 0000000000000a20 R08: ffffed1004b149c0 R09: ffffed1004b149c5
R10: 0000000000082081 R11: 0000000000000158 R12: ffff8880258a4e00
R13: 0000000000000018 R14: ffff8880100418c0 R15: 0000000000000028
FS:  0000000000000000(0000) GS:ffff8880b9e00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f7575437000 CR3: 000000001444a000 CR4: 00000000001506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 kmalloc include/linux/slab.h:557 [inline]
 ieee802_11_parse_elems_crc+0x11e/0xf00 net/mac80211/util.c:1473
 ieee802_11_parse_elems net/mac80211/ieee80211_i.h:2042 [inline]
 ieee80211_bss_info_update+0x4b4/0xb70 net/mac80211/scan.c:212
 ieee80211_rx_bss_info net/mac80211/ibss.c:1126 [inline]
 ieee80211_rx_mgmt_probe_beacon+0xc77/0x1690 net/mac80211/ibss.c:1615
 ieee80211_ibss_rx_queued_mgmt+0xe3e/0x1870 net/mac80211/ibss.c:1642
 ieee80211_iface_work+0x7ed/0xa90 net/mac80211/iface.c:1421
 process_one_work+0x933/0x15a0 kernel/workqueue.c:2272
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2418
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296

Crashes (1):
Time: 2020/11/18 22:59
Kernel: linux-next
Commit: 205292332779
Syzkaller: 0767f13f
Config: .config
Log: console log
Report: report
VM info: info
Manager: ci-upstream-linux-next-kasan-gce-root