INFO: task rcu_gp:3 blocked for more than 143 seconds.
      Not tainted 5.10.0-rc4-next-20201119-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:rcu_gp          state:D stack:29040 pid:    3 ppid:     2 flags:0x00004000
Workqueue:  0x0 (rcu_gp)
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5157
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x81a/0x1110 kernel/locking/mutex.c:1103
 worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
 rescuer_thread+0x3af/0xd30 kernel/workqueue.c:2506
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task kworker/0:0:5 blocked for more than 143 seconds.
      Not tainted 5.10.0-rc4-next-20201119-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:0     state:D stack:24928 pid:    5 ppid:     2 flags:0x00004000
Workqueue: events pwq_unbound_release_workfn
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 exp_funnel_lock kernel/rcu/tree_exp.h:313 [inline]
 synchronize_rcu_expedited+0x59d/0x610 kernel/rcu/tree_exp.h:836
 synchronize_rcu+0xdf/0x180 kernel/rcu/tree.c:3699
 wq_unregister_lockdep kernel/workqueue.c:3464 [inline]
 pwq_unbound_release_workfn+0x227/0x2d0 kernel/workqueue.c:3696
 process_one_work+0x933/0x15a0 kernel/workqueue.c:2272
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2418
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task kworker/0:1:7 blocked for more than 143 seconds.
      Not tainted 5.10.0-rc4-next-20201119-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:1     state:D stack:23160 pid:    7 ppid:     2 flags:0x00004000
Workqueue: events pwq_unbound_release_workfn
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 synchronize_rcu_expedited+0x44b/0x610 kernel/rcu/tree_exp.h:852
 synchronize_rcu+0xdf/0x180 kernel/rcu/tree.c:3699
 wq_unregister_lockdep kernel/workqueue.c:3464 [inline]
 pwq_unbound_release_workfn+0x227/0x2d0 kernel/workqueue.c:3696
 process_one_work+0x933/0x15a0 kernel/workqueue.c:2272
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2418
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task kworker/u4:0:8 blocked for more than 143 seconds.
      Not tainted 5.10.0-rc4-next-20201119-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u4:0    state:D stack:25744 pid:    8 ppid:     2 flags:0x00004000
Workqueue: events_unbound fsnotify_connector_destroy_workfn
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 schedule_timeout+0x1d8/0x250 kernel/time/timer.c:1847
 do_wait_for_common kernel/sched/completion.c:85 [inline]
 __wait_for_common kernel/sched/completion.c:106 [inline]
 wait_for_common kernel/sched/completion.c:117 [inline]
 wait_for_completion+0x163/0x260 kernel/sched/completion.c:138
 __synchronize_srcu+0x1a1/0x280 kernel/rcu/srcutree.c:935
 fsnotify_connector_destroy_workfn+0x49/0xa0 fs/notify/mark.c:164
 process_one_work+0x933/0x15a0 kernel/workqueue.c:2272
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2418
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task mm_percpu_wq:9 blocked for more than 144 seconds.
      Not tainted 5.10.0-rc4-next-20201119-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:mm_percpu_wq    state:D stack:29416 pid:    9 ppid:     2 flags:0x00004000
Workqueue:  0x0 (mm_percpu_wq)
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5157
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x81a/0x1110 kernel/locking/mutex.c:1103
 worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
 rescuer_thread+0x3af/0xd30 kernel/workqueue.c:2506
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task kworker/u4:3:131 blocked for more than 144 seconds.
      Not tainted 5.10.0-rc4-next-20201119-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u4:3    state:D stack:25664 pid:  131 ppid:     2 flags:0x00004000
Workqueue: events_unbound fsnotify_mark_destroy_workfn
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 schedule_timeout+0x1d8/0x250 kernel/time/timer.c:1847
 do_wait_for_common kernel/sched/completion.c:85 [inline]
 __wait_for_common kernel/sched/completion.c:106 [inline]
 wait_for_common kernel/sched/completion.c:117 [inline]
 wait_for_completion+0x163/0x260 kernel/sched/completion.c:138
 __synchronize_srcu+0x1a1/0x280 kernel/rcu/srcutree.c:935
 fsnotify_mark_destroy_workfn+0xfd/0x340 fs/notify/mark.c:836
 process_one_work+0x933/0x15a0 kernel/workqueue.c:2272
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2418
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task kworker/0:2:3004 blocked for more than 144 seconds.
      Not tainted 5.10.0-rc4-next-20201119-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:2     state:D stack:26072 pid: 3004 ppid:     2 flags:0x00004000
Workqueue:  0x0 (events)
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5157
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x81a/0x1110 kernel/locking/mutex.c:1103
 worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
 create_worker+0x461/0x6d0 kernel/workqueue.c:1941
 maybe_create_worker kernel/workqueue.c:2091 [inline]
 manage_workers kernel/workqueue.c:2143 [inline]
 worker_thread+0xaef/0x1120 kernel/workqueue.c:2390
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task dm_bufio_cache:4360 blocked for more than 144 seconds.
      Not tainted 5.10.0-rc4-next-20201119-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:dm_bufio_cache  state:D stack:30240 pid: 4360 ppid:     2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5157
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x81a/0x1110 kernel/locking/mutex.c:1103
 worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
 rescuer_thread+0x3af/0xd30 kernel/workqueue.c:2506
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task syz-executor.1:8479 blocked for more than 144 seconds.
      Not tainted 5.10.0-rc4-next-20201119-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.1  state:D stack:23696 pid: 8479 ppid:     1 flags:0x00004006
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 exp_funnel_lock kernel/rcu/tree_exp.h:313 [inline]
 synchronize_rcu_expedited+0x59d/0x610 kernel/rcu/tree_exp.h:836
 bdi_remove_from_list mm/backing-dev.c:865 [inline]
 bdi_unregister+0x16b/0x590 mm/backing-dev.c:871
 release_bdi+0xa1/0xc0 mm/backing-dev.c:893
 kref_put include/linux/kref.h:65 [inline]
 bdi_put+0x72/0xa0 mm/backing-dev.c:901
 generic_shutdown_super+0x2aa/0x370 fs/super.c:478
 kill_anon_super+0x36/0x60 fs/super.c:1108
 btrfs_kill_super+0x38/0x50 fs/btrfs/super.c:2318
 deactivate_locked_super+0x94/0x160 fs/super.c:335
 deactivate_super+0xad/0xd0 fs/super.c:366
 cleanup_mnt+0x3a3/0x530 fs/namespace.c:1123
 task_work_run+0xdd/0x190 kernel/task_work.c:140
 tracehook_notify_resume include/linux/tracehook.h:188 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:172 [inline]
 exit_to_user_mode_prepare+0x1f0/0x200 kernel/entry/common.c:199
 syscall_exit_to_user_mode+0x38/0x260 kernel/entry/common.c:274
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x4608e7
Code: Unable to access opcode bytes at RIP 0x4608bd.
RSP: 002b:00007ffdf37390c8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00000000004608e7
RDX: 00000000004031f8 RSI: 0000000000000002 RDI: 00007ffdf3739170
RBP: 00000000000000d1 R08: 0000000000000000 R09: 000000000000000a
R10: 0000000000000005 R11: 0000000000000246 R12: 00007ffdf373a200
R13: 00000000028daa60 R14: 0000000000000000 R15: 00007ffdf373a200
INFO: task syz-executor.2:8481 blocked for more than 145 seconds.
      Not tainted 5.10.0-rc4-next-20201119-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.2  state:D stack:23784 pid: 8481 ppid:     1 flags:0x00000004
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 exp_funnel_lock kernel/rcu/tree_exp.h:313 [inline]
 synchronize_rcu_expedited+0x59d/0x610 kernel/rcu/tree_exp.h:836
 namespace_unlock+0x1af/0x410 fs/namespace.c:1435
 do_umount fs/namespace.c:1659 [inline]
 path_umount+0x7aa/0x12a0 fs/namespace.c:1746
 ksys_umount fs/namespace.c:1765 [inline]
 __do_sys_umount fs/namespace.c:1770 [inline]
 __se_sys_umount fs/namespace.c:1768 [inline]
 __x64_sys_umount+0xfb/0x150 fs/namespace.c:1768
 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x4608e7
Code: Unable to access opcode bytes at RIP 0x4608bd.
RSP: 002b:00007fff794f2988 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00000000004608e7
RDX: 00000000004031f8 RSI: 0000000000000002 RDI: 00007fff794f2a30
RBP: 00000000000000c4 R08: 0000000000000000 R09: 000000000000000a
R10: 0000000000000005 R11: 0000000000000246 R12: 00007fff794f3ac0
R13: 0000000003556a60 R14: 0000000000000000 R15: 00007fff794f3ac0

Showing all locks held in the system:
1 lock held by rcu_gp/3:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
2 locks held by kworker/0:0/5:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc90000ca7da8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
3 locks held by kworker/0:1/7:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc90000cc7da8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
 #2: ffffffff8b3423e8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:322 [inline]
 #2: ffffffff8b3423e8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x27e/0x610 kernel/rcu/tree_exp.h:836
2 locks held by kworker/u4:0/8:
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc90000cd7da8 (connector_reaper_work){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
1 lock held by mm_percpu_wq/9:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
2 locks held by kworker/u4:1/21:
2 locks held by kworker/u4:3/131:
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc900013dfda8 ((reaper_work).work){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
1 lock held by kworker/u4:5/189:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by khungtaskd/1652:
 #0: ffffffff8b339ca0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6252
1 lock held by khugepaged/1663:
 #0: ffffffff8b409fc8 (lock#5){+.+.}-{3:3}, at: lru_add_drain_all+0x5f/0x6f0 mm/swap.c:787
1 lock held by kworker/0:2/3004:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by dm_bufio_cache/4360:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by bat_events/4824:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
2 locks held by in:imklog/8172:
 #0: ffff88801af44870 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xe9/0x100 fs/file.c:932
 #1: ffff8880b9e34f98 (&rq->lock){-.-.}-{2:2}, at: rq_lock kernel/sched/sched.h:1318 [inline]
 #1: ffff8880b9e34f98 (&rq->lock){-.-.}-{2:2}, at: __schedule+0x217/0x2030 kernel/sched/core.c:4936
2 locks held by agetty/8395:
 #0: ffff88801b523098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x22/0x80 drivers/tty/tty_ldisc.c:266
 #1: ffffc90000ebc2e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x21d/0x1aa0 drivers/tty/n_tty.c:2158
1 lock held by wg-crypt-wg0/8621:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg1/8631:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg2/8641:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg0/8806:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg1/8831:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg2/8871:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg0/9008:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg1/9023:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg2/9028:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg0/9283:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg1/9288:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg0/9297:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg1/9300:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg2/9303:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg2/9306:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg0/9561:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg1/9580:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg2/9584:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
2 locks held by kworker/0:3/9747:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc90016017da8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
2 locks held by kworker/0:4/9768:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc90016107da8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
2 locks held by kworker/0:5/9875:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc90016387da8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
2 locks held by kworker/0:6/12101:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc9000116fda8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
3 locks held by kworker/0:7/12102:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc9000138fda8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
 #2: ffffffff8b3423e8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:322 [inline]
 #2: ffffffff8b3423e8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x27e/0x610 kernel/rcu/tree_exp.h:836
2 locks held by kworker/0:8/12105:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc900018cfda8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
1 lock held by ext4-rsv-conver/12117:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: set_pf_worker kernel/workqueue.c:2340 [inline]
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: rescuer_thread+0xd8/0xd30 kernel/workqueue.c:2477

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 1652 Comm: khungtaskd Not tainted 5.10.0-rc4-next-20201119-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:79 [inline]
 dump_stack+0x107/0x163 lib/dump_stack.c:120
 nmi_cpu_backtrace.cold+0x44/0xd7 lib/nmi_backtrace.c:105
 nmi_trigger_cpumask_backtrace+0x1b3/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:147 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:253 [inline]
 watchdog+0xd89/0xf30 kernel/hung_task.c:338
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 21 Comm: kworker/u4:1 Not tainted 5.10.0-rc4-next-20201119-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Workqueue: bat_events batadv_nc_worker
RIP: 0010:check_wait_context kernel/locking/lockdep.c:4527 [inline]
RIP: 0010:__lock_acquire+0x6a4/0x5c00 kernel/locking/lockdep.c:4780
Code: 0f 8e f3 2e 00 00 0f b7 80 b8 00 00 00 66 85 c0 74 08 66 41 39 c5 44 0f 4f e8 48 8b 04 24 83 c3 01 48 c1 e8 03 42 0f b6 04 30 <84> c0 74 08 3c 03 0f 8e b5 2e 00 00 3b 9d 20 09 00 00 0f 8d 9d 00
RSP: 0018:ffffc90000dbfa58 EFLAGS: 00000806
RAX: 0000000000000000 RBX: 0000000000000001 RCX: ffffffff8156325d
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff8ec0d938
RBP: ffff888010e19a80 R08: 0000000000000000 R09: ffffffff8ebbf82f
R10: fffffbfff1d77f05 R11: 0000000000000000 R12: 000000000000067c
R13: 0000000000000004 R14: dffffc0000000000 R15: ffff888010e1a3a8
FS:  0000000000000000(0000) GS:ffff8880b9e00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f475315e000 CR3: 000000001a8fb000 CR4: 00000000001506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 lock_acquire kernel/locking/lockdep.c:5435 [inline]
 lock_acquire+0x2a3/0x8c0 kernel/locking/lockdep.c:5400
 rcu_lock_acquire include/linux/rcupdate.h:255 [inline]
 rcu_read_lock include/linux/rcupdate.h:644 [inline]
 batadv_nc_purge_orig_hash net/batman-adv/network-coding.c:407 [inline]
 batadv_nc_worker+0x12d/0xe50 net/batman-adv/network-coding.c:718
 process_one_work+0x933/0x15a0 kernel/workqueue.c:2272
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2418
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296