INFO: task rcu_gp:3 blocked for more than 143 seconds.
      Not tainted 5.10.0-rc3-next-20201116-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:rcu_gp          state:D stack:29712 pid:    3 ppid:     2 flags:0x00004000
Workqueue: 0x0 (rcu_gp)
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5157
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x81a/0x1110 kernel/locking/mutex.c:1103
 worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
 rescuer_thread+0x3af/0xd30 kernel/workqueue.c:2506
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task kworker/u4:0:8 blocked for more than 143 seconds.
      Not tainted 5.10.0-rc3-next-20201116-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u4:0    state:D stack:25640 pid:    8 ppid:     2 flags:0x00004000
Workqueue: events_unbound fsnotify_mark_destroy_workfn
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 schedule_timeout+0x1d8/0x250 kernel/time/timer.c:1847
 do_wait_for_common kernel/sched/completion.c:85 [inline]
 __wait_for_common kernel/sched/completion.c:106 [inline]
 wait_for_common kernel/sched/completion.c:117 [inline]
 wait_for_completion+0x163/0x260 kernel/sched/completion.c:138
 __synchronize_srcu+0x132/0x220 kernel/rcu/srcutree.c:924
 fsnotify_mark_destroy_workfn+0xfd/0x340 fs/notify/mark.c:836
 process_one_work+0x933/0x15a0 kernel/workqueue.c:2272
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2418
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task mm_percpu_wq:9 blocked for more than 143 seconds.
      Not tainted 5.10.0-rc3-next-20201116-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:mm_percpu_wq    state:D stack:29016 pid:    9 ppid:     2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 schedule_timeout+0x1d8/0x250 kernel/time/timer.c:1847
 do_wait_for_common kernel/sched/completion.c:85 [inline]
 __wait_for_common kernel/sched/completion.c:106 [inline]
 wait_for_common kernel/sched/completion.c:117 [inline]
 wait_for_completion+0x163/0x260 kernel/sched/completion.c:138
 affine_move_task+0x401/0x910 kernel/sched/core.c:2261
 __set_cpus_allowed_ptr+0x2d2/0x3a0 kernel/sched/core.c:2353
 worker_attach_to_pool+0x7c/0x290 kernel/workqueue.c:1852
 rescuer_thread+0x3af/0xd30 kernel/workqueue.c:2506
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task kworker/1:0:17 blocked for more than 143 seconds.
      Not tainted 5.10.0-rc3-next-20201116-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:0     state:D stack:24960 pid:   17 ppid:     2 flags:0x00004000
Workqueue: events pwq_unbound_release_workfn
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 exp_funnel_lock kernel/rcu/tree_exp.h:313 [inline]
 synchronize_rcu_expedited+0x59d/0x610 kernel/rcu/tree_exp.h:836
 synchronize_rcu+0xdf/0x180 kernel/rcu/tree.c:3690
 wq_unregister_lockdep kernel/workqueue.c:3464 [inline]
 pwq_unbound_release_workfn+0x227/0x2d0 kernel/workqueue.c:3696
 process_one_work+0x933/0x15a0 kernel/workqueue.c:2272
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2418
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task kworker/u4:2:27 blocked for more than 144 seconds.
      Not tainted 5.10.0-rc3-next-20201116-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u4:2    state:D stack:24440 pid:   27 ppid:     2 flags:0x00004000
Workqueue: events_unbound fsnotify_connector_destroy_workfn
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 schedule_timeout+0x1d8/0x250 kernel/time/timer.c:1847
 do_wait_for_common kernel/sched/completion.c:85 [inline]
 __wait_for_common kernel/sched/completion.c:106 [inline]
 wait_for_common kernel/sched/completion.c:117 [inline]
 wait_for_completion+0x163/0x260 kernel/sched/completion.c:138
 __synchronize_srcu+0x132/0x220 kernel/rcu/srcutree.c:924
 fsnotify_connector_destroy_workfn+0x49/0xa0 fs/notify/mark.c:164
 process_one_work+0x933/0x15a0 kernel/workqueue.c:2272
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2418
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task kworker/1:1:34 blocked for more than 144 seconds.
      Not tainted 5.10.0-rc3-next-20201116-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:1     state:D stack:28464 pid:   34 ppid:     2 flags:0x00004000
Workqueue: events pwq_unbound_release_workfn
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 exp_funnel_lock kernel/rcu/tree_exp.h:313 [inline]
 synchronize_rcu_expedited+0x59d/0x610 kernel/rcu/tree_exp.h:836
 synchronize_rcu+0xdf/0x180 kernel/rcu/tree.c:3690
 wq_unregister_lockdep kernel/workqueue.c:3464 [inline]
 pwq_unbound_release_workfn+0x227/0x2d0 kernel/workqueue.c:3696
 process_one_work+0x933/0x15a0 kernel/workqueue.c:2272
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2418
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task khugepaged:1663 blocked for more than 144 seconds.
      Not tainted 5.10.0-rc3-next-20201116-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:khugepaged      state:D stack:28464 pid: 1663 ppid:     2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5157
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x81a/0x1110 kernel/locking/mutex.c:1103
 lru_add_drain_all+0x5f/0x6f0 mm/swap.c:801
 khugepaged_do_scan mm/khugepaged.c:2190 [inline]
 khugepaged+0x10b/0x6870 mm/khugepaged.c:2251
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task kworker/1:2:3130 blocked for more than 144 seconds.
      Not tainted 5.10.0-rc3-next-20201116-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/1:2     state:D stack:24464 pid: 3130 ppid:     2 flags:0x00004000
Workqueue: events pwq_unbound_release_workfn
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 synchronize_rcu_expedited+0x44b/0x610 kernel/rcu/tree_exp.h:852
 synchronize_rcu+0xdf/0x180 kernel/rcu/tree.c:3690
 wq_unregister_lockdep kernel/workqueue.c:3464 [inline]
 pwq_unbound_release_workfn+0x227/0x2d0 kernel/workqueue.c:3696
 process_one_work+0x933/0x15a0 kernel/workqueue.c:2272
 worker_thread+0x64c/0x1120 kernel/workqueue.c:2418
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task wg-crypt-wg0:8679 blocked for more than 145 seconds.
      Not tainted 5.10.0-rc3-next-20201116-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:wg-crypt-wg0    state:D stack:30040 pid: 8679 ppid:     2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5157
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x81a/0x1110 kernel/locking/mutex.c:1103
 worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
 rescuer_thread+0x3af/0xd30 kernel/workqueue.c:2506
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task wg-crypt-wg1:8688 blocked for more than 145 seconds.
      Not tainted 5.10.0-rc3-next-20201116-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:wg-crypt-wg1    state:D stack:29728 pid: 8688 ppid:     2 flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:5157
 __mutex_lock_common kernel/locking/mutex.c:1033 [inline]
 __mutex_lock+0x81a/0x1110 kernel/locking/mutex.c:1103
 worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
 rescuer_thread+0x3af/0xd30 kernel/workqueue.c:2506
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
INFO: task syz-executor.2:12630 can't die for more than 145 seconds.
task:syz-executor.2  state:D stack:26440 pid:12630 ppid:  8512 flags:0x00004004
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 schedule_timeout+0x1d8/0x250 kernel/time/timer.c:1847
 do_wait_for_common kernel/sched/completion.c:85 [inline]
 __wait_for_common kernel/sched/completion.c:106 [inline]
 wait_for_common kernel/sched/completion.c:117 [inline]
 wait_for_completion+0x163/0x260 kernel/sched/completion.c:138
 __flush_work+0x51f/0xac0 kernel/workqueue.c:3049
 lru_add_drain_all+0x41c/0x6f0 mm/swap.c:850
 invalidate_bdev+0x96/0xd0 fs/block_dev.c:96
 btrfs_close_bdev fs/btrfs/volumes.c:1109 [inline]
 btrfs_close_bdev fs/btrfs/volumes.c:1102 [inline]
 btrfs_close_one_device fs/btrfs/volumes.c:1128 [inline]
 close_fs_devices+0x586/0x930 fs/btrfs/volumes.c:1157
 btrfs_close_devices+0x8e/0x4b0 fs/btrfs/volumes.c:1172
 open_ctree+0x3f47/0x3fec fs/btrfs/disk-io.c:3464
 btrfs_fill_super fs/btrfs/super.c:1348 [inline]
 btrfs_mount_root.cold+0x14/0x165 fs/btrfs/super.c:1717
 legacy_get_tree+0x105/0x220 fs/fs_context.c:592
 vfs_get_tree+0x89/0x2f0 fs/super.c:1549
 fc_mount fs/namespace.c:983 [inline]
 vfs_kern_mount.part.0+0xd3/0x170 fs/namespace.c:1013
 vfs_kern_mount+0x3c/0x60 fs/namespace.c:1000
 btrfs_mount+0x234/0xa20 fs/btrfs/super.c:1777
 legacy_get_tree+0x105/0x220 fs/fs_context.c:592
 vfs_get_tree+0x89/0x2f0 fs/super.c:1549
 do_new_mount fs/namespace.c:2896 [inline]
 path_mount+0x12ae/0x1e70 fs/namespace.c:3227
 do_mount fs/namespace.c:3240 [inline]
 __do_sys_mount fs/namespace.c:3448 [inline]
 __se_sys_mount fs/namespace.c:3425 [inline]
 __x64_sys_mount+0x27f/0x300 fs/namespace.c:3425
 do_syscall_64+0x2d/0x70 arch/x86/entry/common.c:46
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x46090a
Code: Unable to access opcode bytes at RIP 0x4608e0.
RSP: 002b:00007f92bab07a88 EFLAGS: 00000202 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007f92bab07b20 RCX: 000000000046090a
RDX: 0000000020000000 RSI: 0000000020000100 RDI: 00007f92bab07ae0
RBP: 00007f92bab07ae0 R08: 00007f92bab07b20 R09: 0000000020000000
R10: 0000000000000000 R11: 0000000000000202 R12: 0000000020000000
R13: 0000000020000100 R14: 0000000020000200 R15: 0000000020016b00
INFO: task syz-executor.1:12649 can't die for more than 145 seconds.
task:syz-executor.1  state:D stack:28152 pid:12649 ppid:  8510 flags:0x00004004
Call Trace:
 context_switch kernel/sched/core.c:4269 [inline]
 __schedule+0x890/0x2030 kernel/sched/core.c:5019
 schedule+0xcf/0x270 kernel/sched/core.c:5098
 exp_funnel_lock kernel/rcu/tree_exp.h:313 [inline]
 synchronize_rcu_expedited+0x59d/0x610 kernel/rcu/tree_exp.h:836
 synchronize_rcu+0xdf/0x180 kernel/rcu/tree.c:3690
 tracepoint_synchronize_unregister include/linux/tracepoint.h:84 [inline]
 perf_trace_event_unreg.isra.0+0xc1/0x250 kernel/trace/trace_event_perf.c:168
 perf_trace_destroy+0xb5/0xf0 kernel/trace/trace_event_perf.c:243
 _free_event+0x2ee/0x1300 kernel/events/core.c:4840
 put_event kernel/events/core.c:4934 [inline]
 perf_event_release_kernel+0xa24/0xe00 kernel/events/core.c:5049
 perf_release+0x33/0x40 kernel/events/core.c:5059
 __fput+0x283/0x920 fs/file_table.c:280
 task_work_run+0xdd/0x190 kernel/task_work.c:140
 tracehook_notify_resume include/linux/tracehook.h:188 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:172 [inline]
 exit_to_user_mode_prepare+0x1f0/0x200 kernel/entry/common.c:199
 syscall_exit_to_user_mode+0x38/0x260 kernel/entry/common.c:274
 entry_SYSCALL_64_after_hwframe+0x44/0xa9
RIP: 0033:0x417811
Code: Unable to access opcode bytes at RIP 0x4177e7.
RSP: 002b:00007ffd9ebf0070 EFLAGS: 00000293 ORIG_RAX: 0000000000000003
RAX: 0000000000000000 RBX: 0000000000000004 RCX: 0000000000417811
RDX: 0000000000000000 RSI: 0000000000000e7e RDI: 0000000000000003
RBP: 0000000000000001 R08: 0000000082104e7e R09: 0000000082104e82
R10: 00007ffd9ebf0150 R11: 0000000000000293 R12: 000000000118c9a0
R13: 000000000118c9a0 R14: 00000000000003e8 R15: 000000000118bf2c

Showing all locks held in the system:
1 lock held by rcu_gp/3:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
2 locks held by kworker/u4:0/8:
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc90000cd7da8 ((reaper_work).work){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
1 lock held by mm_percpu_wq/9:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
2 locks held by kworker/1:0/17:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc90000d77da8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
2 locks held by kworker/u4:2/27:
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010069138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc90000e1fda8 (connector_reaper_work){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
2 locks held by kworker/1:1/34:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc90000e5fda8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
1 lock held by kworker/u4:5/234:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by khungtaskd/1650:
 #0: ffffffff8b339ce0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6252
1 lock held by khugepaged/1663:
 #0: ffffffff8b409e28 (lock#5){+.+.}-{3:3}, at: lru_add_drain_all+0x5f/0x6f0 mm/swap.c:801
3 locks held by kworker/1:2/3130:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc9000261fda8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
 #2: ffffffff8b342428 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:322 [inline]
 #2: ffffffff8b342428 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x27e/0x610 kernel/rcu/tree_exp.h:836
1 lock held by ipv6_addrconf/4689:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by in:imklog/8195:
1 lock held by wg-crypt-wg0/8679:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg1/8688:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg2/8724:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg0/8819:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg1/8834:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg2/8837:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg0/9080:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg1/9083:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg2/9130:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg0/9264:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg1/9319:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg2/9336:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg0/9339:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg1/9358:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg2/9375:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg0/9540:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg1/9545:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
1 lock held by wg-crypt-wg2/9548:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846
2 locks held by kworker/1:3/9792:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc9000bd87da8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
2 locks held by kworker/1:4/9797:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc9000bfcfda8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
2 locks held by kworker/1:5/9851:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc9000c3cfda8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
3 locks held by syz-executor.2/12630:
 #0: ffff8880649220e0 (&type->s_umount_key#65/1){+.+.}-{3:3}, at: alloc_super+0x201/0xaf0 fs/super.c:229
 #1: ffffffff8b92d1a8 (uuid_mutex){+.+.}-{3:3}, at: btrfs_close_devices+0x86/0x4b0 fs/btrfs/volumes.c:1171
 #2: ffffffff8b409e28 (lock#5){+.+.}-{3:3}, at: lru_add_drain_all+0x5f/0x6f0 mm/swap.c:801
1 lock held by systemd-udevd/12631:
 #0: ffffffff8b92d1a8 (uuid_mutex){+.+.}-{3:3}, at: btrfs_control_ioctl+0x115/0x2d0 fs/btrfs/super.c:2371
1 lock held by syz-executor.1/12649:
 #0: ffffffff8b3a57a8 (event_mutex){+.+.}-{3:3}, at: perf_trace_destroy+0x23/0xf0 kernel/trace/trace_event_perf.c:241
3 locks held by kworker/1:6/12677:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc9001703fda8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
 #2: ffffffff8b342428 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:322 [inline]
 #2: ffffffff8b342428 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x27e/0x610 kernel/rcu/tree_exp.h:836
2 locks held by kworker/1:7/12678:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc900171cfda8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
2 locks held by kworker/1:8/12680:
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic64_set include/asm-generic/atomic-instrumented.h:856 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: atomic_long_set include/asm-generic/atomic-long.h:41 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:616 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:643 [inline]
 #0: ffff888010064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x821/0x15a0 kernel/workqueue.c:2243
 #1: ffffc900173a7da8 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x854/0x15a0 kernel/workqueue.c:2247
1 lock held by kworker/1:9/12681:
 #0: ffffffff8b204b88 (wq_pool_attach_mutex){+.+.}-{3:3}, at: worker_attach_to_pool+0x27/0x290 kernel/workqueue.c:1846

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 1650 Comm: khungtaskd Not tainted 5.10.0-rc3-next-20201116-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:79 [inline]
 dump_stack+0x107/0x163 lib/dump_stack.c:120
 nmi_cpu_backtrace.cold+0x44/0xd7 lib/nmi_backtrace.c:105
 nmi_trigger_cpumask_backtrace+0x1b3/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:147 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:253 [inline]
 watchdog+0xd89/0xf30 kernel/hung_task.c:338
 kthread+0x3af/0x4a0 kernel/kthread.c:292
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:296
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0 skipped: idling at native_safe_halt arch/x86/include/asm/irqflags.h:60 [inline]
NMI backtrace for cpu 0 skipped: idling at arch_safe_halt arch/x86/include/asm/irqflags.h:103 [inline]
NMI backtrace for cpu 0 skipped: idling at acpi_safe_halt drivers/acpi/processor_idle.c:111 [inline]
NMI backtrace for cpu 0 skipped: idling at acpi_idle_do_entry+0x1c9/0x250 drivers/acpi/processor_idle.c:517