INFO: task syz-executor.4:9974 blocked for more than 144 seconds.
      Not tainted 6.0.0-syzkaller-06475-g4c86114194e6 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.4 state:D stack:27080 pid: 9974 ppid: 3649 flags:0x00004004
Call Trace:
 context_switch kernel/sched/core.c:5183 [inline]
 __schedule+0x957/0xe20 kernel/sched/core.c:6495
 schedule+0xcb/0x190 kernel/sched/core.c:6571
 schedule_timeout+0xac/0x300 kernel/time/timer.c:1911
 do_wait_for_common+0x3ea/0x560 kernel/sched/completion.c:85
 __wait_for_common kernel/sched/completion.c:106 [inline]
 wait_for_common kernel/sched/completion.c:117 [inline]
 wait_for_completion+0x46/0x60 kernel/sched/completion.c:138
 __flush_work+0x124/0x1a0 kernel/workqueue.c:3073
 __cancel_work_timer+0x517/0x6a0 kernel/workqueue.c:3160
 p9_conn_destroy net/9p/trans_fd.c:885 [inline]
 p9_fd_close+0x24d/0x410 net/9p/trans_fd.c:920
 p9_client_create+0xa16/0x1030 net/9p/client.c:1001
 v9fs_session_init+0x1e3/0x1990 fs/9p/v9fs.c:408
 v9fs_mount+0xd2/0xcb0 fs/9p/vfs_super.c:126
 legacy_get_tree+0xea/0x180 fs/fs_context.c:610
 vfs_get_tree+0x88/0x270 fs/super.c:1530
 do_new_mount+0x289/0xad0 fs/namespace.c:3040
 do_mount fs/namespace.c:3383 [inline]
 __do_sys_mount fs/namespace.c:3591 [inline]
 __se_sys_mount+0x2e3/0x3d0 fs/namespace.c:3568
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f30d6e8a5a9
RSP: 002b:00007f30d7fd2168 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007f30d6fac050 RCX: 00007f30d6e8a5a9
RDX: 00000000200001c0 RSI: 0000000020000180 RDI: 0000000000000000
RBP: 00007f30d6ee5580 R08: 0000000020000200 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffc7cf6c67f R14: 00007f30d7fd2300 R15: 0000000000022000

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/13:
 #0: ffffffff8cd1f1b0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x30/0xd00 kernel/rcu/tasks.h:507
1 lock held by rcu_tasks_trace/14:
 #0: ffffffff8cd1f9b0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x30/0xd00 kernel/rcu/tasks.h:507
1 lock held by khungtaskd/29:
 #0: ffffffff8cd1efe0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
2 locks held by kworker/0:2/142:
 #0: ffff888012066538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x796/0xd10 kernel/workqueue.c:2262
 #1: ffffc90002e2fd00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7d0/0xd10 kernel/workqueue.c:2264
2 locks held by getty/3287:
 #0: ffff888140b3a098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
 #1: ffffc900031262f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6e8/0x1e50 drivers/tty/n_tty.c:2177
1 lock held by syz-executor.2/3648:
 #0: ffffffff8cd245b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
 #0: ffffffff8cd245b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x46f/0x890 kernel/rcu/tree_exp.h:946
2 locks held by kworker/1:7/3708:
 #0: ffff888012064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x796/0xd10 kernel/workqueue.c:2262
 #1: ffffc9000462fd00 ((work_completion)(&m->rq)){+.+.}-{0:0}, at: process_one_work+0x7d0/0xd10 kernel/workqueue.c:2264
3 locks held by kworker/1:18/6684:
 #0: ffff888012064d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x796/0xd10 kernel/workqueue.c:2262
 #1: ffffc90004857d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x7d0/0xd10 kernel/workqueue.c:2264
 #2: ffffffff8ddddb88 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xa/0x50 net/core/link_watch.c:263
2 locks held by syz-executor.5/11724:
 #0: ffffffff8ddddb88 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
 #0: ffffffff8ddddb88 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x75d/0xe90 net/core/rtnetlink.c:6088
 #1: ffffffff8cd245b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:292 [inline]
 #1: ffffffff8cd245b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3a6/0x890 kernel/rcu/tree_exp.h:946
1 lock held by syz-executor.5/11736:
 #0: ffffffff8ddddb88 (rtnl_mutex){+.+.}-{3:3}, at: dev_ioctl+0x621/0xf30 net/core/dev_ioctl.c:612
1 lock held by syz-executor.4/11723:
 #0: ffff88807ea17128 (&mm->mmap_lock#2){++++}-{3:3}, at: mmap_write_lock_killable include/linux/mmap_lock.h:87 [inline]
 #0: ffff88807ea17128 (&mm->mmap_lock#2){++++}-{3:3}, at: vm_mmap_pgoff+0x18f/0x2f0 mm/util.c:550
2 locks held by syz-executor.4/11725:
 #0: ffffffff8ddd1810 (pernet_ops_rwsem){++++}-{3:3}, at: copy_net_ns+0x33d/0x5b0 net/core/net_namespace.c:467
 #1: ffffffff8ddddb88 (rtnl_mutex){+.+.}-{3:3}, at: smc_pnet_create_pnetids_list net/smc/smc_pnet.c:805 [inline]
 #1: ffffffff8ddddb88 (rtnl_mutex){+.+.}-{3:3}, at: smc_pnet_net_init+0x173/0x420 net/smc/smc_pnet.c:874
3 locks held by kworker/u4:14/11764:
 #0: ffff8880b9a39c58 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x25/0x110 kernel/sched/core.c:545
 #1: ffff8880b9a27748 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x53a/0x8a0 kernel/sched/psi.c:876
 #2: ffff8880b9a28318 (&base->lock){-.-.}-{2:2}, at: lock_timer_base+0x12a/0x270 kernel/time/timer.c:999

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 29 Comm: khungtaskd Not tainted 6.0.0-syzkaller-06475-g4c86114194e6 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/22/2022
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 nmi_cpu_backtrace+0x47c/0x4b0 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x169/0x280 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:212 [inline]
 watchdog+0xcd5/0xd20 kernel/hung_task.c:369
 kthread+0x266/0x300 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0 skipped: idling at native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
NMI backtrace for cpu 0 skipped: idling at arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
NMI backtrace for cpu 0 skipped: idling at acpi_safe_halt drivers/acpi/processor_idle.c:112 [inline]
NMI backtrace for cpu 0 skipped: idling at acpi_idle_do_entry drivers/acpi/processor_idle.c:572 [inline]
NMI backtrace for cpu 0 skipped: idling at acpi_idle_enter+0x43d/0x800 drivers/acpi/processor_idle.c:709