INFO: task syz-executor.5:27419 blocked for more than 143 seconds.
      Not tainted 6.2.0-syzkaller-06695-gd8ca6dbb8de7 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.5  state:D stack:24760 pid:27419 ppid:5117   flags:0x00104000
Call Trace:
 context_switch kernel/sched/core.c:5304 [inline]
 __schedule+0x17d8/0x4990 kernel/sched/core.c:6622
 schedule+0xc3/0x180 kernel/sched/core.c:6698
 schedule_timeout+0xb0/0x310 kernel/time/timer.c:2143
 do_wait_for_common+0x449/0x5f0 kernel/sched/completion.c:85
 __wait_for_common kernel/sched/completion.c:106 [inline]
 wait_for_common kernel/sched/completion.c:117 [inline]
 wait_for_completion+0x4a/0x60 kernel/sched/completion.c:138
 io_wq_exit_workers io_uring/io-wq.c:1259 [inline]
 io_wq_put_and_exit+0x46c/0xb20 io_uring/io-wq.c:1294
 io_uring_clean_tctx+0x168/0x1e0 io_uring/tctx.c:193
 io_uring_cancel_generic+0x614/0x680 io_uring/io_uring.c:3260
 io_uring_files_cancel include/linux/io_uring.h:55 [inline]
 do_exit+0x32c/0x2290 kernel/exit.c:824
 do_group_exit+0x206/0x2c0 kernel/exit.c:1019
 get_signal+0x1701/0x17e0 kernel/signal.c:2859
 arch_do_signal_or_restart+0x91/0x670 arch/x86/kernel/signal.c:306
 exit_to_user_mode_loop+0x6a/0x100 kernel/entry/common.c:168
 exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:203
 __syscall_exit_to_user_mode_work kernel/entry/common.c:285 [inline]
 syscall_exit_to_user_mode+0x64/0x2e0 kernel/entry/common.c:296
 do_syscall_64+0x4d/0xc0 arch/x86/entry/common.c:86
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f1bb268c0f9
RSP: 002b:00007f1bb3364168 EFLAGS: 00000246 ORIG_RAX: 00000000000001aa
RAX: 0000000000000800 RBX: 00007f1bb27ac050 RCX: 00007f1bb268c0f9
RDX: 0000000000000000 RSI: 0000000000002905 RDI: 0000000000000005
RBP: 00007f1bb26e7ae9 R08: 0000000000000000 R09: 0200000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffcb08ccacf R14: 00007f1bb3364300 R15: 0000000000022000

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
 #0: ffffffff8cf27770 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xd20 kernel/rcu/tasks.h:510
1 lock held by rcu_tasks_trace/13:
 #0: ffffffff8cf27f70 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xd20 kernel/rcu/tasks.h:510
1 lock held by khungtaskd/28:
 #0: ffffffff8cf275a0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
5 locks held by kworker/u4:5/1031:
 #0: ffff888012612938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x77f/0x13a0
 #1: ffffc900053c7d20 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7c6/0x13a0 kernel/workqueue.c:2365
 #2: ffffffff8e07e890 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0xf5/0xb80 net/core/net_namespace.c:575
 #3: ffffffff8e08ae08 (rtnl_mutex){+.+.}-{3:3}, at: ip6gre_exit_batch_net+0xc4/0x460 net/ipv6/ip6_gre.c:1636
 #4: ffffffff8cf2cc78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:293 [inline]
 #4: ffffffff8cf2cc78 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3a3/0x890 kernel/rcu/tree_exp.h:989
1 lock held by dhcpcd/4645:
 #0: ffffffff8e08ae08 (rtnl_mutex){+.+.}-{3:3}, at: devinet_ioctl+0x2ce/0x1bc0 net/ipv4/devinet.c:1071
2 locks held by getty/4746:
 #0: ffff888027ca5098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:244
 #1: ffffc900015802f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6ab/0x1db0 drivers/tty/n_tty.c:2177
2 locks held by kworker/1:1/15555:
3 locks held by kworker/1:4/12830:
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x77f/0x13a0
 #1: ffffc9000b027d20 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x7c6/0x13a0 kernel/workqueue.c:2365
 #2: ffffffff8e08ae08 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:277
2 locks held by kworker/u4:9/16076:
2 locks held by kworker/1:5/26122:
 #0: ffff888012472538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x77f/0x13a0
 #1: ffffc9000332fd20 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7c6/0x13a0 kernel/workqueue.c:2365
1 lock held by iou-wrk-27419/27425:
 #0: ffff8880463160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:214 [inline]
 #0: ffff8880463160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0x79/0xed0 io_uring/kbuf.c:428
1 lock held by iou-wrk-27419/27426:
 #0: ffff8880463160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:214 [inline]
 #0: ffff8880463160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0x79/0xed0 io_uring/kbuf.c:428
1 lock held by iou-wrk-27419/27427:
 #0: ffff8880463160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:214 [inline]
 #0: ffff8880463160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0x79/0xed0 io_uring/kbuf.c:428
1 lock held by iou-wrk-27419/27428:
 #0: ffff8880463160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:214 [inline]
 #0: ffff8880463160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0x79/0xed0 io_uring/kbuf.c:428
1 lock held by iou-wrk-27419/27429:
 #0: ffff8880463160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:214 [inline]
 #0: ffff8880463160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0x79/0xed0 io_uring/kbuf.c:428
1 lock held by iou-wrk-27419/27430:
 #0: ffff8880463160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:214 [inline]
 #0: ffff8880463160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0x79/0xed0 io_uring/kbuf.c:428
1 lock held by iou-wrk-27419/27431:
 #0: ffff8880463160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:214 [inline]
 #0: ffff8880463160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0x79/0xed0 io_uring/kbuf.c:428
1 lock held by iou-wrk-27419/27433:
 #0: ffff8880463160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:214 [inline]
 #0: ffff8880463160a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0x79/0xed0 io_uring/kbuf.c:428
1 lock held by iou-wrk-28054/28055:
 #0: ffff8880a5a6e0a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:214 [inline]
 #0: ffff8880a5a6e0a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0x79/0xed0 io_uring/kbuf.c:428
1 lock held by iou-wrk-28054/28056:
 #0: ffff8880a5a6e0a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:214 [inline]
 #0: ffff8880a5a6e0a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0x79/0xed0 io_uring/kbuf.c:428
1 lock held by iou-wrk-28054/28057:
 #0: ffff8880a5a6e0a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:214 [inline]
 #0: ffff8880a5a6e0a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0x79/0xed0 io_uring/kbuf.c:428
1 lock held by iou-wrk-28054/28058:
 #0: ffff8880a5a6e0a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:214 [inline]
 #0: ffff8880a5a6e0a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0x79/0xed0 io_uring/kbuf.c:428
1 lock held by iou-wrk-28054/28059:
 #0: ffff8880a5a6e0a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:214 [inline]
 #0: ffff8880a5a6e0a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0x79/0xed0 io_uring/kbuf.c:428
1 lock held by iou-wrk-28054/28060:
 #0: ffff8880a5a6e0a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:214 [inline]
 #0: ffff8880a5a6e0a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0x79/0xed0 io_uring/kbuf.c:428
1 lock held by iou-wrk-28054/28061:
 #0: ffff8880a5a6e0a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:214 [inline]
 #0: ffff8880a5a6e0a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0x79/0xed0 io_uring/kbuf.c:428
1 lock held by iou-wrk-28054/28062:
 #0: ffff8880a5a6e0a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_ring_submit_lock io_uring/io_uring.h:214 [inline]
 #0: ffff8880a5a6e0a8 (&ctx->uring_lock){+.+.}-{3:3}, at: io_provide_buffers+0x79/0xed0 io_uring/kbuf.c:428
7 locks held by syz-executor.1/28865:
 #0: ffff88807ec7a460 (sb_writers#8){.+.+}-{0:0}, at: vfs_write+0x26d/0xbb0 fs/read_write.c:580
 #1: ffff888029f64488 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x1eb/0x4f0 fs/kernfs/file.c:325
 #2: ffff888020bce490 (kn->active#51){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x20f/0x4f0 fs/kernfs/file.c:326
 #3: ffffffff8d9ff808 (nsim_bus_dev_list_lock){+.+.}-{3:3}, at: del_device_store+0xfc/0x480 drivers/net/netdevsim/bus.c:209
 #4: ffff8880a58820e8 (&dev->mutex){....}-{3:3}, at: device_lock include/linux/device.h:831 [inline]
 #4: ffff8880a58820e8 (&dev->mutex){....}-{3:3}, at: __device_driver_lock drivers/base/dd.c:1073 [inline]
 #4: ffff8880a58820e8 (&dev->mutex){....}-{3:3}, at: device_release_driver_internal+0xba/0x880 drivers/base/dd.c:1276
 #5: ffff8880a5883250 (&devlink->lock_key#13){+.+.}-{3:3}, at: nsim_drv_remove+0x50/0x160 drivers/net/netdevsim/dev.c:1675
 #6: ffffffff8e08ae08 (rtnl_mutex){+.+.}-{3:3}, at: nsim_destroy+0x3e/0x150 drivers/net/netdevsim/netdev.c:374
4 locks held by syz-executor.5/28884:
 #0: ffff88807ec7a460 (sb_writers#8){.+.+}-{0:0}, at: vfs_write+0x26d/0xbb0 fs/read_write.c:580
 #1: ffff888147484c88 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x1eb/0x4f0 fs/kernfs/file.c:325
 #2: ffff888020bce490 (kn->active#51){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x20f/0x4f0 fs/kernfs/file.c:326
 #3: ffffffff8d9ff808 (nsim_bus_dev_list_lock){+.+.}-{3:3}, at: del_device_store+0xfc/0x480 drivers/net/netdevsim/bus.c:209

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 28 Comm: khungtaskd Not tainted 6.2.0-syzkaller-06695-gd8ca6dbb8de7 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/21/2023
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
 nmi_cpu_backtrace+0x4e5/0x560 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1b4/0x3f0 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
 watchdog+0xffb/0x1040 kernel/hung_task.c:377
 kthread+0x270/0x300 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1 skipped: idling at native_safe_halt arch/x86/include/asm/irqflags.h:48 [inline]
NMI backtrace for cpu 1 skipped: idling at arch_safe_halt arch/x86/include/asm/irqflags.h:86 [inline]
NMI backtrace for cpu 1 skipped: idling at acpi_safe_halt+0x20/0x30 drivers/acpi/processor_idle.c:112