syzbot


INFO: task hung in p9_fd_close (3)

Status: upstream: reported C repro on 2025/10/13 22:08
Subsystems: kernel
Reported-by: syzbot+ed53e35a1e9dde289579@syzkaller.appspotmail.com
First crash: 57d, last: 36d
Cause bisection: failed (error log, bisect log)
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [kernel?] INFO: task hung in p9_fd_close (3) | 0 (1) | 2025/10/13 22:08
Similar bugs (7)
Kernel | Title | Rank 🛈 | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: task hung in p9_fd_close v9fs | 1 | C | error | error | 484 | 1128d | 2292d | 22/29 | fixed on 2023/02/24 13:50
android-6-12 | INFO: task hung in p9_fd_close origin:upstream | 1 | C | | | 1 | 64d | 78d | 0/1 | premoderation: reported C repro on 2025/09/22 01:10
linux-4.14 | INFO: task hung in p9_fd_close | 1 | C | inconclusive | | 78 | 1113d | 2312d | 0/1 | upstream: reported C repro on 2019/08/11 15:06
upstream | INFO: task hung in p9_fd_close (2) kernel | 1 | | | | 1 | 186d | 186d | 0/29 | auto-obsoleted due to no activity on 2025/09/03 21:17
linux-5.15 | INFO: task hung in p9_fd_close origin:lts-only | 1 | C | error | | 2 | 6d22h | 181d | 0/3 | upstream: reported C repro on 2025/06/11 07:14
linux-4.19 | INFO: task hung in p9_fd_close | 1 | C | error | | 219 | 1013d | 2304d | 0/1 | upstream: reported C repro on 2019/08/19 15:52
upstream | INFO: task can't die in p9_fd_close | 1 | C | done | | 58 | 1324d | 1931d | 0/29 | closed as dup on 2022/08/26 12:44
Last patch testing requests (9)
Created | Duration | User | Patch | Repo | Result
2025/11/17 08:45 | 21m | retest repro | | linux-next | error
2025/11/17 08:45 | 1h07m | retest repro | | linux-next | error
2025/11/17 08:45 | 1h13m | retest repro | | linux-next | error
2025/11/17 08:45 | 19m | retest repro | | linux-next | report log
2025/11/17 02:46 | 30m | retest repro | | linux-next | report log
2025/11/17 02:46 | 28m | retest repro | | linux-next | report log
2025/11/17 02:46 | 31m | retest repro | | linux-next | report log
2025/11/17 02:46 | 18m | retest repro | | linux-next | report log
2025/11/17 02:46 | 21m | retest repro | | linux-next | error

Sample crash report:
INFO: task syz.0.19:6064 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.19        state:D stack:25928 pid:6064  tgid:6063  ppid:5954   task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5254 [inline]
 __schedule+0x17c4/0x4d60 kernel/sched/core.c:6862
 __schedule_loop kernel/sched/core.c:6944 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6959
 schedule_timeout+0x9a/0x270 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:100 [inline]
 __wait_for_common kernel/sched/completion.c:121 [inline]
 wait_for_common kernel/sched/completion.c:132 [inline]
 wait_for_completion+0x2bf/0x5d0 kernel/sched/completion.c:153
 __flush_work+0x9b9/0xbc0 kernel/workqueue.c:4277
 __cancel_work_sync+0xbe/0x110 kernel/workqueue.c:4397
 p9_conn_destroy net/9p/trans_fd.c:909 [inline]
 p9_fd_close+0x251/0x430 net/9p/trans_fd.c:944
 p9_client_create+0xd1c/0x10b0 net/9p/client.c:1060
 v9fs_session_init+0x1d7/0x19a0 fs/9p/v9fs.c:410
 v9fs_mount+0xc8/0xa50 fs/9p/vfs_super.c:122
 legacy_get_tree+0xfd/0x1a0 fs/fs_context.c:663
 vfs_get_tree+0x92/0x2b0 fs/super.c:1752
 fc_mount fs/namespace.c:1198 [inline]
 do_new_mount_fc fs/namespace.c:3641 [inline]
 do_new_mount+0x302/0xa10 fs/namespace.c:3717
 do_mount fs/namespace.c:4040 [inline]
 __do_sys_mount fs/namespace.c:4228 [inline]
 __se_sys_mount+0x313/0x410 fs/namespace.c:4205
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f973e18efc9
RSP: 002b:00007f973f00b038 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007f973e3e5fa0 RCX: 00007f973e18efc9
RDX: 0000200000000540 RSI: 0000200000000180 RDI: 0000000000000000
RBP: 00007f973e211f91 R08: 0000200000000580 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f973e3e6038 R14: 00007f973e3e5fa0 R15: 00007ffe9e0656f8
 </TASK>
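
[Annotation] The register dump above shows the hung task inside a mount(2) call (ORIG_RAX 0xa5), and the stack walks do_new_mount -> v9fs_mount -> p9_client_create -> p9_fd_close -> p9_conn_destroy. The sketch below is only a minimal illustration of driving that same kernel path from userspace; it is NOT the syzbot C reproducer. The socketpair setup, the "/mnt" target, and the trans=fd,rfdno=,wfdno= options are assumptions chosen so the 9p fd transport has descriptors and the client-create error path runs; whether this triggers the reported hang is unknown.

/* Minimal sketch, assumption-laden -- not the syzbot reproducer. */
#include <stdio.h>
#include <sys/mount.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int sv[2];
	char opts[64];

	/* A socketpair with no 9p server behind it: the version handshake in
	 * p9_client_create() fails, and the client is torn down through
	 * p9_fd_close() / p9_conn_destroy(), the path shown in the trace. */
	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
		perror("socketpair");
		return 1;
	}
	close(sv[1]); /* assumption: make the transport error out quickly */

	snprintf(opts, sizeof(opts), "trans=fd,rfdno=%d,wfdno=%d", sv[0], sv[0]);

	/* Needs CAP_SYS_ADMIN; "/mnt" is a placeholder target. The mount is
	 * expected to fail -- the bug is that cleanup inside it hangs. */
	if (mount("9p", "/mnt", "9p", 0, opts) < 0)
		perror("mount");
	return 0;
}
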

Showing all locks held in the system:
2 locks held by kworker/0:1/10:
 #0: ffff88813fe55948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x841/0x15d0 kernel/workqueue.c:3242
 #1: ffffc900000f7ba0 ((work_completion)(&m->rq)){+.+.}-{0:0}, at: process_one_work+0x868/0x15d0 kernel/workqueue.c:3243
2 locks held by kworker/1:0/24:
 #0: ffff88813fe55948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x841/0x15d0 kernel/workqueue.c:3242
 #1: ffffc900001e7ba0 ((work_completion)(&m->rq)){+.+.}-{0:0}, at: process_one_work+0x868/0x15d0 kernel/workqueue.c:3243
1 lock held by khungtaskd/31:
 #0: ffffffff8df3d960 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8df3d960 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8df3d960 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
3 locks held by kworker/u8:2/36:
 #0: ffff8880302c0948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x841/0x15d0 kernel/workqueue.c:3242
 #1: ffffc90000ac7ba0 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work+0x868/0x15d0 kernel/workqueue.c:3243
 #2: ffffffff8f2d4b88 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #2: ffffffff8f2d4b88 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0x112/0x14b0 net/ipv6/addrconf.c:4194
2 locks held by kworker/1:1/43:
 #0: ffff88813fe55948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x841/0x15d0 kernel/workqueue.c:3242
 #1: ffffc90000b37ba0 ((work_completion)(&m->rq)){+.+.}-{0:0}, at: process_one_work+0x868/0x15d0 kernel/workqueue.c:3243
7 locks held by kworker/u8:3/50:
3 locks held by kworker/u8:4/64:
 #0: ffff88803348d948 ((wq_completion)udp_tunnel_nic){+.+.}-{0:0}, at: process_one_work+0x841/0x15d0 kernel/workqueue.c:3242
 #1: ffffc9000215fba0 ((work_completion)(&utn->work)){+.+.}-{0:0}, at: process_one_work+0x868/0x15d0 kernel/workqueue.c:3243
 #2: ffffffff8f2d4b88 (rtnl_mutex){+.+.}-{4:4}, at: udp_tunnel_nic_device_sync_work+0x29/0xa50 net/ipv4/udp_tunnel_nic.c:736
2 locks held by kworker/0:2/797:
 #0: ffff88813fe55948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x841/0x15d0 kernel/workqueue.c:3242
 #1: ffffc900030a7ba0 ((work_completion)(&m->rq)){+.+.}-{0:0}, at: process_one_work+0x868/0x15d0 kernel/workqueue.c:3243
2 locks held by kworker/1:2/798:
 #0: ffff88813fe55948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x841/0x15d0 kernel/workqueue.c:3242
 #1: ffffc90002ff7ba0 ((work_completion)(&m->rq)){+.+.}-{0:0}, at: process_one_work+0x868/0x15d0 kernel/workqueue.c:3243
2 locks held by getty/5587:
 #0: ffff88814c9640a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc900036c32f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x43e/0x1400 drivers/tty/n_tty.c:2222
2 locks held by kworker/0:5/6088:
 #0: ffff88813fe55948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x841/0x15d0 kernel/workqueue.c:3242
 #1: ffffc900036ffba0 ((work_completion)(&m->rq)){+.+.}-{0:0}, at: process_one_work+0x868/0x15d0 kernel/workqueue.c:3243
2 locks held by kworker/1:4/6093:
 #0: ffff88813fe55948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x841/0x15d0 kernel/workqueue.c:3242
 #1: ffffc900036dfba0 ((work_completion)(&m->rq)){+.+.}-{0:0}, at: process_one_work+0x868/0x15d0 kernel/workqueue.c:3243
3 locks held by kworker/0:6/6150:
 #0: ffff88813fe55948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x841/0x15d0 kernel/workqueue.c:3242
 #1: ffffc90002eb7ba0 ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at: process_one_work+0x868/0x15d0 kernel/workqueue.c:3243
 #2: ffff888077b9a240 (&data->fib_lock){+.+.}-{4:4}, at: nsim_fib_event_work+0x1f7/0x3b0 drivers/net/netdevsim/fib.c:1490
2 locks held by kworker/1:8/6277:
 #0: ffff88813fe55948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x841/0x15d0 kernel/workqueue.c:3242
 #1: ffffc9000422fba0 ((work_completion)(&m->rq)){+.+.}-{0:0}, at: process_one_work+0x868/0x15d0 kernel/workqueue.c:3243
2 locks held by kworker/1:9/6286:
 #0: ffff88813fe55948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x841/0x15d0 kernel/workqueue.c:3242
 #1: ffffc900044afba0 ((work_completion)(&m->rq)){+.+.}-{0:0}, at: process_one_work+0x868/0x15d0 kernel/workqueue.c:3243
3 locks held by syz-executor/6320:
 #0: ffffffff8f2d4b88 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #0: ffffffff8f2d4b88 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #0: ffffffff8f2d4b88 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8e9/0x1c80 net/core/rtnetlink.c:4064
 #1: ffff88805604d528 (&wg->device_update_lock){+.+.}-{4:4}, at: wg_open+0x227/0x420 drivers/net/wireguard/device.c:50
 #2: ffffffff8df433f8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:311 [inline]
 #2: ffffffff8df433f8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x2f6/0x730 kernel/rcu/tree_exp.h:957

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:337 [inline]
 watchdog+0xfa9/0xff0 kernel/hung_task.c:500
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 64 Comm: kworker/u8:4 Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Workqueue: udp_tunnel_nic udp_tunnel_nic_device_sync_work
RIP: 0010:sprintf+0x0/0x120 lib/vsprintf.c:3084
Code: f6 be ff ff ff 7f 4c 89 ff 4c 89 f2 48 89 d9 5b 41 5e 41 5f e9 61 bb ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 <f3> 0f 1e fa 55 48 89 e5 41 57 41 56 41 55 41 54 53 48 83 e4 e0 48
RSP: 0018:ffffc9000215ed58 EFLAGS: 00000206
RAX: 00000044eaf99000 RBX: ffffc9000215eea0 RCX: 0000000000068644
RDX: 0000000000000128 RSI: ffffffff8b6b9400 RDI: ffffc9000215eea0
RBP: ffffc9000215ee10 R08: ffffc9000215eecf R09: 0000000000000000
R10: ffffc9000215eea0 R11: fffff5200042bdda R12: 0000000000000000
R13: ffffc9000215f020 R14: 0000000000000000 R15: ffffc9000215f028
FS:  0000000000000000(0000) GS:ffff888125ee2000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f135fa3a6b0 CR3: 0000000030f00000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 print_time kernel/printk/printk.c:1354 [inline]
 info_print_prefix+0x155/0x310 kernel/printk/printk.c:1380
 record_print_text+0x154/0x420 kernel/printk/printk.c:1429
 printk_get_next_message+0x26d/0x7b0 kernel/printk/printk.c:2997
 console_emit_next_record kernel/printk/printk.c:3062 [inline]
 console_flush_one_record kernel/printk/printk.c:3194 [inline]
 console_flush_all+0x4cc/0xb10 kernel/printk/printk.c:3268
 __console_flush_and_unlock kernel/printk/printk.c:3298 [inline]
 console_unlock+0xbb/0x190 kernel/printk/printk.c:3338
 vprintk_emit+0x4c5/0x590 kernel/printk/printk.c:2423
 dev_vprintk_emit+0x337/0x3f0 drivers/base/core.c:4914
 dev_printk_emit+0xe0/0x130 drivers/base/core.c:4925
 __netdev_printk+0x3d7/0x4d0 net/core/dev.c:12873
 netdev_info+0x10a/0x160 net/core/dev.c:12928
 nsim_udp_tunnel_set_port+0x26e/0x3e0 drivers/net/netdevsim/udp_tunnels.c:31
 udp_tunnel_nic_device_sync_one net/ipv4/udp_tunnel_nic.c:-1 [inline]
 udp_tunnel_nic_device_sync_by_port net/ipv4/udp_tunnel_nic.c:249 [inline]
 __udp_tunnel_nic_device_sync+0xb0f/0x14d0 net/ipv4/udp_tunnel_nic.c:292
 udp_tunnel_nic_device_sync_work+0x97/0xa50 net/ipv4/udp_tunnel_nic.c:740
 process_one_work+0x94a/0x15d0 kernel/workqueue.c:3267
 process_scheduled_works kernel/workqueue.c:3350 [inline]
 worker_thread+0x9b0/0xee0 kernel/workqueue.c:3431
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
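
[Annotation] Reading the trace and the lock dump together: p9_conn_destroy() calls cancel_work_sync() on the connection's read work (&m->rq), which waits in __flush_work()/wait_for_completion() for that work to finish, while several kworkers above are still shown holding the (work_completion)(&m->rq) lockdep class. The snippet below is only a userspace analogy of that pattern, a synchronous cancel/join waiting on a handler that never finishes (modeled here as a blocking read); it is not kernel code and does not reproduce the bug.

/* Userspace analogy only: models "cancel_work_sync() waits forever for a
 * handler that never completes"; it is not the 9p kernel code.
 * Build with: cc -pthread analogy.c */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int pipefd[2];

/* Stands in for the m->rq read work: blocked in a read that never returns,
 * because the write end stays open and nothing is ever written. */
static void *read_work(void *arg)
{
	char buf[64];
	(void)arg;
	(void)read(pipefd[0], buf, sizeof(buf));
	return NULL;
}

int main(void)
{
	pthread_t worker;

	if (pipe(pipefd) < 0)
		return 1;
	pthread_create(&worker, NULL, read_work, NULL);
	sleep(1);

	/* Stands in for cancel_work_sync(&m->rq) in p9_conn_destroy(): a
	 * synchronous wait on the handler above, so this join never returns
	 * (the program hangs here by design, mirroring the report). */
	fprintf(stderr, "joining read_work (hangs)\n");
	pthread_join(worker, NULL);
	return 0;
}
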

Crashes (9):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/11/02 19:21 | linux-next | 98bd8b16ae57 | 2c50b6a9 | .config | console log | report | syz / log | C | | [disk image] [vmlinux] [kernel image] | ci-upstream-linux-next-kasan-gce-root | INFO: task hung in p9_fd_close
2025/11/01 17:02 | linux-next | 98bd8b16ae57 | 2c50b6a9 | .config | console log | report | syz / log | C | | [disk image] [vmlinux] [kernel image] | ci-upstream-linux-next-kasan-gce-root | INFO: task hung in p9_fd_close
2025/10/30 01:24 | linux-next | f9ba12abc528 | fd2207e7 | .config | console log | report | syz / log | C | | [disk image] [vmlinux] [kernel image] | ci-upstream-linux-next-kasan-gce-root | INFO: task hung in p9_fd_close
2025/10/21 19:59 | linux-next | fe45352cd106 | 9832ed61 | .config | console log | report | syz / log | C | | [disk image] [vmlinux] [kernel image] | ci-upstream-linux-next-kasan-gce-root | INFO: task hung in p9_fd_close
2025/10/21 06:29 | linux-next | 606da5bb1655 | 9832ed61 | .config | console log | report | syz / log | C | | [disk image] [vmlinux] [kernel image] | ci-upstream-linux-next-kasan-gce-root | INFO: task hung in p9_fd_close
2025/10/19 08:36 | linux-next | 93f3bab4310d | 1c8c8cd8 | .config | console log | report | syz / log | C | | [disk image] [vmlinux] [kernel image] | ci-upstream-linux-next-kasan-gce-root | INFO: task hung in p9_fd_close
2025/10/18 10:03 | linux-next | 93f3bab4310d | 1c8c8cd8 | .config | console log | report | syz / log | C | | [disk image] [vmlinux] [kernel image] | ci-upstream-linux-next-kasan-gce-root | INFO: task hung in p9_fd_close
2025/10/17 05:28 | linux-next | 2433b8476165 | 19568248 | .config | console log | report | syz / log | C | | [disk image] [vmlinux] [kernel image] | ci-upstream-linux-next-kasan-gce-root | INFO: task hung in p9_fd_close
2025/10/12 22:16 | linux-next | 2b763d465239 | ff1712fe | .config | console log | report | syz / log | C | | [disk image] [vmlinux] [kernel image] | ci-upstream-linux-next-kasan-gce-root | INFO: task hung in p9_fd_close
* Struck through repros no longer work on HEAD.