syzbot


INFO: task hung in nfsd_umount

Status: upstream: reported on 2024/07/07 04:37
Subsystems: nfs
Reported-by: syzbot+b568ba42c85a332a88ee@syzkaller.appspotmail.com
First crash: 676d, last: 4h17m
AI Jobs (1)
ID Workflow Result Correct Bug Created Started Finished Revision Error
a607d1e4-f56a-479f-bd5d-819025c7ef3e repro INFO: task hung in nfsd_umount 2026/03/07 03:10 2026/03/07 03:11 2026/03/07 03:20 31e9c887f7dc24e04b3ca70d0d54fc34141844b0
Discussions (3)
Title Replies (including bot) Last reply
[syzbot] Monthly nfs report (Jul 2025) 0 (1) 2025/07/04 12:38
[syzbot] Monthly nfs report (Jun 2025) 0 (1) 2025/06/03 09:38
[syzbot] [nfs?] INFO: task hung in nfsd_umount 3 (4) 2024/09/21 07:58

Sample crash report:
INFO: task syz-executor:5829 blocked for more than 143 seconds.
      Tainted: G     U       L      syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:22200 pid:5829  tgid:5829  ppid:1      task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5298 [inline]
 __schedule+0xfee/0x6120 kernel/sched/core.c:6911
 __schedule_loop kernel/sched/core.c:6993 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7008
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7065
 __mutex_lock_common kernel/locking/mutex.c:692 [inline]
 __mutex_lock+0xc9a/0x1b90 kernel/locking/mutex.c:776
 nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
 nfsd_umount+0x3b/0x60 fs/nfsd/nfsctl.c:1364
 deactivate_locked_super+0xc1/0x1b0 fs/super.c:476
 deactivate_super fs/super.c:509 [inline]
 deactivate_super+0xe7/0x110 fs/super.c:505
 cleanup_mnt+0x21f/0x450 fs/namespace.c:1312
 task_work_run+0x150/0x240 kernel/task_work.c:233
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 __exit_to_user_mode_loop kernel/entry/common.c:67 [inline]
 exit_to_user_mode_loop+0x100/0x4a0 kernel/entry/common.c:98
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:325 [inline]
 do_syscall_64+0x668/0xf80 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f984cd9d9d7
RSP: 002b:00007ffc4f615478 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
RAX: 0000000000000000 RBX: 00007f984ce32050 RCX: 00007f984cd9d9d7
RDX: 0000000000000004 RSI: 0000000000000009 RDI: 00007ffc4f6165c0
RBP: 00007ffc4f6165ac R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffc4f6165c0
R13: 00007f984ce32050 R14: 000000000008255c R15: 00007ffc4f616600
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/31:
 #0: ffffffff8e7e7760 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8e7e7760 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8e7e7760 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x3d/0x184 kernel/locking/lockdep.c:6775
2 locks held by getty/5588:
 #0: ffff8880386000a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x419/0x1500 drivers/tty/n_tty.c:2211
2 locks held by syz-executor/5829:
 #0: ffff888079d800e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff888079d800e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff888079d800e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff888079d800e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec589a8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
2 locks held by kworker/u10:57/9003:
 #0: ffff88801d742148 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc9000354fd08 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
7 locks held by kworker/u10:58/9004:
2 locks held by kworker/u10:59/9005:
 #0: ffff88801d742148 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc9000356fd08 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
2 locks held by syz.4.1151/11741:
 #0: ffffffff906c1c50 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1217
 #1: ffffffff8ec589a8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_threads_set_doit+0x6c1/0xc00 fs/nfsd/nfsctl.c:1607
2 locks held by syz-executor/11952:
 #0: ffff8880383fc0e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff8880383fc0e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff8880383fc0e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff8880383fc0e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec589a8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
2 locks held by syz-executor/13101:
 #0: ffff88805de360e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff88805de360e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff88805de360e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff88805de360e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec589a8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
2 locks held by syz.0.1488/13305:
 #0: ffffffff906c1c50 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1217
 #1: ffffffff8ec589a8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_threads_set_doit+0x6c1/0xc00 fs/nfsd/nfsctl.c:1607
1 lock held by syz.7.1578/13731:
2 locks held by syz-executor/13877:
 #0: ffff8880945d20e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff8880945d20e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff8880945d20e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff8880945d20e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec589a8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
2 locks held by syz.3.1696/14386:
 #0: ffff88805a8700e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff88805a8700e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff88805a8700e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff88805a8700e0 (&type->s_umount_key#52){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec589a8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
1 lock held by syz.3.1696/14389:
 #0: ffff88805a8700e0 (&type->s_umount_key#52){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff88805a8700e0 (&type->s_umount_key#52){++++}-{4:4}, at: super_lock+0x1ca/0x3f0 fs/super.c:122
1 lock held by syz-executor/14557:
 #0: ffffffff8e7f32b8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x19e/0x3c0 kernel/rcu/tree_exp.h:343
2 locks held by syz.8.1719/14633:
 #0: ffffffff905fd0d0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
 #1: ffffffff8e7f3180 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x48/0x6d0 kernel/rcu/tree.c:3828
3 locks held by syz.1.1718/14637:
 #0: ffffffff905fd0d0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
 #1: ffffffff90615928 (rtnl_mutex){+.+.}-{4:4}, at: ops_exit_rtnl_list net/core/net_namespace.c:173 [inline]
 #1: ffffffff90615928 (rtnl_mutex){+.+.}-{4:4}, at: ops_undo_list+0x7ec/0xab0 net/core/net_namespace.c:248
 #2: ffffffff8e7f32b8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x27f/0x3c0 kernel/rcu/tree_exp.h:311
2 locks held by syz.1.1718/14655:
 #0: ffffffff905fd0d0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
 #1: ffffffff90615928 (rtnl_mutex){+.+.}-{4:4}, at: ip_tunnel_init_net+0x21e/0x780 net/ipv4/ip_tunnel.c:1146
4 locks held by sed/14692:
 #0: ffff8880b843b360 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2c/0x140 kernel/sched/core.c:647
 #1: ffff8880b8424648 (psi_seq){-.-.}-{0:0}, at: psi_sched_switch kernel/sched/stats.h:225 [inline]
 #1: ffff8880b8424648 (psi_seq){-.-.}-{0:0}, at: __schedule+0x2c11/0x6120 kernel/sched/core.c:6905
 #2: ffff88802d283a38 (&sig->wait_chldexit){....}-{3:3}, at: __wake_up_common_lock kernel/sched/wait.c:124 [inline]
 #2: ffff88802d283a38 (&sig->wait_chldexit){....}-{3:3}, at: __wake_up_sync_key+0x1c/0x50 kernel/sched/wait.c:192
 #3: ffff888032782910 (&p->pi_lock){-.-.}-{2:2}, at: class_raw_spinlock_irqsave_constructor include/linux/spinlock.h:570 [inline]
 #3: ffff888032782910 (&p->pi_lock){-.-.}-{2:2}, at: try_to_wake_up+0xb2/0x1a80 kernel/sched/core.c:4130

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 31 Comm: khungtaskd Tainted: G     U       L      syzkaller #0 PREEMPT(full) 
Tainted: [U]=USER, [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x100/0x190 lib/dump_stack.c:120
 nmi_cpu_backtrace.cold+0x12d/0x151 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x1d7/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x141/0x190 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xd25/0x1050 kernel/hung_task.c:515
 kthread+0x370/0x450 kernel/kthread.c:436
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
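The traces above show the pattern of the hang: the unmounting task (pid 5829) holds the superblock's `s_umount` lock via `deactivate_locked_super()` and then blocks in `nfsd_shutdown_threads()` waiting for `nfsd_mutex`, while other tasks (e.g. `nfsd_nl_threads_set_doit()` under `cb_lock`) hold `nfsd_mutex` and do not release it in time, so the hung-task watchdog fires after 143 seconds. The sketch below is an illustrative user-space analogue only, not kernel code: the lock names are stand-ins, and the timeout substitutes for the uninterruptible sleep the real task is stuck in.

```python
import threading
import time

# Stand-ins for the kernel locks seen in the lockdep dump above.
s_umount = threading.Lock()    # analogue of &type->s_umount_key
nfsd_mutex = threading.Lock()  # analogue of nfsd_mutex

def holder():
    # Analogue of a path like nfsd_nl_threads_set_doit() that holds
    # nfsd_mutex and takes a long time to finish.
    with nfsd_mutex:
        time.sleep(2)  # "forever" from the umounting task's viewpoint

def umounter(result):
    # Analogue of deactivate_locked_super() -> nfsd_umount()
    # -> nfsd_shutdown_threads(): take s_umount, then wait on nfsd_mutex.
    with s_umount:
        # The real task sleeps uninterruptibly (state D); here we just
        # time out so the example terminates.
        result["got_mutex"] = nfsd_mutex.acquire(timeout=0.5)
        if result["got_mutex"]:
            nfsd_mutex.release()

t1 = threading.Thread(target=holder)
t1.start()
time.sleep(0.1)  # let the holder win the race for nfsd_mutex

result = {}
t2 = threading.Thread(target=umounter, args=(result,))
t2.start()
t2.join()
print(result["got_mutex"])  # False: the umount path could not make progress
t1.join()
```

In the real report there is no timeout: the task stays blocked, the hung-task watchdog (`khungtaskd`) detects it after `hung_task_timeout_secs`, and every other task queued on `nfsd_mutex` or on the same superblock's `s_umount` piles up behind it, which is why the lock dump lists several `syz-executor` tasks stuck at the identical `nfsd_shutdown_threads+0x5b` location.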

Crashes (3907):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets (help?) Manager Title
2026/03/27 10:27 upstream 46b513250491 4b3d9a38 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/27 08:31 upstream 46b513250491 4b3d9a38 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/27 00:49 upstream 0138af2472df 4b3d9a38 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/26 09:38 upstream d2a43e7f89da c6143aac .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/26 07:53 upstream d2a43e7f89da c6143aac .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/26 05:10 upstream d2a43e7f89da c6143aac .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/26 00:57 upstream d2a43e7f89da c6143aac .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/25 23:17 upstream bbeb83d3182a 4367a094 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/25 22:08 upstream bbeb83d3182a 4367a094 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/25 19:22 upstream bbeb83d3182a 4367a094 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/25 10:40 upstream 24f9515de877 b4723e5f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/25 09:32 upstream 24f9515de877 b4723e5f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/25 07:30 upstream 24f9515de877 b4723e5f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/24 23:24 upstream e3c33bc767b5 74e70d19 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/24 21:52 upstream e3c33bc767b5 74e70d19 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/24 08:49 upstream c369299895a5 baf8bf12 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/24 07:39 upstream c369299895a5 baf8bf12 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/24 06:08 upstream c369299895a5 baf8bf12 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/24 02:56 upstream c369299895a5 baf8bf12 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/23 19:03 upstream c369299895a5 5e3db351 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/23 06:54 upstream 8d8bd2a5aa98 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/23 05:12 upstream 8d8bd2a5aa98 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/23 00:28 upstream 8d8bd2a5aa98 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/22 17:28 upstream 113ae7b4decc 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/22 14:47 upstream 113ae7b4decc 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/22 11:38 upstream 113ae7b4decc 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/22 08:18 upstream 113ae7b4decc 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/22 06:02 upstream 113ae7b4decc 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/22 00:59 upstream a0c83177734a 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/21 14:38 upstream 42bddab0563f 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/21 13:34 upstream 42bddab0563f 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/21 12:28 upstream 42bddab0563f 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/21 07:49 upstream 42bddab0563f 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/21 02:51 upstream 42bddab0563f 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/20 14:16 upstream 0e4f8f1a3d08 85bf2a64 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/20 11:25 upstream a1d9d8e83378 2f245add .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/20 08:51 upstream a1d9d8e83378 2f245add .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/20 04:47 upstream a1d9d8e83378 2f245add .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/20 02:37 upstream a1d9d8e83378 2f245add .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/19 23:59 upstream a1d9d8e83378 2f245add .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/19 20:39 upstream 8a30aeb0d1b4 bd6dcb30 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/19 09:58 upstream 8a30aeb0d1b4 0199f9a1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/18 18:49 upstream a989fde763f4 0199f9a1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/18 17:37 upstream a989fde763f4 0199f9a1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/18 14:01 upstream a989fde763f4 0199f9a1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/18 08:25 upstream f0caa1d49cc0 c8810548 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/14 17:38 upstream 1c9982b49613 ee8d34d6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in nfsd_umount
2026/03/13 18:46 upstream b36eb6e3f5d8 351cb5cf .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in nfsd_umount
2026/03/08 20:16 upstream 014441d1e4b2 5cb44a80 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2026/03/06 23:37 upstream 651690480a96 5cb44a80 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/07/06 12:12 upstream 1dd28064d416 bc4ebbb5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/07/03 04:33 upstream e9d22f7a6655 1ecfa2d8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/06/29 05:25 upstream 6c0483dbfe72 757f06b1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2026/03/24 14:27 linux-next 09c0f7f1bcdb 74e70d19 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in nfsd_umount
2026/03/20 23:11 linux-next 785f0eb2f85d 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-rust-kasan-gce INFO: task hung in nfsd_umount
2026/03/20 10:21 linux-next b5d083a3ed1e 2f245add .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in nfsd_umount
* Struck through repros no longer work on HEAD.