INFO: task hung in nfsd_umount

Status: upstream: reported on 2024/07/07 04:37
Subsystems: nfs
Reported-by: syzbot+b568ba42c85a332a88ee@syzkaller.appspotmail.com
First crash: 638d, last: 2h05m
Discussions (3)
Title                                           Replies (including bot)  Last reply
[syzbot] Monthly nfs report (Jul 2025)          0 (1)                    2025/07/04 12:38
[syzbot] Monthly nfs report (Jun 2025)          0 (1)                    2025/06/03 09:38
[syzbot] [nfs?] INFO: task hung in nfsd_umount  3 (4)                    2024/09/21 07:58

Sample crash report:
INFO: task syz.3.2120:14333 blocked for more than 143 seconds.
      Tainted: G             L      syzkaller #0
      Blocked by coredump.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.2120      state:D stack:25592 pid:14333 tgid:14331 ppid:5814   task_flags:0x40054c flags:0x00080001
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0xfee/0x60e0 kernel/sched/core.c:6907
 __schedule_loop kernel/sched/core.c:6989 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7004
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7061
 __mutex_lock_common kernel/locking/mutex.c:692 [inline]
 __mutex_lock+0xc9a/0x1b90 kernel/locking/mutex.c:776
 nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:575
 nfsd_umount+0x3b/0x60 fs/nfsd/nfsctl.c:1354
 deactivate_locked_super+0xc1/0x1b0 fs/super.c:476
 deactivate_super fs/super.c:509 [inline]
 deactivate_super+0xe7/0x110 fs/super.c:505
 cleanup_mnt+0x21f/0x450 fs/namespace.c:1312
 task_work_run+0x150/0x240 kernel/task_work.c:233
 exit_task_work include/linux/task_work.h:40 [inline]
 do_exit+0x829/0x2a90 kernel/exit.c:971
 do_group_exit+0xd5/0x2a0 kernel/exit.c:1112
 get_signal+0x1ec7/0x21e0 kernel/signal.c:3034
 arch_do_signal_or_restart+0x91/0x7a0 arch/x86/kernel/signal.c:337
 __exit_to_user_mode_loop kernel/entry/common.c:64 [inline]
 exit_to_user_mode_loop+0x86/0x4a0 kernel/entry/common.c:98
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:325 [inline]
 do_syscall_64+0x67c/0xf80 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fac10b9bf79
RSP: 002b:00007fac0edcd028 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: 0000000000000000 RBX: 00007fac10e16090 RCX: 00007fac10b9bf79
RDX: 0000200000000280 RSI: 00000000c0481273 RDI: 0000000000000005
RBP: 00007fac10c327e0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fac10e16128 R14: 00007fac10e16090 R15: 00007ffef11df478
 </TASK>
INFO: task syz.4.2159:14507 blocked for more than 144 seconds.
      Tainted: G             L      syzkaller #0
      Blocked by coredump.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.2159      state:D stack:26616 pid:14507 tgid:14501 ppid:5807   task_flags:0x40044c flags:0x00080003
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0xfee/0x60e0 kernel/sched/core.c:6907
 __schedule_loop kernel/sched/core.c:6989 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7004
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7061
 __mutex_lock_common kernel/locking/mutex.c:692 [inline]
 __mutex_lock+0xc9a/0x1b90 kernel/locking/mutex.c:776
 nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:575
 nfsd_umount+0x3b/0x60 fs/nfsd/nfsctl.c:1354
 deactivate_locked_super+0xc1/0x1b0 fs/super.c:476
 deactivate_super fs/super.c:509 [inline]
 deactivate_super+0xe7/0x110 fs/super.c:505
 cleanup_mnt+0x21f/0x450 fs/namespace.c:1312
 task_work_run+0x150/0x240 kernel/task_work.c:233
 exit_task_work include/linux/task_work.h:40 [inline]
 do_exit+0x829/0x2a90 kernel/exit.c:971
 do_group_exit+0xd5/0x2a0 kernel/exit.c:1112
 get_signal+0x1ec7/0x21e0 kernel/signal.c:3034
 arch_do_signal_or_restart+0x91/0x7a0 arch/x86/kernel/signal.c:337
 __exit_to_user_mode_loop kernel/entry/common.c:64 [inline]
 exit_to_user_mode_loop+0x86/0x4a0 kernel/entry/common.c:98
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:325 [inline]
 do_syscall_64+0x67c/0xf80 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f08c9f9bc0b
RSP: 002b:00007f08c81d2f50 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: fffffffffffffffc RBX: 0000000000000003 RCX: 00007f08c9f9bc0b
RDX: 00007f08c81d3fe0 RSI: 0000000080085502 RDI: 0000000000000003
RBP: 00007f08ca0327e0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000003 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f08ca216090 R15: 00007ffcec80acc8
 </TASK>
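
Both hung tasks are exiting processes whose final mount cleanup (cleanup_mnt() run from task work during do_exit()) tears down an nfsd filesystem mount: deactivate_super() takes the superblock's s_umount rwsem, then nfsd_umount() calls nfsd_shutdown_threads(), which sleeps waiting for nfsd_mutex. A simplified sketch of that path, matching the cited lines (fs/nfsd/nfsctl.c:1354, fs/nfsd/nfssvc.c:575) but not verbatim kernel source:

    /* fs/nfsd/nfsctl.c (sketch, not verbatim) */
    static void nfsd_umount(struct super_block *sb)
    {
            struct net *net = sb->s_fs_info;

            /* caller (deactivate_super) already holds sb->s_umount */
            nfsd_shutdown_threads(net);     /* nfsctl.c:1354 in the trace */
            kill_litter_super(sb);
            put_net(net);
    }

    /* fs/nfsd/nfssvc.c (sketch, not verbatim) */
    void nfsd_shutdown_threads(struct net *net)
    {
            mutex_lock(&nfsd_mutex);        /* nfssvc.c:575 -- both tasks block here */
            /* ... stop this net's nfsd threads ... */
            mutex_unlock(&nfsd_mutex);
    }

So the hang reduces to: whichever task holds nfsd_mutex is not releasing it, and every nfsd unmount (plus anyone waiting on the same s_umount) queues behind it.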

Showing all locks held in the system:
4 locks held by kworker/0:0/9:
1 lock held by khungtaskd/31:
 #0: ffffffff8e7e94a0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8e7e94a0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8e7e94a0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x3d/0x184 kernel/locking/lockdep.c:6775
3 locks held by kworker/0:2/2161:
 #0: ffff88813fe63548 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x1287/0x1920 kernel/workqueue.c:3250
 #1: ffffc90007407d08 (free_ipc_work){+.+.}-{0:0}, at: process_one_work+0x93c/0x1920 kernel/workqueue.c:3251
 #2: ffffffff8e7f50b8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x19e/0x3c0 kernel/rcu/tree_exp.h:343
2 locks held by getty/5569:
 #0: ffff8880387610a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x419/0x1500 drivers/tty/n_tty.c:2211
1 lock held by udevd/10519:
 #0: ffff88802758b358 (&disk->open_mutex){+.+.}-{4:4}, at: bdev_open+0x41a/0xe40 block/bdev.c:961
3 locks held by kworker/0:5/12975:
2 locks held by syz.0.2001/13812:
 #0: ffffffff906b8e70 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8ec56ce8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_listener_set_doit+0xd5/0x1b20 fs/nfsd/nfsctl.c:1893
2 locks held by syz.3.2120/14333:
 #0: ffff88807c3140e0 (&type->s_umount_key#97){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff88807c3140e0 (&type->s_umount_key#97){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff88807c3140e0 (&type->s_umount_key#97){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff88807c3140e0 (&type->s_umount_key#97){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec56ce8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:575
2 locks held by syz.4.2159/14507:
 #0: ffff88803312c0e0 (&type->s_umount_key#97){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff88803312c0e0 (&type->s_umount_key#97){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff88803312c0e0 (&type->s_umount_key#97){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff88803312c0e0 (&type->s_umount_key#97){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec56ce8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:575
1 lock held by syz.5.2466/15825:
 #0: ffff88807c3140e0 (&type->s_umount_key#97){++++}-{4:4}, at: __super_lock fs/super.c:60 [inline]
 #0: ffff88807c3140e0 (&type->s_umount_key#97){++++}-{4:4}, at: super_lock+0x320/0x3f0 fs/super.c:122
1 lock held by syz.7.2608/16424:
 #0: ffff88807c3140e0 (&type->s_umount_key#97){++++}-{4:4}, at: __super_lock fs/super.c:60 [inline]
 #0: ffff88807c3140e0 (&type->s_umount_key#97){++++}-{4:4}, at: super_lock+0x320/0x3f0 fs/super.c:122
3 locks held by syz.2.2678/16729:
 #0: ffffffff905f4430 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
 #1: ffffffff9060cd28 (rtnl_mutex){+.+.}-{4:4}, at: default_device_exit_batch+0x90/0xc60 net/core/dev.c:13037
 #2: ffffffff8e7f50b8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x19e/0x3c0 kernel/rcu/tree_exp.h:343
3 locks held by syz.6.2684/16756:
 #0: ffff888155e88ec0 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close+0x26/0xb0 net/bluetooth/hci_core.c:500
 #1: ffff888155e880c0 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x35c/0x1240 net/bluetooth/hci_sync.c:5346
 #2: ffffffff908a4ca8 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:2151 [inline]
 #2: ffffffff908a4ca8 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_hash_flush+0xbb/0x280 net/bluetooth/hci_conn.c:2644
4 locks held by syz.8.2683/16752:
 #0: ffff88803083cec0 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close+0x26/0xb0 net/bluetooth/hci_core.c:500
 #1: ffff88803083c0c0 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x35c/0x1240 net/bluetooth/hci_sync.c:5346
 #2: ffffffff908a4ca8 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:2151 [inline]
 #2: ffffffff908a4ca8 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_hash_flush+0xbb/0x280 net/bluetooth/hci_conn.c:2644
 #3: ffff888068d292f8 (&conn->lock#2){+.+.}-{4:4}, at: l2cap_conn_del+0x80/0x770 net/bluetooth/l2cap_core.c:1755
3 locks held by syz.1.2685/16762:
 #0: ffff888045410ec0 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close+0x26/0xb0 net/bluetooth/hci_core.c:500
 #1: ffff8880454100c0 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x35c/0x1240 net/bluetooth/hci_sync.c:5346
 #2: ffffffff908a4ca8 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:2151 [inline]
 #2: ffffffff908a4ca8 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_hash_flush+0xbb/0x280 net/bluetooth/hci_conn.c:2644
1 lock held by syz.9.2680/16764:
 #0: ffffffff9060cd28 (rtnl_mutex){+.+.}-{4:4}, at: ppp_release+0x16f/0x230 drivers/net/ppp/ppp_generic.c:413

=============================================
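
The lock dump localizes the dependency chain: syz.0.2001/13812 holds nfsd_mutex, taken in nfsd_nl_listener_set_doit() (the nfsd generic-netlink listener-configuration path); syz.3.2120 and syz.4.2159 each hold a superblock's s_umount and queue on nfsd_mutex in nfsd_shutdown_threads(); syz.5.2466 and syz.7.2608 in turn block on the s_umount instance (ffff88807c3140e0) held by syz.3.2120. Reduced to lock operations (a sketch of what the dump shows; the report does not show what the nfsd_mutex holder itself is blocked on):

    /* syz.0.2001 (netlink): holds the contended mutex */
    mutex_lock(&nfsd_mutex);           /* nfsd_nl_listener_set_doit() */
    /* ... not released; what this task waits on is not in the dump ... */

    /* syz.3.2120, syz.4.2159 (unmount on exit): */
    down_write(&sb->s_umount);         /* __super_lock_excl() via deactivate_super() */
    mutex_lock(&nfsd_mutex);           /* nfsd_shutdown_threads() -- hangs */

    /* syz.5.2466, syz.7.2608: */
    down_read(&sb->s_umount);          /* super_lock(), queued behind syz.3.2120 */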

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Tainted: G             L      syzkaller #0 PREEMPT(full) 
Tainted: [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x100/0x190 lib/dump_stack.c:120
 nmi_cpu_backtrace.cold+0x12d/0x151 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x1d7/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x141/0x190 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xd25/0x1050 kernel/hung_task.c:515
 kthread+0x370/0x450 kernel/kthread.c:467
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Tainted: G             L      syzkaller #0 PREEMPT(full) 
Tainted: [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
RIP: 0010:pv_native_safe_halt+0xf/0x20 arch/x86/kernel/paravirt.c:63
Code: 98 83 02 c3 cc cc cc cc 0f 1f 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 66 90 0f 00 2d 43 2c 1d 00 fb f4 <e9> bc 35 03 00 66 2e 0f 1f 84 00 00 00 00 00 66 90 90 90 90 90 90
RSP: 0018:ffffffff8e407e00 EFLAGS: 00000242
RAX: 000000000293ebfd RBX: ffffffff8e4975c0 RCX: ffffffff8b8e5c75
RDX: 0000000000000000 RSI: ffffffff8de6cf89 RDI: ffffffff8c1adf20
RBP: 0000000000000000 R08: 0000000000000001 R09: ffffed1017086795
R10: ffff8880b8433cab R11: 0000000000000000 R12: fffffbfff1c92eb8
R13: 0000000000000000 R14: ffffffff90d92f10 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff888124352000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000200000404030 CR3: 0000000033e44000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 arch_safe_halt arch/x86/include/asm/paravirt.h:73 [inline]
 default_idle+0x9/0x10 arch/x86/kernel/process.c:767
 default_idle_call+0x6c/0xb0 kernel/sched/idle.c:122
 cpuidle_idle_call kernel/sched/idle.c:191 [inline]
 do_idle+0x35b/0x4b0 kernel/sched/idle.c:332
 cpu_startup_entry+0x4f/0x60 kernel/sched/idle.c:430
 rest_init+0x251/0x260 init/main.c:760
 start_kernel+0x47f/0x480 init/main.c:1210
 x86_64_start_reservations+0x24/0x30 arch/x86/kernel/head64.c:310
 x86_64_start_kernel+0x12b/0x130 arch/x86/kernel/head64.c:291
 common_startup_64+0x13e/0x148
 </TASK>

Crashes (3632):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/02/17 04:37 upstream 0f2acd3148e0 5d52cba5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in nfsd_umount
2026/02/16 10:37 upstream 26a4cfaff82a 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/16 06:58 upstream 26a4cfaff82a 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/16 05:14 upstream 26a4cfaff82a 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/16 01:28 upstream 26a4cfaff82a 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/15 22:54 upstream 26a4cfaff82a 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/15 19:18 upstream ca4ee40bf13d 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/15 13:35 upstream ca4ee40bf13d 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/15 12:14 upstream ca4ee40bf13d 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/15 06:15 upstream 3e48a11675c5 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/15 02:32 upstream 3e48a11675c5 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/14 23:16 upstream 3e48a11675c5 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/14 22:05 upstream 3e48a11675c5 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/14 16:51 upstream 770aaedb461a 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/14 15:27 upstream 770aaedb461a 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/14 14:12 upstream 770aaedb461a 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/14 05:38 upstream cee73b1e840c 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/14 04:11 upstream cee73b1e840c 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/13 23:12 upstream cee73b1e840c 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/13 21:53 upstream cee73b1e840c 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/13 20:32 upstream cee73b1e840c 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/13 12:48 upstream 7449f86bafcd 6a673c50 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/12 10:13 upstream 1e83ccd5921a 76a109e2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/12 06:35 upstream 1e83ccd5921a 76a109e2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/11 20:50 upstream 192c0159402e 75707236 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/11 06:45 upstream dc855b77719f 441e25b7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/10 12:32 upstream 72c395024dac a076df6f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/10 10:05 upstream 8a5203c630c6 4ab09a02 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/09 20:45 upstream 05f7e89ab973 df949cd9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/09 18:47 upstream 05f7e89ab973 df949cd9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/09 17:32 upstream 05f7e89ab973 df949cd9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/09 15:43 upstream 05f7e89ab973 df949cd9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/09 06:00 upstream e98f34af6116 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 23:52 upstream e98f34af6116 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 21:52 upstream e98f34af6116 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 19:30 upstream e98f34af6116 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 17:41 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 16:40 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 11:56 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 09:24 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in nfsd_umount
2026/02/08 02:18 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 00:56 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/07 19:50 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/07 17:52 upstream 2687c848e578 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/07 01:39 upstream 2687c848e578 f20fc9f9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2026/01/06 15:39 upstream 7f98ab9da046 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in nfsd_umount
2024/07/06 12:12 upstream 1dd28064d416 bc4ebbb5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/07/03 04:33 upstream e9d22f7a6655 1ecfa2d8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/06/29 05:25 upstream 6c0483dbfe72 757f06b1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2026/02/16 00:24 linux-next 635c467cc14e 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-rust-kasan-gce INFO: task hung in nfsd_umount
2026/02/13 17:07 linux-next af98e93c5c39 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in nfsd_umount