syzbot


INFO: task hung in nfsd_umount

Status: upstream: reported on 2024/07/07 04:37
Subsystems: nfs
Reported-by: syzbot+b568ba42c85a332a88ee@syzkaller.appspotmail.com
First crash: 505d, last: 29m
Discussions (3)
Title                                          | Replies (including bot) | Last reply
[syzbot] Monthly nfs report (Jul 2025)         | 0 (1)                   | 2025/07/04 12:38
[syzbot] Monthly nfs report (Jun 2025)         | 0 (1)                   | 2025/06/03 09:38
[syzbot] [nfs?] INFO: task hung in nfsd_umount | 3 (4)                   | 2024/09/21 07:58

Sample crash report:
INFO: task syz-executor:12957 blocked for more than 143 seconds.
      Tainted: G     U    I         syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:23880 pid:12957 tgid:12957 ppid:1      task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5325 [inline]
 __schedule+0x1190/0x5de0 kernel/sched/core.c:6929
 __schedule_loop kernel/sched/core.c:7011 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:7026
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7083
 __mutex_lock_common kernel/locking/mutex.c:676 [inline]
 __mutex_lock+0x818/0x1060 kernel/locking/mutex.c:760
 nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:593
 nfsd_umount+0x48/0xe0 fs/nfsd/nfsctl.c:1347
 deactivate_locked_super+0xc1/0x1a0 fs/super.c:473
 deactivate_super fs/super.c:506 [inline]
 deactivate_super+0xde/0x100 fs/super.c:502
 cleanup_mnt+0x225/0x450 fs/namespace.c:1327
 task_work_run+0x150/0x240 kernel/task_work.c:227
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 exit_to_user_mode_loop+0xec/0x130 kernel/entry/common.c:43
 exit_to_user_mode_prepare include/linux/irq-entry-common.h:225 [inline]
 syscall_exit_to_user_mode_work include/linux/entry-common.h:175 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:210 [inline]
 do_syscall_64+0x426/0xfa0 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f4891f901f7
RSP: 002b:00007ffed7de15a8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
RAX: 0000000000000000 RBX: 00007f4892011d7d RCX: 00007f4891f901f7
RDX: 0000000000000000 RSI: 0000000000000009 RDI: 00007ffed7de1660
RBP: 00007ffed7de1660 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000ffffffff R11: 0000000000000246 R12: 00007ffed7de26f0
R13: 00007f4892011d7d R14: 00000000000ca04f R15: 00007ffed7de2730
 </TASK>
INFO: task syz-executor:13550 blocked for more than 144 seconds.
      Tainted: G     U    I         syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:23992 pid:13550 tgid:13550 ppid:1      task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5325 [inline]
 __schedule+0x1190/0x5de0 kernel/sched/core.c:6929
 __schedule_loop kernel/sched/core.c:7011 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:7026
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7083
 __mutex_lock_common kernel/locking/mutex.c:676 [inline]
 __mutex_lock+0x818/0x1060 kernel/locking/mutex.c:760
 nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:593
 nfsd_umount+0x48/0xe0 fs/nfsd/nfsctl.c:1347
 deactivate_locked_super+0xc1/0x1a0 fs/super.c:473
 deactivate_super fs/super.c:506 [inline]
 deactivate_super+0xde/0x100 fs/super.c:502
 cleanup_mnt+0x225/0x450 fs/namespace.c:1327
 task_work_run+0x150/0x240 kernel/task_work.c:227
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 exit_to_user_mode_loop+0xec/0x130 kernel/entry/common.c:43
 exit_to_user_mode_prepare include/linux/irq-entry-common.h:225 [inline]
 syscall_exit_to_user_mode_work include/linux/entry-common.h:175 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:210 [inline]
 do_syscall_64+0x426/0xfa0 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f72f43901f7
RSP: 002b:00007fffa5890de8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007f72f43901f7
RDX: 0000000000000000 RSI: 0000000000000009 RDI: 00007fffa5890ea0
RBP: 00007fffa5890ea0 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000ffffffff R11: 0000000000000246 R12: 00007fffa5891f30
R13: 00007f72f4411d7d R14: 00000000000c93c8 R15: 00007fffa5891f70
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/32:
 #0: ffffffff8e3c4320 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e3c4320 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8e3c4320 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x36/0x1c0 kernel/locking/lockdep.c:6775
2 locks held by syz-executor/5832:
 #0: ffff88807c9180e0 (&type->s_umount_key#55){++++}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff88807c9180e0 (&type->s_umount_key#55){++++}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff88807c9180e0 (&type->s_umount_key#55){++++}-{4:4}, at: deactivate_super fs/super.c:505 [inline]
 #0: ffff88807c9180e0 (&type->s_umount_key#55){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:502
 #1: ffffffff8e7ec888 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:593
3 locks held by kworker/0:6/5927:
2 locks held by syz-executor/12957:
 #0: ffff88803400e0e0 (&type->s_umount_key#55){++++}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff88803400e0e0 (&type->s_umount_key#55){++++}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff88803400e0e0 (&type->s_umount_key#55){++++}-{4:4}, at: deactivate_super fs/super.c:505 [inline]
 #0: ffff88803400e0e0 (&type->s_umount_key#55){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:502
 #1: ffffffff8e7ec888 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:593
2 locks held by syz-executor/13550:
 #0: ffff88804fc680e0 (&type->s_umount_key#55){++++}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff88804fc680e0 (&type->s_umount_key#55){++++}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff88804fc680e0 (&type->s_umount_key#55){++++}-{4:4}, at: deactivate_super fs/super.c:505 [inline]
 #0: ffff88804fc680e0 (&type->s_umount_key#55){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:502
 #1: ffffffff8e7ec888 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:593
5 locks held by kworker/u10:2/16769:
 #0: ffff88801ba9f148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3238
 #1: ffffc900042dfd00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3239
 #2: ffffffff900e7af0 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xad/0x8b0 net/core/net_namespace.c:669
 #3: ffffffff900fdf08 (rtnl_mutex){+.+.}-{4:4}, at: default_device_exit_batch+0x8b/0xaf0 net/core/dev.c:12807
 #4: ffffffff8e3cf8b8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x284/0x3c0 kernel/rcu/tree_exp.h:311
2 locks held by syz.2.2298/18322:
 #0: ffffffff901a1850 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8e7ec888 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_listener_set_doit+0xd5/0x1b10 fs/nfsd/nfsctl.c:1880
2 locks held by syz-executor/18756:
 #0: ffff88807f6880e0 (&type->s_umount_key#55){++++}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff88807f6880e0 (&type->s_umount_key#55){++++}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff88807f6880e0 (&type->s_umount_key#55){++++}-{4:4}, at: deactivate_super fs/super.c:505 [inline]
 #0: ffff88807f6880e0 (&type->s_umount_key#55){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:502
 #1: ffffffff8e7ec888 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:593
2 locks held by syz-executor/19477:
 #0: ffff88807f17e0e0 (&type->s_umount_key#55){++++}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff88807f17e0e0 (&type->s_umount_key#55){++++}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff88807f17e0e0 (&type->s_umount_key#55){++++}-{4:4}, at: deactivate_super fs/super.c:505 [inline]
 #0: ffff88807f17e0e0 (&type->s_umount_key#55){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:502
 #1: ffffffff8e7ec888 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:593
3 locks held by kworker/0:0/20212:
 #0: ffff88813ff19948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3238
 #1: ffffc9000b547d00 (free_ipc_work){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3239
 #2: ffffffff8e3cf8b8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x1a3/0x3c0 kernel/rcu/tree_exp.h:343
2 locks held by syz-executor/22211:
 #0: ffff888097a480e0 (&type->s_umount_key#55){++++}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff888097a480e0 (&type->s_umount_key#55){++++}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff888097a480e0 (&type->s_umount_key#55){++++}-{4:4}, at: deactivate_super fs/super.c:505 [inline]
 #0: ffff888097a480e0 (&type->s_umount_key#55){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:502
 #1: ffffffff8e7ec888 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:593
2 locks held by syz.9.2639/23686:
 #0: ffff888030f4c0e0 (&type->s_umount_key#55){++++}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff888030f4c0e0 (&type->s_umount_key#55){++++}-{4:4}, at: __super_lock_excl fs/super.c:72 [inline]
 #0: ffff888030f4c0e0 (&type->s_umount_key#55){++++}-{4:4}, at: deactivate_super fs/super.c:505 [inline]
 #0: ffff888030f4c0e0 (&type->s_umount_key#55){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:502
 #1: ffffffff8e7ec888 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:593
2 locks held by getty/24251:
 #0: ffff888030f910a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002e922f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x41b/0x14f0 drivers/tty/n_tty.c:2222
2 locks held by syz.7.2697/24767:
 #0: ffffffff901a1850 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8e7ec888 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_threads_set_doit+0x687/0xbc0 fs/nfsd/nfsctl.c:1590
4 locks held by syz.0.2761/26317:
 #0: ffff88809615cdc8 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close+0x26/0x90 net/bluetooth/hci_core.c:499
 #1: ffff88809615c0b8 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x3ae/0x11d0 net/bluetooth/hci_sync.c:5291
 #2: ffffffff90370708 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:2118 [inline]
 #2: ffffffff90370708 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_hash_flush+0xbb/0x260 net/bluetooth/hci_conn.c:2602
 #3: ffff88807acab338 (&conn->lock#2){+.+.}-{4:4}, at: l2cap_conn_del+0x80/0x730 net/bluetooth/l2cap_core.c:1762
3 locks held by syz.4.2762/26329:
 #0: ffff88808eb98dc8 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close+0x26/0x90 net/bluetooth/hci_core.c:499
 #1: ffff88808eb980b8 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x3ae/0x11d0 net/bluetooth/hci_sync.c:5291
 #2: ffffffff90370708 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:2118 [inline]
 #2: ffffffff90370708 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_hash_flush+0xbb/0x260 net/bluetooth/hci_conn.c:2602
3 locks held by syz.2.2763/26327:
 #0: ffff88805c420dc8 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close+0x26/0x90 net/bluetooth/hci_core.c:499
 #1: ffff88805c4200b8 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x3ae/0x11d0 net/bluetooth/hci_sync.c:5291
 #2: ffffffff90370708 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:2118 [inline]
 #2: ffffffff90370708 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_hash_flush+0xbb/0x260 net/bluetooth/hci_conn.c:2602

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 32 Comm: khungtaskd Tainted: G     U    I         syzkaller #0 PREEMPT(full) 
Tainted: [U]=USER, [I]=FIRMWARE_WORKAROUND
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/18/2025
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x27b/0x390 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x29c/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:332 [inline]
 watchdog+0xf3f/0x1170 kernel/hung_task.c:495
 kthread+0x3c2/0x780 kernel/kthread.c:463
 ret_from_fork+0x675/0x7d0 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 0 Comm: swapper/0 Tainted: G     U    I         syzkaller #0 PREEMPT(full) 
Tainted: [U]=USER, [I]=FIRMWARE_WORKAROUND
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/18/2025
RIP: 0010:pv_native_safe_halt+0xf/0x20 arch/x86/kernel/paravirt.c:82
Code: b7 78 02 c3 cc cc cc cc 0f 1f 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 66 90 0f 00 2d 73 35 28 00 fb f4 <e9> 0c 0a 03 00 66 2e 0f 1f 84 00 00 00 00 00 66 90 90 90 90 90 90
RSP: 0018:ffffffff8e007df8 EFLAGS: 000002c2
RAX: 00000000008f1c71 RBX: 0000000000000000 RCX: ffffffff8b61e2d9
RDX: 0000000000000000 RSI: ffffffff8daff475 RDI: ffffffff8bf1d540
RBP: fffffbfff1c12f40 R08: 0000000000000001 R09: ffffed1017086655
R10: ffff8880b84332ab R11: 0000000000000000 R12: 0000000000000000
R13: ffffffff8e097a00 R14: ffffffff908358d0 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff8881249e7000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f2275df4d58 CR3: 0000000031928000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 arch_safe_halt arch/x86/include/asm/paravirt.h:107 [inline]
 default_idle+0x13/0x20 arch/x86/kernel/process.c:767
 default_idle_call+0x6c/0xb0 kernel/sched/idle.c:122
 cpuidle_idle_call kernel/sched/idle.c:190 [inline]
 do_idle+0x38d/0x500 kernel/sched/idle.c:330
 cpu_startup_entry+0x4f/0x60 kernel/sched/idle.c:428
 rest_init+0x16b/0x2b0 init/main.c:757
 start_kernel+0x3f3/0x4e0 init/main.c:1111
 x86_64_start_reservations+0x18/0x30 arch/x86/kernel/head64.c:310
 x86_64_start_kernel+0x130/0x190 arch/x86/kernel/head64.c:291
 common_startup_64+0x13e/0x148
 </TASK>
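
Every hung task in the report above is stuck at the same point: the final unmount of the nfsd pseudo-filesystem takes the superblock's s_umount rwsem in deactivate_super(), then descends through nfsd_umount() into nfsd_shutdown_threads(), which blocks waiting for the global nfsd_mutex (fs/nfsd/nfssvc.c:593). The lock dump shows that mutex already held elsewhere, by the netlink handlers nfsd_nl_listener_set_doit() and nfsd_nl_threads_set_doit(), so each umounter queues behind them while pinning its own s_umount lock. A minimal C sketch of the blocking pattern, assuming the mutex acquisition sits at the top of nfsd_shutdown_threads() as the trace offset suggests (only the lock site is taken from the report; the function body is illustrative, not the upstream source):

/*
 * Blocking pattern reconstructed from the stacks and lock dump above.
 * DEFINE_MUTEX creates one mutex shared by every net namespace, so a
 * single long-running holder stalls all concurrent nfsd umounts.
 */
static DEFINE_MUTEX(nfsd_mutex);

void nfsd_shutdown_threads(struct net *net)
{
	mutex_lock(&nfsd_mutex);	/* fs/nfsd/nfssvc.c:593 - hung umounters wait here */
	/* ... stop the nfsd service threads belonging to this namespace ... */
	mutex_unlock(&nfsd_mutex);
}

The waiters sleep uninterruptibly (state:D) while holding s_umount, so once the current nfsd_mutex owner stalls past the hung-task timeout (143 seconds here), every queued umounter is flagged by khungtaskd, which matches the repeated identical stacks in the lock dump.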

Crashes (2799):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/10/07 12:21 upstream c746c3b51698 8ef35d49 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/07 08:36 upstream c746c3b51698 8ef35d49 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/06 18:55 upstream fd94619c4336 91305dbe .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/06 16:51 upstream fd94619c4336 91305dbe .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/06 05:42 upstream 6a74422b9710 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/06 02:11 upstream 6a74422b9710 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/05 17:10 upstream 6093a688a07d 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/05 15:33 upstream 6093a688a07d 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/05 13:06 upstream 6093a688a07d 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/05 11:48 upstream 6093a688a07d 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/05 05:21 upstream cbf33b8e0b36 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/05 01:40 upstream cbf33b8e0b36 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/05 00:13 upstream cbf33b8e0b36 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/04 18:15 upstream cbf33b8e0b36 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/04 15:14 upstream 2ccb4d203fe4 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/04 09:17 upstream 2ccb4d203fe4 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/04 01:09 upstream e406d57be7bd 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/03 19:16 upstream e406d57be7bd 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/03 18:06 upstream e406d57be7bd 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/03 15:17 upstream e406d57be7bd 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/03 12:30 upstream f79e772258df 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/03 10:59 upstream f79e772258df 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/03 08:59 upstream f79e772258df 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/03 07:25 upstream f79e772258df 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/02 23:10 upstream 7f7072574127 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/02 18:42 upstream 7f7072574127 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/02 16:21 upstream 7f7072574127 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/02 11:07 upstream 7f7072574127 a1859138 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2025/10/01 19:20 upstream 50c19e20ed2e 3af39644 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/01 17:46 upstream 50c19e20ed2e 3af39644 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/10/01 16:07 upstream 50c19e20ed2e 3af39644 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/09/30 19:10 upstream 30d4efb2f5a5 65a0eece .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/09/29 16:29 upstream e5f0a698b34e 86341da6 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/09/28 23:09 upstream 8f9736633f8c 001c9061 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/09/28 16:16 upstream 51a24b7deaae 001c9061 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/09/28 14:39 upstream 51a24b7deaae 001c9061 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/09/28 10:36 upstream 51a24b7deaae 001c9061 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/09/28 07:15 upstream 51a24b7deaae 001c9061 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/09/27 23:30 upstream fec734e8d564 001c9061 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/09/27 19:51 upstream fec734e8d564 001c9061 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/09/27 17:38 upstream fec734e8d564 001c9061 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/09/27 16:22 upstream fec734e8d564 001c9061 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/09/27 11:17 upstream fec734e8d564 001c9061 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/09/26 16:33 upstream 4ff71af020ae 0abd0691 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/09/26 14:03 upstream 4ff71af020ae 0abd0691 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/09/26 10:44 upstream 4ff71af020ae 0abd0691 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/09/26 07:22 upstream 4ff71af020ae 0abd0691 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/09/25 17:16 upstream bf40f4b87761 0abd0691 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/09/12 11:36 upstream 02ffd6f89c50 e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in nfsd_umount
2025/08/17 12:17 upstream 99bade344cfa 1804e95e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in nfsd_umount
2024/07/06 12:12 upstream 1dd28064d416 bc4ebbb5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/07/03 04:33 upstream e9d22f7a6655 1ecfa2d8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/06/29 05:25 upstream 6c0483dbfe72 757f06b1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2025/10/02 00:59 linux-next 3b9b1f8df454 a1859138 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in nfsd_umount
2025/10/01 21:22 linux-next 3b9b1f8df454 a1859138 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in nfsd_umount
2025/09/11 05:39 linux-next 5f540c4aade9 fdeaa69b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-rust-kasan-gce INFO: task hung in nfsd_umount
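
For reference, ORIG_RAX 0xa6 in the register dumps above is __NR_umount2 on x86_64, so the hang is hit on the last unmount of the nfsd filesystem, conventionally mounted at /proc/fs/nfsd. A self-contained userspace illustration of that syscall path follows; it needs CAP_SYS_ADMIN plus an nfsd-enabled kernel, and on its own it does not reproduce the hang, which additionally requires a concurrent nfsd_mutex holder:

/* Illustrative only: drives the mount/umount path seen in the traces. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	if (mount("nfsd", "/proc/fs/nfsd", "nfsd", 0, NULL) != 0)
		perror("mount");
	/* umount2() is syscall 0xa6 (ORIG_RAX above); the final cleanup is
	 * deferred to task work on syscall exit, where cleanup_mnt() ->
	 * deactivate_super() -> nfsd_umount() -> nfsd_shutdown_threads()
	 * runs, exactly as the stacks show. */
	if (umount2("/proc/fs/nfsd", 0) != 0)
		perror("umount2");
	return 0;
}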