syzbot


INFO: task hung in nfsd_umount

Status: upstream: reported on 2024/07/07 04:37
Subsystems: nfs
Reported-by: syzbot+b568ba42c85a332a88ee@syzkaller.appspotmail.com
First crash: 366d, last: 1d13h
Discussions (1)
  Title: [syzbot] [nfs?] INFO: task hung in nfsd_umount
  Replies (including bot): 3 (4)
  Last reply: 2024/09/21 07:58

Sample crash report:
INFO: task syz-executor:10125 blocked for more than 143 seconds.
      Tainted: G     U    I         6.15.0-rc7-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:23656 pid:10125 tgid:10125 ppid:1      task_flags:0x400140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5382 [inline]
 __schedule+0x116f/0x5de0 kernel/sched/core.c:6767
 __schedule_loop kernel/sched/core.c:6845 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:6860
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6917
 __mutex_lock_common kernel/locking/mutex.c:678 [inline]
 __mutex_lock+0x6c7/0xb90 kernel/locking/mutex.c:746
 nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:596
 nfsd_umount+0x48/0xe0 fs/nfsd/nfsctl.c:1386
 deactivate_locked_super+0xbe/0x1a0 fs/super.c:473
 deactivate_super fs/super.c:506 [inline]
 deactivate_super+0xde/0x100 fs/super.c:502
 cleanup_mnt+0x225/0x450 fs/namespace.c:1431
 task_work_run+0x150/0x240 kernel/task_work.c:227
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:114 [inline]
 exit_to_user_mode_prepare include/linux/entry-common.h:329 [inline]
 __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
 syscall_exit_to_user_mode+0x27b/0x2a0 kernel/entry/common.c:218
 do_syscall_64+0xda/0x230 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f5a7bd8fc97
RSP: 002b:00007fff028b9a08 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
RAX: 0000000000000000 RBX: 00007f5a7be1089d RCX: 00007f5a7bd8fc97
RDX: 0000000000000000 RSI: 0000000000000009 RDI: 00007fff028b9ac0
RBP: 00007fff028b9ac0 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000ffffffff R11: 0000000000000246 R12: 00007fff028bab50
R13: 00007f5a7be1089d R14: 00000000000c0b74 R15: 00007fff028bab90
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/31:
 #0: ffffffff8e3bfa80 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e3bfa80 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8e3bfa80 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x36/0x1c0 kernel/locking/lockdep.c:6764
2 locks held by kworker/u8:5/1006:
 #0: ffff888147af4148 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
 #1: ffffc90003befd18 ((work_completion)(&(&nsim_dev->trap_data->trap_report_dw)->work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
1 lock held by syz-executor/5818:
3 locks held by kworker/1:4/5896:
 #0: ffff88801b480d48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3213
 #1: ffffc9000465fd18 ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3214
 #2: ffff888061d4d240 (&data->fib_lock){+.+.}-{4:4}, at: nsim_fib_event_work+0x1bb/0x2e80 drivers/net/netdevsim/fib.c:1490
6 locks held by kworker/0:2H/6272:
2 locks held by syz-executor/10125:
 #0: ffff88802964a0e0 (&type->s_umount_key#51){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff88802964a0e0 (&type->s_umount_key#51){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff88802964a0e0 (&type->s_umount_key#51){++++}-{4:4}, at: deactivate_super fs/super.c:505 [inline]
 #0: ffff88802964a0e0 (&type->s_umount_key#51){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:502
 #1: ffffffff8e7ceea8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:596
3 locks held by kworker/u8:12/10275:
2 locks held by syz.1.2191/15854:
 #0: ffffffff901cc510 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8e7ceea8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_listener_set_doit+0xdd/0x1a40 fs/nfsd/nfsctl.c:1923
2 locks held by syz.5.2272/16292:
 #0: ffff88802a2cc0e0 (&type->s_umount_key#51){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff88802a2cc0e0 (&type->s_umount_key#51){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff88802a2cc0e0 (&type->s_umount_key#51){++++}-{4:4}, at: deactivate_super fs/super.c:505 [inline]
 #0: ffff88802a2cc0e0 (&type->s_umount_key#51){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:502
 #1: ffffffff8e7ceea8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:596
2 locks held by syz.3.2488/17677:
 #0: ffff8880411800e0 (&type->s_umount_key#51){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff8880411800e0 (&type->s_umount_key#51){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff8880411800e0 (&type->s_umount_key#51){++++}-{4:4}, at: deactivate_super fs/super.c:505 [inline]
 #0: ffff8880411800e0 (&type->s_umount_key#51){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:502
 #1: ffffffff8e7ceea8 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:596
6 locks held by syz-executor/17898:
 #0: ffff888034256420 (sb_writers#9){.+.+}-{0:0}, at: ksys_write+0x12a/0x240 fs/read_write.c:736
 #1: ffff88807dbc8088 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x28f/0x510 fs/kernfs/file.c:325
 #2: ffffffff8e4194c8 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_lock include/linux/cgroup.h:369 [inline]
 #2: ffffffff8e4194c8 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0x116/0x520 kernel/cgroup/cgroup.c:1675
 #3: ffffffff8e262970 (cpu_hotplug_lock){++++}-{0:0}, at: cgroup_attach_lock kernel/cgroup/cgroup.c:2478 [inline]
 #3: ffffffff8e262970 (cpu_hotplug_lock){++++}-{0:0}, at: cgroup_procs_write_start+0x18e/0x660 kernel/cgroup/cgroup.c:2982
 #4: ffffffff8e419290 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: cgroup_attach_lock kernel/cgroup/cgroup.c:2480 [inline]
 #4: ffffffff8e419290 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: cgroup_attach_lock kernel/cgroup/cgroup.c:2476 [inline]
 #4: ffffffff8e419290 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: cgroup_procs_write_start+0x19a/0x660 kernel/cgroup/cgroup.c:2982
 #5: ffffffff8e3cafb8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x1a3/0x3c0 kernel/rcu/tree_exp.h:336
2 locks held by getty/18022:
 #0: ffff8880313090a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000bf932f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x41b/0x14f0 drivers/tty/n_tty.c:2222
2 locks held by syz.6.2560/18054:

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 31 Comm: khungtaskd Tainted: G     U    I         6.15.0-rc7-syzkaller #0 PREEMPT(full) 
Tainted: [U]=USER, [I]=FIRMWARE_WORKAROUND
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x27b/0x390 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x29c/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:274 [inline]
 watchdog+0xf70/0x12c0 kernel/hung_task.c:437
 kthread+0x3c2/0x780 kernel/kthread.c:464
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 5817 Comm: sshd-session Tainted: G     U    I         6.15.0-rc7-syzkaller #0 PREEMPT(full) 
Tainted: [U]=USER, [I]=FIRMWARE_WORKAROUND
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
RIP: 0010:lock_release+0x17/0x2f0 kernel/locking/lockdep.c:5874
Code: 0c eb 8c 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 41 57 41 56 41 55 41 54 49 89 f4 53 48 89 fb 48 83 ec 18 <65> 48 8b 05 e9 e6 0b 12 48 89 44 24 10 31 c0 0f 1f 44 00 00 65 8b
RSP: 0018:ffffc9000429f3c0 EFLAGS: 00000286
RAX: 0000000000000001 RBX: ffffffff8e3bfa80 RCX: ffffc900042a0001
RDX: 0000000000000000 RSI: ffffffff81699d54 RDI: ffffffff8e3bfa80
RBP: 0000000000000001 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000001 R11: 00000000000069bf R12: ffffffff81699d54
R13: ffffc9000429f4c8 R14: ffffc9000429f4c8 R15: ffffc9000429f4fc
FS:  0000000000000000(0000) GS:ffff888124ae7000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffe3263f960 CR3: 000000000e180000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 rcu_lock_release include/linux/rcupdate.h:341 [inline]
 rcu_read_unlock include/linux/rcupdate.h:871 [inline]
 class_rcu_destructor include/linux/rcupdate.h:1155 [inline]
 unwind_next_frame+0x3f9/0x20a0 arch/x86/kernel/unwind_orc.c:479
 __unwind_start+0x45f/0x7f0 arch/x86/kernel/unwind_orc.c:758
 unwind_start arch/x86/include/asm/unwind.h:64 [inline]
 arch_stack_walk+0x73/0x100 arch/x86/kernel/stacktrace.c:24
 stack_trace_save+0x8e/0xc0 kernel/stacktrace.c:122
 save_stack+0x160/0x1f0 mm/page_owner.c:156
 __reset_page_owner+0x84/0x1a0 mm/page_owner.c:308
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1258 [inline]
 free_unref_folios+0x999/0x1630 mm/page_alloc.c:2778
 folios_put_refs+0x56f/0x740 mm/swap.c:992
 free_pages_and_swap_cache+0x245/0x4a0 mm/swap_state.c:267
 __tlb_batch_free_encoded_pages+0xf9/0x290 mm/mmu_gather.c:136
 tlb_batch_pages_flush mm/mmu_gather.c:149 [inline]
 tlb_flush_mmu_free mm/mmu_gather.c:397 [inline]
 tlb_flush_mmu mm/mmu_gather.c:404 [inline]
 tlb_finish_mmu+0x168/0x7b0 mm/mmu_gather.c:496
 exit_mmap+0x403/0xb90 mm/mmap.c:1297
 __mmput+0x12a/0x410 kernel/fork.c:1380
 mmput+0x62/0x70 kernel/fork.c:1402
 exit_mm kernel/exit.c:589 [inline]
 do_exit+0x9d1/0x2c30 kernel/exit.c:940
 do_group_exit+0xd3/0x2a0 kernel/exit.c:1102
 __do_sys_exit_group kernel/exit.c:1113 [inline]
 __se_sys_exit_group kernel/exit.c:1111 [inline]
 __x64_sys_exit_group+0x3e/0x50 kernel/exit.c:1111
 x64_sys_call+0x1530/0x1730 arch/x86/include/generated/asm/syscalls_64.h:232
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0x230 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fb6490f16c5
Code: Unable to access opcode bytes at 0x7fb6490f169b.
RSP: 002b:00007ffcc9c021b8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
RAX: ffffffffffffffda RBX: 00000000000000ff RCX: 00007fb6490f16c5
RDX: 00000000000000e7 RSI: fffffffffffffea0 RDI: 00000000000000ff
RBP: 00007ffcc9c02300 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007fb6497711a0
R13: 00000000ffffffe3 R14: 00007ffcc9c02550 R15: 000000000000000a
 </TASK>

Crashes (2375):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/05/19 03:58 upstream a5806cd506af f41472b0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/18 08:43 upstream 5723cc3450bc f41472b0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/18 05:19 upstream 5723cc3450bc f41472b0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/17 13:13 upstream 172a9d94339c f41472b0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/17 02:50 upstream 3c21441eeffc f41472b0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/16 12:10 upstream fee3e843b309 cfde8269 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/16 02:32 upstream 088d13246a46 cfde8269 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/15 14:19 upstream 546bce579204 d6b2ee52 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in nfsd_umount
2025/05/15 08:19 upstream c94d59a126cb d6b2ee52 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/15 00:19 upstream c94d59a126cb d6b2ee52 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/14 06:54 upstream 405e6c37c89e 7344edeb .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/14 01:47 upstream 405e6c37c89e 7344edeb .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/13 19:05 upstream e9565e23cd89 9497799b .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/13 11:16 upstream e9565e23cd89 f6671af7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2025/05/12 21:52 upstream 627277ba7c23 f6671af7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/12 19:58 upstream 627277ba7c23 f6671af7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2025/05/11 14:30 upstream 3ce9925823c7 77908e5f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/09 21:38 upstream 3013c33dcbd9 bb813bcc .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2025/05/09 07:15 upstream 2c89c1b655c0 bb813bcc .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/08 17:39 upstream 2c89c1b655c0 dbf35fa1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in nfsd_umount
2025/05/07 23:11 upstream 707df3375124 dbf35fa1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2025/05/07 11:16 upstream 0d8d44db295c 350f4ffc .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/07 05:58 upstream 0d8d44db295c 350f4ffc .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/07 02:14 upstream 0d8d44db295c 350f4ffc .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/07 01:09 upstream 0d8d44db295c 350f4ffc .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/06 23:04 upstream 0d8d44db295c 350f4ffc .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2025/05/06 13:55 upstream 01f95500a162 ae98e6b9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2025/05/06 12:31 upstream 01f95500a162 ae98e6b9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/06 10:01 upstream 01f95500a162 ae98e6b9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/06 06:48 upstream 01f95500a162 ae98e6b9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/05 17:46 upstream 92a09c47464d 6ca47dd8 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/05 15:03 upstream 92a09c47464d 6ca47dd8 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/05 00:01 upstream e8ab83e34bdc b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/04 21:12 upstream e8ab83e34bdc b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/04 19:42 upstream e8ab83e34bdc b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/04 13:00 upstream 2a239ffbebb5 b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/04 07:57 upstream 2a239ffbebb5 b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/04 05:04 upstream 2a239ffbebb5 b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/04 01:03 upstream 2a239ffbebb5 b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2025/05/03 21:58 upstream 95d3481af6dc b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/03 20:36 upstream 95d3481af6dc b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/03 15:33 upstream 95d3481af6dc b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/03 13:46 upstream 95d3481af6dc b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in nfsd_umount
2025/05/03 10:37 upstream 95d3481af6dc b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2025/05/03 08:02 upstream 00b827f0cffa b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/02 19:10 upstream ebd297a2affa d7f099d1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/02 17:47 upstream ebd297a2affa d7f099d1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/02 14:45 upstream ebd297a2affa d7f099d1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/05/01 21:33 upstream 4f79eaa2ceac 51b137cd .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/07/06 12:12 upstream 1dd28064d416 bc4ebbb5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/07/03 04:33 upstream e9d22f7a6655 1ecfa2d8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/06/29 05:25 upstream 6c0483dbfe72 757f06b1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/10/08 11:13 linux-next 33ce24234fca 402f1df0 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in nfsd_umount
* Struck through repros no longer work on HEAD.