syzbot


INFO: task hung in nfsd_umount

Status: upstream: reported on 2024/07/07 04:37
Subsystems: nfs
Reported-by: syzbot+b568ba42c85a332a88ee@syzkaller.appspotmail.com
First crash: 312d, last: 1d21h
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [nfs?] INFO: task hung in nfsd_umount | 3 (4) | 2024/09/21 07:58

Sample crash report:
INFO: task syz-executor:29069 blocked for more than 143 seconds.
      Not tainted 6.14.0-syzkaller-01103-g2df0c02dab82 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:20800 pid:29069 tgid:29069 ppid:1      task_flags:0x400140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5367 [inline]
 __schedule+0x1ac3/0x5090 kernel/sched/core.c:6748
 __schedule_loop kernel/sched/core.c:6825 [inline]
 schedule+0x163/0x360 kernel/sched/core.c:6840
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6897
 __mutex_lock_common kernel/locking/mutex.c:664 [inline]
 __mutex_lock+0x7fa/0x1000 kernel/locking/mutex.c:732
 nfsd_shutdown_threads+0x4e/0xd0 fs/nfsd/nfssvc.c:596
 nfsd_umount+0x43/0xd0 fs/nfsd/nfsctl.c:1386
 deactivate_locked_super+0xc4/0x130 fs/super.c:473
 cleanup_mnt+0x422/0x4c0 fs/namespace.c:1435
 task_work_run+0x251/0x310 kernel/task_work.c:227
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:114 [inline]
 exit_to_user_mode_prepare include/linux/entry-common.h:329 [inline]
 __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
 syscall_exit_to_user_mode+0x13f/0x340 kernel/entry/common.c:218
 do_syscall_64+0x100/0x230 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fd2d438e497
RSP: 002b:00007fffeff5e958 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
RAX: 0000000000000000 RBX: 00007fd2d440e08c RCX: 00007fd2d438e497
RDX: 0000000000000000 RSI: 0000000000000009 RDI: 00007fffeff5ea10
RBP: 00007fffeff5ea10 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000ffffffff R11: 0000000000000246 R12: 00007fffeff5faa0
R13: 00007fd2d440e08c R14: 000000000023865d R15: 00007fffeff5fae0
 </TASK>
INFO: task syz.5.7394:31384 blocked for more than 143 seconds.
      Not tainted 6.14.0-syzkaller-01103-g2df0c02dab82 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.5.7394      state:D stack:27944 pid:31384 tgid:31383 ppid:26755  task_flags:0x400140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5367 [inline]
 __schedule+0x1ac3/0x5090 kernel/sched/core.c:6748
 __schedule_loop kernel/sched/core.c:6825 [inline]
 schedule+0x163/0x360 kernel/sched/core.c:6840
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6897
 rwsem_down_read_slowpath kernel/locking/rwsem.c:1084 [inline]
 __down_read_common kernel/locking/rwsem.c:1248 [inline]
 __down_read kernel/locking/rwsem.c:1261 [inline]
 down_read+0x74c/0xae0 kernel/locking/rwsem.c:1526
 __super_lock fs/super.c:58 [inline]
 super_lock+0x27c/0x400 fs/super.c:120
 super_lock_shared fs/super.c:139 [inline]
 iterate_supers+0x8c/0x190 fs/super.c:931
 ksys_sync+0xc2/0x1d0 fs/sync.c:102
 __do_sys_sync+0xe/0x20 fs/sync.c:113
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f4e9938d169
RSP: 002b:00007f4e9a1ef038 EFLAGS: 00000246 ORIG_RAX: 00000000000000a2
RAX: ffffffffffffffda RBX: 00007f4e995a5fa0 RCX: 00007f4e9938d169
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: 00007f4e995a5fa0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f4e995a5fa0 R15: 00007fffdfde2938
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/31:
 #0: ffffffff8eb3a760 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8eb3a760 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8eb3a760 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x30/0x180 kernel/locking/lockdep.c:6761
2 locks held by getty/5578:
 #0: ffff88814e14d0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002fde2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x53d/0x16b0 drivers/tty/n_tty.c:2211
3 locks held by kworker/0:3/21899:
 #0: ffff88801ac80d48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801ac80d48 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x990/0x18e0 kernel/workqueue.c:3319
 #1: ffffc900049b7c60 ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc900049b7c60 ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at: process_scheduled_works+0x9cb/0x18e0 kernel/workqueue.c:3319
 #2: ffff8880662d8240 (&data->fib_lock){+.+.}-{4:4}, at: nsim_fib_event_work+0x316/0x3f10 drivers/net/netdevsim/fib.c:1490
6 locks held by kworker/0:8/23060:
 #0: ffff888020ae9548 ((wq_completion)usb_hub_wq){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff888020ae9548 ((wq_completion)usb_hub_wq){+.+.}-{0:0}, at: process_scheduled_works+0x990/0x18e0 kernel/workqueue.c:3319
 #1: ffffc9000c9a7c60 ((work_completion)(&hub->events)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000c9a7c60 ((work_completion)(&hub->events)){+.+.}-{0:0}, at: process_scheduled_works+0x9cb/0x18e0 kernel/workqueue.c:3319
 #2: ffff888145f07190 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:1030 [inline]
 #2: ffff888145f07190 (&dev->mutex){....}-{4:4}, at: hub_event+0x200/0x50f0 drivers/usb/core/hub.c:5861
 #3: ffff8880286e5510 (&port_dev->status_lock){+.+.}-{4:4}, at: usb_lock_port drivers/usb/core/hub.c:3220 [inline]
 #3: ffff8880286e5510 (&port_dev->status_lock){+.+.}-{4:4}, at: hub_port_connect drivers/usb/core/hub.c:5430 [inline]
 #3: ffff8880286e5510 (&port_dev->status_lock){+.+.}-{4:4}, at: hub_port_connect_change drivers/usb/core/hub.c:5673 [inline]
 #3: ffff8880286e5510 (&port_dev->status_lock){+.+.}-{4:4}, at: port_event drivers/usb/core/hub.c:5833 [inline]
 #3: ffff8880286e5510 (&port_dev->status_lock){+.+.}-{4:4}, at: hub_event+0x2494/0x50f0 drivers/usb/core/hub.c:5915
 #4: ffff8880283b8268 (hcd->address0_mutex){+.+.}-{4:4}, at: hub_port_connect drivers/usb/core/hub.c:5431 [inline]
 #4: ffff8880283b8268 (hcd->address0_mutex){+.+.}-{4:4}, at: hub_port_connect_change drivers/usb/core/hub.c:5673 [inline]
 #4: ffff8880283b8268 (hcd->address0_mutex){+.+.}-{4:4}, at: port_event drivers/usb/core/hub.c:5833 [inline]
 #4: ffff8880283b8268 (hcd->address0_mutex){+.+.}-{4:4}, at: hub_event+0x24cd/0x50f0 drivers/usb/core/hub.c:5915
 #5: ffffc90000007bc0 ((&timer.timer)){+.-.}-{0:0}, at: call_timer_fn+0xc2/0x650 kernel/time/timer.c:1786
2 locks held by syz.8.6635/28866:
 #0: ffffffff8ff28150 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8ee07868 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_listener_set_doit+0x146/0x1ae0 fs/nfsd/nfsctl.c:1922
2 locks held by syz-executor/29069:
 #0: ffff88805a4f60e0 (&type->s_umount_key#100){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff88805a4f60e0 (&type->s_umount_key#100){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff88805a4f60e0 (&type->s_umount_key#100){++++}-{4:4}, at: deactivate_super+0xb5/0xf0 fs/super.c:505
 #1: ffffffff8ee07868 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x4e/0xd0 fs/nfsd/nfssvc.c:596
3 locks held by kworker/u8:10/29401:
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x990/0x18e0 kernel/workqueue.c:3319
 #1: ffffc90004797c60 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90004797c60 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9cb/0x18e0 kernel/workqueue.c:3319
 #2: ffffffff8fec3e48 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:285
3 locks held by kworker/u8:13/29408:
 #0: ffff88814da57148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88814da57148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x990/0x18e0 kernel/workqueue.c:3319
 #1: ffffc900047c7c60 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc900047c7c60 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9cb/0x18e0 kernel/workqueue.c:3319
 #2: ffffffff8fec3e48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:129 [inline]
 #2: ffffffff8fec3e48 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0x110/0x16a0 net/ipv6/addrconf.c:4190
2 locks held by kworker/u8:16/29415:
 #0: ffff8880b8639958 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:595
 #1: ffffc900047ffc60 ((work_completion)(&(&bat_priv->nc.work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc900047ffc60 ((work_completion)(&(&bat_priv->nc.work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9cb/0x18e0 kernel/workqueue.c:3319
1 lock held by syz.5.7394/31384:
 #0: ffff88805a4f60e0 (&type->s_umount_key#100){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff88805a4f60e0 (&type->s_umount_key#100){++++}-{4:4}, at: super_lock+0x27c/0x400 fs/super.c:120
2 locks held by syz-executor/500:
 #0: ffffffff8fec3e48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8fec3e48 (rtnl_mutex){+.+.}-{4:4}, at: rtnetlink_rcv_msg+0x767/0xd70 net/core/rtnetlink.c:6918
 #1: ffffffff8eb3fc78 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:304 [inline]
 #1: ffffffff8eb3fc78 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x384/0x830 kernel/rcu/tree_exp.h:998
3 locks held by syz.6.7916/671:
 #0: ffff888067731868 (&pipe->mutex){+.+.}-{4:4}, at: splice_file_to_pipe+0x2e/0x500 fs/splice.c:1286
 #1: ffff88805516a0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #2: ffffc900034552f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x53d/0x16b0 drivers/tty/n_tty.c:2211

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.14.0-syzkaller-01103-g2df0c02dab82 #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x4ab/0x4e0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:236 [inline]
 watchdog+0x1058/0x10a0 kernel/hung_task.c:399
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 23060 Comm: kworker/0:8 Not tainted 6.14.0-syzkaller-01103-g2df0c02dab82 #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Workqueue: usb_hub_wq hub_event
RIP: 0010:get_random_u32+0x38d/0xab0 drivers/char/random.c:553
Code: 00 00 4c 01 eb 48 89 d8 48 c1 e8 03 49 be 00 00 00 00 00 fc ff df 42 0f b6 04 30 84 c0 48 8b 54 24 10 0f 85 5c 04 00 00 8b 1b <48> 8b 44 24 28 42 0f b6 04 30 84 c0 0f 85 6a 04 00 00 89 9c 24 00
RSP: 0018:ffffc900000079a0 EFLAGS: 00000046
RAX: 0000000000000000 RBX: 000000006df08baf RCX: ffff88801cb30000
RDX: ffff8880b8635d28 RSI: 0000000000000007 RDI: 0000000000000018
RBP: ffffc90000007b20 R08: ffffffff85871ed5 R09: 0000000000000001
R10: 0000000000000000 R11: 0000000000000000 R12: 1ffff110170c6ba5
R13: ffff8880b8635c90 R14: dffffc0000000000 R15: 0000000000000007
FS:  0000000000000000(0000) GS:ffff88812525a000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f9d88979178 CR3: 000000003463a000 CR4: 00000000003526f0
DR0: 0000000000000001 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <IRQ>
 __get_random_u32_below+0x14/0x90 drivers/char/random.c:567
 get_random_u32_below include/linux/random.h:63 [inline]
 mrp_join_timer_arm net/802/mrp.c:596 [inline]
 mrp_join_timer+0x116/0x180 net/802/mrp.c:612
 call_timer_fn+0x189/0x650 kernel/time/timer.c:1789
 expire_timers kernel/time/timer.c:1840 [inline]
 __run_timers kernel/time/timer.c:2414 [inline]
 __run_timer_base+0x66e/0x8e0 kernel/time/timer.c:2426
 run_timer_base kernel/time/timer.c:2435 [inline]
 run_timer_softirq+0xb7/0x170 kernel/time/timer.c:2445
 handle_softirqs+0x2d6/0x9b0 kernel/softirq.c:561
 __do_softirq kernel/softirq.c:595 [inline]
 invoke_softirq kernel/softirq.c:435 [inline]
 __irq_exit_rcu+0xfb/0x220 kernel/softirq.c:662
 irq_exit_rcu+0x9/0x30 kernel/softirq.c:678
 instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1049 [inline]
 sysvec_apic_timer_interrupt+0xa6/0xc0 arch/x86/kernel/apic/apic.c:1049
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
RIP: 0010:console_trylock_spinning kernel/printk/printk.c:2061 [inline]
RIP: 0010:vprintk_emit+0x702/0xa10 kernel/printk/printk.c:2431
Code: 00 e8 22 5d 21 00 4c 8d bc 24 a0 00 00 00 4d 85 e4 75 07 e8 10 5d 21 00 eb 06 e8 09 5d 21 00 fb 49 bc 00 00 00 00 00 fc ff df <48> c7 c7 00 63 a1 8e 31 f6 ba 01 00 00 00 31 c9 41 b8 01 00 00 00
RSP: 0018:ffffc9000c9a7260 EFLAGS: 00000287
RAX: ffffffff81a1f527 RBX: 0000000000000000 RCX: 0000000000100000
RDX: ffffc90019f48000 RSI: 0000000000044a7a RDI: 0000000000044a7b
RBP: ffffc9000c9a7370 R08: ffffffff81a1f500 R09: 1ffffffff2077a6e
R10: dffffc0000000000 R11: fffffbfff2077a6f R12: dffffc0000000000
R13: 1ffff92001934e50 R14: ffffffff81a1f362 R15: ffffc9000c9a7300
 dev_vprintk_emit+0x358/0x420 drivers/base/core.c:4891
 dev_printk_emit+0xdf/0x130 drivers/base/core.c:4902
 _dev_err+0x12d/0x180 drivers/base/core.c:4957
 hub_port_init+0x1d4a/0x26f0 drivers/usb/core/hub.c:5082
 hub_port_connect drivers/usb/core/hub.c:5462 [inline]
 hub_port_connect_change drivers/usb/core/hub.c:5673 [inline]
 port_event drivers/usb/core/hub.c:5833 [inline]
 hub_event+0x281c/0x50f0 drivers/usb/core/hub.c:5915
 process_one_work kernel/workqueue.c:3238 [inline]
 process_scheduled_works+0xac3/0x18e0 kernel/workqueue.c:3319
 worker_thread+0x870/0xd30 kernel/workqueue.c:3400
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>

Crashes (2232):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/03/26 09:35 upstream 2df0c02dab82 89d30d73 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2025/03/26 07:09 upstream 2df0c02dab82 89d30d73 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/26 05:04 upstream 2df0c02dab82 89d30d73 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/25 03:39 upstream 38fec10eb60d 875573af .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/24 13:06 upstream 586de92313fc 875573af .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/24 10:46 upstream 586de92313fc 875573af .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/24 09:44 upstream 586de92313fc 875573af .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/24 08:12 upstream 586de92313fc 875573af .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/23 15:52 upstream 586de92313fc 4e8d3850 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2025/03/23 12:21 upstream 183601b78a9b 4e8d3850 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/23 12:15 upstream 183601b78a9b 4e8d3850 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/23 08:55 upstream 183601b78a9b 4e8d3850 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/22 18:53 upstream 88d324e69ea9 c6512ef7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2025/03/22 17:39 upstream 88d324e69ea9 c6512ef7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/22 16:18 upstream 88d324e69ea9 c6512ef7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/22 12:18 upstream 88d324e69ea9 c6512ef7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/20 10:02 upstream a7f2e10ecd8f 3b7445cf .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2025/03/19 03:55 upstream 76b6905c11fd 22a6c2b1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/19 02:52 upstream 76b6905c11fd 22a6c2b1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/17 22:11 upstream 4701f33a1070 ce3352cd .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2025/03/17 15:17 upstream 4701f33a1070 948c34e4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/16 18:07 upstream cb82ca153949 e2826670 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/16 13:59 upstream eb88e6bfbc0a e2826670 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/16 10:33 upstream eb88e6bfbc0a e2826670 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/16 01:50 upstream eb88e6bfbc0a e2826670 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/16 00:15 upstream 3571e8b091f4 e2826670 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/15 08:58 upstream 83158b21ae9a e2826670 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/15 07:51 upstream 83158b21ae9a e2826670 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/15 04:43 upstream 83158b21ae9a e2826670 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/15 02:43 upstream 83158b21ae9a e2826670 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/14 15:50 upstream e3a854b577cb e2826670 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/14 05:09 upstream 4003c9e78778 e2826670 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/13 23:48 upstream 4003c9e78778 e2826670 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/13 16:53 upstream b7f94fcf5546 44be8b44 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/13 05:04 upstream b7f94fcf5546 1a5d9317 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/13 03:53 upstream b7f94fcf5546 1a5d9317 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/12 17:30 upstream 0fed89a961ea ee70e6db .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/12 12:43 upstream 0fed89a961ea ee70e6db .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/12 06:07 upstream 0fed89a961ea ee70e6db .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in nfsd_umount
2025/03/11 18:53 upstream 0b46b049d6ec f2eee6b3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2025/03/11 16:32 upstream 4d872d51bc9d f2eee6b3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2025/03/11 05:50 upstream 4d872d51bc9d 16256247 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/11 03:41 upstream 4d872d51bc9d 16256247 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/10 19:50 upstream 80e54e84911a 16256247 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2025/03/10 15:39 upstream 80e54e84911a 16256247 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/12/02 16:20 upstream e70140ba0d2b bb326ffb .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in nfsd_umount
2024/07/06 12:12 upstream 1dd28064d416 bc4ebbb5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/07/03 04:33 upstream e9d22f7a6655 1ecfa2d8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/06/29 05:25 upstream 6c0483dbfe72 757f06b1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/10/08 11:13 linux-next 33ce24234fca 402f1df0 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in nfsd_umount
* Struck through repros no longer work on HEAD.