syzbot


INFO: task hung in nfsd_umount

Status: upstream: reported on 2024/07/07 04:37
Subsystems: nfs
Reported-by: syzbot+b568ba42c85a332a88ee@syzkaller.appspotmail.com
First crash: 185d, last: now
Discussions (1)
Title: [syzbot] [nfs?] INFO: task hung in nfsd_umount
Replies (including bot): 3 (4)
Last reply: 2024/09/21 07:58

Sample crash report:
INFO: task syz-executor:5844 blocked for more than 143 seconds.
      Not tainted 6.12.0-syzkaller-01892-g8f7c8b88bda4 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:23760 pid:5844  tgid:5844  ppid:1      flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0xe5a/0x5ae0 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0xe7/0x350 kernel/sched/core.c:6848
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6905
 __mutex_lock_common kernel/locking/mutex.c:665 [inline]
 __mutex_lock+0x62b/0xa60 kernel/locking/mutex.c:735
 nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:625
 nfsd_umount+0x48/0xe0 fs/nfsd/nfsctl.c:1428
 deactivate_locked_super+0xbe/0x1a0 fs/super.c:473
 deactivate_super+0xde/0x100 fs/super.c:506
 cleanup_mnt+0x222/0x450 fs/namespace.c:1373
 task_work_run+0x14e/0x250 kernel/task_work.c:239
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:114 [inline]
 exit_to_user_mode_prepare include/linux/entry-common.h:329 [inline]
 __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
 syscall_exit_to_user_mode+0x27b/0x2a0 kernel/entry/common.c:218
 do_syscall_64+0xda/0x250 arch/x86/entry/common.c:89
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fd65057fb47
RSP: 002b:00007fff5b8998b8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007fd65057fb47
RDX: 0000000000000000 RSI: 0000000000000009 RDI: 00007fff5b899970
RBP: 00007fff5b899970 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000ffffffff R11: 0000000000000246 R12: 00007fff5b89a9f0
R13: 00007fd6505f15fc R14: 000000000003b41e R15: 00007fff5b89aa30
 </TASK>
INFO: task syz-executor:7056 blocked for more than 143 seconds.
      Not tainted 6.12.0-syzkaller-01892-g8f7c8b88bda4 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:24032 pid:7056  tgid:7056  ppid:1      flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0xe5a/0x5ae0 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0xe7/0x350 kernel/sched/core.c:6848
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6905
 __mutex_lock_common kernel/locking/mutex.c:665 [inline]
 __mutex_lock+0x62b/0xa60 kernel/locking/mutex.c:735
 nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:625
 nfsd_umount+0x48/0xe0 fs/nfsd/nfsctl.c:1428
 deactivate_locked_super+0xbe/0x1a0 fs/super.c:473
 deactivate_super+0xde/0x100 fs/super.c:506
 cleanup_mnt+0x222/0x450 fs/namespace.c:1373
 task_work_run+0x14e/0x250 kernel/task_work.c:239
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:114 [inline]
 exit_to_user_mode_prepare include/linux/entry-common.h:329 [inline]
 __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
 syscall_exit_to_user_mode+0x27b/0x2a0 kernel/entry/common.c:218
 do_syscall_64+0xda/0x250 arch/x86/entry/common.c:89
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fd77357fb47
RSP: 002b:00007ffe44f38ed8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007fd77357fb47
RDX: 0000000000000000 RSI: 0000000000000009 RDI: 00007ffe44f38f90
RBP: 00007ffe44f38f90 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000ffffffff R11: 0000000000000246 R12: 00007ffe44f3a010
R13: 00007fd7735f15fc R14: 000000000003c3ac R15: 00007ffe44f3a050
 </TASK>
INFO: task syz.4.618:9336 blocked for more than 144 seconds.
      Not tainted 6.12.0-syzkaller-01892-g8f7c8b88bda4 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.618       state:D stack:24944 pid:9336  tgid:9335  ppid:6568   flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0xe5a/0x5ae0 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0xe7/0x350 kernel/sched/core.c:6848
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6905
 __mutex_lock_common kernel/locking/mutex.c:665 [inline]
 __mutex_lock+0x62b/0xa60 kernel/locking/mutex.c:735
 nfsd_nl_threads_set_doit+0x694/0xbe0 fs/nfsd/nfsctl.c:1671
 genl_family_rcv_msg_doit+0x202/0x2f0 net/netlink/genetlink.c:1115
 genl_family_rcv_msg net/netlink/genetlink.c:1195 [inline]
 genl_rcv_msg+0x565/0x800 net/netlink/genetlink.c:1210
 netlink_rcv_skb+0x165/0x410 net/netlink/af_netlink.c:2541
 genl_rcv+0x28/0x40 net/netlink/genetlink.c:1219
 netlink_unicast_kernel net/netlink/af_netlink.c:1321 [inline]
 netlink_unicast+0x53c/0x7f0 net/netlink/af_netlink.c:1347
 netlink_sendmsg+0x8b8/0xd70 net/netlink/af_netlink.c:1891
 sock_sendmsg_nosec net/socket.c:711 [inline]
 __sock_sendmsg net/socket.c:726 [inline]
 ____sys_sendmsg+0x9ae/0xb40 net/socket.c:2581
 ___sys_sendmsg+0x135/0x1e0 net/socket.c:2635
 __sys_sendmsg+0x16e/0x220 net/socket.c:2667
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f4ec457e819
RSP: 002b:00007f4ec543f038 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007f4ec4735fa0 RCX: 00007f4ec457e819
RDX: 00000000000000c4 RSI: 0000000020001580 RDI: 0000000000000003
RBP: 00007f4ec45f175e R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f4ec4735fa0 R15: 00007ffed090a768
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/30:
 #0: ffffffff8ddba680 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8ddba680 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8ddba680 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x7f/0x390 kernel/locking/lockdep.c:6744
3 locks held by kworker/0:2/1199:
2 locks held by getty/5607:
 #0: ffff8880313df0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc900032332f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0xfba/0x1480 drivers/tty/n_tty.c:2211
2 locks held by syz-executor/5844:
 #0: ffff88806d02c0e0 (&type->s_umount_key#49){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff88806d02c0e0 (&type->s_umount_key#49){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff88806d02c0e0 (&type->s_umount_key#49){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:505
 #1: ffffffff8e1d8868 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:625
3 locks held by kworker/0:4/5881:
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x1212/0x1b30 kernel/workqueue.c:3204
 #1: ffffc90003a0fd80 (free_ipc_work){+.+.}-{0:0}, at: process_one_work+0x8bb/0x1b30 kernel/workqueue.c:3205
 #2: ffffffff8ddc5fb8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x282/0x3b0 kernel/rcu/tree_exp.h:297
3 locks held by kworker/1:5/5901:
2 locks held by syz-executor/7056:
 #0: ffff888033b8a0e0 (&type->s_umount_key#49){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff888033b8a0e0 (&type->s_umount_key#49){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff888033b8a0e0 (&type->s_umount_key#49){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:505
 #1: ffffffff8e1d8868 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:625
3 locks held by kworker/u8:25/8248:
3 locks held by kworker/u8:32/8257:
2 locks held by syz.5.606/9292:
 #0: ffffffff8fb6c890 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8e1d8868 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_threads_set_doit+0x694/0xbe0 fs/nfsd/nfsctl.c:1671
2 locks held by syz.4.618/9336:
 #0: ffffffff8fb6c890 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8e1d8868 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_threads_set_doit+0x694/0xbe0 fs/nfsd/nfsctl.c:1671
2 locks held by syz-executor/9388:
 #0: ffff88805e1e40e0 (&type->s_umount_key#49){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff88805e1e40e0 (&type->s_umount_key#49){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff88805e1e40e0 (&type->s_umount_key#49){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:505
 #1: ffffffff8e1d8868 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:625
2 locks held by syz-executor/9409:
 #0: ffff888029f8a0e0 (&type->s_umount_key#49){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff888029f8a0e0 (&type->s_umount_key#49){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff888029f8a0e0 (&type->s_umount_key#49){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:505
 #1: ffffffff8e1d8868 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:625
2 locks held by syz-executor/9440:
 #0: ffff888034f320e0 (&type->s_umount_key#49){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff888034f320e0 (&type->s_umount_key#49){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff888034f320e0 (&type->s_umount_key#49){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:505
 #1: ffffffff8e1d8868 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:625
2 locks held by syz-executor/9469:
 #0: ffff88808cd060e0 (&type->s_umount_key#49){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff88808cd060e0 (&type->s_umount_key#49){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff88808cd060e0 (&type->s_umount_key#49){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:505
 #1: ffffffff8e1d8868 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:625
2 locks held by syz-executor/10791:
 #0: ffff8880a1f740e0 (&type->s_umount_key#49){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff8880a1f740e0 (&type->s_umount_key#49){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff8880a1f740e0 (&type->s_umount_key#49){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:505
 #1: ffffffff8e1d8868 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:625
2 locks held by syz-executor/10923:
 #0: ffff8880909aa0e0 (&type->s_umount_key#49){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff8880909aa0e0 (&type->s_umount_key#49){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff8880909aa0e0 (&type->s_umount_key#49){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:505
 #1: ffffffff8e1d8868 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:625
2 locks held by syz.3.810/11079:
 #0: ffffffff8fb6c890 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8e1d8868 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_listener_set_doit+0xe3/0x1b40 fs/nfsd/nfsctl.c:1964
2 locks held by syz-executor/11416:
 #0: ffff88809520a0e0 (&type->s_umount_key#49){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff88809520a0e0 (&type->s_umount_key#49){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff88809520a0e0 (&type->s_umount_key#49){++++}-{4:4}, at: deactivate_super+0xd6/0x100 fs/super.c:505
 #1: ffffffff8e1d8868 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:625
2 locks held by syz.6.927/11962:
 #0: ffffffff8fb6c890 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8e1d8868 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_threads_set_doit+0x694/0xbe0 fs/nfsd/nfsctl.c:1671
1 lock held by syz.5.976/12200:
4 locks held by syz-executor/12203:
 #0: ffff8880a82bcd80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close+0x26/0x90 net/bluetooth/hci_core.c:481
 #1: ffff8880a82bc078 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x34c/0x1260 net/bluetooth/hci_sync.c:5193
 #2: ffffffff8fd34308 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:1972 [inline]
 #2: ffffffff8fd34308 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_hash_flush+0xc4/0x260 net/bluetooth/hci_conn.c:2592
 #3: ffffffff8ddc5fb8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x1a4/0x3b0 kernel/rcu/tree_exp.h:329
2 locks held by syz.5.983/12232:

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 30 Comm: khungtaskd Not tainted 6.12.0-syzkaller-01892-g8f7c8b88bda4 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/30/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x27b/0x390 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x29c/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:223 [inline]
 watchdog+0xf0c/0x1240 kernel/hung_task.c:379
 kthread+0x2c1/0x3a0 kernel/kthread.c:389
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 6565 Comm: kworker/u8:19 Not tainted 6.12.0-syzkaller-01892-g8f7c8b88bda4 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/30/2024
Workqueue: events_unbound nsim_dev_trap_report_work
RIP: 0010:hlock_class+0x0/0x130 kernel/locking/lockdep.c:223
Code: df e8 a4 1a 85 00 e9 c9 fe ff ff e8 9a 1a 85 00 e9 95 fe ff ff 0f 1f 44 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 <48> b8 00 00 00 00 00 fc ff df 53 48 89 fb 48 83 c7 20 48 89 fa 48
RSP: 0018:ffffc9000593f928 EFLAGS: 00000002
RAX: 0000000000000000 RBX: ffff88802db6c730 RCX: 0000000000000006
RDX: dffffc0000000000 RSI: ffff88802db6c730 RDI: ffff88802db6c730
RBP: ffffc9000593fa68 R08: 0000000000000000 R09: fffffbfff2d355b6
R10: ffffffff969aadb7 R11: 000000000000004f R12: ffff88802db6bc00
R13: 0000000000000040 R14: 0000000000000006 R15: 1ffff92000b27f2c
FS:  0000000000000000(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffd38b8efe8 CR3: 000000000db7e000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 mark_lock+0xb5/0xc60 kernel/locking/lockdep.c:4727
 mark_held_locks+0x9f/0xe0 kernel/locking/lockdep.c:4321
 __trace_hardirqs_on_caller kernel/locking/lockdep.c:4347 [inline]
 lockdep_hardirqs_on_prepare+0x27a/0x420 kernel/locking/lockdep.c:4406
 trace_hardirqs_on+0x36/0x40 kernel/trace/trace_preemptirq.c:61
 kasan_quarantine_put+0x10a/0x240 mm/kasan/quarantine.c:234
 kasan_slab_free include/linux/kasan.h:230 [inline]
 slab_free_hook mm/slub.c:2342 [inline]
 slab_free mm/slub.c:4579 [inline]
 kfree+0x14f/0x4b0 mm/slub.c:4727
 skb_kfree_head net/core/skbuff.c:1086 [inline]
 skb_free_head+0x108/0x1d0 net/core/skbuff.c:1098
 skb_release_data+0x560/0x730 net/core/skbuff.c:1125
 skb_release_all net/core/skbuff.c:1190 [inline]
 __kfree_skb net/core/skbuff.c:1204 [inline]
 consume_skb net/core/skbuff.c:1436 [inline]
 consume_skb+0xbf/0x100 net/core/skbuff.c:1430
 nsim_dev_trap_report drivers/net/netdevsim/dev.c:821 [inline]
 nsim_dev_trap_report_work+0x878/0xc90 drivers/net/netdevsim/dev.c:851
 process_one_work+0x958/0x1b30 kernel/workqueue.c:3229
 process_scheduled_works kernel/workqueue.c:3310 [inline]
 worker_thread+0x6c8/0xf00 kernel/workqueue.c:3391
 kthread+0x2c1/0x3a0 kernel/kthread.c:389
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
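Every hung task in the report above is blocked in __mutex_lock on the same global nfsd_mutex: umount paths reach it via nfsd_umount -> nfsd_shutdown_threads, while netlink paths reach it via nfsd_nl_threads_set_doit. Once one holder fails to release it, every later acquirer parks in state D until the hung-task watchdog fires. As a rough userspace analogue of that serialization pattern (hypothetical names, a sketch only, not the kernel code):

```python
import threading
import time

# Stand-in for the kernel's single global nfsd_mutex.
nfsd_mutex = threading.Lock()

def stuck_holder():
    # Analogue of a task that takes nfsd_mutex and does not release it
    # promptly (here simulated with a sleep).
    with nfsd_mutex:
        time.sleep(1.0)

t = threading.Thread(target=stuck_holder)
t.start()
time.sleep(0.1)  # let the holder acquire the lock first

# Analogue of the umount path (nfsd_shutdown_threads): it blocks on the
# same mutex. With a genuinely stuck holder this wait never ends, and the
# kernel's khungtaskd reports the task after hung_task_timeout_secs.
got_it = nfsd_mutex.acquire(timeout=0.2)
print("umount path acquired nfsd_mutex:", got_it)
t.join()
```

In the real report the holder never returns, so the acquirers stay in uninterruptible sleep and khungtaskd dumps their stacks after 143 seconds.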

Crashes (974):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2024/11/21 06:14 upstream 8f7c8b88bda4 4b25d554 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/21 04:44 upstream 8f7c8b88bda4 4b25d554 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/21 03:34 upstream 8f7c8b88bda4 4b25d554 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/21 02:28 upstream 8f7c8b88bda4 4b25d554 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/21 01:55 upstream 8f7c8b88bda4 4b25d554 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/20 22:55 upstream 8f7c8b88bda4 4fca1650 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/20 21:33 upstream 8f7c8b88bda4 4fca1650 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/20 20:27 upstream 8f7c8b88bda4 4fca1650 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/20 19:20 upstream 8f7c8b88bda4 4fca1650 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/20 18:12 upstream bf9aa14fc523 4fca1650 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/20 16:19 upstream bf9aa14fc523 4fca1650 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/20 15:44 upstream bf9aa14fc523 4fca1650 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/20 11:36 upstream bf9aa14fc523 7d02db5a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/20 08:08 upstream bf9aa14fc523 7d02db5a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/20 06:29 upstream bf9aa14fc523 7d02db5a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/20 03:56 upstream 158f238aa69d 7d02db5a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/20 02:23 upstream 158f238aa69d 7d02db5a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/20 00:27 upstream 158f238aa69d 7d02db5a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/19 21:53 upstream 158f238aa69d 7d02db5a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/19 18:57 upstream 158f238aa69d 7d02db5a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/19 13:41 upstream 158f238aa69d 571351cb .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/19 11:19 upstream 9fb2cfa4635a 571351cb .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/19 09:44 upstream 9fb2cfa4635a 571351cb .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/19 07:44 upstream 9fb2cfa4635a 571351cb .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/19 05:41 upstream 9fb2cfa4635a 571351cb .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/19 04:18 upstream 9fb2cfa4635a 571351cb .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/19 03:06 upstream 9fb2cfa4635a 571351cb .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/19 01:45 upstream 9fb2cfa4635a 571351cb .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/19 00:40 upstream 9fb2cfa4635a 571351cb .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/18 22:36 upstream 9fb2cfa4635a 571351cb .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/18 20:28 upstream adc218676eef e7bb5d6e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/18 20:01 upstream adc218676eef e7bb5d6e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/18 18:13 upstream adc218676eef e7bb5d6e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/18 17:03 upstream adc218676eef e7bb5d6e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/18 15:47 upstream adc218676eef e7bb5d6e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/18 13:44 upstream adc218676eef e7bb5d6e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/18 12:41 upstream adc218676eef e7bb5d6e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/18 11:37 upstream adc218676eef e7bb5d6e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/18 11:14 upstream adc218676eef e7bb5d6e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/18 07:09 upstream f66d6acccbc0 cfe3a04a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/18 05:53 upstream f66d6acccbc0 cfe3a04a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/18 04:40 upstream f66d6acccbc0 cfe3a04a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/18 03:36 upstream f66d6acccbc0 cfe3a04a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/18 03:06 upstream f66d6acccbc0 cfe3a04a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/18 01:36 upstream f66d6acccbc0 cfe3a04a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/11/17 13:47 upstream 4a5df3796467 cfe3a04a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/11/16 21:13 upstream e8bdb3c8be08 cfe3a04a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in nfsd_umount
2024/11/03 15:43 upstream 3e5e6c9900c3 f00eed24 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in nfsd_umount
2024/07/06 12:12 upstream 1dd28064d416 bc4ebbb5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/07/03 04:33 upstream e9d22f7a6655 1ecfa2d8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/06/29 05:25 upstream 6c0483dbfe72 757f06b1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/10/08 11:13 linux-next 33ce24234fca 402f1df0 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in nfsd_umount