syzbot


INFO: task hung in nfsd_umount

Status: upstream: reported on 2024/07/07 04:37
Subsystems: nfs
Reported-by: syzbot+b568ba42c85a332a88ee@syzkaller.appspotmail.com
First crash: 632d, last: 16h34m
Discussions (3)
Title | Replies (including bot) | Last reply
[syzbot] Monthly nfs report (Jul 2025) | 0 (1) | 2025/07/04 12:38
[syzbot] Monthly nfs report (Jun 2025) | 0 (1) | 2025/06/03 09:38
[syzbot] [nfs?] INFO: task hung in nfsd_umount | 3 (4) | 2024/09/21 07:58

Sample crash report:
INFO: task syz.0.3094:21528 blocked for more than 143 seconds.
      Tainted: G     U       L      syzkaller #0
      Blocked by coredump.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.3094      state:D stack:19624 pid:21528 tgid:21528 ppid:5821   task_flags:0x40064c flags:0x00080003
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5260 [inline]
 __schedule+0xfe6/0x5fa0 kernel/sched/core.c:6867
 __schedule_loop kernel/sched/core.c:6949 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:6964
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7021
 __mutex_lock_common kernel/locking/mutex.c:692 [inline]
 __mutex_lock+0xc9a/0x1b90 kernel/locking/mutex.c:776
 nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:575
 nfsd_umount+0x3b/0x60 fs/nfsd/nfsctl.c:1353
 deactivate_locked_super+0xc1/0x1b0 fs/super.c:476
 deactivate_super fs/super.c:509 [inline]
 deactivate_super+0xe7/0x110 fs/super.c:505
 cleanup_mnt+0x21f/0x450 fs/namespace.c:1312
 task_work_run+0x150/0x240 kernel/task_work.c:233
 exit_task_work include/linux/task_work.h:40 [inline]
 do_exit+0x829/0x2a30 kernel/exit.c:971
 do_group_exit+0xd5/0x2a0 kernel/exit.c:1112
 get_signal+0x1ec7/0x21e0 kernel/signal.c:3034
 arch_do_signal_or_restart+0x91/0x770 arch/x86/kernel/signal.c:337
 __exit_to_user_mode_loop kernel/entry/common.c:41 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:75 [inline]
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 irqentry_exit_to_user_mode_prepare include/linux/irq-entry-common.h:270 [inline]
 irqentry_exit_to_user_mode include/linux/irq-entry-common.h:339 [inline]
 irqentry_exit+0x1f8/0x670 kernel/entry/common.c:196
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:618
RIP: 0033:0x15
RSP: 002b:000000000000000a EFLAGS: 00010212
RAX: 000000000000000b RBX: 00007f0fd5016450 RCX: 00007f0fd4d9bf79
RDX: 0000000000000000 RSI: 0000000000000002 RDI: 0000000020003b46
RBP: 00007f0fd4e327e0 R08: 0000000000000002 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f0fd50164e8 R14: 00007f0fd5016450 R15: 00007ffd37d33518
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/30:
 #0: ffffffff8e5e2de0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:302 [inline]
 #0: ffffffff8e5e2de0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:838 [inline]
 #0: ffffffff8e5e2de0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x3d/0x184 kernel/locking/lockdep.c:6775
4 locks held by kworker/u8:9/3018:
 #0: ffff88801c2e7148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x11ae/0x1840 kernel/workqueue.c:3232
 #1: ffffc9000b957d08 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x927/0x1840 kernel/workqueue.c:3233
 #2: ffffffff903e97f0 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xab/0x830 net/core/net_namespace.c:670
 #3: ffffffff90402128 (rtnl_mutex){+.+.}-{4:4}, at: ops_exit_rtnl_list net/core/net_namespace.c:173 [inline]
 #3: ffffffff90402128 (rtnl_mutex){+.+.}-{4:4}, at: ops_undo_list+0x7ec/0xab0 net/core/net_namespace.c:248
2 locks held by syz.3.2809/19930:
 #0: ffffffff904aef90 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8ea47b68 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_threads_set_doit+0x687/0xbc0 fs/nfsd/nfsctl.c:1596
1 lock held by syz.2.2865/20304:
 #0: ffff8880580c9348 (&sb->s_type->i_mutex_key#14){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
 #0: ffff8880580c9348 (&sb->s_type->i_mutex_key#14){+.+.}-{4:4}, at: __sock_release+0x86/0x260 net/socket.c:661
2 locks held by syz.0.3094/21528:
 #0: ffff888029cda0e0 (&type->s_umount_key#53){+.+.}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff888029cda0e0 (&type->s_umount_key#53){+.+.}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff888029cda0e0 (&type->s_umount_key#53){+.+.}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff888029cda0e0 (&type->s_umount_key#53){+.+.}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ea47b68 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:575
2 locks held by syz.5.3306/22752:
 #0: ffffffff904aef90 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8ea47b68 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_threads_set_doit+0x687/0xbc0 fs/nfsd/nfsctl.c:1596
3 locks held by kworker/u9:1/22983:
 #0: ffff888069450148 ((wq_completion)hci8){+.+.}-{0:0}, at: process_one_work+0x11ae/0x1840 kernel/workqueue.c:3232
 #1: ffffc9000b5afd08 ((work_completion)(&hdev->power_on)){+.+.}-{0:0}, at: process_one_work+0x927/0x1840 kernel/workqueue.c:3233
 #2: ffff88807a708ec0 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_open+0x22/0xb0 net/bluetooth/hci_core.c:428
2 locks held by getty/24249:
 #0: ffff8880341ed0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc900044c62f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x419/0x1500 drivers/tty/n_tty.c:2211
1 lock held by syz.2.3668/24575:
 #0: ffffffff8e5ee9f8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x19e/0x3c0 kernel/rcu/tree_exp.h:343
2 locks held by syz.6.3670/24581:
 #0: ffffffff903e97f0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
 #1: ffffffff8e5ee9f8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x19e/0x3c0 kernel/rcu/tree_exp.h:343

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 30 Comm: khungtaskd Tainted: G     U       L      syzkaller #0 PREEMPT(full) 
Tainted: [U]=USER, [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/24/2026
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x100/0x190 lib/dump_stack.c:120
 nmi_cpu_backtrace.cold+0x12d/0x151 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x1d7/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x141/0x190 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xcc3/0xfe0 kernel/hung_task.c:515
 kthread+0x370/0x450 kernel/kthread.c:467
 ret_from_fork+0x754/0xaf0 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 23074 Comm: kworker/u8:3 Tainted: G     U       L      syzkaller #0 PREEMPT(full) 
Tainted: [U]=USER, [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/24/2026
Workqueue: events_unbound toggle_allocation_gate
RIP: 0010:native_save_fl arch/x86/include/asm/irqflags.h:26 [inline]
RIP: 0010:arch_local_save_flags arch/x86/include/asm/irqflags.h:109 [inline]
RIP: 0010:check_preemption_disabled+0x2c/0xe0 lib/smp_processor_id.c:19
Code: 55 53 48 83 ec 08 65 8b 1d f5 23 6e 08 65 f7 05 e6 23 6e 08 ff ff ff 7f 74 0f 48 83 c4 08 89 d8 5b 5d 41 5c e9 45 16 03 00 9c <58> f6 c4 02 74 ea 65 4c 8b 25 ae 23 6e 08 48 89 fd 41 f6 44 24 2f
RSP: 0018:ffffc9000bd8f8a8 EFLAGS: 00000046
RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffffff81ab45fa
RDX: 0000000000000000 RSI: ffffffff8dd3de47 RDI: ffffffff8bfa95a0
RBP: ffffffff81ab4436 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000000 R12: ffff88802eba8000
R13: ffffffff8e7a6e80 R14: 0000000000000202 R15: 8000000000000063
FS:  0000000000000000(0000) GS:ffff8881245c1000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00002000000c7000 CR3: 000000000e392000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 lockdep_recursion_inc kernel/locking/lockdep.c:465 [inline]
 lock_release kernel/locking/lockdep.c:5888 [inline]
 lock_release+0x9a/0x2e0 kernel/locking/lockdep.c:5875
 rcu_lock_release include/linux/rcupdate.h:312 [inline]
 rcu_read_unlock include/linux/rcupdate.h:868 [inline]
 pte_unmap include/linux/pgtable.h:136 [inline]
 __text_poke+0x52b/0xac0 arch/x86/kernel/alternative.c:2586
 smp_text_poke_batch_finish+0x57d/0xc60 arch/x86/kernel/alternative.c:2943
 arch_jump_label_transform_apply+0x1c/0x30 arch/x86/kernel/jump_label.c:146
 jump_label_update+0x37a/0x550 kernel/jump_label.c:919
 static_key_enable_cpuslocked+0x1bc/0x270 kernel/jump_label.c:210
 static_key_enable+0x1a/0x20 kernel/jump_label.c:223
 toggle_allocation_gate mm/kfence/core.c:894 [inline]
 toggle_allocation_gate+0xfe/0x2d0 mm/kfence/core.c:886
 process_one_work+0x9c2/0x1840 kernel/workqueue.c:3257
 process_scheduled_works kernel/workqueue.c:3340 [inline]
 worker_thread+0x5da/0xe40 kernel/workqueue.c:3421
 kthread+0x370/0x450 kernel/kthread.c:467
 ret_from_fork+0x754/0xaf0 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>

Crashes (3592):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/02/10 12:32 upstream 72c395024dac a076df6f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/10 10:05 upstream 8a5203c630c6 4ab09a02 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/09 20:45 upstream 05f7e89ab973 df949cd9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/09 18:47 upstream 05f7e89ab973 df949cd9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/09 17:32 upstream 05f7e89ab973 df949cd9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/09 15:43 upstream 05f7e89ab973 df949cd9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/09 06:00 upstream e98f34af6116 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 23:52 upstream e98f34af6116 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 21:52 upstream e98f34af6116 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 19:30 upstream e98f34af6116 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 17:41 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 16:40 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 11:56 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 09:24 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in nfsd_umount
2026/02/08 02:18 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 00:56 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/07 19:50 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/07 17:52 upstream 2687c848e578 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/07 15:17 upstream 2687c848e578 f20fc9f9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/07 13:50 upstream 2687c848e578 f20fc9f9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/07 01:39 upstream 2687c848e578 f20fc9f9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2026/02/06 22:26 upstream b7ff7151e653 97745f52 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/06 20:39 upstream b7ff7151e653 97745f52 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/06 18:50 upstream b7ff7151e653 97745f52 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/06 16:59 upstream b7ff7151e653 97745f52 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/06 06:59 upstream 8fdb05de0e2d f03c4191 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/06 05:13 upstream 8fdb05de0e2d f03c4191 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/05 22:23 upstream 8fdb05de0e2d f03c4191 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/05 20:39 upstream f14faaf3a1fb 4936e85c .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/05 14:19 upstream f14faaf3a1fb 4936e85c .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/05 12:59 upstream f14faaf3a1fb 4936e85c .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/04 22:49 upstream 5fd0a1df5d05 ea10c935 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/04 10:32 upstream de0674d9bc69 42b01fab .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/04 06:49 upstream de0674d9bc69 42b01fab .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/03 20:24 upstream 6bd9ed02871f 6df4c87a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/03 05:05 upstream dee65f79364c d78927dd .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/03 03:30 upstream dee65f79364c d78927dd .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/03 03:09 upstream dee65f79364c d78927dd .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/02 17:57 upstream 18f7fcd5e69a 018ebef2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/02 15:51 upstream 18f7fcd5e69a 018ebef2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/02 14:01 upstream 18f7fcd5e69a 018ebef2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/02 06:18 upstream 9f2693489ef8 6b8752f2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/01 16:37 upstream 162b42445b58 6b8752f2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/01 11:24 upstream 162b42445b58 6b8752f2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/01 09:57 upstream 162b42445b58 6b8752f2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/01/06 15:39 upstream 7f98ab9da046 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in nfsd_umount
2024/07/06 12:12 upstream 1dd28064d416 bc4ebbb5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/07/03 04:33 upstream e9d22f7a6655 1ecfa2d8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/06/29 05:25 upstream 6c0483dbfe72 757f06b1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2026/01/23 14:15 linux-next a0c666c25aee 3181850c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-rust-kasan-gce INFO: task hung in nfsd_umount
2026/01/23 12:09 linux-next a0c666c25aee 3181850c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in nfsd_umount