INFO: task hung in vfs_rmdir (2)

Status: upstream: reported C repro on 2024/06/03 03:50
Subsystems: exfat
Reported-by: syzbot+42986aeeddfd7ed93c8b@syzkaller.appspotmail.com
First crash: 529d, last: 54d
Cause bisection: failed (error log, bisect log)
Fix bisection: fixed by (bisect log):
commit 79c1587b6cda74deb0c86fc7ba194b92958c793c
Author: Namjae Jeon <linkinjeon@kernel.org>
Date: Sat Aug 30 05:44:35 2025 +0000

  exfat: validate cluster allocation bits of the allocation bitmap
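
The commit title alone does not show the change, so as a rough illustration of what "validating cluster allocation bits of the allocation bitmap" can mean, here is a minimal userspace-style sketch. All names here (struct fs_info, cluster_is_allocated, the field names) are hypothetical and this is not the actual patch in fs/exfat; it only shows the general shape of the check. In exFAT, data clusters are numbered from 2 and bit 0 of the allocation bitmap corresponds to cluster 2, LSB-first within each byte.

/*
 * Hedged sketch, not the real fix: reject any cluster referenced by
 * on-disk metadata whose allocation bit is clear or out of range.
 */
#include <stdbool.h>
#include <stdint.h>

#define EXFAT_FIRST_CLUSTER 2  /* exFAT data clusters start at 2 */

struct fs_info {                 /* hypothetical in-memory state */
    const uint8_t *alloc_bitmap; /* copy of the allocation bitmap */
    uint32_t num_clusters;       /* total data clusters on the volume */
};

/* Return true iff @clu is in range and marked allocated in the bitmap. */
static bool cluster_is_allocated(const struct fs_info *fsi, uint32_t clu)
{
    uint32_t idx;

    if (clu < EXFAT_FIRST_CLUSTER ||
        clu >= EXFAT_FIRST_CLUSTER + fsi->num_clusters)
        return false;  /* out-of-range cluster: corrupt image */

    idx = clu - EXFAT_FIRST_CLUSTER;
    return fsi->alloc_bitmap[idx / 8] & (1u << (idx % 8));
}

Refusing clusters whose bits are clear stops metadata walks from following chains into unallocated space on a crafted image, the kind of unbounded traversal that surfaces as a hung-task report.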

  
Discussions (3)
Title | Replies (including bot) | Last reply
[syzbot] [ext4?] INFO: task hung in vfs_rmdir (2) | 5 (10) | 2025/10/08 05:52
[syzbot] Monthly exfat report (May 2025) | 0 (1) | 2025/05/24 10:05
[syzbot] Monthly exfat report (Mar 2025) | 0 (1) | 2025/03/11 14:05
Similar bugs (1)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: task hung in vfs_rmdir (fs) | 1 | | | | 1 | 1218d | 1218d | 0/29 | auto-closed as invalid on 2022/09/21 08:53
Last patch testing requests (6)
Created | Duration | User | Patch | Repo | Result
2024/12/20 23:32 | 16m | retest repro | | upstream | report log
2024/10/11 13:44 | 15m | retest repro | | upstream | report log
2024/06/13 06:12 | 16m | retest repro | | upstream | report log
2024/06/03 10:42 | 25m | hdanton@sina.com | patch | https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master | OK log
2024/06/03 04:21 | 0m | viro@zeniv.linux.org.uk | | git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6 v5.0 | error
2024/06/03 03:56 | 16m | viro@zeniv.linux.org.uk | | git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6 v6.9 | report log
Fix bisection attempts (9)
Created | Duration | User | Patch | Repo | Result
2025/10/07 23:23 | 6h27m | bisect fix | | upstream | OK (1) job log
2025/08/30 06:54 | 1h51m | bisect fix | | upstream | OK (0) job log log
2025/07/26 12:17 | 2h53m | bisect fix | | upstream | OK (0) job log log
2025/06/24 00:56 | 3h20m | bisect fix | | upstream | OK (0) job log log
2025/05/23 19:55 | 4h23m | bisect fix | | upstream | OK (0) job log log
2025/04/20 08:26 | 2h22m | bisect fix | | upstream | OK (0) job log log
2025/03/01 10:15 | 1h46m | bisect fix | | upstream | OK (0) job log log
2025/01/28 03:38 | 2h24m | bisect fix | | upstream | OK (0) job log log
2024/07/26 02:14 | 1h40m | bisect fix | | upstream | OK (0) job log log

Sample crash report:
INFO: task syz.7.104:7031 blocked for more than 143 seconds.
      Not tainted 6.14.0-rc7-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.7.104       state:D stack:27552 pid:7031  tgid:6982  ppid:6501   task_flags:0x400040 flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5378 [inline]
 __schedule+0x18bc/0x4c40 kernel/sched/core.c:6765
 __schedule_loop kernel/sched/core.c:6842 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6857
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6914
 rwsem_down_write_slowpath+0xeee/0x13b0 kernel/locking/rwsem.c:1176
 __down_write_common kernel/locking/rwsem.c:1304 [inline]
 __down_write kernel/locking/rwsem.c:1313 [inline]
 down_write+0x1d7/0x220 kernel/locking/rwsem.c:1578
 inode_lock include/linux/fs.h:877 [inline]
 vfs_rmdir+0x101/0x510 fs/namei.c:4385
 do_rmdir+0x3b5/0x580 fs/namei.c:4455
 __do_sys_unlinkat fs/namei.c:4631 [inline]
 __se_sys_unlinkat fs/namei.c:4625 [inline]
 __x64_sys_unlinkat+0xde/0xf0 fs/namei.c:4625
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fb13bf8d169
RSP: 002b:00007fb13ce3c038 EFLAGS: 00000246 ORIG_RAX: 0000000000000107
RAX: ffffffffffffffda RBX: 00007fb13c1a6080 RCX: 00007fb13bf8d169
RDX: 0000000000000200 RSI: 0000400000000000 RDI: ffffffffffffff9c
RBP: 00007fb13c00e2a0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007fb13c1a6080 R15: 00007ffd33241a48
 </TASK>

Showing all locks held in the system:
3 locks held by kworker/u8:1/13:
 #0: ffff88801b089148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801b089148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x98b/0x18e0 kernel/workqueue.c:3319
 #1: ffffc90000127c60 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90000127c60 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9c6/0x18e0 kernel/workqueue.c:3319
 #2: ffffffff8fec3f08 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:285
1 lock held by khungtaskd/31:
 #0: ffffffff8eb393e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8eb393e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8eb393e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6746
7 locks held by kworker/u8:5/66:
 #0: ffff88801bef3948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801bef3948 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x98b/0x18e0 kernel/workqueue.c:3319
 #1: ffffc9000157fc60 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000157fc60 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x9c6/0x18e0 kernel/workqueue.c:3319
 #2: ffffffff8feb76d0 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0x17a/0xd60 net/core/net_namespace.c:606
 #3: ffff88807c3260e8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:1030 [inline]
 #3: ffff88807c3260e8 (&dev->mutex){....}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:108 [inline]
 #3: ffff88807c3260e8 (&dev->mutex){....}-{4:4}, at: devlink_pernet_pre_exit+0x13b/0x440 net/devlink/core.c:506
 #4: ffff88806d1e1250 (&devlink->lock_key#21){+.+.}-{4:4}, at: devl_lock net/devlink/core.c:276 [inline]
 #4: ffff88806d1e1250 (&devlink->lock_key#21){+.+.}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:109 [inline]
 #4: ffff88806d1e1250 (&devlink->lock_key#21){+.+.}-{4:4}, at: devlink_pernet_pre_exit+0x14d/0x440 net/devlink/core.c:506
 #5: ffffffff8fec3f08 (rtnl_mutex){+.+.}-{4:4}, at: nsim_destroy+0xa4/0x620 drivers/net/netdevsim/netdev.c:1016
 #6: ffffffff8eb3e8b8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:302 [inline]
 #6: ffffffff8eb3e8b8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x381/0x820 kernel/rcu/tree_exp.h:996
2 locks held by getty/5575:
 #0: ffff888031fa10a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002fde2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x616/0x1770 drivers/tty/n_tty.c:2211
3 locks held by kworker/0:5/5875:
 #0: ffff88801b080d48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801b080d48 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x98b/0x18e0 kernel/workqueue.c:3319
 #1: ffffc900042ffc60 ((fqdir_free_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc900042ffc60 ((fqdir_free_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9c6/0x18e0 kernel/workqueue.c:3319
 #2: ffffffff8eb3e780 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x530 kernel/rcu/tree.c:3741
3 locks held by kworker/0:7/6981:
 #0: ffff88801b080d48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801b080d48 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x98b/0x18e0 kernel/workqueue.c:3319
 #1: ffffc90003bd7c60 (free_ipc_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90003bd7c60 (free_ipc_work){+.+.}-{0:0}, at: process_scheduled_works+0x9c6/0x18e0 kernel/workqueue.c:3319
 #2: ffffffff8eb3e8b8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:334 [inline]
 #2: ffffffff8eb3e8b8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x451/0x820 kernel/rcu/tree_exp.h:996
5 locks held by syz.7.104/6983:
 #0: ffff88807a9c8420 (sb_writers#24){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:547
 #1: ffff88805b8b9810 (&sb->s_type->i_mutex_key#26/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:912 [inline]
 #1: ffff88805b8b9810 (&sb->s_type->i_mutex_key#26/1){+.+.}-{4:4}, at: filename_create+0x260/0x540 fs/namei.c:4082
 #2: ffff88805b8b9bf8 (&inode->ei_update_lock){+.+.}-{4:4}, at: __bch2_create+0x355/0xf40 fs/bcachefs/fs.c:550
 #3: ffff888063084378 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:164 [inline]
 #3: ffff888063084378 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:256 [inline]
 #3: ffff888063084378 (&c->btree_trans_barrier){.+.+}-{0:0}, at: __bch2_trans_get+0x7e4/0xd30 fs/bcachefs/btree_iter.c:3408
 #4: ffff8880630a66d0 (&c->gc_lock){++++}-{4:4}, at: bch2_btree_update_start+0x680/0x1540 fs/bcachefs/btree_update_interior.c:1182
3 locks held by syz.7.104/7031:
 #0: ffff88807a9c8420 (sb_writers#24){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:547
 #1: ffff88805b8b9078 (&sb->s_type->i_mutex_key#26/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:912 [inline]
 #1: ffff88805b8b9078 (&sb->s_type->i_mutex_key#26/1){+.+.}-{4:4}, at: do_rmdir+0x263/0x580 fs/namei.c:4443
 #2: ffff88805b8b9810 (&sb->s_type->i_mutex_key#26){++++}-{4:4}, at: inode_lock include/linux/fs.h:877 [inline]
 #2: ffff88805b8b9810 (&sb->s_type->i_mutex_key#26){++++}-{4:4}, at: vfs_rmdir+0x101/0x510 fs/namei.c:4385
3 locks held by kworker/u8:10/7953:
 #0: ffff88814d984948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88814d984948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x98b/0x18e0 kernel/workqueue.c:3319
 #1: ffffc9000260fc60 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000260fc60 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9c6/0x18e0 kernel/workqueue.c:3319
 #2: ffffffff8fec3f08 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:129 [inline]
 #2: ffffffff8fec3f08 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0x10e/0x16a0 net/ipv6/addrconf.c:4193
1 lock held by syz-executor/9049:
 #0: ffffffff8fec3f08 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:129 [inline]
 #0: ffffffff8fec3f08 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x47e/0x1bc0 net/ipv4/devinet.c:987
1 lock held by syz-executor/9240:
 #0: ffffffff8fec3f08 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8fec3f08 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:335 [inline]
 #0: ffffffff8fec3f08 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0xc55/0x1d30 net/core/rtnetlink.c:4021
4 locks held by syz.0.341/9377:
 #0: ffff88802a474d80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close net/bluetooth/hci_core.c:481 [inline]
 #0: ffff88802a474d80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_unregister_dev+0x203/0x510 net/bluetooth/hci_core.c:2678
 #1: ffff88802a474078 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x60d/0x1260 net/bluetooth/hci_sync.c:5185
 #2: ffffffff900298a8 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:2041 [inline]
 #2: ffffffff900298a8 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_hash_flush+0xa6/0x240 net/bluetooth/hci_conn.c:2698
 #3: ffff888024812338 (&conn->lock#2){+.+.}-{4:4}, at: l2cap_conn_del+0x71/0x690 net/bluetooth/l2cap_core.c:1761
1 lock held by syz-executor/9379:
 #0: ffffffff8fec3f08 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:129 [inline]
 #0: ffffffff8fec3f08 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x47e/0x1bc0 net/ipv4/devinet.c:987

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.14.0-rc7-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:236 [inline]
 watchdog+0x1058/0x10a0 kernel/hung_task.c:399
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 9373 Comm: syz.7.342 Not tainted 6.14.0-rc7-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
RIP: 0010:rcu_is_watching_curr_cpu include/linux/context_tracking.h:128 [inline]
RIP: 0010:rcu_is_watching+0x3a/0xb0 kernel/rcu/tree.c:716
Code: e8 ab cc 59 0a 89 c3 83 f8 08 73 7a 49 bf 00 00 00 00 00 fc ff df 4c 8d 34 dd 50 eb 53 8e 4c 89 f0 48 c1 e8 03 42 80 3c 38 00 <74> 08 4c 89 f7 e8 bc 52 7f 00 48 c7 c3 98 79 03 00 49 03 1e 48 89
RSP: 0018:ffffc900038674b0 EFLAGS: 00000246
RAX: 1ffffffff1ca7d6a RBX: 0000000000000000 RCX: dffffc0000000000
RDX: ffff88803207da00 RSI: ffffffff8c802e00 RDI: ffffffff8c802dc0
RBP: 0000000000000001 R08: ffffffff812a3d5d R09: ffffc90003867610
R10: ffffc90003867570 R11: ffffffff81ad6ba0 R12: ffff88803207da00
R13: ffffffff81ad6ba0 R14: ffffffff8e53eb50 R15: dffffc0000000000
FS:  0000000000000000(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00005561a1593131 CR3: 000000000e938000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 kernel_text_address+0x82/0xe0 kernel/extable.c:113
 __kernel_text_address+0xd/0x40 kernel/extable.c:79
 unwind_get_return_address+0x4d/0x90 arch/x86/kernel/unwind_orc.c:369
 arch_stack_walk+0xfd/0x150 arch/x86/kernel/stacktrace.c:26
 stack_trace_save+0x118/0x1d0 kernel/stacktrace.c:122
 save_stack+0xfb/0x1f0 mm/page_owner.c:156
 __reset_page_owner+0x76/0x430 mm/page_owner.c:297
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1127 [inline]
 free_frozen_pages+0xe04/0x10e0 mm/page_alloc.c:2660
 vfree+0x1c3/0x360 mm/vmalloc.c:3383
 kcov_put kernel/kcov.c:439 [inline]
 kcov_close+0x28/0x50 kernel/kcov.c:535
 __fput+0x3e9/0x9f0 fs/file_table.c:464
 task_work_run+0x24f/0x310 kernel/task_work.c:227
 exit_task_work include/linux/task_work.h:40 [inline]
 do_exit+0xa2a/0x28e0 kernel/exit.c:938
 do_group_exit+0x207/0x2c0 kernel/exit.c:1087
 get_signal+0x168c/0x1720 kernel/signal.c:3036
 arch_do_signal_or_restart+0x96/0x860 arch/x86/kernel/signal.c:337
 exit_to_user_mode_loop kernel/entry/common.c:111 [inline]
 exit_to_user_mode_prepare include/linux/entry-common.h:329 [inline]
 __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
 syscall_exit_to_user_mode+0xce/0x340 kernel/entry/common.c:218
 do_syscall_64+0x100/0x230 arch/x86/entry/common.c:89
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f3e5b58bc1f
Code: Unable to access opcode bytes at 0x7f3e5b58bbf5.
RSP: 002b:00007f3e5c3dcdf0 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: 0000000000f23000 RBX: 0000000001000000 RCX: 00007f3e5b58bc1f
RDX: 0000000001000000 RSI: 00007f3e50e00000 RDI: 0000000000000003
RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000000097e9
R10: 0000400000012f42 R11: 0000000000000293 R12: 0000000000000003
R13: 00007f3e5c3dcef0 R14: 00007f3e5c3dceb0 R15: 00007f3e50e00000
 </TASK>
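
For readers decoding the hung task's register dump near the top of the report: ORIG_RAX 0x107 is __NR_unlinkat on x86-64, RDI 0xffffffffffffff9c is AT_FDCWD, and RDX 0x200 is AT_REMOVEDIR, so the blocked task is in the rmdir path of unlinkat(2), matching the call trace. A minimal stand-alone sketch of that call follows; the "./dir" path is a placeholder, not taken from the reproducer, which passes a syzkaller-mapped address instead.

#include <fcntl.h>   /* AT_FDCWD, AT_REMOVEDIR */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* unlinkat() with AT_REMOVEDIR follows the rmdir path in the kernel:
     * do_rmdir() locks the parent directory, then vfs_rmdir() calls
     * inode_lock() on the target -- the down_write() this task hangs in. */
    if (unlinkat(AT_FDCWD, "./dir" /* placeholder path */, AT_REMOVEDIR) != 0)
        perror("unlinkat");
    return 0;
}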

Crashes (14):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/03/17 13:39 | upstream | 4701f33a1070 | 948c34e4 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/11/14 02:24 | upstream | 0a9b9d17f3a7 | bb3f8425 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/09/19 11:10 | upstream | 4a39ac5b7d62 | c673ca06 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/09/08 11:44 | upstream | d1f2d51b711a | 9750182a | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/09/04 07:57 | upstream | 88fac17500f4 | 9d47f20a | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/08/19 15:50 | upstream | 47ac09b91bef | 9f0ab3fb | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/05/30 03:37 | upstream | 4a4be1ad3a6e | 34889ee3 | .config | strace log | report | syz / log | C | | [disk image] [vmlinux] [kernel image] [mounted in repro] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/05/29 22:05 | upstream | 4a4be1ad3a6e | 34889ee3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/05/29 22:04 | upstream | 4a4be1ad3a6e | 34889ee3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/05/29 22:02 | upstream | 4a4be1ad3a6e | 34889ee3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/05/29 21:59 | upstream | 4a4be1ad3a6e | 34889ee3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/05/29 21:58 | upstream | 4a4be1ad3a6e | 34889ee3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/05/29 21:55 | upstream | 4a4be1ad3a6e | 34889ee3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/05/12 05:48 | upstream | cf87f46fd34d | 9026e142 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
* Struck through repros no longer work on HEAD.