syzbot

INFO: task hung in vfs_rmdir (2)

Status: upstream: reported C repro on 2024/06/03 03:50
Subsystems: exfat
Reported-by: syzbot+42986aeeddfd7ed93c8b@syzkaller.appspotmail.com
First crash: 204d, last: 18d
Cause bisection: failed (error log, bisect log)
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [ext4?] INFO: task hung in vfs_rmdir (2) | 5 (9) | 2024/06/03 11:27
Similar bugs (1)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: task hung in vfs_rmdir fs | | | | 1 | 893d | 893d | 0/28 | auto-closed as invalid on 2022/09/21 08:53
Last patch testing requests (5)
Created | Duration | User | Patch | Repo | Result
2024/10/11 13:44 | 15m | retest repro | | upstream | report log
2024/06/13 06:12 | 16m | retest repro | | upstream | report log
2024/06/03 10:42 | 25m | hdanton@sina.com | patch | https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master | OK log
2024/06/03 04:21 | 0m | viro@zeniv.linux.org.uk | | git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6 v5.0 | error
2024/06/03 03:56 | 16m | viro@zeniv.linux.org.uk | | git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6 v6.9 | report log
Fix bisection attempts (1)
Created | Duration | User | Patch | Repo | Result
2024/07/26 02:14 | 1h40m | bisect fix | | upstream | OK (0) job log log

Sample crash report:
INFO: task syz-executor150:5089 blocked for more than 143 seconds.
      Not tainted 6.10.0-rc1-syzkaller-00027-g4a4be1ad3a6e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor150 state:D stack:24224 pid:5089  tgid:5087  ppid:5086   flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5408 [inline]
 __schedule+0x1796/0x49d0 kernel/sched/core.c:6745
 __schedule_loop kernel/sched/core.c:6822 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6837
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6894
 rwsem_down_write_slowpath+0xeeb/0x13b0 kernel/locking/rwsem.c:1178
 __down_write_common+0x1af/0x200 kernel/locking/rwsem.c:1306
 inode_lock include/linux/fs.h:791 [inline]
 vfs_rmdir+0x101/0x510 fs/namei.c:4203
 do_rmdir+0x3b5/0x580 fs/namei.c:4273
 __do_sys_rmdir fs/namei.c:4292 [inline]
 __se_sys_rmdir fs/namei.c:4290 [inline]
 __x64_sys_rmdir+0x49/0x60 fs/namei.c:4290
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f9b91eaed89
RSP: 002b:00007f9b91e65168 EFLAGS: 00000246 ORIG_RAX: 0000000000000054
RAX: ffffffffffffffda RBX: 00007f9b91f375e8 RCX: 00007f9b91eaed89
RDX: ffffffffffffffb0 RSI: e0f7bef392ce73bd RDI: 0000000020000180
RBP: 00007f9b91f375e0 R08: 00007f9b91e656c0 R09: 0000000000000000
R10: 00007f9b91e656c0 R11: 0000000000000246 R12: 00007f9b91f375ec
R13: 0000000000000006 R14: 00007fff5a763930 R15: 00007fff5a763a18
 </TASK>
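
The blocked task is parked in the rwsem write slow path while taking an inode lock. A minimal sketch of the locking sequence this trace walks, condensed from the fs/namei.c locations cited above (not the verbatim kernel source):

```c
/*
 * Condensed sketch of the rmdir locking order seen in the trace:
 * do_rmdir() takes the parent directory's i_rwsem with the
 * I_MUTEX_PARENT subclass, then vfs_rmdir() (fs/namei.c:4203 above)
 * takes the victim inode's own i_rwsem. syz-executor150 is blocked
 * inside the second inode_lock().
 */
#include <linux/fs.h>
#include <linux/dcache.h>

static void rmdir_lock_order_sketch(struct inode *dir, struct dentry *victim)
{
	inode_lock_nested(dir, I_MUTEX_PARENT);	/* do_rmdir(): lock #1 below */

	/* ... victim dentry is looked up under the parent lock ... */

	inode_lock(d_inode(victim));		/* vfs_rmdir(): lock #2 below,
						   where this task is stuck */
	/* ... dir->i_op->rmdir(dir, victim) would run here ... */
	inode_unlock(d_inode(victim));
	inode_unlock(dir);
}
```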

Showing all locks held in the system:
1 lock held by khungtaskd/30:
 #0: ffffffff8e333f60 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:329 [inline]
 #0: ffffffff8e333f60 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:781 [inline]
 #0: ffffffff8e333f60 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6614
2 locks held by getty/4840:
 #0: ffff88802afc40a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002f0e2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6b5/0x1e10 drivers/tty/n_tty.c:2201
3 locks held by syz-executor150/5089:
 #0: ffff88807cf1a420 (sb_writers#9){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:409
 #1: ffff88807a2d9650 (&sb->s_type->i_mutex_key#14/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:826 [inline]
 #1: ffff88807a2d9650 (&sb->s_type->i_mutex_key#14/1){+.+.}-{3:3}, at: do_rmdir+0x263/0x580 fs/namei.c:4261
 #2: ffff88807a2d9650 (&sb->s_type->i_mutex_key#15){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:791 [inline]
 #2: ffff88807a2d9650 (&sb->s_type->i_mutex_key#15){+.+.}-{3:3}, at: vfs_rmdir+0x101/0x510 fs/namei.c:4203
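
Note that lock #1 and lock #2 in this list carry the same address, ffff88807a2d9650: the parent i_rwsem already held from do_rmdir() is the very rwsem vfs_rmdir() is waiting on. One plausible reading, consistent with the exfat subsystem label, is that a corrupted directory entry makes the victim dentry resolve to the same inode as its parent, so the task deadlocks against itself. A purely hypothetical illustration (not kernel source):

```c
/*
 * Hypothetical illustration of the self-deadlock suggested by the
 * identical lock addresses above: if d_inode(victim) == dir, the
 * second write lock waits forever on the rwsem this task holds.
 * Lockdep does not flag it because the two acquisitions use
 * different lock classes (i_mutex_key#14/1 vs i_mutex_key#15).
 */
static void rmdir_self_deadlock_sketch(struct inode *dir, struct dentry *victim)
{
	inode_lock_nested(dir, I_MUTEX_PARENT);	/* succeeds: lock #1 */
	inode_lock(d_inode(victim));		/* if d_inode(victim) == dir,
						   never returns: lock #2 */
}
```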

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 30 Comm: khungtaskd Not tainted 6.10.0-rc1-syzkaller-00027-g4a4be1ad3a6e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:223 [inline]
 watchdog+0xfde/0x1020 kernel/hung_task.c:379
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
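
This backtrace is the hung-task watchdog itself: khungtaskd wakes periodically, scans for uninterruptible (D-state) tasks whose context-switch counters have not moved for a full timeout window, prints the report above, and, as here, asks every CPU for an NMI backtrace. A condensed paraphrase of the per-task check in kernel/hung_task.c (simplified, not verbatim):

```c
#include <linux/sched.h>
#include <linux/jiffies.h>

/* Condensed paraphrase of check_hung_task() in kernel/hung_task.c. */
static void check_hung_task_sketch(struct task_struct *t, unsigned long timeout)
{
	unsigned long switch_count = t->nvcsw + t->nivcsw;

	/* Any voluntary or involuntary context switch resets the clock. */
	if (switch_count != t->last_switch_count) {
		t->last_switch_count = switch_count;
		t->last_switch_time = jiffies;
		return;
	}
	/* Still inside the timeout window: nothing to report yet. */
	if (time_is_after_jiffies(t->last_switch_time + timeout * HZ))
		return;

	/*
	 * No context switch for `timeout` seconds while in D state:
	 * emit the "blocked for more than N seconds" report, dump held
	 * locks, and optionally trigger the all-CPU NMI backtraces
	 * seen in this log.
	 */
}
```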
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 1093 Comm: kworker/u8:7 Not tainted 6.10.0-rc1-syzkaller-00027-g4a4be1ad3a6e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
Workqueue: events_unbound toggle_allocation_gate
RIP: 0010:hlock_class kernel/locking/lockdep.c:228 [inline]
RIP: 0010:check_wait_context kernel/locking/lockdep.c:4798 [inline]
RIP: 0010:__lock_acquire+0x876/0x1fd0 kernel/locking/lockdep.c:5087
Code: 8b 5d 00 81 e3 ff 1f 00 00 48 89 d8 48 c1 e8 06 48 8d 3c c5 80 15 f7 92 be 08 00 00 00 e8 c2 2b 86 00 48 0f a3 1d ca b2 84 11 <73> 1a 48 69 c3 c8 00 00 00 48 8d 98 80 74 c5 92 48 ba 00 00 00 00
RSP: 0018:ffffc900047af3d0 EFLAGS: 00000057
RAX: 0000000000000001 RBX: 0000000000000021 RCX: ffffffff817262ae
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff92f71580
RBP: 0000000000000003 R08: ffffffff92f71587 R09: 1ffffffff25ee2b0
R10: dffffc0000000000 R11: fffffbfff25ee2b1 R12: 0000000000000005
R13: ffff8880223f8bc8 R14: 0000000000000005 R15: ffff8880223f8bc8
FS:  0000000000000000(0000) GS:ffff8880b9500000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055e370ade680 CR3: 000000000e132000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5754
 __raw_spin_lock include/linux/spinlock_api_smp.h:133 [inline]
 _raw_spin_lock+0x2e/0x40 kernel/locking/spinlock.c:154
 spin_lock include/linux/spinlock.h:351 [inline]
 __pte_offset_map_lock+0x1ba/0x300 mm/pgtable-generic.c:375
 get_locked_pte include/linux/mm.h:2744 [inline]
 __text_poke+0x2c5/0xd30 arch/x86/kernel/alternative.c:1883
 text_poke arch/x86/kernel/alternative.c:1968 [inline]
 text_poke_bp_batch+0x8cd/0xb30 arch/x86/kernel/alternative.c:2357
 text_poke_flush arch/x86/kernel/alternative.c:2470 [inline]
 text_poke_finish+0x30/0x50 arch/x86/kernel/alternative.c:2477
 arch_jump_label_transform_apply+0x1c/0x30 arch/x86/kernel/jump_label.c:146
 static_key_enable_cpuslocked+0x136/0x260 kernel/jump_label.c:205
 static_key_enable+0x1a/0x20 kernel/jump_label.c:218
 toggle_allocation_gate+0xb5/0x250 mm/kfence/core.c:826
 process_one_work kernel/workqueue.c:3231 [inline]
 process_scheduled_works+0xa2c/0x1830 kernel/workqueue.c:3312
 worker_thread+0x86d/0xd70 kernel/workqueue.c:3393
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
INFO: NMI handler (nmi_cpu_backtrace_handler) took too long to run: 1.739 msecs
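
For completeness, CPU 1 is not hung: kworker/u8:7 is KFENCE's allocation-gate worker (toggle_allocation_gate, mm/kfence/core.c:826), which flips a static branch. Enabling a static key live-patches every branch site in kernel text via the text_poke machinery in the trace. A minimal sketch of the static-key API involved, with illustrative names (the real KFENCE key is not named here):

```c
#include <linux/jump_label.h>

/* Illustrative static key, defaulting to the "disabled" fast path. */
static DEFINE_STATIC_KEY_FALSE(sample_gate);

static void rare_path(void);	/* hypothetical helper */

static void hot_path(void)
{
	/*
	 * Compiles to a NOP (or direct jump) that is rewritten in place
	 * when the key is toggled; there is no runtime load-and-test.
	 */
	if (static_branch_unlikely(&sample_gate))
		rare_path();
}

static void toggle_gate(bool open)
{
	/*
	 * Enabling or disabling rewrites every branch site via
	 * text_poke_bp_batch(), the work captured in the trace above.
	 */
	if (open)
		static_branch_enable(&sample_gate);
	else
		static_branch_disable(&sample_gate);
}
```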

Crashes (13):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2024/05/30 03:37 | upstream | 4a4be1ad3a6e | 34889ee3 | .config | strace log | report | syz / log | C | | [disk image] [vmlinux] [kernel image] [mounted in repro] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/11/14 02:24 | upstream | 0a9b9d17f3a7 | bb3f8425 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/09/19 11:10 | upstream | 4a39ac5b7d62 | c673ca06 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/09/08 11:44 | upstream | d1f2d51b711a | 9750182a | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/09/04 07:57 | upstream | 88fac17500f4 | 9d47f20a | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/08/19 15:50 | upstream | 47ac09b91bef | 9f0ab3fb | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/05/29 22:05 | upstream | 4a4be1ad3a6e | 34889ee3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/05/29 22:04 | upstream | 4a4be1ad3a6e | 34889ee3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/05/29 22:02 | upstream | 4a4be1ad3a6e | 34889ee3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/05/29 21:59 | upstream | 4a4be1ad3a6e | 34889ee3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/05/29 21:58 | upstream | 4a4be1ad3a6e | 34889ee3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/05/29 21:55 | upstream | 4a4be1ad3a6e | 34889ee3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir
2024/05/12 05:48 | upstream | cf87f46fd34d | 9026e142 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in vfs_rmdir