syzbot


INFO: task hung in __start_renaming

Status: upstream: reported C repro on 2025/11/23 22:44
Subsystems: jfs
Reported-by: syzbot+2fefb910d2c20c0698d8@syzkaller.appspotmail.com
First crash: 105d, last: 1h21m
Cause bisection: introduced by (bisect log):
commit 1e3c3784221ac86401aea72e2bae36057062fc9c
Author: Mateusz Guzik <mjguzik@gmail.com>
Date: Fri Oct 10 22:17:36 2025 +0000

  fs: rework I_NEW handling to operate without fences

Crash: INFO: task hung in do_renameat2 (log)
Repro: C syz .config
  
Discussions (1)
Title: [syzbot] [ntfs3?] INFO: task hung in __start_renaming
Replies (including bot): 11 (17)
Last reply: 2025/11/25 09:35
Last patch testing requests (8)
Created Duration User Patch Repo Result
2026/01/01 01:28 26m retest repro linux-next OK log
2026/01/01 01:28 27m retest repro linux-next OK log
2026/01/01 01:28 36m retest repro linux-next OK log
2025/11/24 08:08 28m mjguzik@gmail.com git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git vfs-6.19.directory.locking OK log
2025/11/24 08:07 29m mjguzik@gmail.com git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs.git vfs-6.19.inode report log
2025/11/24 06:29 26m mjguzik@gmail.com patch linux-next report log
2025/11/24 03:29 59m mjguzik@gmail.com patch linux-next error
2025/11/24 00:28 28m mjguzik@gmail.com patch linux-next OK log

Sample crash report:
INFO: task syz.0.17:6073 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.17        state:D stack:28984 pid:6073  tgid:6068  ppid:5928   task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0x14de/0x5210 kernel/sched/core.c:6907
 __schedule_loop kernel/sched/core.c:6989 [inline]
 rt_mutex_schedule+0x76/0xf0 kernel/sched/core.c:7285
 rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
 __rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
 __rt_mutex_slowlock_locked+0x1f8f/0x25c0 kernel/locking/rtmutex.c:1760
 rt_mutex_slowlock+0xbd/0x170 kernel/locking/rtmutex.c:1800
 __rt_mutex_lock kernel/locking/rtmutex.c:1815 [inline]
 rwbase_write_lock+0x14d/0x730 kernel/locking/rwbase_rt.c:244
 inode_lock_nested include/linux/fs.h:1073 [inline]
 lock_rename fs/namei.c:3756 [inline]
 __start_renaming+0x148/0x410 fs/namei.c:3852
 filename_renameat2+0x38c/0x9c0 fs/namei.c:6119
 __do_sys_rename fs/namei.c:6188 [inline]
 __se_sys_rename+0x55/0x2c0 fs/namei.c:6184
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f199cd5c629
RSP: 002b:00007f199c39d028 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
RAX: ffffffffffffffda RBX: 00007f199cfd6090 RCX: 00007f199cd5c629
RDX: 0000000000000000 RSI: 0000200000000400 RDI: 0000200000006200
RBP: 00007f199cdf2b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f199cfd6128 R14: 00007f199cfd6090 R15: 00007ffe3a6f4bb8
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/38:
 #0: ffffffff8dbcd480 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8dbcd480 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8dbcd480 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
4 locks held by kworker/u8:2/43:
2 locks held by getty/5557:
 #0: ffff888037e070a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90003e832e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x462/0x13c0 drivers/tty/n_tty.c:2211
7 locks held by syz.0.17/6069:
2 locks held by syz.0.17/6073:
 #0: ffff888036938480 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
 #1: ffff888045321098 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
 #1: ffff888045321098 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: lock_rename fs/namei.c:3756 [inline]
 #1: ffff888045321098 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_renaming+0x148/0x410 fs/namei.c:3852
7 locks held by syz.1.18/6098:
2 locks held by syz.1.18/6102:
 #0: ffff8880409e2480 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
 #1: ffff888056b104b0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
 #1: ffff888056b104b0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: lock_rename fs/namei.c:3756 [inline]
 #1: ffff888056b104b0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_renaming+0x148/0x410 fs/namei.c:3852
6 locks held by syz.2.19/6131:
2 locks held by syz.2.19/6135:
 #0: ffff88803fa10480 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
 #1: ffff888045309098 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
 #1: ffff888045309098 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: lock_rename fs/namei.c:3756 [inline]
 #1: ffff888045309098 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_renaming+0x148/0x410 fs/namei.c:3852
2 locks held by syz.3.20/6159:
 #0: ffff8880381ce480 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
 #1: ffff888045312868 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
 #1: ffff888045312868 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: lock_rename fs/namei.c:3756 [inline]
 #1: ffff888045312868 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_renaming+0x148/0x410 fs/namei.c:3852
7 locks held by syz.3.20/6164:
6 locks held by syz.4.21/6193:
2 locks held by syz.4.21/6197:
 #0: ffff888036ea2480 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
 #1: ffff88804504b450 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
 #1: ffff88804504b450 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: lock_rename fs/namei.c:3756 [inline]
 #1: ffff88804504b450 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_renaming+0x148/0x410 fs/namei.c:3852
2 locks held by syz.5.22/6233:
 #0: ffff88802a3dc480 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
 #1: ffff888045049098 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
 #1: ffff888045049098 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: lock_rename fs/namei.c:3756 [inline]
 #1: ffff888045049098 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_renaming+0x148/0x410 fs/namei.c:3852
6 locks held by syz.5.22/6238:
5 locks held by syz.6.23/6267:
2 locks held by syz.6.23/6271:
 #0: ffff8880291b4480 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
 #1: ffff88804525b450 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
 #1: ffff88804525b450 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: lock_rename fs/namei.c:3756 [inline]
 #1: ffff88804525b450 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_renaming+0x148/0x410 fs/namei.c:3852
2 locks held by syz.7.24/6302:
 #0: ffff888060a2e480 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
 #1: ffff88804504e3f0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1073 [inline]
 #1: ffff88804504e3f0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: lock_rename fs/namei.c:3756 [inline]
 #1: ffff88804504e3f0 (&type->i_mutex_dir_key#3/1){+.+.}-{4:4}, at: __start_renaming+0x148/0x410 fs/namei.c:3852
7 locks held by syz.7.24/6306:

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 38 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Call Trace:
 <TASK>
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x274/0x2d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x135/0x170 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xf90/0xfe0 kernel/hung_task.c:515
 kthread+0x388/0x470 kernel/kthread.c:467
 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 43 Comm: kworker/u8:2 Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Workqueue: bat_events batadv_tt_purge
RIP: 0010:mark_lock+0x1/0x190 kernel/locking/lockdep.c:4714
Code: 0f b9 3a 90 41 89 e9 48 89 df 48 8b 2c 24 e9 b6 fd ff ff 66 0f 1f 44 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 55 <41> 57 41 56 41 55 41 54 53 8b 46 20 89 c1 81 e1 00 00 03 00 83 f9
RSP: 0018:ffffc90000b579f8 EFLAGS: 00000006
RAX: 0000000000000000 RBX: ffff88801faa9e00 RCX: ffffffff92d9b110
RDX: 0000000000000006 RSI: ffff88801faaa990 RDI: ffff88801faa9e00
RBP: 1ffff11003f5552a R08: ffffffff8f492877 R09: 1ffffffff1e9250e
R10: dffffc0000000000 R11: fffffbfff1e9250f R12: ffff88801faaa9b8
R13: dffffc0000000000 R14: ffff88801faaa990 R15: 0000000000000001
FS:  0000000000000000(0000) GS:ffff888126595000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00005651fc9ba820 CR3: 0000000035f4a000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 mark_held_locks kernel/locking/lockdep.c:4325 [inline]
 __trace_hardirqs_on_caller kernel/locking/lockdep.c:4351 [inline]
 lockdep_hardirqs_on_prepare+0x178/0x260 kernel/locking/lockdep.c:4410
 trace_hardirqs_on+0x28/0x40 kernel/trace/trace_preemptirq.c:78
 __local_bh_enable_ip+0x1ae/0x2b0 kernel/softirq.c:306
 local_bh_enable include/linux/bottom_half.h:33 [inline]
 spin_unlock_bh include/linux/spinlock_rt.h:116 [inline]
 batadv_tt_global_purge net/batman-adv/translation-table.c:2250 [inline]
 batadv_tt_purge+0x475/0xa10 net/batman-adv/translation-table.c:3510
 process_one_work kernel/workqueue.c:3275 [inline]
 process_scheduled_works+0xaec/0x17a0 kernel/workqueue.c:3358
 worker_thread+0xa50/0xfc0 kernel/workqueue.c:3439
 kthread+0x388/0x470 kernel/kthread.c:467
 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>

Crashes (38):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/02/18 15:03 upstream c22e26bd0906 39751c21 .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro (clean fs)] ci2-upstream-fs INFO: task hung in __start_renaming
2026/03/05 17:55 upstream c107785c7e8d d20b04c8 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/03/04 06:26 upstream 0031c06807cf 4180d919 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/03/04 04:13 upstream 0031c06807cf 4180d919 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/03/02 20:20 upstream 11439c4635ed b9dd6534 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/03/02 19:26 upstream 11439c4635ed b9dd6534 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/03/02 14:32 upstream 11439c4635ed b9dd6534 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/02/23 13:28 upstream 6de23f81a5e0 6beca497 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/02/19 10:59 upstream c22e26bd0906 746545b8 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/02/18 22:54 upstream c22e26bd0906 77d4d919 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/02/18 11:33 upstream c22e26bd0906 39751c21 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/02/16 12:19 upstream c22e26bd0906 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/02/15 13:40 upstream ca4ee40bf13d 1e62d198 .config console log report info [disk image] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in __start_renaming
2026/02/14 02:03 upstream c22e26bd0906 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/02/11 10:32 upstream dc855b77719f 441e25b7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/02/08 09:36 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/02/03 16:56 upstream 6bd9ed02871f 6df4c87a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/01/23 19:43 upstream c072629f05d7 e2b1b6e6 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/01/19 14:09 upstream 24d479d26b25 a9fc5226 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/01/14 20:49 upstream c537e12daeec d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/01/13 05:48 upstream b71e635feefc d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/01/12 20:05 upstream 0f61b1860cc3 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/01/07 15:43 upstream f0b9d8eb98df d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2026/01/05 16:31 upstream 3609fa95fb0f d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2025/12/17 23:42 upstream ea1013c15392 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in __start_renaming
2025/12/04 11:46 upstream 559e608c4655 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in __start_renaming
2025/11/26 00:01 linux-next 92fd6e84175b 64219f15 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/25 16:52 linux-next 92fd6e84175b 64219f15 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/23 14:39 linux-next d724c6f85e80 4fb8ef37 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/23 06:29 linux-next d724c6f85e80 4fb8ef37 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/22 02:05 linux-next d724c6f85e80 c31c1b0b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/21 06:10 linux-next 88cbd8ac379c 280ea308 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/20 06:47 linux-next fe4d0dea039f 26ee5237 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/20 02:02 linux-next fe4d0dea039f 26ee5237 .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/20 01:40 linux-next fe4d0dea039f 26ee5237 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/20 00:04 linux-next fe4d0dea039f 26ee5237 .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/19 22:29 linux-next fe4d0dea039f 26ee5237 .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
2025/11/19 19:39 linux-next fe4d0dea039f 26ee5237 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __start_renaming
* Struck through repros no longer work on HEAD.