syzbot


INFO: task hung in do_unlinkat (5)

Status: upstream: reported C repro on 2024/06/02 14:09
Subsystems: ntfs3
Reported-by: syzbot+08b113332e19a9378dd5@syzkaller.appspotmail.com
First crash: 637d, last: 14d
Cause bisection: failed (error log, bisect log)
  
Discussions (4)
Title Replies (including bot) Last reply
[syzbot] Monthly ntfs3 report (Dec 2025) 0 (1) 2025/12/29 08:11
[syzbot] Monthly ntfs3 report (Nov 2025) 0 (1) 2025/11/27 07:44
[syzbot] Monthly kernfs report (Jan 2025) 0 (1) 2025/01/16 10:12
[syzbot] [kernfs?] [bcachefs?] [exfat?] INFO: task hung in do_unlinkat (5) 0 (2) 2024/11/26 01:26
Similar bugs (9)
Kernel Title Rank Repro Cause bisect Fix bisect Count Last Reported Patched Status
linux-4.19 INFO: task hung in do_unlinkat (2) 1 1 1130d 1130d 0/1 auto-obsoleted due to no activity on 2023/03/27 07:50
android-49 INFO: task hung in do_unlinkat 1 5 2682d 2793d 0/3 auto-closed as invalid on 2019/02/24 11:49
upstream INFO: task hung in do_unlinkat (2) fs 1 4 1788d 1788d 0/29 auto-closed as invalid on 2021/05/17 08:41
linux-6.1 INFO: task hung in do_unlinkat origin:upstream missing-backport 1 C inconclusive 12 239d 366d 0/3 upstream: reported C repro on 2024/12/30 08:52
upstream INFO: task hung in do_unlinkat exfat 1 34 2542d 2778d 0/29 closed as dup on 2018/10/27 13:26
upstream INFO: task hung in do_unlinkat (3) fs 1 2 1506d 1549d 0/29 closed as invalid on 2022/02/07 19:19
linux-4.19 INFO: task hung in do_unlinkat 1 1 1260d 1260d 0/1 auto-obsoleted due to no activity on 2022/11/17 10:56
upstream INFO: task hung in do_unlinkat (4) exfat 1 4 1098d 1204d 0/29 auto-obsoleted due to no activity on 2023/04/08 02:53
linux-5.15 INFO: task hung in do_unlinkat 1 3 450d 597d 0/3 auto-obsoleted due to no activity on 2025/01/15 03:04
Last patch testing requests (8)
Created Duration User Patch Repo Result
2025/12/31 13:02 27m retest repro linux-next OK log
2025/06/15 06:54 23m retest repro upstream OK log
2025/06/15 06:54 23m retest repro upstream OK log
2025/04/05 13:56 17m retest repro upstream report log
2025/04/05 13:56 16m retest repro upstream report log
2024/12/21 18:21 17m retest repro upstream report log
2024/12/21 18:21 19m retest repro upstream report log
2025/12/31 13:30 retest repro linux-next running

Sample crash report:
INFO: task syz.6.316:10363 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.6.316       state:D stack:28952 pid:10363 tgid:10339 ppid:9809   task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x1480/0x50a0 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 rt_mutex_schedule+0x77/0xf0 kernel/sched/core.c:7241
 rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
 __rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
 __rt_mutex_slowlock_locked+0x1dfe/0x25e0 kernel/locking/rtmutex.c:1760
 rt_mutex_slowlock+0xb5/0x160 kernel/locking/rtmutex.c:1800
 __rt_mutex_lock kernel/locking/rtmutex.c:1815 [inline]
 rwbase_write_lock+0x14f/0x750 kernel/locking/rwbase_rt.c:244
 inode_lock_nested include/linux/fs.h:1072 [inline]
 __start_dirop fs/namei.c:2864 [inline]
 start_dirop fs/namei.c:2875 [inline]
 do_unlinkat+0x1b2/0x570 fs/namei.c:5420
 __do_sys_unlinkat fs/namei.c:5469 [inline]
 __se_sys_unlinkat fs/namei.c:5462 [inline]
 __x64_sys_unlinkat+0xd3/0xf0 fs/namei.c:5462
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f60b071f749
RSP: 002b:00007f60ae95d038 EFLAGS: 00000246 ORIG_RAX: 0000000000000107
RAX: ffffffffffffffda RBX: 00007f60b0976090 RCX: 00007f60b071f749
RDX: 0000000000000000 RSI: 0000200000000380 RDI: ffffffffffffff9c
RBP: 00007f60b07a3f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f60b0976128 R14: 00007f60b0976090 R15: 00007ffdcc5206c8
 </TASK>
INFO: task syz.6.316:10364 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.6.316       state:D stack:28952 pid:10364 tgid:10339 ppid:9809   task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x1480/0x50a0 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 rt_mutex_schedule+0x77/0xf0 kernel/sched/core.c:7241
 rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
 __rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
 __rt_mutex_slowlock_locked+0x1dfe/0x25e0 kernel/locking/rtmutex.c:1760
 rt_mutex_slowlock+0xb5/0x160 kernel/locking/rtmutex.c:1800
 __rt_mutex_lock kernel/locking/rtmutex.c:1815 [inline]
 rwbase_write_lock+0x14f/0x750 kernel/locking/rwbase_rt.c:244
 inode_lock_nested include/linux/fs.h:1072 [inline]
 __start_dirop fs/namei.c:2864 [inline]
 start_dirop fs/namei.c:2875 [inline]
 filename_create+0x1fb/0x360 fs/namei.c:4879
 do_symlinkat+0x120/0x3d0 fs/namei.c:5534
 __do_sys_symlinkat fs/namei.c:5562 [inline]
 __se_sys_symlinkat fs/namei.c:5559 [inline]
 __x64_sys_symlinkat+0x95/0xb0 fs/namei.c:5559
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f60b071f749
RSP: 002b:00007f60ae93c038 EFLAGS: 00000246 ORIG_RAX: 000000000000010a
RAX: ffffffffffffffda RBX: 00007f60b0976180 RCX: 00007f60b071f749
RDX: 0000200000000200 RSI: ffffffffffffff9c RDI: 0000200000000080
RBP: 00007f60b07a3f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f60b0976218 R14: 00007f60b0976180 R15: 00007ffdcc5206c8
 </TASK>
INFO: task syz.6.316:10367 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.6.316       state:D stack:28488 pid:10367 tgid:10339 ppid:9809   task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x1480/0x50a0 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 rt_mutex_schedule+0x77/0xf0 kernel/sched/core.c:7241
 rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
 __rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
 __rt_mutex_slowlock_locked+0x1dfe/0x25e0 kernel/locking/rtmutex.c:1760
 rt_mutex_slowlock+0xb5/0x160 kernel/locking/rtmutex.c:1800
 __rt_mutex_lock kernel/locking/rtmutex.c:1815 [inline]
 rwbase_write_lock+0x14f/0x750 kernel/locking/rwbase_rt.c:244
 inode_lock include/linux/fs.h:1027 [inline]
 open_last_lookups fs/namei.c:4537 [inline]
 path_openat+0xb53/0x3df0 fs/namei.c:4784
 do_filp_open+0x1fa/0x410 fs/namei.c:4814
 do_sys_openat2+0x121/0x200 fs/open.c:1430
 do_sys_open fs/open.c:1436 [inline]
 __do_sys_open fs/open.c:1444 [inline]
 __se_sys_open fs/open.c:1440 [inline]
 __x64_sys_open+0x11e/0x150 fs/open.c:1440
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f60b071f749
RSP: 002b:00007f60ae0f6038 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
RAX: ffffffffffffffda RBX: 00007f60b0976360 RCX: 00007f60b071f749
RDX: 78e22799f4a46ffe RSI: 00000000001607c0 RDI: 0000200000001040
RBP: 00007f60b07a3f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f60b09763f8 R14: 00007f60b0976360 R15: 00007ffdcc5206c8
 </TASK>
INFO: task syz.6.316:10368 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.6.316       state:D stack:28952 pid:10368 tgid:10339 ppid:9809   task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x1480/0x50a0 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 rt_mutex_schedule+0x77/0xf0 kernel/sched/core.c:7241
 rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
 __rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
 __rt_mutex_slowlock_locked+0x1dfe/0x25e0 kernel/locking/rtmutex.c:1760
 rt_mutex_slowlock+0xb5/0x160 kernel/locking/rtmutex.c:1800
 __rt_mutex_lock kernel/locking/rtmutex.c:1815 [inline]
 rwbase_write_lock+0x14f/0x750 kernel/locking/rwbase_rt.c:244
 inode_lock_nested include/linux/fs.h:1072 [inline]
 lock_rename fs/namei.c:3712 [inline]
 __start_renaming+0x148/0x410 fs/namei.c:3808
 do_renameat2+0x399/0x8f0 fs/namei.c:6022
 __do_sys_rename fs/namei.c:6090 [inline]
 __se_sys_rename fs/namei.c:6088 [inline]
 __x64_sys_rename+0x82/0x90 fs/namei.c:6088
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f60b071f749
RSP: 002b:00007f60adcd3038 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
RAX: ffffffffffffffda RBX: 00007f60b0976450 RCX: 00007f60b071f749
RDX: 0000000000000000 RSI: 0000200000000f40 RDI: 0000200000000600
RBP: 00007f60b07a3f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f60b09764e8 R14: 00007f60b0976450 R15: 00007ffdcc5206c8
 </TASK>

Showing all locks held in the system:
4 locks held by kworker/u8:1/13:
 #0: ffff88814047d138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88814047d138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc90000127b80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc90000127b80 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffff88804e3e40d0 (&type->s_umount_key#58){++++}-{4:4}, at: super_trylock_shared+0x20/0xf0 fs/super.c:563
 #3: ffff88805ee0f6e8 (&jfs_ip->commit_mutex){+.+.}-{4:4}, at: jfs_commit_inode+0x1ca/0x530 fs/jfs/inode.c:108
1 lock held by khungtaskd/39:
 #0: ffffffff8d5ae8c0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8d5ae8c0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8d5ae8c0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
3 locks held by kworker/u8:14/4006:
 #0: ffff88802fef6938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88802fef6938 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc9000ddf7b80 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc9000ddf7b80 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffffffff8e8a5778 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #2: ffffffff8e8a5778 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0x119/0x15a0 net/ipv6/addrconf.c:4194
1 lock held by dhcpcd/5463:
 #0: ffffffff8e8a5778 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8e8a5778 (rtnl_mutex){+.+.}-{4:4}, at: inet6_rtm_deladdr+0x20f/0x330 net/ipv6/addrconf.c:4799
2 locks held by getty/5555:
 #0: ffff8880350ab0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90003e762e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x44f/0x1460 drivers/tty/n_tty.c:2211
4 locks held by kworker/u8:16/5949:
 #0: ffff888019ad4938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff888019ad4938 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc9000571fb80 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc9000571fb80 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffffffff8e898640 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xf7/0x7b0 net/core/net_namespace.c:670
 #3: ffffffff8e8a5778 (rtnl_mutex){+.+.}-{4:4}, at: default_device_exit_batch+0xdc/0x9e0 net/core/dev.c:13022
4 locks held by kworker/u8:18/6564:
3 locks held by kworker/u8:22/10225:
 #0: ffff88813ff69938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88813ff69938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc90005a4fb80 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc90005a4fb80 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffffffff8e8a5778 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:303
4 locks held by syz.6.316/10340:
2 locks held by syz.6.316/10363:
 #0: ffff88804e3e4480 (sb_writers#14){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff88805ee0fab8 (&type->i_mutex_dir_key#10/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff88805ee0fab8 (&type->i_mutex_dir_key#10/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff88805ee0fab8 (&type->i_mutex_dir_key#10/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff88805ee0fab8 (&type->i_mutex_dir_key#10/1){+.+.}-{4:4}, at: do_unlinkat+0x1b2/0x570 fs/namei.c:5420
2 locks held by syz.6.316/10364:
 #0: ffff88804e3e4480 (sb_writers#14){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff88805ee0fab8 (&type->i_mutex_dir_key#10/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff88805ee0fab8 (&type->i_mutex_dir_key#10/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2864 [inline]
 #1: ffff88805ee0fab8 (&type->i_mutex_dir_key#10/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2875 [inline]
 #1: ffff88805ee0fab8 (&type->i_mutex_dir_key#10/1){+.+.}-{4:4}, at: filename_create+0x1fb/0x360 fs/namei.c:4879
2 locks held by syz.6.316/10367:
 #0: ffff88804e3e4480 (sb_writers#14){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff88805ee0fab8 (&type->i_mutex_dir_key#10){++++}-{4:4}, at: inode_lock include/linux/fs.h:1027 [inline]
 #1: ffff88805ee0fab8 (&type->i_mutex_dir_key#10){++++}-{4:4}, at: open_last_lookups fs/namei.c:4537 [inline]
 #1: ffff88805ee0fab8 (&type->i_mutex_dir_key#10){++++}-{4:4}, at: path_openat+0xb53/0x3df0 fs/namei.c:4784
2 locks held by syz.6.316/10368:
 #0: ffff88804e3e4480 (sb_writers#14){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff88805ee0fab8 (&type->i_mutex_dir_key#10/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1072 [inline]
 #1: ffff88805ee0fab8 (&type->i_mutex_dir_key#10/1){+.+.}-{4:4}, at: lock_rename fs/namei.c:3712 [inline]
 #1: ffff88805ee0fab8 (&type->i_mutex_dir_key#10/1){+.+.}-{4:4}, at: __start_renaming+0x148/0x410 fs/namei.c:3808
1 lock held by syz-executor/11027:
 #0: ffffffff8e8a5778 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8e8a5778 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:978
1 lock held by syz-executor/11045:
 #0: ffffffff8e8a5778 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #0: ffffffff8e8a5778 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #0: ffffffff8e8a5778 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8ec/0x1c90 net/core/rtnetlink.c:4071
7 locks held by syz-executor/11081:
 #0: ffff888031bd0480 (sb_writers#7){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:2681 [inline]
 #0: ffff888031bd0480 (sb_writers#7){.+.+}-{0:0}, at: vfs_write+0x217/0xb40 fs/read_write.c:682
 #1: ffff88805e170078 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x1df/0x540 fs/kernfs/file.c:343
 #2: ffff888027266008 (kn->active#53){.+.+}-{0:0}, at: kernfs_get_active_of fs/kernfs/file.c:80 [inline]
 #2: ffff888027266008 (kn->active#53){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x232/0x540 fs/kernfs/file.c:344
 #3: ffffffff8e12c638 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: new_device_store+0x12c/0x6f0 drivers/net/netdevsim/bus.c:184
 #4: ffff88802328f0d8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:895 [inline]
 #4: ffff88802328f0d8 (&dev->mutex){....}-{4:4}, at: __device_attach+0x88/0x430 drivers/base/dd.c:1006
 #5: ffff888023289300 (&devlink->lock_key#65){+.+.}-{4:4}, at: nsim_drv_probe+0xc3/0xbd0 drivers/net/netdevsim/dev.c:1637
 #6: ffffffff8e8a5778 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #6: ffffffff8e8a5778 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_dev_lock+0x257/0x2f0 net/core/dev.c:2143
1 lock held by syz-executor/11112:
 #0: ffffffff8e8a5778 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #0: ffffffff8e8a5778 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #0: ffffffff8e8a5778 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8ec/0x1c90 net/core/rtnetlink.c:4071
2 locks held by syz-executor/11273:
 #0: ffffffff8e898640 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x3cc/0x570 net/core/net_namespace.c:577
 #1: ffffffff8e8a5778 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock_killable include/linux/rtnetlink.h:145 [inline]
 #1: ffffffff8e8a5778 (rtnl_mutex){+.+.}-{4:4}, at: register_netdev+0x18/0x60 net/core/dev.c:11504
1 lock held by syz-executor/11386:
 #0: ffffffff8e8a5778 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8e8a5778 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:978

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 39 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x135/0x170 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xf95/0xfe0 kernel/hung_task.c:515
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x599/0xb30 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 10340 Comm: syz.6.316 Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
RIP: 0010:get_current arch/x86/include/asm/current.h:25 [inline]
RIP: 0010:__sanitizer_cov_trace_pc+0x8/0x80 kernel/kcov.c:216
Code: 8b 3d 34 52 41 0b 48 89 de 5b e9 c3 75 57 00 cc cc cc 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 48 8b 04 24 <65> 48 8b 0c 25 08 f0 b1 91 65 8b 35 38 8c f1 0f 81 e6 00 00 ff 00
RSP: 0018:ffffc90006007240 EFLAGS: 00000246
RAX: ffffffff83401c6f RBX: 0000000000000004 RCX: ffff88801cf35ac0
RDX: 0000000000000002 RSI: 0000000000000000 RDI: 00000000000000ff
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000000 R12: dffffc0000000000
R13: ffff888039d79000 R14: 0000000000000000 R15: 1ffff110073af202
FS:  00007f60ae97e6c0(0000) GS:ffff888126e01000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f3c5fccec95 CR3: 0000000039d18000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 dtSplitRoot+0x77f/0x16c0 fs/jfs/jfs_dtree.c:1979
 dtSplitUp fs/jfs/jfs_dtree.c:993 [inline]
 dtInsert+0xef8/0x5f40 fs/jfs/jfs_dtree.c:871
 jfs_create+0x6c8/0xa80 fs/jfs/namei.c:137
 lookup_open fs/namei.c:4440 [inline]
 open_last_lookups fs/namei.c:4540 [inline]
 path_openat+0x18d1/0x3df0 fs/namei.c:4784
 do_filp_open+0x1fa/0x410 fs/namei.c:4814
 do_sys_openat2+0x121/0x200 fs/open.c:1430
 do_sys_open fs/open.c:1436 [inline]
 __do_sys_creat fs/open.c:1514 [inline]
 __se_sys_creat fs/open.c:1508 [inline]
 __x64_sys_creat+0x8f/0xc0 fs/open.c:1508
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f60b071f749
Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f60ae97e038 EFLAGS: 00000246 ORIG_RAX: 0000000000000055
RAX: ffffffffffffffda RBX: 00007f60b0975fa0 RCX: 00007f60b071f749
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000200000000740
RBP: 00007f60b07a3f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f60b0976038 R14: 00007f60b0975fa0 R15: 00007ffdcc5206c8
 </TASK>
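Analysis note: all four hung threads of syz.6.316 above (unlinkat, symlinkat, open, rename) are blocked on the same directory inode lock, ffff88805ee0fab8 (&type->i_mutex_dir_key#10), while the lock dump and the CPU 1 backtrace show pid 10340 of the same process busy inside JFS directory code (jfs_create() -> dtInsert() -> dtSplitRoot()) reached via creat() on what appears to be that directory, with kworker/u8:1 holding &jfs_ip->commit_mutex for a JFS inode commit. The C sketch below is only an illustration of that contention pattern, not the syzbot reproducer; the mount path and thread count are placeholders.

/*
 * Illustrative sketch only -- NOT the syzbot reproducer. It mimics the
 * contention pattern in the traces above: one thread creating entries in a
 * directory on a JFS mount (creat() -> jfs_create() -> dtInsert()) while
 * sibling threads run unlinkat()/symlinkat()/rename() against the same
 * directory; all of them serialize on that directory's inode rwsem.
 * DIR_PATH is a placeholder for a directory on a JFS filesystem.
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define DIR_PATH "/mnt/jfs/dir"          /* placeholder mount point */

static void *creator(void *arg)
{
        char path[256];
        for (int i = 0; ; i++) {
                snprintf(path, sizeof(path), DIR_PATH "/f%d", i);
                int fd = creat(path, 0644);   /* parent inode lock held across jfs_create() */
                if (fd >= 0)
                        close(fd);
        }
        return NULL;
}

static void *dir_ops(void *arg)
{
        char path[256];
        for (int i = 0; ; i++) {
                snprintf(path, sizeof(path), DIR_PATH "/f%d", i);
                unlinkat(AT_FDCWD, path, 0);                    /* do_unlinkat() path */
                symlinkat("target", AT_FDCWD, DIR_PATH "/sym"); /* filename_create() path */
                rename(DIR_PATH "/sym", DIR_PATH "/sym2");      /* lock_rename() path */
        }
        return NULL;
}

int main(void)
{
        pthread_t t[4];

        pthread_create(&t[0], NULL, creator, NULL);
        for (int i = 1; i < 4; i++)
                pthread_create(&t[i], NULL, dir_ops, NULL);
        for (int i = 0; i < 4; i++)
                pthread_join(t[i], NULL);
        return 0;
}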

Crashes (129):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets (help?) Manager Title
2025/12/17 12:31 upstream ea1013c15392 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/12/10 01:27 upstream cb015814f8b6 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/12/05 15:47 upstream 2061f18ad76e d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/11/12 12:14 upstream 24172e0d7990 4e1406b4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/11/12 05:45 upstream 24172e0d7990 4e1406b4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/10/25 08:19 upstream 2e590d67c2d8 c0460fcd .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in do_unlinkat
2025/10/13 06:13 upstream 3a8660878839 ff1712fe .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/10/09 09:35 upstream cd5a0afbdf80 7e2882b3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/10/03 18:17 upstream e406d57be7bd 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/09/29 15:55 upstream e5f0a698b34e 86341da6 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/09/03 01:07 upstream e6b9dce0aeeb 96a211bc .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/09/02 15:43 upstream b320789d6883 96a211bc .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/08/18 22:51 upstream c17b750b3ad9 1804e95e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/08/12 04:22 upstream 8f5ae30d69d7 c06e8995 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/08/05 03:42 upstream d632ab86aff2 f5bcc8dc .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/07/24 13:44 upstream 25fae0b93d1d 65d60d73 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/07/10 02:02 upstream 8c2e52ebbe88 f4e5e155 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/07/09 11:54 upstream 733923397fd9 f4e5e155 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/06/28 21:58 upstream aaf724ed6926 fc9d8ee5 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/06/19 06:54 upstream fb4d33ab452e ed3e87f7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/25 08:26 upstream d0c22de9995b ed351ea7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/16 22:59 upstream 3c21441eeffc f41472b0 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/09 18:03 upstream 9c69f8884904 77908e5f .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/08 22:39 upstream 2c89c1b655c0 dbf35fa1 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/08 01:54 upstream 707df3375124 dbf35fa1 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/04 15:36 upstream e8ab83e34bdc b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/04 14:08 upstream e8ab83e34bdc b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/03 10:34 upstream 95d3481af6dc b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/03 06:17 upstream 2bfcee565c3a b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/01 19:13 upstream 4f79eaa2ceac 51b137cd .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/28 04:42 upstream b4432656b36e c6b4fb39 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/27 19:45 upstream 5bc1018675ec c6b4fb39 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/27 11:50 upstream 5bc1018675ec c6b4fb39 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/22 10:21 upstream a33b5a08cbbd 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/21 22:43 upstream 9d7a0577c9db 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/21 04:56 upstream 6fea5fabd332 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/19 14:43 upstream 3088d26962e8 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/19 07:19 upstream 3088d26962e8 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/03/21 11:59 upstream b3ee1e460951 62330552 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/03/14 13:17 upstream 4003c9e78778 e2826670 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/03/09 11:37 upstream b7c90e3e717a 163f510d .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/03/06 00:38 upstream bb2281fb05e5 831e3629 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/03/04 18:21 upstream 99fa936e8e4f c3901742 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/02/28 08:37 upstream 1e15510b71c9 6a8fcbc4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/02/21 00:55 upstream e9a8cac0bf89 0808a665 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/02/19 05:10 upstream 6537cfb395f3 9a14138f .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/11/30 11:50 upstream 509f806f7f70 68914665 .config strace log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/11/26 01:25 upstream 9f16d5e6f220 11dbc254 .config strace log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/06/23 12:19 upstream 5f583a3162ff edc5149a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in do_unlinkat
2024/05/22 04:48 upstream b6394d6f7159 1014eca7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/05/16 14:46 upstream 3c999d1ae3c7 ef5d53ed .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in do_unlinkat
2024/05/10 10:19 upstream 448b3fe5a0ea de979bc2 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/05/03 08:20 upstream 49a73b1652c5 ddfc15a1 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/04/28 23:56 upstream e67572cd2204 07b455f9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/11/23 04:33 linux-next d724c6f85e80 4fb8ef37 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in do_unlinkat
2025/11/22 02:58 linux-next d724c6f85e80 c31c1b0b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in do_unlinkat
2025/11/22 00:30 linux-next d724c6f85e80 c31c1b0b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in do_unlinkat
2025/11/10 23:41 linux-next ab40c92c74c6 4e1406b4 .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root INFO: task hung in do_unlinkat
2025/11/10 21:36 linux-next ab40c92c74c6 4e1406b4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in do_unlinkat
2025/04/09 11:48 linux-next 46086739de22 988b336c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in do_unlinkat
* Struck through repros no longer work on HEAD.