syzbot


INFO: task hung in do_unlinkat (5)

Status: upstream: reported C repro on 2024/06/02 14:09
Subsystems: bcachefs jfs
Reported-by: syzbot+08b113332e19a9378dd5@syzkaller.appspotmail.com
First crash: 450d, last: 8d15h
Cause bisection: failed (error log, bisect log)
Discussions (2)
Title | Replies (including bot) | Last reply
[syzbot] Monthly kernfs report (Jan 2025) | 0 (1) | 2025/01/16 10:12
[syzbot] [kernfs?] [bcachefs?] [exfat?] INFO: task hung in do_unlinkat (5) | 0 (2) | 2024/11/26 01:26
Similar bugs (9)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-4.19 | INFO: task hung in do_unlinkat (2) | | | | 1 | 943d | 943d | 0/1 | auto-obsoleted due to no activity on 2023/03/27 07:50
android-49 | INFO: task hung in do_unlinkat | | | | 5 | 2495d | 2607d | 0/3 | auto-closed as invalid on 2019/02/24 11:49
upstream | INFO: task hung in do_unlinkat (2) fs | | | | 4 | 1601d | 1601d | 0/29 | auto-closed as invalid on 2021/05/17 08:41
linux-6.1 | INFO: task hung in do_unlinkat origin:upstream missing-backport | C | error | | 12 | 53d | 179d | 0/3 | upstream: reported C repro on 2024/12/30 08:52
upstream | INFO: task hung in do_unlinkat exfat | | | | 34 | 2355d | 2592d | 0/29 | closed as dup on 2018/10/27 13:26
upstream | INFO: task hung in do_unlinkat (3) fs | | | | 2 | 1319d | 1363d | 0/29 | closed as invalid on 2022/02/07 19:19
linux-4.19 | INFO: task hung in do_unlinkat | | | | 1 | 1073d | 1073d | 0/1 | auto-obsoleted due to no activity on 2022/11/17 10:56
upstream | INFO: task hung in do_unlinkat (4) exfat | | | | 4 | 912d | 1018d | 0/29 | auto-obsoleted due to no activity on 2023/04/08 02:53
linux-5.15 | INFO: task hung in do_unlinkat | | | | 3 | 263d | 411d | 0/3 | auto-obsoleted due to no activity on 2025/01/15 03:04
Last patch testing requests (6)
Created | Duration | User | Patch | Repo | Result
2025/06/15 06:54 | 23m | | retest repro | upstream | OK log
2025/06/15 06:54 | 23m | | retest repro | upstream | OK log
2025/04/05 13:56 | 17m | | retest repro | upstream | report log
2025/04/05 13:56 | 16m | | retest repro | upstream | report log
2024/12/21 18:21 | 17m | | retest repro | upstream | report log
2024/12/21 18:21 | 19m | | retest repro | upstream | report log

Sample crash report:
INFO: task syz.9.419:9932 blocked for more than 143 seconds.
      Not tainted 6.16.0-rc2-syzkaller-00082-gfb4d33ab452e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.9.419       state:D stack:28712 pid:9932  tgid:9902  ppid:8558   task_flags:0x400040 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5396 [inline]
 __schedule+0x16a2/0x4cb0 kernel/sched/core.c:6785
 __schedule_loop kernel/sched/core.c:6863 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6878
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6935
 rwsem_down_write_slowpath+0xbec/0x1030 kernel/locking/rwsem.c:1176
 __down_write_common kernel/locking/rwsem.c:1304 [inline]
 __down_write kernel/locking/rwsem.c:1313 [inline]
 down_write_nested+0x1b5/0x200 kernel/locking/rwsem.c:1694
 inode_lock_nested include/linux/fs.h:914 [inline]
 do_unlinkat+0x1bf/0x560 fs/namei.c:4646
 __do_sys_unlink fs/namei.c:4705 [inline]
 __se_sys_unlink fs/namei.c:4703 [inline]
 __x64_sys_unlink+0x47/0x50 fs/namei.c:4703
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fa4e6f8e929
RSP: 002b:00007fa4e7d2e038 EFLAGS: 00000246 ORIG_RAX: 0000000000000057
RAX: ffffffffffffffda RBX: 00007fa4e71b6160 RCX: 00007fa4e6f8e929
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000200000000680
RBP: 00007fa4e7010b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007fa4e71b6160 R15: 00007ffff702bb78
 </TASK>
INFO: task syz.9.419:9934 blocked for more than 144 seconds.
      Not tainted 6.16.0-rc2-syzkaller-00082-gfb4d33ab452e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.9.419       state:D stack:28776 pid:9934  tgid:9902  ppid:8558   task_flags:0x400040 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5396 [inline]
 __schedule+0x16a2/0x4cb0 kernel/sched/core.c:6785
 __schedule_loop kernel/sched/core.c:6863 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6878
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6935
 rwsem_down_write_slowpath+0xbec/0x1030 kernel/locking/rwsem.c:1176
 __down_write_common kernel/locking/rwsem.c:1304 [inline]
 __down_write kernel/locking/rwsem.c:1313 [inline]
 down_write+0x1ab/0x1f0 kernel/locking/rwsem.c:1578
 inode_lock include/linux/fs.h:869 [inline]
 open_last_lookups fs/namei.c:3813 [inline]
 path_openat+0x8da/0x3830 fs/namei.c:4052
 do_filp_open+0x1fa/0x410 fs/namei.c:4082
 do_sys_openat2+0x121/0x1c0 fs/open.c:1437
 do_sys_open fs/open.c:1452 [inline]
 __do_sys_openat fs/open.c:1468 [inline]
 __se_sys_openat fs/open.c:1463 [inline]
 __x64_sys_openat+0x138/0x170 fs/open.c:1463
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fa4e6f8e929
RSP: 002b:00007fa4e49f4038 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007fa4e71b6240 RCX: 00007fa4e6f8e929
RDX: 000000000000275a RSI: 0000200000000040 RDI: ffffffffffffff9c
RBP: 00007fa4e7010b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007fa4e71b6240 R15: 00007ffff702bb78
 </TASK>
INFO: task syz.9.419:9938 blocked for more than 145 seconds.
      Not tainted 6.16.0-rc2-syzkaller-00082-gfb4d33ab452e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.9.419       state:D stack:28776 pid:9938  tgid:9902  ppid:8558   task_flags:0x400040 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5396 [inline]
 __schedule+0x16a2/0x4cb0 kernel/sched/core.c:6785
 __schedule_loop kernel/sched/core.c:6863 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6878
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6935
 rwsem_down_write_slowpath+0xbec/0x1030 kernel/locking/rwsem.c:1176
 __down_write_common kernel/locking/rwsem.c:1304 [inline]
 __down_write kernel/locking/rwsem.c:1313 [inline]
 down_write+0x1ab/0x1f0 kernel/locking/rwsem.c:1578
 inode_lock include/linux/fs.h:869 [inline]
 open_last_lookups fs/namei.c:3813 [inline]
 path_openat+0x8da/0x3830 fs/namei.c:4052
 do_filp_open+0x1fa/0x410 fs/namei.c:4082
 do_sys_openat2+0x121/0x1c0 fs/open.c:1437
 do_sys_open fs/open.c:1452 [inline]
 __do_sys_open fs/open.c:1460 [inline]
 __se_sys_open fs/open.c:1456 [inline]
 __x64_sys_open+0x11e/0x150 fs/open.c:1456
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fa4e6f8e929
RSP: 002b:00007fa4e45d1038 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
RAX: ffffffffffffffda RBX: 00007fa4e71b6320 RCX: 00007fa4e6f8e929
RDX: 0000000000000000 RSI: 0000000000064842 RDI: 0000200000000140
RBP: 00007fa4e7010b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007fa4e71b6320 R15: 00007ffff702bb78
 </TASK>
INFO: task syz.9.419:9940 blocked for more than 146 seconds.
      Not tainted 6.16.0-rc2-syzkaller-00082-gfb4d33ab452e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.9.419       state:D stack:28784 pid:9940  tgid:9902  ppid:8558   task_flags:0x400040 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5396 [inline]
 __schedule+0x16a2/0x4cb0 kernel/sched/core.c:6785
 __schedule_loop kernel/sched/core.c:6863 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6878
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6935
 rwsem_down_write_slowpath+0xbec/0x1030 kernel/locking/rwsem.c:1176
 __down_write_common kernel/locking/rwsem.c:1304 [inline]
 __down_write kernel/locking/rwsem.c:1313 [inline]
 down_write_nested+0x1b5/0x200 kernel/locking/rwsem.c:1694
 inode_lock_nested include/linux/fs.h:914 [inline]
 lock_rename fs/namei.c:3281 [inline]
 do_renameat2+0x3dd/0xc50 fs/namei.c:5232
 __do_sys_rename fs/namei.c:5333 [inline]
 __se_sys_rename fs/namei.c:5331 [inline]
 __x64_sys_rename+0x82/0x90 fs/namei.c:5331
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fa4e6f8e929
RSP: 002b:00007fa4e41ae038 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
RAX: ffffffffffffffda RBX: 00007fa4e71b6400 RCX: 00007fa4e6f8e929
RDX: 0000000000000000 RSI: 0000200000006540 RDI: 00002000000002c0
RBP: 00007fa4e7010b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007fa4e71b6400 R15: 00007ffff702bb78
 </TASK>
INFO: task syz.9.419:9942 blocked for more than 146 seconds.
      Not tainted 6.16.0-rc2-syzkaller-00082-gfb4d33ab452e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.9.419       state:D stack:25160 pid:9942  tgid:9902  ppid:8558   task_flags:0x400040 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5396 [inline]
 __schedule+0x16a2/0x4cb0 kernel/sched/core.c:6785
 __schedule_loop kernel/sched/core.c:6863 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6878
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6935
 rwsem_down_write_slowpath+0xbec/0x1030 kernel/locking/rwsem.c:1176
 __down_write_common kernel/locking/rwsem.c:1304 [inline]
 __down_write kernel/locking/rwsem.c:1313 [inline]
 down_write+0x1ab/0x1f0 kernel/locking/rwsem.c:1578
 inode_lock include/linux/fs.h:869 [inline]
 open_last_lookups fs/namei.c:3813 [inline]
 path_openat+0x8da/0x3830 fs/namei.c:4052
 do_filp_open+0x1fa/0x410 fs/namei.c:4082
 do_sys_openat2+0x121/0x1c0 fs/open.c:1437
 do_sys_open fs/open.c:1452 [inline]
 __do_sys_openat fs/open.c:1468 [inline]
 __se_sys_openat fs/open.c:1463 [inline]
 __x64_sys_openat+0x138/0x170 fs/open.c:1463
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fa4e6f8e929
RSP: 002b:00007fa4e3d8b038 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007fa4e71b64e0 RCX: 00007fa4e6f8e929
RDX: 0000000000000441 RSI: 0000200000000080 RDI: ffffffffffffff9c
RBP: 00007fa4e7010b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000104 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007fa4e71b64e0 R15: 00007ffff702bb78
 </TASK>
INFO: task syz.9.419:9945 blocked for more than 147 seconds.
      Not tainted 6.16.0-rc2-syzkaller-00082-gfb4d33ab452e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.9.419       state:D stack:28904 pid:9945  tgid:9902  ppid:8558   task_flags:0x400040 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5396 [inline]
 __schedule+0x16a2/0x4cb0 kernel/sched/core.c:6785
 __schedule_loop kernel/sched/core.c:6863 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6878
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6935
 rwsem_down_write_slowpath+0xbec/0x1030 kernel/locking/rwsem.c:1176
 __down_write_common kernel/locking/rwsem.c:1304 [inline]
 __down_write kernel/locking/rwsem.c:1313 [inline]
 down_write_nested+0x1b5/0x200 kernel/locking/rwsem.c:1694
 inode_lock_nested include/linux/fs.h:914 [inline]
 filename_create+0x1f9/0x470 fs/namei.c:4148
 do_mkdirat+0xa0/0x590 fs/namei.c:4400
 __do_sys_mkdirat fs/namei.c:4425 [inline]
 __se_sys_mkdirat fs/namei.c:4423 [inline]
 __x64_sys_mkdirat+0x87/0xa0 fs/namei.c:4423
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fa4e6f8d197
RSP: 002b:00007fa4e3967e68 EFLAGS: 00000246 ORIG_RAX: 0000000000000102
RAX: ffffffffffffffda RBX: 00007fa4e3967ef0 RCX: 00007fa4e6f8d197
RDX: 00000000000001ff RSI: 0000200000000240 RDI: 00000000ffffff9c
RBP: 0000200000000200 R08: 00002000000000c0 R09: 0000000000000000
R10: 0000200000000200 R11: 0000000000000246 R12: 0000200000000240
R13: 00007fa4e3967eb0 R14: 0000000000000000 R15: 0000000000000000
 </TASK>
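For readers decoding the register dumps above: ORIG_RAX in each dump is the x86-64 syscall number of the blocked call, and it matches the syscall named at the top of the corresponding stack trace. A small Python sketch of that mapping (numbers taken from the kernel's arch/x86/entry/syscalls/syscall_64.tbl):

```python
# Map the ORIG_RAX values from the register dumps above to their
# x86-64 syscall names (arch/x86/entry/syscalls/syscall_64.tbl).
ORIG_RAX_TO_SYSCALL = {
    0x57: "unlink",    # do_unlinkat trace   (87)
    0x101: "openat",   # path_openat traces  (257)
    0x02: "open",      # legacy open() trace (2)
    0x52: "rename",    # do_renameat2 trace  (82)
    0x102: "mkdirat",  # do_mkdirat trace    (258)
}

for rax, name in sorted(ORIG_RAX_TO_SYSCALL.items()):
    print(f"ORIG_RAX: {rax:016x} -> sys_{name}")
```

All five syscalls end up in the same place: a write acquisition of the parent directory's inode rwsem in fs/namei.c.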

Showing all locks held in the system:
1 lock held by khungtaskd/31:
 #0: ffffffff8e13eda0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e13eda0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8e13eda0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6770
2 locks held by kworker/u8:4/69:
3 locks held by kworker/u8:6/1107:
 #0: ffff8880b873b798 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:606
 #1: ffff8880b8723f08 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x39e/0x6d0 kernel/sched/psi.c:987
 #2: ffff8880b8725958 (&base->lock){-.-.}-{2:2}, at: lock_timer_base kernel/time/timer.c:1004 [inline]
 #2: ffff8880b8725958 (&base->lock){-.-.}-{2:2}, at: __mod_timer+0x1ae/0xf30 kernel/time/timer.c:1085
4 locks held by kworker/u8:8/1347:
 #0: ffff88814269a148 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88814269a148 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc9000418fbc0 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000418fbc0 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
 #2: ffff888067ed00e0 (&type->s_umount_key#99){++++}-{4:4}, at: super_trylock_shared+0x20/0xf0 fs/super.c:563
 #3: ffff88806e272fe0 (&jfs_ip->commit_mutex){+.+.}-{4:4}, at: jfs_commit_inode+0x1ca/0x530 fs/jfs/inode.c:102
3 locks held by kworker/u8:11/3572:
 #0: ffff88801a489148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801a489148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc9000c697bc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000c697bc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
 #2: ffffffff8f4fe008 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:303
2 locks held by getty/5592:
 #0: ffff888030e410a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x43e/0x1400 drivers/tty/n_tty.c:2222
5 locks held by syz.9.419/9904:
2 locks held by syz.9.419/9932:
 #0: ffff888067ed0428 (sb_writers#32){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:557
 #1: ffff88806e273390 (&type->i_mutex_dir_key#19/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:914 [inline]
 #1: ffff88806e273390 (&type->i_mutex_dir_key#19/1){+.+.}-{4:4}, at: do_unlinkat+0x1bf/0x560 fs/namei.c:4646
2 locks held by syz.9.419/9934:
 #0: ffff888067ed0428 (sb_writers#32){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:557
 #1: ffff88806e273390 (&type->i_mutex_dir_key#19){++++}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
 #1: ffff88806e273390 (&type->i_mutex_dir_key#19){++++}-{4:4}, at: open_last_lookups fs/namei.c:3813 [inline]
 #1: ffff88806e273390 (&type->i_mutex_dir_key#19){++++}-{4:4}, at: path_openat+0x8da/0x3830 fs/namei.c:4052
2 locks held by syz.9.419/9938:
 #0: ffff888067ed0428 (sb_writers#32){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:557
 #1: ffff88806e273390 (&type->i_mutex_dir_key#19){++++}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
 #1: ffff88806e273390 (&type->i_mutex_dir_key#19){++++}-{4:4}, at: open_last_lookups fs/namei.c:3813 [inline]
 #1: ffff88806e273390 (&type->i_mutex_dir_key#19){++++}-{4:4}, at: path_openat+0x8da/0x3830 fs/namei.c:4052
2 locks held by syz.9.419/9940:
 #0: ffff888067ed0428 (sb_writers#32){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:557
 #1: ffff88806e273390 (&type->i_mutex_dir_key#19/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:914 [inline]
 #1: ffff88806e273390 (&type->i_mutex_dir_key#19/1){+.+.}-{4:4}, at: lock_rename fs/namei.c:3281 [inline]
 #1: ffff88806e273390 (&type->i_mutex_dir_key#19/1){+.+.}-{4:4}, at: do_renameat2+0x3dd/0xc50 fs/namei.c:5232
2 locks held by syz.9.419/9942:
 #0: ffff888067ed0428 (sb_writers#32){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:557
 #1: ffff88806e273390 (&type->i_mutex_dir_key#19){++++}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
 #1: ffff88806e273390 (&type->i_mutex_dir_key#19){++++}-{4:4}, at: open_last_lookups fs/namei.c:3813 [inline]
 #1: ffff88806e273390 (&type->i_mutex_dir_key#19){++++}-{4:4}, at: path_openat+0x8da/0x3830 fs/namei.c:4052
2 locks held by syz.9.419/9945:
 #0: ffff888067ed0428 (sb_writers#32){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:557
 #1: ffff88806e273390 (&type->i_mutex_dir_key#19/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:914 [inline]
 #1: ffff88806e273390 (&type->i_mutex_dir_key#19/1){+.+.}-{4:4}, at: filename_create+0x1f9/0x470 fs/namei.c:4148
2 locks held by syz.4.475/10600:
 #0: ffff888067ed00e0 (&type->s_umount_key#99){++++}-{4:4}, at: __super_lock fs/super.c:59 [inline]
 #0: ffff888067ed00e0 (&type->s_umount_key#99){++++}-{4:4}, at: super_lock+0x2a9/0x3b0 fs/super.c:121
 #1: ffff888024e987d0 (&bdi->wb_switch_rwsem){+.+.}-{4:4}, at: bdi_down_write_wb_switch_rwsem fs/fs-writeback.c:387 [inline]
 #1: ffff888024e987d0 (&bdi->wb_switch_rwsem){+.+.}-{4:4}, at: sync_inodes_sb+0x19f/0xa10 fs/fs-writeback.c:2831
1 lock held by syz.9.541/11713:
 #0: ffff888067ed00e0 (&type->s_umount_key#99){++++}-{4:4}, at: __super_lock fs/super.c:57 [inline]
 #0: ffff888067ed00e0 (&type->s_umount_key#99){++++}-{4:4}, at: super_lock+0x25c/0x3b0 fs/super.c:121
2 locks held by syz.6.562/11952:
 #0: ffffffff8e87e528 (bio_slab_lock){+.+.}-{4:4}, at: bio_put_slab block/bio.c:140 [inline]
 #0: ffffffff8e87e528 (bio_slab_lock){+.+.}-{4:4}, at: bioset_exit+0x44a/0x690 block/bio.c:1758
 #1: ffffffff8e144780 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3782
5 locks held by syz-executor/11957:
 #0: ffff88807b23cd80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close net/bluetooth/hci_core.c:481 [inline]
 #0: ffff88807b23cd80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_unregister_dev+0x1fe/0x500 net/bluetooth/hci_core.c:2691
 #1: ffff88807b23c078 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x66a/0x1330 net/bluetooth/hci_sync.c:5238
 #2: ffffffff8f666068 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:2066 [inline]
 #2: ffffffff8f666068 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_hash_flush+0xa1/0x230 net/bluetooth/hci_conn.c:2560
 #3: ffff888025d54338 (&conn->lock#2){+.+.}-{4:4}, at: l2cap_conn_del+0x70/0x680 net/bluetooth/l2cap_core.c:1762
 #4: ffffffff8e1448b8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:336 [inline]
 #4: ffffffff8e1448b8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x3b9/0x730 kernel/rcu/tree_exp.h:998
3 locks held by syz.8.565/11976:
 #0: ffff88805581cd80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close net/bluetooth/hci_core.c:481 [inline]
 #0: ffff88805581cd80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_unregister_dev+0x1fe/0x500 net/bluetooth/hci_core.c:2691
 #1: ffff88805581c078 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x66a/0x1330 net/bluetooth/hci_sync.c:5238
 #2: ffffffff8f666068 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:2066 [inline]
 #2: ffffffff8f666068 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_hash_flush+0xa1/0x230 net/bluetooth/hci_conn.c:2560
3 locks held by syz.2.567/11996:
 #0: ffff888056ef4d80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close net/bluetooth/hci_core.c:481 [inline]
 #0: ffff888056ef4d80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_unregister_dev+0x1fe/0x500 net/bluetooth/hci_core.c:2691
 #1: ffff888056ef4078 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x66a/0x1330 net/bluetooth/hci_sync.c:5238
 #2: ffffffff8f666068 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:2066 [inline]
 #2: ffffffff8f666068 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_hash_flush+0xa1/0x230 net/bluetooth/hci_conn.c:2560
2 locks held by syz.5.566/12006:
 #0: ffffffff8f4fe008 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:633 [inline]
 #0: ffffffff8f4fe008 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x3e/0x1c0 drivers/net/tun.c:3396
 #1: ffffffff8e1448b8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:336 [inline]
 #1: ffffffff8e1448b8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x3b9/0x730 kernel/rcu/tree_exp.h:998
2 locks held by rm/12061:
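The lock listing shows all six blocked syz.9.419 threads waiting on the same inode rwsem (ffff88806e273390, the parent directory's i_mutex_dir_key), while kworker/u8:8 sits in jfs_commit_inode holding jfs_ip->commit_mutex. As a toy model only (threading.Lock standing in for the write side of the rwsem; operation names are illustrative, not the kernel's API): unlink, O_CREAT open, rename, and mkdir all take the parent directory's i_rwsem for writing, so a single stuck holder backs up every other operation on that directory:

```python
import threading

# Toy stand-in for the parent directory's i_rwsem (write side only).
parent_i_rwsem = threading.Lock()
completed = []

def dir_op(name):
    with parent_i_rwsem:        # models down_write(&dir->i_rwsem)
        completed.append(name)  # the actual VFS work would happen here

ops = ["unlink", "openat", "open", "rename", "mkdirat"]
threads = [threading.Thread(target=dir_op, args=(op,)) for op in ops]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(completed))
```

In the crash above the first writer never returns, so the remaining five threads stay queued in rwsem_down_write_slowpath until the hung-task watchdog fires.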

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.16.0-rc2-syzkaller-00082-gfb4d33ab452e #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:307 [inline]
 watchdog+0xfee/0x1030 kernel/hung_task.c:470
 kthread+0x70e/0x8a0 kernel/kthread.c:464
 ret_from_fork+0x3f9/0x770 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 5179 Comm: syslogd Not tainted 6.16.0-rc2-syzkaller-00082-gfb4d33ab452e #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
RIP: 0010:filter_irq_stacks+0x68/0xa0 kernel/stacktrace.c:397
Code: 04 de 48 3d 30 02 00 81 0f 93 c1 48 3d 70 16 00 81 0f 92 c2 84 d1 75 27 48 3d 80 fc 61 8b 0f 92 c1 48 3d 9b fc 61 8b 0f 93 c0 <08> c8 74 11 48 ff c3 49 83 c7 08 49 39 dc 75 ae 44 89 e3 eb 06 ff
RSP: 0018:ffffc90003077628 EFLAGS: 00000287
RAX: ffffffff8216e800 RBX: 0000000000000003 RCX: 0000000000000001
RDX: 0000000000000000 RSI: 000000000000000b RDI: ffffc900030776d0
RBP: ffffc900030778f8 R08: 0000000000000000 R09: ffffffff81cf4476
R10: ffffc900030775d8 R11: ffffffff81acf5a0 R12: 000000000000000b
R13: dffffc0000000000 R14: ffffc900030776d0 R15: ffffc900030776e8
FS:  00007f24ceeb6c80(0000) GS:ffff888125c85000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f2004713a90 CR3: 0000000031388000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 stack_depot_save_flags+0x40/0x900 lib/stackdepot.c:610
 kasan_save_stack mm/kasan/common.c:48 [inline]
 kasan_save_track+0x4f/0x80 mm/kasan/common.c:68
 kasan_save_free_info+0x46/0x50 mm/kasan/generic.c:576
 poison_slab_object mm/kasan/common.c:247 [inline]
 __kasan_slab_free+0x62/0x70 mm/kasan/common.c:264
 kasan_slab_free include/linux/kasan.h:233 [inline]
 slab_free_hook mm/slub.c:2381 [inline]
 slab_free mm/slub.c:4643 [inline]
 kmem_cache_free+0x18f/0x400 mm/slub.c:4745
 __unix_dgram_recvmsg+0xa25/0xde0 net/unix/af_unix.c:2588
 sock_recvmsg_nosec net/socket.c:1017 [inline]
 sock_recvmsg+0x22c/0x270 net/socket.c:1039
 sock_read_iter+0x231/0x2f0 net/socket.c:1109
 new_sync_read fs/read_write.c:491 [inline]
 vfs_read+0x4cd/0x980 fs/read_write.c:572
 ksys_read+0x145/0x250 fs/read_write.c:715
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f24cf006407
Code: 48 89 fa 4c 89 df e8 38 aa 00 00 8b 93 08 03 00 00 59 5e 48 83 f8 fc 74 1a 5b c3 0f 1f 84 00 00 00 00 00 48 8b 44 24 10 0f 05 <5b> c3 0f 1f 80 00 00 00 00 83 e2 39 83 fa 08 75 de e8 23 ff ff ff
RSP: 002b:00007fffdf3846b0 EFLAGS: 00000202 ORIG_RAX: 0000000000000000
RAX: ffffffffffffffda RBX: 00007f24ceeb6c80 RCX: 00007f24cf006407
RDX: 00000000000000ff RSI: 000055fc772bb950 RDI: 0000000000000000
RBP: 000055fc772bb910 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 000055fc772bb9ac
R13: 0000000000000000 R14: 000055fc772bb950 R15: 000055fc764b1d98
 </TASK>

Crashes (105):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets (help?) Manager Title
2025/06/19 06:54 upstream fb4d33ab452e ed3e87f7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/25 08:26 upstream d0c22de9995b ed351ea7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/16 22:59 upstream 3c21441eeffc f41472b0 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/09 18:03 upstream 9c69f8884904 77908e5f .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/08 22:39 upstream 2c89c1b655c0 dbf35fa1 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/08 01:54 upstream 707df3375124 dbf35fa1 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/04 15:36 upstream e8ab83e34bdc b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/04 14:08 upstream e8ab83e34bdc b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/03 10:34 upstream 95d3481af6dc b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/03 06:17 upstream 2bfcee565c3a b0714e37 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/05/01 19:13 upstream 4f79eaa2ceac 51b137cd .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/28 04:42 upstream b4432656b36e c6b4fb39 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/27 19:45 upstream 5bc1018675ec c6b4fb39 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/27 11:50 upstream 5bc1018675ec c6b4fb39 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/22 10:21 upstream a33b5a08cbbd 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/21 22:43 upstream 9d7a0577c9db 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/21 04:56 upstream 6fea5fabd332 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/19 14:43 upstream 3088d26962e8 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/19 07:19 upstream 3088d26962e8 2a20f901 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/03/21 11:59 upstream b3ee1e460951 62330552 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/03/14 13:17 upstream 4003c9e78778 e2826670 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/03/09 11:37 upstream b7c90e3e717a 163f510d .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/03/06 00:38 upstream bb2281fb05e5 831e3629 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/03/04 18:21 upstream 99fa936e8e4f c3901742 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/02/28 08:37 upstream 1e15510b71c9 6a8fcbc4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/02/21 00:55 upstream e9a8cac0bf89 0808a665 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/02/19 05:10 upstream 6537cfb395f3 9a14138f .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/02/12 04:01 upstream 09fbf3d50205 f2baddf5 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/02/01 12:20 upstream 69b8923f5003 aa47157c .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/01/07 21:05 upstream fbfd64d25c7a f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/01/04 21:45 upstream ab75170520d4 f3558dbf .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/12/05 18:13 upstream feffde684ac2 29f61fce .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/12/04 21:34 upstream feffde684ac2 b50eb251 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/12/01 09:08 upstream bcc8eda6d349 68914665 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/11/30 11:50 upstream 509f806f7f70 68914665 .config strace log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/11/29 13:54 upstream 7af08b57bcb9 5df23865 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/11/26 01:25 upstream 9f16d5e6f220 11dbc254 .config strace log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/11/14 06:15 upstream 0a9b9d17f3a7 bb3f8425 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/10/08 00:52 upstream 87d6aab2389e d7906eff .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/09/26 05:55 upstream aa486552a110 0d19f247 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/09/16 19:46 upstream adfc3ded5c33 c673ca06 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/09/15 08:28 upstream 0babf683783d 08d8a733 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/09/15 06:47 upstream 0babf683783d 08d8a733 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/09/15 02:52 upstream 0babf683783d 08d8a733 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/09/13 11:13 upstream fdf042df0463 73e8a465 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/09/05 06:51 upstream c7fb1692dc01 dfbe2ed4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/08/30 15:39 upstream 20371ba12063 ee2602b8 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/08/27 09:38 upstream 3e9bff3bbe13 9aee4e0b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/07/31 11:51 upstream 22f546873149 6fde257d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in do_unlinkat
2024/06/23 12:19 upstream 5f583a3162ff edc5149a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in do_unlinkat
2024/05/22 04:48 upstream b6394d6f7159 1014eca7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/05/16 14:46 upstream 3c999d1ae3c7 ef5d53ed .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in do_unlinkat
2024/05/10 10:19 upstream 448b3fe5a0ea de979bc2 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/05/03 08:20 upstream 49a73b1652c5 ddfc15a1 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2024/04/28 23:56 upstream e67572cd2204 07b455f9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in do_unlinkat
2025/04/09 11:48 linux-next 46086739de22 988b336c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in do_unlinkat
* Struck through repros no longer work on HEAD.