syzbot


INFO: task hung in vfs_unlink (2)

Status: auto-closed as invalid on 2022/10/05 08:13
Subsystems: fs
First crash: 628d, last: 628d
Similar bugs (13)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-6.1 | INFO: task hung in vfs_unlink (3) | | | | 2 | 25d | 36d | 0/3 | upstream: reported on 2024/03/19 12:40
linux-6.1 | INFO: task hung in vfs_unlink (2) | | | | 1 | 153d | 153d | 0/3 | auto-obsoleted due to no activity on 2024/03/02 11:45
linux-4.14 | INFO: task hung in vfs_unlink (2) | | | | 1 | 1102d | 1102d | 0/1 | auto-closed as invalid on 2021/08/17 05:41
linux-4.14 | INFO: task hung in vfs_unlink | | | | 8 | 1347d | 1589d | 0/1 | auto-closed as invalid on 2020/12/15 00:37
linux-4.19 | INFO: task hung in vfs_unlink (4) | | | | 1 | 533d | 533d | 0/1 | auto-obsoleted due to no activity on 2023/03/09 01:26
linux-4.19 | INFO: task hung in vfs_unlink (2) | | | | 2 | 1356d | 1418d | 0/1 | auto-closed as invalid on 2020/12/05 18:54
upstream | INFO: task hung in vfs_unlink (3) ext4 | | | | 1 | 451d | 451d | 0/26 | auto-obsoleted due to no activity on 2023/04/30 04:19
linux-5.15 | INFO: task hung in vfs_unlink | | | | 29 | 38d | 395d | 0/3 | upstream: reported on 2023/03/26 16:46
upstream | INFO: task hung in vfs_unlink ext4 | | | | 32 | 1369d | 1646d | 0/26 | auto-closed as invalid on 2020/11/23 01:14
linux-6.1 | INFO: task hung in vfs_unlink | | | | 2 | 343d | 354d | 0/3 | auto-obsoleted due to no activity on 2023/08/26 02:49
linux-4.19 | INFO: task hung in vfs_unlink | | | | 6 | 1547d | 1672d | 0/1 | auto-closed as invalid on 2020/05/28 17:30
linux-4.19 | INFO: task hung in vfs_unlink (3) | | | | 1 | 995d | 995d | 0/1 | auto-closed as invalid on 2021/12/01 23:35
upstream | INFO: task hung in vfs_unlink (4) fs | | | | 6 | 189d | 350d | 0/26 | auto-obsoleted due to no activity on 2024/01/16 15:08

Sample crash report:
INFO: task syz-fuzzer:3599 blocked for more than 143 seconds.
      Not tainted 5.19.0-rc4-next-20220628-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-fuzzer      state:D stack:24760 pid: 3599 ppid:  3590 flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5184 [inline]
 __schedule+0xa09/0x4f10 kernel/sched/core.c:6496
 schedule+0xd2/0x1f0 kernel/sched/core.c:6568
 rwsem_down_write_slowpath+0x68a/0x11a0 kernel/locking/rwsem.c:1172
 __down_write_common kernel/locking/rwsem.c:1287 [inline]
 __down_write_common kernel/locking/rwsem.c:1284 [inline]
 __down_write kernel/locking/rwsem.c:1296 [inline]
 down_write+0x135/0x150 kernel/locking/rwsem.c:1543
 inode_lock include/linux/fs.h:761 [inline]
 vfs_unlink+0xd5/0x920 fs/namei.c:4181
 do_unlinkat+0x3cc/0x650 fs/namei.c:4260
 __do_sys_unlinkat fs/namei.c:4303 [inline]
 __se_sys_unlinkat fs/namei.c:4296 [inline]
 __x64_sys_unlinkat+0xbd/0x130 fs/namei.c:4296
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x46/0xb0
RIP: 0033:0x49dfbb
RSP: 002b:000000c011df3578 EFLAGS: 00000212 ORIG_RAX: 0000000000000107
RAX: ffffffffffffffda RBX: 000000c00001e000 RCX: 000000000049dfbb
RDX: 0000000000000000 RSI: 000000c02bb8ff30 RDI: 000000000000001c
RBP: 000000c011df35d8 R08: 00000000000002ad R09: 00007ffd2d5b5080
R10: 00007ffd2d5b5090 R11: 0000000000000212 R12: 000000c011df34b8
R13: 0000000000000001 R14: 000000c000288d00 R15: 00000000000001f3
 </TASK>

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
 #0: ffffffff8bd864f0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x26/0xc70 kernel/rcu/tasks.h:507
1 lock held by rcu_tasks_trace/13:
 #0: ffffffff8bd861f0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x26/0xc70 kernel/rcu/tasks.h:507
1 lock held by khungtaskd/28:
 #0: ffffffff8bd87040 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6491
2 locks held by getty/3280:
 #0: ffff88814ad33098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x22/0x80 drivers/tty/tty_ldisc.c:244
 #1: ffffc90002d162f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0xe50/0x13c0 drivers/tty/n_tty.c:2177
3 locks held by syz-fuzzer/3599:
 #0: ffff88814b582460 (sb_writers#4){.+.+}-{0:0}, at: do_unlinkat+0x17f/0x650 fs/namei.c:4239
 #1: ffff888071c18400 (&type->i_mutex_dir_key#3/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:796 [inline]
 #1: ffff888071c18400 (&type->i_mutex_dir_key#3/1){+.+.}-{3:3}, at: do_unlinkat+0x26c/0x650 fs/namei.c:4243
 #2: ffff88803e146850 (&sb->s_type->i_mutex_key#9){++++}-{3:3}, at: inode_lock include/linux/fs.h:761 [inline]
 #2: ffff88803e146850 (&sb->s_type->i_mutex_key#9){++++}-{3:3}, at: vfs_unlink+0xd5/0x920 fs/namei.c:4181
6 locks held by syz-executor.0/16883:

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 28 Comm: khungtaskd Not tainted 5.19.0-rc4-next-20220628-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/22/2022
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
 nmi_cpu_backtrace.cold+0x47/0x144 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1e6/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:146 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:212 [inline]
 watchdog+0xc18/0xf50 kernel/hung_task.c:369
 kthread+0x2e9/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:302
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 8 Comm: kworker/u4:0 Not tainted 5.19.0-rc4-next-20220628-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/22/2022
Workqueue: phy10 ieee80211_iface_work
RIP: 0010:check_preemption_disabled+0x2/0x170 lib/smp_processor_id.c:13
Code: 8b 1d 4e 3a 89 76 31 ff 89 de 0f 1f 44 00 00 85 db 74 07 0f 1f 44 00 00 0f 0b 0f 1f 44 00 00 5b e9 93 fb ff ff cc cc cc 41 56 <41> 55 49 89 f5 41 54 55 48 89 fd 53 0f 1f 44 00 00 65 44 8b 25 ad
RSP: 0018:ffffc900000d7898 EFLAGS: 00000046
RAX: 0000000000000000 RBX: 0000000000000003 RCX: 0000000000000001
RDX: 0000000000000000 RSI: ffffffff89cc8900 RDI: ffffffff8a2874e0
RBP: ffffffff8bd86f80 R08: 0000000000000800 R09: 0000000000000920
R10: 0000000000000001 R11: 0000000000000001 R12: ffff88813fe957c0
R13: 0000000000000000 R14: 00000000ffffffff R15: ffff88813fe96260
FS:  0000000000000000(0000) GS:ffff8880b9b00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fd2b8d1bf80 CR3: 000000008ddc1000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 lockdep_recursion_finish kernel/locking/lockdep.c:466 [inline]
 lock_is_held_type+0xd7/0x140 kernel/locking/lockdep.c:5709
 lock_is_held include/linux/lockdep.h:279 [inline]
 rcu_read_lock_sched_held+0x3a/0x70 kernel/rcu/update.c:125
 trace_kmalloc include/trace/events/kmem.h:52 [inline]
 trace_kmalloc+0x32/0x100 include/trace/events/kmem.h:52
 kmem_cache_alloc_trace+0x1f2/0x3e0 mm/slub.c:3283
 kmalloc include/linux/slab.h:600 [inline]
 kzalloc include/linux/slab.h:733 [inline]
 ieee802_11_parse_elems_crc+0xd5/0x1050 net/mac80211/util.c:1502
 ieee802_11_parse_elems net/mac80211/ieee80211_i.h:2269 [inline]
 ieee80211_rx_mgmt_probe_beacon net/mac80211/ibss.c:1606 [inline]
 ieee80211_ibss_rx_queued_mgmt+0xc82/0x30d0 net/mac80211/ibss.c:1640
 ieee80211_iface_process_skb net/mac80211/iface.c:1594 [inline]
 ieee80211_iface_work+0xa7f/0xd20 net/mac80211/iface.c:1648
 process_one_work+0x991/0x1610 kernel/workqueue.c:2289
 worker_thread+0x665/0x1080 kernel/workqueue.c:2436
 kthread+0x2e9/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:302
 </TASK>

Crashes (1):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2022/08/06 08:08 | linux-next | cb71b93c2dc3 | e853abd9 | .config | console log | report | | | info | | ci-upstream-linux-next-kasan-gce-root | INFO: task hung in vfs_unlink
* Struck through repros no longer work on HEAD.
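The report itself was produced by the khungtaskd watchdog, which is controlled by the hung_task sysctls the log hints at ("echo 0 > /proc/sys/kernel/hung_task_timeout_secs"). A sketch of the relevant tunables on a test kernel (values are illustrative; writes require root, and availability depends on CONFIG_DETECT_HUNG_TASK):

```shell
# Show the current detection threshold in seconds.
cat /proc/sys/kernel/hung_task_timeout_secs

# Disable hung-task warnings entirely, as the report's own hint suggests.
echo 0 > /proc/sys/kernel/hung_task_timeout_secs

# Or make a detected hang panic the kernel so a fuzzer captures a full dump.
echo 1 > /proc/sys/kernel/hung_task_panic
```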