syzbot


INFO: task hung in vfs_unlink (3)

Status: auto-obsoleted due to no activity on 2024/07/08 21:39
Reported-by: syzbot+56b3aec222f3a8f4d18b@syzkaller.appspotmail.com
First crash: 234d, last: 223d
Similar bugs (13)
| Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status |
|---|---|---|---|---|---|---|---|---|---|
| linux-6.1 | INFO: task hung in vfs_unlink (2) | | | | 1 | 351d | 351d | 0/3 | auto-obsoleted due to no activity on 2024/03/02 11:45 |
| linux-4.14 | INFO: task hung in vfs_unlink (2) | | | | 1 | 1299d | 1299d | 0/1 | auto-closed as invalid on 2021/08/17 05:41 |
| linux-4.14 | INFO: task hung in vfs_unlink | | | | 8 | 1544d | 1786d | 0/1 | auto-closed as invalid on 2020/12/15 00:37 |
| linux-4.19 | INFO: task hung in vfs_unlink (4) | | | | 1 | 730d | 730d | 0/1 | auto-obsoleted due to no activity on 2023/03/09 01:26 |
| linux-4.19 | INFO: task hung in vfs_unlink (2) | | | | 2 | 1554d | 1615d | 0/1 | auto-closed as invalid on 2020/12/05 18:54 |
| upstream | INFO: task hung in vfs_unlink (3) ext4 | | | | 1 | 648d | 648d | 0/28 | auto-obsoleted due to no activity on 2023/04/30 04:19 |
| linux-5.15 | INFO: task hung in vfs_unlink | | | | 29 | 235d | 593d | 0/3 | auto-obsoleted due to no activity on 2024/06/26 00:49 |
| upstream | INFO: task hung in vfs_unlink ext4 | | | | 32 | 1566d | 1843d | 0/28 | auto-closed as invalid on 2020/11/23 01:14 |
| linux-6.1 | INFO: task hung in vfs_unlink | | | | 2 | 540d | 552d | 0/3 | auto-obsoleted due to no activity on 2023/08/26 02:49 |
| linux-4.19 | INFO: task hung in vfs_unlink | | | | 6 | 1745d | 1869d | 0/1 | auto-closed as invalid on 2020/05/28 17:30 |
| linux-4.19 | INFO: task hung in vfs_unlink (3) | | | | 1 | 1192d | 1192d | 0/1 | auto-closed as invalid on 2021/12/01 23:35 |
| upstream | INFO: task hung in vfs_unlink (4) fs | | | | 6 | 387d | 547d | 0/28 | auto-obsoleted due to no activity on 2024/01/16 15:08 |
| upstream | INFO: task hung in vfs_unlink (2) fs | | | | 1 | 825d | 825d | 0/28 | auto-closed as invalid on 2022/10/05 08:13 |

Sample crash report:
INFO: task syz-fuzzer:3557 blocked for more than 143 seconds.
      Not tainted 6.1.83-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-fuzzer      state:D stack:22120 pid:3557  ppid:3538   flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5245 [inline]
 __schedule+0x142d/0x4550 kernel/sched/core.c:6558
 schedule+0xbf/0x180 kernel/sched/core.c:6634
 rwsem_down_write_slowpath+0xea1/0x14b0 kernel/locking/rwsem.c:1189
 inode_lock include/linux/fs.h:758 [inline]
 vfs_unlink+0xe0/0x5f0 fs/namei.c:4313
 do_unlinkat+0x4a5/0x820 fs/namei.c:4392
 __do_sys_unlinkat fs/namei.c:4435 [inline]
 __se_sys_unlinkat fs/namei.c:4428 [inline]
 __x64_sys_unlinkat+0xca/0xf0 fs/namei.c:4428
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x40720e
RSP: 002b:000000c000cb7638 EFLAGS: 00000202 ORIG_RAX: 0000000000000107
RAX: ffffffffffffffda RBX: 0000000000000011 RCX: 000000000040720e
RDX: 0000000000000000 RSI: 000000c000ffd878 RDI: 0000000000000011
RBP: 000000c000cb7678 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 0000000000000000
R13: 000000c000250800 R14: 000000c000a1c340 R15: 0000000000000105
 </TASK>

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
 #0: ffffffff8d12ab10 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xe30 kernel/rcu/tasks.h:516
1 lock held by rcu_tasks_trace/13:
 #0: ffffffff8d12b310 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xe30 kernel/rcu/tasks.h:516
1 lock held by khungtaskd/28:
 #0: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 #0: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
 #0: ffffffff8d12a940 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6494
2 locks held by getty/3302:
 #0: ffff888028724098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
 #1: ffffc900031262f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6a7/0x1db0 drivers/tty/n_tty.c:2188
3 locks held by syz-fuzzer/3557:
 #0: ffff88807f1de460 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x3b/0x80 fs/namespace.c:393
 #1: ffff888058cbd440 (&type->i_mutex_dir_key#3/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:793 [inline]
 #1: ffff888058cbd440 (&type->i_mutex_dir_key#3/1){+.+.}-{3:3}, at: do_unlinkat+0x266/0x820 fs/namei.c:4375
 #2: ffff888058dfde48 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: inode_lock include/linux/fs.h:758 [inline]
 #2: ffff888058dfde48 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: vfs_unlink+0xe0/0x5f0 fs/namei.c:4313
9 locks held by syz-executor.2/7169:
1 lock held by syz-executor.4/7282:

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 28 Comm: khungtaskd Not tainted 6.1.83-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 nmi_cpu_backtrace+0x4e1/0x560 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1b0/0x3f0 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
 watchdog+0xf88/0xfd0 kernel/hung_task.c:377
 kthread+0x28d/0x320 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:307
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 7169 Comm: syz-executor.2 Not tainted 6.1.83-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
RIP: 0010:get_current arch/x86/include/asm/current.h:15 [inline]
RIP: 0010:write_comp_data kernel/kcov.c:235 [inline]
RIP: 0010:__sanitizer_cov_trace_const_cmp4+0x4/0x80 kernel/kcov.c:304
Code: 89 f8 89 f6 49 ff c2 4c 89 11 48 c7 44 0a 08 03 00 00 00 48 89 44 0a 10 48 89 74 0a 18 4c 89 44 0a 20 c3 0f 1f 00 4c 8b 04 24 <65> 48 8b 15 e4 e6 77 7e 65 8b 05 e5 e6 77 7e a9 00 01 ff 00 74 10
RSP: 0018:ffffc90007b56680 EFLAGS: 00000246
RAX: ffffea00014a9400 RBX: ffffea00014a9400 RCX: ffffffff81d621ac
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000001
RBP: 0000000000000000 R08: ffffffff81deeb79 R09: fffffbfff1ce6c9e
R10: 0000000000000000 R11: dffffc0000000001 R12: 0000000000000000
R13: dffffc0000000000 R14: ffff888022f3d940 R15: 1ffff92000f6acd8
FS:  00007ff5f59746c0(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000559627221680 CR3: 000000007401f000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 folio_alloc+0x29/0x50
 filemap_alloc_folio+0xda/0x4f0 mm/filemap.c:971
 page_cache_ra_unbounded+0x1ee/0x7b0 mm/readahead.c:248
 do_sync_mmap_readahead+0x7ae/0x980 mm/filemap.c:3107
 filemap_fault+0x813/0x17e0 mm/filemap.c:3199
 __do_fault+0x136/0x4f0 mm/memory.c:4261
 do_read_fault mm/memory.c:4612 [inline]
 do_fault mm/memory.c:4741 [inline]
 handle_pte_fault mm/memory.c:5013 [inline]
 __handle_mm_fault mm/memory.c:5155 [inline]
 handle_mm_fault+0x3412/0x5340 mm/memory.c:5276
 faultin_page mm/gup.c:1009 [inline]
 __get_user_pages+0x4f3/0x1190 mm/gup.c:1233
 __get_user_pages_locked mm/gup.c:1437 [inline]
 get_user_pages_unlocked+0x23b/0x8a0 mm/gup.c:2346
 __gup_longterm_unlocked mm/gup.c:2957 [inline]
 internal_get_user_pages_fast+0x27a4/0x2ff0 mm/gup.c:3047
 __iov_iter_get_pages_alloc+0x3b1/0xa70 lib/iov_iter.c:1460
 iov_iter_get_pages2+0xcb/0x120 lib/iov_iter.c:1503
 __bio_iov_iter_get_pages block/bio.c:1220 [inline]
 bio_iov_iter_get_pages+0x359/0x1340 block/bio.c:1290
 iomap_dio_bio_iter+0xc80/0x15f0 fs/iomap/direct-io.c:331
 __iomap_dio_rw+0x127e/0x2140 fs/iomap/direct-io.c:602
 iomap_dio_rw+0x42/0xa0 fs/iomap/direct-io.c:690
 ext4_dio_write_iter fs/ext4/file.c:557 [inline]
 ext4_file_write_iter+0x1464/0x1880 fs/ext4/file.c:677
 call_write_iter include/linux/fs.h:2265 [inline]
 new_sync_write fs/read_write.c:491 [inline]
 vfs_write+0x7ae/0xba0 fs/read_write.c:584
 ksys_write+0x19c/0x2c0 fs/read_write.c:637
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7ff5f4c7dda9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007ff5f59740c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 00007ff5f4dabf80 RCX: 00007ff5f4c7dda9
RDX: 0000000000043400 RSI: 0000000020000200 RDI: 0000000000000006
RBP: 00007ff5f4cca47a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007ff5f4dabf80 R15: 00007ffc728938c8
 </TASK>

Crashes (2):
| Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets (help?) | Manager | Title |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2024/03/30 21:38 | linux-6.1.y | e5cd595e23c1 | 6baf5069 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in vfs_unlink |
| 2024/03/19 12:40 | linux-6.1.y | d7543167affd | e104824c | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in vfs_unlink |
* Struck through repros no longer work on HEAD.