syzbot


INFO: task hung in read_cache_folio

Status: upstream: reported syz repro on 2025/08/13 08:15
Bug presence: origin:lts-only
Reported-by: syzbot+9bce49400894bc31bb8c@syzkaller.appspotmail.com
First crash: 19d, last: 13d
Bug presence (2)
Date Name Commit Repro Result
2025/08/19 linux-6.1.y (ToT) 0bc96de781b4 syz [report] INFO: task hung in read_cache_folio
2025/08/19 upstream (ToT) be48bcf004f9 syz Didn't crash
Similar bugs (4)
Kernel Title Rank Repro Cause bisect Fix bisect Count Last Reported Patched Status
upstream INFO: task hung in read_cache_folio block 1 1 758d 758d 0/29 auto-obsoleted due to no activity on 2023/11/03 07:38
upstream INFO: task hung in read_cache_folio (4) block 1 1 192d 192d 0/29 auto-obsoleted due to no activity on 2025/05/22 08:15
upstream INFO: task hung in read_cache_folio (2) block 1 6 430d 623d 0/29 auto-obsoleted due to no activity on 2024/09/26 05:52
upstream INFO: task hung in read_cache_folio (3) block 1 1 319d 319d 0/29 auto-obsoleted due to no activity on 2025/01/15 05:53

Sample crash report:
INFO: task syz.2.19:4606 blocked for more than 143 seconds.
      Not tainted 6.1.147-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.19        state:D stack:0     pid:4606  ppid:4485   flags:0x00000001
Call trace:
 __switch_to+0x2f4/0x568 arch/arm64/kernel/process.c:555
 context_switch kernel/sched/core.c:5244 [inline]
 __schedule+0xddc/0x1b18 kernel/sched/core.c:6561
 schedule+0xc4/0x170 kernel/sched/core.c:6637
 io_schedule+0x84/0x154 kernel/sched/core.c:8797
 folio_wait_bit_common+0x56c/0x9c4 mm/filemap.c:1324
 folio_put_wait_locked mm/filemap.c:1493 [inline]
 do_read_cache_folio+0xa8/0x544 mm/filemap.c:3609
 read_cache_folio+0x68/0x88 mm/filemap.c:3659
 erofs_bread+0x13c/0x520 fs/erofs/data.c:50
 erofs_find_target_block fs/erofs/namei.c:102 [inline]
 erofs_namei+0x1fc/0xc00 fs/erofs/namei.c:175
 erofs_lookup+0x158/0x450 fs/erofs/namei.c:204
 __lookup_slow+0x24c/0x370 fs/namei.c:1690
 lookup_slow+0x5c/0x80 fs/namei.c:1707
 walk_component fs/namei.c:1998 [inline]
 link_path_walk+0x76c/0xc6c fs/namei.c:2325
 path_openat+0x1c0/0x2680 fs/namei.c:3779
 do_filp_open+0x174/0x344 fs/namei.c:3810
 do_sys_openat2+0x128/0x3d8 fs/open.c:1318
 do_sys_open fs/open.c:1334 [inline]
 __do_sys_openat fs/open.c:1350 [inline]
 __se_sys_openat fs/open.c:1345 [inline]
 __arm64_sys_openat+0x120/0x154 fs/open.c:1345
 __invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
 invoke_syscall+0x98/0x2bc arch/arm64/kernel/syscall.c:52
 el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:140
 do_el0_svc+0x58/0x13c arch/arm64/kernel/syscall.c:204
 el0_svc+0x58/0x138 arch/arm64/kernel/entry-common.c:637
 el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
 el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:585
INFO: task syz.3.20:4612 blocked for more than 143 seconds.
      Not tainted 6.1.147-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.20        state:D stack:0     pid:4612  ppid:4494   flags:0x00000001
Call trace:
 __switch_to+0x2f4/0x568 arch/arm64/kernel/process.c:555
 context_switch kernel/sched/core.c:5244 [inline]
 __schedule+0xddc/0x1b18 kernel/sched/core.c:6561
 schedule+0xc4/0x170 kernel/sched/core.c:6637
 io_schedule+0x84/0x154 kernel/sched/core.c:8797
 folio_wait_bit_common+0x56c/0x9c4 mm/filemap.c:1324
 folio_put_wait_locked mm/filemap.c:1493 [inline]
 do_read_cache_folio+0xa8/0x544 mm/filemap.c:3609
 read_cache_folio+0x68/0x88 mm/filemap.c:3659
 erofs_bread+0x13c/0x520 fs/erofs/data.c:50
 erofs_find_target_block fs/erofs/namei.c:102 [inline]
 erofs_namei+0x1fc/0xc00 fs/erofs/namei.c:175
 erofs_lookup+0x158/0x450 fs/erofs/namei.c:204
 __lookup_slow+0x24c/0x370 fs/namei.c:1690
 lookup_slow+0x5c/0x80 fs/namei.c:1707
 walk_component fs/namei.c:1998 [inline]
 link_path_walk+0x76c/0xc6c fs/namei.c:2325
 path_openat+0x1c0/0x2680 fs/namei.c:3779
 do_filp_open+0x174/0x344 fs/namei.c:3810
 do_sys_openat2+0x128/0x3d8 fs/open.c:1318
 do_sys_open fs/open.c:1334 [inline]
 __do_sys_openat fs/open.c:1350 [inline]
 __se_sys_openat fs/open.c:1345 [inline]
 __arm64_sys_openat+0x120/0x154 fs/open.c:1345
 __invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
 invoke_syscall+0x98/0x2bc arch/arm64/kernel/syscall.c:52
 el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:140
 do_el0_svc+0x58/0x13c arch/arm64/kernel/syscall.c:204
 el0_svc+0x58/0x138 arch/arm64/kernel/entry-common.c:637
 el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
 el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:585
INFO: task syz.4.21:4618 blocked for more than 143 seconds.
      Not tainted 6.1.147-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.21        state:D stack:0     pid:4618  ppid:4486   flags:0x00000001
Call trace:
 __switch_to+0x2f4/0x568 arch/arm64/kernel/process.c:555
 context_switch kernel/sched/core.c:5244 [inline]
 __schedule+0xddc/0x1b18 kernel/sched/core.c:6561
 schedule+0xc4/0x170 kernel/sched/core.c:6637
 io_schedule+0x84/0x154 kernel/sched/core.c:8797
 folio_wait_bit_common+0x56c/0x9c4 mm/filemap.c:1324
 folio_put_wait_locked mm/filemap.c:1493 [inline]
 do_read_cache_folio+0xa8/0x544 mm/filemap.c:3609
 read_cache_folio+0x68/0x88 mm/filemap.c:3659
 erofs_bread+0x13c/0x520 fs/erofs/data.c:50
 erofs_find_target_block fs/erofs/namei.c:102 [inline]
 erofs_namei+0x1fc/0xc00 fs/erofs/namei.c:175
 erofs_lookup+0x158/0x450 fs/erofs/namei.c:204
 __lookup_slow+0x24c/0x370 fs/namei.c:1690
 lookup_slow+0x5c/0x80 fs/namei.c:1707
 walk_component fs/namei.c:1998 [inline]
 link_path_walk+0x76c/0xc6c fs/namei.c:2325
 path_openat+0x1c0/0x2680 fs/namei.c:3779
 do_filp_open+0x174/0x344 fs/namei.c:3810
 do_sys_openat2+0x128/0x3d8 fs/open.c:1318
 do_sys_open fs/open.c:1334 [inline]
 __do_sys_openat fs/open.c:1350 [inline]
 __se_sys_openat fs/open.c:1345 [inline]
 __arm64_sys_openat+0x120/0x154 fs/open.c:1345
 __invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
 invoke_syscall+0x98/0x2bc arch/arm64/kernel/syscall.c:52
 el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:140
 do_el0_svc+0x58/0x13c arch/arm64/kernel/syscall.c:204
 el0_svc+0x58/0x138 arch/arm64/kernel/entry-common.c:637
 el0t_64_sync_handler+0x84/0xf0 arch/arm64/kernel/entry-common.c:655
 el0t_64_sync+0x18c/0x190 arch/arm64/kernel/entry.S:585

Showing all locks held in the system:
4 locks held by kworker/u4:1/11:
 #0: ffff0000c0845138 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x6b4/0x13a8 kernel/workqueue.c:2265
 #1: ffff80001c877c20 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x6f8/0x13a8 kernel/workqueue.c:2267
 #2: ffff800017712ed0 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x12c/0xa74 net/core/net_namespace.c:594
 #3: ffff80001771f548 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:74
1 lock held by rcu_tasks_kthre/12:
 #0: ffff800015287770 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x40/0xbb4 kernel/rcu/tasks.h:517
1 lock held by rcu_tasks_trace/13:
 #0: ffff800015287f90 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x40/0xbb4 kernel/rcu/tasks.h:517
1 lock held by khungtaskd/28:
 #0: ffff800015286e00 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0xc/0x44 include/linux/rcupdate.h:349
3 locks held by kworker/u4:5/1698:
1 lock held by udevd/3936:
2 locks held by getty/4080:
 #0: ffff0000d669a098 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x3c/0x4c drivers/tty/tty_ldsem.c:340
 #1: ffff80002084e2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x2ec/0xf9c drivers/tty/n_tty.c:2198
3 locks held by kworker/0:14/4462:
 #0: ffff0000d609e138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x6b4/0x13a8 kernel/workqueue.c:2265
 #1: ffff800021297c20 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work+0x6f8/0x13a8 kernel/workqueue.c:2267
 #2: ffff80001771f548 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:74
3 locks held by kworker/0:15/4463:
 #0: ffff0000c0020938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x6b4/0x13a8 kernel/workqueue.c:2265
 #1: ffff800021397c20 (deferred_process_work){+.+.}-{0:0}, at: process_one_work+0x6f8/0x13a8 kernel/workqueue.c:2267
 #2: ffff80001771f548 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:74
3 locks held by kworker/1:5/4483:
 #0: ffff0000d609e138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x6b4/0x13a8 kernel/workqueue.c:2265
 #1: ffff800020f97c20 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work+0x6f8/0x13a8 kernel/workqueue.c:2267
 #2: ffff80001771f548 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:74
1 lock held by syz.0.17/4542:
1 lock held by syz.0.17/4543:
1 lock held by syz.2.19/4605:
1 lock held by syz.2.19/4606:
 #0: ffff0000e53f8c48 (&type->i_mutex_dir_key#8){.+.+}-{3:3}, at: inode_lock_shared include/linux/fs.h:768 [inline]
 #0: ffff0000e53f8c48 (&type->i_mutex_dir_key#8){.+.+}-{3:3}, at: lookup_slow+0x4c/0x80 fs/namei.c:1706
1 lock held by syz.3.20/4609:
1 lock held by syz.3.20/4612:
 #0: ffff0000f41d8198 (&type->i_mutex_dir_key#8){.+.+}-{3:3}, at: inode_lock_shared include/linux/fs.h:768 [inline]
 #0: ffff0000f41d8198 (&type->i_mutex_dir_key#8){.+.+}-{3:3}, at: lookup_slow+0x4c/0x80 fs/namei.c:1706
1 lock held by syz.1.18/4611:
1 lock held by syz.1.18/4613:
1 lock held by syz.4.21/4617:
1 lock held by syz.4.21/4618:
 #0: ffff0000f41d96f8 (&type->i_mutex_dir_key#8){.+.+}-{3:3}, at: inode_lock_shared include/linux/fs.h:768 [inline]
 #0: ffff0000f41d96f8 (&type->i_mutex_dir_key#8){.+.+}-{3:3}, at: lookup_slow+0x4c/0x80 fs/namei.c:1706
1 lock held by syz.5.22/4637:
1 lock held by syz.5.22/4638:
1 lock held by syz.8.25/4780:
1 lock held by syz.8.25/4784:
 #0: ffff0000f41dac58 (&type->i_mutex_dir_key#8){.+.+}-{3:3}, at: inode_lock_shared include/linux/fs.h:768 [inline]
 #0: ffff0000f41dac58 (&type->i_mutex_dir_key#8){.+.+}-{3:3}, at: lookup_slow+0x4c/0x80 fs/namei.c:1706
1 lock held by syz.6.23/4782:
1 lock held by syz.6.23/4785:
 #0: ffff0000f41db708 (&type->i_mutex_dir_key#8){.+.+}-{3:3}, at: inode_lock_shared include/linux/fs.h:768 [inline]
 #0: ffff0000f41db708 (&type->i_mutex_dir_key#8){.+.+}-{3:3}, at: lookup_slow+0x4c/0x80 fs/namei.c:1706
1 lock held by syz.7.24/4789:
1 lock held by syz.7.24/4790:
 #0: ffff0000f41dc1b8 (&type->i_mutex_dir_key#8){.+.+}-{3:3}, at: inode_lock_shared include/linux/fs.h:768 [inline]
 #0: ffff0000f41dc1b8 (&type->i_mutex_dir_key#8){.+.+}-{3:3}, at: lookup_slow+0x4c/0x80 fs/namei.c:1706
1 lock held by syz.9.26/4795:
1 lock held by syz.9.26/4796:
 #0: ffff0000f41dcc68 (&type->i_mutex_dir_key#8){.+.+}-{3:3}, at: inode_lock_shared include/linux/fs.h:768 [inline]
 #0: ffff0000f41dcc68 (&type->i_mutex_dir_key#8){.+.+}-{3:3}, at: lookup_slow+0x4c/0x80 fs/namei.c:1706
3 locks held by kworker/u4:8/5072:
 #0: ffff0000c0029138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x6b4/0x13a8 kernel/workqueue.c:2265
 #1: ffff800022587c20 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x6f8/0x13a8 kernel/workqueue.c:2267
 #2: ffff80001771f548 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:74
1 lock held by syz-executor/10853:
 #0: ffff80001771f548 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
 #0: ffff80001771f548 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x6ec/0xce4 net/core/rtnetlink.c:6150

=============================================
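
The traces and lock dump above describe the hang: each stuck task is in an openat(2) path lookup, lookup_slow() has taken the parent directory's i_mutex_dir_key in shared mode (visible in the "locks held" list), and erofs_namei() -> erofs_bread() -> read_cache_folio() is parked in folio_put_wait_locked() waiting for a folio that never becomes unlocked on the corrupted image, so the task stays in D state until the hung-task detector fires after 143 seconds. Below is a minimal userspace sketch of the triggering pattern only; the loop device, mount point and looked-up path are placeholders, the image is assumed to be attached to the loop device separately (e.g. with losetup), and the actual reproducer is the syz program linked in the Crashes table.

/* Sketch only: mount a (corrupt) erofs image read-only and trigger a
 * directory lookup inside it, i.e. the openat() path seen in the traces.
 * Device, mount point and file names are placeholders, not taken from
 * the report. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mount.h>
#include <unistd.h>

int main(void)
{
	/* erofs is read-only; the image is assumed to already be attached
	 * to /dev/loop0 (e.g. via losetup). */
	if (mount("/dev/loop0", "/mnt/erofs", "erofs", MS_RDONLY, NULL) != 0) {
		perror("mount");
		return 1;
	}

	/* The name lookup walks erofs_lookup() -> erofs_namei() ->
	 * erofs_bread() -> read_cache_folio(); per the report, with the
	 * corrupted image the folio is never unlocked, so this call blocks
	 * in D state while holding the directory's i_mutex_dir_key. */
	int fd = openat(AT_FDCWD, "/mnt/erofs/some_dir/some_file", O_RDONLY);
	if (fd >= 0)
		close(fd);
	else
		perror("openat");

	umount("/mnt/erofs");
	return 0;
}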


Crashes (2):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/08/13 08:14 linux-6.1.y 3594f306da12 22ec1469 .config console log report syz / log [disk image] [vmlinux] [kernel image] [mounted in repro (corrupt fs)] ci2-linux-6-1-kasan-arm64 INFO: task hung in read_cache_folio
2025/08/15 17:49 linux-6.1.y 0bc96de781b4 1804e95e .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-arm64 INFO: task hung in read_cache_folio
* Repros shown struck through on the dashboard no longer work on HEAD.