INFO: task hung in page_cache_ra_unbounded

Status: upstream: reported on 2024/09/24 05:33
Reported-by: syzbot+26340c6e6534ac58946d@syzkaller.appspotmail.com
First crash: 137d, last: 33d
Similar bugs (5)

Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: task hung in page_cache_ra_unbounded [fs mm] | | | | 2 | 1151d | 1171d | 0/28 | closed as invalid on 2022/02/08 09:40
upstream | INFO: task hung in page_cache_ra_unbounded (2) [mm fs] | C | done | | 3624 | 6d02h | 159d | 0/28 | upstream: reported C repro on 2024/09/02 00:15
linux-5.15 | INFO: task hung in page_cache_ra_unbounded (2) | | | | 2 | 141d | 160d | 0/3 | auto-obsoleted due to no activity on 2024/12/29 12:02
linux-5.15 | INFO: task hung in page_cache_ra_unbounded | | | | 3 | 317d | 380d | 0/3 | auto-obsoleted due to no activity on 2024/07/06 02:06
linux-5.15 | INFO: task hung in page_cache_ra_unbounded (3) | | | | 1 | 33d | 33d | 0/3 | upstream: reported on 2025/01/06 06:01

Sample crash report:
INFO: task syz.0.1467:9866 blocked for more than 143 seconds.
      Not tainted 6.1.123-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.1467      state:D stack:25864 pid:9866  ppid:4254   flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5241 [inline]
 __schedule+0x143f/0x4570 kernel/sched/core.c:6558
 schedule+0xbf/0x180 kernel/sched/core.c:6634
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6693
 rwsem_down_read_slowpath kernel/locking/rwsem.c:1094 [inline]
 __down_read_common kernel/locking/rwsem.c:1261 [inline]
 __down_read kernel/locking/rwsem.c:1274 [inline]
 down_read+0x6ff/0xa30 kernel/locking/rwsem.c:1522
 filemap_invalidate_lock_shared include/linux/fs.h:813 [inline]
 page_cache_ra_unbounded+0xed/0x7b0 mm/readahead.c:226
 do_sync_mmap_readahead+0x7ae/0x980 mm/filemap.c:3135
 filemap_fault+0x813/0x17e0 mm/filemap.c:3227
 __do_fault+0x136/0x4f0 mm/memory.c:4278
 do_cow_fault mm/memory.c:4659 [inline]
 do_fault mm/memory.c:4760 [inline]
 handle_pte_fault mm/memory.c:5029 [inline]
 __handle_mm_fault mm/memory.c:5171 [inline]
 handle_mm_fault+0x2fbc/0x5340 mm/memory.c:5292
 do_user_addr_fault arch/x86/mm/fault.c:1340 [inline]
 handle_page_fault arch/x86/mm/fault.c:1431 [inline]
 exc_page_fault+0x26f/0x620 arch/x86/mm/fault.c:1487
 asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:608
RIP: 0033:0x7f8e33c5c6d9
RSP: 002b:00007fffb33d6830 EFLAGS: 00010246
RAX: 00000000000044c0 RBX: 0000000000000002 RCX: ffffffffffffff7f
RDX: 83f127caab923999 RSI: 0000000020000080 RDI: 00005555771f93c8
RBP: 00007fffb33d6908 R08: 00007f8e33a00000 R09: 0000000000000002
R10: 0000000000000000 R11: 0000000000000001 R12: 00000000000ab1d0
R13: 00007f8e33f75fa0 R14: 0000000000000032 R15: fffffffffffffffe
 </TASK>
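
The trace above is the core of the report: the reproducer task faults on a file-backed mapping, the fault path starts synchronous readahead, and page_cache_ra_unbounded() then blocks in down_read() on the mapping's invalidate_lock and never wakes up. For readers unfamiliar with that lock, the sketch below paraphrases the helpers from include/linux/fs.h in this kernel series (simplified, not a verbatim copy): page-cache fill paths take the rwsem shared, while truncate and hole-punch paths take it exclusive, so one stuck or queued exclusive acquirer stalls every later reader.

/*
 * Paraphrased from include/linux/fs.h (6.1-era); a simplified sketch,
 * not a verbatim copy of the kernel source.
 */
static inline void filemap_invalidate_lock_shared(struct address_space *mapping)
{
	down_read(&mapping->invalidate_lock);	/* where page_cache_ra_unbounded() blocks */
}

static inline void filemap_invalidate_unlock_shared(struct address_space *mapping)
{
	up_read(&mapping->invalidate_lock);
}

static inline void filemap_invalidate_lock(struct address_space *mapping)
{
	down_write(&mapping->invalidate_lock);	/* exclusive side: truncate, hole punch, ... */
}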

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
 #0: ffffffff8d32b290 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xe30 kernel/rcu/tasks.h:517
1 lock held by rcu_tasks_trace/13:
 #0: ffffffff8d32ba90 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xe30 kernel/rcu/tasks.h:517
1 lock held by khungtaskd/28:
 #0: ffffffff8d32b0c0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 #0: ffffffff8d32b0c0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
 #0: ffffffff8d32b0c0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6510
2 locks held by getty/4007:
 #0: ffff88807f4f0098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
 #1: ffffc9000325e2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6a7/0x1db0 drivers/tty/n_tty.c:2198
2 locks held by kworker/u4:12/4525:
5 locks held by kworker/u4:13/4548:
4 locks held by kworker/u4:18/5741:
 #0: ffff88814c9f6138 ((wq_completion)l2tp){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
 #1: ffffc9000d7a7d20 ((work_completion)(&tunnel->del_work)){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
 #2: ffffffff8e50b428 (rtnl_mutex){+.+.}-{3:3}, at: l2tp_eth_delete+0x1b/0xf0 net/l2tp/l2tp_eth.c:175
 #3: ffffffff8d3306b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:291 [inline]
 #3: ffffffff8d3306b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x4f0/0x930 kernel/rcu/tree_exp.h:962
6 locks held by kworker/0:4/7824:
1 lock held by syz.3.1153/8654:
1 lock held by syz.0.1467/9866:
 #0: ffff88801810a9c0 (mapping.invalidate_lock#2){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:813 [inline]
 #0: ffff88801810a9c0 (mapping.invalidate_lock#2){++++}-{3:3}, at: page_cache_ra_unbounded+0xed/0x7b0 mm/readahead.c:226

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 28 Comm: khungtaskd Not tainted 6.1.123-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 nmi_cpu_backtrace+0x4e1/0x560 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1ae/0x3f0 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
 watchdog+0xf88/0xfd0 kernel/hung_task.c:377
 kthread+0x28d/0x320 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 4525 Comm: kworker/u4:12 Not tainted 6.1.123-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Workqueue: bat_events batadv_nc_worker
RIP: 0010:validate_chain+0x6/0x5950 kernel/locking/lockdep.c:3781
Code: e1 07 80 c1 03 38 c1 7c 92 48 c7 c7 38 3c 9b 8e e8 7f d2 76 00 eb 84 e8 f8 07 47 09 0f 1f 84 00 00 00 00 00 55 48 89 e5 41 57 <41> 56 41 55 41 54 53 48 83 e4 e0 48 81 ec 80 02 00 00 49 89 ce 89
RSP: 0018:ffffc90000007a28 EFLAGS: 00000086
RAX: 1ffffffff217b4ec RBX: ffffffff90bda760 RCX: 71fa0e13f0c06cbc
RDX: 0000000000000000 RSI: ffff88802d84a918 RDI: ffff88802d849dc0
RBP: ffffc90000007a30 R08: dffffc0000000000 R09: fffffbfff2249e4e
R10: 0000000000000000 R11: dffffc0000000001 R12: ffff88802d84a898
R13: ffff88802d849dc0 R14: 0000000000000000 R15: 1ffff11005b09527
FS:  0000000000000000(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f8f4e347ab8 CR3: 000000000d08e000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <IRQ>
 __lock_acquire+0x125b/0x1f80 kernel/locking/lockdep.c:5049
 lock_acquire+0x1f8/0x5a0 kernel/locking/lockdep.c:5662
 seqcount_lockdep_reader_access+0xf8/0x220 include/linux/seqlock.h:102
 timekeeping_get_delta kernel/time/timekeeping.c:254 [inline]
 timekeeping_get_ns kernel/time/timekeeping.c:388 [inline]
 ktime_get_update_offsets_now+0x89/0x420 kernel/time/timekeeping.c:2320
 hrtimer_update_base kernel/time/hrtimer.c:632 [inline]
 hrtimer_run_softirq+0x9e/0x2c0 kernel/time/hrtimer.c:1769
 handle_softirqs+0x2ee/0xa40 kernel/softirq.c:571
 __do_softirq kernel/softirq.c:605 [inline]
 invoke_softirq kernel/softirq.c:445 [inline]
 __irq_exit_rcu+0x157/0x240 kernel/softirq.c:654
 irq_exit_rcu+0x5/0x20 kernel/softirq.c:666
 instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1118 [inline]
 sysvec_apic_timer_interrupt+0xa0/0xc0 arch/x86/kernel/apic/apic.c:1118
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:691
RIP: 0010:lock_release+0x637/0xa20 kernel/locking/lockdep.c:5686
Code: 3c 3b 00 74 08 4c 89 f7 e8 66 62 77 00 f6 84 24 91 00 00 00 02 75 6f 41 f7 c5 00 02 00 00 74 01 fb 48 c7 44 24 60 0e 36 e0 45 <4b> c7 04 27 00 00 00 00 4b c7 44 27 08 00 00 00 00 65 48 8b 04 25
RSP: 0018:ffffc9000528fac0 EFLAGS: 00000206
RAX: 0000000000000001 RBX: 1ffff92000a51f6a RCX: ffffc9000528fb03
RDX: 0000000000000002 RSI: ffffffff8b0c14c0 RDI: ffffffff8b5e6840
RBP: ffffc9000528fbe8 R08: dffffc0000000000 R09: fffffbfff1d360ee
R10: 0000000000000000 R11: dffffc0000000001 R12: 1ffff92000a51f64
R13: 0000000000000246 R14: ffffc9000528fb50 R15: dffffc0000000000
 rcu_lock_release include/linux/rcupdate.h:355 [inline]
 rcu_read_unlock include/linux/rcupdate.h:824 [inline]
 batadv_nc_purge_orig_hash net/batman-adv/network-coding.c:412 [inline]
 batadv_nc_worker+0x28c/0x610 net/batman-adv/network-coding.c:719
 process_one_work+0x8a9/0x11d0 kernel/workqueue.c:2292
 worker_thread+0xa47/0x1200 kernel/workqueue.c:2439
 kthread+0x28d/0x320 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
 </TASK>
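
For context on the watchdog that produced this dump: khungtaskd (the CPU 1 backtrace above) periodically scans all tasks and reports any task that sat in uninterruptible sleep without ever being scheduled between two scans. A simplified paraphrase of the check in kernel/hung_task.c (not verbatim; the timeout is whatever the fuzzer's config sets, 143 seconds in this run) looks like this:

static void check_hung_task(struct task_struct *t, unsigned long timeout)
{
	/* nvcsw/nivcsw count voluntary/involuntary context switches. */
	unsigned long switch_count = t->nvcsw + t->nivcsw;

	if (switch_count != t->last_switch_count) {
		/* The task ran since the last scan: not hung. */
		t->last_switch_count = switch_count;
		return;
	}

	/* Never scheduled while in D state: emit the report seen above. */
	pr_err("INFO: task %s:%d blocked for more than %ld seconds.\n",
	       t->comm, t->pid, timeout);
}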

Crashes (7):

Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/01/06 06:35 | linux-6.1.y | 7dc732d24ff7 | f3558dbf | .config | console log | report | | | info | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan | INFO: task hung in page_cache_ra_unbounded
2025/01/06 06:14 | linux-6.1.y | 7dc732d24ff7 | f3558dbf | .config | console log | report | | | info | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan | INFO: task hung in page_cache_ra_unbounded
2024/11/08 00:42 | linux-6.1.y | 7c15117f9468 | 867e44df | .config | console log | report | | | info | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan | INFO: task hung in page_cache_ra_unbounded
2024/11/07 07:17 | linux-6.1.y | 7c15117f9468 | df3dc63b | .config | console log | report | | | info | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan | INFO: task hung in page_cache_ra_unbounded
2024/09/25 06:12 | linux-6.1.y | e526b12bf916 | 5643e0e9 | .config | console log | report | | | info | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan | INFO: task hung in page_cache_ra_unbounded
2024/09/24 05:33 | linux-6.1.y | e526b12bf916 | 89298aad | .config | console log | report | | | info | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan | INFO: task hung in page_cache_ra_unbounded
2025/01/06 05:57 | linux-6.1.y | 7dc732d24ff7 | f3558dbf | .config | console log | report | | | info | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan-arm64 | INFO: task hung in page_cache_ra_unbounded
* Struck through repros no longer work on HEAD.