syzbot


INFO: task hung in filemap_fault (6)

Status: upstream: reported on 2024/07/09 16:36
Subsystems: net serial
Reported-by: syzbot+cbbcd52813dd10467cfe@syzkaller.appspotmail.com
First crash: 299d, last: 3d15h
Discussions (2)
Title | Replies (including bot) | Last reply
[syzbot] Monthly serial report (Jul 2024) | 0 (1) | 2024/07/12 10:05
[syzbot] [net?] [serial?] INFO: task hung in filemap_fault (6) | 0 (1) | 2024/07/09 16:36
Similar bugs (9)
Kernel | Title | Count | Last | Reported | Patched | Status
upstream | INFO: task hung in filemap_fault (3) mm | 1 | 949d | 949d | 0/27 | closed as invalid on 2022/02/08 09:40
upstream | INFO: task hung in filemap_fault (5) mm fs | 12 | 393d | 771d | 0/27 | auto-obsoleted due to no activity on 2023/09/13 22:48
linux-6.1 | INFO: task hung in filemap_fault | 2 | 407d | 431d | 0/3 | auto-obsoleted due to no activity on 2023/09/09 11:34
linux-5.15 | INFO: task hung in filemap_fault | 1 | 326d | 326d | 0/3 | auto-obsoleted due to no activity on 2023/11/29 22:47
upstream | INFO: task hung in filemap_fault mm | 24 | 2378d | 2399d | 0/27 | closed as invalid on 2018/02/13 19:52
android-44 | INFO: task hung in filemap_fault | 3 | 2308d | 2309d | 0/2 | auto-closed as invalid on 2019/02/22 14:09
android-49 | INFO: task hung in filemap_fault | 4 | 2201d | 2247d | 0/3 | auto-closed as invalid on 2019/02/22 14:49
upstream | INFO: task hung in filemap_fault (2) mm | 5 | 2091d | 2295d | 0/27 | auto-closed as invalid on 2019/04/20 06:20
upstream | INFO: task hung in filemap_fault (4) mm | 1 | 867d | 867d | 0/27 | auto-closed as invalid on 2022/05/27 12:14

Sample crash report:
INFO: task syz.2.3226:14450 blocked for more than 145 seconds.
      Not tainted 6.10.0-rc7-syzkaller-00012-g34afb82a3c67 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.3226      state:D stack:21880 pid:14450 tgid:14449 ppid:13237  flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5408 [inline]
 __schedule+0x1796/0x49d0 kernel/sched/core.c:6745
 __schedule_loop kernel/sched/core.c:6822 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6837
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6894
 rwsem_down_read_slowpath kernel/locking/rwsem.c:1086 [inline]
 __down_read_common kernel/locking/rwsem.c:1250 [inline]
 __down_read kernel/locking/rwsem.c:1263 [inline]
 down_read+0x705/0xa40 kernel/locking/rwsem.c:1528
 filemap_invalidate_lock_shared include/linux/fs.h:846 [inline]
 filemap_fault+0xb5b/0x1760 mm/filemap.c:3320
 __do_fault+0x135/0x460 mm/memory.c:4556
 do_read_fault mm/memory.c:4921 [inline]
 do_fault mm/memory.c:5051 [inline]
 do_pte_missing mm/memory.c:3897 [inline]
 handle_pte_fault+0x3d15/0x7090 mm/memory.c:5381
 __handle_mm_fault mm/memory.c:5524 [inline]
 handle_mm_fault+0xfb0/0x19d0 mm/memory.c:5689
 faultin_page mm/gup.c:1290 [inline]
 __get_user_pages+0x6ef/0x1590 mm/gup.c:1589
 __get_user_pages_locked mm/gup.c:1857 [inline]
 get_user_pages_unlocked+0x2a8/0x9d0 mm/gup.c:2761
 hva_to_pfn_slow arch/x86/kvm/../../../virt/kvm/kvm_main.c:2817 [inline]
 hva_to_pfn+0x277/0xe70 arch/x86/kvm/../../../virt/kvm/kvm_main.c:2955
 __kvm_faultin_pfn arch/x86/kvm/mmu/mmu.c:4326 [inline]
 kvm_faultin_pfn+0x750/0x1c00 arch/x86/kvm/mmu/mmu.c:4438
 kvm_tdp_mmu_page_fault arch/x86/kvm/mmu/mmu.c:4605 [inline]
 kvm_tdp_page_fault+0x465/0x580 arch/x86/kvm/mmu/mmu.c:4658
 kvm_mmu_do_page_fault+0x57b/0xbe0 arch/x86/kvm/mmu/mmu_internal.h:330
 kvm_mmu_page_fault+0x29b/0x840 arch/x86/kvm/mmu/mmu.c:5889
 __vmx_handle_exit arch/x86/kvm/vmx/vmx.c:6625 [inline]
 vmx_handle_exit+0x11f1/0x1f80 arch/x86/kvm/vmx/vmx.c:6642
 vcpu_enter_guest arch/x86/kvm/x86.c:11147 [inline]
 vcpu_run+0x6ad0/0x87f0 arch/x86/kvm/x86.c:11251
 kvm_arch_vcpu_ioctl_run+0xa7e/0x1920 arch/x86/kvm/x86.c:11477
 kvm_vcpu_ioctl+0x7f5/0xd00 arch/x86/kvm/../../../virt/kvm/kvm_main.c:4424
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:907 [inline]
 __se_sys_ioctl+0xfc/0x170 fs/ioctl.c:893
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f448df75bd9
RSP: 002b:00007f448edc8048 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f448e103f60 RCX: 00007f448df75bd9
RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 0000000000000008
RBP: 00007f448dfe4e60 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f448e103f60 R15: 00007ffef923ed58
 </TASK>
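
The trace shows a KVM vCPU ioctl faulting in a guest page: get_user_pages_unlocked() has taken mm->mmap_lock for read, and filemap_fault() then blocks in down_read() on the mapping's invalidate_lock. The userspace program below is only a sketch of that shape, not kernel code: the two pthread rwlocks stand in for mmap_lock and invalidate_lock, and the never-returning write-side holder is an assumption for illustration, since whatever is pinning the real invalidate_lock is not visible in this report.

/*
 * Illustrative userspace analogue of the hang above (assumption: some
 * other path holds mapping->invalidate_lock for write and never lets
 * go; that holder is not visible in this report).  Build with
 * "cc hang.c -lpthread" (hypothetical file name).
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_rwlock_t mmap_lock = PTHREAD_RWLOCK_INITIALIZER;       /* stands in for mm->mmap_lock */
static pthread_rwlock_t invalidate_lock = PTHREAD_RWLOCK_INITIALIZER; /* stands in for mapping->invalidate_lock */

static void *stuck_writer(void *arg)
{
	(void)arg;
	/* models an exclusive holder, e.g. a truncate/hole-punch style path */
	pthread_rwlock_wrlock(&invalidate_lock);
	fprintf(stderr, "writer: holding invalidate_lock and never releasing it\n");
	pause();
	return NULL;
}

static void *faulting_task(void *arg)
{
	(void)arg;
	pthread_rwlock_rdlock(&mmap_lock);        /* like mmap_read_lock_killable() in __get_user_pages_locked() */
	fprintf(stderr, "fault: took mmap_lock, waiting on invalidate_lock...\n");
	pthread_rwlock_rdlock(&invalidate_lock);  /* like filemap_invalidate_lock_shared() in filemap_fault(): blocks here */
	pthread_rwlock_unlock(&invalidate_lock);  /* never reached while the writer sleeps */
	pthread_rwlock_unlock(&mmap_lock);
	return NULL;
}

int main(void)
{
	pthread_t w, f;

	pthread_create(&w, NULL, stuck_writer, NULL);
	sleep(1);                        /* let the writer win the race for the lock */
	pthread_create(&f, NULL, faulting_task, NULL);
	pthread_join(f, NULL);           /* blocks forever, like the task in the report */
	return 0;
}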

Showing all locks held in the system:
1 lock held by khungtaskd/30:
 #0: ffffffff8e333f20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:329 [inline]
 #0: ffffffff8e333f20 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:781 [inline]
 #0: ffffffff8e333f20 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6614
2 locks held by getty/4839:
 #0: ffff88802ae160a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002f162f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6b5/0x1e10 drivers/tty/n_tty.c:2211
3 locks held by kworker/u9:2/5085:
 #0: ffff8880656a6148 ((wq_completion)hci8){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3223 [inline]
 #0: ffff8880656a6148 ((wq_completion)hci8){+.+.}-{0:0}, at: process_scheduled_works+0x90a/0x1830 kernel/workqueue.c:3329
 #1: ffffc9000365fd00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3224 [inline]
 #1: ffffc9000365fd00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_scheduled_works+0x945/0x1830 kernel/workqueue.c:3329
 #2: ffff8880687e0d88 (&hdev->req_lock){+.+.}-{3:3}, at: hci_cmd_sync_work+0x1ec/0x400 net/bluetooth/hci_sync.c:322
2 locks held by kworker/u8:15/7377:
1 lock held by syz.1.3148/14197:
4 locks held by syz.2.3226/14450:
 #0: ffff88802f2b2970 (&vcpu->mutex){+.+.}-{3:3}, at: kvm_vcpu_ioctl+0x1d9/0xd00 arch/x86/kvm/../../../virt/kvm/kvm_main.c:4401
 #1: ffffc900043528b8 (&kvm->srcu){.+.+}-{0:0}, at: srcu_lock_release include/linux/srcu.h:122 [inline]
 #1: ffffc900043528b8 (&kvm->srcu){.+.+}-{0:0}, at: srcu_read_unlock include/linux/srcu.h:287 [inline]
 #1: ffffc900043528b8 (&kvm->srcu){.+.+}-{0:0}, at: kvm_vcpu_srcu_read_unlock include/linux/kvm_host.h:932 [inline]
 #1: ffffc900043528b8 (&kvm->srcu){.+.+}-{0:0}, at: vcpu_enter_guest arch/x86/kvm/x86.c:10980 [inline]
 #1: ffffc900043528b8 (&kvm->srcu){.+.+}-{0:0}, at: vcpu_run+0x5595/0x87f0 arch/x86/kvm/x86.c:11251
 #2: ffff8880677f3a98 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_lock_killable include/linux/mmap_lock.h:153 [inline]
 #2: ffff8880677f3a98 (&mm->mmap_lock){++++}-{3:3}, at: __get_user_pages_locked mm/gup.c:1832 [inline]
 #2: ffff8880677f3a98 (&mm->mmap_lock){++++}-{3:3}, at: get_user_pages_unlocked+0x14f/0x9d0 mm/gup.c:2761
 #3: ffff88801d487c48 (mapping.invalidate_lock#2){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:846 [inline]
 #3: ffff88801d487c48 (mapping.invalidate_lock#2){++++}-{3:3}, at: filemap_fault+0xb5b/0x1760 mm/filemap.c:3320
4 locks held by syz.4.3262/14576:
 #0: ffff88802f2b00b0 (&vcpu->mutex){+.+.}-{3:3}, at: kvm_vcpu_ioctl+0x1d9/0xd00 arch/x86/kvm/../../../virt/kvm/kvm_main.c:4401
 #1: ffffc9000438e8b8 (&kvm->srcu){.+.+}-{0:0}, at: srcu_lock_release include/linux/srcu.h:122 [inline]
 #1: ffffc9000438e8b8 (&kvm->srcu){.+.+}-{0:0}, at: srcu_read_unlock include/linux/srcu.h:287 [inline]
 #1: ffffc9000438e8b8 (&kvm->srcu){.+.+}-{0:0}, at: kvm_vcpu_srcu_read_unlock include/linux/kvm_host.h:932 [inline]
 #1: ffffc9000438e8b8 (&kvm->srcu){.+.+}-{0:0}, at: vcpu_enter_guest arch/x86/kvm/x86.c:10980 [inline]
 #1: ffffc9000438e8b8 (&kvm->srcu){.+.+}-{0:0}, at: vcpu_run+0x5595/0x87f0 arch/x86/kvm/x86.c:11251
 #2: ffff88802d7b6a18 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_lock_killable include/linux/mmap_lock.h:153 [inline]
 #2: ffff88802d7b6a18 (&mm->mmap_lock){++++}-{3:3}, at: __get_user_pages_locked mm/gup.c:1832 [inline]
 #2: ffff88802d7b6a18 (&mm->mmap_lock){++++}-{3:3}, at: get_user_pages_unlocked+0x14f/0x9d0 mm/gup.c:2761
 #3: ffff88801d487c48 (mapping.invalidate_lock#2){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:846 [inline]
 #3: ffff88801d487c48 (mapping.invalidate_lock#2){++++}-{3:3}, at: filemap_fault+0xb5b/0x1760 mm/filemap.c:3320
4 locks held by syz.0.3316/14720:
 #0: ffff88802f2b5230 (&vcpu->mutex){+.+.}-{3:3}, at: kvm_vcpu_ioctl+0x1d9/0xd00 arch/x86/kvm/../../../virt/kvm/kvm_main.c:4401
 #1: ffffc900048808b8 (&kvm->srcu){.+.+}-{0:0}, at: srcu_lock_release include/linux/srcu.h:122 [inline]
 #1: ffffc900048808b8 (&kvm->srcu){.+.+}-{0:0}, at: srcu_read_unlock include/linux/srcu.h:287 [inline]
 #1: ffffc900048808b8 (&kvm->srcu){.+.+}-{0:0}, at: kvm_vcpu_srcu_read_unlock include/linux/kvm_host.h:932 [inline]
 #1: ffffc900048808b8 (&kvm->srcu){.+.+}-{0:0}, at: vcpu_enter_guest arch/x86/kvm/x86.c:10980 [inline]
 #1: ffffc900048808b8 (&kvm->srcu){.+.+}-{0:0}, at: vcpu_run+0x5595/0x87f0 arch/x86/kvm/x86.c:11251
 #2: ffff8880657a6a18 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_lock_killable include/linux/mmap_lock.h:153 [inline]
 #2: ffff8880657a6a18 (&mm->mmap_lock){++++}-{3:3}, at: __get_user_pages_locked mm/gup.c:1832 [inline]
 #2: ffff8880657a6a18 (&mm->mmap_lock){++++}-{3:3}, at: get_user_pages_unlocked+0x14f/0x9d0 mm/gup.c:2761
 #3: ffff88801d487c48 (mapping.invalidate_lock#2){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:846 [inline]
 #3: ffff88801d487c48 (mapping.invalidate_lock#2){++++}-{3:3}, at: filemap_fault+0xb5b/0x1760 mm/filemap.c:3320
1 lock held by syz.0.3753/15944:
 #0: ffff88801d487c48 (mapping.invalidate_lock#2){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:846 [inline]
 #0: ffff88801d487c48 (mapping.invalidate_lock#2){++++}-{3:3}, at: page_cache_ra_unbounded+0xf7/0x7f0 mm/readahead.c:225
2 locks held by syz.4.3993/16610:
 #0: ffff88807aabd010 (&sb->s_type->i_mutex_key#9){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:791 [inline]
 #0: ffff88807aabd010 (&sb->s_type->i_mutex_key#9){+.+.}-{3:3}, at: __sock_release net/socket.c:658 [inline]
 #0: ffff88807aabd010 (&sb->s_type->i_mutex_key#9){+.+.}-{3:3}, at: sock_close+0x90/0x240 net/socket.c:1421
 #1: ffffffff8e3392f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
 #1: ffffffff8e3392f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x451/0x830 kernel/rcu/tree_exp.h:939
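
The same rw_semaphore, mapping.invalidate_lock#2 at ffff88801d487c48, appears in the lock lists of all three faulting vCPU tasks and of the readahead path in syz.0.3753. For reference, the helpers named in these traces are thin wrappers around that semaphore; the snippet below is a simplified paraphrase of include/linux/fs.h, not a verbatim copy. Fault and readahead paths take the lock shared, while truncation and hole-punching style paths take it exclusive, so a single stalled exclusive holder (or a long writer queue) backs up every reader listed here.

/* Simplified paraphrase of the invalidate_lock helpers in include/linux/fs.h
 * (kernel context assumed, i.e. struct address_space and down_read/down_write). */
static inline void filemap_invalidate_lock_shared(struct address_space *mapping)
{
	down_read(&mapping->invalidate_lock);   /* what filemap_fault() and readahead use */
}

static inline void filemap_invalidate_unlock_shared(struct address_space *mapping)
{
	up_read(&mapping->invalidate_lock);
}

static inline void filemap_invalidate_lock(struct address_space *mapping)
{
	down_write(&mapping->invalidate_lock);  /* exclusive side: truncation, hole punching, etc. */
}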

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 30 Comm: khungtaskd Not tainted 6.10.0-rc7-syzkaller-00012-g34afb82a3c67 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/07/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:223 [inline]
 watchdog+0xfde/0x1020 kernel/hung_task.c:379
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 12 Comm: kworker/u8:1 Not tainted 6.10.0-rc7-syzkaller-00012-g34afb82a3c67 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/07/2024
Workqueue: bat_events batadv_nc_worker
RIP: 0010:should_resched arch/x86/include/asm/preempt.h:103 [inline]
RIP: 0010:__local_bh_enable_ip+0x170/0x200 kernel/softirq.c:389
Code: 8b e8 14 58 23 0a 65 66 8b 05 84 d7 a9 7e 66 85 c0 75 5d bf 01 00 00 00 e8 8d 9f 0b 00 e8 78 8d 43 00 fb 65 8b 05 48 d7 a9 7e <85> c0 75 05 e8 07 b9 a6 ff 48 c7 44 24 20 0e 36 e0 45 49 c7 04 1c
RSP: 0018:ffffc90000117a00 EFLAGS: 00000286
RAX: 0000000080000000 RBX: 1ffff92000022f44 RCX: ffffffff8172d97a
RDX: dffffc0000000000 RSI: ffffffff8bcabb40 RDI: ffffffff8c1f15c0
RBP: ffffc90000117ab0 R08: ffffffff92f7165f R09: 1ffffffff25ee2cb
R10: dffffc0000000000 R11: fffffbfff25ee2cc R12: dffffc0000000000
R13: 1ffff92000022f48 R14: ffffc90000117a40 R15: 0000000000000201
FS:  0000000000000000(0000) GS:ffff8880b9500000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f14d1ed0ab8 CR3: 000000007ed88000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 spin_unlock_bh include/linux/spinlock.h:396 [inline]
 batadv_nc_purge_paths+0x30f/0x3b0 net/batman-adv/network-coding.c:471
 batadv_nc_worker+0x365/0x610 net/batman-adv/network-coding.c:722
 process_one_work kernel/workqueue.c:3248 [inline]
 process_scheduled_works+0xa2c/0x1830 kernel/workqueue.c:3329
 worker_thread+0x86d/0xd50 kernel/workqueue.c:3409
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
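
These backtraces come from the khungtaskd watchdog (kernel/hung_task.c), which periodically scans for tasks that have stayed in uninterruptible sleep without being scheduled across a full timeout interval, then dumps the blocked task's stack, the locks held in the system, and NMI backtraces for every CPU. The program below is only a rough userspace approximation of that check, to illustrate the detection idea: it flags tasks that are in D state on two scans spaced the timeout apart, whereas the real watchdog also verifies that the task's context-switch counters have not moved between scans.

/*
 * Rough userspace approximation of the hung-task check (illustration
 * only; not how kernel/hung_task.c is implemented internally).
 */
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define TIMEOUT_SECS 145          /* mirrors "blocked for more than 145 seconds" above */
#define MAX_PID      65536        /* assumes a default-ish pid_max; adjust as needed */

/* Read the state letter from /proc/<pid>/stat ("pid (comm) state ..."). */
static char task_state(int pid)
{
	char path[64], state = '?';
	FILE *f;

	snprintf(path, sizeof(path), "/proc/%d/stat", pid);
	f = fopen(path, "r");
	if (!f)
		return '?';
	if (fscanf(f, "%*d (%*[^)]) %c", &state) != 1)  /* naive: breaks on ')' in comm */
		state = '?';
	fclose(f);
	return state;
}

static void scan(char *seen_d, int report)
{
	DIR *proc = opendir("/proc");
	struct dirent *de;

	while (proc && (de = readdir(proc))) {
		int pid = atoi(de->d_name);

		if (pid <= 0 || pid >= MAX_PID || task_state(pid) != 'D')
			continue;
		if (!report)
			seen_d[pid] = 1;                 /* first pass: remember D-state tasks */
		else if (seen_d[pid])
			printf("task %d still in D state ~%d seconds later\n",
			       pid, TIMEOUT_SECS);
	}
	if (proc)
		closedir(proc);
}

int main(void)
{
	static char seen_d[MAX_PID];

	scan(seen_d, 0);
	sleep(TIMEOUT_SECS);
	scan(seen_d, 1);
	return 0;
}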

Crashes (39):
Time | Kernel | Commit | Syzkaller | Manager | Title
2024/07/09 19:11 | upstream | 34afb82a3c67 | 79d68ada | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/07/09 02:58 | upstream | 4376e966ecb7 | bc23a442 | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/07/08 03:19 | upstream | 256abd8e550c | bc4ebbb5 | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/07/05 16:26 | upstream | 661e504db04c | 2a40360c | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/07/05 16:25 | upstream | 661e504db04c | 2a40360c | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/07/05 16:25 | upstream | 661e504db04c | 2a40360c | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/07/05 16:25 | upstream | 661e504db04c | 2a40360c | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/07/05 16:25 | upstream | 661e504db04c | 2a40360c | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/07/05 16:19 | upstream | 661e504db04c | 2a40360c | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/07/05 16:18 | upstream | 661e504db04c | 2a40360c | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/07/05 16:18 | upstream | 661e504db04c | 2a40360c | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/07/05 16:12 | upstream | 661e504db04c | 2a40360c | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/07/05 11:32 | upstream | 661e504db04c | 2a40360c | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/07/05 11:30 | upstream | 661e504db04c | 2a40360c | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/07/05 11:30 | upstream | 661e504db04c | 2a40360c | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/07/05 11:22 | upstream | 661e504db04c | 2a40360c | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/07/05 03:38 | upstream | 795c58e4c7fc | dc6bbff0 | ci-upstream-kasan-gce-selinux-root | INFO: task hung in filemap_fault
2024/07/04 12:05 | upstream | 795c58e4c7fc | 409d975c | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/07/04 12:05 | upstream | 795c58e4c7fc | 409d975c | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/07/04 12:05 | upstream | 795c58e4c7fc | 409d975c | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/07/04 09:52 | upstream | 795c58e4c7fc | 409d975c | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/07/01 14:18 | upstream | 22a40d14b572 | b294e901 | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/05/26 17:39 | upstream | c13320499ba0 | a10a183e | ci2-upstream-fs | INFO: task hung in filemap_fault
2024/05/18 03:21 | upstream | 7ee332c9f12b | c0f1611a | ci-upstream-kasan-gce-root | INFO: task hung in filemap_fault
2024/05/15 05:57 | upstream | b850dc206a57 | fdb4c10c | ci2-upstream-fs | INFO: task hung in filemap_fault
2024/05/07 22:54 | upstream | dccb07f2914c | cb2dcc0e | ci2-upstream-fs | INFO: task hung in filemap_fault
2024/05/06 10:21 | upstream | dd5a440a31fa | d884b519 | ci2-upstream-fs | INFO: task hung in filemap_fault
2024/05/06 08:04 | upstream | dd5a440a31fa | 610f2a54 | ci2-upstream-fs | INFO: task hung in filemap_fault
2024/05/05 09:43 | upstream | 7367539ad4b0 | 610f2a54 | ci-upstream-kasan-gce-selinux-root | INFO: task hung in filemap_fault
2024/04/24 14:22 | upstream | 9d1ddab261f3 | 21339d7b | ci2-upstream-fs | INFO: task hung in filemap_fault
2024/04/23 20:45 | upstream | 71b1543c83d6 | 21339d7b | ci2-upstream-fs | INFO: task hung in filemap_fault
2024/04/23 20:42 | upstream | 71b1543c83d6 | 21339d7b | ci2-upstream-fs | INFO: task hung in filemap_fault
2024/03/23 14:17 | upstream | fe46a7dd189e | 0ea90952 | ci-upstream-kasan-gce-root | INFO: task hung in filemap_fault
2024/03/14 14:42 | upstream | 480e035fc4c7 | f919f202 | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2024/01/02 08:45 | upstream | 610a9b8f49fb | fb427a07 | ci-upstream-kasan-gce | INFO: task hung in filemap_fault
2023/12/22 07:07 | upstream | 9a6b294ab496 | 4f9530a3 | ci2-upstream-fs | INFO: task hung in filemap_fault
2023/12/09 21:00 | upstream | f2e8a57ee903 | 28b24332 | ci-upstream-kasan-gce-smack-root | INFO: task hung in filemap_fault
2023/09/17 17:38 | upstream | f0b0d403eabb | 0b6a67ac | ci-upstream-kasan-gce-root | INFO: task hung in filemap_fault
2023/12/16 21:50 | git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci | d5b235ec8eab | 3222d10c | ci-upstream-gce-arm64 | INFO: task hung in filemap_fault