syzbot


INFO: task hung in bch2_page_fault

Status: upstream: reported on 2024/12/13 22:41
Subsystems: bcachefs
Reported-by: syzbot+32415e0466b02533303c@syzkaller.appspotmail.com
First crash: 315d, last: 47d
Discussions (1)
  Title:                    [syzbot] [bcachefs?] INFO: task hung in bch2_page_fault
  Replies (including bot):  0 (1)
  Last reply:               2024/12/13 22:41

Sample crash report:
INFO: task syz.1.24:6064 blocked for more than 143 seconds.
      Not tainted 6.14.0-rc1-syzkaller-00181-g7ee983c850b4 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.24        state:D stack:11152 pid:6064  tgid:6063  ppid:5832   task_flags:0x400140 flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5377 [inline]
 __schedule+0x18bc/0x4c40 kernel/sched/core.c:6764
 __schedule_loop kernel/sched/core.c:6841 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6856
 __bch2_two_state_lock+0x229/0x2c0 fs/bcachefs/two_state_shared_lock.c:7
 bch2_two_state_lock fs/bcachefs/two_state_shared_lock.h:55 [inline]
 bch2_page_fault+0x31f/0x960 fs/bcachefs/fs-io-pagecache.c:592
 __do_fault+0x135/0x390 mm/memory.c:4977
 do_read_fault mm/memory.c:5392 [inline]
 do_fault mm/memory.c:5526 [inline]
 do_pte_missing mm/memory.c:4047 [inline]
 handle_pte_fault mm/memory.c:5889 [inline]
 __handle_mm_fault+0x4c44/0x70f0 mm/memory.c:6032
 handle_mm_fault+0x2c1/0x7e0 mm/memory.c:6201
 faultin_page mm/gup.c:1196 [inline]
 __get_user_pages+0x1a92/0x4140 mm/gup.c:1491
 populate_vma_page_range+0x264/0x330 mm/gup.c:1929
 __mm_populate+0x27a/0x460 mm/gup.c:2032
 mm_populate include/linux/mm.h:3386 [inline]
 vm_mmap_pgoff+0x303/0x430 mm/util.c:580
 ksys_mmap_pgoff+0x4eb/0x720 mm/mmap.c:607
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f352d58cde9
RSP: 002b:00007f352e407038 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f352d7a5fa0 RCX: 00007f352d58cde9
RDX: 00000000027fffff RSI: 0000000000600000 RDI: 0000400000000000
RBP: 00007f352d60e2a0 R08: 0000000000000005 R09: 0000000000000000
R10: 0000000004002011 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f352d7a5fa0 R15: 00007ffe7fb30b28
 </TASK>
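The task above is stuck in __bch2_two_state_lock, the shared pagecache lock that the bcachefs fault path takes (fs/bcachefs/two_state_shared_lock.c, fs-io-pagecache.c). As a rough userspace model of the primitive's semantics only, and not the actual bcachefs implementation, a two-state shared lock lets any number of holders share one of two states while the two states exclude each other:

/*
 * Illustrative model of a two-state shared lock: many holders may share
 * the lock in the same state (0 or 1), but the two states exclude each
 * other.  This is a sketch of the semantics the blocked task is waiting
 * on, not the code in fs/bcachefs/two_state_shared_lock.{c,h}.
 */
#include <pthread.h>

struct two_state_lock {
	pthread_mutex_t	mtx;
	pthread_cond_t	wait;
	long		v;	/* > 0: held in state 1, < 0: held in state 0 */
};

#define TWO_STATE_LOCK_INIT \
	{ PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0 }

void two_state_lock(struct two_state_lock *l, int state)
{
	pthread_mutex_lock(&l->mtx);
	/* Wait while the lock is held in the opposite state. */
	while (state ? l->v < 0 : l->v > 0)
		pthread_cond_wait(&l->wait, &l->mtx);
	l->v += state ? 1 : -1;
	pthread_mutex_unlock(&l->mtx);
}

void two_state_unlock(struct two_state_lock *l, int state)
{
	pthread_mutex_lock(&l->mtx);
	l->v -= state ? 1 : -1;
	if (!l->v)
		pthread_cond_broadcast(&l->wait);
	pthread_mutex_unlock(&l->mtx);
}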
INFO: task syz.1.24:6123 blocked for more than 143 seconds.
      Not tainted 6.14.0-rc1-syzkaller-00181-g7ee983c850b4 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.24        state:D stack:12752 pid:6123  tgid:6063  ppid:5832   task_flags:0x440040 flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5377 [inline]
 __schedule+0x18bc/0x4c40 kernel/sched/core.c:6764
 __schedule_loop kernel/sched/core.c:6841 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6856
 __bch2_two_state_lock+0x229/0x2c0 fs/bcachefs/two_state_shared_lock.c:7
 bch2_two_state_lock fs/bcachefs/two_state_shared_lock.h:55 [inline]
 bch2_readahead+0x9e1/0x1240 fs/bcachefs/fs-io-buffered.c:272
 read_pages+0x179/0x570 mm/readahead.c:161
 page_cache_ra_order+0xa36/0xca0 mm/readahead.c:516
 filemap_readahead mm/filemap.c:2549 [inline]
 filemap_get_pages+0x9e4/0x1fb0 mm/filemap.c:2594
 filemap_splice_read+0x68e/0xef0 mm/filemap.c:2971
 do_splice_read fs/splice.c:985 [inline]
 splice_direct_to_actor+0x4af/0xc80 fs/splice.c:1089
 do_splice_direct_actor fs/splice.c:1207 [inline]
 do_splice_direct+0x289/0x3e0 fs/splice.c:1233
 do_sendfile+0x564/0x8a0 fs/read_write.c:1363
 __do_sys_sendfile64 fs/read_write.c:1424 [inline]
 __se_sys_sendfile64+0x17c/0x1e0 fs/read_write.c:1410
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f352d58cde9
RSP: 002b:00007f352e3e6038 EFLAGS: 00000246 ORIG_RAX: 0000000000000028
RAX: ffffffffffffffda RBX: 00007f352d7a6080 RCX: 00007f352d58cde9
RDX: 0000000000000000 RSI: 0000000000000006 RDI: 0000000000000005
RBP: 00007f352d60e2a0 R08: 0000000000000000 R09: 0000000000000000
R10: 0001000000201005 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f352d7a6080 R15: 00007ffe7fb30b28
 </TASK>
INFO: task syz.1.24:6126 blocked for more than 144 seconds.
      Not tainted 6.14.0-rc1-syzkaller-00181-g7ee983c850b4 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.24        state:D stack:20024 pid:6126  tgid:6063  ppid:5832   task_flags:0x400040 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5377 [inline]
 __schedule+0x18bc/0x4c40 kernel/sched/core.c:6764
 __schedule_loop kernel/sched/core.c:6841 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6856
 io_schedule+0x8d/0x110 kernel/sched/core.c:7689
 folio_wait_bit_common+0x839/0xee0 mm/filemap.c:1318
 folio_lock include/linux/pagemap.h:1163 [inline]
 bch2_mark_pagecache_reserved+0x388/0xc60 fs/bcachefs/fs-io-pagecache.c:314
 __bchfs_fallocate+0x180f/0x2770 fs/bcachefs/fs-io.c:706
 bchfs_fallocate+0x31b/0x730 fs/bcachefs/fs-io.c:762
 bch2_fallocate_dispatch+0x3ac/0x540 fs/bcachefs/fs-io.c:809
 vfs_fallocate+0x623/0x7a0 fs/open.c:338
 ksys_fallocate fs/open.c:362 [inline]
 __do_sys_fallocate fs/open.c:367 [inline]
 __se_sys_fallocate fs/open.c:365 [inline]
 __x64_sys_fallocate+0xbc/0x110 fs/open.c:365
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f352d58cde9
RSP: 002b:00007f352e3c5038 EFLAGS: 00000246 ORIG_RAX: 000000000000011d
RAX: ffffffffffffffda RBX: 00007f352d7a6160 RCX: 00007f352d58cde9
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000004
RBP: 00007f352d60e2a0 R08: 0000000000000000 R09: 0000000000000000
R10: 00000000001001f0 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007f352d7a6160 R15: 00007ffe7fb30b28
 </TASK>
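No reproducer is attached to this report; the registers in the three traces above only show which syscalls the blocked tasks were executing (mmap via a populating mapping, sendfile-driven readahead, and fallocate, all against the same bcachefs inode). The following is a hypothetical sketch of that syscall mix, with made-up file paths, sizes and flags rather than the syzkaller program's arguments:

/*
 * Sketch of the concurrent workload implied by the three blocked tasks:
 * a populating mmap (faults through bch2_page_fault), a sendfile that
 * drives bch2_readahead, and an fallocate (bch2_fallocate_dispatch) on
 * the same file.  Paths, sizes and flags are illustrative only.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <sys/mman.h>
#include <sys/sendfile.h>
#include <unistd.h>

static int fd;

static void *do_mmap(void *arg)
{
	/* MAP_POPULATE faults every page in, entering the bcachefs fault path. */
	mmap(NULL, 1 << 20, PROT_READ | PROT_WRITE,
	     MAP_SHARED | MAP_POPULATE, fd, 0);
	return arg;
}

static void *do_sendfile(void *arg)
{
	/* Splice-read from the same file, triggering readahead. */
	int null = open("/dev/null", O_WRONLY);

	sendfile(null, fd, NULL, 1 << 20);
	close(null);
	return arg;
}

static void *do_fallocate(void *arg)
{
	/* Takes the inode lock and walks the pagecache of the same inode. */
	fallocate(fd, 0, 0, 1 << 20);
	return arg;
}

int main(void)
{
	pthread_t t[3];

	/* Assumes a bcachefs filesystem mounted at /mnt (hypothetical path). */
	fd = open("/mnt/file0", O_RDWR | O_CREAT, 0666);

	pthread_create(&t[0], NULL, do_mmap, NULL);
	pthread_create(&t[1], NULL, do_sendfile, NULL);
	pthread_create(&t[2], NULL, do_fallocate, NULL);
	for (int i = 0; i < 3; i++)
		pthread_join(t[i], NULL);
	return 0;
}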

Showing all locks held in the system:
1 lock held by khungtaskd/30:
 #0: ffffffff8e9387e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8e9387e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8e9387e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6746
3 locks held by kworker/u8:5/440:
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3317
 #1: ffffc90003117c60 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
 #1: ffffc90003117c60 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317
 #2: ffffffff8fcae248 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:285
5 locks held by kworker/u8:8/4491:
 #0: ffff88801baf3148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
 #0: ffff88801baf3148 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3317
 #1: ffffc9000f147c60 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
 #1: ffffc9000f147c60 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317
 #2: ffffffff8fca1cd0 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0x17a/0xd60 net/core/net_namespace.c:606
 #3: ffffffff8fcae248 (rtnl_mutex){+.+.}-{4:4}, at: default_device_exit_batch+0xe9/0xaa0 net/core/dev.c:12337
 #4: ffffffff8e93dcb8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:334 [inline]
 #4: ffffffff8e93dcb8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x451/0x830 kernel/rcu/tree_exp.h:996
2 locks held by syslogd/5179:
 #0: ffff8880b863e7d8 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:598
 #1: ffff8880b8628948 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x41d/0x7a0 kernel/sched/psi.c:987
1 lock held by dhcpcd/5491:
 #0: ffffffff8fcae248 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:129 [inline]
 #0: ffffffff8fcae248 (rtnl_mutex){+.+.}-{4:4}, at: devinet_ioctl+0x31a/0x1ac0 net/ipv4/devinet.c:1129
2 locks held by getty/5581:
 #0: ffff8880315e40a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x6a6/0x1e00 drivers/tty/n_tty.c:2211
1 lock held by syz.1.24/6064:
 #0: ffff888033e8cfe0 (&mm->mmap_lock){++++}-{4:4}, at: mmap_read_lock include/linux/mmap_lock.h:190 [inline]
 #0: ffff888033e8cfe0 (&mm->mmap_lock){++++}-{4:4}, at: __mm_populate+0x1b0/0x460 mm/gup.c:2011
1 lock held by syz.1.24/6123:
 #0: ffff888055d902e8 (mapping.invalidate_lock#7){.+.+}-{4:4}, at: filemap_invalidate_lock_shared include/linux/fs.h:932 [inline]
 #0: ffff888055d902e8 (mapping.invalidate_lock#7){.+.+}-{4:4}, at: page_cache_ra_order+0x45d/0xca0 mm/readahead.c:492
3 locks held by syz.1.24/6126:
 #0: ffff888079088420 (sb_writers#17){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:3035 [inline]
 #0: ffff888079088420 (sb_writers#17){.+.+}-{0:0}, at: vfs_fallocate+0x59d/0x7a0 fs/open.c:337
 #1: ffff888055d90148 (&sb->s_type->i_mutex_key#23){++++}-{4:4}, at: inode_lock include/linux/fs.h:877 [inline]
 #1: ffff888055d90148 (&sb->s_type->i_mutex_key#23){++++}-{4:4}, at: bch2_fallocate_dispatch+0x1e2/0x540 fs/bcachefs/fs-io.c:800
 #2: ffff888058384378 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:164 [inline]
 #2: ffff888058384378 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:256 [inline]
 #2: ffff888058384378 (&c->btree_trans_barrier){.+.+}-{0:0}, at: __bch2_trans_get+0x7e4/0xd30 fs/bcachefs/btree_iter.c:3377
3 locks held by syz-executor/6169:
 #0: ffff888054764d80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close net/bluetooth/hci_core.c:480 [inline]
 #0: ffff888054764d80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_unregister_dev+0x203/0x510 net/bluetooth/hci_core.c:2677
 #1: ffff888054764078 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x5c8/0x11c0 net/bluetooth/hci_sync.c:5185
 #2: ffffffff8e93dcb8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:334 [inline]
 #2: ffffffff8e93dcb8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x451/0x830 kernel/rcu/tree_exp.h:996
7 locks held by syz-executor/9205:
 #0: ffff888035f8c420 (sb_writers#8){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:3035 [inline]
 #0: ffff888035f8c420 (sb_writers#8){.+.+}-{0:0}, at: vfs_write+0x225/0xd10 fs/read_write.c:675
 #1: ffff8880309e6888 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x1ea/0x500 fs/kernfs/file.c:325
 #2: ffff8880278b5878 (kn->active#49){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x20e/0x500 fs/kernfs/file.c:326
 #3: ffffffff8f55dc48 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xfc/0x480 drivers/net/netdevsim/bus.c:216
 #4: ffff88805c07c0e8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:1030 [inline]
 #4: ffff88805c07c0e8 (&dev->mutex){....}-{4:4}, at: __device_driver_lock drivers/base/dd.c:1095 [inline]
 #4: ffff88805c07c0e8 (&dev->mutex){....}-{4:4}, at: device_release_driver_internal+0xce/0x7c0 drivers/base/dd.c:1293
 #5: ffff88805c07d250 (&devlink->lock_key#4){+.+.}-{4:4}, at: nsim_drv_remove+0x50/0x160 drivers/net/netdevsim/dev.c:1675
 #6: ffffffff8fcae248 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:129 [inline]
 #6: ffffffff8fcae248 (rtnl_mutex){+.+.}-{4:4}, at: unregister_netdevice_notifier_net+0x89/0x3a0 net/core/dev.c:2057
2 locks held by syz-executor/9380:
 #0: ffffffff8f45af20 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8f45af20 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8f45af20 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x22/0x250 net/core/rtnetlink.c:564
 #1: ffffffff8fcae248 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #1: ffffffff8fcae248 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:335 [inline]
 #1: ffffffff8fcae248 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0xce2/0x2210 net/core/rtnetlink.c:4020
2 locks held by syz.3.374/9442:
 #0: ffff888034d28d80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close net/bluetooth/hci_core.c:480 [inline]
 #0: ffff888034d28d80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_unregister_dev+0x203/0x510 net/bluetooth/hci_core.c:2677
 #1: ffff888034d28078 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x5c8/0x11c0 net/bluetooth/hci_sync.c:5185
1 lock held by syz.8.373/9445:
 #0: ffffffff8fcae248 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:698 [inline]
 #0: ffffffff8fcae248 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x3b/0x1b0 drivers/net/tun.c:3517

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 30 Comm: khungtaskd Not tainted 6.14.0-rc1-syzkaller-00181-g7ee983c850b4 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:236 [inline]
 watchdog+0x1058/0x10a0 kernel/hung_task.c:399
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0 skipped: idling at native_safe_halt arch/x86/include/asm/irqflags.h:48 [inline]
NMI backtrace for cpu 0 skipped: idling at arch_safe_halt arch/x86/include/asm/irqflags.h:106 [inline]
NMI backtrace for cpu 0 skipped: idling at acpi_safe_halt+0x21/0x30 drivers/acpi/processor_idle.c:111

Crashes (35):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/02/08 08:09 upstream 7ee983c850b4 ef44b750 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2025/01/12 03:09 upstream b62cef9a5c67 6dbc6a9b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2025/01/11 13:28 upstream 77a903cd8e5a 6dbc6a9b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/12/27 07:37 upstream d6ef8b40d075 d3ccff63 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/12/13 11:25 upstream f932fb9b4074 3547e30f .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/12/09 07:07 upstream 62b5a46999c7 9ac0fdc6 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/12/01 16:24 upstream bcc8eda6d349 68914665 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/11/29 16:04 upstream 7af08b57bcb9 5df23865 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/11/26 14:58 upstream 7eef7e306d3c e9a9a9f2 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/10/25 22:51 upstream ae90f6a6170d 045e728d .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/10/20 08:38 upstream f9e4825524aa cd6fc0a3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/09/28 02:53 upstream 3630400697a3 440b26ec .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/09/27 15:19 upstream 075dbe9f6e3c 9314348a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/09/25 15:07 upstream 684a64bf32b6 349a68c4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/09/25 14:19 upstream 684a64bf32b6 349a68c4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/09/24 14:57 upstream abf2050f51fd 5643e0e9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/09/24 14:56 upstream abf2050f51fd 5643e0e9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/09/23 22:35 upstream f8eb5bd9a818 89298aad .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/09/19 05:30 upstream 4a39ac5b7d62 c673ca06 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/07/07 13:16 upstream 22f902dfc51e bc4ebbb5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in bch2_page_fault
2024/07/07 13:06 upstream 22f902dfc51e bc4ebbb5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in bch2_page_fault
2024/07/07 06:56 upstream 22f902dfc51e 2a40360c .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/07/02 22:16 upstream 1dfe225e9af5 8373af66 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in bch2_page_fault
2024/07/02 08:49 upstream 73e931504f8e b294e901 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/06/13 12:08 upstream cea2a26553ac 2aa5052f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in bch2_page_fault
2024/06/08 07:49 upstream 96e09b8f8166 82c05ab8 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/06/05 12:06 upstream 32f88d65f01b e1e2c66e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in bch2_page_fault
2024/06/05 09:34 upstream 32f88d65f01b e1e2c66e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/05/31 03:18 upstream 4a4be1ad3a6e 34889ee3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/05/31 02:22 upstream 4a4be1ad3a6e 34889ee3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/05/20 01:12 upstream 61307b7be41a c0f1611a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/05/17 02:14 upstream 3c999d1ae3c7 c2e07261 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in bch2_page_fault
2024/12/09 22:24 linux-next af2ea8ab7a54 9ac0fdc6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in bch2_page_fault
2024/09/28 16:13 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 5f5673607153 440b26ec .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in bch2_page_fault
2024/07/25 20:01 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci c912bf709078 32fcf98f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in bch2_page_fault