syzbot


INFO: task hung in gfs2_recover_journal (4)

Status: upstream: reported C repro on 2026/03/23 20:13
Subsystems: gfs2
Reported-by: syzbot+9013411dc43f3582823a@syzkaller.appspotmail.com
First crash: 281d, last: 17h51m
✨ AI Jobs (1)
ID Workflow Result Correct Bug Created Started Finished Revision Error
e8162464-150e-4201-be78-61372b01593a repro INFO: task hung in gfs2_recover_journal (4) 2026/03/07 14:45 2026/03/07 14:45 2026/03/07 14:54 31e9c887f7dc24e04b3ca70d0d54fc34141844b0
Cause bisection: failed (error log, bisect log)
Discussions (4)
Title Replies (including bot) Last reply
[PATCH v2] gfs2: reject journal extents with gaps 3 (3) 2026/04/14 02:02
[PATCH] gfs2: fix hung task in gfs2_jhead_process_page 5 (5) 2026/03/25 23:54
[syzbot] [gfs2?] INFO: task hung in gfs2_recover_journal (4) 3 (10) 2026/03/25 11:24
[PATCH] gfs2: prevent corrupt data from entering jextent 1 (1) 2026/03/25 07:50
Similar bugs (5)
Kernel Title Rank Repro Cause bisect Fix bisect Count Last Reported Patched Status
linux-6.1 INFO: task hung in gfs2_recover_journal 1 1 782d 782d 0/3 auto-obsoleted due to no activity on 2024/06/09 15:26
upstream INFO: task hung in gfs2_recover_journal (3) gfs2 1 1 772d 772d 0/29 auto-obsoleted due to no activity on 2024/06/09 17:03
linux-5.15 INFO: task hung in gfs2_recover_journal 1 1 364d 364d 0/3 auto-obsoleted due to no activity on 2025/08/01 12:08
upstream INFO: task hung in gfs2_recover_journal (2) gfs2 1 2 881d 892d 0/29 auto-obsoleted due to no activity on 2024/02/21 00:05
upstream INFO: task hung in gfs2_recover_journal gfs2 1 5 1066d 1242d 0/29 auto-obsoleted due to no activity on 2023/08/20 05:04
Last patch testing requests (6)
Created Duration User Patch Repo Result
2026/03/25 10:25 51m eadavis@qq.com patch https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git 09c0f7f1bcdb report log
2026/03/25 07:49 26m eadavis@qq.com patch linux-next report log
2026/03/25 07:37 25m eadavis@qq.com patch linux-next report log
2026/03/24 09:27 17m kartikey406@gmail.com patch linux-next report log
2026/03/24 02:30 16m kartikey406@gmail.com patch linux-next report log
2026/03/24 01:49 17m kartikey406@gmail.com patch linux-next report log

Sample crash report:
INFO: task kworker/0:2:821 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:2     state:D stack:22144 pid:821   tgid:821   ppid:2      task_flags:0x4208060 flags:0x00080000
Workqueue: gfs2_recovery gfs2_recover_func
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5387 [inline]
 __schedule+0x169e/0x54f0 kernel/sched/core.c:7188
 __schedule_loop kernel/sched/core.c:7267 [inline]
 schedule+0x164/0x360 kernel/sched/core.c:7282
 io_schedule+0x7f/0xd0 kernel/sched/core.c:8109
 folio_wait_bit_common+0x6dd/0xbc0 mm/filemap.c:1324
 folio_wait_locked include/linux/pagemap.h:1234 [inline]
 gfs2_jhead_process_page+0x175/0x670 fs/gfs2/lops.c:470
 gfs2_find_jhead+0xbd2/0xd30 fs/gfs2/lops.c:586
 gfs2_recover_func+0x6cf/0x1f60 fs/gfs2/recovery.c:459
 process_one_work+0x9a3/0x1710 kernel/workqueue.c:3312
 process_scheduled_works kernel/workqueue.c:3403 [inline]
 worker_thread+0xba8/0x11e0 kernel/workqueue.c:3489
 kthread+0x388/0x470 kernel/kthread.c:436
 ret_from_fork+0x514/0xb70 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
INFO: task syz.0.17:6020 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.17        state:D stack:23424 pid:6020  tgid:6020  ppid:5961   task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5387 [inline]
 __schedule+0x169e/0x54f0 kernel/sched/core.c:7188
 __schedule_loop kernel/sched/core.c:7267 [inline]
 schedule+0x164/0x360 kernel/sched/core.c:7282
 bit_wait+0x11/0xd0 kernel/sched/wait_bit.c:240
 __wait_on_bit+0xb9/0x300 kernel/sched/wait_bit.c:52
 out_of_line_wait_on_bit+0x13b/0x190 kernel/sched/wait_bit.c:67
 wait_on_bit include/linux/wait_bit.h:77 [inline]
 gfs2_recover_journal+0xda/0x140 fs/gfs2/recovery.c:579
 init_journal+0x16ad/0x2280 fs/gfs2/ops_fstype.c:794
 init_inodes+0xdb/0x320 fs/gfs2/ops_fstype.c:844
 gfs2_fill_super+0x1a92/0x2220 fs/gfs2/ops_fstype.c:1250
 get_tree_bdev_flags+0x431/0x4f0 fs/super.c:1694
 gfs2_get_tree+0x51/0x1e0 fs/gfs2/ops_fstype.c:1332
 vfs_get_tree+0x92/0x2a0 fs/super.c:1754
 fc_mount fs/namespace.c:1193 [inline]
 do_new_mount_fc fs/namespace.c:3758 [inline]
 do_new_mount+0x341/0xd30 fs/namespace.c:3834
 do_mount fs/namespace.c:4167 [inline]
 __do_sys_mount fs/namespace.c:4383 [inline]
 __se_sys_mount+0x31d/0x420 fs/namespace.c:4360
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x15f/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f12ff6dda8a
RSP: 002b:00007ffe6dab6108 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007ffe6dab6190 RCX: 00007f12ff6dda8a
RDX: 000020000001f680 RSI: 000020000001f6c0 RDI: 00007ffe6dab6150
RBP: 000020000001f680 R08: 00007ffe6dab6190 R09: 0000000000000084
R10: 0000000000000084 R11: 0000000000000246 R12: 000020000001f6c0
R13: 00007ffe6dab6150 R14: 000000000001f707 R15: 0000200000000000
 </TASK>

Showing all locks held in the system:
4 locks held by pr/legacy/17:
 #0: ffffffff8dfba3a0 (console_lock){+.+.}-{0:0}, at: legacy_kthread_func+0x1a3/0x250 kernel/printk/printk.c:3711
 #1: ffffffff8dea1c98 (console_srcu){....}-{0:0}, at: rcu_try_lock_acquire include/linux/rcupdate.h:305 [inline]
 #1: ffffffff8dea1c98 (console_srcu){....}-{0:0}, at: srcu_read_lock_nmisafe include/linux/srcu.h:428 [inline]
 #1: ffffffff8dea1c98 (console_srcu){....}-{0:0}, at: console_srcu_read_lock kernel/printk/printk.c:291 [inline]
 #1: ffffffff8dea1c98 (console_srcu){....}-{0:0}, at: console_flush_one_record+0xfa/0xb90 kernel/printk/printk.c:3246
 #2: ffffffff99bd6738 (&port_lock_key){+.+.}-{3:3}, at: spin_lock include/linux/spinlock_rt.h:45 [inline]
 #2: ffffffff99bd6738 (&port_lock_key){+.+.}-{3:3}, at: uart_port_lock_irqsave include/linux/serial_core.h:717 [inline]
 #2: ffffffff99bd6738 (&port_lock_key){+.+.}-{3:3}, at: serial8250_console_write+0x179/0x1b90 drivers/tty/serial/8250/8250_port.c:3316
 #3: ffffffff8dfc8100 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:300 [inline]
 #3: ffffffff8dfc8100 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:838 [inline]
 #3: ffffffff8dfc8100 (rcu_read_lock){....}-{1:3}, at: __rt_spin_lock kernel/locking/spinlock_rt.c:50 [inline]
 #3: ffffffff8dfc8100 (rcu_read_lock){....}-{1:3}, at: rt_spin_lock+0x1e0/0x400 kernel/locking/spinlock_rt.c:57
1 lock held by khungtaskd/38:
 #0: ffffffff8dfc8100 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:300 [inline]
 #0: ffffffff8dfc8100 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:838 [inline]
 #0: ffffffff8dfc8100 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6777
2 locks held by kworker/0:2/821:
 #0: ffff88801f330538 ((wq_completion)gfs2_recovery){+.+.}-{0:0}, at: process_one_work+0x890/0x1710 kernel/workqueue.c:3284
 #1: ffffc90005167c40 ((work_completion)(&jd->jd_work)){+.+.}-{0:0}, at: process_one_work+0x8b7/0x1710 kernel/workqueue.c:3285
2 locks held by getty/5580:
 #0: ffff88802cf230a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90003cbe2e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x462/0x13a0 drivers/tty/n_tty.c:2211
1 lock held by syz.0.17/6020:
 #0: ffff88803d1c40d0 (&type->s_umount_key#54/1){+.+.}-{4:4}, at: alloc_super+0x28c/0xac0 fs/super.c:345
3 locks held by syz-executor/6024:
 #0: ffff8880350fc480 (sb_writers#5){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:493
 #1: ffff88803ae00388 (&type->i_mutex_dir_key#5/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1074 [inline]
 #1: ffff88803ae00388 (&type->i_mutex_dir_key#5/1){+.+.}-{4:4}, at: __start_dirop fs/namei.c:2914 [inline]
 #1: ffff88803ae00388 (&type->i_mutex_dir_key#5/1){+.+.}-{4:4}, at: start_dirop fs/namei.c:2938 [inline]
 #1: ffff88803ae00388 (&type->i_mutex_dir_key#5/1){+.+.}-{4:4}, at: filename_rmdir+0x1cd/0x520 fs/namei.c:5414
 #2: ffffffff8e6e7878 (tomoyo_ss){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:187 [inline]
 #2: ffffffff8e6e7878 (tomoyo_ss){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:294 [inline]
 #2: ffffffff8e6e7878 (tomoyo_ss){.+.+}-{0:0}, at: tomoyo_read_lock security/tomoyo/common.h:1112 [inline]
 #2: ffffffff8e6e7878 (tomoyo_ss){.+.+}-{0:0}, at: tomoyo_path_perm+0x251/0x560 security/tomoyo/file.c:826
3 locks held by udevd/6028:
 #0: ffff88802330b108 (&sb->s_type->i_mutex_key#10){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:1044 [inline]
 #0: ffff88802330b108 (&sb->s_type->i_mutex_key#10){++++}-{4:4}, at: blkdev_read_iter+0x2ff/0x440 block/fops.c:854
 #1: ffff88802330b2d8 (mapping.invalidate_lock){++++}-{4:4}, at: filemap_invalidate_lock_shared include/linux/fs.h:1094 [inline]
 #1: ffff88802330b2d8 (mapping.invalidate_lock){++++}-{4:4}, at: do_page_cache_ra mm/readahead.c:333 [inline]
 #1: ffff88802330b2d8 (mapping.invalidate_lock){++++}-{4:4}, at: force_page_cache_ra+0x263/0x2e0 mm/readahead.c:364
 #2: ffffffff8dfc8100 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:300 [inline]
 #2: ffffffff8dfc8100 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:838 [inline]
 #2: ffffffff8dfc8100 (rcu_read_lock){....}-{1:3}, at: blk_mq_dispatch_queue_requests+0x552/0x800 block/blk-mq.c:2908

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 38 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
 <TASK>
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x274/0x2d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x135/0x170 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:353 [inline]
 watchdog+0xfd3/0x1030 kernel/hung_task.c:561
 kthread+0x388/0x470 kernel/kthread.c:436
 ret_from_fork+0x514/0xb70 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 17 Comm: pr/legacy Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
RIP: 0010:preempt_count_sub+0x5/0x170 kernel/sched/core.c:5877
Code: ff ff ff 48 c7 c7 14 55 8d 8f e8 46 63 9a 00 e9 62 ff ff ff 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 55 <53> 48 bb 00 00 00 00 00 fc ff df 48 c7 c0 e0 b3 81 99 48 c1 e8 03
RSP: 0018:ffffc900001679c0 EFLAGS: 00000216
RAX: 0000000057f985da RBX: 0000000000000899 RCX: 0000000000000000
RDX: 00000000000000b7 RSI: ffffffff8ba84940 RDI: 0000000000000001
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
R10: dffffc0000000000 R11: ffffffff8b350800 R12: 1ffffffff337ad3e
R13: 00000000ffffffff R14: 000000b757f97d32 R15: 00000000000008a8
FS:  0000000000000000(0000) GS:ffff888125eba000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055558e5c4a68 CR3: 000000003b756000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 delay_tsc+0xa1/0xc0 arch/x86/lib/delay.c:96
 udelay include/asm-generic/delay.h:62 [inline]
 wait_for_lsr+0x166/0x2f0 drivers/tty/serial/8250/8250_port.c:1976
 serial8250_fifo_wait_for_lsr_thre drivers/tty/serial/8250/8250_port.c:3207 [inline]
 serial8250_console_fifo_write drivers/tty/serial/8250/8250_port.c:3272 [inline]
 serial8250_console_write+0x120d/0x1b90 drivers/tty/serial/8250/8250_port.c:3357
 console_emit_next_record kernel/printk/printk.c:3163 [inline]
 console_flush_one_record+0x68b/0xb90 kernel/printk/printk.c:3269
 legacy_kthread_func+0x1b6/0x250 kernel/printk/printk.c:3712
 kthread+0x388/0x470 kernel/kthread.c:436
 ret_from_fork+0x514/0xb70 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
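The watchdog output above comes from the kernel's hung-task detector, which is configured through sysctls (the log itself hints at one: `echo 0 > /proc/sys/kernel/hung_task_timeout_secs`). A minimal sketch, assuming a Linux host with the detector compiled in (CONFIG_DETECT_HUNG_TASK), of inspecting the relevant knobs:

```shell
# Inspect the hung-task detector configuration. These files only exist
# when the kernel was built with CONFIG_DETECT_HUNG_TASK, so probe for
# readability before printing each value.
for knob in hung_task_timeout_secs hung_task_panic hung_task_warnings; do
  path="/proc/sys/kernel/$knob"
  if [ -r "$path" ]; then
    printf '%s = %s\n' "$knob" "$(cat "$path")"
  else
    printf '%s: not available on this kernel\n' "$knob"
  fi
done
```

Writing 0 to hung_task_timeout_secs, as the report's own hint suggests, silences the warning entirely; a larger value only lengthens the grace period a task may sit in uninterruptible (D) state before being flagged, so it hides rather than fixes hangs like the one above.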

Crashes (56):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/04/18 20:55 linux-next c7275b05bc42 303e2802 .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro (corrupt fs)] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/03/28 16:37 linux-next 3b058d1aeeef 356bdfc9 .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro (corrupt fs)] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/03/22 05:52 linux-next 785f0eb2f85d 5b92003d .config console log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro (corrupt fs)] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/04/22 06:06 upstream 4ee64205ffaa 0b6ab7ec .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/01/30 01:12 upstream 8dfce8991b95 aeb6fdd5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/01/25 17:46 upstream d91a46d6805a 40acda8a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/01/18 14:31 upstream e84d960149e7 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in gfs2_recover_journal
2025/11/14 06:08 upstream 2ccec5944606 07e030de .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in gfs2_recover_journal
2025/10/14 17:36 upstream 3a8660878839 b6605ba8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in gfs2_recover_journal
2025/09/16 22:20 upstream 46a51f4f5eda e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in gfs2_recover_journal
2025/09/16 09:15 upstream 46a51f4f5eda e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in gfs2_recover_journal
2025/09/12 09:33 upstream 02ffd6f89c50 e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in gfs2_recover_journal
2025/08/14 04:28 upstream 91325f31afc1 22ec1469 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/04/07 21:40 linux-next cc13002a9f98 628666c6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/04/07 07:08 linux-next cc13002a9f98 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/04/07 05:42 linux-next cc13002a9f98 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/04/07 02:39 linux-next cc13002a9f98 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/04/06 22:31 linux-next cc13002a9f98 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/04/06 19:24 linux-next cc13002a9f98 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/04/06 16:27 linux-next cc13002a9f98 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/04/06 15:22 linux-next cc13002a9f98 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/04/06 07:43 linux-next cc13002a9f98 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/04/06 05:35 linux-next cc13002a9f98 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/04/06 04:33 linux-next cc13002a9f98 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/04/06 02:28 linux-next cc13002a9f98 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/04/06 00:57 linux-next cc13002a9f98 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/04/05 17:50 linux-next cc13002a9f98 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/04/05 13:29 linux-next cc13002a9f98 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/04/04 23:38 linux-next cc13002a9f98 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/04/04 22:36 linux-next cc13002a9f98 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/04/04 21:13 linux-next cc13002a9f98 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/04/04 07:28 linux-next cc13002a9f98 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/03/30 04:21 linux-next 3b058d1aeeef 356bdfc9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/03/30 01:52 linux-next 3b058d1aeeef 356bdfc9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/03/29 23:34 linux-next 3b058d1aeeef 356bdfc9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/03/29 21:22 linux-next 3b058d1aeeef 356bdfc9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/03/29 20:07 linux-next 3b058d1aeeef 356bdfc9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/03/29 02:25 linux-next 3b058d1aeeef 356bdfc9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/03/29 00:48 linux-next 3b058d1aeeef 356bdfc9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/03/28 22:41 linux-next 3b058d1aeeef 356bdfc9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/03/23 10:11 linux-next 785f0eb2f85d 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/03/22 23:13 linux-next 785f0eb2f85d 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/03/22 22:33 linux-next 785f0eb2f85d 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/03/22 12:46 linux-next 785f0eb2f85d 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/03/22 11:29 linux-next 785f0eb2f85d 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/03/22 03:32 linux-next 785f0eb2f85d 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2026/03/22 02:16 linux-next 785f0eb2f85d 5b92003d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2025/12/16 10:18 linux-next 4a5663c04bb6 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2025/10/08 11:19 linux-next 68842969e138 7e2882b3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
2025/07/15 22:20 linux-next 0be23810e32e 03fcfc4b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in gfs2_recover_journal
* Struck through repros no longer work on HEAD.