syzbot


INFO: task hung in process_measurement (3)

Status: upstream: reported C repro on 2025/05/21 17:47
Subsystems: integrity lsm
Reported-by: syzbot+cb9e66807bcb882cd0c5@syzkaller.appspotmail.com
First crash: 215d, last: 15d
Cause bisection: introduced by (bisect log):
commit 1d16c605cc55ef26f0c65b362665a6c99080ccbc
Author: Kent Overstreet <kent.overstreet@linux.dev>
Date: Thu Nov 9 19:22:46 2023 +0000

  bcachefs: Disk space accounting rewrite

Crash: INFO: task hung in __closure_sync (log)
Repro: C syz .config
  
Discussions (5)
Title | Replies (including bot) | Last reply
[syzbot] Monthly integrity report (Oct 2025) | 0 (1) | 2025/10/13 08:32
[syzbot] Monthly lsm report (Sep 2025) | 0 (1) | 2025/09/27 20:43
[syzbot] Monthly integrity report (Sep 2025) | 0 (1) | 2025/09/02 21:06
[syzbot] Monthly integrity report (Jun 2025) | 0 (1) | 2025/06/25 14:15
[syzbot] [integrity?] [lsm?] INFO: task hung in process_measurement (3) | 0 (1) | 2025/05/21 17:47
Similar bugs (12)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-6.1 | INFO: task hung in process_measurement (2) | 1 | | | | 1 | 334d | 334d | 0/3 | auto-obsoleted due to no activity on 2025/04/20 01:13
linux-5.15 | INFO: task hung in process_measurement (3) origin:lts-only | 1 | C | done | | 3 | 9d17h | 217d | 0/3 | upstream: reported C repro on 2025/05/07 00:26
linux-5.15 | INFO: task hung in process_measurement | 1 | | | | 1 | 871d | 871d | 0/3 | auto-obsoleted due to no activity on 2023/10/31 09:35
linux-4.19 | INFO: task hung in process_measurement | 1 | | | | 2 | 2201d | 2277d | 0/1 | auto-closed as invalid on 2020/03/29 20:33
linux-6.1 | INFO: task hung in process_measurement (3) | 1 | | | | 1 | 27d | 27d | 0/3 | upstream: reported on 2025/11/13 09:59
upstream | INFO: task hung in process_measurement integrity lsm | 1 | C | done | inconclusive | 52 | 1595d | 2627d | 0/29 | closed as invalid on 2022/02/08 10:56
upstream | INFO: task hung in process_measurement (2) lsm integrity | 1 | C | done | | 607 | 219d | 823d | 28/29 | fixed on 2025/05/06 15:33
linux-4.19 | INFO: task hung in process_measurement (3) | 1 | | | | 5 | 1009d | 1014d | 0/1 | upstream: reported on 2023/03/02 09:48
linux-4.14 | INFO: task hung in process_measurement | 1 | | | | 3 | 2096d | 2152d | 0/1 | auto-closed as invalid on 2020/07/13 00:59
linux-6.1 | INFO: task hung in process_measurement | 1 | | | | 39 | 513d | 522d | 0/3 | auto-obsoleted due to no activity on 2024/09/23 07:47
linux-5.15 | INFO: task hung in process_measurement (2) | 1 | | | | 97 | 413d | 522d | 0/3 | auto-obsoleted due to no activity on 2025/01/01 08:25
linux-4.19 | INFO: task hung in process_measurement (2) | 1 | | | | 1 | 1871d | 1871d | 0/1 | auto-closed as invalid on 2021/02/23 17:00
Last patch testing requests (2)
Created Duration User Patch Repo Result
2025/12/09 06:28 32m retest repro upstream OK log
2025/06/25 05:41 17m retest repro upstream report log

Sample crash report:
INFO: task syz.2.282:7965 blocked for more than 163 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.282       state:D stack:26120 pid:7965  tgid:7960  ppid:5799   task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5325 [inline]
 __schedule+0x16f3/0x4c20 kernel/sched/core.c:6929
 __schedule_loop kernel/sched/core.c:7011 [inline]
 rt_mutex_schedule+0x77/0xf0 kernel/sched/core.c:7307
 rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
 __rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
 __rt_mutex_slowlock_locked+0x1e04/0x25e0 kernel/locking/rtmutex.c:1760
 rt_mutex_slowlock+0xb5/0x160 kernel/locking/rtmutex.c:1800
 __rt_mutex_lock kernel/locking/rtmutex.c:1815 [inline]
 rwbase_write_lock+0x14f/0x750 kernel/locking/rwbase_rt.c:244
 inode_lock include/linux/fs.h:980 [inline]
 process_measurement+0x3de/0x1a40 security/integrity/ima/ima_main.c:280
 ima_file_check+0xd7/0x120 security/integrity/ima/ima_main.c:633
 security_file_post_open+0xbb/0x290 security/security.c:3199
 do_open fs/namei.c:3977 [inline]
 path_openat+0x2f32/0x3840 fs/namei.c:4134
 do_filp_open+0x1fa/0x410 fs/namei.c:4161
 do_sys_openat2+0x121/0x1c0 fs/open.c:1437
 do_sys_open fs/open.c:1452 [inline]
 __do_sys_open fs/open.c:1460 [inline]
 __se_sys_open fs/open.c:1456 [inline]
 __x64_sys_open+0x11e/0x150 fs/open.c:1456
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f09f933f749
RSP: 002b:00007f09f757d038 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
RAX: ffffffffffffffda RBX: 00007f09f9596090 RCX: 00007f09f933f749
RDX: 0000000000000000 RSI: 000000000014977e RDI: 0000200000000180
RBP: 00007f09f93c3f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f09f9596128 R14: 00007f09f9596090 R15: 00007ffc8b7731f8
 </TASK>
INFO: task syz.2.282:7966 blocked for more than 163 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.282       state:D stack:27544 pid:7966  tgid:7960  ppid:5799   task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5325 [inline]
 __schedule+0x16f3/0x4c20 kernel/sched/core.c:6929
 __schedule_loop kernel/sched/core.c:7011 [inline]
 rt_mutex_schedule+0x77/0xf0 kernel/sched/core.c:7307
 rt_mutex_slowlock_block kernel/locking/rtmutex.c:1647 [inline]
 __rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
 __rt_mutex_slowlock_locked+0x1e04/0x25e0 kernel/locking/rtmutex.c:1760
 rt_mutex_slowlock+0xb5/0x160 kernel/locking/rtmutex.c:1800
 __rt_mutex_lock kernel/locking/rtmutex.c:1815 [inline]
 rwbase_write_lock+0x14f/0x750 kernel/locking/rwbase_rt.c:244
 vfs_rename+0x665/0xe80 fs/namei.c:5187
 do_renameat2+0x6a2/0xa50 fs/namei.c:5364
 __do_sys_rename fs/namei.c:5411 [inline]
 __se_sys_rename fs/namei.c:5409 [inline]
 __x64_sys_rename+0x82/0x90 fs/namei.c:5409
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f09f933f749
RSP: 002b:00007f09f755c038 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
RAX: ffffffffffffffda RBX: 00007f09f9596180 RCX: 00007f09f933f749
RDX: 0000000000000000 RSI: 0000200000000400 RDI: 0000200000006200
RBP: 00007f09f93c3f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f09f9596218 R14: 00007f09f9596180 R15: 00007ffc8b7731f8
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/38:
 #0: ffffffff8d5aa880 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8d5aa880 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8d5aa880 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
4 locks held by kworker/u8:5/69:
4 locks held by kworker/u8:8/1182:
 #0: ffff88814045b138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3238 [inline]
 #0: ffff88814045b138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3346
 #1: ffffc9000501fba0 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3239 [inline]
 #1: ffffc9000501fba0 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3346
 #2: ffff888034df60d0 (&type->s_umount_key#69){++++}-{4:4}, at: super_trylock_shared+0x20/0xf0 fs/super.c:562
 #3: ffff8880555c59f8 (&sbi->gc_lock){+.+.}-{4:4}, at: f2fs_down_write fs/f2fs/f2fs.h:2294 [inline]
 #3: ffff8880555c59f8 (&sbi->gc_lock){+.+.}-{4:4}, at: f2fs_issue_checkpoint+0x3a8/0x610 fs/f2fs/checkpoint.c:1893
5 locks held by kworker/u8:9/3568:
2 locks held by dhcpcd/5463:
 #0: ffff8880275b6920 (nlk_cb_mutex-ROUTE){+.+.}-{4:4}, at: netlink_dump+0xbd/0xe90 net/netlink/af_netlink.c:2269
 #1: ffffffff8e8639b8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8e8639b8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_dumpit+0x92/0x200 net/core/rtnetlink.c:6819
2 locks held by getty/5561:
 #0: ffff88823bf8e0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90003e8b2e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x444/0x1400 drivers/tty/n_tty.c:2222
4 locks held by syz.2.282/7961:
2 locks held by syz.2.282/7965:
 #0: ffff888034df6480 (sb_writers#17){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff88805c9a69f8 (&sb->s_type->i_mutex_key#24){++++}-{4:4}, at: inode_lock include/linux/fs.h:980 [inline]
 #1: ffff88805c9a69f8 (&sb->s_type->i_mutex_key#24){++++}-{4:4}, at: process_measurement+0x3de/0x1a40 security/integrity/ima/ima_main.c:280
3 locks held by syz.2.282/7966:
 #0: ffff888034df6480 (sb_writers#17){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:499
 #1: ffff88805c9a6078 (&type->i_mutex_dir_key#10/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:1025 [inline]
 #1: ffff88805c9a6078 (&type->i_mutex_dir_key#10/1){+.+.}-{4:4}, at: lock_rename fs/namei.c:3360 [inline]
 #1: ffff88805c9a6078 (&type->i_mutex_dir_key#10/1){+.+.}-{4:4}, at: do_renameat2+0x3b9/0xa50 fs/namei.c:5311
 #2: ffff88805c9a69f8 (&sb->s_type->i_mutex_key#24/4){+.+.}-{4:4}, at: vfs_rename+0x665/0xe80 fs/namei.c:5187
3 locks held by kworker/u8:12/8199:
 #0: ffff88813ff69938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3238 [inline]
 #0: ffff88813ff69938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3346
 #1: ffffc900044d7ba0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3239 [inline]
 #1: ffffc900044d7ba0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3346
 #2: ffffffff8e8639b8 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:303
2 locks held by syz-executor/8842:
 #0: ffffffff8ed65d30 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8ed65d30 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8ed65d30 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8e8639b8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8e8639b8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8e8639b8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8e9/0x1c80 net/core/rtnetlink.c:4064
2 locks held by syz-executor/9034:
 #0: ffffffff8dffaa60 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8dffaa60 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8dffaa60 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8e8639b8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8e8639b8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8e8639b8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8e9/0x1c80 net/core/rtnetlink.c:4064
1 lock held by syz-executor/9083:
 #0: ffffffff8e8639b8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #0: ffffffff8e8639b8 (rtnl_mutex){+.+.}-{4:4}, at: rtnetlink_rcv_msg+0x71c/0xb70 net/core/rtnetlink.c:6957
1 lock held by syz-executor/9101:
 #0: ffffffff8e8639b8 (rtnl_mutex){+.+.}-{4:4}, at: __tun_chr_ioctl+0x37d/0x1df0 drivers/net/tun.c:3078
1 lock held by syz.9.387/9196:
 #0: ffffffff8e8639b8 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:634 [inline]
 #0: ffffffff8e8639b8 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x41/0x1c0 drivers/net/tun.c:3436

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 38 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:332 [inline]
 watchdog+0xf60/0xfa0 kernel/hung_task.c:495
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 69 Comm: kworker/u8:5 Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Workqueue: netns cleanup_net
RIP: 0010:chain_hlock_class_idx kernel/locking/lockdep.c:438 [inline]
RIP: 0010:remove_class_from_lock_chain kernel/locking/lockdep.c:6198 [inline]
RIP: 0010:remove_class_from_lock_chains kernel/locking/lockdep.c:6236 [inline]
RIP: 0010:zap_class+0xe5/0x360 kernel/locking/lockdep.c:6281
Code: c1 75 d7 45 8b 7d 00 41 f6 c7 fc 0f 84 e2 00 00 00 41 c1 ef 08 4e 8d 34 7d 10 43 ab 91 49 81 ff 00 00 50 00 73 30 41 0f b7 06 <25> ff 1f 00 00 48 39 c3 74 33 49 ff c7 41 8b 45 00 89 c1 c1 e9 08
RSP: 0018:ffffc9000154f518 EFLAGS: 00000083
RAX: 000000000000091a RBX: 0000000000001100 RCX: 00000000000a4200
RDX: ffffffff92a84368 RSI: ffffffff92a84368 RDI: ffffffff92588b20
RBP: ffffffff92aa47b0 R08: 0000000000000000 R09: ffffffff81a91358
R10: dffffc0000000000 R11: fffffbfff1dac84f R12: 0000000000023b20
R13: ffffffff92e61990 R14: ffffffff91bf427a R15: 000000000009ffb5
FS:  0000000000000000(0000) GS:ffff888126df6000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055a476201b68 CR3: 000000001a2d4000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 __lockdep_free_key_range kernel/locking/lockdep.c:6407 [inline]
 lockdep_unregister_key+0x1a6/0x310 kernel/locking/lockdep.c:6609
 __qdisc_destroy+0x166/0x420 net/sched/sch_generic.c:1086
 netdev_for_each_tx_queue include/linux/netdevice.h:2664 [inline]
 dev_shutdown+0x93/0x440 net/sched/sch_generic.c:1497
 unregister_netdevice_many_notify+0x118c/0x2380 net/core/dev.c:12272
 unregister_netdevice_many net/core/dev.c:12347 [inline]
 default_device_exit_batch+0x819/0x890 net/core/dev.c:12851
 ops_exit_list net/core/net_namespace.c:205 [inline]
 ops_undo_list+0x525/0x990 net/core/net_namespace.c:252
 cleanup_net+0x4de/0x820 net/core/net_namespace.c:695
 process_one_work kernel/workqueue.c:3263 [inline]
 process_scheduled_works+0xae1/0x17b0 kernel/workqueue.c:3346
 worker_thread+0x8a0/0xda0 kernel/workqueue.c:3427
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>

Crashes (50):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/11/24 23:51 upstream ac3fd01e4c1e bf6fe8fe .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/11/18 14:37 upstream e7c375b18160 ef766cd7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/11/06 22:44 upstream c2c2ccfd4ba7 4e1406b4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/11/05 11:44 upstream 284922f4c563 a6c9c731 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/10/22 00:51 upstream 6548d364a3e8 9832ed61 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in process_measurement
2025/10/09 04:31 upstream cd5a0afbdf80 7e2882b3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/09/23 02:56 upstream cec1e6e5d1ab 0ac7291c .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/08/29 20:33 upstream fb679c832b64 3e1beec6 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/08/25 10:18 upstream c330cb607721 bf27483f .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/08/22 12:15 upstream 3957a5720157 bf27483f .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/08/22 02:36 upstream 068a56e56fa8 3e79b825 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/08/20 09:08 upstream b19a97d57c15 79512909 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/08/18 06:11 upstream 8d561baae505 1804e95e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/08/18 01:11 upstream 8d561baae505 1804e95e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/08/17 19:05 upstream 99bade344cfa 1804e95e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/08/17 02:53 upstream 90d970cade8e 1804e95e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/08/13 21:26 upstream 8742b2d8935f 22ec1469 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/07/21 12:33 upstream 89be9a83ccf1 56d87229 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/07/21 08:16 upstream 89be9a83ccf1 7117feec .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/07/21 01:05 upstream f4a40a4282f4 7117feec .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/07/09 14:59 upstream 733923397fd9 f4e5e155 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/07/06 23:34 upstream 1f988d0788f5 4f67c4ae .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in process_measurement
2025/07/02 08:29 upstream 66701750d556 bc80e4f0 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/07/02 05:43 upstream 66701750d556 bc80e4f0 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/07/02 00:55 upstream 66701750d556 091a06cd .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/06/30 10:40 upstream d0b3b7b22dfa fc9d8ee5 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/06/30 04:10 upstream afa9a6f4f574 fc9d8ee5 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/06/10 02:21 upstream 19272b37aa4f 4826c28e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/05/31 00:02 upstream 8477ab143069 3d2f584d .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/05/23 02:52 upstream 94305e83eccb fa44301a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/05/22 06:00 upstream d608703fcdd9 0919b50b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/05/17 17:33 upstream 172a9d94339c f41472b0 .config strace log report syz / log C [disk image] [vmlinux] [kernel image] [mounted in repro #1] [mounted in repro #2] ci2-upstream-fs INFO: task hung in process_measurement
2025/05/17 12:01 upstream 172a9d94339c f41472b0 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/05/10 05:28 upstream 0e1329d4045c 77908e5f .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/05/09 22:20 upstream 9c69f8884904 77908e5f .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/05/09 11:57 upstream 2c89c1b655c0 bb813bcc .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2025/10/19 19:10 linux-next 93f3bab4310d 1c8c8cd8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in process_measurement
2025/09/14 05:15 linux-next 590b221ed425 e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in process_measurement
2025/09/10 15:20 linux-next 5f540c4aade9 fdeaa69b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in process_measurement
2025/09/08 00:40 linux-next be5d4872e528 d291dd2d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in process_measurement
2025/08/25 02:49 linux-next 7fa4d8dc380f bf27483f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in process_measurement
2025/08/18 00:53 linux-next 931e46dcbc7e 1804e95e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in process_measurement
2025/07/20 23:33 linux-next d086c886ceb9 7117feec .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in process_measurement
2025/07/16 01:01 linux-next 0be23810e32e 03fcfc4b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in process_measurement
2025/07/15 03:52 linux-next 0be23810e32e 03fcfc4b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in process_measurement
2025/07/09 17:26 linux-next 835244aba90d f4e5e155 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in process_measurement
2025/07/08 00:02 linux-next 26ffb3d6f02c 4f67c4ae .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in process_measurement
2025/07/07 17:48 linux-next 26ffb3d6f02c 4f67c4ae .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in process_measurement
2025/06/04 17:05 linux-next 911483b25612 fd5e6e61 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in process_measurement
2025/06/01 02:04 linux-next 3a83b350b5be 3d2f584d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in process_measurement
* Struck through repros no longer work on HEAD.