syzbot

INFO: task hung in process_measurement (2)

Status: upstream: reported on 2023/09/09 08:36
Subsystems: lsm integrity
Reported-by: syzbot+1de5a37cb85a2d536330@syzkaller.appspotmail.com
First crash: 844d, last: 16h18m
Discussions (4)
Title | Replies (incl. bot) | Last reply
[syzbot] Monthly integrity report (Nov 2024) | 0 (1) | 2024/11/06 08:33
[syzbot] Monthly lsm report (Sep 2024) | 11 (12) | 2024/09/28 17:40
[syzbot] Monthly lsm report (May 2024) | 0 (1) | 2024/05/09 09:09
[syzbot] [integrity?] [lsm?] INFO: task hung in process_measurement (2) | 0 (1) | 2023/09/09 08:36
Similar bugs (8)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-5.15 | INFO: task hung in process_measurement | - | - | - | 1 | 486d | 486d | 0/3 | auto-obsoleted due to no activity on 2023/10/31 09:35
linux-4.19 | INFO: task hung in process_measurement | - | - | - | 2 | 1817d | 1892d | 0/1 | auto-closed as invalid on 2020/03/29 20:33
upstream | INFO: task hung in process_measurement integrity lsm | C | done | inconclusive | 52 | 1211d | 2242d | 0/28 | closed as invalid on 2022/02/08 10:56
linux-4.19 | INFO: task hung in process_measurement (3) | - | - | - | 5 | 625d | 629d | 0/1 | upstream: reported on 2023/03/02 09:48
linux-4.14 | INFO: task hung in process_measurement | - | - | - | 3 | 1712d | 1768d | 0/1 | auto-closed as invalid on 2020/07/13 00:59
linux-6.1 | INFO: task hung in process_measurement | - | - | - | 39 | 128d | 138d | 0/3 | auto-obsoleted due to no activity on 2024/09/23 07:47
linux-5.15 | INFO: task hung in process_measurement (2) | - | - | - | 97 | 28d | 138d | 0/3 | upstream: reported on 2024/07/05 20:51
linux-4.19 | INFO: task hung in process_measurement (2) | - | - | - | 1 | 1486d | 1486d | 0/1 | auto-closed as invalid on 2021/02/23 17:00

Sample crash report:
INFO: task syz.4.345:8988 blocked for more than 143 seconds.
      Not tainted 6.12.0-syzkaller-01782-gbf9aa14fc523 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.345       state:D stack:20368 pid:8988  tgid:8941  ppid:7898   flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0x17fb/0x4be0 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6848
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6905
 rwsem_down_write_slowpath+0xeee/0x13b0 kernel/locking/rwsem.c:1176
 __down_write_common kernel/locking/rwsem.c:1304 [inline]
 __down_write kernel/locking/rwsem.c:1313 [inline]
 down_write+0x1d7/0x220 kernel/locking/rwsem.c:1578
 inode_lock include/linux/fs.h:818 [inline]
 process_measurement+0x439/0x1fb0 security/integrity/ima/ima_main.c:250
 ima_file_check+0xd9/0x120 security/integrity/ima/ima_main.c:572
 security_file_post_open+0xb9/0x280 security/security.c:3128
 do_open fs/namei.c:3830 [inline]
 path_openat+0x2ccd/0x3590 fs/namei.c:3987
 do_filp_open+0x27f/0x4e0 fs/namei.c:4014
 do_sys_openat2+0x13e/0x1d0 fs/open.c:1398
 do_sys_open fs/open.c:1413 [inline]
 __do_sys_open fs/open.c:1421 [inline]
 __se_sys_open fs/open.c:1417 [inline]
 __x64_sys_open+0x225/0x270 fs/open.c:1417
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fc30cd7e819
RSP: 002b:00007fc30db90038 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
RAX: ffffffffffffffda RBX: 00007fc30cf36080 RCX: 00007fc30cd7e819
RDX: 0000000000000000 RSI: 0000000000040542 RDI: 0000000020000100
RBP: 00007fc30cdf175e R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007fc30cf36080 R15: 00007fff0c672538
 </TASK>
INFO: task syz.4.345:8990 blocked for more than 143 seconds.
      Not tainted 6.12.0-syzkaller-01782-gbf9aa14fc523 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.345       state:D stack:26864 pid:8990  tgid:8941  ppid:7898   flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0x17fb/0x4be0 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6848
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6905
 rwsem_down_write_slowpath+0xeee/0x13b0 kernel/locking/rwsem.c:1176
 __down_write_common kernel/locking/rwsem.c:1304 [inline]
 __down_write kernel/locking/rwsem.c:1313 [inline]
 down_write+0x1d7/0x220 kernel/locking/rwsem.c:1578
 inode_lock include/linux/fs.h:818 [inline]
 process_measurement+0x439/0x1fb0 security/integrity/ima/ima_main.c:250
 ima_file_check+0xd9/0x120 security/integrity/ima/ima_main.c:572
 security_file_post_open+0xb9/0x280 security/security.c:3128
 do_open fs/namei.c:3830 [inline]
 path_openat+0x2ccd/0x3590 fs/namei.c:3987
 do_filp_open+0x27f/0x4e0 fs/namei.c:4014
 do_sys_openat2+0x13e/0x1d0 fs/open.c:1398
 do_sys_open fs/open.c:1413 [inline]
 __do_sys_open fs/open.c:1421 [inline]
 __se_sys_open fs/open.c:1417 [inline]
 __x64_sys_open+0x225/0x270 fs/open.c:1417
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fc30cd7e819
RSP: 002b:00007fc30db6f038 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
RAX: ffffffffffffffda RBX: 00007fc30cf36160 RCX: 00007fc30cd7e819
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000020000000
RBP: 00007fc30cdf175e R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007fc30cf36160 R15: 00007fff0c672538
 </TASK>
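The two hung-task entries above use the kernel's fixed header layout (`task:... state:D stack:... pid:... tgid:... ppid:... flags:...`). As an aside (not part of the report), a minimal sketch of extracting those fields when triaging such logs, assuming only the format shown above:

```python
import re

# Matches hung-task header lines of the form seen in this report, e.g.:
#   task:syz.4.345  state:D stack:20368 pid:8988  tgid:8941  ppid:7898   flags:0x00004004
TASK_RE = re.compile(
    r"task:(?P<comm>\S+)\s+state:(?P<state>\S+)\s+stack:(?P<stack>\d+)\s+"
    r"pid:(?P<pid>\d+)\s+tgid:(?P<tgid>\d+)\s+ppid:(?P<ppid>\d+)\s+"
    r"flags:(?P<flags>0x[0-9a-fA-F]+)"
)

def parse_task_line(line):
    """Extract the fields of a hung-task header line; return None if it doesn't match."""
    m = TASK_RE.search(line)
    if m is None:
        return None
    fields = m.groupdict()
    for key in ("stack", "pid", "tgid", "ppid"):
        fields[key] = int(fields[key])
    fields["flags"] = int(fields["flags"], 16)  # flags are printed in hex
    return fields
```

Here pid 8988 and pid 8990 share tgid 8941, i.e. they are threads of the same process, both in state D (uninterruptible sleep).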

Showing all locks held in the system:
2 locks held by kworker/u8:0/11:
3 locks held by kworker/1:0/25:
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc900001f7d00 ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc900001f7d00 ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffff88806938a240 (&data->fib_lock){+.+.}-{4:4}, at: nsim_fib_event_work+0x2d1/0x4130 drivers/net/netdevsim/fib.c:1490
1 lock held by khungtaskd/30:
 #0: ffffffff8e93c7e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8e93c7e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8e93c7e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6744
3 locks held by kworker/u8:3/53:
 #0: ffff88803075f948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88803075f948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc90000be7d00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc90000be7d00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8fccb3c8 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0xd0/0x16f0 net/ipv6/addrconf.c:4196
2 locks held by kworker/u8:4/68:
 #0: ffff8880b863ea58 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:598
 #1: ffff8880b8628948 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x41d/0x7a0 kernel/sched/psi.c:987
4 locks held by kworker/u8:6/1109:
 #0: ffff88801baeb148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801baeb148 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc90003f57d00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc90003f57d00 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8fcbe890 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0x16a/0xcc0 net/core/net_namespace.c:580
 #3: ffffffff8fccb3c8 (rtnl_mutex){+.+.}-{4:4}, at: wg_destruct+0x25/0x2e0 drivers/net/wireguard/device.c:246
3 locks held by kworker/u8:7/1132:
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc900040c7d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc900040c7d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8fccb3c8 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:276
3 locks held by kworker/0:2/1202:
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac80948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1850 kernel/workqueue.c:3310
 #1: ffffc900042f7d00 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc900042f7d00 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1850 kernel/workqueue.c:3310
 #2: ffffffff8fccb3c8 (rtnl_mutex){+.+.}-{4:4}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
2 locks held by dhcpcd/5503:
 #0: ffff88801e7e36c8 (nlk_cb_mutex-ROUTE){+.+.}-{4:4}, at: __netlink_dump_start+0x119/0x790 net/netlink/af_netlink.c:2395
 #1: ffffffff8fccb3c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #1: ffffffff8fccb3c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_dumpit+0x99/0x200 net/core/rtnetlink.c:6534
2 locks held by getty/5590:
 #0: ffff888030ead0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002f062f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x6a6/0x1e00 drivers/tty/n_tty.c:2211
5 locks held by syz.4.345/8942:
 #0: ffff88805f986420 (sb_writers#24){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:515
 #1: ffff8880583cc4c8 (&sb->s_type->i_mutex_key#29){++++}-{4:4}, at: inode_lock include/linux/fs.h:818 [inline]
 #1: ffff8880583cc4c8 (&sb->s_type->i_mutex_key#29){++++}-{4:4}, at: do_truncate+0x20c/0x310 fs/open.c:63
 #2: ffff88805aa00a38 (&c->snapshot_create_lock){.+.+}-{4:4}, at: bch2_truncate+0x166/0x2d0 fs/bcachefs/io_misc.c:292
 #3: ffff88805aa04398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:158 [inline]
 #3: ffff88805aa04398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:249 [inline]
 #3: ffff88805aa04398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: __bch2_trans_get+0x7e1/0xd30 fs/bcachefs/btree_iter.c:3228
 #4: ffff88805aa266d0 (&c->gc_lock){++++}-{4:4}, at: bch2_btree_update_start+0x682/0x14e0 fs/bcachefs/btree_update_interior.c:1197
1 lock held by syz.4.345/8988:
 #0: ffff8880583cc4c8 (&sb->s_type->i_mutex_key#29){++++}-{4:4}, at: inode_lock include/linux/fs.h:818 [inline]
 #0: ffff8880583cc4c8 (&sb->s_type->i_mutex_key#29){++++}-{4:4}, at: process_measurement+0x439/0x1fb0 security/integrity/ima/ima_main.c:250
1 lock held by syz.4.345/8990:
 #0: ffff8880583cc4c8 (&sb->s_type->i_mutex_key#29){++++}-{4:4}, at: inode_lock include/linux/fs.h:818 [inline]
 #0: ffff8880583cc4c8 (&sb->s_type->i_mutex_key#29){++++}-{4:4}, at: process_measurement+0x439/0x1fb0 security/integrity/ima/ima_main.c:250
4 locks held by bch-reclaim/loo/8977:
 #0: ffff88805aa4b0a8 (&j->reclaim_lock){+.+.}-{4:4}, at: bch2_journal_reclaim_thread+0x167/0x560 fs/bcachefs/journal_reclaim.c:739
 #1: ffff88805aa04398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:158 [inline]
 #1: ffff88805aa04398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:249 [inline]
 #1: ffff88805aa04398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: __bch2_trans_get+0x7e1/0xd30 fs/bcachefs/btree_iter.c:3228
 #2: ffff88805aa04740 (&wb->flushing.lock){+.+.}-{4:4}, at: btree_write_buffer_flush_seq+0x1b19/0x1cc0 fs/bcachefs/btree_write_buffer.c:516
 #3: ffff88805aa266d0 (&c->gc_lock){++++}-{4:4}, at: bch2_btree_update_start+0x682/0x14e0 fs/bcachefs/btree_update_interior.c:1197
2 locks held by syz-executor/11033:
 #0: ffffffff8fccb3c8 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:698 [inline]
 #0: ffffffff8fccb3c8 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x3b/0x1b0 drivers/net/tun.c:3517
 #1: ffffffff8e941d78 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:297 [inline]
 #1: ffffffff8e941d78 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x381/0x830 kernel/rcu/tree_exp.h:976
1 lock held by syz-executor/11508:
 #0: ffffffff8fccb3c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8fccb3c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnetlink_rcv_msg+0x6e6/0xcf0 net/core/rtnetlink.c:6672
1 lock held by syz-executor/11746:
 #0: ffffffff8fccb3c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8fccb3c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnetlink_rcv_msg+0x6e6/0xcf0 net/core/rtnetlink.c:6672
1 lock held by syz.2.632/11937:

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 30 Comm: khungtaskd Not tainted 6.12.0-syzkaller-01782-gbf9aa14fc523 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/30/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:223 [inline]
 watchdog+0xff4/0x1040 kernel/hung_task.c:379
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 11936 Comm: syz.2.632 Not tainted 6.12.0-syzkaller-01782-gbf9aa14fc523 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/30/2024
RIP: 0033:0x7f468465cdec
Code: fe ff ff 48 83 e8 01 4c 89 f6 bf 01 00 00 00 48 c1 e0 0e 48 c1 ee 06 48 01 c8 4c 89 f1 81 e6 ff 3f 00 00 48 c1 e9 03 83 e1 07 <d3> e7 40 84 bc 06 20 20 00 00 40 0f 95 c6 40 08 74 24 1e 80 7c 24
RSP: 002b:00007ffc4afb5e00 EFLAGS: 00000206
RAX: 000000110f284000 RBX: 00007f4685465720 RCX: 0000000000000006
RDX: ffffffff8208a934 RSI: 00000000000022a4 RDI: 0000000000000001
RBP: ffffffff8208a054 R08: 00007f4684936038 R09: 00007f4684922000
R10: 00007f4683dff008 R11: 0000000000000010 R12: 0000000000000010
R13: 0000000000000000 R14: ffffffff8208a934 R15: 000000000000ab39
FS:  000055556547f500 GS:  0000000000000000
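In the lockdep dump above, the inode rwsem at address ffff8880583cc4c8 appears under three tasks: syz.4.345/8942 (which took it in do_truncate and is blocked inside bcachefs) and syz.4.345/8988 and /8990 (both waiting for it in process_measurement). A minimal sketch (not part of the syzbot report) of cross-referencing lock addresses in a "locks held" dump to spot such contention, assuming only the lockdep output format shown above:

```python
import re
from collections import defaultdict

# "N lock(s) held by <comm>/<pid>:" introduces each task's lock list.
HOLDER_RE = re.compile(r"^\d+ locks? held by (?P<comm>\S+)/(?P<pid>\d+):")
# " #K: <address> (<lock name>)..." lists one held (or, for rwsem waiters,
# requested) lock; lockdep may repeat an entry once per inline frame.
LOCK_RE = re.compile(r"#\d+:\s+(?P<addr>ffff[0-9a-f]+)\s+\((?P<name>[^)]+)\)")

def locks_by_address(dump_text):
    """Map each lock address to the set of 'comm/pid' entries that mention it."""
    owners = defaultdict(set)
    current_task = None
    for line in dump_text.splitlines():
        holder = HOLDER_RE.match(line.strip())
        if holder:
            current_task = f"{holder.group('comm')}/{holder.group('pid')}"
            continue
        lock = LOCK_RE.search(line)
        if lock and current_task:
            # A set deduplicates lockdep's repeated [inline] entries.
            owners[lock.group("addr")].add(current_task)
    return owners
```

Any address mapped to more than one task is a candidate contention point; in this report that immediately singles out the i_mutex_key#29 rwsem shared by the truncate and the two IMA measurement paths.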

Crashes (409):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2024/11/20 14:21 upstream bf9aa14fc523 4fca1650 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/11/11 18:26 upstream 2d5404caa8c7 0c4b1325 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/11/09 21:35 upstream da4373fbcf00 6b856513 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/11/08 12:38 upstream 906bd684e4b1 179b040e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/11/08 05:31 upstream 906bd684e4b1 179b040e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/11/07 09:31 upstream ff7afaeca1a1 df3dc63b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/11/04 04:26 upstream a33ab3f94f51 f00eed24 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/10/25 00:40 upstream 4e46774408d9 c79b8ca5 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/10/24 13:04 upstream c2ee9f594da8 0d144d1a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/10/24 07:12 upstream c2ee9f594da8 15fa2979 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/10/24 04:55 upstream c2ee9f594da8 15fa2979 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/10/24 00:44 upstream c2ee9f594da8 15fa2979 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/10/23 21:32 upstream c2ee9f594da8 15fa2979 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/10/23 16:04 upstream c2ee9f594da8 15fa2979 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/10/23 13:12 upstream c2ee9f594da8 15fa2979 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/10/23 05:45 upstream c2ee9f594da8 15fa2979 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/10/23 04:32 upstream c2ee9f594da8 15fa2979 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/10/23 03:31 upstream c2ee9f594da8 15fa2979 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/10/23 01:48 upstream c2ee9f594da8 15fa2979 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/10/22 07:44 upstream c2ee9f594da8 a93682b3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/10/21 03:48 upstream 42f7652d3eb5 cd6fc0a3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/10/21 00:10 upstream 715ca9dd687f cd6fc0a3 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/29 14:40 upstream 3efc57369a0c ba29ff75 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/29 10:36 upstream 3efc57369a0c ba29ff75 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/29 00:54 upstream 3efc57369a0c ba29ff75 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/28 21:32 upstream ad46e8f95e93 ba29ff75 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/28 21:27 upstream ad46e8f95e93 ba29ff75 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/28 18:23 upstream ad46e8f95e93 ba29ff75 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/28 14:28 upstream ad46e8f95e93 ba29ff75 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/28 13:10 upstream ad46e8f95e93 ba29ff75 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/28 03:57 upstream 3630400697a3 440b26ec .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/27 20:59 upstream 3630400697a3 440b26ec .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/27 13:53 upstream 075dbe9f6e3c 9314348a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/27 02:07 upstream 11a299a7933e 9314348a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/26 23:10 upstream 11a299a7933e 9314348a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/26 09:40 upstream aa486552a110 0d19f247 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/26 06:02 upstream aa486552a110 0d19f247 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/25 22:24 upstream aa486552a110 0d19f247 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/25 11:24 upstream 684a64bf32b6 349a68c4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/25 10:01 upstream 684a64bf32b6 349a68c4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/25 02:31 upstream 97d8894b6f4c 5643e0e9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/24 16:30 upstream abf2050f51fd 5643e0e9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/24 16:27 upstream abf2050f51fd 5643e0e9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/24 13:40 upstream abf2050f51fd 5643e0e9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in process_measurement
2024/09/24 10:34 upstream abf2050f51fd 89298aad .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/24 05:12 upstream f8eb5bd9a818 89298aad .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/24 04:03 upstream f8eb5bd9a818 89298aad .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/24 02:58 upstream f8eb5bd9a818 89298aad .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/24 01:31 upstream f8eb5bd9a818 89298aad .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/24 00:10 upstream f8eb5bd9a818 89298aad .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/09/23 23:21 upstream f8eb5bd9a818 89298aad .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/06/08 22:57 upstream dc772f8237f9 82c05ab8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in process_measurement
2024/06/06 13:10 upstream 2df0193e62cf 121701b6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in process_measurement
2023/09/07 18:53 upstream 7ba2090ca64e 72324844 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/09/05 08:28 upstream 3f86ed6ec0b3 0b6286dc .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/02/11 14:02 upstream 7521f258ea30 77b23aa1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in process_measurement
2023/08/26 11:11 upstream 382d4cd18475 03d9c195 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in process_measurement
2024/09/30 15:13 linux-next cea5425829f7 bbd4e0a4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in process_measurement
2022/07/30 15:30 linux-next cb71b93c2dc3 fef302b1 .config console log report info ci-upstream-linux-next-kasan-gce-root INFO: task hung in process_measurement
* Struck through repros no longer work on HEAD.