syzbot


INFO: task hung in process_measurement (2)

Status: upstream: reported on 2023/09/09 08:36
Subsystems: integrity lsm
Reported-by: syzbot+1de5a37cb85a2d536330@syzkaller.appspotmail.com
First crash: 628d, last: 54d
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [integrity?] [lsm?] INFO: task hung in process_measurement (2) | 0 (1) | 2023/09/09 08:36
Similar bugs (6)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-5.15 | INFO: task hung in process_measurement | | | | 1 | 270d | 270d | 0/3 | auto-obsoleted due to no activity on 2023/10/31 09:35
linux-4.19 | INFO: task hung in process_measurement | | | | 2 | 1601d | 1676d | 0/1 | auto-closed as invalid on 2020/03/29 20:33
upstream | INFO: task hung in process_measurement integrity lsm | C | done | inconclusive | 52 | 995d | 2026d | 0/26 | closed as invalid on 2022/02/08 10:56
linux-4.19 | INFO: task hung in process_measurement (3) | | | | 5 | 409d | 413d | 0/1 | upstream: reported on 2023/03/02 09:48
linux-4.14 | INFO: task hung in process_measurement | | | | 3 | 1496d | 1552d | 0/1 | auto-closed as invalid on 2020/07/13 00:59
linux-4.19 | INFO: task hung in process_measurement (2) | | | | 1 | 1270d | 1270d | 0/1 | auto-closed as invalid on 2021/02/23 17:00

Sample crash report:
INFO: task syz-executor.2:8085 blocked for more than 143 seconds.
      Not tainted 6.8.0-rc5-syzkaller-00278-g603c04e27c3e #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.2  state:D stack:25392 pid:8085  tgid:8053  ppid:5099   flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5400 [inline]
 __schedule+0x177f/0x49a0 kernel/sched/core.c:6727
 __schedule_loop kernel/sched/core.c:6802 [inline]
 schedule+0x149/0x260 kernel/sched/core.c:6817
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6874
 rwsem_down_write_slowpath+0xeea/0x13b0 kernel/locking/rwsem.c:1178
 __down_write_common+0x1ae/0x200 kernel/locking/rwsem.c:1306
 inode_lock include/linux/fs.h:804 [inline]
 process_measurement+0x44d/0x21d0 security/integrity/ima/ima_main.c:248
 ima_file_check+0xf1/0x170 security/integrity/ima/ima_main.c:557
 do_open fs/namei.c:3643 [inline]
 path_openat+0x28b6/0x3240 fs/namei.c:3798
 do_filp_open+0x234/0x490 fs/namei.c:3825
 do_sys_openat2+0x13e/0x1d0 fs/open.c:1404
 do_sys_open fs/open.c:1419 [inline]
 __do_sys_open fs/open.c:1427 [inline]
 __se_sys_open fs/open.c:1423 [inline]
 __x64_sys_open+0x225/0x270 fs/open.c:1423
 do_syscall_64+0xf9/0x240
 entry_SYSCALL_64_after_hwframe+0x6f/0x77
RIP: 0033:0x7fea76e7dda9
RSP: 002b:00007fea77bdc0c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
RAX: ffffffffffffffda RBX: 00007fea76fac050 RCX: 00007fea76e7dda9
RDX: 0000000000000000 RSI: 00000000001ad042 RDI: 0000000020000180
RBP: 00007fea76eca47a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000006e R14: 00007fea76fac050 R15: 00007ffdb245cf78
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/29:
 #0: ffffffff8e130c60 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
 #0: ffffffff8e130c60 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
 #0: ffffffff8e130c60 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6614
2 locks held by getty/4818:
 #0: ffff88802b2670a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002f062f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6b4/0x1e10 drivers/tty/n_tty.c:2201
4 locks held by kworker/u4:0/2815:
 #0: ffff8880192d2138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2608 [inline]
 #0: ffff8880192d2138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x825/0x1420 kernel/workqueue.c:2706
 #1: ffffc90004fbfd20 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2608 [inline]
 #1: ffffc90004fbfd20 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x825/0x1420 kernel/workqueue.c:2706
 #2: ffff88802c94a0e0 (&type->s_umount_key#53){++++}-{3:3}, at: super_trylock_shared+0x22/0xf0 fs/super.c:566
 #3: ffff88803aa892a8 (&sbi->gc_lock){+.+.}-{3:3}, at: f2fs_down_write fs/f2fs/f2fs.h:2138 [inline]
 #3: ffff88803aa892a8 (&sbi->gc_lock){+.+.}-{3:3}, at: f2fs_balance_fs+0x500/0x730 fs/f2fs/segment.c:437
3 locks held by kworker/u5:0/4827:
 #0: ffff88802a420938 ((wq_completion)hci5){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2608 [inline]
 #0: ffff88802a420938 ((wq_completion)hci5){+.+.}-{0:0}, at: process_scheduled_works+0x825/0x1420 kernel/workqueue.c:2706
 #1: ffffc900119a7d20 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2608 [inline]
 #1: ffffc900119a7d20 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_scheduled_works+0x825/0x1420 kernel/workqueue.c:2706
 #2: ffff888030419060 (&hdev->req_lock){+.+.}-{3:3}, at: hci_cmd_sync_work+0x1ec/0x400 net/bluetooth/hci_sync.c:305
1 lock held by syz-executor.2/8085:
 #0: ffff88807fc4c810 (&sb->s_type->i_mutex_key#23){++++}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
 #0: ffff88807fc4c810 (&sb->s_type->i_mutex_key#23){++++}-{3:3}, at: process_measurement+0x44d/0x21d0 security/integrity/ima/ima_main.c:248
8 locks held by syz-executor.2/8090:
2 locks held by syz-executor.4/9847:
 #0: ffff88803724e420 (sb_writers#16){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:2794 [inline]
 #0: ffff88803724e420 (sb_writers#16){.+.+}-{0:0}, at: vfs_write+0x233/0xcb0 fs/read_write.c:586
 #1: ffff88808702e298 (&sb->s_type->i_mutex_key#23){++++}-{3:3}, at: inode_lock include/linux/fs.h:804 [inline]
 #1: ffff88808702e298 (&sb->s_type->i_mutex_key#23){++++}-{3:3}, at: f2fs_file_write_iter+0x297/0x2340 fs/f2fs/file.c:4778
7 locks held by syz-executor.4/9849:
2 locks held by dhcpcd/9911:
 #0: ffff888023688130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1691 [inline]
 #0: ffff888023688130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x32/0xcb0 net/packet/af_packet.c:3202
 #1: ffffffff8e1365f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:292 [inline]
 #1: ffffffff8e1365f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3a3/0x890 kernel/rcu/tree_exp.h:995

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 29 Comm: khungtaskd Not tainted 6.8.0-rc5-syzkaller-00278-g603c04e27c3e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e7/0x2e0 lib/dump_stack.c:106
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
 watchdog+0xfaf/0xff0 kernel/hung_task.c:379
 kthread+0x2ef/0x390 kernel/kthread.c:388
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1b/0x30 arch/x86/entry/entry_64.S:242
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 9849 Comm: syz-executor.4 Not tainted 6.8.0-rc5-syzkaller-00278-g603c04e27c3e #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
RIP: 0010:__lock_is_held kernel/locking/lockdep.c:5495 [inline]
RIP: 0010:lock_is_held_type+0xa8/0x190 kernel/locking/lockdep.c:5825
Code: 38 a4 74 41 83 bd b8 0a 00 00 00 7e 47 4c 89 eb 48 81 c3 c0 0a 00 00 31 ed 48 83 fd 31 73 24 48 89 df 4c 89 fe e8 68 02 00 00 <85> c0 75 2a 48 ff c5 49 63 85 b8 0a 00 00 48 83 c3 28 48 39 c5 7c
RSP: 0018:ffffc90004a6e498 EFLAGS: 00000046
RAX: 0000000000000000 RBX: ffff8880324a28f8 RCX: ffff8880324a1dc0
RDX: 0000000000000000 RSI: ffffffff8e130cc0 RDI: ffff8880324a28f8
RBP: 0000000000000003 R08: ffffffff83e47dd6 R09: 0000000000000000
R10: ffffc90004a6ec70 R11: fffff5200094dd90 R12: 0000000000000246
R13: ffff8880324a1dc0 R14: 00000000ffffffff R15: ffffffff8e130cc0
FS:  00007f5f185a26c0(0000) GS:ffff8880b9400000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00005559b2d96680 CR3: 0000000024fc6000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 lock_is_held include/linux/lockdep.h:231 [inline]
 __might_resched+0xa5/0x780 kernel/sched/core.c:10138
 down_read+0x8e/0xa40 kernel/locking/rwsem.c:1525
 check_valid_map fs/f2fs/gc.c:985 [inline]
 gc_data_segment fs/f2fs/gc.c:1534 [inline]
 do_garbage_collect+0x21e7/0x80a0 fs/f2fs/gc.c:1763
 f2fs_gc+0xeda/0x3140 fs/f2fs/gc.c:1866
 f2fs_balance_fs+0x549/0x730 fs/f2fs/segment.c:439
 f2fs_map_blocks+0x29c1/0x43c0 fs/f2fs/data.c:1780
 f2fs_iomap_begin+0x2b3/0xbc0 fs/f2fs/data.c:4225
 iomap_iter+0x681/0xef0 fs/iomap/iter.c:91
 __iomap_dio_rw+0xdd7/0x2330 fs/iomap/direct-io.c:658
 f2fs_dio_write_iter fs/f2fs/file.c:4691 [inline]
 f2fs_file_write_iter+0x1248/0x2340 fs/f2fs/file.c:4800
 call_write_iter include/linux/fs.h:2087 [inline]
 new_sync_write fs/read_write.c:497 [inline]
 vfs_write+0xa81/0xcb0 fs/read_write.c:590
 ksys_write+0x1a0/0x2c0 fs/read_write.c:643
 do_syscall_64+0xf9/0x240
 entry_SYSCALL_64_after_hwframe+0x6f/0x77
RIP: 0033:0x7f5f1787dda9
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f5f185a20c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 00007f5f179ac120 RCX: 00007f5f1787dda9
RDX: 000000007ffff000 RSI: 0000000020000140 RDI: 0000000000000005
RBP: 00007f5f178ca47a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f5f179ac120 R15: 00007fff797445f8
 </TASK>

Crashes (45):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2024/02/24 18:33 upstream 603c04e27c3e 8d446f15 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/02/24 14:20 upstream 603c04e27c3e 8d446f15 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/02/22 11:36 upstream 39133352cbed 345111b5 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/02/17 22:07 upstream ced590523156 578f7538 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/01/17 13:46 upstream 052d534373b7 c9a1c95b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/01/03 06:39 upstream 610a9b8f49fb fb427a07 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/11/20 22:06 upstream 98b1cc82c4af cb976f63 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/11/20 04:39 upstream eb3479bc23fa cb976f63 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/11/19 13:52 upstream 037266a5f723 cb976f63 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/10/31 04:15 upstream 14ab6d425e80 b5729d82 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/10/01 07:37 upstream 3b517966c561 8e26a358 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in process_measurement
2023/09/19 16:10 upstream 2cf0f7156238 0b6a67ac .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/09/07 18:53 upstream 7ba2090ca64e 72324844 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/09/05 08:28 upstream 3f86ed6ec0b3 0b6286dc .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/09/03 19:44 upstream 6e32dfcccfcc 696ea0d2 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2024/02/11 14:02 upstream 7521f258ea30 77b23aa1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in process_measurement
2023/08/26 11:11 upstream 382d4cd18475 03d9c195 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in process_measurement
2023/07/01 17:56 upstream a507db1d8fdc bfc47836 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/05/08 23:00 upstream ba0ad6ed89fd c7a5e2a0 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/04/30 16:01 upstream 825a0714d2b3 62df2017 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/04/28 06:15 upstream 91ec4b0d11fe 70a605de .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/04/28 01:41 upstream 6e98b09da931 6f3d6fa7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/04/25 19:49 upstream 173ea743bf7a 65320f8e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/04/25 11:13 upstream 1a0beef98b58 65320f8e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/04/22 18:47 upstream 2caeeb9d4a1b 2b32bd34 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/04/21 07:26 upstream 6a66fdd29ea1 2b32bd34 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/04/18 10:05 upstream 6a8f57ae2eb0 436577a9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/04/17 17:00 upstream 6a8f57ae2eb0 c6ec7083 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/04/14 08:52 upstream 44149752e998 3cfcaa1b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/04/12 18:31 upstream 0bcc40255504 1a1596b6 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/04/11 10:22 upstream 0d3eb744aed4 71147e29 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in process_measurement
2023/04/04 08:25 upstream 148341f0a2f5 41147e3e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/03/23 07:16 upstream fff5a5e7f528 f94b4a29 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2023/02/03 23:09 upstream 66a87fff1a87 1b2f701a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in process_measurement
2022/12/10 21:12 upstream 296a7b7eb792 67be1ae7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in process_measurement
2022/12/05 12:52 upstream 76dcd734eca2 e080de16 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in process_measurement
2022/12/04 21:44 upstream c2bf05db6c78 e080de16 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in process_measurement
2022/12/04 04:31 upstream 97ee9d1c1696 e080de16 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in process_measurement
2022/11/29 19:19 upstream ca57f02295f1 05dc7993 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in process_measurement
2022/10/14 17:23 upstream 55be6084c8e0 4954e4b2 .config console log report info [disk image] [vmlinux] ci-upstream-kasan-gce-root INFO: task hung in process_measurement
2022/09/15 18:39 upstream 3245cb65fd91 dd9a85ff .config console log report info [disk image] [vmlinux] ci-upstream-kasan-gce-smack-root INFO: task hung in process_measurement
2022/08/06 08:37 upstream 200e340f2196 e853abd9 .config console log report info ci-upstream-kasan-gce-root INFO: task hung in process_measurement
2023/04/22 11:16 upstream 8e41e0a57566 2b32bd34 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in process_measurement
2023/10/29 17:07 linux-next 66f1e1ea3548 3c418d72 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in process_measurement
2022/07/30 15:30 linux-next cb71b93c2dc3 fef302b1 .config console log report info ci-upstream-linux-next-kasan-gce-root INFO: task hung in process_measurement