syzbot


INFO: task hung in cgroup_can_fork (2)

Status: auto-obsoleted due to no activity on 2024/12/20 05:36
Subsystems: cgroups
First crash: 180d, last: 100d
Similar bugs (3)
Kernel Title Repro Cause bisect Fix bisect Count Last Reported Patched Status
linux-6.1 INFO: task hung in cgroup_can_fork 1 69d 69d 0/3 upstream: reported on 2024/10/22 16:24
linux-5.15 INFO: task hung in cgroup_can_fork 1 86d 86d 0/3 upstream: reported on 2024/10/05 02:23
upstream INFO: task hung in cgroup_can_fork cgroups C error error 6 1083d 1103d 0/28 auto-obsoleted due to no activity on 2023/09/02 21:24

Sample crash report:
INFO: task kworker/u8:3:51 blocked for more than 143 seconds.
      Not tainted 6.10.0-rc6-syzkaller-00223-gc6653f49e4fd #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:3    state:D stack:20496 pid:51    tgid:51    ppid:2      flags:0x00004000
Workqueue: events_unbound call_usermodehelper_exec_work
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5408 [inline]
 __schedule+0x1796/0x49d0 kernel/sched/core.c:6745
 __schedule_loop kernel/sched/core.c:6822 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6837
 percpu_rwsem_wait+0x3c2/0x450 kernel/locking/percpu-rwsem.c:162
 __percpu_down_read+0xee/0x130 kernel/locking/percpu-rwsem.c:177
 percpu_down_read include/linux/percpu-rwsem.h:65 [inline]
 cgroup_threadgroup_change_begin include/linux/cgroup-defs.h:787 [inline]
 cgroup_css_set_fork kernel/cgroup/cgroup.c:6402 [inline]
 cgroup_can_fork+0xb97/0xc80 kernel/cgroup/cgroup.c:6531
 copy_process+0x21f1/0x3dc0 kernel/fork.c:2486
 kernel_clone+0x223/0x870 kernel/fork.c:2797
 user_mode_thread+0x132/0x1a0 kernel/fork.c:2875
 call_usermodehelper_exec_sync kernel/umh.c:133 [inline]
 call_usermodehelper_exec_work+0x9b/0x230 kernel/umh.c:164
 process_one_work kernel/workqueue.c:3248 [inline]
 process_scheduled_works+0xa2c/0x1830 kernel/workqueue.c:3329
 worker_thread+0x86d/0xd50 kernel/workqueue.c:3409
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
INFO: task jbd2/sda1-8:4498 blocked for more than 143 seconds.
      Not tainted 6.10.0-rc6-syzkaller-00223-gc6653f49e4fd #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:jbd2/sda1-8     state:D stack:24208 pid:4498  tgid:4498  ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5408 [inline]
 __schedule+0x1796/0x49d0 kernel/sched/core.c:6745
 __schedule_loop kernel/sched/core.c:6822 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6837
 io_schedule+0x8d/0x110 kernel/sched/core.c:9043
 bit_wait_io+0x12/0xd0 kernel/sched/wait_bit.c:209
 __wait_on_bit+0xb0/0x2f0 kernel/sched/wait_bit.c:49
 out_of_line_wait_on_bit+0x1d5/0x260 kernel/sched/wait_bit.c:64
 wait_on_buffer include/linux/buffer_head.h:415 [inline]
 journal_wait_on_commit_record fs/jbd2/commit.c:171 [inline]
 jbd2_journal_commit_transaction+0x3d7f/0x6760 fs/jbd2/commit.c:887
 kjournald2+0x463/0x850 fs/jbd2/journal.c:201
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
INFO: task syz-executor:6836 blocked for more than 144 seconds.
      Not tainted 6.10.0-rc6-syzkaller-00223-gc6653f49e4fd #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:20784 pid:6836  tgid:6836  ppid:6823   flags:0x00000000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5408 [inline]
 __schedule+0x1796/0x49d0 kernel/sched/core.c:6745
 __schedule_loop kernel/sched/core.c:6822 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6837
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6894
 __mutex_lock_common kernel/locking/mutex.c:684 [inline]
 __mutex_lock+0x6a4/0xd70 kernel/locking/mutex.c:752
 cgroup_lock include/linux/cgroup.h:368 [inline]
 cgroup_kn_lock_live+0xe6/0x290 kernel/cgroup/cgroup.c:1662
 __cgroup_procs_write+0xf4/0x4f0 kernel/cgroup/cgroup.c:5148
 cgroup_procs_write+0x29/0x50 kernel/cgroup/cgroup.c:5188
 cgroup_file_write+0x2ce/0x6d0 kernel/cgroup/cgroup.c:4092
 kernfs_fop_write_iter+0x3a1/0x500 fs/kernfs/file.c:334
 new_sync_write fs/read_write.c:497 [inline]
 vfs_write+0xa72/0xc90 fs/read_write.c:590
 ksys_write+0x1a0/0x2c0 fs/read_write.c:643
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f2fcd37475f
RSP: 002b:00007ffef5ae4ce0 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007f2fcd37475f
RDX: 0000000000000001 RSI: 00007ffef5ae4d30 RDI: 0000000000000003
RBP: 00007ffef5ae52e0 R08: 0000000000000000 R09: 00007ffef5ae4b37
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000001
R13: 00007ffef5ae4d30 R14: 00007ffef5ae52e0 R15: 00007ffef5ae52a0
 </TASK>

Showing all locks held in the system:
3 locks held by kworker/1:0/25:
1 lock held by khungtaskd/31:
 #0: ffffffff8e333f20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:329 [inline]
 #0: ffffffff8e333f20 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:781 [inline]
 #0: ffffffff8e333f20 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6614
3 locks held by kworker/u8:3/51:
 #0: ffff888015089148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3223 [inline]
 #0: ffff888015089148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x90a/0x1830 kernel/workqueue.c:3329
 #1: ffffc90000bb7d00 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3224 [inline]
 #1: ffffc90000bb7d00 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_scheduled_works+0x945/0x1830 kernel/workqueue.c:3329
 #2: ffffffff8e362110 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: copy_process+0x21f1/0x3dc0 kernel/fork.c:2486
1 lock held by kworker/u9:1/4479:
 #0: ffffffff8e362110 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: do_exit+0x6b4/0x27e0 kernel/exit.c:835
2 locks held by getty/4843:
 #0: ffff88802ac130a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc900031332f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6b5/0x1e10 drivers/tty/n_tty.c:2211
1 lock held by kworker/u9:2/5091:
 #0: ffffffff8e362110 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: do_exit+0x6b4/0x27e0 kernel/exit.c:835
1 lock held by kworker/u9:3/5093:
 #0: ffffffff8e362110 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: do_exit+0x6b4/0x27e0 kernel/exit.c:835
1 lock held by kworker/u9:5/5096:
 #0: ffffffff8e362110 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: do_exit+0x6b4/0x27e0 kernel/exit.c:835
1 lock held by kworker/u9:6/5098:
 #0: ffffffff8e362110 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: do_exit+0x6b4/0x27e0 kernel/exit.c:835
1 lock held by kworker/u9:8/5100:
 #0: ffffffff8e362110 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: do_exit+0x6b4/0x27e0 kernel/exit.c:835
1 lock held by kworker/u9:9/5101:
 #0: ffffffff8e362110 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: do_exit+0x6b4/0x27e0 kernel/exit.c:835
1 lock held by kworker/1:4/5139:
 #0: ffffffff8e362110 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: do_exit+0x6b4/0x27e0 kernel/exit.c:835
1 lock held by kworker/1:6/5168:
 #0: ffffffff8e362110 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: do_exit+0x6b4/0x27e0 kernel/exit.c:835
1 lock held by syz-executor/5286:
 #0: ffffffff8e362110 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: do_exit+0x6b4/0x27e0 kernel/exit.c:835
6 locks held by syz-executor/6738:
3 locks held by syz-executor/6836:
 #0: ffff888023706420 (sb_writers#10){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:2854 [inline]
 #0: ffff888023706420 (sb_writers#10){.+.+}-{0:0}, at: vfs_write+0x227/0xc90 fs/read_write.c:586
 #1: ffff888079dd1088 (&of->mutex){+.+.}-{3:3}, at: kernfs_fop_write_iter+0x1eb/0x500 fs/kernfs/file.c:325
 #2: ffffffff8e361f28 (cgroup_mutex){+.+.}-{3:3}, at: cgroup_lock include/linux/cgroup.h:368 [inline]
 #2: ffffffff8e361f28 (cgroup_mutex){+.+.}-{3:3}, at: cgroup_kn_lock_live+0xe6/0x290 kernel/cgroup/cgroup.c:1662
1 lock held by syz.0.364/6968:
1 lock held by syz.0.364/6970:
1 lock held by syz.2.365/6966:
 #0: ffff88807ccf0198 (&mm->mmap_lock){++++}-{3:3}, at: mmap_write_lock_killable include/linux/mmap_lock.h:122 [inline]
 #0: ffff88807ccf0198 (&mm->mmap_lock){++++}-{3:3}, at: vm_mmap_pgoff+0x17c/0x3d0 mm/util.c:571
2 locks held by syz.2.365/6967:

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 31 Comm: khungtaskd Not tainted 6.10.0-rc6-syzkaller-00223-gc6653f49e4fd #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/07/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:223 [inline]
 watchdog+0xfde/0x1020 kernel/hung_task.c:379
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 6965 Comm: syz.0.364 Not tainted 6.10.0-rc6-syzkaller-00223-gc6653f49e4fd #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/07/2024
RIP: 0010:common_interrupt_return+0x1a/0xcc arch/x86/entry/entry_64.S:561
Code: 89 c4 48 89 e7 e8 26 10 dd ff e9 91 04 00 00 90 66 90 b9 48 00 00 00 65 48 8b 15 71 64 62 74 83 e2 fe 89 d0 48 c1 ea 20 0f 30 <eb> 35 cc cc cc 41 5f 41 5e 41 5d 41 5c 5d 5b 41 5b 41 5a 41 59 41
RSP: 0018:ffffc9000453ff58 EFLAGS: 00000046
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000048
RDX: 0000000000000000 RSI: ffffffff8bcabb40 RDI: ffffffff8c1f15c0
RBP: 0000000000000000 R08: ffffffff8fac1d2f R09: 1ffffffff1f583a5
R10: dffffc0000000000 R11: fffffbfff1f583a6 R12: 0000000000000000
R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
FS:  00007f89ebe8a6c0(0000) GS:ffff8880b9500000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000110c3ec8e9 CR3: 000000001186a000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
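
The backtraces and lock dump above show the blocked paths: kworker/u8:3 is stuck in cgroup_can_fork() -> cgroup_css_set_fork() -> cgroup_threadgroup_change_begin() waiting for a read acquisition of cgroup_threadgroup_rwsem, several kworker/u9 exit helpers are listed as holding that rwsem from do_exit(), and syz-executor/6836 is blocked acquiring cgroup_mutex in cgroup_kn_lock_live() on the cgroup.procs write path. The sketch below is a minimal, hypothetical userspace workload that drives those same two entry points (fork and cgroup.procs writes) concurrently. It is not the syzkaller reproducer (none is attached to this report), and the cgroup.procs path it opens is an assumption about the test setup.

/*
 * Hypothetical stress sketch, not the syzkaller reproducer.
 * One process forks and reaps children in a loop, exercising
 * copy_process() -> cgroup_can_fork(), which takes
 * cgroup_threadgroup_rwsem for read (per the trace above).
 * The other repeatedly writes its PID to cgroup.procs, exercising
 * cgroup_procs_write() -> cgroup_kn_lock_live(), which acquires
 * cgroup_mutex (per the trace above).
 * CGROUP_PROCS is an assumed path; the cgroup must exist and be writable.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define CGROUP_PROCS "/sys/fs/cgroup/test/cgroup.procs"  /* assumed path */

int main(void)
{
	if (fork() == 0) {
		/* Child: fork/exit as fast as possible (cgroup_can_fork path). */
		for (;;) {
			pid_t pid = fork();
			if (pid == 0)
				_exit(0);
			if (pid > 0)
				waitpid(pid, NULL, 0);
		}
	}

	/* Parent: repeatedly migrate itself via cgroup.procs
	 * (cgroup_procs_write -> cgroup_kn_lock_live path). */
	char buf[32];
	int len = snprintf(buf, sizeof(buf), "%d\n", getpid());

	for (;;) {
		int fd = open(CGROUP_PROCS, O_WRONLY);
		if (fd < 0) {
			perror("open " CGROUP_PROCS);
			exit(1);
		}
		if (write(fd, buf, len) < 0)
			perror("write cgroup.procs");
		close(fd);
	}
}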

Crashes (5):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2024/07/07 22:02 upstream c6653f49e4fd bc4ebbb5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in cgroup_can_fork
2024/09/21 05:34 upstream baeb9a7d8b60 6f888b75 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in cgroup_can_fork
2024/08/23 09:19 linux-next c79c85875f1a ce8a9099 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in cgroup_can_fork
2024/08/22 10:34 linux-next 6a7917c89f21 ca02180f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in cgroup_can_fork
2024/07/03 03:12 linux-next 82e4255305c5 1ecfa2d8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in cgroup_can_fork