syzbot

INFO: task hung in vt_ioctl (5)

Status: auto-obsoleted due to no activity on 2025/06/06 19:57
Subsystems: kernel
First crash: 135d, last: 135d
Similar bugs (5)
Kernel Title Rank 🛈 Repro Cause bisect Fix bisect Count Last Reported Patched Status
upstream INFO: task hung in vt_ioctl (4) serial 1 1 670d 670d 0/29 auto-obsoleted due to no activity on 2023/12/19 05:20
linux-4.19 INFO: task hung in vt_ioctl 1 1 1881d 1881d 0/1 auto-closed as invalid on 2020/09/24 20:12
upstream INFO: task hung in vt_ioctl (3) serial 1 1 850d 850d 0/29 auto-obsoleted due to no activity on 2023/06/22 09:16
upstream INFO: task hung in vt_ioctl serial 1 1 1975d 1975d 0/29 auto-closed as invalid on 2020/05/23 20:53
upstream INFO: task hung in vt_ioctl (2) serial 1 2 1844d 1883d 0/29 auto-closed as invalid on 2020/10/01 08:58

Sample crash report:
INFO: task syz.3.11:5945 blocked for more than 144 seconds.
      Not tainted 6.14.0-rc5-syzkaller-00218-g2a520073e74f #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.11        state:D stack:26800 pid:5945  tgid:5944  ppid:5836   task_flags:0x400140 flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5378 [inline]
 __schedule+0xf43/0x5890 kernel/sched/core.c:6765
 __schedule_loop kernel/sched/core.c:6842 [inline]
 schedule+0xe7/0x350 kernel/sched/core.c:6857
 schedule_timeout+0x244/0x280 kernel/time/sleep_timeout.c:75
 ___down_common+0x2d7/0x460 kernel/locking/semaphore.c:225
 __down_common kernel/locking/semaphore.c:246 [inline]
 __down+0x20/0x30 kernel/locking/semaphore.c:254
 down+0x74/0xa0 kernel/locking/semaphore.c:63
 console_lock+0x5b/0xa0 kernel/printk/printk.c:2833
 vt_ioctl+0x2627/0x2f80 drivers/tty/vt/vt_ioctl.c:920
 tty_ioctl+0x651/0x15d0 drivers/tty/tty_io.c:2802
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:906 [inline]
 __se_sys_ioctl fs/ioctl.c:892 [inline]
 __x64_sys_ioctl+0x190/0x200 fs/ioctl.c:892
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f1da758d169
RSP: 002b:00007f1da84aa038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f1da77a5fa0 RCX: 00007f1da758d169
RDX: 0000000000000000 RSI: 0000000000005609 RDI: 0000000000000005
RBP: 00007f1da760e2a0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f1da77a5fa0 R15: 00007fffce64f018
 </TASK>
INFO: task syz.0.26:6013 blocked for more than 144 seconds.
      Not tainted 6.14.0-rc5-syzkaller-00218-g2a520073e74f #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.26        state:D stack:27264 pid:6013  tgid:6012  ppid:5830   task_flags:0x400140 flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5378 [inline]
 __schedule+0xf43/0x5890 kernel/sched/core.c:6765
 __schedule_loop kernel/sched/core.c:6842 [inline]
 schedule+0xe7/0x350 kernel/sched/core.c:6857
 schedule_timeout+0x244/0x280 kernel/time/sleep_timeout.c:75
 ___down_common+0x2d7/0x460 kernel/locking/semaphore.c:225
 __down_common kernel/locking/semaphore.c:246 [inline]
 __down+0x20/0x30 kernel/locking/semaphore.c:254
 down+0x74/0xa0 kernel/locking/semaphore.c:63
 console_lock+0x5b/0xa0 kernel/printk/printk.c:2833
 vcs_open+0x64/0xc0 drivers/tty/vt/vc_screen.c:763
 chrdev_open+0x237/0x6a0 fs/char_dev.c:414
 do_dentry_open+0x735/0x1c40 fs/open.c:956
 vfs_open+0x82/0x3f0 fs/open.c:1086
 do_open fs/namei.c:3830 [inline]
 path_openat+0x1e88/0x2d80 fs/namei.c:3989
 do_filp_open+0x20c/0x470 fs/namei.c:4016
 do_sys_openat2+0x17a/0x1e0 fs/open.c:1428
 do_sys_open fs/open.c:1443 [inline]
 __do_sys_openat fs/open.c:1459 [inline]
 __se_sys_openat fs/open.c:1454 [inline]
 __x64_sys_openat+0x175/0x210 fs/open.c:1454
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f56fe98d169
RSP: 002b:00007f56ff843038 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f56feba5fa0 RCX: 00007f56fe98d169
RDX: 0000000000108002 RSI: 0000400000000040 RDI: ffffffffffffff9c
RBP: 00007f56fea0e2a0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f56feba5fa0 R15: 00007ffe0fa0dd58
 </TASK>

Showing all locks held in the system:
2 locks held by kthreadd/2:
1 lock held by kworker/R-kvfre/6:
 #0: ffffffff8e076588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
3 locks held by kworker/0:0/9:
 #0: ffff88801b081d48 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work+0x1293/0x1ba0 kernel/workqueue.c:3213
 #1: ffffc900000e7d18 ((crda_timeout).work){+.+.}-{0:0}, at: process_one_work+0x921/0x1ba0 kernel/workqueue.c:3214
 #2: ffffffff8fef9c28 (rtnl_mutex){+.+.}-{4:4}, at: crda_timeout_work+0x15/0x50 net/wireless/reg.c:541
3 locks held by kworker/0:1/10:
 #0: ffff88801b081d48 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work+0x1293/0x1ba0 kernel/workqueue.c:3213
 #1: ffffc900000f7d18 ((reg_check_chans).work){+.+.}-{0:0}, at: process_one_work+0x921/0x1ba0 kernel/workqueue.c:3214
 #2: ffffffff8fef9c28 (rtnl_mutex){+.+.}-{4:4}, at: reg_check_chans_work+0x84/0x1130 net/wireless/reg.c:2481
3 locks held by kworker/u8:0/12:
3 locks held by kworker/u8:1/13:
1 lock held by kworker/R-mm_pe/14:
2 locks held by kworker/1:0/26:
1 lock held by khungtaskd/31:
 #0: ffffffff8e1bd0c0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8e1bd0c0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8e1bd0c0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x7f/0x390 kernel/locking/lockdep.c:6746
3 locks held by kworker/u8:2/36:
3 locks held by kworker/u8:3/53:
3 locks held by kworker/u8:4/68:
3 locks held by kworker/u8:5/717:
2 locks held by kworker/0:2/940:
3 locks held by kworker/u8:6/1163:
1 lock held by kworker/R-bat_e/3399:
 #0: ffffffff8e076588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
5 locks held by kworker/u8:7/4527:
3 locks held by kworker/u8:8/5109:
2 locks held by jbd2/sda1-8/5171:
3 locks held by syslogd/5191:
4 locks held by udevd/5209:
2 locks held by dhcpcd/5502:
4 locks held by dhcpcd/5503:
2 locks held by getty/5601:
 #0: ffff88814ed580a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002fe62f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0xfba/0x1480 drivers/tty/n_tty.c:2211
3 locks held by syz-executor/5821:
2 locks held by syz-executor/5832:
1 lock held by kworker/R-wg-cr/5856:
 #0: ffffffff8e076588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/5858:
 #0: ffffffff8e076588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/5859:
 #0: ffffffff8e076588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8e076588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x836/0xe80 kernel/workqueue.c:3529
1 lock held by kworker/R-wg-cr/5861:
1 lock held by kworker/R-wg-cr/5862:
4 locks held by kworker/0:3/5865:
 #0: ffff88801b080d48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x1293/0x1ba0 kernel/workqueue.c:3213
 #1: ffffc900042b7d18 (reg_work){+.+.}-{0:0}, at: process_one_work+0x921/0x1ba0 kernel/workqueue.c:3214
 #2: ffffffff8fef9c28 (rtnl_mutex){+.+.}-{4:4}, at: reg_todo+0x1c/0x910 net/wireless/reg.c:3217
 #3: ffff88805de50768 (&rdev->wiphy.mtx){+.+.}-{4:4}, at: class_wiphy_constructor include/net/cfg80211.h:6061 [inline]
 #3: ffff88805de50768 (&rdev->wiphy.mtx){+.+.}-{4:4}, at: reg_process_self_managed_hints+0x95/0x1f0 net/wireless/reg.c:3207
1 lock held by kworker/R-wg-cr/5866:
 #0: ffffffff8e076588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8e076588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x836/0xe80 kernel/workqueue.c:3529
1 lock held by kworker/R-wg-cr/5867:
 #0: ffffffff8e076588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8e076588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x836/0xe80 kernel/workqueue.c:3529
1 lock held by kworker/R-wg-cr/5868:
1 lock held by kworker/R-wg-cr/5869:
 #0: ffffffff8e076588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/5870:
 #0: ffffffff8e076588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8e076588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x836/0xe80 kernel/workqueue.c:3529
3 locks held by kworker/0:5/5874:
3 locks held by kworker/0:6/5891:
4 locks held by kworker/0:7/5900:
2 locks held by kworker/0:8/5901:
2 locks held by syz.1.6/5918:
2 locks held by kworker/1:5/5924:
2 locks held by kworker/1:6/6019:
3 locks held by kworker/u8:9/6021:
4 locks held by kworker/u8:10/6022:
2 locks held by kworker/u8:11/6023:
3 locks held by kworker/u8:12/6024:
 #0: ffff88814d825948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x1293/0x1ba0 kernel/workqueue.c:3213
 #1: ffffc90004087d18 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x921/0x1ba0 kernel/workqueue.c:3214
 #2: ffffffff8fef9c28 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:129 [inline]
 #2: ffffffff8fef9c28 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_verify_work+0x12/0x30 net/ipv6/addrconf.c:4730
2 locks held by kworker/0:9/6025:
2 locks held by syz-executor/6026:
2 locks held by kworker/u8:13/6027:
1 lock held by kworker/1:7/6028:
 #0: ffffffff8e076588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678
4 locks held by kworker/0:10/6029:
1 lock held by kworker/u8:14/6030:
 #0: ffffffff8e076588 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2678

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.14.0-rc5-syzkaller-00218-g2a520073e74f #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x27b/0x390 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x29c/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:236 [inline]
 watchdog+0xf62/0x12b0 kernel/hung_task.c:399
 kthread+0x3af/0x750 kernel/kthread.c:464
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 5900 Comm: kworker/0:7 Not tainted 6.14.0-rc5-syzkaller-00218-g2a520073e74f #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Workqueue: wg-crypt-wg2 wg_packet_tx_worker
RIP: 0010:check_preemption_disabled+0x0/0xe0 lib/smp_processor_id.c:13
Code: c0 75 0f 65 8b 05 bc 2c ad 74 85 c0 74 04 90 0f 0b 90 e9 53 fc ff ff 0f 1f 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 <41> 54 55 53 48 83 ec 08 65 8b 1d 8d 7e ae 74 65 8b 05 82 7e ae 74
RSP: 0018:ffffc90000007760 EFLAGS: 00000046
RAX: 0000000000000000 RBX: ffff888034a85a00 RCX: 1ffffffff2de4954
RDX: 1ffff11006950c9a RSI: ffffffff8b6cfe00 RDI: ffffffff8bd35820
RBP: d8332a6fa9f2d431 R08: 0000000000000000 R09: fffffbfff2dd7fb8
R10: ffffffff96ebfdc7 R11: 0000000000000003 R12: 1ffff92000000f13
R13: 0000000000000001 R14: ffff88805fc96040 R15: ffff88805fc960f0
FS:  0000000000000000(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffe79310e40 CR3: 000000007a4c0000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <IRQ>
 lockdep_recursion_finish kernel/locking/lockdep.c:469 [inline]
 lockdep_hardirqs_on_prepare+0x17d/0x420 kernel/locking/lockdep.c:4409
 trace_hardirqs_on+0x36/0x40 kernel/trace/trace_preemptirq.c:78
 __local_bh_enable_ip+0xa4/0x120 kernel/softirq.c:394
 local_bh_enable include/linux/bottom_half.h:33 [inline]
 ip6t_do_table+0xd2d/0x1d40 net/ipv6/netfilter/ip6_tables.c:375
 ip6table_mangle_hook+0xc4/0x770 net/ipv6/netfilter/ip6table_mangle.c:73
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_slow+0xbb/0x200 net/netfilter/core.c:626
 nf_hook.constprop.0+0x42e/0x750 include/linux/netfilter.h:269
 NF_HOOK include/linux/netfilter.h:312 [inline]
 ipv6_rcv+0xa4/0x680 net/ipv6/ip6_input.c:309
 __netif_receive_skb_one_core+0x12e/0x1e0 net/core/dev.c:5893
 __netif_receive_skb+0x1d/0x160 net/core/dev.c:6006
 process_backlog+0x443/0x15f0 net/core/dev.c:6354
 __napi_poll.constprop.0+0xb7/0x550 net/core/dev.c:7188
 napi_poll net/core/dev.c:7257 [inline]
 net_rx_action+0xa94/0x1010 net/core/dev.c:7379
 handle_softirqs+0x213/0x8f0 kernel/softirq.c:561
 do_softirq kernel/softirq.c:462 [inline]
 do_softirq+0xb2/0xf0 kernel/softirq.c:449
 </IRQ>
 <TASK>
 __local_bh_enable_ip+0x100/0x120 kernel/softirq.c:389
 wg_socket_send_skb_to_peer+0x14c/0x220 drivers/net/wireguard/socket.c:184
 wg_packet_create_data_done drivers/net/wireguard/send.c:251 [inline]
 wg_packet_tx_worker+0x1aa/0x810 drivers/net/wireguard/send.c:276
 process_one_work+0x9c5/0x1ba0 kernel/workqueue.c:3238
 process_scheduled_works kernel/workqueue.c:3319 [inline]
 worker_thread+0x6c8/0xf00 kernel/workqueue.c:3400
 kthread+0x3af/0x750 kernel/kthread.c:464
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
net_ratelimit: 22970 callbacks suppressed
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:a6:b5:0d:70:a1:d7, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:a6:b5:0d:70:a1:d7, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)

Crashes (1):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets (help?) Manager Title
2025/03/08 19:55 upstream 2a520073e74f 7e3bd60d .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in vt_ioctl
* Struck through repros no longer work on HEAD.