syzbot
INFO: task hung in io_uring_alloc_task_context (5)

Status: auto-obsoleted due to no activity on 2025/04/05 18:35
Subsystems: io-uring
First crash: 209d, last: 209d
Similar bugs (4)
Kernel    Title                                               Subsystem  Rank  Count  Last   Reported  Patched  Status
upstream  INFO: task hung in io_uring_alloc_task_context      fs         1     4      1295d  1355d     0/29     closed as invalid on 2022/02/08 09:40
upstream  INFO: task hung in io_uring_alloc_task_context (2)  fs         1     1      1241d  1241d     0/29     auto-closed as invalid on 2022/06/09 11:30
upstream  INFO: task hung in io_uring_alloc_task_context (4)  io-uring   1     2      1011d  1015d     0/29     auto-obsoleted due to no activity on 2023/04/10 23:14
upstream  INFO: task hung in io_uring_alloc_task_context (3)  fs         1     1      1135d  1135d     0/29     auto-closed as invalid on 2022/09/22 17:27

Sample crash report:
INFO: task syz.1.152:6471 blocked for more than 143 seconds.
      Not tainted 6.13.0-rc5-syzkaller-00163-gab75170520d4 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.152       state:D stack:23600 pid:6471  tgid:6470  ppid:5844   flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0x1850/0x4c30 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6848
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6905
 __mutex_lock_common kernel/locking/mutex.c:665 [inline]
 __mutex_lock+0x7e7/0xee0 kernel/locking/mutex.c:735
 io_init_wq_offload io_uring/tctx.c:22 [inline]
 io_uring_alloc_task_context+0x107/0x610 io_uring/tctx.c:87
 __io_uring_add_tctx_node+0x338/0x540 io_uring/tctx.c:113
 __io_uring_add_tctx_node_from_submit+0x93/0x130 io_uring/tctx.c:156
 io_uring_add_tctx_node io_uring/tctx.h:32 [inline]
 __do_sys_io_uring_enter io_uring/io_uring.c:3390 [inline]
 __se_sys_io_uring_enter+0x2c2f/0x33b0 io_uring/io_uring.c:3330
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fa3ccb85d29
RSP: 002b:00007fa3cd98b038 EFLAGS: 00000246 ORIG_RAX: 00000000000001aa
RAX: ffffffffffffffda RBX: 00007fa3ccd75fa0 RCX: 00007fa3ccb85d29
RDX: 0000000000000000 RSI: 00000000000047ba RDI: 000000000000000a
RBP: 00007fa3ccc01b08 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007fa3ccd75fa0 R15: 00007ffc33ba9298
 </TASK>

Showing all locks held in the system:
3 locks held by kworker/u8:0/11:
 #0: ffff88814d2c2148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88814d2c2148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3310
 #1: ffffc90000107d00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc90000107d00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3310
 #2: ffffffff8fcb2cc8 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0xd0/0x16f0 net/ipv6/addrconf.c:4215
1 lock held by khungtaskd/30:
 #0: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6744
2 locks held by kworker/u8:4/80:
3 locks held by kworker/u8:6/2895:
 #0: ffff88801ac81148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac81148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3310
 #1: ffffc9000bad7d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc9000bad7d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3310
 #2: ffffffff8fcb2cc8 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:281
2 locks held by getty/5589:
 #0: ffff8880351120a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x6a6/0x1e00 drivers/tty/n_tty.c:2211
5 locks held by kworker/u8:11/6328:
 #0: ffff88801baed948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801baed948 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3310
 #1: ffffc90003bbfd00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc90003bbfd00 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3310
 #2: ffffffff8fca6810 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0x16a/0xd50 net/core/net_namespace.c:602
 #3: ffffffff8fcb2cc8 (rtnl_mutex){+.+.}-{4:4}, at: default_device_exit_batch+0xe9/0xaa0 net/core/dev.c:12061
 #4: ffffffff8e93cff8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:329 [inline]
 #4: ffffffff8e93cff8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x451/0x830 kernel/rcu/tree_exp.h:976
1 lock held by syz.1.152/6471:
 #0: ffff888075ec40a8 (&ctx->uring_lock){+.+.}-{4:4}, at: io_init_wq_offload io_uring/tctx.c:22 [inline]
 #0: ffff888075ec40a8 (&ctx->uring_lock){+.+.}-{4:4}, at: io_uring_alloc_task_context+0x107/0x610 io_uring/tctx.c:87
2 locks held by syz.1.152/6473:
1 lock held by syz-executor/7477:
 #0: ffffffff8fcb2cc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8fcb2cc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:326 [inline]
 #0: ffffffff8fcb2cc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0xce2/0x2210 net/core/rtnetlink.c:4011
1 lock held by syz.4.452/7659:
 #0: ffffffff8fcb2cc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8fcb2cc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnetlink_rcv_msg+0x6e6/0xcf0 net/core/rtnetlink.c:6928
3 locks held by syz.2.460/7703:
 #0: ffffffff8fd15710 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8fd155c8 (genl_mutex){+.+.}-{4:4}, at: genl_lock net/netlink/genetlink.c:35 [inline]
 #1: ffffffff8fd155c8 (genl_mutex){+.+.}-{4:4}, at: genl_op_lock net/netlink/genetlink.c:60 [inline]
 #1: ffffffff8fd155c8 (genl_mutex){+.+.}-{4:4}, at: genl_rcv_msg+0x121/0xec0 net/netlink/genetlink.c:1209
 #2: ffffffff8fcb2cc8 (rtnl_mutex){+.+.}-{4:4}, at: team_nl_options_set_doit+0x9b/0x1090 drivers/net/team/team_core.c:2533
1 lock held by syz.0.466/7719:
 #0: ffffffff8fcb2cc8 (rtnl_mutex){+.+.}-{4:4}, at: ip_mroute_setsockopt+0x15b/0x1190 net/ipv4/ipmr.c:1396
1 lock held by syz.0.466/7720:
 #0: ffffffff8fcb2cc8 (rtnl_mutex){+.+.}-{4:4}, at: ip_mroute_setsockopt+0x15b/0x1190 net/ipv4/ipmr.c:1396
4 locks held by syz.5.469/7730:
 #0: ffff88802a39a420 (sb_writers#11){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:515
 #1: ffff888055ad0148 (&type->i_mutex_dir_key#7/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:853 [inline]
 #1: ffff888055ad0148 (&type->i_mutex_dir_key#7/1){+.+.}-{4:4}, at: filename_create+0x260/0x540 fs/namei.c:4080
 #2: ffffffff8e96e828 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_lock include/linux/cgroup.h:368 [inline]
 #2: ffffffff8e96e828 (cgroup_mutex){+.+.}-{4:4}, at: cgroup_kn_lock_live+0xe6/0x290 kernel/cgroup/cgroup.c:1662
 #3: ffffffff8fcb2cc8 (rtnl_mutex){+.+.}-{4:4}, at: cgrp_css_online+0x90/0x2f0 net/core/netprio_cgroup.c:157
1 lock held by syz.5.469/7731:
 #0: ffff888055ad0148 (&type->i_mutex_dir_key#7){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:828 [inline]
 #0: ffff888055ad0148 (&type->i_mutex_dir_key#7){++++}-{4:4}, at: lookup_slow+0x45/0x70 fs/namei.c:1807
2 locks held by syz.5.469/7732:
 #0: ffffffff8fd15710 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8fd155c8 (genl_mutex){+.+.}-{4:4}, at: genl_lock net/netlink/genetlink.c:35 [inline]
 #1: ffffffff8fd155c8 (genl_mutex){+.+.}-{4:4}, at: genl_op_lock net/netlink/genetlink.c:60 [inline]
 #1: ffffffff8fd155c8 (genl_mutex){+.+.}-{4:4}, at: genl_rcv_msg+0x121/0xec0 net/netlink/genetlink.c:1209
2 locks held by syz.3.468/7734:
 #0: ffff8880793a0e08 (&sb->s_type->i_mutex_key#10){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:818 [inline]
 #0: ffff8880793a0e08 (&sb->s_type->i_mutex_key#10){+.+.}-{4:4}, at: __sock_release net/socket.c:639 [inline]
 #0: ffff8880793a0e08 (&sb->s_type->i_mutex_key#10){+.+.}-{4:4}, at: sock_close+0x90/0x240 net/socket.c:1408
 #1: ffffffff8e93cff8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:329 [inline]
 #1: ffffffff8e93cff8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x451/0x830 kernel/rcu/tree_exp.h:976
1 lock held by syz.3.468/7736:
 #0: ffffffff8fcb2cc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8fcb2cc8 (rtnl_mutex){+.+.}-{4:4}, at: rtnetlink_rcv_msg+0x6e6/0xcf0 net/core/rtnetlink.c:6928

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 30 Comm: khungtaskd Not tainted 6.13.0-rc5-syzkaller-00163-gab75170520d4 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:234 [inline]
 watchdog+0xff6/0x1040 kernel/hung_task.c:397
 kthread+0x2f2/0x390 kernel/kthread.c:389
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 53 Comm: kworker/u8:3 Not tainted 6.13.0-rc5-syzkaller-00163-gab75170520d4 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Workqueue: events_unbound nsim_dev_trap_report_work
RIP: 0010:___slab_alloc+0x9ab/0x14a0 mm/slub.c:3738
Code: df be ff ff ff ff e8 f4 08 ce 09 85 c0 0f 84 2e 09 00 00 49 8b 46 10 83 78 28 00 0f 89 ef 09 00 00 41 8b 44 24 28 4a 8b 04 38 <49> 89 06 49 83 46 08 08 49 8b 1c 24 48 83 c3 20 e8 e0 22 ce 09 89
RSP: 0018:ffffc90000bd7870 EFLAGS: 00000082
RAX: 0000000000000000 RBX: ffff8880b8742c40 RCX: 0000000080040004
RDX: 0000000000040003 RSI: ffffffff8c0aaae0 RDI: ffffffff8c5fb0e0
RBP: 0000000000000001 R08: 0000000080040004 R09: 1ffffffff2854910
R10: dffffc0000000000 R11: fffffbfff2854911 R12: ffff88801ac42140
R13: 0000000000000286 R14: ffff8880b8742c20 R15: ffff88802b4fe000
FS:  0000000000000000(0000) GS:ffff8880b8700000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000555584c1f5c8 CR3: 000000000e736000 CR4: 0000000000350ef0
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 __slab_alloc+0x58/0xa0 mm/slub.c:3920
 __slab_alloc_node mm/slub.c:3995 [inline]
 slab_alloc_node mm/slub.c:4156 [inline]
 __do_kmalloc_node mm/slub.c:4297 [inline]
 __kmalloc_node_track_caller_noprof+0x2e9/0x4c0 mm/slub.c:4317
 kmalloc_reserve+0x111/0x2a0 net/core/skbuff.c:609
 __alloc_skb+0x1f3/0x440 net/core/skbuff.c:678
 alloc_skb include/linux/skbuff.h:1323 [inline]
 nsim_dev_trap_skb_build drivers/net/netdevsim/dev.c:748 [inline]
 nsim_dev_trap_report drivers/net/netdevsim/dev.c:805 [inline]
 nsim_dev_trap_report_work+0x261/0xb50 drivers/net/netdevsim/dev.c:851
 process_one_work kernel/workqueue.c:3229 [inline]
 process_scheduled_works+0xa68/0x1840 kernel/workqueue.c:3310
 worker_thread+0x870/0xd30 kernel/workqueue.c:3391
 kthread+0x2f2/0x390 kernel/kthread.c:389
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>

Crashes (1)
Time              Kernel    Commit        Syzkaller  Config   Manager                     Title
2025/01/05 18:32  upstream  ab75170520d4  f3558dbf   .config  ci-upstream-kasan-gce-root  INFO: task hung in io_uring_alloc_task_context
Assets: console log, report, VM info, disk image, vmlinux, kernel image