syzbot


INFO: task hung in rxrpc_destroy_all_connections (5)

Status: upstream: reported on 2026/02/25 07:45
Subsystems: kernel
Reported-by: syzbot+138f5aa6fa94d4802887@syzkaller.appspotmail.com
First crash: 325d, last: 7h16m
AI Jobs (1)
ID Workflow Result Correct Bug Created Started Finished Revision Error
8d127d0d-aa75-47e9-bd7f-70576e4b5f67 repro INFO: task hung in rxrpc_destroy_all_connections (5) 2026/03/08 02:32 2026/03/08 02:32 2026/03/08 02:42 31e9c887f7dc24e04b3ca70d0d54fc34141844b0
Discussions (1)
Title Replies (including bot) Last reply
[syzbot] [kernel?] INFO: task hung in rxrpc_destroy_all_connections (5) 0 (1) 2026/02/25 07:45
Similar bugs (5)
Kernel Title Rank Repro Cause bisect Fix bisect Count Last Reported Patched Status
linux-4.19 INFO: task hung in rxrpc_destroy_all_connections 1 1 2429d 2429d 0/1 auto-closed as invalid on 2019/12/16 04:54
upstream INFO: task hung in rxrpc_destroy_all_connections net afs 1 1 2415d 2415d 0/29 auto-closed as invalid on 2019/11/30 22:24
upstream INFO: task hung in rxrpc_destroy_all_connections (2) afs net 1 5 2015d 2051d 0/29 auto-closed as invalid on 2021/01/03 19:58
upstream INFO: task hung in rxrpc_destroy_all_connections (4) afs net 1 1 800d 793d 0/29 auto-obsoleted due to no activity on 2024/04/02 13:13
upstream INFO: task hung in rxrpc_destroy_all_connections (3) afs net 1 1 1617d 1617d 0/29 auto-closed as invalid on 2022/02/05 03:08

Sample crash report:
INFO: task syz.1.727:9705 blocked for more than 142 seconds.
      Tainted: G             L      syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.727       state:D stack:27272 pid:9705  tgid:9702  ppid:5829   task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5298 [inline]
 __schedule+0xfee/0x6120 kernel/sched/core.c:6911
 __schedule_loop kernel/sched/core.c:6993 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7008
 schedule_timeout+0x1b2/0x280 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:100 [inline]
 __wait_for_common+0x2e7/0x4c0 kernel/sched/completion.c:121
 __flush_workqueue+0x3f7/0x1200 kernel/workqueue.c:4084
 rxrpc_destroy_all_connections+0xf9/0x420 net/rxrpc/conn_object.c:477
 rxrpc_exit_net+0x7b/0xc0 net/rxrpc/net_ns.c:113
 ops_exit_list net/core/net_namespace.c:199 [inline]
 ops_undo_list+0x2ee/0xab0 net/core/net_namespace.c:252
 setup_net+0x1fa/0x3a0 net/core/net_namespace.c:462
 copy_net_ns+0x46f/0x7c0 net/core/net_namespace.c:579
 create_new_namespaces+0x3ea/0xac0 kernel/nsproxy.c:130
 unshare_nsproxy_namespaces+0xc3/0x1f0 kernel/nsproxy.c:226
 ksys_unshare+0x473/0xad0 kernel/fork.c:3173
 __do_sys_unshare kernel/fork.c:3244 [inline]
 __se_sys_unshare kernel/fork.c:3242 [inline]
 __x64_sys_unshare+0x31/0x40 kernel/fork.c:3242
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x106/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f57e299c819
RSP: 002b:00007f57e3792028 EFLAGS: 00000246 ORIG_RAX: 0000000000000110
RAX: ffffffffffffffda RBX: 00007f57e2c16090 RCX: 00007f57e299c819
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000040000080
RBP: 00007f57e2a32c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f57e2c16128 R14: 00007f57e2c16090 R15: 00007ffe33888f28
 </TASK>

Showing all locks held in the system:
4 locks held by kworker/0:1/10:
 #0: ffff88813fe63148 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffff8880543fc008 (&____s->seqcount#15){.-.-}-{0:0}, at: trace_find_filtered_pid kernel/trace/trace_pid.c:15 [inline]
 #1: ffff8880543fc008 (&____s->seqcount#15){.-.-}-{0:0}, at: trace_ignore_this_task+0xbc/0x100 kernel/trace/trace_pid.c:44
 #2: ffff8880b8426358 (&base->lock){-.-.}-{2:2}, at: lock_timer_base+0x124/0x1d0 kernel/time/timer.c:1004
 #3: ffffffff9b416238 (&obj_hash[i].lock){-.-.}-{2:2}, at: debug_object_activate+0x144/0x490 lib/debugobjects.c:835
1 lock held by khungtaskd/32:
 #0: ffffffff8e7e7760 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8e7e7760 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8e7e7760 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x3d/0x184 kernel/locking/lockdep.c:6775
2 locks held by getty/9509:
 #0: ffff888033f3c0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc900026062f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x419/0x1500 drivers/tty/n_tty.c:2211
1 lock held by syz.1.727/9705:
 #0: ffffffff905febd0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:575
3 locks held by kworker/u10:4/9795:
 #0: ffff88801c6b6948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc90004d77d08 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
 #2: ffffffff905febd0 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xb8/0x920 net/core/net_namespace.c:673
1 lock held by syz.4.760/9870:
 #0: ffffffff905febd0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:575
2 locks held by kworker/u10:8/10140:
 #0: ffff8880b843b360 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2c/0x140 kernel/sched/core.c:647
 #1: ffff8880543fc008 (&____s->seqcount#15){.-.-}-{0:0}, at: trace_find_filtered_pid kernel/trace/trace_pid.c:15 [inline]
 #1: ffff8880543fc008 (&____s->seqcount#15){.-.-}-{0:0}, at: trace_ignore_this_task+0xbc/0x100 kernel/trace/trace_pid.c:44
2 locks held by kworker/u10:9/10141:
 #0: ffff88801df65948 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc900036b7d08 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
1 lock held by syz.5.924/10703:
 #0: ffffffff905febd0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:575
1 lock held by syz.6.932/10740:
 #0: ffff8880598dc148 (&sb->s_type->i_mutex_key#14){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
 #0: ffff8880598dc148 (&sb->s_type->i_mutex_key#14){+.+.}-{4:4}, at: __sock_release+0x86/0x260 net/socket.c:661
1 lock held by syz.0.1193/11816:
 #0: ffff888089ceb008 (&sb->s_type->i_mutex_key#14){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:1028 [inline]
 #0: ffff888089ceb008 (&sb->s_type->i_mutex_key#14){+.+.}-{4:4}, at: __sock_release+0x86/0x260 net/socket.c:661
1 lock held by syz.8.1219/11936:
1 lock held by syz.7.1225/11957:
 #0: ffffffff905febd0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:575
1 lock held by syz.8.1244/12016:
 #0: ffffffff8e7f32b8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x19e/0x3c0 kernel/rcu/tree_exp.h:343
3 locks held by syz.8.1244/12020:
 #0: ffff8880545011c0 (&tty->legacy_mutex){+.+.}-{4:4}, at: __tty_hangup.part.0+0xd9/0x7f0 drivers/tty/tty_io.c:581
 #1: ffff8880545010a0 (&tty->ldisc_sem){++++}-{0:0}, at: __tty_ldisc_lock drivers/tty/tty_ldisc.c:289 [inline]
 #1: ffff8880545010a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_lock+0x65/0xb0 drivers/tty/tty_ldisc.c:313
 #2: ffffffff90617428 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #2: ffffffff90617428 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_dev_lock+0x146/0x360 net/core/dev.c:2162

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 32 Comm: khungtaskd Tainted: G             L      syzkaller #0 PREEMPT(full) 
Tainted: [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x100/0x190 lib/dump_stack.c:120
 nmi_cpu_backtrace.cold+0x12d/0x151 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x1d7/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x141/0x190 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xd25/0x1050 kernel/hung_task.c:515
 kthread+0x370/0x450 kernel/kthread.c:436
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>

Crashes (40):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/04/12 19:28 upstream f5459048c38a 38c8e246 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/04/11 02:29 upstream 7c6c4ed80b87 38c8e246 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/04/01 20:00 upstream 9147566d8016 9a1f7828 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/03/15 06:40 upstream 69237f8c1f69 ee8d34d6 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/03/13 19:55 upstream 0257f64bdac7 351cb5cf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/03/11 06:03 upstream b4f0dd314b39 86914af9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/03/05 10:54 upstream ecc64d2dc9ff a9fe5c9e .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/03/02 07:41 upstream 39c633261414 43249bac .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/25 04:23 upstream 7dff99b35460 787dfb7c .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/24 19:13 upstream 7dff99b35460 96b1aa46 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/21 07:41 upstream a95f71ad3e2e 6e7b5511 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/18 21:01 upstream 23b0f90ba871 77d4d919 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/16 21:08 upstream 0f2acd3148e0 84656fa6 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/14 20:38 upstream 3e48a11675c5 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/01 22:27 upstream 9f2693489ef8 6b8752f2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/02/01 00:46 upstream 162b42445b58 6b8752f2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/01/24 18:53 upstream 62085877ae65 40acda8a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/01/20 02:32 upstream 24d479d26b25 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/01/13 09:33 upstream b71e635feefc d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/01/07 13:07 upstream f0b9d8eb98df d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2026/01/05 11:01 upstream 3609fa95fb0f d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/12/31 12:43 upstream c8ebd433459b d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/12/21 15:19 upstream 9094662f6707 d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/12/05 17:06 upstream 2061f18ad76e d1b870e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/11/26 14:34 upstream 30f09200cc4a c116feb4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/11/24 19:55 upstream ac3fd01e4c1e bf6fe8fe .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/11/19 03:04 upstream 5bebe8de1926 ef766cd7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/11/01 17:03 upstream ba36dd5ee6fd 2c50b6a9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/10/23 18:15 upstream 43e9ad0c55a3 c0460fcd .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/10/04 12:21 upstream 2ccb4d203fe4 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/10/03 23:27 upstream e406d57be7bd 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/09/16 17:25 upstream 46a51f4f5eda e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/08/27 09:55 upstream fab1beda7597 e12e5ba4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/08/23 01:45 upstream cf6fc5eefc5b bf27483f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/08/10 04:36 upstream 561c80369df0 32a0e5ed .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/08/02 19:01 upstream a6923c06a3b2 7368264b .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/07/24 09:40 upstream f9af7b5d9349 0c1d6ded .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/07/12 09:48 upstream 379f604cc3dc 3cda49cf .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/06/21 12:59 upstream 11313e2f7812 d6cdfb8a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
2025/05/22 14:46 upstream d608703fcdd9 0919b50b .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in rxrpc_destroy_all_connections
* Struck through repros no longer work on HEAD.