
How leading AI companies really build: lessons from 40+ engineering leaders

What does it actually take to ship Gen 2 AI experiences to real users at scale? Matthew O'Riordan, CEO of Ably, shares insights from conversations with 40+ engineering leaders — including at unicorns and public companies — on where AI delivery breaks and what production teams are doing about it.

The missing transport layer in user-facing AI applications

Most AI applications start the same way: wire up an LLM, stream tokens to the browser, ship. That works for simple request-response. It breaks when sessions outlast a connection, when users switch devices, or when an agent needs to hand off to a human. The cracks appear in the delivery layer, not the model. Every serious production team discovers this independently and builds their own workaround. Those workarounds don't hold once users start hitting them in production.

Resume tokens and last-event IDs for LLM streaming: How they work & what they cost to build

When an AI response reaches token 150 and the connection drops, most implementations have one answer: start over. The user re-prompts, you pay for the same tokens twice, and the experience breaks. Resume tokens and last-event IDs are the mechanism that prevents this. They make streams addressable – every message gets an identifier, clients track their position, and reconnections pick up from exactly where they left off. The concept is straightforward.
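The idea above can be sketched in a few lines. This is an illustrative client-side model, not Ably's actual API: each streamed chunk carries a monotonically increasing event id, the client records the last id it applied, and after a reconnect it asks the server to replay from that position, dropping any duplicates.

```typescript
// Hypothetical types and names for illustration only.
type Chunk = { id: number; text: string };

class ResumableStream {
  private lastEventId = 0; // position of the last chunk applied
  private buffer = "";     // the response assembled so far

  apply(chunk: Chunk): void {
    if (chunk.id <= this.lastEventId) return; // ignore replayed duplicates
    this.buffer += chunk.text;
    this.lastEventId = chunk.id;
  }

  // Sent to the server on reconnect so the stream resumes after
  // the last chunk this client actually received.
  resumeFrom(): number {
    return this.lastEventId;
  }

  get text(): string {
    return this.buffer;
  }
}
```

The cost is bookkeeping on both sides: the server must retain recent chunks keyed by id long enough to replay them, and the client must persist its position across refreshes and device switches.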

Why AI agents need a transport layer: Solving the realtime sync problem

Building AI agents that work reliably in production requires solving problems that have nothing to do with AI. While teams focus on prompt engineering, model selection, and agent orchestration, a different class of challenges emerges at deployment. These have little to do with LLMs and everything to do with keeping agents and clients synchronized in realtime. Over the past few months, we've spoken with engineers at over 40 companies building AI assistants, copilots, and agentic workflows.

WebSockets vs HTTP for AI applications: which to choose in 2026

When building AI experiences, choosing between WebSockets and HTTP isn't always straightforward. Which protocol is better for streaming LLM responses? How do you maintain continuity when users switch devices mid-conversation? Should you use both? The answer depends on the type of AI experience you're building. Modern AI applications often require both protocols, each serving different purposes. The key question is: how do you decide which communication pattern fits each scenario in your AI stack?

Edit and delete messages without rewriting your history layer

Editing or removing a message after it’s been published sounds simple. In realtime systems, it usually isn’t. Once a message has been delivered to multiple clients, cached locally, and written into history, changing it safely becomes a coordination problem. Clients need to agree on what’s current. History needs to stay consistent. Reconnects and refreshes can’t bring back stale content. That’s why many systems treat messages as immutable by default.
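One common way to square edits with immutability is to model each edit as a new immutable version of the same message id, with deletes as tombstone versions. A minimal sketch, with hypothetical names (this is not any specific product's API): clients converge by keeping only the highest version they have seen, so replayed history or out-of-order delivery can never resurrect stale content.

```typescript
// Illustrative types; real systems would also carry timestamps, authors, etc.
type MessageVersion = {
  id: string;          // stable identity of the logical message
  version: number;     // increases with each edit or delete
  text: string | null; // null marks a tombstone (deleted message)
};

class MessageStore {
  private latest = new Map<string, MessageVersion>();

  apply(v: MessageVersion): void {
    const current = this.latest.get(v.id);
    // Anything at or below the version we already hold is stale:
    // replayed history must not overwrite a newer edit or delete.
    if (current && current.version >= v.version) return;
    this.latest.set(v.id, v);
  }

  // What the UI should show for a message id, or null if deleted/unknown.
  render(id: string): string | null {
    const v = this.latest.get(id);
    return v && v.text !== null ? v.text : null;
  }
}
```

The design choice here is last-writer-wins by version number: simple and convergent, at the cost of needing a single authority (usually the server) to assign versions.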

Appends for AI apps: Stream into a single message with Ably AI Transport

Streaming tokens is easy. Resuming cleanly is not. A user refreshes mid-response, another client joins late, a mobile connection drops for 10 seconds, and suddenly your “one answer” is 600 tiny messages that your UI has to stitch back together. Message history turns into fragments. You start building a side store just to reconstruct “the response so far”. This is not a model problem. It’s a delivery problem. That’s why we developed message appends for Ably AI Transport.
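The append idea can be sketched as a fold: every streamed token is an append to one logical message, identified here by a serial, so replaying the log yields a single complete message per response rather than hundreds of fragments. The types and function below are illustrative assumptions, not the actual Ably AI Transport API.

```typescript
// Hypothetical event shape: `serial` identifies the logical message,
// `seq` orders appends within it, `delta` is the streamed text chunk.
type AppendEvent = { serial: string; seq: number; delta: string };

// Rebuild complete messages from an append log, tolerating
// out-of-order arrival by sorting on the per-message sequence.
function materialize(events: AppendEvent[]): Map<string, string> {
  const messages = new Map<string, string>();
  for (const e of [...events].sort((a, b) => a.seq - b.seq)) {
    messages.set(e.serial, (messages.get(e.serial) ?? "") + e.delta);
  }
  return messages;
}
```

A late joiner or a reconnecting client then just asks for the message's current state instead of replaying every fragment itself.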

Why orchestrators become a bottleneck in multi-agent AI

Complex user tasks often need multiple AI agents working together, not just a single assistant. That’s what agent collaboration enables. Each agent has its own specialism (planning, fetching, checking, summarising) and they work in tandem to get the job done. The experience feels intelligent and joined-up, not monolithic or linear. But making that work takes more than prompt chaining or orchestration logic.

Multi-agent AI systems need infrastructure that can keep up

When you're building agentic AI applications with multiple agents working together, the infrastructure challenges show up fast. Agents need to coordinate, users need visibility into what's happening, and the whole system needs to stay responsive even as tasks branch out across specialised workers. We built a multi-agent travel planning system to understand these problems better. What we learned applies well beyond holiday booking.