
Resume tokens and last-event IDs for LLM streaming: How they work & what they cost to build

When an AI response reaches token 150 and the connection drops, most implementations have one answer: start over. The user re-prompts, you pay for the same tokens twice, and the experience breaks. Resume tokens and last-event IDs are the mechanism that prevents this. They make streams addressable – every message gets an identifier, clients track their position, and reconnections pick up from exactly where they left off. The concept is straightforward.
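The replay contract can be sketched in a few lines. This is an illustrative in-memory model of the idea (the `TokenStream` class and its method names are invented for this sketch, not a real Ably or SSE API): every published token gets a monotonically increasing ID, and a reconnecting client sends its last-seen ID to receive only the tail it missed.

```python
class TokenStream:
    """Server-side buffer: every streamed token gets an increasing event ID."""

    def __init__(self):
        self.events = []   # list of (event_id, token)
        self.next_id = 0

    def publish(self, token):
        self.events.append((self.next_id, token))
        self.next_id += 1

    def resume_from(self, last_event_id):
        # Replay only events the client has not yet seen, mirroring
        # the SSE Last-Event-ID reconnection contract.
        return [(i, t) for i, t in self.events if i > last_event_id]


stream = TokenStream()
for tok in ["The", " answer", " is", " 42"]:
    stream.publish(tok)

# The client saw events 0 and 1, then the connection dropped.
# On reconnect it presents its last-seen ID and gets only the tail.
missing = stream.resume_from(1)
assert [t for _, t in missing] == [" is", " 42"]
```

The same shape underlies SSE's `Last-Event-ID` header: the position lives on the client, so the server only needs an addressable, ordered buffer to resume from.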

Why AI agents need a transport layer: Solving the realtime sync problem

Building AI agents that work reliably in production requires solving problems that have nothing to do with AI. While teams focus on prompt engineering, model selection, and agent orchestration, a different class of challenges emerges at deployment. These have little to do with LLMs and everything to do with keeping agents and clients synchronized in realtime. Over the past few months, we've spoken with engineers at over 40 companies building AI assistants, copilots, and agentic workflows.

WebSockets vs HTTP for AI applications: which to choose in 2026

When building AI experiences, choosing between WebSockets and HTTP isn't always straightforward. Which protocol is better for streaming LLM responses? How do you maintain continuity when users switch devices mid-conversation? Should you use both? The answer depends on the type of AI experience you're building. Modern AI applications often require both protocols, each serving different purposes. The key question is: how do you decide which communication pattern fits each scenario in your AI stack?

Edit and delete messages without rewriting your history layer

Editing or removing a message after it’s been published sounds simple. In realtime systems, it usually isn’t. Once a message has been delivered to multiple clients, cached locally, and written into history, changing it safely becomes a coordination problem. Clients need to agree on what’s current. History needs to stay consistent. Reconnects and refreshes can’t bring back stale content. That’s why many systems treat messages as immutable by default.
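One common way to make edits safe is to treat them as new versions of an immutable base message and have clients keep only the highest version per message ID. The sketch below is illustrative (the `apply_update` helper and field names are assumptions, not a specific product API), but it shows why versioning lets stale replays and reconnects converge on the same state.

```python
def apply_update(store, msg):
    """store maps message_id -> latest message dict with a 'version' field."""
    current = store.get(msg["id"])
    # Ignore stale updates that arrive late or are replayed on reconnect.
    if current is None or msg["version"] > current["version"]:
        store[msg["id"]] = msg


store = {}
apply_update(store, {"id": "m1", "version": 1, "text": "helo"})
apply_update(store, {"id": "m1", "version": 2, "text": "hello"})  # the edit
apply_update(store, {"id": "m1", "version": 1, "text": "helo"})   # stale replay
assert store["m1"]["text"] == "hello"

# A delete can be modelled as a tombstone version, so a reconnect
# or history fetch cannot resurrect the removed content.
apply_update(store, {"id": "m1", "version": 3, "deleted": True})
assert store["m1"].get("deleted") is True
```

Because the comparison is on version numbers rather than arrival order, every client converges on the same current state regardless of delivery timing.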

Appends for AI apps: Stream into a single message with Ably AI Transport

Streaming tokens is easy. Resuming cleanly is not. A user refreshes mid-response, another client joins late, a mobile connection drops for 10 seconds, and suddenly your “one answer” is 600 tiny messages that your UI has to stitch back together. Message history turns into fragments. You start building a side store just to reconstruct “the response so far”. This is not a model problem; it’s a delivery problem. That’s why we developed message appends for Ably AI Transport.
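The difference between fragment delivery and appends can be shown with a toy event handler. This is a hedged sketch of the append pattern in general (the event shape and `handle` function are invented for illustration, not the Ably AI Transport wire format): each chunk references the message it extends, so history holds one coalesced message rather than hundreds of fragments.

```python
messages = {}  # message_id -> accumulated text


def handle(event):
    """Apply a create or append event to the local message store."""
    if event["action"] == "create":
        messages[event["id"]] = event["text"]
    elif event["action"] == "append":
        # Each append targets an existing message, so late joiners
        # and history reads see one logical response, not fragments.
        messages[event["id"]] += event["text"]


for ev in [
    {"action": "create", "id": "resp-1", "text": "Token"},
    {"action": "append", "id": "resp-1", "text": " by"},
    {"action": "append", "id": "resp-1", "text": " token"},
]:
    handle(ev)

# One logical message in history, not 600 tiny ones.
assert messages == {"resp-1": "Token by token"}
```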

Why orchestrators become a bottleneck in multi-agent AI

Complex user tasks often need multiple AI agents working together, not just a single assistant. That’s what agent collaboration enables. Each agent has its own specialism - planning, fetching, checking, summarising - and they work in tandem to get the job done. The experience feels intelligent and joined-up, not monolithic or linear. But making that work means more than prompt chaining or orchestration logic.

Multi-agent AI systems need infrastructure that can keep up

When you're building agentic AI applications with multiple agents working together, the infrastructure challenges show up fast. Agents need to coordinate, users need visibility into what's happening, and the whole system needs to stay responsive even as tasks branch out across specialised workers. We built a multi-agent travel planning system to understand these problems better. What we learned applies well beyond holiday booking.

Realtime steering: interrupt, barge-in, redirect, and guide the AI

Start typing, change your mind, redirect the AI mid-response. It just works. That is the promise of realtime steering. Users expect to interrupt an answer, correct its direction, or inject new instructions on the fly without losing context or restarting the session. It feels simple, but delivering it requires low-latency control signals, reliable cancellation, and shared conversational state that survives disconnects and device switches.
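At the application layer, barge-in reduces to reliable cancellation of the in-flight generation followed by a redirected one. The sketch below illustrates that control flow with `asyncio` task cancellation (the `generate` coroutine and its timings are invented for this example; it is not Ably's steering API): the running stream is cancelled cleanly, partial output is preserved, and a new prompt takes over.

```python
import asyncio


async def generate(prompt, out):
    """Simulated token stream: appends one token per tick until cancelled."""
    try:
        for tok in prompt.split():
            out.append(tok)
            await asyncio.sleep(0.05)   # simulate per-token latency
    except asyncio.CancelledError:
        out.append("<cancelled>")       # clean stop; partial output survives
        raise


async def main():
    out = []
    task = asyncio.create_task(generate("a very long original answer", out))
    await asyncio.sleep(0.12)           # user barges in mid-response
    task.cancel()                       # the low-latency control signal
    try:
        await task
    except asyncio.CancelledError:
        pass
    # Redirect: start a new generation with the corrected instruction.
    await generate("short redirected answer", out)
    return out


result = asyncio.run(main())
assert "<cancelled>" in result          # original stream stopped cleanly
assert result[-1] == "answer"           # redirected response completed
```

In a distributed deployment the cancel signal travels over the transport layer rather than an in-process task handle, but the contract is the same: interrupt promptly, keep the shared state consistent, resume with the new direction.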