
The Durable Sessions stack is forming

By Matt O'Riordan, CEO and Co-Founder

Across AI infrastructure right now, one word is doing a lot of work: durable. It is attached to execution. To agents. To workflows. To sessions. To streams. To transports. To memory. Every few weeks, another product ships with "durable" in the name. This is not branding noise. The underlying observation is the same in every case. AI systems are long-lived. They can fail at any layer. They need infrastructure that assumes failure rather than hopes against it.

Ably Python SDK v3: realtime for Python, built for AI

Python dominates AI development. It's where teams build their agents, orchestration layers, and the backend systems that turn LLM calls into products people actually use. Over the past year, those systems have matured rapidly. What used to live in notebooks and prototypes is now running in production, serving real users with real expectations around reliability and performance. That maturity brings infrastructure requirements. Tokens need to stream in order.
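One of those requirements, in-order token delivery, can be sketched in a few lines. This is an illustrative buffer-and-reorder pattern, not part of the Ably SDK; the `ordered_tokens` name and the `(seq, text)` event shape are assumptions for the example.

```python
# Hypothetical sketch: re-ordering streamed tokens by sequence number.
# Assumes each token arrives tagged with a monotonically increasing `seq`.

def ordered_tokens(events):
    """Yield token text in sequence order, buffering any that arrive early."""
    buffered = {}
    next_seq = 0
    for seq, text in events:
        buffered[seq] = text
        # Flush every token we can now emit contiguously.
        while next_seq in buffered:
            yield buffered.pop(next_seq)
            next_seq += 1

# Tokens delivered out of order are emitted in order:
out = list(ordered_tokens([(1, "world"), (0, "hello "), (2, "!")]))
# out == ["hello ", "world", "!"]
```

A transport that guarantees ordering makes this buffering unnecessary on the client; the sketch just shows what the guarantee saves you from writing.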

Multi-device AI session continuity: how cross-device conversation sync works

You start a research task on your laptop, the network drops during a meeting, and when you open your phone to continue, the conversation is gone – you re-prompt, get partial duplicate results, and lose 30 minutes of work. The delivery layer dropped it. That's one of the most consistent problems teams hit when building AI applications. It's particularly acute in customer support, where a session belongs to the conversation – not to any single device, connection, or participant.
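The core of "the session belongs to the conversation" is a keying decision: state lives under a conversation ID, not a connection. A minimal in-memory sketch, where the store and function names are illustrative assumptions rather than any product's API:

```python
# Hypothetical sketch: session state keyed by conversation ID, so any
# device can resume where another left off.

sessions = {}  # conversation_id -> list of messages

def append_message(conversation_id, message):
    """Write to the conversation, regardless of which device sent it."""
    sessions.setdefault(conversation_id, []).append(message)

def resume(conversation_id, from_offset):
    """A reconnecting device asks for everything after the offset it last saw."""
    return sessions.get(conversation_id, [])[from_offset:]

append_message("conv-42", "user: where is my order?")
append_message("conv-42", "agent: checking now...")

# A phone that has seen nothing resumes from offset 0 and gets both messages:
missed = resume("conv-42", 0)
```

In production the store would be durable and replicated, but the keying decision is the part that determines whether a second device can pick the conversation up at all.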

Why AI support fails in production: The infrastructure problem behind every incident

HTTP streaming – the default transport underneath every major agent framework – was never designed for sessions that survive a tab close or hand off cleanly between participants. Two failures surface consistently in production CX products because of this. Both generate support tickets about conversation state and prompt quality. Both trace to the transport layer. The scenario that illustrates them: a customer contacts support about an order that's partially shipped and partially stuck.
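The gap can be made concrete: a plain HTTP stream has no identity beyond the connection, so a drop loses whatever was still in flight. Surviving that requires server-side retention plus a client-held cursor – the mechanism SSE exposes as `Last-Event-ID`. A minimal in-memory sketch, with all names illustrative:

```python
# Hypothetical sketch: server retains an event log; a reconnecting client
# presents the last event ID it saw and receives only what it missed.

event_log = []  # retained events: (event_id, data)

def publish(data):
    event_log.append((len(event_log), data))

def subscribe(last_event_id=None):
    """Replay everything after the client's cursor, as a resumable stream would."""
    start = 0 if last_event_id is None else last_event_id + 1
    return [data for event_id, data in event_log[start:]]

publish("token: The")
publish("token: order")

# The connection drops after the client saw event 0; on reconnect it
# resumes from its cursor instead of re-prompting:
missed = subscribe(last_event_id=0)
# missed == ["token: order"]
```

Without the retained log and cursor, the only recovery path is re-running the prompt – which is exactly where the duplicate results and state tickets come from.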

Stateful agents, stateless infrastructure: the transport gap AI teams are patching by hand

Every major layer of the AI stack now has a name. Model providers – OpenAI, Anthropic, Google – handle inference. Agent frameworks – Vercel AI SDK, LangGraph, CrewAI – handle orchestration. Durable execution platforms like Temporal make backend workflows crash-proof.

Does your AI stack need a session layer? A maturity framework for teams building AI agents

Most teams building AI agents start with HTTP streaming. It's the right starting point. Every major agent framework defaults to it, it gets tokens on screen fast, and for a single-user prompt-response interaction it works well. The question is when it stops being enough – and how to recognise that before it turns into user experience problems, engineering waste, and technical debt that constrains what your product can do.

What 40+ engineering teams learned about shipping AI to users at scale

There's no shortage of noise in AI right now. New frameworks, protocols, demos, and acronyms appear almost weekly. But when you speak directly to the teams actually shipping AI to users at scale, a different picture emerges. This is what we've learned over the last few months from speaking to CTOs, AI engineering leads, and product leaders from unicorns, public companies, and fast-growing platforms across industries where humans interact directly with AI.

AI Transport in action: resumable streaming, multi-device sync, and more

How do you deliver token streams, sync conversation state across devices, and let users interrupt an agent mid-response – without rebuilding your stack every time you switch frameworks? Mike Christensen demonstrates Ably AI Transport in action, walking through the key primitives every production AI application needs and showcasing a multi-agent holiday planning app built on those primitives. Topics covered.

LiveObjects now available: shared state without the infrastructure overhead

Shared state is a hard problem. Not hard in the abstract, computer-science sense (the concepts are well understood). Hard in the "someone has to actually build this" sense, where every team that wants a live leaderboard, a shared config panel, or a poll that updates in real time ends up reinventing the same wheels: conflict resolution, reconnection handling, state recovery. Most teams do not want to spend their time building and maintaining that layer. They want to ship the feature that depends on it.
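One of those reinvented wheels, conflict resolution, often starts as last-write-wins: a timestamp decides between concurrent writers so every replica converges on the same value. A hedged sketch of that idea – the structure is illustrative and not the LiveObjects API:

```python
# Hypothetical sketch: last-write-wins conflict resolution for a shared key.
# A newer timestamp always beats an older one, regardless of arrival order.

state = {}  # key -> (timestamp, value)

def apply_write(key, value, timestamp):
    """Accept a write only if it is newer than what we already hold."""
    current = state.get(key)
    if current is None or timestamp > current[0]:
        state[key] = (timestamp, value)

# Two clients write concurrently; the later timestamp wins on every
# replica, even if the stale write arrives second:
apply_write("leaderboard:top", "alice", timestamp=5)
apply_write("leaderboard:top", "bob", timestamp=3)  # stale, ignored
```

Last-write-wins is only the simplest policy – counters, lists, and maps each need their own merge rules, which is precisely the layer most teams would rather not build.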