
Realtime steering: interrupt, barge-in, redirect, and guide the AI

Start typing, change your mind, redirect the AI mid-response. It just works. That is the promise of realtime steering. Users expect to interrupt an answer, correct its direction, or inject new instructions on the fly without losing context or restarting the session. It feels simple, but delivering it requires low-latency control signals, reliable cancellation, and shared conversational state that survives disconnects and device switches.
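The low-latency control signals and reliable cancellation described above can be sketched in miniature. This is an illustrative sketch only, not Ably's API: the `SteerableStream` class and its methods are hypothetical names, showing a token loop that checks for cancellation and injected instructions between tokens.

```python
import queue
import threading

class SteerableStream:
    """Hypothetical cancellable token stream with a barge-in queue."""

    def __init__(self, tokens):
        self._tokens = tokens
        self._cancel = threading.Event()
        self.instructions = queue.Queue()  # user "barge-in" messages

    def cancel(self):
        """User interrupted: stop emitting further tokens."""
        self._cancel.set()

    def run(self):
        emitted = []
        for tok in self._tokens:
            if self._cancel.is_set():
                break  # reliable cancellation: nothing emitted after this
            # Check for an injected instruction between tokens, so a
            # redirect takes effect mid-response rather than after it.
            try:
                note = self.instructions.get_nowait()
                emitted.append(f"[redirected: {note}]")
            except queue.Empty:
                pass
            emitted.append(tok)
        return emitted

stream = SteerableStream(["The", "answer", "is", "42"])
stream.instructions.put("be brief")
print(stream.run())
```

The key property is that cancellation and redirection are checked on every token boundary, so a control signal never waits for the full response to finish.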

How we built an AI-first culture at Ably

Most companies talk about being “AI-first.” At Ably, we decided to actually become one. We build realtime infrastructure for AI applications. To do that credibly, we need to live and breathe AI ourselves – not just in our product, but in how we work every day. Two years ago, we began a company-wide push for AI adoption.

The new Ably dashboard: understand your realtime system in motion

When you’re building realtime systems, blind spots slow you down. The new dashboard gives developers self-serve visibility into everything happening inside their apps, from high-level usage to individual connections, channels and messages. No setup. No external tools. Just open your browser and observe your data in motion.

The evolution of realtime AI: The transport layer needed for stateful, steerable AI UX

When we launched Ably in 2016, we set out to solve a fundamental problem: delivering reliable, low-latency realtime experiences at scale. We built a globally distributed system that didn't force developers to choose between latency, integrity, and reliability – trade-offs that had defined the realtime infrastructure space for years.

Anticipatory customer experience: How realtime infrastructure transforms CX

We're entering a new era of anticipatory customer experience – one that's not just reactive or responsive, but truly predictive. In this model, systems don't wait for friction to appear; they recognise signals early and step in before the user ever feels a slowdown or moment of uncertainty. The bar has shifted: customers now expect brands to predict their needs and act on them first.

Gen2 AI UX: Conversations that stay in sync across every device

Start a conversation on your laptop, finish it on your phone. The context just follows you. That’s what cross-device AI sync delivers. No reloading history, no reintroducing yourself, just one continuous thread across every screen. It builds trust, reduces friction, and makes the assistant feel like a single, persistent presence. This post unpacks why users expect it, what makes it technically tricky, and what your system needs to get it right.
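At its simplest, cross-device sync means conversation state lives in a shared store keyed by conversation id, and any device attaching to that id receives the full history. The sketch below is a toy illustration under that assumption; `ConversationStore` and its methods are hypothetical names, not a real Ably interface.

```python
class ConversationStore:
    """Hypothetical shared store: one thread of messages per conversation id."""

    def __init__(self):
        self._threads = {}  # conversation id -> ordered (role, text) list

    def append(self, conv_id, role, text):
        """Any device appends to the same shared thread."""
        self._threads.setdefault(conv_id, []).append((role, text))

    def attach(self, conv_id):
        """A device joining mid-conversation gets the full history."""
        return list(self._threads.get(conv_id, []))

store = ConversationStore()
store.append("conv-1", "user", "Plan my trip")        # from the laptop
store.append("conv-1", "assistant", "Where to?")
phone_view = store.attach("conv-1")                   # later, from the phone
print(phone_view)  # the phone sees the whole thread
```

Because state is keyed by conversation rather than by device, "reintroducing yourself" never happens: the phone's first read already contains everything the laptop said.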

AI UX: Reliable, resumable token streaming

Refresh the page, lose signal, switch tabs – the AI conversation just keeps going. That’s what reliable, resumable token streaming makes possible. No restarts, no lost context, just the same response picking up right where it left off. It keeps users in flow and builds trust, making conversations feel seamless. Even better, it unlocks things like switching devices mid-stream without missing a beat.
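One common way to make a stream resumable is to buffer tokens with sequence numbers server-side, so a reconnecting client can ask for everything after the last sequence it saw. The sketch below illustrates that idea; `ResumableResponse` and its methods are assumed names for illustration, not Ably's actual API.

```python
class ResumableResponse:
    """Hypothetical server-side buffer of a streamed response."""

    def __init__(self):
        self._buffer = []  # (seq, token) pairs, kept until the response completes

    def push(self, token):
        """Append the next token with a monotonically increasing sequence."""
        self._buffer.append((len(self._buffer), token))

    def read_from(self, last_seq=-1):
        """Return all tokens after last_seq; -1 means from the start."""
        return [(s, t) for s, t in self._buffer if s > last_seq]

resp = ResumableResponse()
for t in ["Hel", "lo ", "wor", "ld"]:
    resp.push(t)

first = resp.read_from()               # client receives seq 0..3 live
# ...connection drops after the client has seen seq 1; on reconnect:
resumed = resp.read_from(last_seq=1)
print("".join(t for _, t in resumed))  # picks up with "world"
```

The client only needs to remember one number (the last sequence it received), which is what makes resuming after a refresh, a dropped signal, or a device switch cheap.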

Live chat at unlimited scale: What it takes to support stadium-sized audiences

Live streaming has evolved from a novelty to the backbone of modern digital events. When major brands host virtual conferences, product launches, or community gatherings, they're no longer dealing with hundreds of viewers – they're managing tens of thousands of concurrent participants, all expecting to engage in realtime chat. We recently worked with a team building live chat for a major creative software company's annual event.