
From Microservices to AI Traffic: Kong's Unified Control Plane When Architecture Gets Complicated

Modern enterprise architecture faces a three-body problem. Three distinct traffic patterns pull your teams in different directions. External APIs serve mobile apps and partner integrations. Internal microservices communicate within Kubernetes clusters. AI and LLM calls flow to OpenAI, AWS Bedrock, and self-hosted models. Each pattern looks API-like on the surface. Yet many organizations handle them with separate tools. The result?

Best Self-Service Analytics Tools for Agencies (Compared by Client Usability + Multi-Client Scale)

An agency-friendly tool cuts reporting time per client without turning every dashboard question into a support ticket. An Account Director sits down two hours before a monthly client call, sees the same pattern again, and opens PowerPoint. The dashboard exists, but the client never “gets it” without a guided tour, so the agency rewrites the story every month to prevent confusion and churn. A dashboard your client can’t read independently is a service ticket waiting to happen.

Beyond the Dashboard: Using Telemetry to Solve the Unknown Unknowns of Performance

Your dashboards are lying to you: not through bad data, but through incomplete data. They show you what you told them to watch. They cannot show you what you did not know to ask. Telemetry-driven performance engineering uses metrics, logs, traces, and profiling to detect and diagnose issues that traditional dashboards cannot capture. The failures that hurt most are not the ones you predicted; they are the ones your monitoring was never designed to catch.
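The difference between a pre-chosen dashboard metric and a trace can be pictured with a toy sketch (plain Python, not a real telemetry SDK; the function names are invented for illustration): a decorator records a span for every call, so even operations nobody thought to chart show up in the trace.

```python
import functools
import json
import time

SPANS = []  # collected trace spans

def traced(fn):
    """Record a span for every call, even ones no dashboard was built for."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            SPANS.append({
                "name": fn.__name__,
                "duration_ms": (time.perf_counter() - start) * 1000,
            })
    return wrapper

@traced
def load_config():
    time.sleep(0.01)

@traced
def render_page():
    load_config()  # a dependency nobody thought to chart

render_page()
print(json.dumps(SPANS, indent=2))
```

A dashboard built only around `render_page` latency would never reveal that `load_config` dominates it; the trace surfaces it without anyone asking in advance.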

Stateful agents, stateless infrastructure: the transport gap AI teams are patching by hand

Every major layer of the AI stack now has a name. Model providers (OpenAI, Anthropic, Google) handle inference. Agent frameworks (Vercel AI SDK, LangGraph, CrewAI) handle orchestration. Durable execution platforms like Temporal make backend workflows crash-proof.

Why AI support fails in production: The infrastructure problem behind every incident

HTTP streaming – the default transport underneath every major agent framework – was never designed for sessions that survive a tab close or hand off cleanly between participants. Two failures surface consistently in production CX products because of this. Both generate support tickets about conversation state and prompt quality. Both trace to the transport layer. The scenario that illustrates them: a customer contacts support about an order that's partially shipped and partially stuck.
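The transport limitation can be sketched in a few lines of plain Python (a toy model, not any framework's actual handler): the stream's position lives only in the request handler's generator, so once the connection drops, that position is gone.

```python
def stream_reply(tokens):
    """Toy HTTP-streaming handler: all state lives in this generator."""
    for t in tokens:
        yield t  # once the connection drops, this position is lost

gen = stream_reply(["Your", " order", " has", " shipped"])
received = [next(gen), next(gen)]  # client receives two tokens...
del gen                            # ...then the tab closes: state is gone
# A reconnect must start a brand-new request; the stream cannot resume
# where it left off, and nothing hands the session to another participant.
print(received)  # ['Your', ' order']
```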

Does your AI stack need a session layer? A maturity framework for teams building AI agents

Most teams building AI agents start with HTTP streaming. It's the right starting point. Every major agent framework defaults to it, it gets tokens on screen fast, and for a single-user prompt-response interaction it works well. The question is when it stops being enough, and how to recognise that before it turns into user experience problems, engineering waste, and technical debt that constrains what your product can do.
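One way to picture what a session layer adds, as a toy sketch (the class and method names are invented, loosely modelled on the Last-Event-ID replay idea from server-sent events): tokens are buffered server-side under a session id, so a reconnecting client resumes from its last acknowledged offset instead of starting the stream over.

```python
class SessionStore:
    """Toy session layer: buffers streamed tokens so clients can resume."""

    def __init__(self):
        self.buffers = {}  # session_id -> tokens already produced

    def append(self, session_id, token):
        self.buffers.setdefault(session_id, []).append(token)

    def resume(self, session_id, last_seen):
        # Replay everything after the client's last acknowledged offset.
        return self.buffers.get(session_id, [])[last_seen:]

store = SessionStore()
for tok in ["The", " answer", " is", " 42"]:
    store.append("sess-1", tok)

# Client reconnects having seen 2 tokens; it receives only the tail.
print(store.resume("sess-1", 2))  # [' is', ' 42']
```

Because the buffer outlives any one HTTP request, the same mechanism lets a second participant attach to `sess-1` mid-conversation, which a bare token stream cannot do.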

How to Teach Your AI Agent to Build Keboola Data Apps

You can build Data Apps inside Keboola with Kai. But what if you prefer working with Keboola via MCP, in Claude Code, Cursor, or another AI-powered editor? Want to build a JavaScript Data App that Kai doesn't support yet? That's what the Keboola AI Kit is for. It's a set of skills you install into your agent so it knows how to work with Keboola: how to query your data, how to structure a Data App, and how to deploy it. Here's how to set it up.

Create tests in Reflect directly from your coding agent!

If you’ve used Claude Code, GitHub Copilot, Cursor, or any coding agent, you already know the feeling. You describe what you want in plain language, the agent figures out the steps, and you watch it work. When something goes wrong, it backs up and tries a different approach. Reflect now brings that same agentic workflow to test automation. Through the SmartBear MCP server, any coding agent that supports MCP can connect to Reflect and build tests from high-level objectives.

DreamFactory 7.4.5 Release: MCP Aggregate Data Tool, Cursor IDE Support, and Production Stability

DreamFactory 7.4.5 ships aggregate_data, a purpose-built MCP tool that lets AI agents compute SUM, COUNT, AVG, MIN, and MAX directly on the database server in a single call. This release also adds Cursor IDE OAuth compatibility, a desktop OAuth success page for smoother onboarding, server-side aggregate expression support across all SQL connectors, and critical MCP daemon stability improvements, including request timeout guards and global error handlers.
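A rough illustration of the pattern such a tool encapsulates, using Python's built-in sqlite3 as a stand-in backend (the table and column names are made up, and this is not DreamFactory's API): one aggregate query returns the summary the server computed, instead of shipping every row back to the agent for client-side reduction.

```python
import sqlite3

# In-memory database standing in for a remote SQL backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany(
    "INSERT INTO orders (total) VALUES (?)",
    [(10.0,), (25.5,), (4.5,)],
)

# Without server-side aggregation, an agent would fetch every row and
# reduce locally. With it, a single call returns the finished summary:
row = conn.execute(
    "SELECT SUM(total), COUNT(*), AVG(total), MIN(total), MAX(total) "
    "FROM orders"
).fetchone()
print(row)  # one tuple: SUM, COUNT, AVG, MIN, MAX
```

For an AI agent the payoff is the same whatever the connector: one round trip and a five-value answer, rather than paging an entire table through the context window.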