
From Microservices to AI Traffic: Kong's Unified Control Plane When Architecture Gets Complicated

Modern enterprise architecture faces a three-body problem. Three distinct traffic patterns pull your teams in different directions. External APIs serve mobile apps and partner integrations. Internal microservices communicate within Kubernetes clusters. AI and LLM calls flow to OpenAI, AWS Bedrock, and self-hosted models. Each pattern looks API-like on the surface. Yet many organizations handle them with separate tools. The result?
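The unified-control-plane idea can be pictured as one routing layer that classifies all three traffic types and applies a shared policy chain. The sketch below is purely illustrative; the path prefixes and policy names are hypothetical and not Kong's actual configuration model:

```python
# Hypothetical sketch: one routing table for external, internal, and AI traffic.
# Prefixes and policy names are illustrative, not Kong configuration.
ROUTES = {
    "/partner/": {"kind": "external", "policies": ["auth", "rate-limit"]},
    "/internal/": {"kind": "service-mesh", "policies": ["mtls"]},
    "/ai/": {"kind": "llm", "policies": ["auth", "token-metering"]},
}

def classify(path: str) -> dict:
    """Return the route entry whose prefix matches the request path."""
    for prefix, route in ROUTES.items():
        if path.startswith(prefix):
            return route
    raise KeyError(f"no route for {path}")

print(classify("/ai/chat")["kind"])             # llm
print(classify("/partner/orders")["policies"])  # ['auth', 'rate-limit']
```

The point of the sketch: when all three patterns pass through one classifier, policies like auth and metering are defined once instead of per tool.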

Practical Strategies to Monetize AI APIs in Production

AI APIs carry more weight than they get credit for. They aren't merely technical connectors: they're cost drivers and potential revenue engines, and when something goes sideways, they're ground zero. In production, they behave nothing like the traditional APIs your teams have been running for years; they introduce a whole new set of hurdles around operations, security, and governance that most organizations are still working to understand.

Connecting Kong and Solace: Building Smarter Event-Driven APIs

Bringing APIs and events together has always been a challenge. REST APIs give developers a familiar interface, while event brokers like Solace excel at fan-out, filtering, and scalable, reliable event delivery. The tricky part? Bridging these two worlds without building a lot of custom glue. That’s exactly what the new Kong plugin for Solace upstream mediation does.
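The mediation pattern is easiest to see in miniature: a synchronous REST-style request is translated into a publish on a topic, and the broker fans the event out to every subscriber. Everything below (the in-memory broker, the topic name, the 202 response) is a stand-in for illustration, not the Solace or Kong plugin API:

```python
from collections import defaultdict

class Broker:
    """Minimal in-memory stand-in for an event broker (not the Solace API)."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Fan-out: every subscriber on the topic receives the event.
        for handler in self.subscribers[topic]:
            handler(event)

def rest_to_event(broker, topic, request_body):
    """Mediation step: turn a REST POST body into a published event."""
    broker.publish(topic, {"payload": request_body})
    return {"status": 202}  # accepted for asynchronous delivery

broker = Broker()
received = []
broker.subscribe("orders/created", received.append)
resp = rest_to_event(broker, "orders/created", {"order_id": 42})
print(resp, received)
```

The "custom glue" the plugin removes is exactly this translation layer: the client keeps its familiar request/response interface while the broker handles delivery.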

Kong Simplifies Multicloud Cloud Gateways with Managed Redis Cache

As enterprises race to deploy multicloud architectures and Agentic AI, they face a common bottleneck: "state." To govern AI token usage, manage agent-to-agent communication, or optimize performance via caching, API and AI gateways require a persistence layer to synchronize data. We’re excited to share the GA of Managed Redis cache for Kong Dedicated Cloud Gateways (DCGW).

Configuring Kong Dedicated Cloud Gateways with Managed Redis in a Multi-Cloud Environment

A persistent challenge arises as businesses adopt multicloud architectures and agentic AI: the need for state synchronization. API and AI gateways require a robust persistence layer to synchronize data, whether it's for governing AI token usage, facilitating agent-to-agent communication, or boosting performance through caching.
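The role of the shared persistence layer is easiest to see with a token-budget counter: every gateway node increments the same key, so a limit holds across clouds. The sketch below uses an in-process dict where a real deployment would use the managed Redis cache (e.g., an INCRBY-style counter); the class and key scheme are hypothetical:

```python
class TokenBudget:
    """Shared AI-token budget. The dict stands in for a shared Redis counter."""

    def __init__(self, limit: int):
        self.limit = limit
        self.store = {}  # in production: a shared Redis cache, not process memory

    def consume(self, consumer: str, tokens: int) -> bool:
        """Atomically-intended check-and-increment; False means over budget."""
        used = self.store.get(consumer, 0) + tokens
        if used > self.limit:
            return False  # the gateway would reject or throttle this call
        self.store[consumer] = used
        return True

budget = TokenBudget(limit=1000)
print(budget.consume("agent-a", 600))  # True: 600 of 1000 used
print(budget.consume("agent-a", 600))  # False: would exceed the limit
```

The design point is that the counter's location, not its logic, is what matters: kept in process memory, each gateway node enforces its own limit; kept in a shared cache, the fleet enforces one.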

Leveraging the MCP Registry in Kong Konnect for Dynamic Tool Discovery

As enterprises start deploying AI agents into real systems, a new architectural challenge is emerging. Agents need a reliable way to discover tools, services, and capabilities dynamically, instead of relying on hardcoded integrations. This is where the Model Context Protocol (MCP) ecosystem is rapidly evolving. MCP servers expose tools and capabilities that AI agents can use. However, once organizations begin deploying multiple MCP servers across environments, the question becomes clear.

The Breakdown | API calls and mobile apps

You used an API this morning. Probably before you even got out of bed. That weather app? It's your phone communicating with a server in the cloud — sending a request, getting data back, and displaying it on your screen in seconds. Location. Request format. Expected response. That's the anatomy of an API call. And it's happening constantly across nearly every app on your phone. Hugo Guerrero and Amanda Alcamo break it all down in Episode 2 of The API & AI Breakdown. No jargon. No fluff. Just clarity.
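That three-part anatomy (a location, a request format, an expected response) can be made concrete. The endpoint and response fields below are made up for illustration; the point is the shape of the call, not any real weather service:

```python
import json
from urllib.parse import urlencode

# 1. Location: where the request goes (hypothetical endpoint).
BASE_URL = "https://api.example.com/v1/weather"

# 2. Request format: query parameters the server expects.
params = {"city": "Austin", "units": "metric"}
request_url = f"{BASE_URL}?{urlencode(params)}"

# 3. Expected response: JSON the client knows how to parse
#    (a canned sample here, in place of a live network call).
sample_response = '{"city": "Austin", "temp_c": 21, "conditions": "clear"}'
data = json.loads(sample_response)

print(request_url)
print(f'{data["temp_c"]}°C and {data["conditions"]}')
```

Swap the canned string for an actual HTTP GET and this is, structurally, what the weather app on your phone does every time you unlock it.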

AI Input vs. Output: Why Token Direction Matters for AI Cost Management

In the burgeoning intelligence economy, AI tokens are a metered utility, but enterprise profitability now hinges on a critical distinction: output tokens can cost up to 10x more than inputs, creating a new, invisible risk for cost overruns, particularly with Agentic AI. Learn how Kong AI Gateway and Konnect Metering & Billing provide the essential financial control plane to enforce directional guardrails, protect margins, and turn token consumption into realized revenue.
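The directional pricing gap is simple arithmetic but easy to miss. With illustrative per-million-token prices where output costs 10x input (the prices below are assumptions, not any provider's rate card), two workloads with identical total token counts can differ widely in cost:

```python
def llm_call_cost(input_tokens, output_tokens, in_price=1.00, out_price=10.00):
    """Cost in dollars; prices are illustrative $ per million tokens (output 10x input)."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# One million total tokens each, opposite directions:
prompt_heavy = llm_call_cost(900_000, 100_000)  # mostly input (large context)
output_heavy = llm_call_cost(100_000, 900_000)  # mostly output (agentic drafts, code)
print(prompt_heavy, output_heavy)  # 1.9 vs 9.1
```

This is why metering by total tokens alone hides risk: the output-heavy workload above costs nearly 5x more on the same token volume, which is precisely the overrun pattern agentic AI tends to produce.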

Governing Claude Code: How To Secure Agent Harness Rollouts with Kong AI Gateway

The AI coding agent harness approach is no longer experimental. It is likely the most impactful agentic AI use case in production today, and Claude Code is one of the solutions leading the charge. But as engineering teams race to adopt Claude Code across their organizations, a critical question emerges: who's governing all that LLM traffic?

Beyond the Single Payment Provider Lock-in: How Kong Enables Multi-Rail Billing for the AI Era

The recent article on OpenAI overhauling its payment systems to reduce its dependency on Stripe highlights an important tension many digital platform builders face today: how to balance usage-based monetization with the realities of payments infrastructure dependency.

Kong Insomnia Named in Gartner's Market Guide for API and MCP Testing Tools

We’re proud to share that Kong Insomnia has been recognized as a vendor in the Gartner Market Guide for API and MCP Testing Tools in February 2026. In a rapidly evolving landscape where AI-driven integration and MCP servers are reshaping how APIs are built, tested, and consumed, being recognized by Gartner validates what our community and customers already know: Insomnia is a serious, enterprise-ready platform for modern API development and testing.

Kong Wins AI Innovator of the Year in SiliconANGLE Media's Tech Innovation CUBEd Awards

We're excited to announce that Kong just took home the AI Innovator of the Year award from SiliconANGLE Media's 2026 Tech Innovation CUBEd Awards. SiliconANGLE Media runs this annual awards program to recognize companies, technologies, and people moving the needle in B2B tech. Winners go through a review process by industry analysts and experts.