
Secure AI at Scale: Prisma AIRS and Kong AI Gateway Now Integrated

In today's digital landscape, APIs are the backbone of modern applications, and AI is the engine of innovation. As organizations increasingly rely on microservices and AI-powered features, the API gateway has become the critical control point for managing traffic. But as LLM/GenAI and MCP requests flow through these gateways, they bring a new wave of security challenges.

The Enterprise API Strategy Cookbook: 8 Ingredients for Legacy Modernization

An enterprise API strategy is no longer an optional IT exercise but a mandate for modern business survival. In the digital economy, your organization's internal connectivity, or lack thereof, determines your speed, agility, and capacity for innovation. This cookbook provides the eight essential ingredients for translating technology investment into clear business outcomes, focusing the C-Suite on value, not code.

Model Context Protocol (MCP) Security: How to Restrict Tool Access Using AI Gateways

For too long, the Model Context Protocol (MCP) has operated on a principle of open access: connect an AI agent to an MCP server, and it gets access to every single tool that server offers. While this approach is simple for initial experimentation, it quickly becomes a liability in production.
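The open-access default described above can be narrowed at the gateway layer by filtering what an agent is allowed to see. A minimal Python sketch of the idea, assuming a hypothetical allowlist policy and made-up tool names (this is not Kong's actual configuration or API):

```python
# Hypothetical sketch: an allowlist filter a gateway could apply to an
# MCP tools/list response, so an agent only sees approved tools.

ALLOWED_TOOLS = {"search_docs", "get_ticket"}  # assumed policy, for illustration


def filter_tools_response(response: dict) -> dict:
    """Drop any tool the policy does not explicitly allow."""
    tools = response.get("result", {}).get("tools", [])
    filtered = [t for t in tools if t.get("name") in ALLOWED_TOOLS]
    return {**response, "result": {**response.get("result", {}), "tools": filtered}}


# A server normally advertises every tool it offers:
server_reply = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"tools": [
        {"name": "search_docs"},
        {"name": "delete_records"},  # sensitive tool the agent should never see
        {"name": "get_ticket"},
    ]},
}

visible = filter_tools_response(server_reply)
print([t["name"] for t in visible["result"]["tools"]])  # → ['search_docs', 'get_ticket']
```

The point is that the agent's view of the server is shaped by policy before the response ever reaches it, rather than trusting the agent to ignore tools it shouldn't use.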

Migrate from Postman to Insomnia: Free Collaboration for Unlimited Users

With Valentine’s Day fast approaching, love is in the air. And apparently, so are breakup emails. This isn’t just about one pricing change. It’s about a pattern. Some tools promise “free forever” to get you invested, watch you build workflows, and then change the rules. They know you’ve onboarded your teams, documented your APIs, and integrated the tool into your daily work. By the time they spring the paid tier on you, switching feels painful.

Introducing the Kong MCP Registry: Connect AI Agents with the Right Tools

In the rapidly evolving landscape of AI-driven development, the Model Context Protocol (MCP) has emerged as the critical standard for connecting AI applications to the data and tools they need. We are excited to announce the Technical Preview of Kong MCP Registry, a major milestone in our mission to provide the most comprehensive platform for modern API and AI management.

Agentic AI Governance: Managing Shadow AI and Risk for Competitive Advantage

While every organization races to deploy AI agents faster, a quieter crisis is compounding in the background, and it will play a large part in determining who survives the agentic era. Too many executives see AI governance as a brake on innovation, or as something to figure out later, after the speed problem is solved. With agentic AI, that's backwards.

Agentic AI Cost Management: Stopping Margin Erosion and the Fragmentation Tax

While every organization races to deploy AI agents faster, finance departments are watching something alarming unfold—and it will play a large part in determining who survives the agentic era. The numbers are stark: 84% of companies report more than 6% gross margin erosion from AI costs. Within that, 26% report erosion of 16% or more. And only 15% of companies can forecast AI costs within ±10% accuracy—the majority miss by 11-25%, and nearly one in four miss by more than 50%.

Building Secure AI Agents with Kong's MCP Proxy and Volcano SDK

Modern AI applications are no longer just about sending prompts to an LLM and returning text. As soon as AI systems need to interact with real business data, internal APIs, or operational workflows, the problem becomes one of orchestration, security, and control. The challenge is to build secure AI agents without embedding fragile logic or exposing sensitive systems directly to a model. This is where a layered architecture using Volcano SDK, DataKit, and Kong MCP Proxy becomes compelling.

Build Agentic Workflows: Expose API Orchestration as MCP Tools with Kong AI Gateway

Learn how to expose an API orchestration workflow as an MCP server using Kong AI Gateway, configure semantic guardrails, and build an agent with the Volcano SDK. We onboard GPT-4 behind /llm, orchestrate with DataKit, and debug MCP tools in Insomnia—end-to-end without adding server code.