
How to Build a Multi-LLM AI Agent with Kong AI Gateway and LangGraph

In the last two parts of this series, we discussed "How to Strengthen a ReAct AI Agent with Kong AI Gateway" and "How to Build a Single-LLM AI Agent with Kong AI Gateway and LangGraph." In this third and final part, we're going to evolve the AI Agent with multiple LLMs and Semantic Routing policies across them. In this blog post, we'll also explore new capabilities introduced in Kong AI Gateway 3.11 that support other GenAI infrastructures.

What is an AI Gateway?

Ever wondered what an AI Gateway is? Think of it as an airport for your AI traffic! We break down how an AI Gateway can:

- Act as a central access point for different AI models
- Provide security for your LLM prompts
- Route traffic to the best model for the job
- Save on AI costs with features like response caching

Learn the basics of this essential tool that helps manage AI and LLM costs, security, and efficiency.
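To make the "central access point" idea concrete, here is a minimal declarative-config sketch that fronts a chat route with Kong's ai-proxy plugin. The field names follow the ai-proxy plugin documentation, but the model name, path, and API key placeholder are illustrative assumptions, not a production setup:

```yaml
_format_version: "3.0"
services:
  - name: llm-service
    # Dummy upstream; the ai-proxy plugin takes over routing to the provider.
    url: https://localhost:32000
    routes:
      - name: chat-route
        paths:
          - /chat
        plugins:
          - name: ai-proxy
            config:
              route_type: llm/v1/chat
              auth:
                header_name: Authorization
                header_value: Bearer <OPENAI_API_KEY>
              model:
                provider: openai
                name: gpt-4o
```

Clients then call the gateway's /chat route with an OpenAI-style request body, and Kong applies security and routing policy before the traffic ever reaches the model provider.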

Kong AI Gateway: Prompt Compression

High token consumption from long prompts can degrade model performance and lead to expensive, inefficient LLM operations. This video demonstrates how to solve that problem using Kong's AI Gateway. AI Prompt Compressor Plugin: See how this plugin intelligently compresses incoming prompts before they hit the model. It summarizes context, removes redundant information, and trims excess tokens, all while preserving the original meaning. This can lead to significant cost savings and improved performance.
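The plugin itself does the heavy lifting inside the gateway; the snippet below is only a toy Python sketch of the general idea (collapsing whitespace and dropping duplicate sentences), not Kong's actual compression algorithm:

```python
import re

def compress_prompt(prompt: str) -> str:
    """Toy prompt compressor: collapse runs of whitespace and drop
    exact-duplicate sentences while preserving their original order."""
    # Normalize all whitespace runs to single spaces.
    text = re.sub(r"\s+", " ", prompt).strip()
    seen: set[str] = set()
    kept: list[str] = []
    # Naive sentence split on terminal punctuation.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        key = sentence.lower()
        if key and key not in seen:
            seen.add(key)
            kept.append(sentence)
    return " ".join(kept)

long_prompt = (
    "Summarize the report.   Summarize the report. "
    "Focus on Q3   revenue."
)
print(compress_prompt(long_prompt))
# → Summarize the report. Focus on Q3 revenue.
```

A real compressor also summarizes context semantically; this sketch only shows why fewer tokens in means a cheaper, faster call out.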

Marco Palladino on the growth of APIs with AI | CUBE Conversation

Marco Palladino, co-founder and CTO of Kong, joins theCUBE Research’s John Furrier for an insightful conversation exploring the latest technological advancements at the company. As a recognized expert in the field, Palladino shares his perspectives on API innovations and the evolving impact of AI in a world increasingly interconnected by digital technologies.

Service Catalog: The End of API Sprawl

Introducing Kong's Service Catalog, a powerful feature within the Kong Konnect platform designed to improve the lives of API producers. Learn how you can get a complete, 360-degree overview of your entire service ecosystem, not just the services running behind a Kong gateway. This tool is essential for platform teams who need to enforce governance and for application teams who need clear visibility into service details and dependencies.

Debugger in Kong Konnect

Are you spending too much time trying to track down failing requests or figure out performance issues within your Kong API Gateway? In this quick demo, we show you how to use Konnect's Debugger to save hours of debugging time by rapidly finding the root cause of latency and other issues. You'll learn how Debugger allows you to set up deep tracing sessions for your Kong data planes, collecting OpenTelemetry-compatible traces across the entire request and response lifecycle. We will walk you through a real-world scenario where we diagnose a spike in latency for a specific service.

How to Build a Single LLM AI Agent with Kong AI Gateway and LangGraph

In my previous post, we discussed how we can implement a basic AI Agent with Kong AI Gateway. In part two of this series, we're going to review LangGraph fundamentals, rewrite the AI Agent and explore how Kong AI Gateway can be used to protect an LLM infrastructure as well as external functions.

Introducing Konnect Debugger: Get Unprecedented API Traffic Visibility

We're excited to announce the general availability of Konnect Debugger, formerly known as Active Tracing during its tech preview phase. This powerful debugging and observability solution in Kong Konnect has evolved from a focused tracing tool into a comprehensive debugging platform.

Create an internal API & service inventory with Konnect Service Catalog

In this livestream, we’ll show you how to use Konnect Service Catalog to build and maintain a centralized inventory of your internal APIs and services. Learn how to improve visibility, enforce automated governance, and enhance collaboration across teams. We’ll highlight key features and best practices that help boost your security posture, improve developer collaboration, and enforce compliance—so your organization can operate with better API oversight.

A Brief History of APIs

The history of modern technology is a story about APIs. But the same tools that built our connected world have also created complexity and fragmentation. Before we can offload major workloads to AI and autonomous agents, we need to fix the shaky foundation they might be built on. This video explains the evolution of APIs, the challenges of API fragmentation, and why managing the full API lifecycle is critical for the future of tech and artificial intelligence.

Announcing Mesh Manager Support in Konnect Terraform Provider

We’re excited to announce the beta support for Mesh Manager in the Konnect Terraform Provider — a new tool that brings the power of infrastructure-as-code to Kong’s Service Mesh management platform. This provider enables engineering teams to declaratively manage Konnect Mesh resources using HashiCorp Terraform.
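As a rough sketch of what declarative mesh management looks like, the Terraform fragment below creates a mesh control plane through the Konnect provider. Treat the resource name and attributes as assumptions based on the provider's registry documentation, and note the feature is in beta, so the schema may change:

```hcl
terraform {
  required_providers {
    konnect = {
      source = "Kong/konnect"
    }
  }
}

# Illustrative mesh control plane resource; confirm the exact
# schema against the Konnect provider docs before use.
resource "konnect_mesh_control_plane" "demo" {
  name        = "demo-mesh-cp"
  description = "Managed declaratively via Terraform"
}
```

With resources like this under version control, mesh changes go through the same plan/apply review workflow as the rest of your infrastructure.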

Announcing Kubernetes Ingress Controller 3.5

We're happy to announce the 3.5 release of Kong Ingress Controller (KIC). This release includes the graduation of combined services to General Availability, support for connection draining, and the beginning of deprecation for some Ingress types as we help customers move to the Kubernetes Gateway API. Let's get into the details!

It's time to start prioritizing every side of API discovery

Join us for a deep dive into API discovery – and why it’s time to treat it like a first-class priority. In this session, we’ll explore what we mean by the “two sides of API discovery” and why unifying both sides with a comprehensive solution is critical to driving API adoption and reuse, strengthening your organization’s security posture, and mitigating the financial and developer productivity-related costs associated with API sprawl.

Kong AI Gateway 3.11: Reduce Token Spend, Unlock Multimodal Innovation

Today, I'm excited to announce one of our largest Kong AI Gateway releases (3.11), which ships with several new features critical in building modern and reliable AI agents in production. We strongly recommend updating to this version to get access to the latest and greatest that AI infrastructure has to offer.

Kong Gateway Enterprise 3.11 Makes APIs & Event Streams More Powerful

We’re excited to bring you Kong Gateway Enterprise 3.11 with compelling new features to make your APIs and event streams even more powerful. We’ll also touch on what’s new with Konnect networking and Active Tracing. There’s a lot to unpack, so keep on reading for the full story!

Build Your Own Internal RAG Agent with Kong AI Gateway

RAG (Retrieval-Augmented Generation) is not a new concept in AI, and unsurprisingly, when talking to companies, everyone seems to have their own interpretation of how to implement it. So, let’s start with a refresher. RAG is a technique that injects relevant data from an external knowledge source directly into a prompt before sending it to a Large Language Model (LLM). “But wait, my model is already fine-tuned on my domain-specific data.”
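To make the refresher concrete, here is a minimal, self-contained sketch of the retrieve-then-inject step in Python. The keyword-overlap retriever and the sample knowledge base are toy assumptions standing in for real embedding search and a vector store:

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercased word tokens with punctuation stripped."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query and
    return up to k with a nonzero score. A real RAG system would
    use embeddings and a vector database instead."""
    q = tokens(query)
    scored = [(len(q & tokens(d)), d) for d in docs]
    return [d for score, d in sorted(scored, key=lambda s: -s[0]) if score > 0][:k]

def build_rag_prompt(query: str, docs: list[str]) -> str:
    """Inject the retrieved context into the prompt before it is
    sent to the LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer using only this context:\n"
        f"{context}\n\n"
        f"Question: {query}"
    )

kb = [
    "Kong AI Gateway routes traffic to multiple LLM providers.",
    "The cafeteria closes at 3 pm on Fridays.",
    "Semantic routing picks a model based on prompt meaning.",
]
print(build_rag_prompt("How does Kong route LLM traffic?", kb))
```

The key point is that only the relevant knowledge-base entry is injected; the model answers from fresh, grounded context rather than from whatever it memorized during training.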

Service Catalog in Kong Konnect: The End of API Sprawl

Are your developers tired of the endless search for APIs? When teams can't find the services they need, innovation slows to a crawl. Developers waste valuable time digging through old repositories and outdated documentation, leading to frustration and project delays. That's why we built the Kong Service Catalog.

AI Gateway Benchmark: Kong AI Gateway, Portkey, and LiteLLM

In February 2024, Kong became the first API platform to launch a dedicated AI gateway, designed to bring production-grade performance, observability, and policy enforcement to GenAI workloads. At its core, Kong’s AI Gateway provides a universal API to enable platform teams to centrally secure and govern traffic to LLMs, AI agents, and MCP servers. Additionally, as AI adoption in your organization begins to skyrocket, so do AI usage costs.

What is API Security? Fundamentals & Strategies

APIs are the digital lifelines powering modern applications, microservices, IoT devices, and everything in between. They act as the universal translators of data, ferrying information between diverse software platforms. API security encompasses the technologies, practices, and protocols dedicated to protecting these invisible workhorses from unauthorized access, data breaches, and malicious misuse.