
What is AIOps? Transforming IT Operations with AI

Picture this: It's 3 AM, and your phone erupts with alerts. Within minutes, you're drowning in a tsunami of notifications—hundreds of them—while your company's critical services hang by a thread. Your monitoring dashboard looks like a Christmas tree gone wrong, every light blinking red, and you have no idea where to start. Sound familiar?

How to Build a Multi-LLM AI Agent with Kong AI Gateway and LangGraph

In the first two parts of this series, we discussed "How to Strengthen a ReAct AI Agent with Kong AI Gateway" and "How to Build a Single-LLM AI Agent with Kong AI Gateway and LangGraph." In this third and final part, we're going to evolve the AI Agent to use multiple LLMs, with Semantic Routing policies applied across them. We'll also explore new capabilities introduced in Kong AI Gateway 3.11 that support other GenAI infrastructures.
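To make the idea of semantic routing concrete before diving in: the gateway inspects the meaning of an incoming prompt and forwards it to the best-suited model. Kong AI Gateway does this with embedding-based similarity; the toy sketch below (not Kong's implementation, and with made-up model names) stands in keyword overlap for semantic similarity, just to show the routing decision itself.

```python
# Toy illustration of semantic routing: pick the model whose keyword
# profile best matches the incoming prompt. The real gateway compares
# embeddings; keyword overlap is a stand-in here.

MODEL_PROFILES = {
    "code-model": {"python", "bug", "function", "compile", "stacktrace"},
    "chat-model": {"hello", "story", "recommend", "explain", "chat"},
    "math-model": {"integral", "equation", "probability", "solve"},
}

def route(prompt: str, default: str = "chat-model") -> str:
    """Return the model name with the highest keyword overlap."""
    words = set(prompt.lower().split())
    scores = {m: len(words & kws) for m, kws in MODEL_PROFILES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default

print(route("Please solve this integral equation"))  # math-model
print(route("fix this python bug"))                  # code-model
```

A prompt matching no profile falls through to the default model, mirroring how a routing policy typically needs a fallback target.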

What is an AI Gateway?

Ever wondered what an AI Gateway is? Think of it as an airport for your AI traffic! We break down how an AI Gateway can:

- Act as a central access point for different AI models
- Provide security for your LLM prompts
- Route traffic to the best model for the job
- Save on AI costs with features like response caching

Learn the basics of this essential tool that helps manage AI and LLM costs, security, and efficiency.

Kong AI Gateway: Prompt Compression

High token consumption from long prompts can degrade model performance and lead to expensive, inefficient LLM operations. This video demonstrates how to solve that problem using Kong's AI Gateway. AI Prompt Compressor Plugin: See how this plugin intelligently compresses incoming prompts before they hit the model. It summarizes context, removes redundant information, and trims excess tokens, all while preserving the original meaning. This can lead to significant cost savings and improved performance.
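To see why trimming redundancy saves tokens, here's a deliberately simple sketch: collapse repeated whitespace and drop duplicate sentences before a prompt goes to the model. This is not the AI Prompt Compressor plugin (which also summarizes context); it's a minimal illustration of the cost-saving idea.

```python
# Toy prompt "compression": normalize whitespace and deduplicate
# sentences. Every character removed is tokens the model never bills.

import re

def compress(prompt: str) -> str:
    # Collapse runs of whitespace into single spaces.
    text = re.sub(r"\s+", " ", prompt).strip()
    seen, kept = set(), []
    # Split on sentence-ending punctuation and keep first occurrences.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        key = sentence.lower()
        if key not in seen:
            seen.add(key)
            kept.append(sentence)
    return " ".join(kept)

long_prompt = "Summarize the report.   Summarize the report. Focus on Q3 revenue."
print(compress(long_prompt))  # Summarize the report. Focus on Q3 revenue.
```

Even this naive pass shortens the prompt while preserving its meaning; a real compressor applies the same principle with far smarter techniques.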

Marco Palladino on the growth of APIs with AI | CUBE Conversation

Marco Palladino, co-founder and CTO of Kong, joins theCUBE Research’s John Furrier for an insightful conversation exploring the latest technological advancements at the company. As a recognized expert in the field, Palladino shares his perspectives on API innovations and the evolving impact of AI in a world increasingly interconnected by digital technologies.

Service Catalog: The End of API Sprawl

Introducing Kong's Service Catalog, a powerful feature within the Kong Konnect platform designed to improve the lives of API producers. Learn how you can get a complete, 360-degree overview of your entire service ecosystem, not just the services running behind a Kong gateway. In this demo, you'll see why this tool is essential for platform teams who need to enforce governance and for application teams who need clear visibility into service details and dependencies.

Debugger in Kong Konnect

Are you spending too much time trying to track down failing requests or figure out performance issues within your Kong API Gateway? In this quick demo, we show you how to use Konnect's Debugger to save hours of debugging time by rapidly finding the root cause of latency and other issues. You'll learn how Debugger allows you to set up deep tracing sessions for your Kong data planes, collecting OpenTelemetry-compatible traces across the entire request and response lifecycle. We will walk you through a real-world scenario where we diagnose a spike in latency for a specific service.
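As background on what a trace actually shows: a request is broken into timed spans, one per phase of the lifecycle, and the slow span points at the root cause. The stdlib-only sketch below (not Konnect's Debugger, which collects real OpenTelemetry spans from your data planes) illustrates how per-phase timings expose where latency hides.

```python
# Minimal span recorder: time each phase of a request and find the
# slowest one. Real tracing adds span IDs, parent/child nesting, and
# OpenTelemetry export; this shows only the core idea.

import time
from contextlib import contextmanager

spans = []

@contextmanager
def span(name):
    start = time.perf_counter()
    try:
        yield
    finally:
        spans.append((name, time.perf_counter() - start))

# Simulated request lifecycle with an artificially slow upstream call.
with span("auth-plugin"):
    time.sleep(0.01)
with span("upstream"):
    time.sleep(0.03)

slowest = max(spans, key=lambda s: s[1])[0]
print(slowest)  # upstream
```

In a real latency investigation, this is exactly the question a tracing session answers: which phase of the request ate the time.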

How to Build a Single LLM AI Agent with Kong AI Gateway and LangGraph

In my previous post, we discussed how to implement a basic AI Agent with Kong AI Gateway. In part two of this series, we're going to review LangGraph fundamentals, rewrite the AI Agent, and explore how Kong AI Gateway can be used to protect LLM infrastructure as well as external functions.
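For readers new to the graph model LangGraph formalizes: an agent is a set of nodes that each transform shared state and hand control to the next node until an end state. The dependency-free sketch below illustrates that pattern only; it is not LangGraph's API (which uses `StateGraph`, `add_node`, and `add_edge`) nor the agent built in this series.

```python
# Dependency-free sketch of the node/edge agent pattern: each node
# takes the state dict and returns (updated_state, next_node_name);
# "end" terminates the loop.

def plan(state):
    state["steps"].append("plan")
    return state, "act"

def act(state):
    state["steps"].append("act")
    # Pretend a tool call produced the answer on the first pass.
    state["answer"] = "42"
    return state, "end"

NODES = {"plan": plan, "act": act}

def run(state, entry="plan"):
    node = entry
    while node != "end":
        state, node = NODES[node](state)
    return state

result = run({"steps": [], "answer": None})
print(result["steps"], result["answer"])  # ['plan', 'act'] 42
```

LangGraph adds typed state, checkpointing, and conditional edges on top of this loop, which is what makes it practical for real agents.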