
Kong's Dedicated Cloud Gateways: A Deep Dive

In case you missed it, we recently made a big announcement around beta GCP support for Kong’s Dedicated Cloud Gateways (DCGWs). There’s a lot of good stuff in there, but the TL;DR: DCGWs now support all three major cloud service providers (CSPs) — AWS, Azure, and GCP — at a 99.95% SLA, with support for over 25 regions around the globe. Being the first API management vendor to support managed gateway deployments with all three CSPs has a lot of folks excited, for obvious reasons.

72% Say Enterprise GenAI Spending Going Up in 2025, Study Finds

Enterprise adoption of large language models (LLMs) is surging. According to Gartner, more than 80% of enterprises will have deployed generative AI (GenAI) applications or used GenAI APIs by 2026, up from just 5% in 2023. That stark increase paints a telling picture: LLMs have evolved from a fringe technology to a cornerstone of business development and productivity. But as with any new technology, competition is fierce.

How to Use GraphQL with Angular Using Apollo Client

You’ve probably heard of the concept of ‘frontend decides, backend delivers’ in app development. On the off chance that you haven’t, it means that the frontend defines the data it needs, and the backend acts on that instruction. This makes data fetching more efficient, simplifies error handling, and frees us, the devs, from constantly making backend changes. GraphQL, the query language for APIs developed by Facebook, is a vital tool in this regard.
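The ‘frontend decides’ idea is easiest to see in the query itself: the client names exactly the fields it wants, and the server returns nothing more. A minimal sketch below builds such a request body in Python; the `GetUser` query, its fields, and the `id` variable are all hypothetical, for illustration only.

```python
import json

# The client lists exactly the fields it needs -- name and email --
# so the backend delivers only those, nothing else.
query = """
query GetUser($id: ID!) {
  user(id: $id) {
    name
    email
  }
}
"""

# A GraphQL request is typically an HTTP POST whose JSON body
# carries the query text plus any variables.
payload = json.dumps({"query": query, "variables": {"id": "42"}})
```

In an Angular app, Apollo Client handles building and sending this payload for you; the sketch just shows what travels over the wire.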

How To Use Python Code For Pulling API Data Efficiently

Do you ever feel like you need a superpower to get the information you need, especially when you’re working in Python? APIs are pretty much that superpower! APIs (Application Programming Interfaces) let your code "talk" to other systems and fetch exactly what you need. They can help you build a new app, spot the next big market trend, or even automate your morning weather report. This guide shows you how.
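The weather-report idea above can be sketched with nothing but the standard library: build a URL with encoded query parameters, then fetch and decode the JSON response. The endpoint `api.example.com` and its parameters are placeholders, not a real service.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API_BASE = "https://api.example.com/v1/weather"  # hypothetical endpoint

def build_url(base: str, params: dict) -> str:
    """URL-encode query parameters onto the base endpoint."""
    return f"{base}?{urlencode(params)}"

def fetch_json(url: str) -> dict:
    """Perform the GET request and decode the JSON body."""
    with urlopen(url) as resp:
        return json.loads(resp.read())

url = build_url(API_BASE, {"city": "Austin", "units": "metric"})
# data = fetch_json(url)  # uncomment to call the (hypothetical) endpoint
```

For repeated calls, a third-party client like `requests` with a `Session` reuses connections and is usually the more efficient choice, but the shape of the code stays the same.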

When To Use A List Comprehension In Python

To be honest, many Python developers don’t use list comprehensions. Even I, the author of this blog, had never used them before. But when I saw some examples, I felt I had to try them in my own Python code. The reason for this change of mind is that list comprehensions bring a few real advantages. Let’s see what those are in this blog today.
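One of those advantages is concision: a filter-and-transform loop collapses into a single readable expression. A small sketch comparing the two styles:

```python
nums = [1, 2, 3, 4, 5, 6]

# Loop version: square the even numbers.
squares_loop = []
for n in nums:
    if n % 2 == 0:
        squares_loop.append(n * n)

# Comprehension version: same result in one line.
squares_comp = [n * n for n in nums if n % 2 == 0]
# both are [4, 16, 36]
```

Beyond brevity, the comprehension avoids repeated `append` calls and makes the intent (map + filter) visible at a glance.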

How to Ensure Your AI Projects Are Production Ready

Join Kong HQ for an insightful LinkedIn Live session titled "How to Ensure Your AI Projects Are Production Ready." As AI continues to transform industries, moving from experimentation to deployment is one of the biggest challenges organizations face. In this session, our experts will dive into what it truly means for an AI project to be "production ready," discussing essential practices around scalability, reliability, governance, and observability.

Breaking Down Silos: Aligning QA, Dev, and DevOps to Build Better APIs

Software release cycles are accelerating. In fact, 85% of organizations now release at least once per month, with a 51% increase in automated testing spend last year alone. Yet API quality still breaks down. Why? APIs sit at the center of this acceleration, but as velocity increases, many organizations face a persistent challenge.

Platform Engineering vs. DevOps: The Difference in 2025

Let’s start with DevOps, the buzzword that changed how we think about building and shipping software. These days, it seems every college student and working professional wants to become a DevOps engineer. If you’re an aspiring DevOps engineer, or already working as one, this blog will help you understand the difference between platform engineering and DevOps. Platform engineering is really changing every company’s perspective on developing platforms.

Rethinking the Economics of Agentic AI: When 'Cheap' Gets Complicated

Everyone thinks AI is getting cheaper. But is it really? At first glance, the economics of AI seem to be improving for everyone. Thanks to continued model optimization and advances in hardware, the cost of running LLMs (also known as inference) is steadily decreasing. Developers today can access incredibly powerful models at a fraction of what it cost just a year ago. But there’s a catch.