API and Microservices Management Benchmark

Performance is a critical factor when choosing an API management solution. Businesses need low latency and high throughput to keep API transaction rates moving at the speed of business. This white paper compares Kong and Apigee to show how each performs in production environments.

Kong: Kubernetes Ingress Controller

Kubernetes is fundamentally changing container orchestration; is your stack ready to support it at scale? Watch the talk recording to learn how Kong’s Kubernetes Ingress Controller can power your APIs and microservices on top of the Kubernetes platform. Hear Kong engineers walk through the process of setting up the Ingress controller and review its key features.
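
As a rough illustration of the kind of setup the talk covers, the sketch below creates an Ingress handled by the Kong Ingress Controller using the official Kubernetes Python client. It assumes the controller is already installed in the cluster; the orders Service, its port, and the /orders path are placeholders.

```python
# Hypothetical sketch: route traffic through Kong's Ingress Controller by
# creating a standard Kubernetes Ingress whose ingressClassName is "kong".
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in a pod

ingress = client.V1Ingress(
    api_version="networking.k8s.io/v1",
    kind="Ingress",
    metadata=client.V1ObjectMeta(name="orders-ingress"),
    spec=client.V1IngressSpec(
        ingress_class_name="kong",  # hand routing for this Ingress to Kong
        rules=[
            client.V1IngressRule(
                http=client.V1HTTPIngressRuleValue(
                    paths=[
                        client.V1HTTPIngressPath(
                            path="/orders",
                            path_type="Prefix",
                            backend=client.V1IngressBackend(
                                service=client.V1IngressServiceBackend(
                                    name="orders",  # placeholder backend Service
                                    port=client.V1ServiceBackendPort(number=80),
                                )
                            ),
                        )
                    ]
                )
            )
        ],
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```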

Steps to Deploying Kong as a Service Mesh

In a previous post, we explained how the team at Kong thinks of the term “service mesh.” In this post, we’ll start digging into the workings of Kong deployed as a mesh. We’ll talk about a hypothetical example of the smallest possible deployment of a mesh, with two services talking to each other via two Kong instances – one local to each service.
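
To make the example concrete, here is a minimal sketch of how the two Kong instances could be wired together through their Admin APIs. The hostnames (kong-a, kong-b, service-b), the application port 5000, and the /service-b path are assumptions made for illustration; 8001 and 8000 are Kong's default Admin and proxy ports.

```python
# Minimal two-node mesh wiring via Kong's Admin API: service A reaches
# service B by calling its local Kong, which forwards to the Kong instance
# co-located with service B.
import requests

KONG_A_ADMIN = "http://kong-a:8001"  # Kong instance local to service A
KONG_B_ADMIN = "http://kong-b:8001"  # Kong instance local to service B

# On Kong B: register the co-located service B and expose it under /service-b.
requests.post(f"{KONG_B_ADMIN}/services",
              json={"name": "service-b", "url": "http://service-b:5000"}).raise_for_status()
requests.post(f"{KONG_B_ADMIN}/services/service-b/routes",
              json={"paths": ["/service-b"]}).raise_for_status()

# On Kong A: point a service at Kong B's proxy port, so every call from
# service A hops local proxy -> remote proxy -> service B.
requests.post(f"{KONG_A_ADMIN}/services",
              json={"name": "service-b-upstream", "url": "http://kong-b:8000/service-b"}).raise_for_status()
requests.post(f"{KONG_A_ADMIN}/services/service-b-upstream/routes",
              json={"paths": ["/service-b"]}).raise_for_status()
```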

Microservices and Service Mesh

The service mesh deployment architecture is quickly gaining popularity in the industry. In this architecture, remote procedure calls (RPCs) from one service to another inside your infrastructure pass through two proxies: one co-located with the originating service and one at the destination. The local proxy can act as a load balancer, deciding which remote service instance to contact, while the remote proxy can vet incoming traffic.
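
From the calling service's point of view, the pattern can be sketched as below: the service hands its request to the co-located proxy rather than addressing the remote instance directly, and the mesh takes care of instance selection on the way out and traffic vetting on the way in. The localhost:8000 proxy address, the /inventory route, and the sku parameter are hypothetical.

```python
# Illustrative client-side call in a two-proxy mesh: the service only ever
# talks to its local proxy; load balancing to a concrete remote instance and
# admission checks happen inside the mesh.
import requests

LOCAL_PROXY = "http://localhost:8000"  # assumed address of the co-located proxy

def get_inventory(sku: str) -> dict:
    # "/inventory" is a placeholder route the local proxy is assumed to map
    # to the remote inventory service behind its own co-located proxy.
    response = requests.get(f"{LOCAL_PROXY}/inventory/{sku}", timeout=2)
    response.raise_for_status()
    return response.json()
```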