Whether in the tech press or analyst reports, it became more common in 2018 to see the words “API” and “security”—or worse, “API” and “breach”—together in the same headline. APIs are not only the connective tissue between applications, systems, and data, but also the mechanisms that allow developers to leverage and reuse these digital assets for new purposes.
Four months ago, we declared that API Management is dead and announced our vision for a service control platform. Today, we’re taking a critical step towards fulfilling that vision with the launch of artificial intelligence and machine learning additions to the Kong Enterprise platform – Kong Brain and Kong Immunity.
Apigee experts published over 50 editorials in 2018 — including dozens here in APIs and Digital Transformation — to help developers, IT architects, and business leaders understand how to maximize the value of APIs and keep pace with constant technological change.
2018 was an AMAZING year for Google Cloud’s Apigee team. It was, in fact, another “best year ever.” We’re deeply grateful to the companies who use Apigee to accelerate their businesses with APIs.
Kong is very easy to get up and running: start an instance, configure a service, configure a route pointing to the service, and off it goes routing requests, applying any plugins you enable along the way. But Kong can do a lot more than connect clients to services via routes.
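The setup steps above can be sketched with Kong's Admin API. This is a minimal illustration, assuming a Kong instance is already running locally with the Admin API on its default port (8001); the service name, upstream URL, and path are placeholders, and the rate-limiting plugin stands in for "any plugins you enable":

```shell
# Configure a service pointing at an upstream (hypothetical name and URL)
curl -i -X POST http://localhost:8001/services \
  --data name=example-service \
  --data url=http://example.internal:8080

# Configure a route that sends matching requests to that service
curl -i -X POST http://localhost:8001/services/example-service/routes \
  --data 'paths[]=/example'

# Enable a plugin on the service (rate limiting, as one example)
curl -i -X POST http://localhost:8001/services/example-service/plugins \
  --data name=rate-limiting \
  --data config.minute=5
```

After these calls, requests to Kong's proxy port at `/example` are routed to the upstream, with the plugin applied along the way.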
In a previous post, we explained how the team at Kong thinks of the term “service mesh.” In this post, we’ll start digging into the workings of Kong deployed as a mesh. We’ll talk about a hypothetical example of the smallest possible deployment of a mesh, with two services talking to each other via two Kong instances – one local to each service.