
Announcing Kong Ingress Controller 3.5

We're happy to announce the 3.5 release of Kong Ingress Controller (KIC). This release graduates combined services to General Availability, adds support for connection draining, and begins deprecating some Ingress types as we help customers move to the Kubernetes Gateway API. Let's get into the details!

Announcing Mesh Manager Support in Konnect Terraform Provider

We're excited to announce beta support for Mesh Manager in the Konnect Terraform Provider, bringing the power of infrastructure-as-code to Kong's service mesh management platform. Engineering teams can now declaratively manage Konnect Mesh resources using HashiCorp Terraform.

Transforming Jira Test Management with advanced JQL functions for faster QA insights

If you're part of a software testing team using Jira, you know how crucial it is to track all your tests, their statuses, and how they relate to requirements. But let's be honest: getting real-time test insights in Jira isn't always easy. That's exactly why the latest update from Xray Cloud is a game changer for test management in Jira. This release introduces 29 new advanced JQL (Jira Query Language) functions designed specifically for testing.

What Is A Flaky Test? Causes, Impacts & How To Deal With Them

In software development and automated testing, consistency matters. One of the most frustrating obstacles developers and QA engineers encounter is the flaky test: a test that passes or fails unpredictably with no changes to the code. Flaky tests can do real damage, producing unreliable results that erode trust in the testing function and can even slow down release cycles, especially in CI/CD pipelines.
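To make the idea concrete, here is a minimal, hypothetical sketch (not from the article) that surfaces flakiness the simplest way possible: rerun a test several times and flag it if the outcomes disagree. The function and test names are invented for illustration.

```python
import random

def is_flaky(test_fn, runs=20):
    """Flag a test as flaky if repeated runs disagree."""
    results = {test_fn() for _ in range(runs)}
    return len(results) > 1  # saw both pass (True) and fail (False)

def stable_test():
    return True  # deterministic: always passes

def timing_dependent_test():
    # Stand-in for a race condition: passes only ~70% of the time.
    return random.random() < 0.7

random.seed(42)  # seeded so the demo is reproducible
print(is_flaky(stable_test))            # False
print(is_flaky(timing_dependent_test))  # True
```

Real CI setups apply the same principle with more nuance (quarantining, retry budgets, historical pass-rate tracking), but the core signal is identical: same code, different outcomes.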

Zero Trust for LLMs: Applying Security Principles Through DreamFactory's Gateway

The key to securing large language models (LLMs) lies in adopting a Zero-Trust framework. This approach ensures that every interaction, whether from users, devices, or applications, is verified, authenticated, and authorized. With the rise of LLMs in enterprise environments, traditional security models no longer suffice. Here's how DreamFactory's Gateway helps implement Zero-Trust principles effectively.
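The verify-authenticate-authorize sequence can be sketched in a few lines of Python. This is an illustrative toy, not DreamFactory's actual API; all names (`authorize_llm_request`, `API_KEYS`, `ROLE_SCOPES`) are hypothetical.

```python
# Illustrative Zero-Trust check applied to every LLM request.
# All names here are hypothetical, not DreamFactory's actual API.
API_KEYS = {"key-123": "analyst"}           # credential -> role
ROLE_SCOPES = {"analyst": {"llm:query"}}    # role -> allowed scopes

def authorize_llm_request(api_key, scope):
    """Verify, authenticate, and authorize a single request.

    Zero Trust means no request is implicitly trusted: every call
    must present a valid key AND that key's role must carry the scope.
    """
    role = API_KEYS.get(api_key)
    if role is None:
        return False, "unauthenticated"      # unknown caller
    if scope not in ROLE_SCOPES.get(role, set()):
        return False, "forbidden"            # known caller, not allowed
    return True, "ok"

print(authorize_llm_request("key-123", "llm:query"))   # (True, 'ok')
print(authorize_llm_request("key-123", "llm:admin"))   # (False, 'forbidden')
print(authorize_llm_request("bad-key", "llm:query"))   # (False, 'unauthenticated')
```

The point of the pattern is that the check runs on every call, at the gateway, rather than trusting anything inside the network perimeter.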

Building Streaming Data Pipelines, Part 2: Data Processing and Enrichment With SQL

In my last blog post, I looked at the essential first step in building any data pipeline: exploring the raw source data to understand its characteristics and relationships. The data covers river levels, rainfall, and other weather readings published by the UK Environment Agency via a REST API. I used the HTTP Source connector to stream this into Apache Kafka topics (one per REST endpoint), and then Tableflow to expose these as Apache Iceberg tables.
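The enrichment pattern this part of the series builds toward, joining raw readings against reference data in SQL, can be illustrated offline in a few lines. Here Python's built-in sqlite3 stands in for the streaming SQL engine the post actually uses, and the table names, columns, and station IDs are invented for the sketch.

```python
import sqlite3

# Offline illustration of SQL enrichment: join raw river-level readings
# with station reference data. Schema and values are invented; sqlite3
# stands in for the streaming SQL engine used in the series.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE readings (station_id TEXT, level_m REAL);
    CREATE TABLE stations (station_id TEXT, river TEXT, town TEXT);
    INSERT INTO readings VALUES ('0184TH', 1.23), ('E2043', 0.41);
    INSERT INTO stations VALUES ('0184TH', 'Thames', 'Reading'),
                                ('E2043', 'Ouse', 'York');
""")
rows = conn.execute("""
    SELECT r.station_id, s.river, s.town, r.level_m
    FROM readings r
    JOIN stations s ON s.station_id = r.station_id
    ORDER BY r.station_id
""").fetchall()
print(rows)
```

In the streaming version the `readings` side is an unbounded Kafka-backed table rather than a static one, but the join itself reads the same.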

AI at Scale Needs Control: Inside ClearML's Resource Allocation Policy Manager

By Erez Schnaider, Technical Product Marketing Manager, ClearML

AI engineering today goes far beyond simply training a model. Teams are fine-tuning large language models on high-end GPUs, running massive, distributed experiments, and orchestrating hybrid workflows spanning on-premises clusters, private and public clouds. With great power comes great responsibility, and with powerful hardware comes complexity. Without robust controls, things can quickly descend into costly chaos: Who’s using what?

Don't lose the trace that matters: Multiplayer's zero-sampling approach

Multiplayer is the only session recorder that combines frontend replays with unsampled backend traces, stitched together automatically. You don’t have to choose between drowning in noise and missing the critical data. Backend tracing is the backbone of understanding how modern distributed systems behave. Each request generates a chain of spans as it travels through your services and components: what happened, how long it took, and whether it failed.

How to Get Security Patches for Legacy Unsupported Node.js Versions

Are you still running Node.js 12, 14, or even older versions in production? If so, you’re facing a serious challenge: these versions have reached End-of-Life (EOL) and no longer receive official updates or security patches. For many organizations, especially those operating on legacy environments like RHEL 7 or Ubuntu 18.04, upgrading to the latest Node.js version isn’t always feasible.