
Build Your Own Internal RAG Agent with Kong AI Gateway

RAG (Retrieval-Augmented Generation) is not a new concept in AI, and unsurprisingly, when talking to companies, everyone seems to have their own interpretation of how to implement it. So, let’s start with a refresher: RAG is a technique that injects relevant data from an external knowledge source directly into a prompt before sending it to a Large Language Model (LLM). “But wait, my model is already fine-tuned on my domain-specific data…”
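The retrieve-then-inject pattern described above can be sketched in a few lines. This is a minimal illustration, not Kong's implementation: the toy corpus, the word-overlap scoring (a stand-in for real embedding similarity), and the prompt template are all assumptions.

```python
# Minimal sketch of the RAG pattern: retrieve relevant text from a
# knowledge source, then inject it into the prompt before calling an LLM.
# Corpus, scoring, and prompt template are illustrative assumptions.

KNOWLEDGE_BASE = [
    "Kong AI Gateway sits between clients and LLM providers.",
    "RAG injects retrieved context into the prompt at query time.",
    "Fine-tuning bakes knowledge into model weights ahead of time.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the query -- a
    stand-in for a real vector similarity search."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Inject the retrieved context into the prompt before the LLM call."""
    context = "\n".join(retrieve(query, KNOWLEDGE_BASE))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How does RAG differ from fine-tuning?")
```

In a production setup the retriever would be a vector database and the prompt would be sent to an LLM behind a gateway, but the data flow is the same: retrieve, inject, generate.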

Why API-First Matters in an AI-Driven World

APIs have long been the backbone of modern software systems, architectures, and businesses, and they now dominate the web, accounting for 71% of all internet traffic. Generative AI is accelerating this trend, especially as we pivot away from common web-based capabilities like “search” in favour of AI-enriched variants. More AI leads to more APIs, which act as the key mechanism for moving data into and out of AI applications, AI agents, and Large Language Models (LLMs).

Embed Quality to Ensure Regulatory Compliance in FinTech Solutions

This article originally appeared on Software Testing News. We’re sharing it here for our audience who may have missed it. An overlooked API can expose customer data, trigger multi-million-dollar fines, and sink a FinTech product launch. And now, the FinTech industry is at a crossroads, driven by innovation yet bounded by intensifying regulatory demands.

Bridging SQL and Vector DBs: Unified Data AI Gateways for Hybrid AI Stacks

AI systems need both structured data (like spreadsheets) and unstructured data (like images or text). SQL databases excel at structured data, while vector databases handle unstructured data for tasks like similarity searches. The solution? Hybrid AI stacks that combine both through unified Data AI Gateways.
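One way to picture a unified Data AI Gateway is a single entry point that routes structured lookups to a SQL engine and similarity lookups to a vector index. The sketch below is a hypothetical illustration: `sqlite3` stands in for the SQL database, and a hand-rolled cosine similarity over Python lists stands in for a real vector database; the `DataGateway` class and its method names are assumptions, not an actual product API.

```python
# Hedged sketch of a unified gateway: structured queries go to SQL,
# similarity queries go to a toy in-memory vector index.
import math
import sqlite3

class DataGateway:
    def __init__(self):
        self.sql = sqlite3.connect(":memory:")
        self.sql.execute("CREATE TABLE products (id INTEGER, name TEXT)")
        self.vectors = {}  # product id -> embedding

    def add(self, pid, name, embedding):
        # Write the structured row and the unstructured embedding together.
        self.sql.execute("INSERT INTO products VALUES (?, ?)", (pid, name))
        self.vectors[pid] = embedding

    def query_sql(self, name):
        # Exact-match lookup: the kind of query SQL excels at.
        cur = self.sql.execute("SELECT id FROM products WHERE name = ?", (name,))
        row = cur.fetchone()
        return row[0] if row else None

    def query_similar(self, embedding):
        # Nearest-neighbor lookup: the kind of query vector DBs excel at.
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.hypot(*a) * math.hypot(*b))
        return max(self.vectors, key=lambda pid: cosine(embedding, self.vectors[pid]))

gw = DataGateway()
gw.add(1, "laptop", [1.0, 0.0])
gw.add(2, "phone", [0.0, 1.0])
```

The point of the pattern is that callers see one interface, while the gateway decides which backend answers each query.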

How To Create A Pandas Pivot Table In Python

In today’s data-driven world, collecting data is easy, but making sense of it is what truly matters. That’s where Pandas pivot tables come into play. With just a few lines of Python, you can quickly turn disorganized data into meaningful, well-structured summaries. Imagine Excel pivot tables, but faster, more flexible, and fully powered by code.
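The "few lines of Python" claim is easy to demonstrate with pandas' own `pivot_table` function. The sales data below is made up for illustration; the column names are not from the article.

```python
import pandas as pd

# Hypothetical flat sales data: one row per transaction.
sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West", "East"],
    "product": ["A",    "B",    "A",    "B",    "A"],
    "revenue": [100,    150,    200,    50,     75],
})

# Summarize revenue by region (rows) and product (columns), like an
# Excel pivot table: values are summed within each region/product cell.
pivot = pd.pivot_table(
    sales,
    values="revenue",
    index="region",
    columns="product",
    aggfunc="sum",
)
```

`pivot.loc["East", "A"]` now holds the total East-region revenue for product A (175 here), and swapping `aggfunc` to `"mean"` or `"count"` changes the summary without restructuring the data.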

How To Use A Testing Suite In Software Testing

Quality assurance (QA) is no longer an optional luxury in today’s software development; it is a necessity. As applications become more complex, executing or managing hundreds or thousands of tests by hand is increasingly impractical. Testing suites provide a way to manage a formal collection of test cases so that various aspects of a software application can be tested systematically.
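In Python's standard `unittest` framework, a test suite is exactly this: a named collection of test cases that runs as one unit. The `authenticate` function and the test names below are invented for illustration.

```python
import unittest

# Toy function under test (illustrative, not from the article).
def authenticate(user, password):
    return user == "alice" and password == "secret"

class LoginTests(unittest.TestCase):
    def test_valid_credentials(self):
        self.assertTrue(authenticate("alice", "secret"))

    def test_wrong_password(self):
        self.assertFalse(authenticate("alice", "guess"))

def login_suite():
    # A suite groups related cases so they can be selected and run together,
    # instead of executing every test in the project.
    suite = unittest.TestSuite()
    suite.addTest(LoginTests("test_valid_credentials"))
    suite.addTest(LoginTests("test_wrong_password"))
    return suite

result = unittest.TextTestRunner(verbosity=0).run(login_suite())
```

Suites can also nest other suites, which is how larger projects organize smoke tests, regression tests, and feature-specific runs from the same pool of cases.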

Always Obsessed, Always Brilliant, Always Qlik

At Qlik, we’re incredibly proud of our Luminary and Partner Ambassador communities. They’re more than customers, more than partners, they’re an integral part of who we are. For over a decade, hundreds of data leaders across industries and continents have proudly called themselves Qlik Luminaries and Partner Ambassadors. With new people joining every year, this global network keeps growing, but one thing never changes: these folks bring the spark.

Turn Playwright Test Reports into Insights

Automated testing is essential for delivering quality software in today's high-velocity development environments. Playwright, Microsoft’s open-source end-to-end testing framework for automating user interactions with web browsers, has gained popularity among developers and QA teams thanks to its cross-browser capabilities and rich feature set.
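Turning a report into insights usually starts with parsing it. The sketch below summarizes a machine-readable report into pass/fail counts; the nested structure shown is a simplified assumption loosely modeled on the output of Playwright's JSON reporter (`--reporter=json`), and a real report contains many more fields.

```python
# Hedged sketch: reduce a (simplified) Playwright-style JSON report
# to pass/fail counts. The report shape here is an assumption.
import json

sample_report = json.loads("""
{
  "suites": [
    {"specs": [
      {"title": "login works",
       "tests": [{"results": [{"status": "passed"}]}]},
      {"title": "checkout total",
       "tests": [{"results": [{"status": "failed"}]}]}
    ]}
  ]
}
""")

def summarize(report):
    """Walk suites -> specs -> tests -> results, tallying statuses."""
    counts = {}
    for suite in report.get("suites", []):
        for spec in suite.get("specs", []):
            for test in spec.get("tests", []):
                for result in test.get("results", []):
                    status = result.get("status", "unknown")
                    counts[status] = counts.get(status, 0) + 1
    return counts

summary = summarize(sample_report)
```

From counts like these it is a short step to trend charts, flaky-test detection, and failure dashboards.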

Vibe Data Engineering? We've Been Delivering That Since 2012

A few years ago, if you asked someone to define “vibe data engineering,” you’d probably get a puzzled look. Today, it's a phrase that's beginning to surface in conversations across enterprise teams, especially among those who need data to work for them, not the other way around. It doesn’t mean writing the cleanest DAGs or orchestrating distributed clusters. It means making data work fluidly, simply, and on your terms. It means doing more with less, and doing it without code.