
The latest News and Information on Software Testing and related technologies.

What Is Baseline Testing? Meaning, Examples & Use Cases

Every software change answers one simple question: Did something break? Baseline testing exists to answer it with confidence. Teams often ship regressions simply because they lack a reliable reference to compare against. In modern software testing, a baseline provides that reference point and helps teams understand change without slowing down delivery.
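As a rough illustration of the idea, a baseline test records a known-good result on the first run and diffs every later run against it. This is a minimal sketch (the function name and JSON-file storage are illustrative, not from any particular tool):

```python
import json
from pathlib import Path

def check_against_baseline(name: str, result: dict, baseline_dir: Path) -> list:
    """Compare a result dict against a stored JSON baseline.

    Returns a list of human-readable differences; an empty list means
    no regression. On the first run, the baseline is recorded instead
    of compared, establishing the reference point.
    """
    baseline_file = baseline_dir / f"{name}.json"
    if not baseline_file.exists():
        baseline_file.write_text(json.dumps(result, indent=2, sort_keys=True))
        return []
    baseline = json.loads(baseline_file.read_text())
    diffs = []
    for key in sorted(set(baseline) | set(result)):
        if baseline.get(key) != result.get(key):
            diffs.append(f"{key}: baseline={baseline.get(key)!r} now={result.get(key)!r}")
    return diffs
```

Real baseline-testing tools add versioning, approval workflows, and fuzzy matching on top of this core compare-against-reference loop.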

Can you share what challenge the customer was facing before finding Katalon?

Many customers were struggling with BDD-style test processes — writing structured, English-language scripts that business and tech teams could agree on but couldn’t easily automate. Before Katalon, they lacked a smooth way to turn those scripts or raw requirements into automated tests, which Katalon (and AI features like Studio Assist) finally made efficient. — Coty Rosenblath, Chief Technology Officer at Katalon.

Measuring the Impact of AI in QA and Automation | Jaydeep Chakrabarty | Testflix 2025

In this fireside chat with Jaydeep, we’ll dive into how AI is changing the way we measure success in both QA processes and live generative AI bots. On the QA side, we’ll look at cycle time reduction—the “time goalie” metric that shows how quickly we move from discovering a bug to fixing it. We’ll also talk about predictive quality accuracy, which shifts QA from being reactive to proactive by predicting which code changes are most likely to introduce bugs. And of course, we’ll touch on test creation velocity—how much faster teams are able to create meaningful automation with AI’s support.
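As a back-of-the-envelope sketch of the cycle-time metric described above (the function names are illustrative, not from the talk), cycle time is simply discovery-to-fix duration, and the "reduction" is the percentage drop in its mean:

```python
from datetime import datetime
from statistics import mean

def cycle_time_hours(found: datetime, fixed: datetime) -> float:
    """Bug cycle time: hours from discovering a bug to fixing it."""
    return (fixed - found).total_seconds() / 3600

def cycle_time_reduction(before_hours: list, after_hours: list) -> float:
    """Percentage reduction in mean cycle time between two periods,
    e.g. before and after introducing AI assistance."""
    return (mean(before_hours) - mean(after_hours)) / mean(before_hours) * 100
```

Test creation velocity can be measured the same way: meaningful automated tests produced per sprint, compared across the same two periods.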

Playwright MCP: Turn Natural Language into Reliable Tests in Minutes | Vignesh Srinivasa Raghavan

Model Context Protocol (MCP) lets AI agents use real tools safely. In this talk, we’ll see how Playwright MCP bridges agents and a real browser by leveraging the accessibility tree (not screenshots) to navigate pages, locate elements, perform actions, and extract data—then export stable Playwright tests you can commit.

Why gRPC is a Debugging Nightmare #speedscale #observability #grpc #testing #devops

gRPC is fast and efficient - until it breaks at 2:00 AM. Traditional observability tools are built for HTTP/1.1 and JSON. When you switch to gRPC, you’re dealing with binary Protobuf payloads and HTTP/2 multiplexing that most logs and traces simply weren't designed to handle. Speedscale flips the switch by decoding Protobuf directly into human-readable JSON in real-time. Get the speed of gRPC with the visibility of REST.
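To see why those payloads are opaque to HTTP/1.1-era tooling, consider the gRPC wire format: each message on the stream is framed as a 1-byte compressed flag plus a 4-byte big-endian length, followed by binary Protobuf. A minimal sketch of splitting that stream (the framing is per the gRPC spec; the function name is illustrative):

```python
import struct

def split_grpc_messages(body: bytes) -> list:
    """Split a gRPC message stream into (compressed, payload) tuples.

    Framing: 1-byte compressed flag + 4-byte big-endian length + payload.
    The payload is still binary Protobuf; rendering it as JSON requires
    the service's .proto schema, which is exactly what generic HTTP logs
    and traces don't have.
    """
    messages, offset = [], 0
    while offset < len(body):
        compressed, length = struct.unpack_from(">BI", body, offset)
        offset += 5
        messages.append((bool(compressed), body[offset:offset + length]))
        offset += length
    return messages
```

Even with the frames split correctly, the bytes inside remain unreadable without schema-aware decoding, which is the gap tools like Speedscale aim to close.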

Evaluating AI Tools: Practical Framework for Testers & Leaders | Ajay Balamurugadas | Testflix 2025

The AI ecosystem is exploding with tools that promise to accelerate delivery, improve quality, and transform the way we work. Yet for many teams, evaluating these tools is overwhelming - flashy demos and marketing claims rarely answer the real questions: Will this work in our context? Can it scale? Is it sustainable?
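One common way to make such an evaluation concrete is a weighted scorecard. This sketch uses the three questions above as criteria; the weights and function name are illustrative assumptions, not the framework from the talk:

```python
def score_tool(ratings: dict, weights: dict) -> float:
    """Weighted average score (0-5 scale) for a tool across criteria.

    ratings: criterion -> team's 0-5 rating after a hands-on trial.
    weights: criterion -> relative importance in your context.
    """
    total_weight = sum(weights.values())
    return sum(ratings[c] * w for c, w in weights.items()) / total_weight

# Illustrative criteria drawn from the questions above.
CRITERIA_WEIGHTS = {"fits_our_context": 0.4, "scales": 0.3, "sustainable": 0.3}
```

The value of a scorecard like this is less the final number than forcing the team to rate tools on its own criteria instead of a vendor's demo.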

Stateful vs Stateless: A Developer's Real-World Guide (2026)

Why do some bugs only appear after deployment, even when tests pass locally? Early in my backend work, I kept hearing discussions around stateful vs stateless. It felt academic at first, but once I started dealing with scaling issues, flaky tests, and production bugs, I saw how much this decision actually matters. This article is based on how I’ve seen these architectures behave in real systems, not just diagrams.
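The distinction fits in a few lines. In this sketch (names are illustrative), the stateful version remembers prior calls, so two replicas behind a load balancer drift apart and tests depend on call order; the stateless version carries everything in its arguments, so any replica gives the same answer:

```python
# Stateful: the result depends on hidden history inside the object.
# Two instances of this class serving the same user will disagree.
class StatefulCounter:
    def __init__(self):
        self.count = 0

    def hit(self) -> int:
        self.count += 1
        return self.count

# Stateless: everything needed arrives as input, so the function is
# deterministic, trivially testable, and safe to scale horizontally.
def hit(count: int) -> int:
    return count + 1
```

This is the pattern behind many "works locally, breaks in production" bugs: local tests exercise one instance in one order, while production spreads the hidden state across several.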

DLP: The Key to Secure K8s Testing #speedscale #dlp #kubernetes #devops #testing

Testing with production traffic doesn't have to be a security risk. Engineers often avoid production data because of sensitive info like passwords, tokens, and PII. But legacy test data management is too static for modern, fast-changing payloads. Enter the Speedscale Streaming DLP Engine. It automatically detects and redacts sensitive data in real time as it's captured from your environment. You get the realism of production traffic without the risk of a data breach.
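The core of any DLP-style redaction pass is pattern detection plus substitution on payloads as they stream through. This is a toy sketch of the idea, not Speedscale's engine; the patterns shown are a small illustrative subset:

```python
import re

# Illustrative detectors; a production DLP engine ships far more,
# plus structural awareness of JSON/Protobuf payloads.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "BEARER_TOKEN": re.compile(r"Bearer\s+[A-Za-z0-9._-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(payload: str) -> str:
    """Replace sensitive substrings with labeled placeholders,
    preserving payload shape so the traffic stays realistic."""
    for label, pattern in PATTERNS.items():
        payload = pattern.sub(f"<{label}>", payload)
    return payload
```

Because only the sensitive fields are swapped for placeholders, the redacted traffic keeps its structure and timing, which is what makes it usable as realistic test input.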

More code, more bugs, same team. So what's your plan?

The plan is to test earlier and faster to keep up with AI-generated code. By using AI-assisted, in-sprint testing and shift-left strategies, teams can catch issues sooner, scale testing with the same team, and maintain quality despite higher code volume. — Alex Martins, VP of Strategy at Katalon

Follow Katalon for more insights in our series!