
The latest News and Information on Software Testing and related technologies.

AI in Performance Testing: MCP Server Integration with OctoPerf

Some topics are just too trendy to overlook, and AI in testing is definitely one of them. A few weeks ago, we shared a blog post introducing the integration between an MCP server and OctoPerf, highlighting the many benefits it brings. To illustrate this in action, we recently hosted a webinar led by Thomas Pitteman, performance testing expert at Adeo and OctoPerf power user.

Mitmproxy vs Proxymock: Replaying Traffic for Realistic API Testing

Replaying traffic is a core tool in your toolbox when you need to reproduce a tricky bug or validate how your app behaves. Traffic replay is especially valuable for testing complex software applications that rely on APIs and microservices, where integration and functionality must be thoroughly validated.

Part 1: Building a Production-Grade Traffic Capture and Replay System

A few years ago I was on call during the Super Bowl. At the time I was working for an observability vendor, and one of our customers had an outage caused by a surge in user traffic. But our monitoring system didn’t have enough data to show what went wrong, and I sat on a call for two hours, painfully listening to them spin up more servers and try to catch up with the user load.

How can we manage and secure test data under regulations like GDPR and CCPA?

Keep test data private by avoiding production data and favoring synthetic data that mimics real patterns. If you must reproduce a production issue, fully anonymize and break any link to personal information, track data provenance, and limit access. Maintain relationships between datasets when generating synthetic records and confirm your software suppliers meet privacy standards. This approach helps teams satisfy GDPR and CCPA while testing effectively.
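As a rough illustration of two of these ideas, the sketch below generates synthetic records that keep referential integrity between datasets (orders always point at a real synthetic user) and uses a one-way hash to break any link back to personal information. It is a minimal example using only the Python standard library; all names (`pseudonymize`, `make_users`, `make_orders`) are hypothetical, not from any specific tool mentioned here.

```python
import hashlib
import random
import uuid

random.seed(7)  # reproducible demo data


def pseudonymize(value: str, salt: str = "rotate-me") -> str:
    """One-way hash that severs the link back to a real identity."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]


def make_users(n: int) -> list[dict]:
    """Synthetic users: realistic shape, no real personal data."""
    return [
        {
            "user_id": str(uuid.uuid4()),
            "email": f"user{i}@example.test",  # clearly fake domain
            "age": random.randint(18, 80),
        }
        for i in range(n)
    ]


def make_orders(users: list[dict], n: int) -> list[dict]:
    """Orders reference synthetic user_ids, preserving the relationship
    between the two datasets."""
    return [
        {
            "order_id": str(uuid.uuid4()),
            "user_id": random.choice(users)["user_id"],
            "total": round(random.uniform(5, 500), 2),
        }
        for _ in range(n)
    ]


users = make_users(10)
orders = make_orders(users, 30)

# Every order points at a user that exists in the synthetic set.
valid_ids = {u["user_id"] for u in users}
assert all(o["user_id"] in valid_ids for o in orders)
```

In a real pipeline the hash salt would be stored and rotated separately, and generation would be driven by schemas sampled (with approval) from anonymized production statistics rather than hard-coded ranges.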

Levels of Autonomy in Software Development: Closing the Gap Between Creation and Confidence

When the automotive industry introduced the concept of Levels of Autonomy, it gave us a shared language for something profound. It wasn’t just about self-driving cars; it was about how humans and intelligent systems work together as execution gradually shifts from one participant to the other. Level 0 is full human control. Level 5 means the car can handle any situation on its own. Between those two extremes is a series of stages that captures both technological progress and human adaptation.

Debugging Without a Net: The Pain of Reproducing Production Issues

Every engineer has been there — a late-night page, a broken feature in production, and no clear way to reproduce it. The logs are vague. The metrics look normal. Your local environment works fine. Yet something somewhere is failing for real users. So begins the detective work — debugging a live system with almost no tools, no perfect test data, and no clone of production.

Agentic QA as a Quality Operating Model

By now, most teams experimenting with AI-augmented testing have started with narrow, tactical use cases: writing test cases faster, summarizing logs, or tagging defects. These are useful — and they build trust in the tech. But true value emerges when you stop thinking of agents as plug-ins, and start thinking of them as a virtual QA team, a set of coordinated roles that evolve how testing is done, how it’s governed, and how it delivers value across the delivery lifecycle.

SmartBear Recognized as a Visionary in the 2025 Gartner Magic Quadrant for API Management

We’re proud to share that Gartner has recognized SmartBear as a Visionary in the 2025 Gartner Magic Quadrant for API Management. We believe this recognition reflects our dedication to API development teams and to delivering practical solutions that drive excellence. APIs are essential, allowing businesses to move faster, integrate easily, and deliver the best experiences for their customers.