
AI test automation with full visibility | QMetry + Reflect integration

In this short walkthrough, you’ll see how Reflect and QMetry work together to connect automated testing with test management. Test executions from Reflect flow directly into QMetry, giving your team better visibility, reducing manual effort, and helping you move faster without losing control of quality. If you’re looking to scale testing while keeping everything organized and traceable, this integration is built for you.

Turn test data into release insights with AI | SmartBear MCP for Zephyr

Testing teams need to know if they’re ready for a release. Getting answers within Jira, however, often means jumping between multiple screens and reports. In this demo, see how you can query your test data with SmartBear MCP for Zephyr to get insights directly from your testing system of record, so you can make faster, more informed release decisions, all from within AI tools like Copilot, Claude, or VS Code.

The quiet crisis in software quality - and what autonomous testing changes

There’s a tension building inside most engineering organizations right now, and not many people are talking about it openly. AI has given development teams an extraordinary gift: the ability to build faster than ever before. Features that once took days can be prototyped in hours. Applications that required large teams can now be scaffolded by a handful of engineers with the right tools. By almost every measure of development velocity, we are living through a remarkable moment.

Tester's guide to digital transformation: Why robust object recognition matters

Digital transformation rarely happens in a clean, technical environment. Most organizations aren’t starting from a blank slate – you’re operating across a mix of legacy desktop applications, internal web systems, custom-built interfaces, and business-critical workflows that must remain stable while modernization continues around them. The central challenge for test automation is whether it can remain reliable as the underlying technologies evolve.

Create tests in Reflect directly from your coding agent!

If you’ve used Claude Code, GitHub Copilot, Cursor, or any coding agent, you already know the feeling. You describe what you want in plain language, the agent figures out the steps, and you watch it work. When something goes wrong, it backs up and tries a different approach. Reflect now brings that same agentic workflow to test automation. Through the SmartBear MCP server, any coding agent that supports MCP can connect to Reflect and build tests from high-level objectives.
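
For context, here’s a minimal sketch of what that connection looks like from the client side, using the MCP TypeScript SDK. The launch command and package name are placeholders, not the actual SmartBear distribution:

```ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Placeholder launch command: substitute the real SmartBear MCP server.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "smartbear-mcp-server"], // hypothetical package name
});

const client = new Client({ name: "demo-agent", version: "0.1.0" });
await client.connect(transport);

// Discover the tools the server exposes (e.g., test-creation operations).
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```

From there, the agent can invoke whatever test-building tools the server advertises, the same way it calls any of its other tools.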

BearQ Q&A recap: Top questions from SmartBear's live event

Asked a question in our BearQ livestream? We’ve got your answers. We received 100+ questions during the event and couldn’t get to all of them live, so we pulled together the most common ones and answered them here. In this video, we break down what BearQ can test, how it handles authentication and complex workflows, how the AI works behind the scenes, how it fits into your existing tools, and even how to get early access.

Why we built vision AI into TestComplete: Solving the complex app testing challenge

When we talk to testing teams at enterprise organizations, we hear the same frustrations repeatedly: “Our automation breaks every time the UI changes.” “We can’t test this application because it doesn’t expose accessible properties.” “We spend more time maintaining tests than creating new ones.” These scenarios block test automation adoption for teams that need it most.

In case you missed it | Meet SmartBear BearQ + application integrity

Missed the live event? Here’s a quick look at what we unveiled. AI has fundamentally changed how applications are built, creating a growing gap between development velocity and your ability to validate what’s being built. That’s why SmartBear delivers application integrity for the AI era – ensuring continuous, measurable assurance that your software just works as intended, with governance to operate at AI speed and scale.

Advanced Object Recognition in Test Automation: Comparing Leading Enterprise Solutions

Object recognition is the capability of test automation tools to identify, locate, and interact with user interface elements within an application under test. It serves as the bridge between automated test scripts and the visual elements that end users see, enabling tests to accurately simulate user actions and validate application behavior.
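
To make the idea concrete, here’s a small illustrative sketch of layered recognition: try the accessible properties an application exposes first, then fall back to visual matching. This is a hypothetical API, not any particular tool’s interface:

```ts
// Hypothetical types standing in for a real automation framework.
type UIElement = { id: string; name?: string };
type Screen = { elements: UIElement[]; screenshot: Uint8Array };

interface Locator {
  find(screen: Screen): UIElement | undefined;
}

// Property-based recognition: match on accessible properties the app exposes.
const byName = (name: string): Locator => ({
  find: (s) => s.elements.find((e) => e.name === name),
});

// Visual fallback: compare against a reference image when no properties exist.
// matchTemplate is a stub standing in for a real template-matching routine.
const matchTemplate = (_shot: Uint8Array, _ref: Uint8Array): UIElement | undefined =>
  undefined;
const byImage = (reference: Uint8Array): Locator => ({
  find: (s) => matchTemplate(s.screenshot, reference),
});

// Try strategies in order so a test survives when one signal disappears.
function resolve(screen: Screen, ...locators: Locator[]): UIElement {
  for (const locator of locators) {
    const hit = locator.find(screen);
    if (hit) return hit;
  }
  throw new Error("No recognition strategy matched the target element");
}
```

Tools differ in which strategies they layer and how they prioritize them, which is exactly what a comparison of enterprise solutions comes down to.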

We're dropping something BIG at SmartBear!

AI has transformed software development, dramatically increasing velocity. The challenge now is maintaining quality at that speed, and engineering leaders across the industry are searching for a real answer. On March 18, we’re unveiling our solution. Join our livestream for an exclusive product reveal featuring special guest John Romero, a legend in the industry and the perfect voice to help us unveil what we’ve been building.

Meet SmartBear BearQ - QA for the Age of AI

AI revolutionized coding, but software testing hasn’t caught up. Until now. Meet BearQ: QA built for the age of AI. BearQ introduces a new paradigm of autonomous, agentic quality assurance. Instead of static scripts and brittle frameworks, BearQ’s specialized AI agents – the QA Lead Agent, Tester Agent, and Explorer Agent – work continuously. Testing was once a static checkpoint; now it’s a living, learning system that ensures application integrity.

Application integrity: The new standard for AI-era software quality

Over the past few years, we’ve watched coding velocity accelerate at an extraordinary pace. AI has completely disrupted how developers build software. Agentic tools can now generate clean code faster than ever before. While AI has turbocharged code generation, code review, and code-level testing, it’s created a massive strain on the rest of the software development lifecycle.

SmartBear Application Integrity Core | Redefining software quality for the AI era

Agent-powered code generation is happening at unprecedented speed, creating a growing gap between development velocity and your ability to validate what's being built. This leaves organizations unsure if their applications are doing what's intended or missing what's required. That's why SmartBear delivers application integrity for the AI era – ensuring continuous, measurable assurance that your software just works as intended, with governance to operate at AI speed and scale.

Best AI test automation tools for fast, high-quality releases

The promise of test automation was simple: automate repetitive testing tasks, catch bugs faster, and ship quality software at scale. Yet for most development teams, that promise remains unfulfilled. Traditional test automation frameworks demand specialized coding skills, require constant maintenance when applications change, and create bottlenecks that slow down release cycles rather than accelerate them.

Designing error models in OpenAPI for agent-safe APIs | Swagger Studio

Poorly documented or inconsistent error models lead to brittle clients and unreliable automation. Whether you're building APIs for human developers or AI agents, proper error handling is crucial for automation and reliability. In this guided tutorial, SmartBear Solutions Engineer Rosemary Charnley demonstrates how to design robust error models in OpenAPI specifications using Swagger Studio.
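
For a flavor of what such a model can look like, here’s a minimal sketch of a reusable RFC 7807-style error schema in an OpenAPI 3 components section, written as a TypeScript object literal. The exact fields are one common convention, not necessarily the model built in the tutorial:

```ts
// A reusable "problem details" error schema plus a shared error response,
// as the components section of an OpenAPI 3 document.
const components = {
  schemas: {
    Problem: {
      type: "object",
      required: ["type", "title", "status"],
      properties: {
        type:   { type: "string", format: "uri", description: "Error category URI" },
        title:  { type: "string", description: "Short, human-readable summary" },
        status: { type: "integer", description: "HTTP status code" },
        detail: { type: "string", description: "Instance-specific explanation" },
      },
    },
  },
  responses: {
    NotFound: {
      description: "The requested resource does not exist",
      content: {
        "application/problem+json": {
          schema: { $ref: "#/components/schemas/Problem" },
        },
      },
    },
  },
} as const;
```

Every operation can then point its 4xx/5xx responses at refs like `#/components/responses/NotFound`, so human clients and AI agents alike see one predictable error shape.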

Connect API design, testing, and governance in one workflow | Swagger

API design, functional testing, and governance shouldn’t live in silos. In this demo, Product Owner Wojciech Nowacki walks through a practical, end-to-end workflow that connects all three. You’ll see how API definitions created in Studio feed directly into automated functional testing, ensuring style compliance, functional correctness, and governance checks across the full API lifecycle. Perfect for API platform teams, architects, and developers looking to unify design and test automation.

Best tool for AI-powered automated testing: Reflect vs. ACCELQ

If you’re shipping multiple releases weekly and your team is drowning in test maintenance, you’ve likely discovered the painful truth about traditional automation: code-heavy frameworks break faster than your developers can ship features. Every CSS class rename triggers test failures. Every component refactoring creates maintenance sprints.
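
The maintenance pain is easy to reproduce in any code-based framework. Here’s a Playwright-style sketch contrasting a brittle, style-coupled selector with a more resilient role-based locator; this is an illustration of the failure mode, not either product’s approach, and the URL and labels are placeholders:

```ts
import { test, expect } from "@playwright/test";

test("submit order", async ({ page }) => {
  await page.goto("https://example.com/checkout"); // placeholder URL

  // Brittle: tied to a styling class, breaks on any CSS rename.
  // await page.click(".btn-primary-v2");

  // More resilient: tied to the element's role and visible label,
  // which survive most refactors.
  await page.getByRole("button", { name: "Place order" }).click();
  await expect(page.getByText("Order confirmed")).toBeVisible();
});
```

Role- and label-based locators soften the problem but still need hand maintenance as workflows change, which is the gap AI-powered tools aim to close.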

How to make APIs AI-ready | Automating reviews with Swagger Studio & Spectral

As AI agents increasingly interact with APIs, design clarity and structured metadata matter more than ever. In this focused demo, Senior Solutions Engineer Mairtín Conneely takes us through how to use Spectral rulesets in Swagger Studio to automatically enforce AI-ready API design standards across your OpenAPI definitions. This video covers:

- What “AI-ready” API design means
- Creating custom Spectral rules
- Importing governance rules into Swagger Studio
- Running automated AI-readiness checks
- Scaling API quality with governance automation
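
For a sense of what a custom rule looks like, here’s a minimal Spectral ruleset in its JavaScript/TypeScript format. The rule itself is an invented example, not one from the video:

```ts
import { truthy } from "@stoplight/spectral-functions";

export default {
  rules: {
    // Example rule: agents plan calls from descriptions, so require them.
    "operation-must-have-description": {
      description: "Every operation needs a description an AI agent can act on.",
      given: "$.paths.*[get,put,post,delete,options,head,patch,trace]",
      severity: "error",
      then: { field: "description", function: truthy },
    },
  },
};
```

The video walks through importing rulesets like this into Swagger Studio so checks run automatically across all of your definitions.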