
Velocity can't come at the cost of quality

AI-generated code is flooding your pipelines. Your test automation debt is piling up. If this sounds familiar, you're not alone. Velocity can't come at the cost of quality. As AI transforms how we build software, API testing must evolve. Join Justin Collier, Senior Director, Product Management, and Yousaf Nabi, Developer Advocate, to explore the future of API testing in an AI-driven world.

Why the "tsunami of code" is breaking QA | From the Bear Cave Ep. 3

Recent SmartBear research shows that 70% of teams are already seeing quality degrade with AI-generated code, creating a real bottleneck in the software development lifecycle (SDLC). As output increases, QA teams are left choosing between delaying releases to validate changes or shipping faster with less confidence in what’s actually working. In this From the Bear Cave clip, SmartBear CEO Dan Faulkner and CMO Kelly Wenzel dig into a growing gap in modern software development: AI is accelerating code generation, but testing and quality validation aren’t scaling with it.

Complete beginner's guide to test automation | TestComplete

Learn how to get started with TestComplete in this comprehensive beginner's tutorial. TestComplete is an automated testing platform for desktop, web, and mobile applications – and this guide will help you create your first test in just minutes. This video is perfect for test automation engineers and developers new to automated testing. Whether you're testing desktop applications, web apps, or mobile interfaces, this tutorial covers the essential features every TestComplete user needs to know.

How to scale API standards across large teams | Swagger Studio

When multiple designers and teams contribute APIs, you face inconsistent schemas, divergent patterns, and broken assumptions. A "shift-left" approach to API standardization helps you catch issues early, automate compliance, and maintain quality without manual gating – making your API program truly scalable. In this video, SmartBear Senior Solution Engineer Joe Joyce demonstrates how to enforce consistent API standards across large development teams using Swagger Studio's governance, collaboration, and CI/CD integration features.
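As a rough sketch of what "automating compliance" can look like in practice, API standards are often codified as machine-checkable linting rules. The fragment below uses the format of Spectral, an open-source OpenAPI linter – shown purely as an illustration of the idea, not as Swagger Studio's own rule format – to require that every operation declare an `operationId`:

```yaml
# Illustrative Spectral-style ruleset (hypothetical example, not from the video):
# codifies one API standard so CI can enforce it instead of a manual review gate.
rules:
  operation-must-have-operationId:
    description: Every operation needs a stable operationId.
    given: "$.paths[*][get,put,post,delete,patch]"
    severity: error
    then:
      field: operationId
      function: truthy
```

Rules like this run in CI against every contributed API definition, so divergent patterns are flagged automatically before they reach consumers.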

Inside the SmartBear Roadmap: Delivering Application Integrity Across the SDLC

As software teams move faster across APIs, testing, and observability, keeping application integrity intact is harder than ever. Join SmartBear product leaders for a Now / Next / Later look at how we’re evolving our platform to help teams build, test, and operate software with confidence. What you’ll get from this session: a clear view of where SmartBear is headed and how these capabilities come together to help your teams scale quality alongside velocity across the SDLC.

How to Add Intent and Metadata to OpenAPI in Swagger Studio for AI Agents

Modern APIs aren’t just read by developers anymore; they’re also interpreted by tools and AI agents. In this video, Solutions Architect Joe Joyce walks through how to enrich an OpenAPI definition in Swagger Studio with meaningful metadata such as descriptions, summaries, operation IDs, tags, schemas, and examples. You’ll see step-by-step how these additions help tools and automated agents better understand API intent, purpose, and semantics. This turns your OpenAPI definition into a contract that scales beyond documentation.
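To make the idea concrete, here is a small illustrative fragment (the endpoint and values are hypothetical, not taken from the video) showing the kinds of OpenAPI metadata fields mentioned above – `operationId`, `summary`, `description`, `tags`, schemas, and examples – that help tools and AI agents infer an operation's intent:

```yaml
# Hypothetical OpenAPI 3 fragment enriched with intent metadata
paths:
  /orders/{orderId}:
    get:
      operationId: getOrderById          # stable, machine-readable name for tooling and agents
      summary: Retrieve a single order
      description: >
        Returns the full order record, including line items and status.
        Use this when the caller needs details for one order rather than a list.
      tags:
        - Orders
      parameters:
        - name: orderId
          in: path
          required: true
          schema:
            type: string
          example: "ord_12345"           # example value helps agents construct valid calls
      responses:
        "200":
          description: The requested order
```

Each field here is semantic, not cosmetic: an agent reading this definition can tell what the operation does, when to use it, and what a valid request looks like.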

AI test automation with full visibility | QMetry + Reflect integration

In this demo, you’ll see how Reflect and QMetry work together to connect automated testing with test management. Test execution from Reflect flows directly into QMetry, giving your team better visibility, reducing manual effort, and helping you move faster without losing control of quality. If you’re looking to scale testing while keeping everything organized and traceable, this integration is built for you.

Turn test data into release insights with AI | SmartBear MCP for Zephyr

Testing teams need to know if they’re ready for a release. Getting answers within Jira, however, often means jumping between multiple screens and reports. In this demo, see how you can query your test data with SmartBear MCP for Zephyr to get insights directly from your testing system of record – all from within AI tools like Copilot, Claude, or VS Code – so you can make faster, more informed release decisions.

BearQ Q&A recap: Top questions from SmartBear's live event

Asked a question in our BearQ livestream? We’ve got your answers. We received 100+ questions during the event and couldn’t get to all of them live, so we pulled together the most common ones and answered them here. In this video, we break down what BearQ can test, how it handles authentication and complex workflows, how the AI works behind the scenes, how it fits into your existing tools, and even how to get early access.

In case you missed it | Meet SmartBear BearQ + application integrity

Missed the live event? Here’s a quick look at what we unveiled. AI has fundamentally changed how applications are built, creating a growing gap between development velocity and your ability to validate what’s being built. That’s why SmartBear delivers application integrity for the AI era – ensuring continuous, measurable assurance that your software just works as intended, with governance to operate at AI speed and scale.