
SmartBear QMetry's AI-based test generation: Execute tests in minutes

In this video, you’ll discover how SmartBear QMetry's AI-powered test generation automatically transforms requirements into complete, executable test cases within minutes. Watch as we generate test cases from Jira, Rally, and Azure requirements, refine existing tests, and save your team hours of manual work.
Sponsored Post

What AI Has Never Seen: The Context Gap in Code Generation

Your AI coding assistant has read the entire internet. It knows every programming language, every framework, every best practice documented in Stack Overflow answers and GitHub repositories. It can generate a REST API handler in seconds that looks perfect: clean code, proper error handling, all the right patterns. But here's what it has never seen: your production traffic. The data in a real API request. A user filling out a form with malformed or incomplete data. AI is changing how we write and test code, but there's a fundamental gap between training data and production reality.

Silent Failures: Why AI Code Breaks in Production

You ship a small “safe” change on Friday. The diff is tiny, the tests are green, and the AI assistant was confident. An hour after deploy, your on-call channel lights up. A downstream service is rejecting responses that look fine in code review. Now you’re rolling back and rewriting a fix that would have been obvious if you had real traffic in the loop. This isn’t a hypothetical.

Are Your APIs Ready for AI? Preparing Your Landscape for Intelligent Consumption

Getting APIs to work with AI has become one of the major themes in the API space recently. And that’s not surprising, because APIs are at the core of an AI’s ability to reach out into the world: to access data and information, and to invoke commands and workflows in order to act. This was always what APIs were for, but in this article we dive a little deeper into what that evolution looks like and what it means for API governance and management.

What is Semantic Caching?

When we think of a typical API, part of a production-ready setup generally includes a cache. This cache allows repeated requests to be served without making the full roundtrip to the backend. But when it comes to AI applications powered by large language models, traditional caching falls short. This is because queries to an AI endpoint may look different in terms of how they are worded or phrased but actually mean the same thing semantically.
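The core idea can be sketched in a few lines: instead of keying the cache on the exact query string, store an embedding of each query and return a cached response when a new query lands close enough in embedding space. This is a minimal illustration, not a production design; the `embed` function here is a toy bag-of-words stand-in, where a real semantic cache would use a sentence-embedding model and a vector database, and the `SemanticCache` class and its 0.8 threshold are hypothetical choices for the example.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: a bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.entries = []  # list of (query embedding, cached response)

    def get(self, query: str):
        # Return the cached response for the most similar stored query,
        # but only if it clears the similarity threshold.
        q = embed(query)
        best = max(self.entries, key=lambda e: cosine(q, e[0]), default=None)
        if best is not None and cosine(q, best[0]) >= self.threshold:
            return best[1]
        return None

    def put(self, query: str, response: str):
        self.entries.append((embed(query), response))

cache = SemanticCache()
cache.put("what is the capital of France", "Paris")
# A differently worded but semantically similar query hits the cache:
print(cache.get("what is the capital of France please"))
# An unrelated query misses and would fall through to the LLM:
print(cache.get("how do I bake bread"))
```

The design trade-off is the threshold: set it too low and semantically different queries get wrong cached answers; set it too high and you lose most of the hit rate that makes the cache worthwhile.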

On-Prem Enterprise Alternatives to Cloud-Hosted AI Dev Tools | DreamFactory

This guide explains how enterprises can replace cloud-hosted AI developer tools with secure, on-prem alternatives. It covers architectures, governance, and selection criteria that meet compliance and performance goals. You will learn how teams stand up private code assistants, model gateways, vector search, and policy controls behind the firewall.

Reusing test cases with Call to Test | Zephyr

SmartBear Zephyr is the Jira-native test management and automation platform that empowers your team to deliver better software, faster. By creating test cases, linking them to user stories and requirements, and monitoring progress all within Jira, you can unify your testing and development efforts. This short video demonstrates Zephyr's “Call to Test” capability, which lets you reference and reuse test cases across multiple Jira projects, no matter the test case type.

FastAPI error handling: types, methods, and best practices

Errors and exceptions are inevitable in any software, and FastAPI applications are no exception. Errors can disrupt the normal flow of execution, expose sensitive information, and lead to a poor user experience. Hence, it is important to implement robust error-handling mechanisms in FastAPI applications. In this article, we will discuss the different types of FastAPI errors to help you understand their causes and effects.