
OpenAPI Schema Validation for AI

Schema validation helps AI agents interact with APIs accurately by enforcing strict rules for requests and responses. OpenAPI provides a clear, machine-readable contract for APIs, reducing errors and improving reliability. This approach guards against issues like ambiguous responses and schema drift, making agent behavior more predictable and data access easier to control.
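
As a concrete illustration of the idea (not code from the article), here is a minimal Python sketch that validates an agent-generated request body against the kind of JSON Schema you would find under an OpenAPI operation's requestBody. The schema, field names, and helper function are all hypothetical.

```python
# Minimal sketch: validate an agent-generated request body against the
# JSON Schema embedded in an OpenAPI operation. The schema and payload
# below are illustrative, not taken from any real API.
from jsonschema import validate, ValidationError

# Request-body schema as it might appear under an OpenAPI operation's
# requestBody -> content -> application/json -> schema
create_ticket_schema = {
    "type": "object",
    "required": ["title", "priority"],
    "additionalProperties": False,
    "properties": {
        "title": {"type": "string", "minLength": 1},
        "priority": {"type": "string", "enum": ["low", "medium", "high"]},
        "tags": {"type": "array", "items": {"type": "string"}},
    },
}

def validate_agent_request(payload: dict) -> list[str]:
    """Return a list of validation errors; empty means the payload conforms."""
    try:
        validate(instance=payload, schema=create_ticket_schema)
        return []
    except ValidationError as exc:
        # Surface the error path so the agent can repair the exact field.
        path = "/".join(map(str, exc.absolute_path)) or "<root>"
        return [f"{path}: {exc.message}"]

# An agent payload with a value outside the enum is rejected before it
# ever reaches the backing API.
print(validate_agent_request({"title": "Disk alert", "priority": "urgent"}))
```

Rejecting the malformed call at the contract boundary, with a field-level error the agent can act on, is the behavior the schema-validation approach is after.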

Run Local LLMs on Mac to Cut Claude Costs

Part of the motivation for this post is how cloud API economics are shifting: Anthropic is moving large enterprise customers toward per-token, usage-based billing (unbundled from flat seat fees), which makes “always call the API” a moving cost line for teams at scale. A hybrid or local layer is one way to keep spend bounded while you still use premium models where they matter.
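
To make the hybrid layer concrete, here is a minimal routing sketch. It assumes a local model served by Ollama's default HTTP endpoint on the Mac and leaves the premium cloud call as a stub; the length of the sketch's heuristic and the model name are illustrative choices, not recommendations from the post.

```python
# Minimal sketch of a hybrid routing layer: cheap/simple prompts go to a
# local model (assumed here to be served by Ollama at its default port),
# everything else escalates to a paid cloud API. cloud_call() is a stub.
import requests

LOCAL_URL = "http://localhost:11434/api/generate"   # Ollama default endpoint
LOCAL_MODEL = "llama3.2"                             # assumes this model is pulled locally

def local_call(prompt: str) -> str:
    resp = requests.post(
        LOCAL_URL,
        json={"model": LOCAL_MODEL, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

def cloud_call(prompt: str) -> str:
    # Placeholder for a premium-model call (e.g. a cloud Messages API);
    # omitted so the sketch stays self-contained.
    raise NotImplementedError("wire this to your cloud provider's SDK")

def route(prompt: str, needs_premium: bool = False) -> str:
    # Keep spend bounded: only escalate to the metered cloud model when the
    # caller explicitly asks for it (or your own heuristic says so).
    if needs_premium:
        return cloud_call(prompt)
    return local_call(prompt)

if __name__ == "__main__":
    print(route("Summarize: the build failed because of a missing env var."))
```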

DreamFactory 7.5.0 Release: GitHub-Connected AI Agents, a Platform-Wide Security Hardening Pass, and a Smoother MCP Authoring Experience

DreamFactory 7.5.0 is focused on the two fastest-growing audiences in our user base: teams wiring LLM agents to production databases through MCP, and the security and platform teams hardening those deployments for real-world traffic.

Introducing Kong A2A and MCP Metrics: Visibility and Governance for AI Tool Adoption at Scale

Scaling LLM and agentic AI adoption from pilot programs to enterprise-wide deployments is a massive logistical undertaking. As AI and agentic usage grow, so does a nagging question for leadership: **Are agents using the right tools to get the job done?** While raw infrastructure metrics might tell you if a server is "up," they fail to tell you whether your AI investment is being leveraged.

10 Ways to Optimize API Performance Testing for Faster, More Reliable Results (2026 Guide)

Many teams dedicate time and resources to API performance testing, yet still face sluggish releases and delayed deployments. Incidents slip through, and users encounter slow applications. The root cause? Too often, teams treat performance testing as a checkbox, without truly simulating real-world loads or analyzing key performance metrics such as latency, throughput, and error rates. This leads to a false sense of readiness that quickly unravels in production environments.
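
As a rough illustration of what analyzing those key metrics can look like in practice, here is a minimal load-generation sketch that reports latency percentiles, throughput, and error rate. The target URL, request count, and concurrency are placeholders, not recommendations from the guide.

```python
# Minimal sketch: fire a fixed number of concurrent requests at one endpoint
# and report latency percentiles, throughput, and error rate. The URL and
# the load shape (200 requests, 20 in flight) are placeholders.
import asyncio
import statistics
import time

import aiohttp

URL = "https://httpbin.org/get"   # placeholder target
TOTAL_REQUESTS = 200
CONCURRENCY = 20

async def one_request(session: aiohttp.ClientSession, latencies: list, errors: list) -> None:
    start = time.perf_counter()
    try:
        async with session.get(URL) as resp:
            await resp.read()
            if resp.status >= 400:
                errors.append(resp.status)
    except aiohttp.ClientError as exc:
        errors.append(str(exc))
    finally:
        latencies.append(time.perf_counter() - start)

async def run() -> None:
    latencies: list[float] = []
    errors: list = []
    sem = asyncio.Semaphore(CONCURRENCY)

    async def bounded(session: aiohttp.ClientSession) -> None:
        async with sem:
            await one_request(session, latencies, errors)

    wall_start = time.perf_counter()
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(*(bounded(session) for _ in range(TOTAL_REQUESTS)))
    elapsed = time.perf_counter() - wall_start

    latencies.sort()
    p95 = latencies[int(len(latencies) * 0.95) - 1]
    print(f"median latency: {statistics.median(latencies) * 1000:.1f} ms")
    print(f"p95 latency:    {p95 * 1000:.1f} ms")
    print(f"throughput:     {TOTAL_REQUESTS / elapsed:.1f} req/s")
    print(f"error rate:     {len(errors) / TOTAL_REQUESTS:.1%}")

if __name__ == "__main__":
    asyncio.run(run())
```

Even a small script like this surfaces the difference between "the endpoint responds" and "the endpoint holds up under realistic concurrency."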

Why the "tsunami of code" is breaking QA | From the Bear Cave Ep. 3

Recent SmartBear research shows that 70% of teams are already seeing quality degrade with AI-generated code, creating a real bottleneck in the software-development lifecycle (SDLC). As output increases, QA teams are left choosing between delaying releases to validate changes or shipping faster with less confidence in what’s actually working. In this From the Bear Cave clip, SmartBear CEO Dan Faulkner and CMO Kelly Wenzel dig into a growing gap in modern software development: AI is accelerating code generation, but testing and quality validation aren’t scaling with it.

Velocity can't come at the cost of quality

AI-generated code is flooding your pipelines. Your test automation debt is piling up. If this sounds familiar, you're not alone. Velocity can't come at the cost of quality. As AI transforms how we build software, API testing must evolve. Join Justin Collier, Senior Director, Product Management, and Yousaf Nabi, Developer Advocate, to explore the future of API testing in an AI-driven world.

MCP in Production: Governing Agentic API Consumption | DeveloperWeek

As AI agents begin interacting with APIs, traditional API governance models need to evolve. In this DeveloperWeek session, Derric Gilling (WSO2) explains how organizations can manage and secure agent-driven API consumption using the Model Context Protocol (MCP). Unlike applications driven by human users, AI agents can generate large volumes of API calls from a single prompt. Without proper controls, this can lead to unexpected costs, security risks, and limited visibility into how APIs are being used.
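
One way to picture such a control (an illustrative sketch, not MCP's or any gateway's actual mechanism) is a per-agent token bucket that caps how many downstream calls a single agent identity can fan out into; the limits and the agent ID key below are hypothetical.

```python
# Minimal sketch of one governance control: a per-agent token bucket that
# caps how many downstream API calls a single agent identity can make per
# second. The limits are illustrative; real gateways implement this
# differently (and usually distributed).
import time
from dataclasses import dataclass, field

@dataclass
class TokenBucket:
    rate: float            # tokens refilled per second
    capacity: float        # burst size
    tokens: float = 0.0
    last_refill: float = field(default_factory=time.monotonic)

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to the time elapsed since the last check.
        self.tokens = min(self.capacity, self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# One bucket per agent identity, created lazily on first use.
buckets: dict[str, TokenBucket] = {}

def agent_call_allowed(agent_id: str, rate: float = 5.0, burst: float = 10.0) -> bool:
    bucket = buckets.setdefault(agent_id, TokenBucket(rate=rate, capacity=burst, tokens=burst))
    return bucket.allow()

# A single prompt that fans out into 25 tool calls gets throttled after the burst.
allowed = sum(agent_call_allowed("agent-42") for _ in range(25))
print(f"{allowed} of 25 calls allowed immediately")
```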

Git review for TestComplete projects

Teams using TestComplete face a common problem: one small test change can produce a wide set of modified files, and not all of them deserve the same level of scrutiny. The fix is not to review everything equally – it is to classify TestComplete artifacts by risk, then standardize how your team reviews, stages, and merges them. This article outlines that process and offers best practices for using Git effectively with TestComplete projects, along the lines of the sketch below.
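
As a sketch of what risk-based classification might look like in practice, a small script can bucket the output of `git diff --name-only` into review tiers. The extension-to-tier mapping below is a placeholder, not TestComplete's actual artifact list; adjust it to the file types your projects produce.

```python
# Minimal sketch: bucket changed files from `git diff --name-only` into
# review tiers so high-risk artifacts get a full review while regenerated
# or low-risk files get a lighter pass. The extension-to-tier mapping is
# a hypothetical placeholder.
import subprocess
from collections import defaultdict

# Placeholder mapping: treat hand-written scripts as high risk, shared
# mapping/config files as medium, everything else as low.
TIERS = {
    "high":   {".js", ".py", ".vbs"},   # script units (hypothetical set)
    "medium": {".xml", ".props"},        # mappings / settings (hypothetical set)
}

def tier_for(path: str) -> str:
    suffix = "." + path.rsplit(".", 1)[-1].lower() if "." in path else ""
    for tier, extensions in TIERS.items():
        if suffix in extensions:
            return tier
    return "low"

def classify_diff(base: str = "origin/main") -> dict[str, list[str]]:
    changed = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    buckets: dict[str, list[str]] = defaultdict(list)
    for path in changed:
        buckets[tier_for(path)].append(path)
    return buckets

if __name__ == "__main__":
    for tier, files in classify_diff().items():
        print(f"{tier}: {len(files)} file(s)")
        for f in files:
            print(f"  {f}")
```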