Systems | Development | Analytics | API | Testing

Is OpenTelemetry overkill? There's a lazier (and better) way. #speedscale #sre #ebpf #kubernetes

If you "aspire to be lazy" like we do, you know that building staging environments and mocking complex back-ends (like MySQL, AI models, and 3rd party APIs) is a massive time sink. In this demo, we show you how to use Internet Magic (aka eBPF) to: Stay tuned for Part 2, where we take these recordings and spin up a staging environment automatically.

AI test automation with full visibility | Qmetry + Reflect integration

In this demo, you’ll see how Reflect and QMetry work together to connect automated testing with test management. In this short walkthrough, test execution from Reflect flows directly into QMetry, giving your team better visibility, reducing manual effort, and helping you move faster without losing control of quality. If you’re looking to scale testing while keeping everything organized and traceable, this integration is built for you.

Turn test data into release insights with AI | SmartBear MCP for Zephyr

Testing teams need to know if they’re ready for a release. Getting answers within Jira, however, often means jumping between multiple screens and reports. In this demo, see how you can query your test data with SmartBear MCP for Zephyr to get insights directly from your testing system of record, so you can make faster, more informed release decisions. From within AI tools like Copilot, Claude, or VS Code, you’ll learn how you can.

AI Coding Agents Break What Works

Your AI coding agent just made every test pass. Ship it, right? Not so fast. A growing class of AI-generated bugs doesn’t come from writing bad code. It comes from the AI changing working code to accommodate its own mistakes. This isn’t a theoretical risk. It’s happening now, in production codebases, and it’s harder to catch than any bug the AI might introduce from scratch.

Policy-Driven APIs for AI: Best Practices | DreamFactory

Before rolling out policy-driven APIs, it's crucial to have a governance framework in place. This framework should clearly outline who makes decisions, how approvals work, and how exceptions are handled. Interestingly, while 71% of organizations claim to have data governance programs, only 25% actually put them into practice. Even fewer - just 28% - have enterprise-wide oversight for AI governance roles and responsibilities.

DreamFactory 7.4.5 Release: MCP Aggregate Data Tool, Cursor IDE Support, and Production Stability

DreamFactory 7.4.5 ships the aggregate_data MCP tool — a purpose-built tool that lets AI agents compute SUM, COUNT, AVG , MIN, and MAX directly on the database server in a single call. This release also adds Cursor IDE OAuth compatibility, a desktop OAuth success page for smoother onboarding, server-side aggregate expression support across all SQL connectors, and critical MCP daemon stability improvements including request timeout guards and global error handlers.

Create tests in Reflect directly from your coding agent!

If you’ve used Claude Code, GitHub Copilot, Cursor, or any coding agent, you already know the feeling. You describe what you want in plain language, the agent figures out the steps, and you watch it work. When something goes wrong, it backs up and tries a different approach. Reflect now brings that same agentic workflow to test automation. Through the SmartBear MCP server, any coding agent that supports MCP can connect to Reflect and build tests from high-level objectives.

From Microservices to AI Traffic: Kong's Unified Control Plane When Architecture Gets Complicated

Modern enterprise architecture faces a three-body problem. Three distinct traffic patterns pull your teams in different directions. External APIs serve mobile apps and partner integrations. Internal microservices communicate within Kubernetes clusters. AI and LLM calls flow to OpenAI, AWS Bedrock, and self-hosted models. Each pattern looks API-like on the surface. Yet many organizations handle them with separate tools. The result?

Real Device Access API - Product Demo

Building Internal Developer Tools with a Device Lab API: Sessions, Streaming, Logs, and Automation For years, platform teams have had to choose between costly internal device labs for control or public clouds with limited access. That tradeoff ends with the Real Device Access API, the first solution to treat mobile devices as Infrastructure-as-Code—delivering direct, low-latency access to real devices without framework constraints. See how teams can retire internal racks while running any workflow on fully managed infrastructure they control programmatically.