
Supercharge your LLM Using Production Data Context

Are your LLM coding agents (like Cursor or Claude Code) hallucinating fixes because they don't know what's actually happening in production? In this video, Matt from Speedscale shows you how to bridge the gap between your local IDE and live production traffic using the Model Context Protocol (MCP). Most observability tools just give you telemetry. Speedscale’s MCP server gives your agent the "inner workings" of actual API calls and payloads, so it can check its assumptions against reality. No more "vibe-coding" and hoping it works; let your agent find the 500 errors and rate limits for you.
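As a rough sketch of what that wiring looks like, most MCP-aware clients (Cursor, Claude Desktop) read a JSON config with an `mcpServers` entry pointing at a local server process. The server name and command below are hypothetical placeholders for illustration, not Speedscale's actual binary or flags:

```json
{
  "mcpServers": {
    "production-traffic": {
      "command": "example-mcp-server",
      "args": ["--source", "recorded-traffic"]
    }
  }
}
```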

How do you plan to test 10x more code with the same old tools?

"You can’t test 10x more code with the same old tools. As AI dramatically increases code volume and velocity, traditional testing becomes a bottleneck. Teams need AI embedded across the entire testing lifecycle to scale testing, boost productivity, and keep releases moving fast without sacrificing quality." — Alex Martins, VP of Strategy at Katalon

Follow Katalon for more insights in our series!

Let Your LLM Debug Using Production Recordings

Modern LLM coding agents are great at reading code, but they still make assumptions. When something breaks in production, those assumptions can slow you down—especially when the real issue lives in live traffic, API responses, or database behavior. In this post, I’ll walk through how to connect an MCP server to your LLM coding assistant so it can pull real production data on demand, validate its assumptions, and help you debug faster.
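To make the idea concrete, here is a minimal, self-contained sketch of the shape of an MCP-style tool server: the coding agent sends JSON-RPC requests, and the server answers from a store of recorded production traffic. The tool name `find_failing_calls`, the traffic records, and the field names are all hypothetical illustrations, not Speedscale's actual API:

```python
# Illustrative sketch of an MCP-style tool handler. A real MCP server
# speaks JSON-RPC over stdio; this sketch shows only the dispatch logic.
import json

# Stand-in for a store of recorded production API calls (hypothetical data).
RECORDED_TRAFFIC = [
    {"method": "GET", "path": "/api/orders", "status": 200, "latency_ms": 42},
    {"method": "POST", "path": "/api/orders", "status": 500, "latency_ms": 310},
    {"method": "GET", "path": "/api/users", "status": 429, "latency_ms": 12},
]

def handle_request(request: dict) -> dict:
    """Dispatch a JSON-RPC tools/call request and wrap the result."""
    if request.get("method") == "tools/call":
        name = request["params"]["name"]
        if name == "find_failing_calls":
            # Surface 4xx/5xx responses so the agent can check its
            # assumptions against what actually happened in production.
            failures = [t for t in RECORDED_TRAFFIC if t["status"] >= 400]
            return {
                "jsonrpc": "2.0",
                "id": request["id"],
                "result": {
                    "content": [{"type": "text", "text": json.dumps(failures)}]
                },
            }
    return {
        "jsonrpc": "2.0",
        "id": request.get("id"),
        "error": {"code": -32601, "message": "Method not found"},
    }

if __name__ == "__main__":
    req = {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
           "params": {"name": "find_failing_calls"}}
    resp = handle_request(req)
    print(resp["result"]["content"][0]["text"])
```

Asked "why is order creation failing?", an agent with this tool would discover the recorded 500 on `POST /api/orders` instead of guessing from the code alone.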

AI Virtual Health Assistants: The Future of Remote Patient Monitoring

The healthcare industry stands on the threshold of a revolution, shifting away from reactive, hospital-centric care towards proactive, personalised, and remote management. At the heart of this transformation is Remote Patient Monitoring (RPM), a system that uses connected digital health technologies to gather and transmit patient physiological data outside of traditional clinical settings. While RPM offers tremendous value, it generates a massive, continuous stream of data.

Securing LLMs: Insights into OWASP Top 10 | Maryia Tuleika | TTTribeCast Webinar

AI can feel like a black box, but when it is tested like any other system, unexpected weaknesses begin to surface. This session explores how large language models can be pushed into unsafe or unintended behavior, revealing that AI is not immune to flaws, poor decisions, or broken assumptions.

Implementing AI in the Playwright Framework | Muralidharan R. | TTTribeCast Webinar

Implementing the Playwright Framework with AI explores how to enhance test automation by integrating AI capabilities. This approach minimises code writing, eliminates traditional locator strategies, and ensures low test maintenance. The session will demonstrate how AI-driven solutions can simplify and optimise test automation, making it more efficient and scalable.

Revolutionizing Web Testing with Playwright & Gen AI | Vignesh Srinivasa Raghavan

Discover the groundbreaking convergence of Playwright and Generative AI that's transforming how web testing is approached. This session unveils a revolutionary capability where AI assists in automatically generating test scripts without requiring manual coding. Experience how this intelligent automation system can analyze your application, understand user flows, and create comprehensive test suites with minimal human intervention.