
Playwright MCP: Turn Natural Language into Reliable Tests in Minutes | Vignesh Srinivasa Raghavan

Model Context Protocol (MCP) lets AI agents use real tools safely. In this talk, we’ll see how Playwright MCP bridges agents and a real browser by leveraging the accessibility tree (not screenshots) to navigate pages, locate elements, perform actions, and extract data—then export stable Playwright tests you can commit.
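The accessibility-tree approach can be sketched in miniature: instead of matching pixels in a screenshot, the agent walks a tree of role/name nodes to locate an element. The toy node structure below is hypothetical and is not Playwright MCP's actual internal format:

```python
# Toy sketch: locating an element in an accessibility tree by role and name,
# the way an agent might, rather than by matching pixels in a screenshot.
# The node structure here is hypothetical, not Playwright MCP's real format.

def find_by_role(node, role, name=None):
    """Depth-first search for the first node matching role (and name, if given)."""
    if node.get("role") == role and (name is None or node.get("name") == name):
        return node
    for child in node.get("children", []):
        hit = find_by_role(child, role, name)
        if hit is not None:
            return hit
    return None

page_tree = {
    "role": "document", "name": "Checkout",
    "children": [
        {"role": "textbox", "name": "Email"},
        {"role": "button", "name": "Place order"},
    ],
}

target = find_by_role(page_tree, "button", "Place order")
print(target["name"])  # -> Place order
```

Because roles and accessible names are far more stable than pixel positions, locators derived this way survive visual redesigns, which is what makes the exported tests committable.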

Measuring the Impact of AI in QA and Automation | Jaydeep Chakrabarty | Testflix 2025

In this fireside chat with Jaydeep, we’ll dive into how AI is changing the way we measure success in both QA processes and live generative AI bots. On the QA side, we’ll look at cycle time reduction—the “time goalie” metric that shows how quickly we move from discovering a bug to fixing it. We’ll also talk about predictive quality accuracy, which shifts QA from being reactive to proactive by predicting which code changes are most likely to introduce bugs. And of course, we’ll touch on test creation velocity—how much faster teams are able to create meaningful automation with AI’s support.
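The cycle-time metric described here reduces to timestamp arithmetic over bug records, from discovery to fix. A minimal sketch, with illustrative field names and data rather than any specific QA tool's schema:

```python
from datetime import datetime

# Minimal sketch of the discovery-to-fix cycle-time metric.
# Field names and data are illustrative, not from any specific QA tool.
bugs = [
    {"found": datetime(2025, 3, 1, 9, 0), "fixed": datetime(2025, 3, 2, 9, 0)},
    {"found": datetime(2025, 3, 1, 9, 0), "fixed": datetime(2025, 3, 4, 9, 0)},
]

def mean_cycle_time_hours(records):
    """Average hours from bug discovery to fix across a set of records."""
    deltas = [(b["fixed"] - b["found"]).total_seconds() / 3600 for b in records]
    return sum(deltas) / len(deltas)

print(mean_cycle_time_hours(bugs))  # -> 48.0
```

Tracking this number per sprint is what turns "are we getting faster?" into a trend line rather than a gut feeling.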

Can you share what challenge the customer was facing before finding Katalon?

Many customers were struggling with BDD-style test processes — writing structured, English-language scripts that business and tech teams could agree on but couldn’t easily automate. Before Katalon, they lacked a smooth way to turn those scripts or raw requirements into automated tests, which Katalon (and AI features like Studio Assist) finally made efficient. — Coty Rosenblath, Chief Technology Officer at Katalon.
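The gap described here, structured English steps with no path to automation, is at its core a step-to-code binding problem. A deliberately minimal sketch of such a binding layer follows; it is hypothetical and stdlib-only, and real BDD frameworks (Cucumber, behave, Katalon's BDD support) do far more:

```python
import re

# Minimal sketch of binding Gherkin-style English steps to code.
# Hypothetical illustration only; real BDD tooling handles much more.
STEPS = []

def step(pattern):
    """Decorator registering a regex pattern -> handler binding."""
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

@step(r'the user enters "(.+)" as the username')
def enter_username(ctx, value):
    ctx["username"] = value

@step(r"the user submits the form")
def submit(ctx):
    ctx["submitted"] = True

def run_scenario(lines):
    """Match each English step to a binding and execute it in order."""
    ctx = {}
    for line in lines:
        for pattern, fn in STEPS:
            m = pattern.fullmatch(line)
            if m:
                fn(ctx, *m.groups())
                break
        else:
            raise ValueError(f"No step binding for: {line}")
    return ctx

result = run_scenario([
    'the user enters "alice" as the username',
    "the user submits the form",
])
print(result)  # -> {'username': 'alice', 'submitted': True}
```

The pain point in the quote is exactly the unmatched-step branch: English scripts that business teams agree on but for which no binding exists yet.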

Vibe Coding: Emergence, Impact & Future of AI-Driven Development | Andrew Knight | Testflix 2025

In this session, Andrew will trace Vibe Coding's journey—from emergence to current impact—exploring how it has prompted us to rethink development and testing. He'll examine today's tools, real-world use cases, and the cultural shifts teams need to embrace this AI-driven approach. Andrew will share hot takes on myths versus reality and deliver practical advice for getting started. This video is one of the sessions presented at Testflix, the world's leading virtual software testing conference.

More code, more bugs, same team. So what's your plan?

The plan is to test earlier and faster to keep up with AI-generated code. By using AI-assisted, in-sprint testing and shift-left strategies, teams can catch issues sooner, scale testing with the same team, and maintain quality despite higher code volume. — Alex Martins, VP of Strategy at Katalon. Follow Katalon for more insights in our series!

DLP: The Key to Secure K8s Testing #speedscale #dlp #kubernetes #devops #testing

Testing with production traffic doesn't have to be a security risk. Engineers often avoid production data because of sensitive info like passwords, tokens, and PII. But legacy test data management is too static for modern, fast-changing payloads. Enter the Speedscale Streaming DLP Engine. It automatically detects and redacts sensitive data in real time as it's captured from your environment. You get the realism of production traffic without the risk of a data breach.
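The redaction idea can be illustrated in a deliberately simplified form: pattern-based detection over a captured payload, with sensitive values masked before the traffic is reused in tests. This is a toy pure-Python sketch, not Speedscale's DLP engine, and real detection covers far more than a key list and one regex:

```python
import json
import re

# Toy sketch of streaming DLP-style redaction: detect sensitive fields in
# captured traffic and mask them before the payload is reused in tests.
# Key names and patterns are illustrative; this is NOT Speedscale's engine.
SENSITIVE_KEYS = {"password", "token", "ssn"}
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact(value):
    """Recursively mask sensitive keys and email-shaped strings in a payload."""
    if isinstance(value, dict):
        return {
            k: "***REDACTED***" if k.lower() in SENSITIVE_KEYS else redact(v)
            for k, v in value.items()
        }
    if isinstance(value, list):
        return [redact(v) for v in value]
    if isinstance(value, str):
        return EMAIL.sub("***REDACTED***", value)
    return value

captured = json.loads('{"user": "a@b.com", "password": "hunter2", "amount": 42}')
print(redact(captured))
# -> {'user': '***REDACTED***', 'password': '***REDACTED***', 'amount': 42}
```

The point of doing this in-stream at capture time, rather than in a static test-data store, is that the rules keep applying as payload shapes change.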

Open Lakehouse Meetup (ft. Apache Iceberg): Building Scalable Data Platforms

Discover the future of the Data Lakehouse with this deep dive into Apache Iceberg V3 and V4 from the Bengaluru community meetup. Learn how PyIceberg and DuckDB are revolutionizing Python-native data processing by eliminating the need for Spark clusters for 99% of common query sizes. Explore high-performance ingestion benchmarks from Oleg and the Google Dataproc Lightning Engine, achieving over 500k rows/sec through Apache Arrow and C++ vectorization. This session is a masterclass for data engineers on metadata compaction, REST catalogs, and building vendor-agnostic data platforms.

Evaluating AI Tools: Practical Framework for Testers & Leaders | Ajay Balamurugadas | Testflix 2025

The AI ecosystem is exploding with tools that promise to accelerate delivery, improve quality, and transform the way we work. Yet for many teams, evaluating these tools is overwhelming: flashy demos and marketing claims rarely answer the real questions. Will this work in our context? Can it scale? Is it sustainable?