Systems | Development | Analytics | API | Testing

Accelerating Agile with BDD. Practical Guide for Testers and Teams | Ashwini Lalit

BDD (Behavior-Driven Development) is an agile approach comprising three key practices: discovery, formulation, and automation. This methodology aims to improve software development by reducing ambiguities, enhancing collaboration, and creating living documentation. In BDD, acceptance tests stay stable because business rules change less than the UI. They can be written before the UI and describe business actions that guide development, serving as the application’s business vocabulary.

Making AI Work in Real Teams. Operationalizing AI Explained | Melissa Tondi

Let's talk about the real-world journey of operationalizing AI—what it looks like behind the scenes when you’re scaling solutions, building support systems, and doing it all with a lean team. There are plenty of brilliant AI experts—folks doing deep work, research, experimentation, and implementation. This session complements and focuses on how to operationalize, centralize and scale your team, organization and company!

How to Use GenAI to Build Load Test Scripts in Apache JMeter | Sandeep Garg

This demonstration shall suggest those baby steps that should encourage most of us (the testers) to inculcate the habit of exploring* GenAI and LLMs. The exploration shall help understand why, what, where, when and how we should use these rapidly moving technologies to bring efficiency and better thinking in our day-to-day testing work.

AI x Testing Leadership | Jaydeep Chakrabarty | Ask Me Anything

AI is not just changing how we test it's redefining how we lead. This high-impact AMA explores how testing leadership must evolve in an AI-first world. Whether you're managing a lean QA team or scaling quality across a large enterprise, the session offers frameworks and insight to help you lead, not just adapt, through transformation. What you'll take away: About Jaydeep Chakrabarty.

Ask Me Anything with Harinee Muralinath

AI tools are getting smarter, and many of them now have access to more than you realize. From test scripts to config files to internal APIs, what starts as "just a helper" can become a hidden risk. In this AMA, Harinee will share where things go wrong when tools are trusted too quickly. We’ll look at real examples, permission pitfalls, and ways to keep your systems safer. If you’re using AI in testing or automation, this session will help you ask better questions. Bring your curiosity and your concerns. Let’s talk about what your tools can do, and what they shouldn't.

A Shifting Left Success Story | David Ingraham | TTTribeCast Webinar

A Shifting Left Success Story” takes you inside a real-world transformation where test automation was intentionally moved earlier in the development lifecycle — with measurable and lasting impact. This session unpacks the how, why, and key lessons learned from embedding Shift Left practices within a cross-functional team. You’ll discover what made the approach successful, where challenges emerged, and how a thoughtful Shift Left strategy can dramatically improve code quality, shorten feedback loops, and build greater trust between developers, testers, and product stakeholders.

Agentic AI: From Reactive Bots to Autonomous Digital CoWorkers | Toni Ramchandani

Agentic AI marks the next evolutionary leap in artificial intelligence - systems that don’t just answer prompts or generate content, but plan, decide, and act on our behalf with minimal oversight. In this webinar, we’ll demystify what “agentic” really means, trace the shift from single‑step chatbots to multi‑step autonomous agents, and explore the architectures—sense‑plan‑act loops, large‑language‑model reasoning layers, and tool‑integrations—that make true agency possible.

Securing LLMs: Insights into OWASP Top 10 | Maryia Tuleika | TTTribeCast Webinar

AI can feel like a black box, but when it is tested like any other system, unexpected weaknesses begin to surface. This session explores how large language models can be pushed into unsafe or unintended behavior, revealing that AI is not immune to flaws, poor decisions, or broken assumptions.

Implementing AI in the Playwright Framework | Muralidharan R. | TTTribeCast Webinar

Implementing the Playwright Framework with AI explores how to enhance test automation by integrating AI capabilities. This approach minimises code writings, eliminates traditional locator strategies, and ensures low test maintenance. The session will demonstrate how AI-driven solutions can simplify and optimise test automation, making it more efficient and scalable.

API Testing With Cypress | Kristin Jackvony | TTTribeCast Webinar

API tests are faster and more reliable than UI tests. So why aren’t more testers using them? In this session, we’ll learn what kinds of API tests to run, and how to easily configure them in Cypress. Your API tests can live alongside your UI tests, providing valuable fast feedback for the quality of your application!

Revolutionizing Web Testing with Playwright & Gen AI | Vignesh Srinivasa Raghavan

Discover the groundbreaking convergence of Playwright and Generative AI that's transforming how web testing is approached. This session unveils a revolutionary capability where AI assists in automatically generating test scripts without requiring manual coding. Experience how this intelligent automation system can analyze your application, understand user flows, and create comprehensive test suites with minimal human intervention.

Panel Discussion - QA Leadership in the Age of AI: Talent, Strategy, and Choices | Testflix 2025

The rise of AI is transforming how software is developed and tested—but the hardest questions for QA leaders aren’t about tools. They’re about people, productivity, and long-term strategy. As organizations race to integrate AI, leaders must rethink how they hire, train, and guide teams while making tough decisions about whether to build custom solutions, buy off-the-shelf platforms, or take a more measured approach. This panel brings together leaders who are navigating these crossroads in real time.

Panel Discussion - AI in Automation: Accelerating Scripts and Execution | Testflix 2025

AI in automation is rapidly emerging as a powerful enabler for testers. From automation script generation utilities to simplifying API testing and framework development, these capabilities promise to accelerate productivity. But as testers embrace assistants like Copilot and Cursor, big questions emerge - What does this mean to the future of open-source frameworks like Selenium and Playwright? How should testers balance the speed of AI-generated code with the need for reliability and maintainability? ⁠How do we measure productivity gains from AI-paired programming? And AI doesn't stop at coding.

Measuring the Impact of AI in QA and Automation | Jaydeep Chakrabarty | Testflix 2025

In this fireside chat with Jaydeep, we’ll dive into how AI is changing the way we measure success in both QA processes and live generative AI bots. On the QA side, we’ll look at cycle time reduction—the “time goalie” metric that shows how quickly we move from discovering a bug to fixing it. We’ll also talk about predictive quality accuracy, which shifts QA from being reactive to proactive by predicting which code changes are most likely to introduce bugs. And of course, we’ll touch on test creation velocity—how much faster teams are able to create meaningful automation with AI’s support.

Playwright MCP: Turn Natural Language into Reliable Tests in Minutes | Vignesh Srinivasa Raghavan

Model Context Protocol (MCP) lets AI agents use real tools safely. In this talk, we’ll see how Playwright MCP bridges agents and a real browser by leveraging the accessibility tree (not screenshots) to navigate pages, locate elements, perform actions, and extract data—then export stable Playwright tests you can commit.

Evaluating AI Tools: Practical Framework for Testers & Leaders | Ajay Balamurugadas | Testflix 2025

The AI ecosystem is exploding with tools that promise to accelerate delivery, improve quality, and transform the way we work. Yet for many teams, evaluating these tools is overwhelming - flashy demos and marketing claims rarely answer the real questions: Will this work in our context? Can it scale? Is it sustainable?

Vibe Coding: Emergence, Impact & Future of AI-Driven Development | Andrew Knight | Testflix 2025

In this session, Andrew will trace Vibe Coding's journey—from emergence to current impact—exploring how it has got us into re-thinking development and testing. He'll examine today's tools, real-world use cases, and the cultural shifts teams need to embrace this AI-driven approach. Andrew will share hot takes on myths versus reality and deliver practical advice for getting started. This video is of one of the sessions presented at - World’s Leading Virtual Software Testing Conference.

Leading in the Post-AI Era: Insights for Testers & Teams | Pradeep Soundararajan | Testflix 2025

AI is leveling the field across organizations. Everyone, from leaders to individual contributors, is approaching work with a beginner’s mindset, questioning what AI can do, what it cannot, and how it affects business, customers, and roles. This shared uncertainty challenges traditional leadership models and raises a fundamental question. How does leadership evolve when no one has all the answers?

How Testers Can Partner with AI in Automation | Ronak Ray | Testflix 2025

AI is reshaping software testing, but real quality does not come from hype alone. This session explores where AI truly adds value in automation and where human testers remain essential. It focuses on building a practical partnership between AI and testers, where speed and scale are balanced with judgment, context, and responsibility.

How to Harness AI to Improve SDLC Quality | Mallika Fernandes | Testflix 2025

As software systems grow more complex and business cycles demand faster releases, traditional approaches to quality engineering are no longer enough. AI brings a new dimension to the Software Development Life Cycle—augmenting decision-making, predicting risks, automating quality checks, and continuously assuring value delivery. Armed with GenAI and Agents we can reshape how we think about software quality. This discussion explores practical ways to embed AI across the SDLC to accelerate delivery, reduce risk, and achieve a step-change in quality outcomes.

New Emerging Trends in the Quality Engineering Space with AI | Vanya Seth | Testflix 2025

Retrieval-Augmented Generation can sound convincing while still being wrong. This session focuses on moving beyond surface-level metrics and turning stochastic AI outputs into evidence-backed, verifiable results. It explores how to test the entire RAG pipeline, from ingestion and indexing to retrieval, grounding, and answerability, ensuring every response is traceable to the right source, policy, and user context.

The Longer You Wait, the More Expensive the Bug Becomes

In September 2015, CareFusion issued emergency Class 1 recalls for its Alaris Syringe pumps. The pump was supposedly programmed to administer scheduled medical infusions to patients. As per official reports, due to a software code error (leading to a malfunction), the infusion pump could (or might already) have wrongly administered the scheduled medication, putting patient lives at risk. In response, the company issued recalls, regulators got involved, and the reputation damage was immediate.

How to Test RAG Pipelines for Reliable AI | Aparana Gupta | Testflix 2025 | #testingcommunity

Retrieval-Augmented Generation can sound convincing while still being wrong. This session focuses on moving beyond surface-level metrics and turning stochastic AI outputs into evidence-backed, verifiable results. It explores how to test the entire RAG pipeline, from ingestion and indexing to retrieval, grounding, and answerability, ensuring every response is traceable to the right source, policy, and user context.

Private AI: Gains, Gaps and Gotchas | Samar Ranjan | Testflix2025 | #testingcommunity

Private AI is emerging as a strong alternative for teams that need the power of AI without compromising data privacy or compliance. This session explores how local LLMs can support software development and test automation when cloud-based tools are not an option. Using setups like Ollama with models such as Qwen 2.5 and integrations like the Continue plugin, the talk demonstrates how secure, on-device AI can accelerate tasks like BDD creation, automation scripting, and performance testing.

Breaking Boundaries: A Tester's Guide to Freelance and Remote Success | Manish Saini | Testflix2025

Freelancing is more than a side hustle. It can be a launchpad to global careers, higher earning potential, and exposure to diverse teams and practices. This session shows how testers can start with small freelance gigs to build experience and credibility, then scale into larger, long-term engagements by specializing in areas like automation, performance, or QA consulting.

QA and Software Testing Job Landscape in the USA | Júlio de Lima | Testflix2025 | #testingcommunity

This Atomic Talk is based on an analysis of more than 500 QA and software testing job openings across the United States. The session walks through the research process, the data collected, and the key trends revealed through clear graphs and insights from the study. By the end of the talk, attendees will have a strong understanding of the most in-demand testing skills, tools, programming languages, and automation technologies currently shaping the QA job market in the U.S., helping them make more informed career and upskilling decisions.

Bias in, Bias Out : Knowing various Biases in Testing AI | Maheshwaran VK | Testflix 2025

Just like humans, AI systems are shaped by how they are brought up. In the case of Large Language Models, this upbringing happens through data collection, training, and productization. At each of these stages, bias can quietly enter the system through the data we select, the way models are trained, or the assumptions embedded into the final product. These biases, whether intentional or accidental, influence how models think, respond, and interact with users in the real world.

Effective Public Speaking | Johanna Rothman | Testflix2025 | #testingcommunity

As AI becomes more capable, many managers assume that knowledge workers can be easily replaced by machines. Yet innovation still comes from people learning, collaborating, and sharing ideas. Rather than worrying about replacement, knowledge workers can actively demonstrate their value by developing strong public speaking skills.

Building Quality in LLM-Powered Applications | Craig Risi | Testflix2025 | #testingcommunity

As organizations rapidly adopt Large Language Models, many discover that building reliable and trustworthy AI systems is far more complex than traditional software development. LLMs are non-deterministic, context-sensitive, and prone to issues like bias, hallucinations, and prompt injection, making quality assurance a deeper challenge than simple testing.

Testing Agentic AI | Robert Sabourin | Testflix2025 | #testingcommunity

This talk explores the challenges of testing agentic AI systems—AI that autonomously reacts to events and initiates processes. Drawing on decades of experience, Robert Sabourin emphasizes that testing begins and ends with risk. A three-dimensional model (business impact, technical risk, autonomy) guides evaluation. Testers generate ideas using a broad taxonomy, from capabilities and failure modes to creative and adversarial approaches. Continuous testing and monitoring ensure findings inform business decisions, emphasizing learning over correctness.