
How to Harness AI to Improve SDLC Quality | Mallika Fernandes | Testflix 2025

As software systems grow more complex and business cycles demand faster releases, traditional approaches to quality engineering are no longer enough. AI brings a new dimension to the Software Development Life Cycle, augmenting decision-making, predicting risks, automating quality checks, and continuously assuring value delivery. Armed with GenAI and agents, we can reshape how we think about software quality. This discussion explores practical ways to embed AI across the SDLC to accelerate delivery, reduce risk, and achieve a step-change in quality outcomes.

New Emerging Trends in the Quality Engineering Space with AI | Vanya Seth | Testflix 2025

The Longer You Wait, the More Expensive the Bug Becomes

In September 2015, CareFusion issued an emergency Class 1 recall of its Alaris Syringe pumps, which are programmed to administer scheduled medication infusions to patients. According to official reports, a software error could cause the pump to malfunction and administer scheduled medication incorrectly, putting patient lives at risk. The company issued recalls, regulators got involved, and the reputational damage was immediate.

How to Test RAG Pipelines for Reliable AI | Aparana Gupta | Testflix 2025 | #testingcommunity

Retrieval-Augmented Generation can sound convincing while still being wrong. This session focuses on moving beyond surface-level metrics and turning stochastic AI outputs into evidence-backed, verifiable results. It explores how to test the entire RAG pipeline, from ingestion and indexing to retrieval, grounding, and answerability, ensuring every response is traceable to the right source, policy, and user context.
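One way to make "every response is traceable to the right source" concrete is a groundedness check: flag any answer sentence whose content is not supported by at least one retrieved chunk. The sketch below is illustrative only and not from the talk; the token-overlap heuristic and the 0.6 threshold are assumptions standing in for a real entailment or attribution model.

```python
import re

def tokenize(text):
    """Lowercase word tokens; a stand-in for a real tokenizer."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def grounded(sentence, chunks, threshold=0.6):
    """True if enough of the sentence's tokens appear in some retrieved chunk."""
    words = tokenize(sentence)
    if not words:
        return True
    return any(len(words & tokenize(c)) / len(words) >= threshold for c in chunks)

def ungrounded_sentences(answer, chunks, threshold=0.6):
    """Return the answer sentences that no retrieved chunk supports."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", answer.strip()) if s]
    return [s for s in sentences if not grounded(s, chunks, threshold)]
```

In a real pipeline test, a non-empty return value would fail the build: the model asserted something the retrieved evidence does not back.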

Private AI: Gains, Gaps and Gotchas | Samar Ranjan | Testflix 2025 | #testingcommunity

Private AI is emerging as a strong alternative for teams that need the power of AI without compromising data privacy or compliance. This session explores how local LLMs can support software development and test automation when cloud-based tools are not an option. Using setups like Ollama with models such as Qwen 2.5 and integrations like the Continue plugin, the talk demonstrates how secure, on-device AI can accelerate tasks like BDD creation, automation scripting, and performance testing.
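The kind of local setup the talk describes can be driven through Ollama's HTTP API, which by default listens on localhost:11434. This is a minimal sketch, assuming a running Ollama server with the `qwen2.5` model already pulled; the helper names are my own, not from the session.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(model, prompt):
    """Build the JSON payload Ollama's /api/generate endpoint expects."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model, prompt):
    """Send a prompt to a locally running Ollama server; return the response text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires `ollama pull qwen2.5` and a running server):
# print(generate("qwen2.5", "Write a Gherkin scenario for a login page."))
```

Because the model runs on-device, prompts and generated test artifacts never leave the machine, which is the compliance point the talk turns on.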

Breaking Boundaries: A Tester's Guide to Freelance and Remote Success | Manish Saini | Testflix 2025

Freelancing is more than a side hustle. It can be a launchpad to global careers, higher earning potential, and exposure to diverse teams and practices. This session shows how testers can start with small freelance gigs to build experience and credibility, then scale into larger, long-term engagements by specializing in areas like automation, performance, or QA consulting.

QA and Software Testing Job Landscape in the USA | Júlio de Lima | Testflix 2025 | #testingcommunity

This Atomic Talk is based on an analysis of more than 500 QA and software testing job openings across the United States. The session walks through the research process, the data collected, and the key trends revealed through clear graphs and insights from the study. By the end of the talk, attendees will have a strong understanding of the most in-demand testing skills, tools, programming languages, and automation technologies currently shaping the QA job market in the U.S., helping them make more informed career and upskilling decisions.

Bias In, Bias Out: Knowing Various Biases in Testing AI | Maheshwaran VK | Testflix 2025

Just like humans, AI systems are shaped by how they are brought up. In the case of Large Language Models, this upbringing happens through data collection, training, and productization. At each of these stages, bias can quietly enter the system through the data we select, the way models are trained, or the assumptions embedded into the final product. These biases, whether intentional or accidental, influence how models think, respond, and interact with users in the real world.

Effective Public Speaking | Johanna Rothman | Testflix 2025 | #testingcommunity

As AI becomes more capable, many managers assume that knowledge workers can be easily replaced by machines. Yet innovation still comes from people learning, collaborating, and sharing ideas. Rather than worrying about replacement, knowledge workers can actively demonstrate their value by developing strong public speaking skills.

Building Quality in LLM-Powered Applications | Craig Risi | Testflix 2025 | #testingcommunity

As organizations rapidly adopt Large Language Models, many discover that building reliable and trustworthy AI systems is far more complex than traditional software development. LLMs are non-deterministic, context-sensitive, and prone to issues like bias, hallucinations, and prompt injection, making quality assurance a deeper challenge than simple testing.
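Non-determinism is what breaks traditional exact-match testing: the same prompt yields different strings on each run. A common workaround is property-based checking, asserting invariants that every sample must satisfy rather than comparing against a golden output. A minimal sketch, with an illustrative set of checks of my own choosing; `generate` stands in for any callable that invokes an LLM.

```python
def check_invariants(generate, prompt, checks, n=5):
    """Run a prompt n times and record every (run, check, output) that fails.

    Exact-match assertions are replaced by invariants that must hold
    on every sampled output.
    """
    failures = []
    for i in range(n):
        output = generate(prompt)
        for name, check in checks.items():
            if not check(output):
                failures.append((i, name, output))
    return failures

# Example invariants (illustrative, not exhaustive):
checks = {
    "non_empty": lambda out: bool(out.strip()),
    "no_apology": lambda out: "sorry" not in out.lower(),
    "under_200_chars": lambda out: len(out) <= 200,
}
```

The same harness extends to issues the talk names: a "no prompt leakage" invariant can scan outputs for the system prompt, and a groundedness invariant can guard against hallucinated claims.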