How to Harness AI to Improve SDLC Quality | Mallika Fernandes | Testflix 2025

As software systems grow more complex and business cycles demand faster releases, traditional approaches to quality engineering are no longer enough. AI brings a new dimension to the Software Development Life Cycle—augmenting decision-making, predicting risks, automating quality checks, and continuously assuring value delivery. Armed with GenAI and agents, we can reshape how we think about software quality. This discussion explores practical ways to embed AI across the SDLC to accelerate delivery, reduce risk, and achieve a step-change in quality outcomes.

Private AI: Gains, Gaps and Gotchas | Samar Ranjan | Testflix 2025 | #testingcommunity

Private AI is emerging as a strong alternative for teams that need the power of AI without compromising data privacy or compliance. This session explores how local LLMs can support software development and test automation when cloud-based tools are not an option. Using setups like Ollama with models such as Qwen 2.5 and integrations like the Continue plugin, the talk demonstrates how secure, on-device AI can accelerate tasks like BDD creation, automation scripting, and performance testing.
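The on-device workflow the session describes can be sketched as a small client for Ollama's local REST API. This is a minimal illustration, not the speaker's code: the endpoint and JSON fields (`model`, `prompt`, `stream`) are Ollama's documented `/api/generate` interface on its default port 11434, while the `qwen2.5` model tag and the example prompt are assumptions based on the abstract.

```python
import json
import urllib.request

# Ollama's default local endpoint; nothing here leaves the machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "qwen2.5") -> bytes:
    """Build the JSON body Ollama's /api/generate endpoint expects."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(prompt: str, model: str = "qwen2.5") -> str:
    """Send a prompt to the locally running model and return its response text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(prompt, model),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires `ollama run qwen2.5` locally):
# generate("Write a BDD scenario for a login form in Gherkin.")
```

Tools like the Continue plugin sit on top of the same local endpoint, which is what keeps prompts and code inside the compliance boundary.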

How to Test RAG Pipelines for Reliable AI | Aparana Gupta | Testflix 2025 | #testingcommunity

Retrieval-Augmented Generation can sound convincing while still being wrong. This session focuses on moving beyond surface-level metrics and turning stochastic AI outputs into evidence-backed, verifiable results. It explores how to test the entire RAG pipeline, from ingestion and indexing to retrieval, grounding, and answerability, ensuring every response is traceable to the right source, policy, and user context.
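The grounding check at the heart of that pipeline can be illustrated with a deliberately simple sketch: flag any answer sentence that has no sufficient overlap with the retrieved source chunks. Real test harnesses would use embedding similarity or an NLI model rather than lexical overlap, and the function name and threshold here are illustrative, not from the talk.

```python
def is_grounded(answer_sentence: str, source_chunks: list[str],
                threshold: float = 0.5) -> bool:
    """Toy grounding check: does any retrieved chunk cover enough of the
    answer's content words? Ungrounded sentences are hallucination suspects."""
    words = {w.lower().strip(".,") for w in answer_sentence.split() if len(w) > 3}
    if not words:
        return False
    best = 0.0
    for chunk in source_chunks:
        chunk_words = {w.lower().strip(".,") for w in chunk.split()}
        best = max(best, len(words & chunk_words) / len(words))
    return best >= threshold
```

A test suite built this way asserts a property of every response (traceability to a retrieved source) instead of comparing against a single golden answer, which is what makes stochastic outputs verifiable.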

QA and Software Testing Job Landscape in the USA | Júlio de Lima | Testflix 2025 | #testingcommunity

This Atomic Talk is based on an analysis of more than 500 QA and software testing job openings across the United States. The session walks through the research process, the data collected, and the key trends revealed through clear graphs and insights from the study. By the end of the talk, attendees will have a strong understanding of the most in-demand testing skills, tools, programming languages, and automation technologies currently shaping the QA job market in the U.S., helping them make more informed career and upskilling decisions.

Breaking Boundaries: A Tester's Guide to Freelance and Remote Success | Manish Saini | Testflix 2025

Freelancing is more than a side hustle. It can be a launchpad to global careers, higher earning potential, and exposure to diverse teams and practices. This session shows how testers can start with small freelance gigs to build experience and credibility, then scale into larger, long-term engagements by specializing in areas like automation, performance, or QA consulting.

Peeking Under the Hood of Claude Code

Everyone is talking about Claude Code, but few people understand the machinery running in the background. Today, we’re opening up the terminal to see how Anthropic’s coding agent manages state, runs tests, and fixes its own bugs. From the Model Context Protocol (MCP) to its unique React-based terminal UI, find out what makes Claude Code the most "senior" feeling AI assistant on the market.

Is Claude Code Spying for OpenAI? #speedscale #anthropic #openai #claude #codingagent

While analyzing network traffic, we found huge amounts of telemetry, including chat snippets, being sent to statsig.anthropic.com. The irony? Statsig was recently acquired by OpenAI. In this video, we use proxymock to intercept the traffic and show you exactly what’s being sent from your terminal to Anthropic (and, technically, OpenAI’s infrastructure).

Effective Public Speaking | Johanna Rothman | Testflix 2025 | #testingcommunity

As AI becomes more capable, many managers assume that knowledge workers can be easily replaced by machines. Yet innovation still comes from people learning, collaborating, and sharing ideas. Rather than worrying about replacement, knowledge workers can actively demonstrate their value by developing strong public speaking skills.

Bias In, Bias Out: Knowing Various Biases in Testing AI | Maheshwaran VK | Testflix 2025

Just like humans, AI systems are shaped by how they are brought up. In the case of Large Language Models, this upbringing happens through data collection, training, and productization. At each of these stages, bias can quietly enter the system through the data we select, the way models are trained, or the assumptions embedded into the final product. These biases, whether intentional or accidental, influence how models think, respond, and interact with users in the real world.