A Wharton AI Research Leader's Formula for Responsible AI

Learn why scaling AI is as much a human challenge as it is a technological one. Stefano Puntoni, Co-Director of Wharton Human-AI Research and Professor at The Wharton School, examines the limits of data-driven decision making in the age of AI and why insights so often fail to translate into action. He breaks down the psychology behind AI resistance and outlines the leadership and change management strategies needed to turn AI potential into real organizational impact.

Why we built vision AI into TestComplete: Solving the complex app testing challenge

When we talk to testing teams at enterprise organizations, we hear the same frustrations repeatedly: “Our automation breaks every time the UI changes.” “We can’t test this application because it doesn’t expose accessible properties.” “We spend more time maintaining tests than creating new ones.” These scenarios block test automation adoption for teams that need it most.

Ep 66 | Women Leaders in Technology: AI Agents Are Your New Team. Now What?

From econometrics to anthropology to leading roles at Salesforce, AWS, and Nextdoor, Tatyana shares how her background shaped a fundamentally different approach to leadership. Drawing on her unconventional journey, she explains why agentic AI is forcing leaders to rethink how they manage technology, shifting the focus from systems to teams, culture, and governance. Together, Tatyana and Paul share their perspectives.

Data Silos Could Be Your Biggest Cloud Liability

In an always-on industrial economy, fragmented data is a liability. Your analytics reports may look flawless, but if they’re built on data silos scattered across edge, core, and cloud, they’re built on a fault line. Data silos drive up costs, distort the critical decisions meant to drive competitiveness, and prevent organizations from reaching a state of data singularity — where data becomes unified, portable, and continuously usable for AI.

AI-Powered Test Automation: A Complete Guide for Engineering Leaders

Your developers are shipping more code than ever. GitHub Copilot, Cursor, and tools like them have fundamentally changed developer throughput: some teams are seeing 40–76% more code per person per sprint. That is the headline everyone celebrates. The part that keeps engineering leaders up at night is the other side of that equation: your testing pipeline has not changed at the same pace. Tests that used to gate two releases a week now need to gate ten.

Why 95% of AI pilots fail, and what it takes to scale in the agentic era

Last August, MIT released a landmark report that confirmed what many enterprise leaders had started to fear: most AI pilots are failing. After reviewing hundreds of AI initiatives, researchers found that 95% of generative AI pilots failed to reach production or deliver measurable results. The headline quickly hardened into a cliché: AI doesn’t scale.

I Let AI Audit My LinkedIn Strategy (Here's what happened)

If you’re consistently posting on LinkedIn, the hard part isn’t getting data — it’s analyzing it. Most people review posts one by one, compare impressions manually, and try to “spot patterns” by eye. That’s slow. And it makes strategy reactive. In this walkthrough, Kamil Rextin, founder of 42 Agency, uses the Databox MCP with Claude to run a fast, AI-driven analysis of his LinkedIn performance — the kind of first-pass review you’d normally assign to a junior analyst.

The LiteLLM Supply Chain Attack: A Complete Technical Breakdown of What Happened, Who Is Affected, and What Comes Next

In March 2026, security researcher isfinne discovered that LiteLLM version 1.82.8—the most popular open-source LLM proxy in the Python ecosystem, with approximately 97 million monthly downloads—contained credential-stealing malware published to PyPI. Within hours, version 1.82.7 was confirmed to carry a similar payload through a different injection method.

The AI Supply Chain Is Now Critical Infrastructure: Lessons from the TeamPCP Campaign That Hit Trivy, Checkmarx, and LiteLLM

In the span of five days in March 2026, a single threat actor—TeamPCP—compromised a vulnerability scanner (Trivy), a code analysis platform (Checkmarx), and the most widely used LLM proxy in the Python ecosystem (LiteLLM). The attack chain was surgical: each compromised tool provided credentials to attack the next target.