
AI Connection Pooling Best Practices | DreamFactory

Key takeaways: For AI workloads, pooling must handle long connection hold times and heavy traffic. DreamFactory is a secure, self-hosted enterprise data access platform that provides governed API access to any data source, connecting enterprise applications and on-prem LLMs with role-based access and identity passthrough. Combined with tools like PgBouncer, these solutions free connections faster and improve scalability. Simple tweaks, such as segmenting pools and setting timeouts, can boost efficiency.
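As a rough illustration of the pool-segmentation and timeout tweaks mentioned above, here is a minimal Python sketch using SQLAlchemy rather than DreamFactory's or PgBouncer's own configuration; the connection URLs, pool sizes, and the split between an interactive pool and an AI-workload pool are illustrative assumptions.

```python
# Minimal sketch: segmenting pools and setting timeouts with SQLAlchemy.
# The connection URLs and pool sizes below are placeholder values.
from sqlalchemy import create_engine, text

# Pool for short, interactive API queries: small, and fails fast if exhausted.
app_engine = create_engine(
    "postgresql+psycopg2://app_user:secret@db-host/appdb",  # placeholder URL
    pool_size=10,        # steady-state connections held open
    max_overflow=5,      # burst headroom under heavy traffic
    pool_timeout=5,      # seconds to wait for a free connection before erroring
    pool_recycle=1800,   # recycle connections periodically to avoid stale sessions
    pool_pre_ping=True,  # verify a connection is alive before handing it out
)

# Separate pool for AI workloads that hold connections longer (e.g., long
# analytical reads), so they cannot starve the interactive pool.
ai_engine = create_engine(
    "postgresql+psycopg2://ai_user:secret@db-host/appdb",   # placeholder URL
    pool_size=4,
    max_overflow=0,      # hard cap: long-running work never exceeds 4 connections
    pool_timeout=30,
    pool_recycle=1800,
)

with app_engine.connect() as conn:
    conn.execute(text("SELECT 1"))  # the connection returns to the pool on exit
```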

Ep 70 | AI Risk & Cybersecurity: Theresa Payton on the New Threat Landscape

As AI adoption accelerates, so do the risks that come with it. So what happens when AI puts cyberattack capabilities into everyone’s hands? In this episode of The AI Forecast, Paul Muller is joined by Theresa Payton to break down the new reality of AI-powered threats. Drawing on decades of experience as the first female White House CIO, CEO of Fortalice Solutions, and the author of four books on privacy and big data, Theresa explains why AI has fundamentally changed the rules of cybersecurity and why most organizations are still playing catch-up.

Stop the AI Iceberg | Secure AI Using Ontologies and Semantic Layers

Don’t let the "AI iceberg" sink your IP. Most leaders only focus on the flashy models at the surface, but the real value—and the risk—is what’s underneath. Tony Seale and Jessica Talisman reveal why turning AI back onto your own data infrastructure to build connected ontologies is the key to security. This semantic foundation is the core of Agentic Analytics, ensuring your insights are grounded in your specific business logic rather than generic LLM guesses.

Ontology: The Secret to Semantic Layers | The Data & AI Chief Podcast

Is your AI-driven "autonomous enterprise" a reality or a peak-of-inflated-expectations dream? Most organizations rush toward the end state of AI agents without doing the foundational work of defining how their data actually relates through a robust ontology. In this episode of The Data & AI Chief, we sit down with Tony Seale, Founder of The Knowledge Graph Guys, and Jessica Talisman, CEO and Founder of The Ontology Pipeline. We break down why the "lost art" of data modeling and the development of semantic layers are the secret weapons for scaling Agentic Analytics.

How Semantic Layers and Ontologies Create Trusted AI

Learn why an organization’s ontology, a structured framework for how a business defines, connects, and makes sense of its data and knowledge, is the most valuable and most overlooked asset in any AI strategy. Jessica Talisman, CEO and Founder of The Ontology Pipeline, and Tony Seale, Founder of The Knowledge Graph Guys, break down what it actually takes to build trusted AI, covering everything from semantic layers and knowledge graphs to why provenance is non-negotiable.
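To make the idea of an ontology as a structured framework a little more concrete, here is a toy Python sketch using rdflib; the namespace, metric, and predicates are invented for illustration and are not drawn from the episode.

```python
# Toy sketch of what an ontology-backed "semantic layer" entry might look like.
# The EX namespace and its terms are hypothetical, not from any real model.
from rdflib import Graph, Namespace, RDF

EX = Namespace("https://example.com/ontology/")   # hypothetical business ontology
PROV = Namespace("http://www.w3.org/ns/prov#")    # W3C provenance vocabulary

g = Graph()
metric = EX.QuarterlyRevenue

g.add((metric, RDF.type, EX.BusinessMetric))             # what the term is
g.add((metric, EX.definedBy, EX.FinanceGlossary))        # where its definition lives
g.add((metric, EX.excludes, EX.RefundedOrders))          # business logic an LLM would otherwise guess at
g.add((metric, PROV.wasDerivedFrom, EX.ERPOrdersTable))  # provenance of the underlying data

for s, p, o in g.triples((metric, None, None)):
    print(s, p, o)
```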

Why Enterprise AI Can Get the Query Right and the Answer Wrong

Most teams deploying AI agents on their data are watching the wrong things. They check whether the query ran and whether the number looks plausible. When both checks pass, the agent gets credit for a correct answer, and the output flows into dashboards, decisions, and the next agent in the chain. There's a gap between those two checks and actual correctness, and it's where the expensive mistakes live. Getting to a correct answer requires more than a formally valid calculation.
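A small sketch of that gap, using an in-memory SQLite table with made-up numbers: the agent's query executes cleanly and returns a plausible total, but it ignores a business rule that the governed definition enforces.

```python
# Minimal sketch of the gap between "the query ran" and "the answer is right".
# Schema and figures are fabricated purely for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, amount REAL, status TEXT);
    INSERT INTO orders VALUES
        (1, 100.0, 'completed'),
        (2, 250.0, 'completed'),
        (3, 400.0, 'refunded');
""")

# An agent-written query that executes and returns a plausible number...
agent_total = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]        # 750.0

# ...but the business definition of revenue excludes refunded orders.
correct_total = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE status = 'completed'"
).fetchone()[0]                                                                    # 350.0

# The usual checks pass: the query ran, and 750.0 looks plausible. Only a check
# against the governed definition exposes the difference.
print(f"agent answer: {agent_total}, governed definition: {correct_total}")
```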

Why traditional QA metrics fall short as AI enters the pipeline

Take this scenario: Your team ships a release with 91% code coverage. Every test in the suite passes. The pipeline is green, and leadership signs off. But two days later, a critical defect surfaces in production. Upon investigation, you find that the changed code was never actually tested, and the tests that were run covered different paths entirely. That 91% was real, but it was just measuring the wrong thing. And as AI tools generate more of the code inside those pipelines, the gap widens.
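One way to see how a healthy-looking total can hide an untested change is to compare overall coverage with coverage of only the changed lines. The sketch below uses fabricated line numbers purely to illustrate the arithmetic.

```python
# Minimal sketch of why overall coverage can hide an untested change.
# Line numbers and percentages are made up for illustration.

all_lines = set(range(1, 101))        # 100 lines in the module
covered_lines = set(range(1, 92))     # 91 lines executed by the test suite
changed_lines = {95, 96, 97, 98}      # the lines touched by this release

total_coverage = len(covered_lines & all_lines) / len(all_lines)
diff_coverage = len(covered_lines & changed_lines) / len(changed_lines)

print(f"total coverage: {total_coverage:.0%}")  # 91% -- the number leadership signs off on
print(f"diff coverage:  {diff_coverage:.0%}")   # 0%  -- the changed code was never run
```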

AI-Ready APIs for Legacy Systems

80% of enterprise apps still use decades-old systems, but accessing their data for AI is tough. The challenge? Security risks, outdated interfaces, and slow performance. Here's the solution: API abstraction. This method creates a secure, no-code layer between AI and legacy systems. It keeps your old code intact while enabling AI to access data safely and efficiently.
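As a rough sketch of the abstraction pattern (not how DreamFactory itself is implemented), the Python example below puts a small FastAPI layer with a role check in front of a stand-in legacy lookup; the endpoint, API key, and record names are hypothetical.

```python
# Minimal sketch of the API-abstraction pattern: a thin, governed REST layer in
# front of a legacy data source, so AI clients never touch the old system directly.
from fastapi import FastAPI, Header, HTTPException

app = FastAPI()

ROLE_KEYS = {"reporting-key": "read_only"}            # placeholder API-key-to-role map
LEGACY_CUSTOMERS = {1: {"id": 1, "name": "Acme Corp"}}  # stand-in for the legacy store

def legacy_lookup(customer_id: int) -> dict:
    # Stand-in for a call into the legacy system (mainframe, old ERP, etc.).
    return LEGACY_CUSTOMERS.get(customer_id, {})

@app.get("/api/customers/{customer_id}")
def get_customer(customer_id: int, x_api_key: str = Header(...)):
    # Role-based access is enforced in the abstraction layer, not in legacy code.
    if ROLE_KEYS.get(x_api_key) != "read_only":
        raise HTTPException(status_code=403, detail="role not permitted")
    record = legacy_lookup(customer_id)
    if not record:
        raise HTTPException(status_code=404, detail="customer not found")
    return record
```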