Empowering Customers: The Role of Confluent's Trust Center

The foundation of every successful customer relationship is trust. At Confluent, we understand that for our customers and prospects to innovate with confidence, they must have complete trust in the security and integrity of our platform. Our commitment goes beyond simply providing a secure product. It’s about empowering our customers with the tools and transparency they need to feel confident in their data streaming architectures.

2026 Predictions: What's Next for Data Streaming and AI | Life Is But A Stream

AI isn’t just evolving; it’s reshaping who your customers are, how systems operate, and what real time really means. From machines making purchase decisions to agents increasing query volume across databases, the realities of 2026 are forcing leaders to rethink data architecture and governance strategies at a fundamental level. In this episode, Joseph is joined by Will LaForest (Field CTO, Confluent), Adi Polak (Director of Developer Advocacy & Experience, Confluent), and independent analyst Sanjeev Mohan to break down critical insights from Confluent’s 2026 Predictions Report.

SpotCache: Scale AI-ready data without cloud-spend surprises

AI is changing how work gets done. But for many data leaders, it’s also creating a new challenge: managing the cloud bill. As more people (and more AI agents) query data, cloud data warehouse (CDW) spend can spike fast. Costs become harder to predict, and teams end up choosing between scaling AI insights and staying within budget. That tension creates a real bottleneck on the path to becoming AI-ready.

Sales Leaders: Turn Intuition into Impact #OnTheSpot with Spotter

Sales Leaders: Are you making decisions based on data, or just a gut feeling? If you want to move faster, you need to see this. James Smith, our SVP of EMEA, demonstrates a #wowmoment that turns a vague intuition about SDR pipeline progression into a multi-million-dollar revenue roadmap using Spotter.

Multi-Node Training with ClearML

Orchestrating distributed AI workloads

Distributed (multi-node) training has become a requirement rather than an optimization for many modern AI workloads. As model sizes grow, datasets expand, and training timelines tighten, teams increasingly rely on multiple machines, often with multiple GPUs each, to complete training efficiently.
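To make the "multiple machines, multiple GPUs each" arrangement concrete, here is a minimal, ClearML-agnostic sketch of the bookkeeping every multi-node launcher performs: each node runs one worker process per GPU, and each worker gets a unique global rank derived from its node index and local GPU index. The function names are illustrative, not part of any specific framework's API.

```python
# Illustrative sketch of multi-node rank assignment (not ClearML-specific).
# In a typical launch, every node runs one process per GPU; the global rank
# uniquely identifies each worker across the whole job.

def global_rank(node_rank: int, local_rank: int, gpus_per_node: int) -> int:
    """Unique rank of a worker: node index times GPUs per node, plus local GPU index."""
    return node_rank * gpus_per_node + local_rank

def world_size(num_nodes: int, gpus_per_node: int) -> int:
    """Total number of worker processes in the job."""
    return num_nodes * gpus_per_node

# Example: 4 nodes with 8 GPUs each yields 32 workers;
# GPU 7 on node 3 (zero-indexed) is the last worker.
print(world_size(4, 8))      # 32
print(global_rank(3, 7, 8))  # 31
```

Orchestration layers automate exactly this mapping, along with propagating it (plus a rendezvous address) into each worker's environment so the processes can form a single training group.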