
Streaming Data at Scale with Strategic Cloud Partners | Life Is But A Stream Podcast

Strategic partnerships don’t work without trust. And in the data streaming world, trust begins with transparency. In this episode, Elena Cuevas, Sr. Manager of Cloud Partner Solutions Engineering at Confluent, joins Joseph Morais to unpack what it takes to collaborate with hyperscalers like AWS, Google Cloud, and Microsoft Azure. From managing competitive overlaps to future-proofing enterprise data architectures, Elena shares her battle-tested strategies for driving alignment across complex stakeholder ecosystems.

New With Confluent Platform 8.0: Stream Securely, Monitor Easily, and Scale Endlessly

At Confluent, we’re committed to building the world's leading data streaming platform, which gives you the ability to stream, connect, process, and govern all of your data and make it available wherever it’s needed—however it’s needed—in real time. Today, we're excited to announce the release of Confluent Platform 8.0! This release builds on Apache Kafka 4.0, reinforcing our core capabilities as a data streaming platform.

Moving Up the Curve: 5 Tips For Enabling Enterprise-Wide Data Streaming

Confluent recently released its 2025 Data Streaming Report: Moving the Needle on AI Adoption, Speed to Market, and ROI. The report found that data streaming is delivering real business value, with 44% of IT leaders reporting up to 5x or more return on their data streaming investments. That said, as companies continue to expand their data streaming use cases, many struggle with nontechnical hurdles: scaling adoption, setting up operations, and breaking down organizational silos.

7 Steps to Build an AI-Powered Personalization Engine With Confluent & Databricks

The advancement and widespread availability of new artificial intelligence (AI) capabilities—through platforms like the Databricks Data Intelligence Platform and Mosaic AI—has completely reset expectations for engineering teams across every industry. Business now moves at a new pace, demanding rapid delivery of intelligent, real-time applications—instead of slowly stitched-together systems solving problems defined and scoped months prior.

The Easiest Way to Power Real-Time AI: Confluent Announces Delta Lake Support & Unity Catalog Integration for Tableflow

In the age of AI, the hunger for fresh, reliable data to power machine learning (ML) models and real-time analytics is insatiable. Yet, organizations frequently hit roadblocks when trying to bridge their operational data in motion, typically flowing through Apache Kafka, with their data at rest in data lakehouses. On one side, you have the data streaming platform, the central nervous system managing the real-time flow of business events.

Allium's Blueprint for Scaling Blockchain Data with Data Streaming | Life Is But A Stream Podcast

Blockchain may be decentralized, but reliable access to its data is anything but simple. In this episode, Ethan Chan, Co-Founder & CEO of Allium, shares how his team transforms blockchain firehoses into clean, queryable, real-time data feeds. From the pitfalls of hosting your own data streaming infrastructure to the business advantages of Confluent Cloud, Ethan reveals the strategic decisions that helped Allium scale from 3 to nearly 100 blockchains, without burning out their engineering team.

Unlocking Real-Time Analytics With Confluent Tableflow, Apache Iceberg, and Snowflake

Users of Snowflake and other data lakes and data warehouses need real-time data for artificial intelligence (AI) and analytical workloads—but getting that data into those systems is a persistent struggle. In response to this ubiquitous challenge, Confluent developed Tableflow.

Introducing KIP-848: The Next Generation of the Consumer Rebalance Protocol

The consumer group is a cornerstone of Apache Kafka, enabling scalable and fault-tolerant data consumption by allowing multiple consumer instances to share the workload of reading from topic partitions. The consumer rebalance protocol is the mechanism that coordinates which partitions are assigned to which consumers within a group.
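For readers who want to try the new protocol, opting in is a client-side configuration change. A minimal sketch of a consumer properties file is below; the broker address, group ID, and topic are placeholders, and the key line is `group.protocol`, which selects the KIP-848 protocol (`consumer`) instead of the legacy one (`classic`). This assumes a broker running Kafka 4.0 or later with the new protocol enabled.

```properties
# Placeholder connection details -- adjust for your cluster.
bootstrap.servers=localhost:9092
group.id=example-group

# Opt in to the KIP-848 consumer rebalance protocol.
# "classic" selects the legacy eager/cooperative protocol.
group.protocol=consumer

key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
```

With `group.protocol=consumer`, partition assignment is computed on the broker side and reconciled incrementally, so a single consumer joining or leaving no longer forces a stop-the-world rebalance across the whole group.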