
Streaming Data to AI-Ready Tables: Tableflow for Delta Lake and Databricks Unity Catalog Is Now Generally Available

The true power of data emerges when streaming, analytics, and artificial intelligence (AI) connect—transforming real-time streaming data into actionable intelligence. Yet bridging that gap has long been one of the most complex challenges in modern data architecture. Confluent makes it effortless to capture and process continuous streams of data, while Databricks empowers teams to analyze, govern, and apply AI through Unity Catalog.

Faster, Smarter, More Context-Aware: What's New in Streaming Agents

When we first introduced Streaming Agents, we were solving a fundamental challenge: Every AI problem is a data problem. When data is missing, stale, or inaccessible, even the most advanced agents and LLMs fail to deliver. How do we build scalable agents that aren’t just powerful in isolation, but part of multi-agent systems that are event-driven, replayable, and grounded in accurate data?

Introducing Real-Time Context Engine: Simplified Context Engineering With Real-Time, Processed Data for AI

We’re excited to announce our Real-Time Context Engine, now available in Early Access. It’s a key part of Confluent Intelligence, our vision to bring real-time data directly to production AI systems through the power of Apache Kafka and Apache Flink.

Demo: Streaming Agents for Price Matching, With RAG, Observability, and Real-Time Context Engine

Streaming Agents enable you to build, deploy, and orchestrate event-driven agents on Apache Flink and Apache Kafka. Embedded in the stream, they can tap into the latest enriched data and be the eyes and ears of a business, continuously monitoring and acting on live operational events. In this demo, Brenner Heintz, Staff Technical Marketing Manager at Confluent, shows how to build price-matching agents, perform vector search for retrieval-augmented generation (RAG), and leverage Confluent’s Real-Time Context Engine to process and serve fresh context the moment it’s needed for AI decision-making.

Keynote: Building Intelligent Systems on Real-Time Data

Join Jay Kreps, Confluent leadership, our customers, and industry thought leaders to learn how you can build intelligent systems with real-time data. We’ll show you why streaming is becoming ubiquitous across the business, and how that unlocks a shift-left approach: process and govern at the source, then reuse everywhere. Expect live demos and candid customer stories that make it concrete. Whether you’re a data leader, architect, or builder, you’ll leave with practical playbooks for bringing real-time AI to production. The future is here. Let’s ignite it together!

Why Apache Kafka Migration Costs Are Often Underestimated

Because Apache Kafka is a critical, stateful system, migrating Kafka deployments is virtually always a complex engineering project where the most significant expenses are often hidden. Scoping and committing to a Kafka migration requires multiple layers of careful calculation involving infrastructure choices, data complexity, team expertise, and risk tolerance. Underestimating these variables leads to blown budgets and extended timelines.

Introducing Confluent Private Cloud: Cloud-Level Agility for Your Private Infrastructure

If you’re on a platform team running Apache Kafka, you know it’s rarely simple. You’re expected to keep it stable, performant, and secure while juggling requests from every direction. Supporting multiple teams and partners leads to operational complexity that never really goes away.

Tableflow Is Production Ready: Delta Lake, Unity Catalog, Azure Early Availability (EA), and More Enterprise-Grade Features

Data-driven organizations know that unlocking real-time analytics from streaming data isn’t just about collecting and transmitting events. It’s about getting high-quality, governed, and query-ready tables into the hands of analysts and business users while ensuring enterprise-grade security and compliance. Traditionally, moving data from Apache Kafka into analytic tables required complex ETL pipelines, manual data wrangling, and custom governance processes.

Unified Stream Manager: Manage and Monitor Apache Kafka Across Environments

If you’re running Confluent Platform or our new offering, Confluent Private Cloud, on-premises, you have your reasons: data sovereignty, regulatory compliance, or maybe a phased cloud migration. Your on-prem Apache Kafka isn’t going anywhere. It’s a critical part of your infrastructure.

Confluent and Your Data: A Partnership You Can Trust

At Confluent, we know that our platform must provide your business with resilience for your mission-critical applications, and we take that responsibility very seriously. Any unplanned outage can result in lost revenue, reputation damage, or fines. Because incidents inevitably happen, your organization needs to know how to maximize availability with our products.

The True Cost of Real-Time Data Streaming

Thanks to the ever-increasing adoption of technologies like Apache Kafka and Apache Flink, the continuous movement and streaming of real-time data has transformed how modern businesses operate… but is the cost of data streaming worth it? From powering personalized recommendations to enabling instant fraud detection, streaming is often seen as synonymous with innovation and competitive advantage. But like any investment, the cost-benefit equation has to make sense.

How to Build Real-Time Compliance & Audit Logging With Apache Kafka

Traditionally, compliance teams have had to rely on batch exports for their audit logs, a method that, while functional, is proving to be woefully inadequate in today's fast-paced digital landscape. The truth is, waiting hours, or even days, for batch exports of your audit data leaves your organization vulnerable.

Connect Migration Utility: Convert Self-Managed Connectors to Fully Managed in a Few Minutes

Migrating from self-managed Apache Kafka connectors to fully managed connectors has been a persistent challenge for data teams working on Confluent Cloud. While Confluent-managed connectors deliver enterprise-grade features, seamless upgrades, and comprehensive support that add up to significant development and operations cost savings, the journey to get there often feels daunting and opaque.

Lessons Learned With Confluent-Managed Connectors and Terraform

I’m a Data Streaming Engineer and a developer advocate, which means I spend a lot of time thinking about the day-to-day experience of building applications with data streaming and stream processing. I muse about a world of data in motion where entire organizations have the governance needed to manage, discover, and understand the complex relationships between data streams.

Confluent: The Real-Time Backbone for Agentic Systems

In the evolving landscape of agentic systems, Confluent and Google Cloud together emerge as critical enablers, providing the real-time infrastructure that underpins efficient, reliable, and intelligent data flow. This powerful synergy addresses key challenges in agent-to-agent (A2A) communication, interaction with external resources, and the overall stability and observability of complex multi-agent environments.

Leveraging Confluent Cloud Schema Registry With AWS Lambda Event Source Mapping

In our previous blog post, we introduced two ways that Confluent Cloud can integrate with AWS Lambda. One option is Lambda’s Event Source Mapping (ESM) for Apache Kafka, wherein Lambda creates a consumer group, consumes records from the provided topic, and triggers the Lambda function. The ESM polls the records, and the consumed records then serve as the event data provided to (and processed by) the Lambda function.
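As a rough illustration of that flow (not the Schema Registry integration covered in the post itself), here is a minimal sketch of a handler built on the aws-lambda-java-events KafkaEvent type; the class and any topic or field names are hypothetical, and record values arrive Base64-encoded in the ESM payload:

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.KafkaEvent;

import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of a Lambda handler invoked by an Event Source Mapping for Kafka.
// The ESM delivers a batch of records grouped by "topic-partition" keys;
// keys and values are Base64-encoded strings in the event payload.
public class OrderEventHandler implements RequestHandler<KafkaEvent, Void> {

    @Override
    public Void handleRequest(KafkaEvent event, Context context) {
        event.getRecords().forEach((topicPartition, records) -> {
            for (KafkaEvent.KafkaEventRecord record : records) {
                // Decode the Base64-encoded value back into the original bytes.
                byte[] value = Base64.getDecoder().decode(record.getValue());
                String payload = new String(value, StandardCharsets.UTF_8);
                context.getLogger().log(
                        String.format("topic=%s partition=%d offset=%d value=%s",
                                record.getTopic(), record.getPartition(),
                                record.getOffset(), payload));
            }
        });
        return null; // returning normally acknowledges the batch
    }
}
```

In a real deployment, the decoded bytes would typically be passed through a Schema Registry-aware deserializer rather than treated as plain UTF-8 text, which is exactly the gap the post goes on to address.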

Cross-Cloud Data Replication Over Private Networks With Confluent

Modern businesses don’t run in just one place. Your applications might live in Amazon Web Services (AWS), your analytics in Microsoft Azure, and critical systems on-premises. The challenge? Keeping all that data connected and flowing in real time—without adding complexity or risk. As more organizations adopt these multicloud strategies, the need for secure, private data replication has become critical.

Monitor Kafka Streams Health Metrics in Confluent Cloud

It’s 3 a.m., and an alert fires: Your critical Kafka Streams application is lagging. The frantic troubleshooting begins. Is it a consumer group rebalance? You start searching through application logs across multiple pods. Is it a problem with the Apache Kafka cluster itself? You switch to your cluster monitoring dashboards to check broker health. Or is there a silent bottleneck hidden deep in your application code? Without the right instrumentation, you're flying blind.
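For a sense of what client-side instrumentation looks like (this is a minimal sketch under assumed application and topic names, not the managed Confluent Cloud feature the post describes), a Kafka Streams application can register a state listener to surface rebalances and read its built-in client metrics:

```java
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

import java.util.Map;
import java.util.Properties;

public class StreamsHealthProbe {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-enricher");  // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "BOOTSTRAP_URL"); // placeholder

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("orders").to("orders-enriched");                     // hypothetical topology

        KafkaStreams streams = new KafkaStreams(builder.build(), props);

        // Log every lifecycle transition (e.g. RUNNING -> REBALANCING) so a
        // rebalance shows up in your logs instead of being guessed at 3 a.m.
        streams.setStateListener((newState, oldState) ->
                System.out.printf("Streams state changed: %s -> %s%n", oldState, newState));

        streams.start();

        // The client exposes built-in metrics (commit latency, poll rate, task
        // processing rate, etc.); in practice you would export these periodically.
        for (Map.Entry<MetricName, ? extends Metric> entry : streams.metrics().entrySet()) {
            MetricName name = entry.getKey();
            if (name.group().equals("stream-thread-metrics")) {
                System.out.printf("%s = %s%n", name.name(), entry.getValue().metricValue());
            }
        }
    }
}
```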

Beyond Compliance: Confluent's Commitment to Trust and Transparency

In today's fast-paced digital world, real-time data streaming has become indispensable for modern enterprises, powering everything from instant insights to enhanced customer experiences. As organizations move critical data infrastructure to the cloud, the need for robust security, risk management, and unwavering compliance is greater than ever. According to the 2025 Data Streaming Report, investments in security remain among the highest priorities for 94% of surveyed IT leaders.

No More Swamps: Building a Better-Governed Data Lake Architecture

Two data challenges exist across almost all organizations: access and trust. These issues scale exponentially as an organization grows to the point that it can no longer hand around sheets of paper or approve database access. The demand for better data access drove the history of data warehousing, following the ethos that better decisions come from more data and that compute would catch up with demand. However, the hunger for collecting more data didn’t come without a cost.

The Future of Coding: How Cursor and WarpStream Power AI Productivity | Life Is But A Stream

Software development is changing fast. With Cursor, Anysphere is building an AI-forward IDE that fuses human creativity with machine intelligence. At the heart of this transformation is data streaming—making it possible to train models responsibly, deliver lightning-fast Tab completions, and scale telemetry without breaking engineering velocity. In this episode, engineer Alex Haugland shares how WarpStream gives Cursor sovereignty over user data, how telemetry and accounting pipelines strengthen product decisions, and why “coding is really just a bug” in how we interact with computers.

Expanding the AI Data Landscape: Confluent's Q3 Integrations Summary

In an era when every second counts, enterprises that can act on information the moment it arrives are positioned to win—and real-time streaming data is the fuel that brings artificial intelligence (AI) to life. Powering agentic AI and advanced analytics can’t be done with static or delayed data; organizations need a comprehensive, reliable supply of streaming data representing their entire businesses.

Cross-Data-Center Apache Kafka Replication: Decision Framework & Readiness Playbook

Building distributed systems is a huge undertaking, but the complexity doesn’t end once your application or platform is “production ready.” Keeping these systems online and operational through cloud region outages, network partitions, or just scheduled maintenance is a constant challenge. The bottom line: you don’t want data pipelines for essential business services, customer-facing products, or enterprise data platforms to go dark.

Scaling Kafka Streams Applications: Strategies for High-Volume Traffic

As the adoption of real-time data processing accelerates, the ability to scale stream processing applications to handle high-volume traffic is paramount. Apache Kafka, the de facto standard for distributed event streaming, provides a powerful and scalable library in Kafka Streams for building such applications. Scaling a Kafka Streams application effectively involves a multi-faceted approach that encompasses architectural design, configuration tuning, and diligent monitoring.
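One concrete lever on the configuration side: Kafka Streams parallelism is bounded by the number of input partitions, and you scale toward that bound by running more stream threads per instance and more instances under the same application.id. The sketch below uses hypothetical application, broker, and topic names:

```java
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

import java.util.Properties;

public class ScaledStreamsApp {

    public static void main(String[] args) {
        Properties props = new Properties();
        // All instances sharing this application.id form one consumer group, so the
        // input partitions (the upper bound on parallelism) are divided among them.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "clickstream-aggregator"); // hypothetical
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "BOOTSTRAP_URL");       // placeholder
        // Run several stream threads per instance to use the cores on each host.
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 4);
        // Standby replicas keep warm copies of state stores so task failover is fast.
        props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 1);

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("clickstream").to("clickstream-by-user"); // hypothetical topology

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

Adding instances beyond the partition count leaves the extras idle, which is why architectural design (partitioning strategy) and monitoring belong in the same conversation as configuration tuning.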