
Developer Experience in the Age of AI: Developing a Copilot Chat Extension for Data Streaming Engineers

Three in four programmers have tried artificial intelligence (AI). That statistic comes from a recent Wired survey on the habits of engineers with respect to AI tooling like GitHub Copilot. Though Wired surveyed a pool of only around 700 engineers, Gartner predicted a year ago that 75% of enterprise software engineers would be using AI by 2028. To many of us, it feels like that's already happened.

New With Confluent Platform 8.0: Stream Securely, Monitor Easily, and Scale Endlessly

At Confluent, we’re committed to building the world's leading data streaming platform, which gives you the ability to stream, connect, process, and govern all of your data and make it available wherever it’s needed—however it’s needed—in real time. Today, we're excited to announce the release of Confluent Platform 8.0! This release builds on Apache Kafka 4.0, reinforcing our core capabilities as a data streaming platform.

Moving Up the Curve: 5 Tips For Enabling Enterprise-Wide Data Streaming

Confluent recently released its 2025 Data Streaming Report: Moving the Needle on AI Adoption, Speed to Market, and ROI. The report found that data streaming is delivering measurable business value, with 44% of IT leaders reporting returns of 5x or more on their data streaming investments. That said, as companies continue to expand their data streaming use cases, many struggle with nontechnical hurdles: scaling, setting up operations, and breaking down organizational silos.

7 Steps to Build an AI-Powered Personalization Engine With Confluent & Databricks

The advancement and widespread availability of new artificial intelligence (AI) capabilities—through platforms like the Databricks Data Intelligence Platform and Mosaic AI—have completely reset expectations for engineering teams across every industry. Business now moves at a new pace, demanding rapid delivery of intelligent, real-time applications—instead of slowly stitched-together systems solving problems defined and scoped months prior.

The Easiest Way to Power Real-Time AI: Confluent Announces Delta Lake Support & Unity Catalog Integration for Tableflow

In the age of AI, the hunger for fresh, reliable data to power machine learning (ML) models and real-time analytics is insatiable. Yet, organizations frequently hit roadblocks when trying to bridge their operational data in motion, typically flowing through Apache Kafka, with their data at rest in data lakehouses. On one side, you have the data streaming platform, the central nervous system managing the real-time flow of business events.

Unlocking Real-Time Analytics With Confluent Tableflow, Apache Iceberg, and Snowflake

Users of Snowflake and other data lakes and data warehouses need real-time data for artificial intelligence (AI) and analytical workloads—but they struggle to get that data into their lakes and warehouses. In response to this ubiquitous challenge, Confluent developed Tableflow.

Introducing KIP-848: The Next Generation of the Consumer Rebalance Protocol

The consumer group is a cornerstone of Apache Kafka, enabling scalable and fault-tolerant data consumption by allowing multiple consumer instances to share the workload of reading from topic partitions. The consumer rebalance protocol is the mechanism that coordinates which partitions are assigned to which consumers within a group.
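To make the idea concrete, here is a minimal toy sketch of what a rebalance computes: dividing a topic's partitions across the live members of a consumer group. This round-robin-style assignment is an illustration of the concept only, not Kafka's actual assignor or the KIP-848 protocol; the function and names are hypothetical.

```python
def assign_partitions(num_partitions: int, consumers: list[str]) -> dict[str, list[int]]:
    """Split partition IDs 0..num_partitions-1 across consumers as evenly as possible."""
    members = sorted(consumers)  # deterministic member ordering
    assignment: dict[str, list[int]] = {c: [] for c in members}
    for p in range(num_partitions):
        # round-robin: partition p goes to member p mod group size
        assignment[members[p % len(members)]].append(p)
    return assignment

# When a consumer joins or leaves, the group "rebalances": recomputing the
# assignment over the new membership redistributes the partitions.
print(assign_partitions(6, ["c1", "c2", "c3"]))
# {'c1': [0, 3], 'c2': [1, 4], 'c3': [2, 5]}
```

KIP-848's contribution is not the assignment math itself but how the protocol coordinates it: moving rebalance logic to the broker and reconciling members incrementally rather than stopping the whole group.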

How to Query Apache Kafka Topics With Natural Language

Modern companies generate large volumes of data, but often the internal users who need that data to do their jobs—data engineers, managers, business analysts, and developers—can find it challenging to quickly figure out answers to their questions. Apache Kafka is a powerhouse for real-time data processing of high-throughput workloads, and many organizations use Kafka to enable self-service access to data streams.

Confluent Unites Batch and Stream Processing for Faster, Smarter Agentic AI and Analytics

On Confluent Cloud for Apache Flink®, snapshot queries combine batch and stream processing to enable AI apps and agents to act on past and present data. New private networking and security features make stream processing more secure and enterprise-ready.