
Latest Posts

Data Products, Data Contracts, and Change Data Capture

Change data capture (CDC) has long been one of the most popular, reliable, and quickest ways to turn your database tables into data streams. It is a powerful pattern and one of the most common and easiest ways to bootstrap data into Apache Kafka®. But it comes with a significant drawback: it exposes your database’s internal data model to the downstream world.
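As a rough illustration of the decoupling the post argues for, here is a minimal Kafka Streams sketch that republishes raw CDC change events to a contracted, consumer-facing topic. The topic names and the pass-through projection are hypothetical placeholders, not the post's actual pipeline.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class OrdersDataProduct {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-data-product");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

    StreamsBuilder builder = new StreamsBuilder();
    // "cdc.inventory.orders" stands in for a connector-generated topic that mirrors
    // the internal table; "orders.v1" is the public topic governed by a data contract.
    KStream<String, String> changes = builder.stream("cdc.inventory.orders");
    changes
        // Placeholder: in a real pipeline this step would parse the change event and
        // keep only the fields promised by the contract, hiding the internal schema.
        .mapValues(value -> value)
        .to("orders.v1");

    new KafkaStreams(builder.build(), props).start();
  }
}
```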

New with Confluent Platform: Seamless Migration Off ZooKeeper, Arm64 Support, and More

With the increasing importance of real-time data in modern businesses, companies are leveraging distributed streaming platforms to process and analyze data streams in real time. Many companies are also transitioning to the cloud, which is often a gradual process that takes several years and involves incremental stages. During this transition, many companies adopt hybrid cloud architectures, either temporarily or permanently.

How to Use Confluent for Kubernetes to Manage Resources Outside of Kubernetes

Apache Kafka® cluster administrators often need to solve problems such as onboarding new teams, managing resources like topics and connectors, and maintaining permission control over those resources. In this post, we will demonstrate how to use Confluent for Kubernetes (CfK) to enable GitOps with a CI/CD pipeline and delegate resource creation to groups of people without distributing admin credentials across the organization.
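To make the GitOps idea concrete, here is a sketch of the kind of declarative resource a CI/CD pipeline could commit to Git and apply to the cluster. The field names follow the CfK KafkaTopic custom resource as commonly documented, but the topic name, namespace, and config values are purely illustrative.

```yaml
# Illustrative topic definition applied by a pipeline instead of handing out admin credentials.
apiVersion: platform.confluent.io/v1beta1
kind: KafkaTopic
metadata:
  name: payments-events
  namespace: confluent
spec:
  replicas: 3
  partitionCount: 6
  configs:
    retention.ms: "604800000"
```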

Confluent's Customer Zero: Building a Real-Time Alerting System With Confluent Cloud and Slack

We talk a lot about how customers can use Confluent as the data backbone for event streaming applications and enable a new class of event-driven microservices by completely decoupling services from one another. With Confluent, organizations can rapidly build and deploy business applications with greater flexibility, support larger scale, and be more responsive to customer demands. But we don’t just talk about it, we do it ourselves as Confluent’s “Customer Zero”!

Extending the Confluent CLI With Custom Plugins

A good command line interface is essential for developer productivity. If you look at any of the major cloud providers, they all have a robust CLI that enables you to achieve high productivity. Those benefits carry over to data streaming: Confluent offers a powerful CLI that lets you quickly create and manage Apache Kafka® clusters and Apache Flink® compute pools, along with all associated operations for both.

Getting Started with OAuth for Confluent Cloud Using Azure AD DS

Released in December 2022, OAuth support on Confluent Cloud allows Confluent Cloud users to integrate their own third-party identity provider (IdP) with Confluent Cloud, centralizing account management across all of their cloud services. This article explains how to configure Azure Active Directory Domain Services (Azure AD DS) and Confluent Cloud so that Azure AD DS can be used to authenticate and authorize applications to use Confluent Cloud clusters.
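For a sense of the end state, here is a hedged sketch of how a Java Kafka client might be configured to authenticate to Confluent Cloud over SASL/OAUTHBEARER once the IdP integration is in place. The tenant, cluster, identity pool, and client credentials are placeholders, and the exact callback handler class can vary by Kafka client version.

```java
import java.util.Properties;

public class OAuthClientConfig {
  // Builds client properties for OAuth against Confluent Cloud; all IDs are placeholders.
  static Properties clientProps() {
    Properties props = new Properties();
    props.put("bootstrap.servers", "pkc-xxxxx.region.provider.confluent.cloud:9092");
    props.put("security.protocol", "SASL_SSL");
    props.put("sasl.mechanism", "OAUTHBEARER");
    // Token endpoint exposed by the identity provider (here, an Azure AD tenant).
    props.put("sasl.oauthbearer.token.endpoint.url",
        "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token");
    props.put("sasl.login.callback.handler.class",
        "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginCallbackHandler");
    // The logical cluster and identity pool tell Confluent Cloud which cluster to
    // authorize against and which pool maps the token's claims to Confluent roles.
    props.put("sasl.jaas.config",
        "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required"
            + " clientId='<app-client-id>' clientSecret='<app-client-secret>'"
            + " scope='<app-scope>'"
            + " extension_logicalCluster='lkc-xxxxx'"
            + " extension_identityPoolId='pool-xxxx';");
    return props;
  }
}
```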

Simplify and Accelerate Your Data Streaming Workloads With an Intuitive User Experience

Happy holidays from Confluent! It’s that time in the quarter again, when we get to share our latest and greatest features on Confluent Cloud. To start, we’re thrilled to share that Confluent ranked as a leader in The Forrester Wave™: Streaming Data Platforms, Q4 2023, and The Forrester Wave™: Cloud Data Pipelines, Q4 2023! Forrester strongly endorsed Confluent’s vision to transform data streaming platforms from a “nice-to-have” to a “must-have.”

Making Flink Serverless, With Queries for Less Than a Penny

Imagine easily enriching data streams and building stream processing applications in the cloud, without worrying about capacity planning, infrastructure and runtime upgrades, or performance monitoring. That's where our serverless Apache Flink® service comes in, as announced at this year’s Current | The Next Generation of Kafka Summit.

Enterprise Apache Kafka Cluster Strategies: Insights and Best Practices

Apache Kafka® has become the de facto standard for streaming data, helping companies deliver exceptional customer experiences, automate operations, and become software. As companies increase their use of real-time data, we have seen the proliferation of Kafka clusters within many enterprises. Often, siloed application and infrastructure teams set up and manage new clusters to solve new use cases as they arise.