What is Apache Flink?

Learn the basics of Apache Flink® and how to get started with simple, serverless Flink! Flink is a powerful, battle-hardened stream processor that has rapidly grown in popularity, becoming the de facto standard for stream processing and a top-five Apache project. Kai Waehner, Field CTO at Confluent, explains how Flink fits into your data streaming architecture, why stream processing is needed for real-time data, and how Flink’s underlying architecture provides a number of advantages.
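Flink programs can be written in Java, Scala, SQL, or Python (via PyFlink). As a rough sketch of the programming model, here is a minimal PyFlink example; the orders table and its fields are invented for illustration, and the datagen connector stands in for a real event stream:

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# Streaming-mode TableEnvironment: Flink's entry point for SQL/Table programs.
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Invented 'orders' stream; the datagen connector fabricates rows so the
# example runs without any external system.
t_env.execute_sql("""
    CREATE TABLE orders (
        order_id BIGINT,
        amount   DOUBLE,
        ts       TIMESTAMP(3),
        WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'datagen',
        'rows-per-second' = '5'
    )
""")

# A continuous query: revenue per 10-second window, updated as events arrive.
t_env.execute_sql("""
    SELECT window_start, SUM(amount) AS revenue
    FROM TABLE(TUMBLE(TABLE orders, DESCRIPTOR(ts), INTERVAL '10' SECOND))
    GROUP BY window_start
""").print()
```

The continuous query is what distinguishes stream processing from batch: results keep updating as new events flow in, rather than being computed once over a static dataset.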

Making Flink Serverless, With Queries for Less Than a Penny

Imagine easily enriching data streams and building stream processing applications in the cloud, without worrying about capacity planning, infrastructure and runtime upgrades, or performance monitoring. That's where our serverless Apache Flink® service comes in, as announced at this year’s Current | The Next Generation of Kafka Summit.

What is Confluent?

Confluent is pioneering a fundamentally new category of data infrastructure focused on data in motion. Confluent’s cloud-native offering is the foundational platform for data in motion – designed to be the intelligent connective tissue enabling real-time data, from multiple sources, to constantly stream across the organization. With Confluent, organizations can meet the new business imperative of delivering rich, digital front-end customer experiences and transitioning to sophisticated, real-time, software-driven backend operations.

Enterprise Apache Kafka Cluster Strategies: Insights and Best Practices

Apache Kafka® has become the de facto standard for streaming data, helping companies deliver exceptional customer experiences, automate operations, and become software. As companies increase their use of real-time data, we have seen the proliferation of Kafka clusters within many enterprises. Often, siloed application and infrastructure teams set up and manage new clusters to solve new use cases as they arise.

Asynchronous Events | Microservices 101

Asynchronous events are a communication pattern used to build robust and scalable systems. These events are often pushed through a messaging platform such as Apache Kafka. Their benefits include better resource utilization, more flexible scaling, and new ways to recover from failures without losing data.
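As a concrete (if simplified) sketch of the pattern, here is what asynchronous event exchange might look like with the confluent-kafka Python client; the broker address, topic, and consumer group names are placeholders:

```python
from confluent_kafka import Producer, Consumer

# Placeholder broker address -- substitute your own cluster.
BROKER = {"bootstrap.servers": "localhost:9092"}

# The producing service emits an event and moves on; it never waits on a consumer.
producer = Producer(BROKER)
producer.produce("order-events", key="order-123", value='{"status": "created"}')
producer.flush()  # block only until the broker acknowledges the event

# A downstream service consumes at its own pace; after a crash it resumes from
# its committed offset, so events are not lost.
consumer = Consumer({**BROKER, "group.id": "billing-service",
                     "auto.offset.reset": "earliest"})
consumer.subscribe(["order-events"])
msg = consumer.poll(timeout=10.0)
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```

Because the producer and consumer never talk directly, each side can be scaled, paused, or restarted independently, which is where the resilience benefits come from.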

Introducing Data Portal in Stream Governance

Today, we’re excited to announce the general availability of Data Portal on Confluent Cloud. Data Portal is built on top of Stream Governance, the industry’s only fully managed data governance suite for Apache Kafka® and data streaming. The developer-friendly, self-service UI provides an easy and curated way to find, understand, and enrich all of your data streams, enabling users across your organization to build and launch streaming applications faster.

4 Reasons to Integrate Apache Kafka and Amazon S3

Amazon S3 is a standout storage service known for its ease of use, power, and affordability. When combined with Apache Kafka, a popular streaming platform, it can significantly reduce costs and enhance service levels. In this post, we’ll explore various ways S3 is put to work in streaming data platforms.
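One common integration path is the Confluent S3 sink connector, registered against a Kafka Connect worker's REST API. The sketch below assumes a self-managed Connect worker at localhost:8083 with the S3 sink plugin installed; the topic, bucket, and region are placeholders:

```python
import requests  # assumes a Kafka Connect worker reachable at localhost:8083

# Placeholder topic, bucket, and region; flush.size controls records per S3 object.
connector = {
    "name": "s3-archive",
    "config": {
        "connector.class": "io.confluent.connect.s3.S3SinkConnector",
        "topics": "order-events",
        "s3.bucket.name": "my-archive-bucket",
        "s3.region": "us-east-1",
        "storage.class": "io.confluent.connect.s3.storage.S3Storage",
        "format.class": "io.confluent.connect.s3.format.json.JsonFormat",
        "flush.size": "1000",
    },
}

resp = requests.post("http://localhost:8083/connectors", json=connector)
resp.raise_for_status()
print(resp.json())  # Connect echoes back the created connector definition
```

Once registered, the connector continuously drains the topic into S3 objects, giving you cheap long-term storage without writing any consumer code.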

A Deep Dive Into Sending With librdkafka

In a previous blog post (How To Survive an Apache Kafka® Outage) I outlined the effects on applications during partial or total Kafka cluster outages and proposed some architectural strategies to handle these types of service interruptions. The applications most heavily impacted by this type of outage are external interfaces that receive data, do not control request flow, and possibly perform some form of business transaction with the outside world before producing to Kafka.
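For context, the confluent-kafka Python client wraps librdkafka, so its sending behavior mirrors what the post describes: produce() only appends to a local in-memory queue, and delivery outcomes surface later via callbacks. A minimal sketch, with placeholder broker and topic names:

```python
from confluent_kafka import Producer

# produce() is asynchronous: it enqueues into librdkafka's local buffer and
# returns immediately; actual delivery (or failure) is reported via callback.
producer = Producer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "message.timeout.ms": 30000,            # give up on a message after 30s
})

def on_delivery(err, msg):
    # Invoked from poll()/flush() once the broker acks -- or delivery fails.
    if err is not None:
        print(f"delivery failed: {err}")  # e.g., during a cluster outage
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}]@{msg.offset()}")

for i in range(100):
    try:
        producer.produce("payments", value=f"txn-{i}", on_delivery=on_delivery)
    except BufferError:
        # Local queue is full (brokers unreachable or slow): serve pending
        # callbacks, then retry -- this is where outage strategy matters.
        producer.poll(1.0)
        producer.produce("payments", value=f"txn-{i}", on_delivery=on_delivery)
    producer.poll(0)  # serve delivery callbacks without blocking

producer.flush()  # drain the local queue before exiting
```

The BufferError path is exactly the situation the post examines: when the cluster is unreachable, the local queue fills up, and the application must decide whether to block, shed load, or spill to alternative storage.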

Continuous Integration and Delivery (CI/CD) | Microservices 101

Continuous Integration (CI) is the process of automatically building and testing your code on every source control commit. Continuous Delivery (CD) takes this further and automatically deploys the code to production on every commit. Used together, these techniques allow code to be built, tested, and deployed automatically through a robust CI/CD pipeline.
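As an illustrative, deliberately simplified sketch, the stages such a pipeline automates on each commit might look like the following; real pipelines live in a CI system's own configuration, and the commands and deploy script here are hypothetical stand-ins:

```python
#!/usr/bin/env python3
"""Illustrative sketch only: the build -> test -> deploy flow a CI/CD
pipeline runs on every commit. Commands and script names are placeholders."""
import subprocess
import sys

def stage(name: str, cmd: list[str]) -> None:
    print(f"--- {name} ---")
    if subprocess.run(cmd).returncode != 0:
        # CI: a failing stage halts the pipeline, keeping bad code out of production.
        sys.exit(f"{name} failed; aborting pipeline")

stage("build", ["docker", "build", "-t", "my-service:latest", "."])  # package the service
stage("test", ["pytest", "tests/"])                                  # run automated tests
stage("deploy", ["./deploy.sh", "production"])                       # CD: ship on green
```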