  |  By Olivia Greene
Today, we’re excited to announce the general availability of Data Portal on Confluent Cloud. Data Portal is built on top of Stream Governance, the industry’s only fully managed data governance suite for Apache Kafka® and data streaming. The developer-friendly, self-service UI provides an easy and curated way to find, understand, and enrich all of your data streams, enabling users across your organization to build and launch streaming applications faster.
  |  By Jakub Korab
In a previous blog post (How To Survive an Apache Kafka® Outage) I outlined the effects on applications during partial or total Kafka cluster outages and proposed some architectural strategies to handle these types of service interruptions. The applications most heavily impacted by this type of outage are external interfaces that receive data, do not control request flow, and possibly perform some form of business transaction with the outside world before producing to Kafka.
  |  By Peter Moskovits
Stepping into the world of Apache Kafka® can feel a bit daunting at first. I know this firsthand: while I have a background in real-time messaging systems, Kafka's terminology and concepts still seemed dense and complex. There's a wealth of information out there, and it's sometimes difficult to find the best (and, ideally, free) resources.
  |  By Derek Nelson
At this year’s Current, we introduced the public preview of our serverless Apache Flink® service, making it easier than ever to take advantage of stream processing without the complexities of infrastructure management. This first iteration of the service offers the Flink SQL API, which adheres to the ANSI standard and enables any user familiar with SQL to use Flink.
  |  By Konstantin Knauf
The Apache Flink PMC is pleased to announce the release of Apache Flink 1.18.0. As usual, we are looking at a packed release with a wide variety of improvements and new features. Overall, 174 people contributed to this release, completing 18 FLIPs and 700+ issues. Thank you! Let's dive into the highlights.
  |  By David Peterson
It's not hard, it's just new. How can you, your business unit, and your enterprise utilize the exciting and emerging field of Generative AI to develop brand-new functionality? And once you’ve figured out your use cases, how do you successfully build in Generative AI? How do you scale it to production grade?
  |  By Robert Yokota
The Confluent Schema Registry plays a pivotal role in ensuring that producers and consumers in a streaming platform are able to communicate effectively. Ensuring the consistent use of schemas and their versions allows producers and consumers to easily interoperate, even when schemas evolve over time.
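As a minimal sketch of what this looks like from the producer side, the Java snippet below serializes Avro records through Schema Registry, which registers and validates the schema on the producer's behalf. The broker address, registry URL, "orders" topic, and Order schema are illustrative assumptions, not details from the post.

```java
import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SchemaRegistryProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed local broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        // KafkaAvroSerializer registers the record's schema with Schema Registry
        // and checks compatibility as the schema evolves over time.
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "io.confluent.kafka.serializers.KafkaAvroSerializer");
        props.put("schema.registry.url", "http://localhost:8081"); // assumed local registry

        // Hypothetical Order schema, inlined for a self-contained example.
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Order\",\"fields\":[" +
            "{\"name\":\"id\",\"type\":\"string\"}," +
            "{\"name\":\"amount\",\"type\":\"double\"}]}");

        GenericRecord order = new GenericData.Record(schema);
        order.put("id", "order-1");
        order.put("amount", 9.99);

        try (KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", order.get("id").toString(), order));
        }
    }
}
```

Consumers configured with the matching KafkaAvroDeserializer fetch the same schema by ID from the registry, which is what keeps both sides interoperating as versions change.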
  |  By Satish Duggana
We are proud to announce the release of Apache Kafka® 3.6.0. This release contains many new features and improvements. This blog post will highlight some of the more prominent features. For a full list of changes, be sure to check the release notes.
  |  By Lucia Cerchie
Ever dealt with a misbehaving consumer group? Imbalanced broker load? This could be due to your consumer group and partitioning strategy! Once, on a dark and stormy night, I set myself up for this error. I was creating an application to demonstrate how you can use Apache Kafka® to decouple microservices. The function of my “microservices” was to create latte objects for a restaurant ordering service.
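For a rough illustration of the consumer-group mechanics the post explores, the sketch below subscribes a consumer to a topic under a shared group.id; every consumer using that ID splits the topic's partitions among themselves. The topic name, group name, and local broker address are hypothetical.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LatteConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed local broker
        // All consumers sharing this group.id divide the topic's partitions
        // among themselves; a poorly chosen record key at produce time can
        // skew that division and leave some brokers (and consumers) overloaded.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "latte-makers");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("latte-orders")); // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    // Printing the partition makes the work-sharing (and any skew) visible.
                    System.out.printf("partition=%d key=%s value=%s%n",
                                      r.partition(), r.key(), r.value());
                }
            }
        }
    }
}
```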
  |  By Confluent
Confluent launches the industry's only serverless, cloud-native Flink service to simplify building high-quality, reusable data streams. Confluent expands Stream Governance capabilities with Data Portal, so teams can easily find all the real-time data streams in an organization. New Confluent Cloud Enterprise offering lowers the cost of private networking and storage for Apache Kafka.
  |  By Confluent
Asynchronous events are a communication pattern that is used to build robust and scalable systems. These events are often pushed through a messaging platform such as Apache Kafka. Among their benefits are the ability to optimize resource usage, more flexibility for scaling, and new ways to recover from failure without losing data.
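A minimal sketch of this pattern with the Kafka Java client is shown below: send() returns immediately, and a callback handles the broker's acknowledgment, so the caller never blocks on delivery. The topic, key, payload, and broker address are illustrative assumptions.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AsyncEventPublisherSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed local broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                  "org.apache.kafka.common.serialization.StringSerializer");
        // acks=all waits for full replication, so the event survives a broker
        // failure without the publishing thread having to block on each send.
        props.put(ProducerConfig.ACKS_CONFIG, "all");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> event =
                new ProducerRecord<>("user-signups", "user-123", "{\"plan\":\"free\"}");
            // send() is asynchronous; the callback fires once the broker
            // acknowledges (or definitively fails) the write.
            producer.send(event, (metadata, exception) -> {
                if (exception != null) {
                    System.err.println("delivery failed: " + exception.getMessage());
                } else {
                    System.out.printf("delivered to %s-%d@%d%n",
                        metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
        }
    }
}
```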
  |  By Confluent
Streaming data brings with it some changes in how to perform joins. In this video, David Anderson and Dan Weston talk about how and when to use temporal joins to combine your data.
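As a sketch of the idea discussed in the video, the Flink SQL below joins an orders stream against a versioned table of currency rates "as of" each order's event time, so every order is priced with the rate that was valid when it happened. The table definitions, topics, and connector options are assumptions for illustration.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class TemporalJoinSketch {
    public static void main(String[] args) {
        TableEnvironment env =
            TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Append-only stream of orders, with a watermark to track event time.
        env.executeSql(
            "CREATE TABLE orders (" +
            "  order_id STRING," +
            "  currency STRING," +
            "  amount DECIMAL(10, 2)," +
            "  order_time TIMESTAMP(3)," +
            "  WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND" +
            ") WITH (" +
            "  'connector' = 'kafka'," +
            "  'topic' = 'orders'," +
            "  'properties.bootstrap.servers' = 'localhost:9092'," +
            "  'scan.startup.mode' = 'earliest-offset'," +
            "  'format' = 'json')");

        // Versioned table of FX rates: the primary key plus watermark let Flink
        // answer "what was the rate for this currency at time t?".
        env.executeSql(
            "CREATE TABLE rates (" +
            "  currency STRING," +
            "  rate DECIMAL(10, 4)," +
            "  update_time TIMESTAMP(3)," +
            "  WATERMARK FOR update_time AS update_time," +
            "  PRIMARY KEY (currency) NOT ENFORCED" +
            ") WITH (" +
            "  'connector' = 'upsert-kafka'," +
            "  'topic' = 'rates'," +
            "  'properties.bootstrap.servers' = 'localhost:9092'," +
            "  'key.format' = 'json'," +
            "  'value.format' = 'json')");

        // Temporal join: each order is matched with the rate valid at its
        // order_time, not whatever rate is current at processing time.
        env.executeSql(
            "SELECT o.order_id, o.amount * r.rate AS converted_amount " +
            "FROM orders AS o " +
            "JOIN rates FOR SYSTEM_TIME AS OF o.order_time AS r " +
            "ON o.currency = r.currency").print();
    }
}
```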
  |  By Confluent
Continuous Integration (CI) is the process of automatically building and testing your code on every source control commit. Continuous Delivery (CD) takes this further and automatically deploys the code to production on every commit. Used together, these techniques allow code to be built, tested, and deployed automatically through a robust CI/CD pipeline.
  |  By Confluent
Lucia Cerchie explains what an Apache Kafka® Consumer Group ID is, and what role it plays in work sharing and rebalancing.
  |  By Confluent
Polyglot Architecture is a feature of microservices that allows each microservice to be built using a different technology stack. This approach gives developers the freedom to select the best tools for the job and allows them to be more creative with their solutions. However, like any powerful tool, it can have negative consequences if it isn't used properly.
  |  By Confluent
From vehicle communication to predictive maintenance, real-time ingestion, analysis, and control have become increasingly important characteristics of the machines we use in everyday life.
  |  By Confluent
The Branch by Abstraction Pattern is a method of trunk-based development. Rather than modifying the code in a separate branch and merging the results when finished, the idea is to make modifications in the main branch. An abstraction layer is used to "branch" the code along an old and a new path. This approach has some key advantages, especially when decomposing a monolith.
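To make the pattern concrete, here is a minimal, hypothetical Java sketch: both the legacy and the new implementation live in the main branch behind one interface, and a flag selects the path until the old one can be safely deleted. All names here are invented for illustration.

```java
// The abstraction layer that both code paths sit behind.
interface InvoiceStore {
    void save(String invoiceId, String payload);
}

// Existing implementation, kept working in the main branch.
class LegacyInvoiceStore implements InvoiceStore {
    public void save(String invoiceId, String payload) {
        System.out.println("writing " + invoiceId + " to the monolith's database");
    }
}

// New implementation, built incrementally alongside the old one.
class StreamingInvoiceStore implements InvoiceStore {
    public void save(String invoiceId, String payload) {
        System.out.println("publishing " + invoiceId + " to a Kafka topic");
    }
}

public class BranchByAbstractionSketch {
    public static void main(String[] args) {
        // A feature flag (here a JVM system property) chooses the path at
        // runtime; once the new path is proven, the flag and the legacy
        // class are deleted. No long-lived source branch is ever needed.
        boolean useNewPath = Boolean.getBoolean("invoices.useNewPath");
        InvoiceStore store = useNewPath ? new StreamingInvoiceStore()
                                        : new LegacyInvoiceStore();
        store.save("inv-42", "{\"total\": 100}");
    }
}
```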
  |  By Confluent
Lucia Cerchie explains what an Apache Kafka® cluster is, and why it's unique. Learn how Kafka supports speed, scalability, and durability through its cluster structure.
  |  By Confluent
The Strangler or Strangler Fig Pattern is a process for decomposing a monolith into microservices. It allows rapid delivery of business value while reducing risk. This video introduces the pattern and outlines how it can be used to decompose a monolith.
  |  By Confluent
Let Confluent show you how a data streaming platform will transform your business.
  |  By Confluent
Traditional messaging middleware like Message Queues (MQs), Enterprise Service Buses (ESBs), and Extract, Transform, and Load (ETL) tools have been widely used for decades to handle message distribution and inter-service communication across distributed applications. However, they can no longer keep up with the needs of modern applications across hybrid and multicloud environments for asynchronicity, heterogeneous datasets, and high-volume throughput.
  |  By Confluent
Why a data mesh? Predicated on delivering data as a first-class product, data mesh focuses on making it easy to publish and access important data across your organization. An event-driven data mesh combines the scale and performance of data in motion with product-focused rigor and self-service capabilities, putting data front and center of both operational and analytical use cases.
  |  By Confluent
When it comes to fraud detection in financial services, streaming data with Confluent enables you to build the right intelligence, as early as possible, for precise and predictive responses. Learn how Confluent's event-driven architecture and streaming pipelines deliver a continuous flow of data, aggregated from wherever it resides in your enterprise, to whichever application or team needs to see it. Enrich each interaction, each transaction, and each anomaly with real-time context so your fraud detection systems have the intelligence to get ahead.
  |  By Confluent
Many forces affect software today: larger datasets, geographical disparities, complex company structures, and the growing need to be fast and nimble in the face of change. Proven approaches such as service-oriented (SOA) and event-driven architectures (EDA) are joined by newer techniques such as microservices, reactive architectures, DevOps, and stream processing. Many of these patterns are successful by themselves, but as this practical ebook demonstrates, they provide a more holistic and compelling approach when applied together.
  |  By Confluent
Data pipelines do much of the heavy lifting in organizations for integrating, transforming, and preparing data for subsequent use in data warehouses for analytical use cases. Despite being critical to the data value stream, data pipelines fundamentally haven't evolved in the last few decades. These legacy pipelines are holding organizations back from really getting value out of their data as real-time streaming becomes essential.
  |  By Confluent
In today's fast-paced business world, relying on outdated data can prove to be an expensive mistake. To maintain a competitive edge, it's crucial to have accurate real-time data that reflects the current state of your business processes. With real-time data streaming, you can make informed decisions and drive value at a moment's notice. So, why would you settle for being simply data-driven when you can take your business to the next level with real-time data insights?
  |  By Confluent
Data pipelines do much of the heavy lifting in organizations for integrating, transforming, and preparing data for subsequent use in downstream systems for operational use cases. Despite being critical to the data value stream, data pipelines fundamentally haven't evolved in the last few decades. These legacy pipelines are holding organizations back from really getting value out of their data as real-time streaming becomes essential.
  |  By Confluent
Shoe retail titan NewLimits relies on a jumble of homegrown ETL pipelines and batch-based data systems. As a result, sluggish and inefficient data transfers are frustrating internal teams and holding back the company's development velocity and data quality.

Connect and process all of your data in real time with a cloud-native and complete data streaming platform available everywhere you need it.

Data streaming enables businesses to continuously process their data in real time for improved workflows, more automation, and superior digital customer experiences. Confluent helps you operationalize and scale all your data streaming projects so you never lose focus on your core business.

Confluent Is So Much More Than Kafka:

  • Cloud Native: 10x Apache Kafka® service powered by the Kora Engine.
  • Complete: A complete, enterprise-grade data streaming platform.
  • Everywhere: Availability everywhere your data and applications reside.

Apache Kafka® Reinvented for the Data Streaming Era