Confluent
  |  By Lucia Cerchie
I’m brand new to writing KIPs (Kafka Improvement Proposals). I’ve written two so far, and my hands sweat every time I hit send on an email with ‘KIP’ in the title. But I’ve also learned a lot from the process: about Apache Kafka internals, the process of writing KIPs, the Kafka community, and the most important motivation for developing software: our end users. What did I actually write? Let’s review KIP-941 and KIP-1020.
  |  
Last year, we introduced the Connect with Confluent partner program, enabling our technology partners to develop native integrations with Confluent Cloud. This gives our customers access to Confluent data streams from within their favorite applications and allows them to extract maximum value from their data.
  |  By Confluent
Confluent's new AI Model Inference seamlessly integrates AI and ML capabilities into data pipelines. Confluent's new Freight clusters offer cost-savings for high-throughput use cases with relaxed latency requirements.
  |  By Marc Selwan
We’re excited to introduce Freight clusters—a new type of Confluent Cloud cluster designed for high-throughput workloads with relaxed latency requirements, at up to 90% lower cost than self-managing open source Apache Kafka®. Freight clusters utilize the latest innovations in Confluent Cloud’s cloud-native engine, Kora, to deliver low-cost networking by trading off ultra-low-latency performance.
  |  By Sven Erik Knop
Confluent has published official Docker containers for many years. They are the basis for deploying a cluster in Kubernetes using Confluent for Kubernetes (CFK), and one of the underpinning technologies behind Confluent Cloud. For testing, containers are convenient for quickly spinning up a local cluster with all the components required, such as Confluent Schema Registry or Confluent Control Center.
  |  By Confluent
Reimagined partner program will better enable SIs to drive growth and profitability, while helping customers realise their full potential with data streaming.
  |  
There are plenty of materials available out there about Schema Registry. From Confluent alone, if you head to Confluent Developer and search “Schema Registry” you will discover an ever-growing repository of over 100 results including courses, articles, tutorials, blog posts, and more, providing comprehensive resources for enthusiasts and professionals alike.
The rise of fully managed cloud services fundamentally changed the technology landscape and introduced benefits like increased flexibility, accelerated deployment, and reduced downtime. Confluent offers a portfolio of 80+ fully managed connectors that enables quick, easy, and reliable integration of Confluent Cloud with popular data sources and sinks, connecting your entire system in real time.
  |  
Have you ever wondered how to track events in a large codebase? I gave it a shot using Apache Kafka®! Read on to learn how to use GitHub data as a source, process it using a Kafka Streams topology, and send it to a Kafka topic.
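The post's actual topology isn't reproduced here, but as a rough illustration in plain Python, a Streams-style pipeline over GitHub events might filter and re-key records along these lines (the event fields and processing steps are hypothetical stand-ins, not the author's code; a real implementation would use the Kafka Streams DSL):

```python
# Illustrative stand-in for a Kafka Streams topology: keep only GitHub
# "push" events and re-key each record by repository name, mirroring a
# stream -> filter -> selectKey -> to("output-topic") chain in the DSL.
# Field names and event shapes are assumptions for this sketch.

def process_events(events):
    out = []
    for event in events:
        if event.get("type") != "PushEvent":
            continue                  # the topology's filter() step
        key = event["repo"]           # the selectKey() step
        out.append((key, event))      # the to("output-topic") step
    return out

events = [
    {"type": "PushEvent", "repo": "apache/kafka", "commits": 3},
    {"type": "IssuesEvent", "repo": "apache/flink"},
]
print(process_events(events))
# [('apache/kafka', {'type': 'PushEvent', 'repo': 'apache/kafka', 'commits': 3})]
```

In the real topology, each `(key, value)` pair would be produced to a Kafka topic, with the key determining the destination partition.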
  |  By Robert Yokota
In this article, we present some best practices and key concepts for using Confluent Schema Registry.
  |  By Confluent
For their inaugural episode, Anna McDonald (the Duchess), Matthias J. Sax (the Doctor), and their extinct friend, Phil, wax rhapsodic about all things eventing: you’ll learn why events are a mindset, why the Duchess thinks you’ll find event immutability relaxing, and why your event streams might need some windows. The Duchess & The Doctor Show features a question-driven format that delivers substantial, yet easily comprehensible answers to user-submitted questions on all things events and eventing, including Apache Kafka, its ecosystem, and beyond!
  |  By Confluent
Learn how Apache Flink® can handle hundreds or even thousands of compute nodes running 24/7 and still produce correct results.
  |  By Confluent
Learn how consumer partition assignment works in Apache Kafka.
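As a taste of what the video covers, Kafka's default range strategy hands each consumer in a group a contiguous block of a topic's partitions, with earlier members (sorted by member id) absorbing any remainder. A minimal Python sketch of that logic, assuming a single topic (names are illustrative, not the broker-side implementation):

```python
# Sketch of Kafka's "range" partition assignment for one topic:
# sort the group members, divide the partition count evenly, and give
# the first (num_partitions % num_consumers) members one extra partition.

def range_assign(consumers, num_partitions):
    """Assign partitions 0..num_partitions-1 to consumers, range-style."""
    consumers = sorted(consumers)
    per_consumer, extra = divmod(num_partitions, len(consumers))
    assignment, start = {}, 0
    for i, member in enumerate(consumers):
        # The first `extra` members each take one additional partition.
        count = per_consumer + (1 if i < extra else 0)
        assignment[member] = list(range(start, start + count))
        start += count
    return assignment

print(range_assign(["c1", "c2", "c3"], 8))
# {'c1': [0, 1, 2], 'c2': [3, 4, 5], 'c3': [6, 7]}
```

Kafka also ships cooperative and round-robin assignors with different rebalancing trade-offs, which is exactly the kind of detail the video walks through.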
  |  By Confluent
In this video, Adam Bellemare compares and contrasts Event-Driven and Request-Driven Architectures to give you a better idea of the tradeoffs and benefits involved with each. Many developers start in the synchronous request-response (RR) world, using REST and RPC to build inter-service communications. But tight service-to-service coupling, scalability, fan-out sensitivity, and data access issues can still remain.
  |  By Confluent
Every company faces the perennial problem of data integration but often experiences data silos, data quality issues, and data loss from point-to-point, batch-based integrations. Connectors decouple data sources and sinks through Apache Kafka, simplifying your architecture while providing flexibility, resiliency, and reliability at a massive scale.
  |  By Confluent
An Event-Driven Architecture is more than just a set of microservices. Event Streams should represent the central nervous system, providing the bulk of communication between all components in the platform. Unfortunately, many projects stall long before they reach this point.
  |  By Confluent
Tired of starting online tutorials only to realize they don't work on your machine? We've integrated Gitpod into our Confluent Developer courses to streamline your learning experience. See how it works in this short introduction video.
  |  By Confluent
Confluent is pioneering a fundamentally new category of data infrastructure focused on data in motion. Confluent’s cloud-native offering is the foundational platform for data in motion – designed to be the intelligent connective tissue enabling real-time data, from multiple sources, to constantly stream across the organization. With Confluent, organizations can meet the new business imperative of delivering rich, digital front-end customer experiences and transitioning to sophisticated, real-time, software-driven backend operations.
  |  By Confluent
Join the Confluent leadership team as they share their vision of streaming data products enabled by a data streaming platform built around Apache Kafka. Jay Kreps, Co-creator of Apache Kafka and CEO of Confluent, will present his vision of unifying the operational and analytical worlds with data streams and showcase exciting new product capabilities. During this keynote, the winner and finalists of the $1M Data Streaming Startup Challenge will showcase how their use of data streaming is disrupting their categories.
  |  By Confluent
Apache Flink® 1.19 is here! On behalf of the Flink community, David Anderson highlights key release updates with FLIPs for Legacy deprecations, Flink SQL, Observability, Flink Configuration, and Flink Connectors.
  |  By Confluent
Traditional messaging middleware like Message Queues (MQs), Enterprise Service Buses (ESBs), and Extract, Transform, and Load (ETL) tools have been widely used for decades to handle message distribution and inter-service communication across distributed applications. However, they can no longer keep up with the needs of modern applications across hybrid and multicloud environments for asynchronicity, heterogeneous datasets, and high-volume throughput.
  |  By Confluent
Why a data mesh? Predicated on delivering data as a first-class product, data mesh focuses on making it easy to publish and access important data across your organization. An event-driven data mesh combines the scale and performance of data in motion with product-focused rigor and self-service capabilities, putting data at the front and center of both operational and analytical use cases.
  |  By Confluent
When it comes to fraud detection in financial services, streaming data with Confluent enables you to build the right intelligence, as early as possible, for precise and predictive responses. Learn how Confluent's event-driven architecture and streaming pipelines deliver a continuous flow of data, aggregated from wherever it resides in your enterprise, to whichever application or team needs to see it. Enrich each interaction, each transaction, and each anomaly with real-time context so your fraud detection systems have the intelligence to get ahead.
  |  By Confluent
Many forces affect software today: larger datasets, geographical disparities, complex company structures, and the growing need to be fast and nimble in the face of change. Proven approaches such as service-oriented (SOA) and event-driven architectures (EDA) are joined by newer techniques such as microservices, reactive architectures, DevOps, and stream processing. Many of these patterns are successful by themselves, but as this practical ebook demonstrates, they provide a more holistic and compelling approach when applied together.
  |  By Confluent
Data pipelines do much of the heavy lifting in organizations for integrating, transforming, and preparing data for subsequent use in data warehouses for analytical use cases. Despite being critical to the data value stream, data pipelines fundamentally haven't evolved in the last few decades. These legacy pipelines are holding organizations back from really getting value out of their data as real-time streaming becomes essential.
  |  By Confluent
In today's fast-paced business world, relying on outdated data can prove to be an expensive mistake. To maintain a competitive edge, it's crucial to have accurate real-time data that reflects the current state of your business processes. With real-time data streaming, you can make informed decisions and drive value at a moment's notice. So, why settle for being simply data-driven when you can take your business to the next level with real-time data insights?
  |  By Confluent
Data pipelines do much of the heavy lifting in organizations for integrating, transforming, and preparing data for subsequent use in downstream systems for operational use cases. Despite being critical to the data value stream, data pipelines fundamentally haven't evolved in the last few decades. These legacy pipelines are holding organizations back from really getting value out of their data as real-time streaming becomes essential.
  |  By Confluent
Shoe retail titan NewLimits relies on a jumble of homegrown ETL pipelines and batch-based data systems. As a result, sluggish and inefficient data transfers are frustrating internal teams and holding back the company's development velocity and data quality.

Connect and process all of your data in real time with a cloud-native and complete data streaming platform available everywhere you need it.

Data streaming enables businesses to continuously process their data in real time for improved workflows, more automation, and superior, digital customer experiences. Confluent helps you operationalize and scale all your data streaming projects so you never lose focus on your core business.

Confluent Is So Much More Than Kafka:

  • Cloud Native: 10x Apache Kafka® service powered by the Kora Engine.
  • Complete: A complete, enterprise-grade data streaming platform.
  • Everywhere: Availability everywhere your data and applications reside.

Apache Kafka® Reinvented for the Data Streaming Era