
Confluent

Connect with Confluent: Celebrating One Year and 50+ Integrations

In just 12 months, the Connect with Confluent (CwC) technology partner program has grown from an ambitious new initiative to expand the data streaming ecosystem into a thriving portfolio that's rapidly increasing the breadth and value of real-time data. The program now offers 50+ integrations, each one amplifying the capabilities of Confluent's unified data streaming platform for Apache Kafka and Apache Flink.

Let Flink Cook: Mastering Real-Time Retrieval-Augmented Generation (RAG) with Flink

Commercial and open source large language models (LLMs) are evolving rapidly, enabling developers to create innovative generative AI-powered business applications. However, transitioning from prototype to production requires integrating accurate, real-time, domain-specific data tailored to your business needs and deploying at scale with robust security measures.

Apna Unlocks AI Job Matching for 50 Million Users With Confluent & Onehouse

Since its beginnings just five years ago, Apna has become the leading jobs site for tens of millions of workers in India, one of the largest labor markets in the world. Today, Apna has more than 50 million registered users, resulting in more than 5 million interviews and 100,000 jobs activated per month.

Unlock Real-Time Value from DynamoDB Data with Confluent's CDC Source Connector

Over the years, Amazon DynamoDB has grown into a feature-rich NoSQL database with deep integrations with services such as Amazon S3 and AWS Lambda. As businesses increasingly depend on data for decision-making, it is common to use data residing in DynamoDB to contextualize or even drive events at a granular level (as opposed to in bulk or batch).
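
To illustrate what reacting to events "at a granular level" can look like downstream, here is a minimal Kotlin sketch of a consumer reading the item-level change events the connector lands in a Kafka topic. The topic name, bootstrap server, and JSON-string record format are assumptions for illustration, not the connector's defaults.

```kotlin
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.serialization.StringDeserializer
import java.time.Duration
import java.util.Properties

fun main() {
    // Placeholder connection settings; a Confluent Cloud cluster would also
    // need SASL_SSL credentials here.
    val props = Properties().apply {
        put("bootstrap.servers", "BOOTSTRAP_SERVER:9092")
        put("group.id", "dynamodb-cdc-reader")
        put("key.deserializer", StringDeserializer::class.java.name)
        put("value.deserializer", StringDeserializer::class.java.name)
        put("auto.offset.reset", "earliest")
    }

    KafkaConsumer<String, String>(props).use { consumer ->
        // "orders.cdc" is an assumed topic name for the connector's output.
        consumer.subscribe(listOf("orders.cdc"))
        while (true) {
            for (record in consumer.poll(Duration.ofMillis(500))) {
                // Each record is one item-level change event, so downstream
                // logic can react per item rather than in bulk or batch.
                println("key=${record.key()} change=${record.value()}")
            }
        }
    }
}
```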

How to Source Data from Amazon DynamoDB to Confluent Using the DynamoDB CDC Source Connector

This is a one-minute video showing an animated architectural diagram of the integration between Amazon DynamoDB and Confluent Cloud using the all-new, fully managed DynamoDB CDC Source Connector. This real-time data pipeline doesn't require you to write or maintain code.

Watermark Alignment Explained in 2 Minutes | Apache Flink in Action

Watermark alignment is a relatively new feature in Apache Flink. It helps with the problem of temporally joining streams with mismatched event frequencies, e.g., when one stream's updates are much more frequent than those of the stream(s) you need to join it with. In this video, we break the feature down and explain how it can help you better manage your Apache Flink integration.
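
For reference, here is a minimal sketch (Kotlin, against Flink's Java DataStream API, Flink 1.15 or later) of enabling watermark alignment on a source's watermark strategy. The event type, alignment group name, and durations are illustrative placeholders, not values taken from the video.

```kotlin
import org.apache.flink.api.common.eventtime.SerializableTimestampAssigner
import org.apache.flink.api.common.eventtime.WatermarkStrategy
import java.time.Duration

// Illustrative event type; in a real job this is your stream's record class.
data class OrderEvent(val id: String, val timestampMillis: Long)

// A watermark strategy with alignment enabled. The 5s out-of-orderness bound,
// 20s max drift, and 1s update interval are placeholders, not recommendations.
val alignedStrategy: WatermarkStrategy<OrderEvent> =
    WatermarkStrategy
        .forBoundedOutOfOrderness<OrderEvent>(Duration.ofSeconds(5))
        .withTimestampAssigner(
            SerializableTimestampAssigner<OrderEvent> { event, _ -> event.timestampMillis }
        )
        // Sources sharing the "orders-join" group pause reading ahead once their
        // watermark drifts more than 20 seconds past the group's slowest source,
        // keeping the faster side of a temporal join from buffering unboundedly.
        .withWatermarkAlignment("orders-join", Duration.ofSeconds(20), Duration.ofSeconds(1))
```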

Spring Into Confluent Cloud With Kotlin, Part 1: Producers and Consumers

Hey, you! Yeah, you! The puzzled-looking Spring Boot developer, scouring the web for a guide on integrating your microservices with Apache Kafka on Confluent Cloud with Stream Governance. Admit it, you’ve been Googling nonstop for the past hour and all you’ve found are examples using StringSerializer/StringDeserializer with not even the slightest mention of "schema registry-aware" serialization methods. And I bet the examples you found are implemented in Java.
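
As a preview of the kind of setup the post walks through, here is a minimal Kotlin sketch of a schema-registry-aware producer configuration with Spring for Apache Kafka and the Confluent Avro serializer. The endpoint placeholders are assumptions; the post itself covers consumers, credentials, and Stream Governance in full.

```kotlin
import io.confluent.kafka.serializers.KafkaAvroSerializer
import org.apache.avro.generic.GenericRecord
import org.apache.kafka.clients.producer.ProducerConfig
import org.apache.kafka.common.serialization.StringSerializer
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.kafka.core.DefaultKafkaProducerFactory
import org.springframework.kafka.core.KafkaTemplate
import org.springframework.kafka.core.ProducerFactory

// Endpoint placeholders below are assumptions; a real Confluent Cloud setup
// also needs SASL_SSL and Schema Registry credentials.
@Configuration
class AvroProducerConfig {

    @Bean
    fun producerFactory(): ProducerFactory<String, GenericRecord> =
        DefaultKafkaProducerFactory<String, GenericRecord>(
            mapOf(
                ProducerConfig.BOOTSTRAP_SERVERS_CONFIG to "BOOTSTRAP_SERVER:9092",
                ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG to StringSerializer::class.java,
                // Schema-registry-aware serialization: values are written as Avro,
                // with the schema registered in Schema Registry rather than
                // shipped around as plain strings.
                ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG to KafkaAvroSerializer::class.java,
                "schema.registry.url" to "https://SCHEMA_REGISTRY_ENDPOINT"
            )
        )

    @Bean
    fun kafkaTemplate(producerFactory: ProducerFactory<String, GenericRecord>): KafkaTemplate<String, GenericRecord> =
        KafkaTemplate(producerFactory)
}
```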

5 Years of Confluent Cloud Connectors: Exploring Your Top Connector Picks

This summer marks five years since we announced our first fully managed connector on Confluent Cloud, the Amazon S3 Sink Connector, back in 2019. Since then, our connector offerings have not only expanded significantly but have also enabled teams to move hundreds of petabytes of data. Today, we support over 80 pre-built, fully managed connectors, along with custom connectors and secure private networking.