

Shift left to write data once, read as tables or streams

Shift Left is a rethink of how we circulate, share, and manage data in our organizations using data streams, Change Data Capture, Flink SQL, and Tableflow. It addresses the challenges of multi-hop and medallion architectures built on batch pipelines by shifting data preparation, cleaning, and schema definition to the point where data is created. As a result, you can build fresh, trustworthy datasets as streams for operational use cases or as Apache Iceberg tables for analytical use cases.
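
To make the idea concrete, here is a minimal PyFlink sketch of what "shifting left" can look like: cleaning rules and schema are enforced once, where the data is created, and every downstream consumer reads the same cleaned stream (or, via Tableflow, the equivalent Iceberg table). All topic names, fields, and connector options are hypothetical.

```python
# A minimal sketch of "shift left": enforce cleaning and schema once, at
# the source, so every consumer reads the same trustworthy stream.
# Topic names, fields, and connector options are hypothetical.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Raw events exactly as the producing application emits them.
t_env.execute_sql("""
    CREATE TABLE orders_raw (
        order_id STRING,
        amount   DOUBLE,
        ts       TIMESTAMP(3)
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'orders.raw',
        'properties.bootstrap.servers' = 'localhost:9092',
        'format' = 'json',
        'scan.startup.mode' = 'earliest-offset'
    )
""")

# The cleaned stream, written once and read by every downstream consumer,
# whether as a Kafka topic or (via Tableflow) as an Iceberg table.
t_env.execute_sql("""
    CREATE TABLE orders_clean (
        order_id STRING,
        amount   DOUBLE,
        ts       TIMESTAMP(3)
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'orders.clean',
        'properties.bootstrap.servers' = 'localhost:9092',
        'format' = 'json'
    )
""")

# The "preparation at the source" step: drop malformed records up front.
t_env.execute_sql("""
    INSERT INTO orders_clean
    SELECT order_id, amount, ts
    FROM orders_raw
    WHERE order_id IS NOT NULL AND amount >= 0
""").wait()
```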

How to source data from Amazon DynamoDB to Confluent using DynamoDB Streams and AWS Lambda

This one-minute video presents an animated architectural diagram of the integration between Amazon DynamoDB and Confluent Cloud using DynamoDB Streams and AWS Lambda, with the details of the integration explained via narration.
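
As a rough illustration of the Lambda half of that integration, the sketch below assumes a function triggered by DynamoDB Streams that forwards each change record to a Confluent Cloud topic using the confluent-kafka Python client. The topic name, key attribute, and environment variable names are hypothetical.

```python
# A sketch of a DynamoDB Streams-triggered Lambda that forwards change
# records to a Confluent Cloud topic. Topic name, key attribute ("pk"),
# and environment variable names are hypothetical.
import json
import os

from confluent_kafka import Producer

# Created once per Lambda execution environment, reused across invocations.
producer = Producer({
    "bootstrap.servers": os.environ["BOOTSTRAP_SERVERS"],
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": os.environ["CLUSTER_API_KEY"],
    "sasl.password": os.environ["CLUSTER_API_SECRET"],
})

def handler(event, context):
    # DynamoDB Streams delivers a batch of change records per invocation.
    for record in event["Records"]:
        image = record["dynamodb"].get("NewImage")  # absent on REMOVE events
        if image is None:
            continue
        key = record["dynamodb"]["Keys"]["pk"]["S"]  # hypothetical key attribute
        producer.produce("dynamodb.changes", key=key, value=json.dumps(image))
    # Block until every message in this batch is delivered before returning.
    producer.flush()
```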

How Developers Can Use Generative AI to Improve Data Quality

It sounds counterintuitive—using a technology that has trust issues to create more trustworthy data. But smart engineers can put generative AI to work to improve the quality of their data, allowing them to build more accurate and trustworthy AI-powered applications.

Confluent + WarpStream = Large-Scale Streaming in your Cloud

I’m excited to announce that Confluent has acquired WarpStream, an innovative Kafka-compatible streaming solution with a unique architecture. We’re adding their product to our portfolio alongside Confluent Platform and Confluent Cloud to serve customers who want a cloud-native streaming offering in their own cloud account.

How Producers Work: Kafka Producer and Consumer Internals, Part 1

I shouldn’t have to convince anyone that Apache Kafka is an incredibly useful and powerful technology. As a distributed event streaming platform, it’s adept at storing your event data and serving it up for downstream consuming applications to make sense of that information, in real time or as close to real time as your use case permits. The real beauty of Kafka as a technology is that it can do all of this with very little effort on your part. In effect, it’s a black box.
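
For reference, this is the deceptively small surface the series looks beneath, shown here as a sketch with placeholder broker and topic names: a couple of configs steer the internals, produce() merely hands the record to the client’s buffer, and a background thread handles partitioning, batching, and delivery.

```python
# A minimal producer sketch; broker address and topic name are placeholders.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "localhost:9092",
    "acks": "all",    # wait for the full in-sync replica set to acknowledge
    "linger.ms": 10,  # give the internal batcher time to fill batches
})

def on_delivery(err, msg):
    # Invoked from poll()/flush() once the broker acks (or rejects) the record.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}]@{msg.offset()}")

# produce() is asynchronous: it only appends to the client's internal buffer.
producer.produce("events", key="user-42", value="signed_up",
                 on_delivery=on_delivery)
producer.flush()  # drain the buffer before exiting
```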

Connect with Confluent: Celebrating One Year and 50+ Integrations

In just 12 months, the Connect with Confluent (CwC) technology partner program has transformed from a new, ambitious initiative to expand the data streaming ecosystem into a thriving program that’s rapidly increasing the breadth and value of real-time data. It now offers a portfolio of 50+ integrations, each one amplifying the capabilities of Confluent’s unified data streaming platform for Apache Kafka and Apache Flink.

Let Flink Cook: Mastering Real-Time Retrieval-Augmented Generation (RAG) with Flink

Commercial and open source large language models (LLMs) are evolving rapidly, enabling developers to create innovative generative AI-powered business applications. However, transitioning from prototype to production requires integrating accurate, real-time, domain-specific data tailored to your business needs and deploying at scale with robust security measures.
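
As one hedged sketch of the ingestion side of real-time RAG (not Confluent’s implementation), a Flink job can keep a vector store current by embedding each document as it arrives. The embed() body, topics, and schemas below are all placeholders.

```python
# A hedged sketch of real-time RAG ingestion: vectorize each document as
# it arrives. embed(), topics, and schemas are placeholders.
from pyflink.table import DataTypes, EnvironmentSettings, TableEnvironment
from pyflink.table.udf import udf

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

@udf(result_type=DataTypes.ARRAY(DataTypes.FLOAT()))
def embed(text: str):
    # Placeholder: call your embedding model of choice here.
    return [float(len(text or ""))]  # dummy one-dimensional "vector"

t_env.create_temporary_function("embed", embed)

t_env.execute_sql("""
    CREATE TABLE docs (doc_id STRING, body STRING) WITH (
        'connector' = 'kafka', 'topic' = 'docs',
        'properties.bootstrap.servers' = 'localhost:9092',
        'format' = 'json', 'scan.startup.mode' = 'earliest-offset')
""")

t_env.execute_sql("""
    CREATE TABLE doc_embeddings (doc_id STRING, vec ARRAY<FLOAT>) WITH (
        'connector' = 'kafka', 'topic' = 'doc-embeddings',
        'properties.bootstrap.servers' = 'localhost:9092', 'format' = 'json')
""")

# Continuously enrich the stream; a sink connector (or consumer) can then
# upsert the vectors into the store that serves retrieval.
t_env.execute_sql(
    "INSERT INTO doc_embeddings SELECT doc_id, embed(body) FROM docs"
).wait()
```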

Apna Unlocks AI Job Matching for 50 Million Users With Confluent & Onehouse

Since its beginnings just five years ago, Apna has become the leading jobs site for tens of millions of workers in India, the largest labor market in the world. Today, Apna has more than 50 million registered users, resulting in more than 5 million interviews and 100,000 jobs activated per month.

Unlock Real-Time Value from DynamoDB Data with Confluent's CDC Source Connector

Over the years, Amazon DynamoDB has grown into a feature-rich NoSQL database with deep integrations into services such as Amazon S3 and AWS Lambda. As businesses increasingly depend on data for decision-making, it is common to use data residing in DynamoDB to contextualize or even drive events at a granular level (as opposed to in bulk or batch).
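
For a self-managed flavor of that pattern, the sketch below registers a DynamoDB CDC source through the Kafka Connect REST API. The connector class name and option keys are illustrative placeholders; the connector’s documentation has the exact configuration.

```python
# A hedged sketch of registering a DynamoDB CDC source via the Kafka
# Connect REST API. The class name, option keys, and endpoint are
# placeholders, not the connector's documented configuration.
import requests

connector = {
    "name": "dynamodb-cdc-orders",
    "config": {
        "connector.class": "DynamoDbCdcSource",  # placeholder class name
        "aws.access.key.id": "...",
        "aws.secret.access.key": "...",
        "dynamodb.table.includelist": "orders",  # hypothetical option key
        "kafka.topic": "orders.cdc",             # hypothetical option key
        "tasks.max": "1",
    },
}

# POST the config to a self-managed Connect worker (default port 8083).
resp = requests.post("http://localhost:8083/connectors",
                     json=connector, timeout=30)
resp.raise_for_status()
print(resp.json())
```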