Kafka

Lenses 6 - Developer Experience designed for multi-Kafka

With the new branding, we’ve also redefined how developers work with real-time data and data architectures. Lenses 6 is a new version of the Developer Experience, designed to empower developers to operate data seamlessly across multiple clusters and environments with the Global SQL Studio. This is what we mean by Autonomy in Data Streaming.

How to Set Up Networking on Confluent Cloud

Setting up network connections can often seem difficult or time-consuming. This video provides a wayfinding introduction to help you get networking up and running for all cluster types on Confluent Cloud, showing you your networking options for each cluster type when running on AWS, Azure, or Google Cloud.

How to migrate from Kafka to Confluent Cloud with limited downtime

In this short video, a Confluent Solutions Engineer runs through the high-level steps for getting started with your migration. And even better, once you’re done watching, you can download our comprehensive migration kit for a step-by-step guide to everything covered in the video and more.

Exposing and Controlling Apache Kafka Data Streaming with Kong Konnect and Confluent Cloud

We announced the Kong Premium Technology Partner Program at API Summit 2024, and Confluent was one of the first in the program. This initial development was all about ensuring that the relationship between Kong and Confluent — from a business and product perspective — fully represented our joint belief that the world of data streaming and the world of APIs are converging.

Introducing Lenses 6.0 Panoptes

Organizations today face complex data challenges as they scale, with more distributed data architectures and a growing number of teams building streaming applications. To keep pace, they need to implement Data Mesh principles for sharing data across business domains, ensure data sovereignty across different jurisdictions and clouds, and maintain real-time operations.

Scaling Kafka with WebSockets

Kafka is a highly popular real-time data streaming platform, renowned for handling massive volumes of data with minimal latency. Typical use cases include user activity tracking, log aggregation, and IoT telemetry. Kafka’s architecture, based on distributed partitions, allows it to scale horizontally across multiple brokers. Although Kafka excels at high data throughput, scaling it to manage thousands of client connections can be costly and complex.
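One common answer to the connection-scaling problem described above is a gateway that consumes from Kafka once and fans messages out to many WebSocket clients, so the broker only ever sees a handful of consumer connections. The following is a minimal, stdlib-only sketch of that fan-out logic; the upstream queue stands in for a Kafka consumer and each subscriber queue for a WebSocket session, and all names here are illustrative rather than taken from the article:

```python
import asyncio


class FanOutGateway:
    """One upstream consumer loop, many downstream subscribers.

    In a real deployment the upstream would be a Kafka consumer
    (e.g. via aiokafka) and each subscriber a WebSocket connection;
    both are modeled here with asyncio queues.
    """

    def __init__(self) -> None:
        self.subscribers: set[asyncio.Queue] = set()

    def subscribe(self) -> asyncio.Queue:
        # Register a new downstream client.
        q: asyncio.Queue = asyncio.Queue()
        self.subscribers.add(q)
        return q

    def unsubscribe(self, q: asyncio.Queue) -> None:
        self.subscribers.discard(q)

    async def pump(self, upstream: asyncio.Queue) -> None:
        # Single consume loop: one broker-facing connection no matter
        # how many clients are attached.
        while True:
            msg = await upstream.get()
            if msg is None:  # sentinel: shut down
                break
            for q in list(self.subscribers):
                q.put_nowait(msg)


async def demo() -> list[list[str]]:
    gw = FanOutGateway()
    a, b = gw.subscribe(), gw.subscribe()
    upstream: asyncio.Queue = asyncio.Queue()
    for item in ("evt-1", "evt-2", None):
        upstream.put_nowait(item)
    await gw.pump(upstream)
    return [
        [a.get_nowait() for _ in range(a.qsize())],
        [b.get_nowait() for _ in range(b.qsize())],
    ]


received = asyncio.run(demo())
```

The key property is that broker load is decoupled from client count: adding a thousandth WebSocket client adds one in-memory queue, not another Kafka consumer.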