
What's New in Confluent Clients for Kafka: Python Async GA, Schema Registry Upgrades

Hey, fellow Apache Kafka developers! It’s time for another update on the Confluent client ecosystem. Following our recent architectural milestones, we’re excited to announce the release of librdkafka 2.13.0, which powers the latest versions of our Python, JavaScript, .NET, Go, and C/C++ clients. In this release, you’ll find numerous improvements to the Python experience as well as critical security and Schema Registry enhancements for everyone.

New in Confluent Intelligence: A2A, Multivariate Anomaly Detection, Vector Search for Cosmos DB, Amazon S3 Vectors, and More

Confluent Intelligence brings real-time business data to enterprise AI. Support for the Agent2Agent (A2A) protocol helps connect AI agents anywhere in real time so they can collaborate at enterprise scale, and Multivariate Anomaly Detection takes anomaly detection to the next level, catching problems before they start.

As AI models are increasingly commoditized, the value driver for enterprises is no longer “Which large language model (LLM) are we using?” but “How can we use our data for reliable, real-time AI decisioning?” Agentic AI systems—where agents plan, decide, and act autonomously—are only as useful as the context they have. When that context is stale, fragmented, or locked away behind brittle point-to-point integrations, even the best models fail to deliver.

Kafka Copy Paste (KCP): How to Migrate to Confluent Cloud in Days, Not Weeks

While Apache Kafka is incredibly powerful, self-managing brokers, upgrades, capacity, security, and incidents can quickly distract teams from what matters most: building real-time applications and delivering business value. Confluent Cloud can remove that operational burden, yet migration can still be seen as risky and tedious.

How to Break Off Your First Microservice

The road from a monolithic architecture to a cloud-native microservices application is rarely a straightforward engineering exercise. There's often a significant gap between understanding the theoretical benefits of microservices and successfully extracting each service from a mature, long-running codebase. Many teams exploring microservices migration struggle most with the first extraction. How do you make that initial step concrete, low-risk, and reversible?

Beyond Zero-Ops: Architectural Precision for MongoDB Atlas Connectors

Whether you’re streaming change data capture (CDC) events from MongoDB to Apache Kafka or sinking high-velocity data from Kafka into MongoDB for analytics, the following best practices ensure a secure, performant, and resilient architecture. This technical deep dive covers implementing the MongoDB Atlas Source and Sink Connectors on Confluent Cloud.
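For a sense of what the connector setup involves, here is a rough sketch of a fully managed MongoDB Atlas Source connector configuration on Confluent Cloud. The property names and placeholder values below are illustrative assumptions, not taken from the article; consult the Confluent Cloud connector reference for the exact schema:

```json
{
  "name": "MongoDbAtlasSourceConnector_0",
  "config": {
    "connector.class": "MongoDbAtlasSource",
    "kafka.auth.mode": "KAFKA_API_KEY",
    "kafka.api.key": "<kafka-api-key>",
    "kafka.api.secret": "<kafka-api-secret>",
    "connection.host": "<cluster>.mongodb.net",
    "connection.user": "<db-user>",
    "connection.password": "<db-password>",
    "database": "inventory",
    "collection": "orders",
    "topic.prefix": "mongo",
    "output.data.format": "JSON",
    "tasks.max": "1"
  }
}
```

Best-practice details such as network security (private networking vs. allowlisting), CDC pipeline resilience, and sink-side write tuning are what the full deep dive covers beyond this minimal starting point.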

How to Future-Proof Architectures With Continuous Availability Via Hybrid & Multicloud

When designing on-premises and cloud systems, you have to balance resilience, security, and scalability. But ultimately, what your organization and business leaders care about is the bottom line: today’s costs and tomorrow’s risk. As a result, hybrid and multicloud strategies are often viewed as simply a backup or disaster recovery strategy, instead of a path to availability your applications and business operations can really count on.

Do Customers Really Care If You Love Them?

Customers don’t buy software because they feel loved. They buy it because the product works, solves a real problem, meets security, scalability, and reliability requirements, and fits their budget. No amount of empathy or friendliness can compensate for missing features or poor performance. So at first glance, it’s easy to assume that great products alone win customer loyalty. But once the contract is signed and the product is in use, the rules change.

Disaster Recovery in 60 Seconds: A POC for Seamless Client Failover on Confluent Cloud

I’ve worked with Apache Kafka since 2019, and deciding how to design and implement client failover was a sticking point in almost every use case I dealt with. Even for Confluent customers—who have the benefit of features such as Confluent Replicator, Multi-Region Clusters, and Cluster Linking—ensuring seamless failover between Kafka environments is a challenging problem.

Focal Systems: Boosting Store Performance with an AI Retail Operating System and Real-Time Data

Every second a product sits out of stock on a shelf, revenue quietly drains away. Customers walk out empty-handed, and businesses lose customers as well as valuable insights into what’s actually happening on the floor. At Focal Systems, we have built Shelf AI, a system that continuously “sees” store shelves, detects stockouts, and guides teams to replenish in time—so products are on the shelf when customers need them.

Enterprise Cluster Autoscaling, Private Networking, and Reduced TCO

About Confluent

Confluent is pioneering a fundamentally new category of data infrastructure focused on data in motion. Confluent’s cloud-native offering is the foundational platform for data in motion – designed to be the intelligent connective tissue enabling real-time data, from multiple sources, to constantly stream across the organization. With Confluent, organizations can meet the new business imperative of delivering rich, digital front-end customer experiences and transitioning to sophisticated, real-time, software-driven backend operations.