We're excited to announce the release of the next generation of Control Center for Confluent Platform, which delivers higher partition limits, faster spin-up times, fresher metrics, and reduced operational overhead. Confluent introduced Confluent Control Center in 2016 as part of Confluent Platform, simplifying Apache Kafka operations and delivering end-to-end visibility into data pipelines.
Your teams want the immediate insights of stream processing with the scale and historical context of batch processing—but traditional data infrastructure forces you to resort to disparate tooling or manual workarounds to bridge that gap. This quarter’s release, coming to you live from Current London, brings new features in Confluent Cloud that fundamentally change this dynamic by seamlessly unifying stream and batch processing.
We’re excited to announce the General Availability (GA) of the Confluent fully managed V2 Apache Kafka connector for Azure Cosmos DB! This release marks a major milestone in our mission to simplify real-time data streaming to and from Azure Cosmos DB using Apache Kafka. The V2 connector is now production-ready and available directly from the Confluent Cloud connector catalog.
This article first appeared on VentureBeat. Businesses know they can’t ignore artificial intelligence (AI), but when it comes to building with it, the real question isn’t What can AI do? It’s What can it do reliably? And, more importantly, Where do we start? This post introduces the VISTA Framework, a structured approach to prioritizing AI opportunities.
You may have noticed that the phrase “Let’s take that offline” is gradually being replaced by “Let’s connect async.” Both expressions are a type of white flag, surrendering to the reality that a tricky issue needs to be resolved in a private conversation rather than in a group call. It’s often music to the attendees’ ears because it means the meeting is almost over.
This article originally appeared on BigDataWire on Feb. 26, 2025. Artificial intelligence (AI) agents are set to transform enterprise operations with autonomous problem-solving, adaptive workflows, and scalability. But the real challenge isn’t building better models. Agents need access to data and tools as well as the ability to share information across systems, with their outputs available for use by multiple services—including other agents.
Just as some problems are too big for one person to solve, some tasks are too complex for a single artificial intelligence (AI) agent to handle. Instead, the best approach is to decompose problems into smaller, specialized units so that multiple agents can work together as a team. This is the foundation of a multi-agent system: a network of agents, each with a specific role, collaborating to solve larger problems. When building a multi-agent system, you need a way to coordinate how agents interact.
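To make the coordination idea concrete, here is a minimal, hypothetical sketch (all names invented for illustration, not from any Confluent product): specialized agents register with a coordinator, which routes each task to the agent whose role matches.

```python
# Minimal sketch of multi-agent coordination: a coordinator routes each
# task to the specialized agent registered for that kind of work.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Task:
    kind: str      # which specialty this task needs, e.g. "count"
    payload: str   # the input the agent works on

class Coordinator:
    """Keeps a registry of agents and dispatches tasks by kind."""
    def __init__(self) -> None:
        self.agents: Dict[str, Callable[[str], str]] = {}

    def register(self, kind: str, agent: Callable[[str], str]) -> None:
        self.agents[kind] = agent

    def dispatch(self, task: Task) -> str:
        if task.kind not in self.agents:
            raise ValueError(f"no agent for task kind {task.kind!r}")
        return self.agents[task.kind](task.payload)

# Two toy "agents", each with one narrow role
coordinator = Coordinator()
coordinator.register("summarize", lambda text: text[:20] + "...")
coordinator.register("count", lambda text: str(len(text.split())))

print(coordinator.dispatch(Task("count", "multi agent systems need coordination")))  # → 5
```

Real systems replace the in-process registry with a shared communication layer (for example, an event stream) so agents can run and scale independently, but the routing-by-role pattern is the same.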
Not long ago, I wrote about a growing problem in enterprise AI: agents that don’t talk to each other. You’ve got a customer relationship management (CRM) agent doing its thing, a data warehouse agent crunching numbers, a knowledge bot quietly surfacing documents—but none of them are sharing what they know. Instead of a smart, connected ecosystem, we’re stuck with isolated pockets of intelligence: an island of agents.
Whether we like it or not, when it comes to building data pipelines, the ETL (or ELT; choose your poison) process is never as simple as we hoped. Unlike the beautifully simple worlds of AdventureWorks, Pagila, Sakila, and others, real-world data is never quite what it claims to be. In the best-case scenario, we end up with the odd NULL where it shouldn’t be or a dodgy reading from a sensor that screws up the axes on a chart.
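As a small illustration of the kind of defensive cleaning this implies, here is a hedged sketch in plain Python (field names and thresholds are hypothetical, not from any specific pipeline): drop the odd NULL and discard readings outside a plausible range before they reach a chart.

```python
# Illustrative cleaning step for "real-world" rows before loading:
# skip missing values and implausible sensor readings.

def clean_readings(rows, low=-40.0, high=60.0):
    """Return only rows with a present, in-range temperature reading."""
    cleaned = []
    for row in rows:
        value = row.get("temperature_c")
        if value is None:               # the odd NULL where it shouldn't be
            continue
        if not (low <= value <= high):  # a dodgy reading that would wreck the axes
            continue
        cleaned.append(row)
    return cleaned

raw = [
    {"sensor": "a", "temperature_c": 21.5},
    {"sensor": "b", "temperature_c": None},   # NULL from upstream
    {"sensor": "c", "temperature_c": 940.0},  # faulty sensor spike
]
print(clean_readings(raw))  # only sensor "a" survives
```

Whether you drop, clamp, or quarantine bad rows is a pipeline design choice; the point is that the check has to happen somewhere, explicitly.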