
Lenses 6.1 - Kafka connectivity to your Copilot & self-service data replication

Here at Lenses we’re, as always, laser-focused on making engineering streaming apps and managing Kafka not just less stressful, but delightful. 6.1 is another big step forward, and it starts with the Lenses MCP Server. It connects AI assistants such as Cursor and Claude to Lenses, combining the knowledge of the internet with the context of your Kafka environment, and it has the power to transform how engineers build and manage streaming apps.
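As a rough illustration of what wiring an assistant to an MCP server looks like, here is a client configuration sketch in the `mcpServers` format used by Claude Desktop. The server name, command, arguments and environment variables below are placeholders, not the actual Lenses MCP Server launch syntax:

```json
{
  "mcpServers": {
    "lenses": {
      "command": "lenses-mcp-server",
      "args": ["--url", "https://lenses.example.com"],
      "env": {
        "LENSES_API_KEY": "<your-service-account-token>"
      }
    }
  }
}
```

Once registered, the assistant can discover the server’s tools and call them with your Kafka environment as context; consult the Lenses documentation for the real command and authentication details.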

Multi-Kafka topic & data governance with your AI assistant

If you’ve been running Kafka for a while, with any luck you have quite a few engineering teams onboarded, potentially with hundreds or even thousands of applications. Hopefully the Lenses.io Developer Experience platform helped in this adoption. But finding the right balance between governance and openness can be tricky.

Lenses MCP for Kafka

If 2024 was the year enterprises adopted Generative AI, 2025 is the year Agentic AI became a reality. In the last six months, the conversations I’m having with engineering leaders have quickly shifted from simple chatbots to AI-enabled IDEs, copilots and autonomous systems that take action. MCP has played a large part in this shift: to date the community has built more than 10,000 MCP integrations, so to call it a success is an understatement.

How Multi-Kafka impacts data replication strategy

Imagine an airline system monitoring traffic around an airport. If it detects a major delay, countless systems may need to react instantly: ground operations adjusting flows, for example. Some of these systems will still connect via APIs, traditional MQs or iPaaS technologies, but the volume and urgency of the data, and the ease of decoupling applications, make Kafka the better architectural fit. The natural question is: should all of these applications and systems connect to the same Kafka cluster?
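The decoupling argument can be sketched in a few lines: the delay-monitoring system publishes one event, and every interested consumer (ground operations, crew scheduling, passenger notifications) subscribes independently without the producer knowing about any of them. The event schema and topic name below are made up for illustration, and the producer call is shown only in comments since it needs a running broker:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical event schema for illustration only; the field names are
# assumptions, not part of any Lenses or airline API.
@dataclass
class FlightDelayEvent:
    flight_id: str
    airport: str
    delay_minutes: int

def to_kafka_value(event: FlightDelayEvent) -> bytes:
    """Serialize the event to JSON bytes, suitable as a Kafka record value."""
    return json.dumps(asdict(event)).encode("utf-8")

# With a broker available, the monitoring system would publish once, e.g.:
#   from kafka import KafkaProducer            # pip install kafka-python
#   producer = KafkaProducer(bootstrap_servers="localhost:9092")
#   producer.send("flight-delays", to_kafka_value(event))
# and each downstream system consumes the "flight-delays" topic on its own.

event = FlightDelayEvent(flight_id="BA117", airport="LHR", delay_minutes=45)
print(to_kafka_value(event).decode("utf-8"))
```

Adding a new consumer later (say, a gate-reassignment service) requires no change to the producer, which is the decoupling property the paragraph above points to.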

The True Cost of Kafka Replication

Kafka cluster-to-cluster data replication is critical to many use cases: disaster recovery (DR), cloud or data center migration, testing applications with production-like data, and multi-region data distribution. The business case for easy replication between clusters is clear, but the cost model is not: some solutions appear free yet impose a heavy operational burden.
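To make the "appears free" point concrete, a common open-source route is Apache Kafka’s MirrorMaker 2. A minimal active-to-DR configuration sketch might look like the following; the cluster aliases and bootstrap addresses are placeholders:

```properties
# connect-mirror-maker.properties - minimal MirrorMaker 2 sketch.
clusters = primary, dr
primary.bootstrap.servers = primary-kafka:9092
dr.bootstrap.servers = dr-kafka:9092

# Replicate all topics from the primary cluster into the DR cluster.
primary->dr.enabled = true
primary->dr.topics = .*

# Sync consumer group offsets so applications can fail over (Kafka 2.7+).
primary->dr.sync.group.offsets.enabled = true

replication.factor = 3
```

The software is free, but what this sketch hides is the operational side: MirrorMaker 2 runs as a Kafka Connect cluster that you must size, monitor, upgrade and tune for lag, which is exactly the kind of hidden cost the paragraph above refers to.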