
Kafka Migrations Need More Than a Replicator

Jonas Best & Patrick Polster

Kafka migrations are among the riskiest infrastructure projects a platform team can take on. Miss a dependency and a downstream app starts reprocessing events it has already handled, breaking SLAs and eroding the trust of application teams. Migrate without visibility and you risk a major production incident. The instinct is to reach for a replication tool and call it done. But replication is only one piece of the puzzle.

Lenses 6.2 - Trusting Agents to build & operate event-driven applications

At Lenses, our goal has always been to help organizations get the most out of their streaming data. We started with visibility into Apache Kafka, then moved up to the layer that drives value, the application layer, and now to the agentic layer. Lenses 6 moved us into a multi-Kafka world: increasingly, our clients aren't running on just one type of Kafka, and as sovereign cloud becomes more topical (no pun intended), that trend is only accelerating.

The $11 Billion Question: What the acquisition of Confluent by IBM means

What's remarkable is how long Confluent competed at the highest level. Creating a category and a new type of application is hard; transitioning to cloud and surviving against the hyperscalers is even harder. That alone is a huge achievement. Some see this as a pressured exit. But another way to look at it is as a strategic purchase by IBM to strengthen its position in enterprise data movement and integration.

JSON schemas to control your Lenses & infrastructure provisioning

When we talk about JSON Schema in the world of Kafka and streaming, you probably think of the schemas for events and messages. But how many times have you fumbled with configuration while trying to get an application deployed? Schemas that describe how to configure and deploy applications, applications-as-code, matter too: they let us automate entire application landscapes. Especially as we will soon be awash with catalogs for AI agents, MCP servers, and the like.
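To make the idea concrete, here is a minimal sketch of validating an application-as-code config against a schema before deployment. The field names and the schema shape are illustrative assumptions, not the actual Lenses provisioning schema, and the validator is a deliberately tiny hand-rolled check rather than a full JSON Schema implementation:

```python
import json

# Hypothetical schema for a replicator deployment config
# (field names are illustrative, not the real Lenses/K2K schema).
SCHEMA = {
    "required": ["name", "source_topic", "target_cluster"],
    "properties": {
        "name": str,
        "source_topic": str,
        "target_cluster": str,
        "tasks": int,
    },
}

def validate(config: dict, schema: dict) -> list[str]:
    """Return a list of validation errors; empty means the config conforms."""
    errors = [f"missing required field: {key}"
              for key in schema["required"] if key not in config]
    for key, expected in schema["properties"].items():
        if key in config and not isinstance(config[key], expected):
            errors.append(f"{key}: expected {expected.__name__}")
    return errors

# A config with a missing field and a wrong type, caught before it ships:
config = json.loads('{"name": "orders-replicator", '
                    '"source_topic": "orders", "tasks": "2"}')
print(validate(config, SCHEMA))
# → ['missing required field: target_cluster', 'tasks: expected int']
```

Running a check like this in CI is what turns "fumbled configuration" into a failed pull request instead of a failed deployment.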

Fat Fingers? Not With Our K2K Config Schema Protector!

Picture this: it's 3 AM. You're on call in case there is an outage. A team on the other side of the world merged a PR and released a new version of the K2K Replicator, and it crashed. Consumer group lag is spiking into orbit. You're paged and wake up; by the time you reach your laptop the team has already reverted the PR and things are stabilising. But what really happened? Now you have to investigate, because a postmortem is due.

Topic & data multi-Kafka governance with your AI-assistant

If you’ve been running Kafka for a while, with any luck you have quite a few engineering teams onboarded, potentially with hundreds or even thousands of applications. Hopefully the Lenses.io Developer Experience platform helped in this adoption. But finding the right balance between governance and openness can be tricky.

Lenses 6.1 - Kafka connectivity to your Copilot & self-service data replication

Here at Lenses we're, as always, laser-focused on making engineering streaming apps and managing Kafka not just less stressful, but delightful. 6.1 is another big step forward. It starts with the Lenses MCP Server, which connects AI assistants such as Cursor and Claude to Lenses, combining the knowledge of the internet with the context of your Kafka environment. It has the power to transform the work of engineers building and managing streaming apps.

Lenses MCP for Kafka

If 2024 was the year enterprises adopted Generative AI, 2025 is the year Agentic AI became a reality. In the last six months, the conversations I'm having with engineering leaders have quickly shifted from simple chatbots to AI-enabled IDEs, copilots, and autonomous systems that take action. This has in large part been enabled by MCP. To date, the community has built more than 10,000 MCP integrations, so calling it a success is an understatement.

How Multi-Kafka impacts data replication strategy

Imagine an airline system monitoring traffic around an airport. If it detects a major delay, countless systems may need to react instantly, ground operations adjusting flows among them. Some of these systems will still connect via API, traditional MQ, or iPaaS technologies, but the volume and urgency of the data, and the ease of decoupling applications, make architecting with Kafka the better fit. The natural question is: should all these applications and systems connect to the same Kafka cluster?