
Latest Posts

4 data streaming trends for 2025

Buckle up: we're past the AI hype. Now it's about building intelligent systems that act on our behalf. In 2025, AI isn't just a tool; it's becoming our core way of operating, powered by real-time data. How we stream, manage, and monetize that data will define the next generation of business. Here, we zoom in on four examples of what autonomous real-time intelligence could look like in the coming year.

Luggage lost in a world of streaming data

Democratizing and sharing data inside and outside your organization as a real-time data stream has never been in greater demand. Treating real-time data as a product, and adopting Data Mesh practices, is the way forward. Here, we explain the concept through a real-life example of an airline building applications that process data across different domains.

Introducing Lenses 6.0 Panoptes

Organizations today face complex data challenges as they scale, with more distributed data architectures and a growing number of teams building streaming applications. They will need to implement Data Mesh principles for sharing data across business domains, ensure data sovereignty across different jurisdictions and clouds, and maintain real-time operations.

SQL for data exploration in a multi-Kafka world

Every enterprise is modernizing its business systems and applications to respond to real-time data. Within the next few years, we predict that most of an enterprise's data products will be built using a streaming fabric: a rich tapestry of real-time data, abstracted from the infrastructure it runs on. This streaming fabric spans not just one Apache Kafka cluster, but dozens, hundreds, maybe even thousands of them.
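
To make this concrete, here is a minimal sketch of the kind of exploratory query such a fabric could serve, written in Lenses-style SQL. The topic and field names (flight_updates, airline, flight_id, delay_minutes) are hypothetical, chosen only to illustrate the shape of the query.

SELECT airline, flight_id, delay_minutes
FROM flight_updates
WHERE delay_minutes > 30
LIMIT 100;

The point of the fabric is that a query like this should work the same way regardless of which cluster, or which cloud, the flight_updates topic happens to live on.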

UI-driven GitOps: Opening up Kafka without giving up governance

As Kafka evolves in your business, adopting best practices becomes a must. The GitOps methodology ensures deployments match intended outcomes, anchored by a single source of truth. When integrating Apache Kafka with GitOps, many will think of Strimzi, which uses the Operator pattern for synchronization. This approach, whilst effective, primarily caters to Kubernetes-based Kafka resources (e.g. Topics), and that isn't ideal.
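
For context, this is roughly what a topic managed through Strimzi's Operator pattern looks like as a Kubernetes resource; the names and settings below are illustrative only, not a recommendation.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: payments-events              # hypothetical topic name
  labels:
    strimzi.io/cluster: my-cluster   # binds the topic to a Kafka cluster resource
spec:
  partitions: 6
  replicas: 3
  config:
    retention.ms: "604800000"        # keep records for 7 days
    cleanup.policy: delete

The operator reconciles this manifest against the broker, which works well, but only for resources that are modeled as Kubernetes objects in the first place.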

4 lessons from Kafka Summit London 2024

It was lovely to see so many of the community and to hear about the latest data streaming initiatives at Kafka Summit this year. We always try to distill the sea of content from the industry's premier event into a digestible blog post. This time we'll do it slightly differently and summarize some broader learnings, not only from the sessions we saw but also from the conversations we had across the two days.

Lenses 5.5 - Self-service streaming data movement, governed by GitOps

In this age of AI, the demand for real-time data integration is greater than ever. For many, these data pipelines should no longer be configured and deployed by centralized teams, but distributed, so that each owner creates their flows independently. But how do you simplify this whilst practicing good software and data governance? We are introducing Lenses 5.5.

4 reasons to integrate Apache Kafka and Amazon S3

Amazon S3 is a standout storage service known for its ease of use, power, and affordability. When combined with Apache Kafka, a popular streaming platform, it can significantly reduce costs and enhance service levels. In this post, we’ll explore various ways S3 is put to work in streaming data platforms.
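
As a minimal sketch of the pattern, assuming a local broker, a hypothetical events topic, and an existing bucket named my-archive-bucket, a consumer might batch records and land them in S3 like this. (In practice a Kafka Connect S3 sink connector does this job with far more robustness.)

import json
import time

import boto3
from kafka import KafkaConsumer  # kafka-python

# Hypothetical names: adjust the topic, servers, and bucket for your setup.
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
s3 = boto3.client("s3")

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 500:  # flush a batch of 500 records to S3
        key = f"events/batch-{int(time.time())}.json"
        s3.put_object(
            Bucket="my-archive-bucket",
            Key=key,
            Body=json.dumps(batch).encode("utf-8"),
        )
        batch = []

A production version would also tie offset commits to successful uploads; this sketch glosses over delivery guarantees.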