London, UK
Sep 10, 2023   |  By Mateus Henrique Cândido de Oliveira
One of the most important questions in architecting a data platform is where to store and archive data. In this blog series, we’ll cover the different storage strategies for Kafka and introduce you to Lenses’ S3 Connector for backup/restore. This first post introduces the different Cloud storage options available. Later posts will focus on specific solutions, explain in more depth how this maps to Kafka, and show how Lenses manages your Kafka topic backups.
May 16, 2023   |  By Adamos Loizou
Kafka adoption is growing fast. Very fast. At Lenses, we’re pushing out new features to increase developer productivity, reduce manual effort & improve the cost & hygiene of operating your Kafka platform. It’s only been a few weeks since Lenses 5.1, yet here we are again with more goodies in release 5.2.
Apr 4, 2023   |  By Adamos Loizou
Hello again. We strive to improve the productivity of developers building event-driven applications on the technology choices that best fit your organization. AWS continues to be a real powerhouse for our customers. Not just for running the workloads, but in supporting them with their native services: MSK Kafka, MSK Connect and now, increasingly, Glue Schema Registry. This brings a strong alternative to Confluent and their Kafka infrastructure offerings.
Apr 4, 2023   |  By David Sloan
With version 5.1, Lenses is now offering enterprise support for our popular open-source Secret Provider to customers. In this blog, we’ll explain how secrets for Kafka Connect connectors can be safely protected using Secret Managers and walk you through configuring the Lenses S3 Sink Connector with the Lenses Secret Provider plugin and AWS Secret Manager.
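As a hedged sketch of the pattern described above (property names follow the open-source lensesio/secret-provider project and may differ between versions; the secret name `prod/kafka-connect` and key `s3-secret-key` are hypothetical):

```properties
# Kafka Connect worker configuration: register the AWS Secret Provider
# (class name from the open-source lensesio/secret-provider project)
config.providers=aws
config.providers.aws.class=io.lenses.connect.secrets.providers.AWSSecretProvider
config.providers.aws.param.aws.auth.method=credentials
config.providers.aws.param.aws.region=eu-west-1

# Connector configuration: reference a secret instead of a plaintext value.
# The placeholder resolves at runtime as ${provider:secret-name:secret-key},
# so credentials never appear in the connector config or in Connect's REST API.
connect.s3.aws.secret.key=${aws:prod/kafka-connect:s3-secret-key}
```

The design point is that Connect resolves the placeholder through the provider at startup, so the secret lives only in AWS Secrets Manager, not in your deployed configuration.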
Apr 3, 2023   |  By Andrew Stevenson
With more applications developed by different engineering teams on Kafka comes an increased need for data governance. JSON is often used when streaming projects bootstrap, but this quickly becomes a problem as your applications iterate, changing the data structures by adding new fields, removing old ones and even changing data formats. It makes your applications brittle: chaos ensues as downstream consumers fall over due to missing data, and SREs curse you.
Mar 1, 2022   |  By Eleftherios Davros
Until recently, teams were building a small handful of Kafka streaming applications. They were usually associated with Big Data workloads (analytics, data science etc.), and data serialization would typically be in AVRO or JSON. Now a wider set of engineering teams are building entire software products with microservices decoupled through Kafka. Many teams have adopted Google Protobuf as their serialization, partly due to its use in gRPC.
Feb 16, 2022   |  By Christina Daskalaki
Kafka is a ubiquitous component of a modern data platform. It has acted as the buffer, landing zone, and pipeline to integrate your data to drive analytics, or maybe surface after a few hops to a business service. More recently, though, it has become the backbone for new digital services with consumer-facing applications that process live off the stream. As such, Kafka is being adopted by dozens (if not hundreds) of software and data engineering teams in your organization.
Oct 4, 2021   |  By Antonios Chalkiopoulos
Today, I’m thrilled to announce that Lenses.io is joining Celonis, the leader in execution management. Together we will raise the bar in how businesses are run by driving them with real-time data, making the power of streaming open, operable and actionable for organizations across the world. When Lenses.io began, we could never have imagined we’d reach this moment.
Oct 4, 2021   |  By Stefan Bocutiu
Apache Kafka has grown from an obscure open-source project to a mass-adopted streaming technology, supporting all kinds of organizations and use cases. Many began their Apache Kafka journey to feed a data warehouse for analytics, then moved to building event-driven applications, breaking down entire monoliths. Now, we move to the next chapter. Joining Celonis means we’re pleased to open up the possibility of real-time process mining and business execution with Kafka.
Sep 20, 2021   |  By Alex Durham
Here we are, our screens split and fingers poised to forage through two days of fresh Kafka content at Kafka Summit Americas. Tweet us your #KafkaSummit highlights if they’re missing here and we can add them to the round-up.
Apr 20, 2021   |  By Lenses
How to add metadata tags and descriptions to topics and other entities of your #Kafka real-time #datacatalog
Apr 15, 2021   |  By Lenses
Lenses 4.2 extends data observability into PostgreSQL instances to better explore and debug your streaming microservices. Here, Andrea walks you through creating a connection to #Postgres and exploring data with #SQL
Nov 24, 2020   |  By Lenses
Matteo de Martino from the Lenses.io Engines team shows you how to read, process & manipulate streaming data with #SQL in any #ApacheKafka. Plus a look at why we use SQL as a common language for streaming in the first place.
Nov 23, 2020   |  By Lenses
Adamos Loizou walks through how to wrangle data in #Kafka.
Oct 8, 2020   |  By Lenses
Automate your deployment of Lenses.io for your #AWS Managed Streaming for #ApacheKafka (#MSK) directly into your AWS VPC through portal.lenses.io. You'll be practicing #DataOps in minutes.
Oct 7, 2020   |  By Lenses
Explore Apache Kafka & Lenses.io through a demo environment packed with sample data and data flows.
Oct 6, 2020   |  By Lenses
David Esposito, a Solutions Architect from #Aiven, explores load testing in Kafka Office Hours, a recurring forum for #ApacheKafka thought-sharing hosted by our partner Lenses.io.
Sep 30, 2020   |  By Lenses
Walkthrough of how to create custom Serde classes that deserialise #Protobuf data in #ApacheKafka to explore and process in Lenses.io with #SQL
Aug 25, 2020   |  By Lenses
The new open-source #ApacheKafka Connect sink connector for #S3 gives you full control over how you sink data to S3 and saves money on long-term storage costs in #Kafka. The connector can flush data out in a number of different formats, including #AVRO, #JSON, #Parquet and #Binary, as well as create S3 buckets based on partitions, metadata fields and value fields.
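As an illustrative sketch of such a connector configuration (connector class and KCQL syntax follow the open-source Stream Reactor documentation and vary between versions; the bucket and topic names are hypothetical):

```properties
name=payments-to-s3
connector.class=io.lenses.streamreactor.connect.aws.s3.sink.S3SinkConnector
topics=payments
# KCQL controls the target bucket/prefix, the storage format and when
# data is flushed to S3 (count-based here; size/interval are also possible)
connect.s3.kcql=INSERT INTO my-bucket:payments SELECT * FROM payments STOREAS `PARQUET` WITH_FLUSH_COUNT = 5000
connect.s3.aws.region=eu-west-1
```

Switching the `STOREAS` clause (e.g. to `AVRO` or `JSON`) is what changes the flushed file format, while the flush settings trade S3 request costs against data freshness.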
Aug 25, 2020   |  By Lenses
Demo of how to manage your #KafkaConnect connectors in Lenses.io with #SQL support for mapping fields between source & target systems, #RBAC and secret management to avoid exposing credentials in your configuration.
Jul 24, 2020   |  By Lenses
DataOps is the art of progressing from data to value in seconds. For us, it’s all about making data operations as easy and fast as using email.
Jul 23, 2020   |  By Lenses
Apache Kafka is a popular and powerful component of modern data platforms. But it's complicated. Complicated to run, complex to manage and, crucially, it's near impossible to drive Kafka adoption from the command line across your organization. So here's your how-to for seeing it through to production (... and possibly fame and fortune). We cover key principles for Kafka observability and monitoring.
Jul 1, 2020   |  By Lenses
Lenses, a DataOps platform, accelerates time to value, opening up data streams to a wide audience. Lenses enables rapid construction and deployment of data pipelines at scale, for enterprises, with governance and security.

Lenses ® is a DataOps platform that provides cutting-edge visibility into data streams, integrations, processing and operations, and enables ethical and secure data usage for the modern data-driven enterprise.

Accelerate time to value, open up data in motion to a wide audience, enable rapid construction and deployment of data pipelines at scale, for enterprises, with governance and security.

Why Lenses?

  • Confidence in production: Everyone’s scared of the dark. Lenses gives you visibility into your streams and data platform that you’ve never had before. That means you don’t need to worry about running in production.
  • Keeping things simple: Life’s hard enough without having to learn new languages and manage code deployments. Build data flows in minutes with just SQL. Reduce the skills needed to develop, package and deploy applications.
  • Being more productive: Build, debug and deploy new flows in minutes, not days or weeks. In fact, many of our customers build and deploy apps to production 95% faster.

25,000+ Developers Trust Lenses for data operations over Kafka.