
Latest Posts

Increase compliance with Kafka audits

Suppose that you work for a government tax agency. You recently noticed that some tax fraud incident records have been leaked on the darknet. This information is held in a Kafka topic. The incident response team wants to know who has accessed this data over the last six months. You panic. It is a common requirement for business applications to maintain some form of audit log, i.e. a persistent trail of all changes to the application’s data, precisely so they can respond to situations like this.
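To make the idea concrete, here is a minimal sketch of an application emitting audit events to a dedicated Kafka topic. It assumes the confluent-kafka Python client; the topic name, field names and broker address are illustrative, not taken from the post.

```python
# Minimal sketch: emitting audit events to a dedicated Kafka topic.
# Assumes the confluent-kafka Python client; the topic name, event
# fields and broker address are illustrative placeholders.
import json
import time

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

def audit(user: str, action: str, resource: str) -> None:
    """Record who did what to which resource, keyed by user so one
    user's events stay ordered within a partition."""
    event = {
        "user": user,
        "action": action,          # e.g. "READ", "UPDATE", "DELETE"
        "resource": resource,
        "timestamp_ms": int(time.time() * 1000),
    }
    producer.produce("audit-log", key=user, value=json.dumps(event))

audit("alice", "READ", "tax-fraud-incidents")
producer.flush()  # block until the audit event is acknowledged
```

With events shaped like this, the incident response question above becomes a query over the audit-log topic filtered by resource and time range.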

How to create a Kafka topic (the safe way)

We live in a dynamic world. Companies adopt Kafka to speed up time-to-market and out-innovate their competition, but at the same time they struggle with its limitations. These can range from compliance-related setbacks for regulations such as GDPR, CCPA and HIPAA, to self-service slip-ups that could see a whole Kafka cluster going down. Even something as seemingly innocuous as configuring and creating a Kafka topic can lead to operational U-turns, slowdowns and even downtime.
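As a taste of what “the safe way” can look like, here is a sketch that creates a topic with explicit, reviewed settings instead of relying on broker defaults, using the confluent-kafka Python AdminClient. The topic name and config values are illustrative assumptions, not recommendations from the post.

```python
# Sketch: creating a topic with explicit, reviewed settings rather
# than broker defaults. Topic name and values are illustrative.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})

topic = NewTopic(
    "payments.events",
    num_partitions=6,
    replication_factor=3,            # survive broker failures
    config={
        "min.insync.replicas": "2",  # don't ack writes held by one replica
        "retention.ms": str(7 * 24 * 60 * 60 * 1000),  # bound disk usage
        "cleanup.policy": "delete",
    },
)

# create_topics is asynchronous: it returns one future per topic.
for name, future in admin.create_topics([topic]).items():
    try:
        future.result()
        print(f"created {name}")
    except Exception as exc:
        print(f"failed to create {name}: {exc}")
```

Note that min.insync.replicas only protects you if producers also send with acks=all; reviewing the two settings together is part of what makes topic creation “safe”.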

Lenses magnified: Enhanced, secure, self-serve developer experience for Kafka

In our world of streaming applications, developers are forever climbing a steep learning curve to stay successful with technologies such as Apache Kafka. There is no end to the debt and the detail you need to manage when it comes to Kafka, and since it doesn’t come with guardrails to help you out, the stakes for making mistakes are high.

Kafka Summit Europe 2021: Top talks & takeaways

In keeping with tradition, we’ve chowed our way through the entire all-you-can-eat buffet of Kafka Summit Europe 2021 talks to bring you the best content to catch up on. Jay Kreps’ keynote this year addresses a few Kafka-shaped elephants in the room, as well as an overall shift in the way event-driven business is surfacing. On the technology front, Confluent announced new support for Kubernetes as an orchestration layer for Kafka running in a private cloud.

Change Data Capture and Kafka to break up your monolith

Getting data from a database into Kafka is one of the most frequent use cases we see. For data integration between enterprise data sources when migrating from a monolith to microservices, what better than Change Data Capture (CDC)? We talked about breaking up a monolith and the importance of data observability previously. Now we’re showing you how to do it with a typical microservices architecture pattern built on PostgreSQL, Debezium and Apache Kafka.
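For a sense of how the pieces connect, here is a sketch of registering a Debezium PostgreSQL connector with the Kafka Connect REST API. Host names, credentials and the table list are placeholders, and exact config keys vary by Debezium version, so treat this as an outline rather than the post’s exact setup.

```python
# Sketch: registering a Debezium PostgreSQL connector via the Kafka
# Connect REST API. Hosts, credentials and table names are placeholders;
# config keys can differ between Debezium versions.
import requests

connector = {
    "name": "orders-cdc",
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "database.hostname": "postgres",
        "database.port": "5432",
        "database.user": "debezium",
        "database.password": "secret",
        "database.dbname": "shop",
        "topic.prefix": "shop",                 # topics named shop.<schema>.<table>
        "table.include.list": "public.orders",  # stream only this table
    },
}

resp = requests.post(
    "http://localhost:8083/connectors",  # Kafka Connect REST endpoint
    json=connector,
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```

Once registered, every insert, update and delete on public.orders appears as an event on the corresponding Kafka topic, which downstream microservices can consume independently of the monolith’s database.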

In the event-driven galaxy, which metadata matters most?

As a developer, you’re no stranger to your vast and varied data environment… Or are you? The tremendous amount of data your organization collects is stored in many different sources and formats. You need a way to understand where your data lives and what it contains, so you can do what you’re there to do: build amazing event-driven applications.

NEW Lenses: PostgreSQL & metadata to navigate your Kafka galaxy

When you’re one of many developers commanding streaming applications running in Apache Kafka, you want enough data observability to fly your own data product to the moon. But you also want to boldly go where no developer has gone before to discover new applications. At the same time, you don’t want to be exposed to sensitive data that summons you to your compliance team, crashing you back down to earth.

Creating your managed Kafka shortlist

You’ve been handed the not-so-easy task of scoping a managed Kafka service for your team. How do you start the shortlist? Post something on Reddit? Skim-read a gazillion review blogs? Crash Google Chrome opening a thousand tabs to compare feature lists? If you’re going to run a Kafka POC with two or three vendors, or you’re trying to find the best Kafka for your business, how can you narrow down your selection? Let’s get to it.

Architecting Apache Kafka for GDPR compliance

Once upon a time (2017), in an office far far away, you may have been cornered in a conversation with someone from Legal about GDPR. It could have gone something like this: “You there, Data Engineer.” “Yep, that’s me.” “What PII do we have residing in this Apache Kafka database?” You probably mumbled something about Kafka not being a database. “And who can read/write the data?”
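That second question, who can read or write the data, is one Kafka can actually answer, via ACLs. Here is a sketch that lists the ACLs attached to a topic using the confluent-kafka AdminClient. It assumes an ACL-enabled cluster, and the topic name is a hypothetical stand-in for wherever your PII lives.

```python
# Sketch: answering "who can read/write this topic?" with Kafka ACLs
# via the confluent-kafka AdminClient. Requires an ACL-enabled cluster;
# the topic name is an illustrative placeholder.
from confluent_kafka.admin import (
    AclBindingFilter, AclOperation, AclPermissionType,
    AdminClient, ResourcePatternType, ResourceType,
)

admin = AdminClient({"bootstrap.servers": "localhost:9092"})

# Match every ACL on one topic: any principal, any host, any operation.
topic_filter = AclBindingFilter(
    ResourceType.TOPIC, "customer-pii", ResourcePatternType.LITERAL,
    None, None, AclOperation.ANY, AclPermissionType.ANY,
)

for acl in admin.describe_acls(topic_filter).result():
    print(acl.principal, acl.operation, acl.permission_type)
```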