
Kafka

4 Key Types of Event-Driven Architecture

Adam Bellemare compares four main types of Event-Driven Architecture (EDA): Application Internal, Ephemeral Messaging, Queues, and Publish/Subscribe. Event-Driven Architectures have a long and storied history, and for good reason: they offer a powerful way to build scalable, decoupled systems. But because of that long history, people often have different ideas of what EDA means depending on when they first encountered the architecture.

How to Evolve your Microservice Schemas | Designing Event-Driven Microservices

Schema evolution is the act of modifying the structure of the data in our application without impacting clients. This can be a challenging problem. However, it gets easier if we start with a flexible data format and take steps to avoid unnecessary data coupling. When we do have to make breaking changes, we can always fall back to creating new versions of our APIs and events to accommodate them.
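
The video does not prescribe a particular format, but as a minimal sketch of the idea, here is how a hypothetical Avro-based service could verify that adding an optional field is a non-breaking change, using Avro's SchemaCompatibility helper. The Order record and its fields are illustrative only:

    import org.apache.avro.Schema;
    import org.apache.avro.SchemaCompatibility;

    public class SchemaEvolutionCheck {
        // Version 1 of the event schema.
        static final Schema V1 = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Order\",\"fields\":[" +
            "{\"name\":\"id\",\"type\":\"string\"}," +
            "{\"name\":\"amount\",\"type\":\"double\"}]}");

        // Version 2 adds an optional field with a default, a non-breaking change.
        static final Schema V2 = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Order\",\"fields\":[" +
            "{\"name\":\"id\",\"type\":\"string\"}," +
            "{\"name\":\"amount\",\"type\":\"double\"}," +
            "{\"name\":\"currency\",\"type\":\"string\",\"default\":\"USD\"}]}");

        public static void main(String[] args) {
            // Can a consumer using V2 still read events that were written with V1?
            SchemaCompatibility.SchemaPairCompatibility result =
                SchemaCompatibility.checkReaderWriterCompatibility(V2, V1);
            System.out.println(result.getType()); // COMPATIBLE
        }
    }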

What is a Kafka Consumer and How does it work?

Now that your data is inside your Kafka cluster, how do you get it out? In this video, Dan Weston covers the basics of Kafka Consumers: what consumers are, how they get your data flowing, and best practices for configuring consumers in a real-time data streaming system. You will also learn about offsets, consumer groups, and partition assignment.
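
As a minimal sketch of those moving parts, the following Java consumer subscribes to a topic, lets the group coordinator handle partition assignment, and prints each record's partition and offset. The broker address, group id, and topic name are placeholders rather than values from the video:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class SimpleConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // Consumers sharing a group id split the topic's partitions between them.
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "orders-app");
            // Where to start reading when the group has no committed offset yet.
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("orders")); // partitions are assigned by the group coordinator
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                    }
                    // Offsets are auto-committed by default (enable.auto.commit=true).
                }
            }
        }
    }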

#12 Kafka Live Stream | HTTP Sink Connector & Business Automation with Make

See the new Lenses Kafka to HTTP Sink Connector in action with Lenses.io and @itsmake. In this 30-minute session, we show you how to trigger APIs that automate your business processes: a message in Kafka triggers a Make workflow, which in turn kicks off an automation in Salesforce.

What is the Listen to Yourself Pattern? | Designing Event-Driven Microservices

The Listen to Yourself pattern is implemented by having a microservice emit an event to a platform such as Apache Kafka and then consume its own events to perform internal updates. It can be used as a solution to the dual-write problem since it separates the Kafka and database writes into different processes. It also allows microservices to respond quickly to requests by deferring processing to a later time.
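
A minimal Java sketch of the pattern might look like the following, with the request handler writing only to Kafka and a background loop consuming the service's own topic to update the database. The topic and method names are illustrative, not from the video:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class OrderService {
        private final KafkaProducer<String, String> producer;
        private final KafkaConsumer<String, String> consumer;

        OrderService(Properties producerProps, Properties consumerProps) {
            this.producer = new KafkaProducer<>(producerProps);
            this.consumer = new KafkaConsumer<>(consumerProps);
        }

        // Request path: write only to Kafka and return immediately.
        // No database write happens here, so there is no dual write.
        void handleCreateOrder(String orderId, String payload) {
            producer.send(new ProducerRecord<>("orders", orderId, payload));
        }

        // Background path: the service listens to its own topic and applies
        // the state change to its database when the event comes back around.
        void listenToSelf() {
            consumer.subscribe(List.of("orders"));
            while (true) {
                for (ConsumerRecord<String, String> event : consumer.poll(Duration.ofMillis(500))) {
                    updateDatabase(event.key(), event.value());
                }
            }
        }

        private void updateDatabase(String orderId, String payload) {
            // Placeholder for the actual database write.
        }
    }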

Using Streams Replication Manager Prefixless Replication for Kafka Topic Aggregation

Businesses often need to aggregate topics in order to organize, simplify, and optimize the processing of streaming data. Aggregation enables efficient analysis, facilitates modular development, and enhances the overall effectiveness of streaming applications. For example, if topics serving the same purpose exist in separate clusters, it is useful to aggregate their content into a single topic.
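
This is what Streams Replication Manager's prefixless replication enables; conceptually it corresponds to MirrorMaker 2's IdentityReplicationPolicy, which keeps the original topic name instead of prefixing it with the source cluster alias. Below is a rough, MirrorMaker 2-style properties sketch: the cluster aliases, addresses, and topic name are illustrative, and SRM's exact configuration keys may differ:

    # Illustrative MirrorMaker 2-style configuration; SRM's exact keys may differ.
    clusters = east, west, aggregate

    east.bootstrap.servers = east-broker:9092
    west.bootstrap.servers = west-broker:9092
    aggregate.bootstrap.servers = aggregate-broker:9092

    # Replicate the same-named topic from both source clusters into the aggregate cluster.
    east->aggregate.enabled = true
    east->aggregate.topics = payments
    west->aggregate.enabled = true
    west->aggregate.topics = payments

    # Prefixless replication: keep the original topic name rather than prefixing it
    # with the source cluster alias, so both streams land in a single "payments"
    # topic on the aggregate cluster.
    replication.policy.class = org.apache.kafka.connect.mirror.IdentityReplicationPolicy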

Introducing Apache Kafka 3.7

We are proud to announce the release of Apache Kafka® 3.7.0. This release contains many new features and improvements. This blog post will highlight some of the more prominent features. For a full list of changes, be sure to check the release notes. See the Upgrading to 3.7.0 from any version 0.8.x through 3.6.x section in the documentation for the list of notable changes and detailed upgrade steps.

Apache Kafka 3.7: Official Docker Image and Improved Client Monitoring

Apache Kafka® 3.7 is here! On behalf of the Kafka community, Danica Fine highlights key release updates, with KIPs from Kafka Core, Kafka Streams, and Kafka Connect. Many more KIPs are part of this release; see the blog post for more details.
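
As a quick way to try the new official Docker image, the published quickstart boils down to pulling the image and running a single-node broker; the tag and port mapping below are the commonly documented defaults, so adjust them for your environment:

    # Pull the official Apache Kafka image and start a single-node broker
    docker pull apache/kafka:3.7.0
    docker run -p 9092:9092 apache/kafka:3.7.0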