Data Streaming

What is a Kafka Consumer and How does it work?

Now that your data is inside your Kafka cluster, how do you get it out? In this video, Dan Weston covers the basics of Kafka Consumers: what consumers are, how they get your data flowing, and best practices for configuring consumers in a real-time data streaming system. You will also learn about offsets, consumer groups, and partition assignment.
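To make those moving parts concrete, here is a minimal sketch of a consumer using the standard Kafka Java client. The broker address, group id, and topic name are assumptions for illustration; the group.id is what ties consumers into a consumer group so partitions are split among them, and the committed offsets are what let a restarted consumer resume where it left off.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class BasicConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "my-consumer-group");       // consumers sharing a group.id split the partitions
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("auto.offset.reset", "earliest");       // where to start when no committed offset exists

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));        // hypothetical topic name
            while (true) {
                // poll() fetches records from the partitions assigned to this consumer
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
                consumer.commitSync(); // record progress so a restart resumes from the last committed offset
            }
        }
    }
}
```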

What is the Listen to Yourself Pattern? | Designing Event-Driven Microservices

The Listen to Yourself pattern is implemented by having a microservice emit an event to a platform such as Apache Kafka and then consume its own events to perform internal updates. It can be used as a solution to the dual-write problem because it separates the Kafka and database writes into different processes. It has a further benefit: the microservice can respond to requests quickly by deferring the actual processing to a later time.
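A rough sketch of the shape of the pattern in Java follows. The topic name, the request handler, and the updateDatabase helper are hypothetical; the point is that the request path performs only the Kafka write, and the database write happens later when the service consumes its own event.

```java
import java.time.Duration;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ListenToYourselfService {
    private final KafkaProducer<String, String> producer;

    ListenToYourselfService(KafkaProducer<String, String> producer) {
        this.producer = producer;
    }

    // Request path: emit an event only -- no database write here,
    // so the caller gets a fast acknowledgement.
    void handleRequest(String orderId, String payload) {
        producer.send(new ProducerRecord<>("order-events", orderId, payload));
    }

    // Elsewhere, the same service consumes its own topic and performs the
    // deferred internal update. Each process writes to a single system,
    // which is how the pattern sidesteps the dual-write problem.
    void consumeOwnEvents(KafkaConsumer<String, String> consumer) {
        consumer.subscribe(List.of("order-events"));
        while (true) {
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                updateDatabase(record.key(), record.value());
            }
        }
    }

    private void updateDatabase(String orderId, String payload) {
        // placeholder for the real persistence logic
    }
}
```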

Apache Kafka 3.7: Official Docker Image and Improved Client Monitoring

Apache Kafka® 3.7 is here! On behalf of the Kafka community, Danica Fine highlights key release updates, with KIPs from Kafka Core, Kafka Streams, and Kafka Connect. Many more KIPs are part of this release; see the blog post for more details.

What is the Event Sourcing Pattern? | Designing Event-Driven Microservices

Event Sourcing is a pattern of storing an object's state as a series of events. Each time the object is updated, a new event is written to an append-only log. When the object is loaded from the database, the events are replayed in order, reapplying the necessary changes. The benefit of this approach is that it preserves a full history of the object, which can be valuable for debugging, auditing, building new models, and a variety of other situations. It is also a technique that can be used to solve the dual-write problem in event-driven architectures.
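A compact sketch of the core mechanics in Java (21+, for records and sealed interfaces): a hypothetical bank account whose balance is never stored directly but is always derived by replaying its event log.

```java
import java.util.ArrayList;
import java.util.List;

// A minimal event-sourced account: commands append events to an
// append-only log, and state is rebuilt by replaying that log in order.
public class Account {
    sealed interface Event permits Deposited, Withdrawn {}
    record Deposited(long amount) implements Event {}
    record Withdrawn(long amount) implements Event {}

    private final List<Event> log = new ArrayList<>(); // append-only history
    private long balance;                              // current state, derived from events

    // Commands validate against current state, then append a new event.
    public void deposit(long amount) {
        append(new Deposited(amount));
    }

    public void withdraw(long amount) {
        if (amount > balance) throw new IllegalStateException("insufficient funds");
        append(new Withdrawn(amount));
    }

    private void append(Event event) {
        log.add(event);
        apply(event);
    }

    // Replaying the same events in the same order always reproduces the same state.
    private void apply(Event event) {
        switch (event) {
            case Deposited d -> balance += d.amount();
            case Withdrawn w -> balance -= w.amount();
        }
    }

    // Rebuild an account from its stored history, e.g. when loading from the database.
    public static Account replay(List<Event> history) {
        Account account = new Account();
        for (Event e : history) account.append(e);
        return account;
    }

    public long balance() { return balance; }
}
```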

What is the Transactional Outbox Pattern? | Designing Event-Driven Microservices

The transactional outbox pattern leverages database transactions to update a microservice's state and an outbox table together. Because both writes happen in a single transaction, they are atomic: either both succeed or both fail. From there, a separate process consumes the outbox and sends the events to an external messaging platform such as Apache Kafka. This technique overcomes the dual-write problem, which occurs when you have to write data to two separate systems, such as a database and Apache Kafka.
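A minimal JDBC sketch of the write side, assuming a PostgreSQL database with hypothetical orders and outbox tables. The key is that the business row and the outbox row go into the same transaction; the relay to Kafka is a separate concern.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OrderService {
    // Write the business state and the outbox event in ONE database
    // transaction, so both commit or both roll back -- no dual write.
    public void createOrder(String orderId, String payloadJson) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/app")) {
            conn.setAutoCommit(false);
            try (PreparedStatement insertOrder = conn.prepareStatement(
                         "INSERT INTO orders (id, payload) VALUES (?, ?)");
                 PreparedStatement insertOutbox = conn.prepareStatement(
                         "INSERT INTO outbox (aggregate_id, event_type, payload) VALUES (?, ?, ?)")) {
                insertOrder.setString(1, orderId);
                insertOrder.setString(2, payloadJson);
                insertOrder.executeUpdate();

                insertOutbox.setString(1, orderId);
                insertOutbox.setString(2, "OrderCreated");
                insertOutbox.setString(3, payloadJson);
                insertOutbox.executeUpdate();

                conn.commit(); // atomic: order row and outbox row succeed or fail together
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            }
        }
        // A separate relay process (e.g. a polling job or change data capture
        // via Kafka Connect) reads the outbox table and publishes to Kafka.
    }
}
```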

What is the Dual Write Problem? | Designing Event-Driven Microservices

The dual-write problem occurs when you try to write to two separate systems and need the writes to be atomic. If one write fails and the other succeeds, you end up with inconsistent state. This is an easy trap to fall into and a difficult one to avoid. We'll explore what causes the dual-write problem and examine both valid and invalid solutions to it, starting with the naive sketch below.
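To see why it is a trap, here is a deliberately naive Java sketch of the anti-pattern: a database insert followed by a Kafka produce, with no mechanism tying the two together. The table, topic, and connection setup are assumed for illustration.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NaiveOrderService {
    private final Connection db;                        // assumed open JDBC connection
    private final KafkaProducer<String, String> producer;

    NaiveOrderService(Connection db, KafkaProducer<String, String> producer) {
        this.db = db;
        this.producer = producer;
    }

    // ANTI-PATTERN: two independent writes with no shared transaction.
    public void createOrder(String orderId, String payloadJson) throws SQLException {
        try (PreparedStatement stmt = db.prepareStatement(
                "INSERT INTO orders (id, payload) VALUES (?, ?)")) {
            stmt.setString(1, orderId);
            stmt.setString(2, payloadJson);
            stmt.executeUpdate();                       // write #1 commits here...
        }
        // ...but if the process crashes or Kafka is unreachable before the
        // send below succeeds, the database has the order while downstream
        // consumers never hear about it: inconsistent state.
        producer.send(new ProducerRecord<>("order-events", orderId, payloadJson));
    }
}
```

Reversing the order does not help: if the Kafka send succeeds and the database insert then fails, consumers see an order that was never persisted. Patterns such as the transactional outbox, event sourcing, and listen to yourself (covered above) exist precisely to eliminate this second uncoordinated write.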