Wix.com – 5 Event Driven Architecture Pitfalls!

Wix.com migrated from a request-reply RPC style system to an event driven architecture and, not surprisingly, ran into a few issues. One of the developers wrote a blog post outlining five event driven architecture pitfalls they experienced. Here's my review of that post, which hopefully sheds more light on their problems and solutions.

YouTube

Check out my YouTube channel, where I post all kinds of content accompanying my posts, including this video showing everything in this post.

Reliable Publishing

When using an event driven architecture, you’ll be publishing events as a communication mechanism to other parts of your system. You’re telling other parts of your system that something occurred. That “something” is generally that some state change or side effect has occurred.

Other parts of your system can then become dependent on events being published when certain things occur, especially when those parts participate in a workflow or business process driven by events.

For example, when a payment is processed in the Payment Service, a PaymentCompleted event is published to Kafka. The Inventory Service consumes the PaymentCompleted event and decreases inventory levels based on the Order.
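To make that concrete, here's a rough sketch of publishing such an event directly to Kafka (Python with the kafka-python client; the topic name and event payload are just assumptions for illustration):

```python
import json

from kafka import KafkaProducer  # assumes the kafka-python package

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Payment Service: after processing the payment, tell the rest of the system.
producer.send("payment-completed", value={"orderId": "order-123", "amount": 49.99})
producer.flush()
```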

What happens if you make a state change to MySQL, but fail to publish an event to Kafka?

In their example, they process a payment and persist it in MySQL, but the PaymentCompleted event fails to be published to Kafka. This means the inventory is now inconsistent with paid Orders.

One solution to this is using the Outbox Pattern. I’ve covered it in another blog post, but the gist is that you persist your events with your business state in the same transaction into your primary database. Then separately, often in another process or thread, you publish the event. If the event is published successfully, you then delete that event from your primary database.
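Here's a minimal sketch of that idea, using sqlite3 as a stand-in for the primary database; the table names and the publish callable are hypothetical:

```python
import json
import sqlite3

db = sqlite3.connect("payments.db")
db.execute("CREATE TABLE IF NOT EXISTS payments (order_id TEXT, amount REAL)")
db.execute("CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, payload TEXT)")

def process_payment(order_id, amount):
    # Business state and the event are persisted in the SAME transaction.
    with db:
        db.execute("INSERT INTO payments VALUES (?, ?)", (order_id, amount))
        event = {"type": "PaymentCompleted", "orderId": order_id, "amount": amount}
        db.execute("INSERT INTO outbox (payload) VALUES (?)", (json.dumps(event),))

def drain_outbox(publish):
    # Runs separately (another thread/process): publish, then delete on success.
    for row_id, payload in db.execute("SELECT id, payload FROM outbox").fetchall():
        publish(payload)  # e.g. produce to Kafka; hypothetical callable
        with db:
            db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
```

Since the event is only deleted after a successful publish, a crash in between means the same event can be published again, which is one reason consumers need to be idempotent (more on that below).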

Another option, which is what they chose, is to have separate durable storage for the events in case of a failure to publish to Kafka. You would then publish the events from that fallback durable storage. It's a similar concept, except it's not guaranteed, since saving your state and your event to separate durable storage isn't atomic (there's no distributed transaction).

Event Sourcing

One widespread misconception is that Event Sourcing involves using the events as a mechanism for state and for communicating with other service boundaries. Conflating these two ideas can cause a whole lot of complexity.

Event Sourcing is about using events as a way to persist state: events that represent state transitions. This has nothing to do with publishing those events as a mechanism for communication with other services.

Events in Event Sourcing are implementation details within a single service boundary. They are internal.
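To make the distinction concrete, here's a tiny sketch of "events as state" inside a single boundary; the event names and stream are made up, and note that nothing here gets published anywhere:

```python
# Internal events representing state transitions for an Order.
stream = [
    {"type": "OrderPlaced", "orderId": "order-123"},
    {"type": "ItemAdded", "sku": "abc", "quantity": 2},
    {"type": "ItemAdded", "sku": "xyz", "quantity": 1},
]

def rebuild_order(events):
    # Current state is derived by replaying the event stream.
    order = {"orderId": None, "items": {}}
    for e in events:
        if e["type"] == "OrderPlaced":
            order["orderId"] = e["orderId"]
        elif e["type"] == "ItemAdded":
            order["items"][e["sku"]] = order["items"].get(e["sku"], 0) + e["quantity"]
    return order

print(rebuild_order(stream))  # {'orderId': 'order-123', 'items': {'abc': 2, 'xyz': 1}}
```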

Event Driven Architecture Pitfalls

This means you can choose to use event sourcing and not publish events for other services to consume.

You could also choose not to use event sourcing for any service and publish events for other service boundaries to decouple.

Don’t conflate the two concepts of state and communication.

Distributed Tracing

Another challenge, which has been getting better over recent years, especially with OpenTelemetry, is visualizing a workflow in an event driven architecture.

It isn’t easy to understand all the different services involved when you’re decoupled through publish/subscribe. The entire point is decoupling, which makes it difficult to see the causation and correlation. You have services consuming events and publishing events.

When event choreography is involved, it can be challenging to see the start and end of a workflow. What if something failed mid-way through? How do you know some business process isn't completed or is in a "hanging" state? You need visibility. Check out my post on Distributed Tracing using OpenTelemetry and Zipkin.
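To give a rough idea of how trace context can follow an event through a broker, here's a sketch using the opentelemetry-api package; the publish call is a stand-in, and a real setup still needs a TracerProvider and an exporter (e.g. Zipkin) configured:

```python
from opentelemetry import trace
from opentelemetry.propagate import inject, extract

tracer = trace.get_tracer("payments")

def publish_event(payload, headers):
    print("publishing", payload, "with headers", headers)  # stand-in for the broker client

# Producer side: start a span and inject its context into the message headers.
headers = {}
with tracer.start_as_current_span("publish PaymentCompleted"):
    inject(headers)  # adds e.g. the W3C traceparent header to the carrier dict
    publish_event({"orderId": "order-123"}, headers)

# Consumer side: extract the propagated context so this span joins the same trace.
ctx = extract(headers)
with tracer.start_as_current_span("consume PaymentCompleted", context=ctx):
    print("handling PaymentCompleted")
```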

Claim Check

Large messages aren't good. They can be a problem because they can overwhelm your broker or event log, such as Kafka. You don't want to transfer large message payloads over the wire from the broker to every consumer. Generally, you want to keep event/message payloads small, but how do you do that if you have a message that contains a large image?

The Claim Check Pattern solves this by having the message/event reference where the full contents are.

As an example, a large image may be persisted in blob storage. The event/message will contain an identifier that the consumer will use to know where to locate the file in blob storage. This way, the consumer can retrieve the large payload (image) from blob storage rather than from the message itself.
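A minimal sketch of the pattern; a plain dict stands in for real blob storage, and the event shape and function names are assumptions:

```python
import uuid

blob_storage = {}  # stand-in for S3, Azure Blob Storage, etc.

def publish_image_uploaded(image_bytes, publish):
    # Store the large payload out-of-band and publish only a reference to it.
    blob_id = str(uuid.uuid4())
    blob_storage[blob_id] = image_bytes
    publish({"type": "ImageUploaded", "blobId": blob_id})

def handle_image_uploaded(event):
    # The consumer uses the reference to fetch the full payload when it needs it.
    image_bytes = blob_storage[event["blobId"]]
    print("retrieved", len(image_bytes), "bytes for blob", event["blobId"])
```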

Check out my post on the Claim Check Pattern for more.

Idempotent Consumers

Duplicate events will occur. This means consumers need to be prepared to consume the same event more than once. There are various reasons this can happen, including a different event being published with the same payload. Another reason can be the Outbox Pattern mentioned above.

Using my outbox pattern example, if the PaymentCompleted event is consumed by the Inventory service more than once, it will deplete the inventory levels more than they should.

You want your consumers to be idempotent. You want to be able to handle the same event more than once without any negative side effects.

How you implement this greatly depends on the types of events you publish. If you're publishing Change Data Capture (CDC) or "Entity Changed" events, you'd want a versionId on each event that indicates which version the entity was at when the event was published. This way, consumers can keep track of which version they have and only process the event if it's newer than their current version.
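A sketch of that version check; the per-entity version tracking is just a dict here, though in practice it would live in the consumer's database:

```python
current_versions = {}  # entityId -> last version this consumer has processed

def handle_entity_changed(event):
    entity_id, version = event["entityId"], event["versionId"]
    if version <= current_versions.get(entity_id, 0):
        return  # stale or duplicate; ignore it
    print("applying change for", entity_id, "at version", version)  # business logic goes here
    current_versions[entity_id] = version
```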

I generally try to avoid this style of event and focus more on domain events involved in a workflow. A unique ID associated with every event can be tracked to know if you're processing an event more than once.
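And a sketch of tracking a unique event ID to detect duplicates; the processed-ID set would normally be a table checked and updated alongside the state change:

```python
processed_ids = set()  # normally persisted with the business state

def handle_payment_completed(event):
    if event["messageId"] in processed_ids:
        return  # we've already handled this event; do nothing
    print("decreasing inventory for", event["orderId"])  # business logic goes here
    processed_ids.add(event["messageId"])
```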

Check out my post on creating Idempotent Consumers for more.

Event Driven Architecture Pitfalls

While Event Driven Architecture is a great way to build a robust, decoupled system, it has a lot of gotchas and pitfalls that you need to be aware of. Hopefully, this post provides some more insights so you don't have to figure it all out on your own! All of the problems you'll run into have solutions/patterns that are well-established and have been around for a long time.


Do you want to use Kafka? Or do you need a Queue?

Do you want to use Kafka? Or do you need a message broker and queues? While they can seem similar, they have different purposes. I’m going to explain the differences, so you don’t try to brute force patterns and concepts in Kafka that are better used with a message broker.

YouTube

Check out my YouTube channel, where I post all kinds of content accompanying my posts, including this video showing everything in this post.

Partitioned Log

Kafka is a log. Specifically a partitioned log. I’ll discuss the partition part later in this post and how that affects ordered processing and concurrency.

Kafka Log

When a producer publishes new messages, generally events, to a log (a topic in Kafka), it appends them.

Kafka Log

Events aren't removed from a topic except as defined by the retention period. You could keep all events forever or purge them after a period of time. This is an important aspect to note in comparison to a queue.

With an event-driven architecture, you can have one service publish events and have many different services consuming those events. It’s about decoupling. The publishing service doesn’t know who is consuming, or if anyone is consuming, the events it’s publishing.

Consumers

In this example, we have a topic with three events. Each consumer works independently, processing messages from the topic.

Consumers

Because events are not removed from the topic, a new consumer could start consuming the first event on the topic. Kafka maintains an offset per topic, per consumer group, and partition. I’ll get to consumer groups and partitions shortly. This allows consumers to process new events that are appended to the topic. However, this also allows existing consumers to re-process existing messages by changing the offset.

Just because a consumer processes an event from a topic does not mean that they cannot process it again or that another consumer can’t consume it. The event is not removed from the topic when it’s consumed.
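As a rough sketch (kafka-python again; the topic and group names are placeholders), a brand-new consumer group can read the topic from the very beginning precisely because nothing was removed:

```python
from kafka import KafkaConsumer  # assumes the kafka-python package

consumer = KafkaConsumer(
    "payment-completed",
    bootstrap_servers="localhost:9092",
    group_id="reporting-service",   # a new group tracks its own offsets
    auto_offset_reset="earliest",   # no committed offset yet, so start at the oldest event
)

for message in consumer:
    print(message.partition, message.offset, message.value)
```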

Commands & Events

A lot of the trouble I see with Kafka revolves around taking various patterns or semantics that are typical with queues or a message broker and trying to force them onto Kafka. An example of this is Commands.

There are two kinds of messages. Commands and Events. Some will say Queries are also messages, but I disagree in the context of asynchronous messaging.

Commands are about invoking behavior. There can be many producers of a command. There is a required single consumer of a command. The consumer will be within the logical boundary that owns the definition/schema of the command.

Sending Commands

Events are about notifying other parts of your system that something has occurred. There is only a single publisher of an event. The logical boundary that publishes an event owns the schema/definition. There may be many consumers of an event or none.

Publishing Events

Commands and events have different semantics. They serve very different purposes, which also affects how they relate to coupling.

Commands vs Events

By this definition, how can you publish a command to a Kafka topic and guarantee that only a single consumer will process it? You can’t.

Queue

This is where a queue and a message broker differ.

When you send a command to a queue, there’s going to be a single consumer that will process that message.

Queue Consumer

When the consumer finishes processing the message, it will acknowledge back to the broker.

Queue Consumer Ack

At this point, the broker will remove the message from the queue.

Queue Consumer Ack Remove Message

The message is gone. The consumer cannot consume it again, nor can any other consumer.
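The consume/ack/remove cycle looks roughly like this with a queue-based broker; a sketch assuming RabbitMQ with the pika client, and the queue name is a placeholder:

```python
import pika  # assumes the pika package and a RabbitMQ broker

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="place-order", durable=True)

def on_message(ch, method, properties, body):
    print("processing command:", body)
    # Acknowledge only after processing succeeds; the broker then removes the
    # message, and no consumer will ever see it again.
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="place-order", on_message_callback=on_message)
channel.start_consuming()
```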

Consumer Groups & Partitions

Earlier I mentioned consumer groups and partitions. A consumer group is a way to have multiple consumers consume from the same topic. This is how you scale out and process more messages from a topic concurrently, known as the competing consumers pattern.

A topic is divided into partitions. Events are appended to a partition within a topic. There can only be one consumer within a consumer group that processes messages from a partition.

This means you will process messages from a partition sequentially, which allows for ordered processing.

As an example of the competing consumers pattern, let's say we have two partitions in a topic, each currently containing a single event. We have two consumers in a single consumer group. We've defined that the top consumer will consume from the top partition, and the bottom consumer will consume from the bottom partition.

Kafka Partition

This means that each consumer within our consumer group can process each message concurrently.

Kafka Partition

If we publish another message to the top partition, this means the top consumer again is the one responsible for consuming it. If it was busy processing another message, the bottom consumer, even if it’s available, will not consume it. Only the top consumer is associated with the top partition.

This allows you to consume messages in order, as long as you associate them with the same partition.
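In practice, you associate messages with a partition by giving them the same key, which Kafka hashes to pick the partition. A sketch with kafka-python; the topic, key, and payloads are placeholders:

```python
from kafka import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")

# Messages with the same key land on the same partition, so they'll be
# processed in order by whichever consumer in the group owns that partition.
producer.send("orders", key=b"order-123", value=b"OrderPlaced")
producer.send("orders", key=b"order-123", value=b"OrderShipped")
producer.flush()
```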

In contrast, the competing consumers pattern with a queue works slightly differently, as we don't have partitions.

Say we have two messages in a queue and two consumers within a single consumer group.

Competing Consumers

Messages are consumed by any free/available consumer. Because there are two free consumers, both messages will be consumed concurrently.

Competing Consumers

Even though messages are distributed FIFO (First-in-First-Out), that doesn’t mean we will process them in order.

Why does this matter? With Kafka partitions, you can process messages in order. Because there is only a single consumer within a consumer group associated with a partition, you’ll process them one by one. This isn’t possible with queues. The downside is that if you publish messages to a partition faster than you can consume them, you can end up in a backlog disaster.

Kafka or Message Broker Queues & Topics?

Hopefully, this post (and video) illustrated some of the differences. The primary issue I've come across is people using Kafka while trying to apply patterns and concepts (commands, competing consumers, dead letter queues) that are typical of a message broker using queues and topics, and it just doesn't fit.

Typically when you're creating an asynchronous workflow, you're consuming events and sending commands. While you technically can create a topic for commands, you can't guarantee there won't be more than a single consumer. Is this a big deal? To me, semantics matter. If you're already using Kafka and don't want to introduce another piece of infrastructure like a queue/message broker, then I understand the reasoning for doing so.

It's also worth understanding the differences in how the competing consumers pattern works. If you aren't configured correctly and are publishing everything to a single partition, you can't increase throughput by adding another consumer to the consumer group.


Event Choreography for Loosely Coupled Workflow

What's Event Choreography Workflow? Let's back up a bit to answer that. Event Driven Architecture is a way to make your system more extensible and loosely coupled, using events as a way to communicate between service boundaries. But how do you handle long-running business processes and workflows that involve multiple services? Using RPC is going back to tight coupling, which we're trying to avoid with Event Driven Architecture. So what's a solution? Event Choreography.

YouTube

Check out my YouTube channel, where I post all kinds of content accompanying my posts, including this video showing everything in this post.

RPC

So why not just use RPC via an HTTP API or gRPC? Well, the point of using asynchronous messaging and an event driven architecture is to be loosely coupled and to remove temporal coupling. Temporal coupling matters because a long-running business process or workflow that involves many different services requires all of those services to be available, with no failures. You can add retries for transient failures, but if a significant issue with a service or its data causes the workflow to fail, you have no recourse to resolve the issue. You can be stuck in an inconsistent state.

For example, if we had many service-to-service RPC calls, and far down the call stack, there is a failure, how do you handle that?

RPC Service to Service

If Service A and Service C made state changes, they would need to have some type of compensating action to reverse the state change. There is no distributed transaction; you can’t just roll back the state of each service database. You need to deal with failures in each service that makes a service-to-service RPC call.

When all services are working correctly, there are no issues. But the moment you have a service that becomes unavailable, any workflow that involves that service will be immediately affected, likely causing the entire workflow to fail and leaving your system in an inconsistent state.

There's a whole other set of issues with RPC between services that I'm not even mentioning here, one of them being latency. Check out my post REST APIs for Microservices? Beware!

RPC Orchestration

Now you might think that instead of making service-to-service RPC calls, you could introduce some type of orchestrator that makes the RPC calls.

RPC Orchestration: Call Service A

After it makes the first RPC call to the first service, it would make the subsequent calls to the other services in order.

RPC Orchestration: Call Service B

This would address failures, as the orchestrator could handle retries. If there is a more extended failure/outage, it can request a compensating action to void/undo a call to a previous service.

RPC Orchestration: Failure

While this orchestrator sounds better than service-to-service RPC, we haven’t solved much. We still have a high degree of coupling, and our workflow will fail if a service is down or unavailable. If we have a failure and we need to make a call back to a previous service to perform some type of compensating action, what happens if that call fails? Again, we’re back to being in an inconsistent state.

Event Choreography

We can loosely couple between services with an event driven architecture and the publish-subscribe pattern and remove temporal coupling by publishing and consuming events.

For example, Service A receives a request from the client that kicks off the workflow.

Event Choreography

After making its state change to its database, Service A would publish an event to our message broker.

Event Choreography

At this point, the request from the Client to Service A is complete. Service B then consumes the message/event published by Service A and does its part of the workflow.

Event Choreography

Once Service B has successfully consumed the message, it will publish an event to the broker, which will kick off the next service that is a part of the workflow.
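As a sketch of what that consume-then-publish step looks like from Service B's point of view; the event names are assumptions and the publish callable is a stand-in for your broker's client:

```python
# Service B: consume the event from Service A, do its own part of the
# workflow, then publish the next event, which kicks off Service C.
def handle_order_placed(event, publish):
    print("reserving inventory for", event["orderId"])  # Service B's work
    publish({"type": "InventoryReserved", "orderId": event["orderId"]})
```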

Event Choreography

Service C is the next service involved in the workflow, and it consumes the event published by Service B.

Event Choreography

Because we aren’t temporally coupled, each service consumes a message and processes it independently.

This means that if Service C is unavailable or has a backlog of messages to process, the workflow doesn't break. Once the service becomes available again and the message is processed, the workflow continues.

Service Unavailable

Is Event Choreography a silver bullet for workflows involving multiple services? No. Of course not. The challenge with event choreography is when you have longer or more complex workflows that involve more than just a few services. It's hard to visualize the workflow because there is no centralized orchestration that understands the entire workflow.

If you have complex workflows that involve many different services, check out my post on Workflow Orchestration for Resilient Systems
