
Stop using trivial Guard Clauses! Try this instead

Guard clauses, on the surface, sound like a great idea. They can reduce conditional complexity by exiting a method or function early. However, I find guard clauses as used in the real world to be of little value, often polluting application-level code with trivial preconditions. I will refactor some code to push those preconditions to the edge of your application so your domain can focus on real business concerns.

YouTube

Check out my YouTube channel, where I post all kinds of content accompanying my posts, including a video covering everything in this post.

Null Checks

The most common guard clauses are null checks. To illustrate this, I looked at the eShopOnWeb sample application and will use it as an example.
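
The method in question is BasketService.TransferBasketAsync. Here's a sketch approximating the sample's code (the Guard.Against helpers come from the Ardalis.GuardClauses library the sample uses; exact details may differ):

```csharp
public async Task TransferBasketAsync(string anonymousId, string userName)
{
    // trivial guard clauses: null/empty checks on each argument
    Guard.Against.NullOrEmpty(anonymousId, nameof(anonymousId));
    Guard.Against.NullOrEmpty(userName, nameof(userName));

    // ... load the anonymous basket and transfer its items to the user's basket ...
}
```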

In the example above, two guard clauses do null checks on the anonymousId and userName. While this is incredibly common, I'd rather not have to deal with these preconditions, mainly because this method is in the BasketService, which is part of the core application code.

Often these types of preconditions are very inconsistent. Since the userName is likely passed around through various layers, does each method that accepts the username have this guard clause? Likely not.

As an example, the above code creates a new instance of the Basket, passing the userName to the constructor. Here's the constructor.
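
Sketched from the sample (exact members may differ):

```csharp
public class Basket
{
    public string BuyerId { get; private set; }

    public Basket(string buyerId)
    {
        // no guard clause here: a null buyerId slips straight through
        BuyerId = buyerId;
    }
}
```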

Sure, the TransferBasketAsync method was doing the null check, but is this Basket class created anywhere else? Does that code do a null check? As you can imagine, if you did this everywhere, you'd have a ton of repetitive, trivial code for these null-check preconditions.

Forcing Valid Values

I don't want to litter my core application code with trivial guard clauses, such as the null checks in my example. Instead, I want my core code to always accept valid values so that it doesn't need to concern itself with these preconditions.

In doing so, you're pushing the responsibility to the outer edge of your application to produce valid values. If you think about a web application, there is some translation from an HTTP request into your application code. You want that translation, at the edge (your web endpoint), to produce valid domain values.

One way to accomplish this, as in my example with the userName, is to define a type (a record struct) that, during creation, forces the value not to be null.
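
Here's a minimal sketch of such a Username type (the implicit string conversion is a convenience I've added for illustration):

```csharp
public record struct Username
{
    public string Value { get; }

    public Username(string value)
    {
        // the only way to construct a Username is with a non-null, non-empty value
        if (string.IsNullOrWhiteSpace(value))
            throw new ArgumentNullException(nameof(value));

        Value = value;
    }

    public static implicit operator string(Username username) => username.Value;
}
```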

Then we can use this type wherever we accept the userName as a parameter. Instead of accepting a string for the anonymousId and username in the TransferBasketAsync method, we can move this to the Username type.
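
The signature then becomes something like:

```csharp
public async Task TransferBasketAsync(Username anonymousId, Username userName)
{
    // no null-check guard clauses needed; a Username can't be constructed invalid
    // ... transfer the basket ...
}
```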

To call this TransferBasketAsync in the BasketService, you must construct a Username type. In this example, this is done on a Razor page.
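
Here's a sketch of what that looks like in the page handler (member names are illustrative):

```csharp
public async Task<IActionResult> OnPost()
{
    // translate raw HTTP request values into valid domain values at the edge
    var anonymousId = new Username(GetOrSetBasketCookieAndGetBasketId());
    var userName = new Username(User.Identity.Name);

    await _basketService.TransferBasketAsync(anonymousId, userName);
    return RedirectToPage();
}
```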

In the above example, the userName comes from the HTTP request, which will be a string. We then construct a Username type passing in that string value. We’re pushing the validation to be at the edge of the application.

Guard Clauses

Our core application defines the Username type, but its usage/creation is at the edge. Nothing can get into the core application without being in a valid state.

Tests

Guard clauses such as null checks tend to have tests associated with them. That was the case with the eShopOnWeb sample, which had tests confirming those null checks were done. Since we've moved to a type that enforces validity, those tests still passed, because constructing a Username with a null value throws. However, they're now redundant: we can remove any tests that were doing a null check against the string username.

Instead, we have a few tests to confirm the behavior of our new Username type.
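
For example, with xUnit (a sketch):

```csharp
using Xunit;

public class UsernameTests
{
    [Fact]
    public void Throws_When_Value_Is_Null()
        => Assert.Throws<ArgumentNullException>(() => new Username(null!));

    [Fact]
    public void Throws_When_Value_Is_Empty()
        => Assert.Throws<ArgumentNullException>(() => new Username(""));

    [Fact]
    public void Holds_Value_When_Valid()
        => Assert.Equal("demo-user", new Username("demo-user").Value);
}
```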

Primitive Obsession

As some others commented on the YouTube video, you might be thinking that I've introduced a value object to combat primitive obsession. While true, that's not the root cause. The root cause isn't primitive obsession; the issue is allowing your domain to accept invalid arguments that you need to guard against. This can apply to primitives that may be null, as my example illustrates, but also to explicit invariants for generic concepts such as money or dates/times with time zones, as well as very domain-specific values.

Guard Clauses

I’m not suggesting guard clauses are not helpful. They are. Exiting early within a method when preconditions aren’t met simplifies logic. However, littering trivial guard clauses all over your codebase is not helpful. Force the outer edge of your application to construct valid values that are passed into your core domain code.

Do you want to use Kafka? Or do you need a Queue?

Do you want to use Kafka? Or do you need a message broker and queues? While they can seem similar, they serve different purposes. I'm going to explain the differences, so you don't try to brute-force patterns and concepts in Kafka that are better suited to a message broker.

YouTube

Check out my YouTube channel, where I post all kinds of content accompanying my posts, including a video covering everything in this post.

Partitioned Log

Kafka is a log; specifically, a partitioned log. I'll discuss the partition part later in this post and how it affects ordered processing and concurrency.

[Image: a Kafka topic as an append-only log]

When a producer publishes new messages, generally events, to a log (a topic in Kafka), they are appended to the end of the log.

[Image: producer appending events to the log]

Events aren't removed from a topic unless you define a retention period. You could keep all events forever or purge them after a period of time. This is an important difference from a queue.
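
To make this concrete, here's a minimal producer sketch using the Confluent.Kafka .NET client (broker address, topic, and payload are placeholders):

```csharp
using Confluent.Kafka;

var config = new ProducerConfig { BootstrapServers = "localhost:9092" };
using var producer = new ProducerBuilder<string, string>(config).Build();

// appends an event to the "orders" topic; the message key determines the
// partition, so events with the same key keep their relative order
await producer.ProduceAsync("orders", new Message<string, string>
{
    Key = "order-123",
    Value = "{\"type\":\"OrderPlaced\",\"orderId\":\"order-123\"}"
});
```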

With an event-driven architecture, you can have one service publish events and have many different services consuming those events. It’s about decoupling. The publishing service doesn’t know who is consuming, or if anyone is consuming, the events it’s publishing.

Consumers

In this example, we have a topic with three events. Each consumer works independently, processing messages from the topic.

[Image: independent consumers processing events from the topic]

Because events are not removed from the topic, a new consumer can start consuming from the first event on the topic. Kafka maintains an offset per partition, per consumer group (I'll get to consumer groups and partitions shortly). This lets consumers process new events appended to the topic, and it also lets existing consumers re-process existing messages by changing their offset.

Just because a consumer processes an event from a topic does not mean it cannot process it again or that another consumer can't consume it. The event is not removed from the topic when it's consumed.

Commands & Events

A lot of the trouble I see with Kafka comes from taking patterns or semantics typical of queues and message brokers and trying to force them onto Kafka. An example of this is Commands.

There are two kinds of messages. Commands and Events. Some will say Queries are also messages, but I disagree in the context of asynchronous messaging.

Commands are about invoking behavior. There can be many producers of a command, but there must be a single consumer. The consumer lives within the logical boundary that owns the definition/schema of the command.

[Image: many producers sending a command to its single consumer]

Events are about notifying other parts of your system that something has occurred. There is only a single publisher of an event. The logical boundary that publishes an event owns the schema/definition. There may be many consumers of an event or none.

[Image: one publisher publishing an event to many consumers]

Commands and events have different semantics. They serve very different purposes, and those differences also affect coupling.

[Image: commands vs. events]
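
The distinction is about semantics, not syntax, but a sketch helps (type names are illustrative):

```csharp
using System;

// a command: an instruction to the single logical owner of the behavior
public record PlaceOrder(Guid OrderId, string CustomerId);

// an event: a fact published by its owner; zero or many consumers may react
public record OrderPlaced(Guid OrderId, DateTimeOffset PlacedAt);
```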

By this definition, how can you publish a command to a Kafka topic and guarantee that only a single consumer will process it? You can’t.

Queue

This is where a queue and a message broker differ.

When you send a command to a queue, there’s going to be a single consumer that will process that message.

[Image: a single consumer processing a command from a queue]

When the consumer finishes processing the message, it will acknowledge back to the broker.

[Image: the consumer acknowledging the message back to the broker]

At this point, the broker will remove the message from the queue.

[Image: the broker removing the acknowledged message from the queue]

The message is gone. The consumer cannot consume it again, nor can any other consumer.

Consumer Groups & Partitions

Earlier, I mentioned consumer groups and partitions. A consumer group is a way to have multiple consumers consume from the same topic. This is how you scale out and process more messages from a topic concurrently, known as the competing consumers pattern.

A topic is divided into partitions. Events are appended to a partition within a topic, and only one consumer within a consumer group processes messages from a given partition.

This means messages from a partition are processed sequentially, which allows for ordered processing.
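
Here's a minimal consumer sketch with the Confluent.Kafka client (group id and topic are placeholders):

```csharp
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",
    GroupId = "billing", // consumers sharing a GroupId form one consumer group
    AutoOffsetReset = AutoOffsetReset.Earliest // a new group starts at the first event
};

using var consumer = new ConsumerBuilder<string, string>(config).Build();
consumer.Subscribe("orders");

while (true)
{
    // each partition is assigned to exactly one consumer in the group,
    // so messages within a partition are processed in order
    var result = consumer.Consume();
    Console.WriteLine($"partition {result.Partition.Value}: {result.Message.Value}");
}
```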

As an example of the competing consumers pattern, let's say we have two partitions in a topic, each currently containing a single event. We have two consumers in a single consumer group. The top consumer consumes from the top partition, and the bottom consumer from the bottom partition.

[Image: two partitions, each assigned to one consumer in the consumer group]

This means that each consumer within our consumer group can process each message concurrently.

[Image: each consumer processing its partition's message concurrently]

If we publish another message to the top partition, the top consumer is again the one responsible for consuming it. If it's busy processing another message, the bottom consumer will not consume it, even if it's available; only the top consumer is associated with the top partition.

This allows you to consume messages in order, as long as you associate them with the same partition.

In contrast, the competing consumers pattern with a queue works slightly differently, as we don't have partitions.

Say we have two messages in a queue and two consumers competing for them.

[Image: two competing consumers on a queue]

Messages are consumed by any free/available consumer. Because there are two free consumers, both messages will be consumed concurrently.

[Image: both messages being consumed concurrently]

Even though messages are distributed FIFO (First-in-First-Out), that doesn’t mean we will process them in order.

Why does this matter? With Kafka partitions, you can process messages in order. Because there is only a single consumer within a consumer group associated with a partition, you’ll process them one by one. This isn’t possible with queues. The downside is that if you publish messages to a partition faster than you can consume them, you can end up in a backlog disaster.

Kafka or Message Broker Queues & Topics?

Hopefully, this post (and video) illustrated some of the differences. The primary issue I’ve come across is people using Kafka but trying to apply patterns and concepts (commands, competing consumers, dead letter queues) that are typical with a message broker using queues and topics, but it just doesn’t fit.

Typically, when you're creating an asynchronous workflow, you're consuming events and sending commands. While you technically can create a topic for commands, you can't guarantee there won't be more than a single consumer. Is this a big deal? To me, semantics matter. If you're already using Kafka and don't want to introduce another piece of infrastructure like a queue/message broker, then I understand the reasoning for doing so.

It's also worth understanding the differences in how the competing consumers pattern works. If you're not configured correctly and are publishing to a single partition, you can't increase throughput by adding another consumer to a consumer group.

Just store UTC? Not so fast! Handling Time zones is complicated.

Should you store dates & times in your database as UTC? It’s pretty standard advice if you’re working in a system that needs to record dates and times from many different time zones. But this advice doesn’t hold true when dealing with dates and times in the future; here are some things you need to consider.

YouTube

Check out my YouTube channel, where I post all kinds of content accompanying my posts, including a video covering everything in this post.

Just store UTC? Not really.

When storing dates/times in a database, the standard advice you'll hear or read is to store them all as UTC. As users enter data into your system, you convert their local date/time to UTC and then save that in your database.

For example, say you have a web application where the user must enter a date/time value. They would be specifying it in their local time. This date/time in ISO format would be sent to your server/API as 2022-08-02T18:00:00-04:00. This is their local date/time with their current time zone offset. Your app would then convert this to UTC, which would be 2022-08-02T22:00:00Z.
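
In C#, that conversion is a one-liner (a sketch using the BCL's DateTimeOffset):

```csharp
// the client's local date/time, with its current UTC offset
var local = DateTimeOffset.Parse("2022-08-02T18:00:00-04:00");

// normalize to UTC for storage
var utc = local.ToUniversalTime();
Console.WriteLine(utc.ToString("yyyy-MM-dd'T'HH:mm:ss'Z'")); // 2022-08-02T22:00:00Z
```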

Then when we need to return data to the client, we fetch it out of the database, which is in UTC, and send that UTC date/time to the client, where it can then convert it to its local date/time.

Sounds straightforward, right?

There are two reasons why I think this is the standard advice for storing date/times. The first is that you're standardizing the date/times in your database, which means you can query and compare against UTC. This allows you to sort, filter, etc., all on a standardized date/time, without any conversion at runtime.

The second reason I think this is standard advice is that people are only thinking about date/times in the past. And that's where the most significant problem lies: date/times in the future.

Time zones & Daylight Saving Time

A date/time in the future, converted to UTC as of right now, might change. How? Because Time Zone rules can change. Daylight Saving Time rules can also change.

When you do a conversion from a local date/time as of right now, at this very moment, you’re applying the current timezone and daylight saving time rules. But that’s not to say these rules will always be like this. They change more than you think!

This means that when you convert a future local date/time to UTC and save it to your database, you're basing the conversion on the rules at the time of persistence, not the rules in effect when that date/time is realized.

So what should you be saving in your database? Let’s walk through an example.

Here is an object with a future date/time with a time zone offset and a location in Toronto, Canada.

[Image: object with a future local date/time and time zone offset]

As mentioned, if the time zone rules or daylight saving time rules change, this could be incorrect, as the time zone offset could be wrong. How would we correct or convert this date/time to the correct value? How would we know what the correct value should be? You wouldn't be able to, as you have no information about what time zone was used; all you have is the time zone offset.

If we were storing UTC instead, we'd still have the exact same problem: we couldn't convert it to the correct time if any rules change.

[Image: the same object with the date/time stored as UTC]

Time Zone Database

One solution is the Time Zone Database (tzdb). If you're using a library for handling dates and times (e.g., JodaTime, NodaTime), it's likely already using the Time Zone Database under the hood. The Time Zone Database contains all time zone boundaries, UTC offsets, and daylight saving rules.

It also defines IANA time zone identifiers that you can reference whenever you specify a local date/time. In your database, you can then persist this time zone identifier along with your local date/time.

[Image: object with a literal local date/time and IANA time zone identifier]

You’ll notice the dateTime property doesn’t have the time zone offset anymore. It’s just the literal date/time the user specified. This is because now we have the IANA time zone name (America/Toronto) that relates to that literal date/time.

As mentioned earlier, the reason to standardize on UTC is so you can query your database effectively for sorting and filtering. We can still compute and store a UTC value at the time of persistence, in addition to the literal local date/time and time zone name.

[Image: object with the local date/time, IANA identifier, and converted UTC value]

As mentioned, time zone and daylight saving time rules can change. So what happens if they do? We can also record the version of the Time Zone Database used for the conversion. This way, we know which version and rules were in effect when we converted to UTC.

[Image: object also recording the tzdb version used for the conversion]
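
With NodaTime, the conversion and version capture look something like this (variable names are illustrative):

```csharp
using NodaTime;

var tzdb = DateTimeZoneProviders.Tzdb;
var zone = tzdb["America/Toronto"];

// the literal local date/time the user specified (no offset)
var dateTimeLocal = new LocalDateTime(2023, 3, 25, 18, 0);

// convert using today's rules, for sorting/filtering in the database
var dateTimeUtc = zone.AtLeniently(dateTimeLocal).ToInstant();

// the tzdb version the conversion was based on, persisted alongside
var tzdbVersion = tzdb.VersionId;
```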

When rules change, the Time Zone Database gets updated. We can then query our database for future date/times converted with an older tzdb version, redo the conversion of the dateTimeLocal using the IANA name, and update the UTC value in our database.
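
A hypothetical re-conversion job might look like this (the row shape and collection are assumptions for illustration):

```csharp
using System.Collections.Generic;
using System.Linq;
using NodaTime;

// assumed persisted shape: local date/time, IANA zone id, converted UTC, tzdb version
class FutureDateTimeRow
{
    public LocalDateTime DateTimeLocal { get; set; }
    public string TimeZoneId { get; set; } = "";
    public Instant DateTimeUtc { get; set; }
    public string TzdbVersion { get; set; } = "";
}

void ReconvertStaleRows(IEnumerable<FutureDateTimeRow> rows)
{
    var tzdb = DateTimeZoneProviders.Tzdb;

    // only rows converted under an older tzdb version need re-conversion
    foreach (var row in rows.Where(r => r.TzdbVersion != tzdb.VersionId))
    {
        var zone = tzdb[row.TimeZoneId];
        row.DateTimeUtc = zone.AtLeniently(row.DateTimeLocal).ToInstant();
        row.TzdbVersion = tzdb.VersionId;
    }
}
```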

Just store UTC?

While the standard advice to "just store UTC" can work, it only works for date/times in the past. If you need to store date/times in the future, you also want to record the time zone name and the version of the tzdb. For querying purposes, you'll still want to convert to UTC. And if any rules ever change, you can update the stored UTC date/times.
