Biggest scam in software dev? Best Practices.

What’s the biggest scam in tech that is deemed acceptable? Best practices. Everything has trade-offs, and your context matters. Let me walk through several examples to break out of the dogma around best practices.

YouTube

Check out my YouTube channel, where I post all kinds of content accompanying my posts, including this video showing everything in this post.

Best Practices

The topic for this video/post came up when I came across this tweet from David Fowler.

I couldn’t agree more. Too often, best practices, principles, patterns, etc., are treated as rules without considering the reasons they exist or how they apply in your context.

To illustrate this, let’s look at an older ASP.NET Core template example:
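Roughly, the pre-.NET 6 template looked like this (a sketch of the well-known template shape, not necessarily the post’s exact snippet):

```csharp
// Sketch of the older ASP.NET Core template: Program + Startup.
public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
}

public class Startup
{
    // Register types with the DI container.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();
    }

    // Define the middleware pipeline an HTTP request flows through.
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }

        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}
```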

If you’re unfamiliar with the above code, don’t worry. Here’s the gist.

The Program class is the entry point when our process starts. It creates the default web host and tells it to use the Startup class.

The Startup class defines all of our dependencies used in the ConfigureServices method, and then the Configure method defines all the middleware that an HTTP request will route through.

Now let’s look at a newer version that consolidates this.
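The newer minimal hosting model condenses all of that into a single file (again, a sketch of the template shape):

```csharp
// Sketch of the newer minimal hosting template: one file, no Startup class.
var builder = WebApplication.CreateBuilder(args);

// Same DI registrations as before, done directly on the builder.
builder.Services.AddControllers();

var app = builder.Build();

// Same middleware pipeline, defined procedurally.
app.MapControllers();

app.Run();
```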

This new version is much more condensed and does away with the Startup class. It takes a more procedural approach but still specifies the types to register for DI and all the middleware. Ultimately, it’s doing the same thing as the prior version.

When this new template came out, there were a lot of reactions similar to this:

If you follow any principle as the law without being pragmatic, you’ll land in more complexity than you started with.

The Startup class just added indirection, and for what reason? So it could separate registering types from configuring middleware? What benefit does that have? In the example above, none, in my opinion. The Startup class is being called by the ASP.NET Core runtime. That’s it. If, however, in your context you were registering types for DI for ASP.NET Core and also for some other ServiceCollection used in a separate, shared project, then sure, it could make sense.

The template is just an illustration. It’s not a best practice.

Example

Here’s another example from the world of messaging. My post/video about McDonald’s Journey to Event-Driven Architecture shows how they were using DynamoDB as a fallback when they couldn’t publish to Kafka.

This lets them avoid losing messages that need to be published to a topic in Kafka. Separately, an AWS Lambda pulls from DynamoDB and attempts to publish to the Kafka topic. It’s basically a retry mechanism.
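The shape of that fallback might look something like this (a sketch of the idea; the types and method names here are hypothetical, not McDonald’s actual code):

```csharp
// Hypothetical sketch of publish-with-durable-fallback.
public interface IKafkaProducer
{
    Task ProduceAsync(string topic, byte[] message);
}

public interface IDurableFallbackStore // e.g., backed by DynamoDB
{
    Task SaveAsync(string topic, byte[] message);
}

public class ReliablePublisher
{
    private readonly IKafkaProducer _kafka;
    private readonly IDurableFallbackStore _fallback;

    public ReliablePublisher(IKafkaProducer kafka, IDurableFallbackStore fallback)
    {
        _kafka = kafka;
        _fallback = fallback;
    }

    public async Task PublishAsync(string topic, byte[] message)
    {
        try
        {
            await _kafka.ProduceAsync(topic, message);
        }
        catch (Exception)
        {
            // Kafka is unavailable: durably store the message so a
            // separate process (their Lambda) can retry publishing later.
            await _fallback.SaveAsync(topic, message);
        }
    }
}
```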

Now if you were to listen to “best practices”, you’d likely find out that a common solution is to use the Transactional Outbox (Outbox Pattern).

Using an outbox means that you persist outgoing events that should be published to your primary database along with your business data within the same transaction. This way, you always reliably persist your business data and outbound events in an atomic operation. Separately you have a process that pulls the events from the “outbox” in your primary database and publishes them to your broker.
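Sketched with EF Core, the write side of an outbox looks something like this (names like OutboxMessage and PlaceOrderAsync are my own, for illustration):

```csharp
// Hypothetical sketch of the transactional outbox write side with EF Core.
public async Task PlaceOrderAsync(Order order, AppDbContext db)
{
    db.Orders.Add(order);

    // The outgoing event goes into an "outbox" table in the SAME
    // transaction as the business data, so both commit or neither does.
    db.OutboxMessages.Add(new OutboxMessage
    {
        Type = "OrderPlaced",
        Payload = JsonSerializer.Serialize(new { order.Id }),
        OccurredAt = DateTime.UtcNow
    });

    await db.SaveChangesAsync(); // single atomic commit

    // A separate process polls OutboxMessages, publishes each row to the
    // broker, and deletes (or marks) it once the broker acknowledges.
}
```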

So does McDonald’s have a bunch of terrible developers because they aren’t following a “best practice”? No. Obviously not. There are trade-offs, of course.

With the outbox pattern, you could be adding a significant number of additional writes to your primary database. Also, depending on the process that publishes events, it could add even more load because of queries against the outbox. Regardless, you are absolutely going to add more load to your primary database.

Is this a big deal in your context? If it is, maybe the outbox pattern isn’t a great idea.

If that’s the case, your option could be to fall back to some other durable storage, in their case DynamoDB. Are you guaranteed not to lose events? No, because DynamoDB could also be unavailable at the same time as Kafka, say due to network issues. But that may be a trade-off they are willing to accept.

Context is King

There’s also a constant comparison of large companies in a positive or negative way. Amazon is often the example used.

Well Amazon does it, so we should too.

I think microservices were an example of this: large organizations used microservices to decompose a large system because of organizational constraints, with individual small teams owning services. Then microservices started getting adopted by small teams within small companies, ending up with 20 microservices and 3 developers.

There’s also the flip side, where the comment is

We’re not Amazon so I don’t need to do that

But everything lies somewhere in the middle. Absolutely you’re not Amazon, but there likely are things to be learned that can apply to your context.

Context is King.

Your context matters in determining what is appropriate based on your architecture, design, team, etc. Not everything is a “best practice” to you.

I wish we could, as an industry, rename “best practices” to “maybe this is a good idea if you have this set of constraints, so think about it”.

Join!

Developer-level members of my YouTube channel or Patreon get access to a private Discord server to chat with other developers about Software Architecture and Design and access to source code for any working demo application I post on my blog or YouTube. Check out my Patreon or YouTube Membership for more info.

Follow @CodeOpinion on Twitter

Software Architecture & Design

Get all my latest YouTube Videos and Blog Posts on Software Architecture & Design

Underrated skill as a developer

What do you think is an important skill to have when you’re in, or want to be in, a position that requires making decisions around software architecture and design? There was a phase in the middle of my career that changed my point of view. I’d like to explain an underrated skill that I credit for helping me make various technical decisions, including those around architecture and design.


Budget


This tweet resonated with me because there was a phase in my career where I created proposals, quotes, and estimates for software development projects. If we landed the work, I then executed on it: building the software and managing the entire project lifecycle.

Understanding the budget and the actual costs (mainly time) while the project was being worked on was eye-opening for someone who had never really thought about the bottom line.

Independent consultants who do project work will understand this. However, if you aren’t exposed to revenues and costs, you’re likely oblivious to the bottom line.


Bottom Line

There are many ways to structure software development projects: around iterative design, change orders, value-based pricing, etc. I’m using a simple example to get the gist across.

Let’s say you are an independent contractor and provided a quote to a client for a software project. You’ve established some deliverables for some fee. How you came up with that fee (your revenue) was likely based on the number of hours you think it will take to complete the work, multiplied by an hourly rate.

40 Hours x $100/hr = $4000

So your fee for the deliverable is $4000, based on the assumption it will take 40 hours of work. What happens when the project ends up taking 80 hours? You still get paid $4000, but you’ve reduced your hourly rate to $50/hr. You’re affecting your bottom line.

This may seem obvious and trivial. It is. But most developers have little to no thought or insight into how their work affects the bottom line. Depending on your context, knowing how it affects the bottom line could be irrelevant. I’m not suggesting you absolutely must know. However, it could significantly impact your decision-making if you should be aware but don’t have the skill as a developer to understand the trade-offs.

For example, if you’re working in a startup with a limited runway, time is of the essence. Making decisions and moving forward is critical. Bike shedding gets you nowhere. How often have you worked as a developer for a company, seeing people bike shed for hours or days over something that has little to no value? It’s just a waste of time.

I genuinely believe in Parkinson’s law and some of the corollaries.

“Work complicates to fill the available time”

The reverse can also be true.

“Work contracts to fit in the time we give it.”

Barber, Cam. “How to write a speech in 15 minutes”. Vivid Method. Retrieved 11 November 2014.

If you have a task that should take 4 hours, but you only have 1 hour available, you’ll find a way to get it done within an hour.

Value

Why does thinking about revenue, costs, and time matter? Because it affects ROI, and my decision-making is often about ROI. As much as developers love thinking about code, tools, libraries, and frameworks, you’re likely writing software to provide value. Being able to make decisions around this is a skill as a developer.

If you’ve used Scrum and done sprint planning, provided story points, or done any type of estimating, someone else is determining the ROI. They’re deciding, based on those estimates, whether something is worth implementing, whether the ROI is high enough. You won’t work on a feature that will take a long time to develop but has little value.

Technical Debt

When I talk about technical debt, I’m not talking about what someone thinks is crappy code.

Technical debt is an explicit choice.

You’re deciding to incur debt now so you can get further ahead, right now. That comes at the cost of needing to repay that debt in the future. In a startup, you often take on technical debt now, so you have a future. You don’t want to gold plate anything. You want to develop to “good enough” and move forward.

If you don’t have any concept of costs, time, or value, how could you decide to incur technical debt?


Abstractions to easily swap implementations? Not so fast.

Why do you create an abstraction? One reason is to simplify the underlying concept and API. Another reason, probably more common, is that the internal implementation might change. While this can be true, it’s not always as straightforward as you’d think. I will give a couple of examples of things to think about when you’re designing an API.


Expected Behavior

I will use a Repository as the example in most of this post since it’s pretty common and (generally) understood.
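A typical IRepository looks something like this (a sketch of the common shape, not necessarily the post’s exact code):

```csharp
// A typical generic repository abstraction (sketch).
public interface IRepository<T> where T : class
{
    Task<T?> GetByIdAsync(Guid id);
    Task<IReadOnlyList<T>> ListAsync();
    Task AddAsync(T entity);
    Task UpdateAsync(T entity);
    Task DeleteAsync(T entity);
}
```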

So we have this repository interface to define various ways we can fetch data and add/update/delete data from a data store. One important aspect to note is that this abstraction also implies we’re working with collection-based data sets.

As for implementations, generally there will be one using an ORM such as Entity Framework, or the database SDK for whichever type of database you’re using, such as a document store.

A common example used for another implementation is a cache. This could be in memory or a distributed cache, but I often hear it used as an example of why having an abstraction such as this is useful.

Based on the example of the typical IRepository above, it could be a terrible idea because it doesn’t support the same expected behavior as our other implementation hitting a database. Why? Because the cache is inherently stale.

If your calling code (consumer) depends on the IRepository, previously backed by the direct database implementation, and you swap in the cached repository, the expected behavior changes for the consumer. For example, if you were trying to read your own write, you might not, because your cache is not consistent.

Expectation matters. If you thought your repository was fully consistent and you swapped the implementation and it’s not, that could have implications you’d need to address. In this situation, you’d more likely want to be explicit about when you want cached data, either through a different interface (ICachedReadRepository) or through parameters/options on the called method to make it explicit that it’s OK to return cached (stale) data.
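For example, staleness can be made part of the contract itself (a sketch; ICachedReadRepository is the suggested name, the members are my assumptions):

```csharp
// Being explicit that cached (possibly stale) reads are a different contract.
public interface ICachedReadRepository<T> where T : class
{
    // The maxStaleness parameter makes it explicit that the consumer
    // accepts data up to that age; no one can be surprised by stale reads.
    Task<T?> GetByIdAsync(Guid id, TimeSpan maxStaleness);
}
```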

Supported Model

To continue with the repository example, we are returning and persisting entities based on the current state of those entities, whether with a relational database or a document store. But how would this abstraction hold up if we instead wanted to use event sourcing as a way to persist state?

As an example, take persisting the current state of a Product entity in a relational database.

This same entity using event sourcing would be persisting the events in a stream.
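To make the contrast concrete, here’s a sketch (the type and event names are my own, for illustration):

```csharp
// Current state: one row/document that is overwritten on each change.
public class Product
{
    public Guid Id { get; set; }
    public string Name { get; set; } = "";
    public int QuantityOnHand { get; set; }
}

// Event sourcing: a stream of facts; current state is derived by replay.
public record ProductReceived(Guid ProductId, int Quantity);
public record ProductShipped(Guid ProductId, int Quantity);

// Stream for one product: ProductReceived(5), ProductShipped(2), ...
// QuantityOnHand = sum of quantities received - sum of quantities shipped.
```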

This is a very different way of persisting state. The abstraction you create will be based on the implementation you have in mind. This is typical when you end up creating an abstraction after you’ve created the implementation.

Abstractions you create have a certain model in mind. That won’t fit every model. That’s entirely OK. However, realize that your abstractions are based on your understanding of the implementations you have.

Here’s what an event-sourced repository and implementation might look like. Very different from the previous one, as we are concerned with event streams, not tables or collections.
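Roughly (a sketch with hypothetical names, not the post’s exact code):

```csharp
// Sketch of an event-sourced repository: streams of events, not entities.
public interface IEventSourcedRepository<T>
{
    // Load = read the stream and replay its events to rebuild state.
    Task<T?> LoadAsync(string streamId);

    // Save = append the aggregate's new, uncommitted events to its stream.
    // expectedVersion guards against concurrent writers (optimistic concurrency).
    Task AppendAsync(string streamId, IEnumerable<object> newEvents, long expectedVersion);
}
```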

The fundamental idea of appending events to an event stream in comparison to persisting current state changes our abstraction.

The abstraction you create will fit a specific model.

Implementation Leak

If you create an abstraction with a single implementation in mind, you’re going to build it around that model and that implementation. If you create an abstraction after the fact based on a single implementation, you’ll end up in the same place.

There isn’t anything wrong with this. However, the idea that abstractions are some magical tool that allows you to swap out concrete implementations seamlessly isn’t true. Where this leads is the question, if you’re creating an abstraction for every implementation, why? You’re likely getting it wrong, as you only have a single implementation your abstraction is based around.

Often it’s not until you’ve built many implementations that your abstraction simplifies the underlying concepts that you’re trying to abstract.
