As you start enqueuing more background jobs with Hangfire, you might need to increase the number of Consumers that can process jobs. Scaling Hangfire can be done in a couple of ways that I’ll explain in this post, along with one tip on what to be aware of when starting to scale out.
Check out my YouTube channel, where I post content that accompanies my posts, including a video covering everything in this post.
Producers & Consumers
First, let’s clear up how Hangfire works with producers and consumers.
A Hangfire producer creates background jobs but does not execute them. This is what you're doing when you use the BackgroundJobClient from the Hangfire library.
Once you call Enqueue, the job is stored in Hangfire's JobStorage. There are many job storage options you can use, such as SQL Server, Redis, and others.
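As a sketch of the producer side (assuming the Hangfire and Hangfire.SqlServer packages are installed; `IEmailService`, `SendWelcomeEmail`, and the connection string are hypothetical placeholders):

```csharp
using Hangfire;

// Configure storage once at startup; SQL Server is just one option.
GlobalConfiguration.Configuration
    .UseSqlServerStorage("<your-connection-string>");

// The producer only serializes the method call and writes it to JobStorage.
// Nothing is executed here; a Hangfire Server picks it up later.
var client = new BackgroundJobClient();
client.Enqueue<IEmailService>(svc => svc.SendWelcomeEmail(42));
```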
The Hangfire Server, which is a consumer, will get that job from JobStorage and then execute it.
There are two ways of scaling Hangfire to add more consumers: worker threads and additional Hangfire Servers.
A Hangfire Server can be hosted alongside ASP.NET Core, or entirely standalone in a Console app using the Generic Host, as in the example below.
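A minimal sketch of a standalone consumer using the Generic Host might look like this (assuming the Hangfire.AspNetCore and Hangfire.SqlServer packages; the connection string is a placeholder):

```csharp
using Hangfire;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

await Host.CreateDefaultBuilder(args)
    .ConfigureServices(services =>
    {
        // Point the server at the same JobStorage the producers write to.
        services.AddHangfire(config =>
            config.UseSqlServerStorage("<your-connection-string>"));

        // Registers the Hangfire Server (the consumer) as a hosted service.
        services.AddHangfireServer();
    })
    .Build()
    .RunAsync();
```

Because the server is just a hosted service reading from shared JobStorage, the same registration works whether it lives inside your ASP.NET Core app or in a separate Console app.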
The Hangfire Server uses multiple worker threads to perform background jobs, meaning it can process one background job per thread. This allows you to execute background jobs concurrently.
By default, the number of worker threads is five per processor core, with a maximum of 20:
Math.Min(Environment.ProcessorCount * 5, 20);
However, you can configure this by setting the WorkerCount option in AddHangfireServer().
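For example, a sketch of raising the worker count (the value 30 is arbitrary, not a recommendation):

```csharp
services.AddHangfireServer(options =>
{
    // Overrides the default of Math.Min(Environment.ProcessorCount * 5, 20).
    options.WorkerCount = 30;
});
```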
Ultimately, you're constrained by the host where the Hangfire Server is running. Whether you're running in a container, a VM, or on a physical server, you're limited by the CPU and memory of the host. That means you cannot arbitrarily set the WorkerCount to a very high number: with many concurrent CPU-intensive jobs, you could max out the CPU.
You'll have to monitor your application and its background jobs to determine the right number. The default is a sensible starting point.
The second option for scaling Hangfire is to simply run more Hangfire Servers.
Now, when running two instances of my Hangfire Server, the dashboard shows each server with 30 worker threads. This means the application can process 60 jobs concurrently.
Hangfire fully manages dispatching jobs to the appropriate server. You simply need to add servers to increase consumers, which ultimately increases the number of jobs you can process.
Once you start scaling out by increasing the overall worker count or adding Hangfire servers, you will want to pay attention to downstream services.
If, for example, your background jobs interact with a database, you're now adding more load to that database because you're performing more jobs concurrently.
Just be aware that adding more consumers can move the bottleneck to downstream services, which then become the constraint when scaling a system that uses Hangfire.