One of the really nice features of Event Store is the Persistent Subscriptions that were implemented in v3.2.0. I was previously using catch-up subscriptions but needed the ability to have many worker processes handle events from the same stream. First, let's take a look at a couple of the subscription models Event Store supports.
Catch Up Subscriptions
As mentioned, I previously used catch-up subscriptions for various use cases. One of them was sending emails when specific events occurred in the system. A worker process would subscribe to the $all event stream and handle incoming messages accordingly. The issue is that catch-up subscriptions are controlled by the client: the start position of the subscription is stored, and must be maintained, by the client. This means I had no built-in way to have multiple worker processes handle events from the same stream while processing each event only once. Notice I wrote built-in above. Yes, I could have created a shared data source that each worker process used to determine which messages were being handled by which process. I didn't want to go down this road. The other option would be to have multiple worker processes, but with each worker process handling only a single event or a small subset of events.
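To make the client-controlled position concrete, here is a minimal sketch (plain Python, not the actual Event Store client API; all names here are illustrative) of a catch-up subscriber that stores its own checkpoint and replays the stream from it:

```python
# Sketch of a catch-up subscription: the CLIENT owns the position.
# Every subscriber that starts from position 0 receives every event,
# which is why multiple identical workers would duplicate work.

class CatchUpSubscriber:
    def __init__(self):
        self.position = 0  # checkpoint lives on the client side

    def handle(self, event):
        # real code would send an email, update a projection, etc.
        self.position += 1  # the client must persist this itself

    def catch_up(self, stream):
        # replay everything from the stored checkpoint onward
        for event in stream[self.position:]:
            self.handle(event)

stream = ["OrderPlaced", "OrderShipped", "EmailRequested"]
sub = CatchUpSubscriber()
sub.catch_up(stream)  # processes all three events, position is now 3
```

A second subscriber created the same way would replay the same three events, which is exactly the duplication problem described above.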
Persistent Subscriptions
The competing consumers messaging pattern is implemented using Persistent Subscriptions. They differ from catch-up subscriptions in that the position of which event to publish is kept on the server rather than the client. If you have used RabbitMQ or ActiveMQ, Persistent Subscriptions in Event Store will feel similar. I think the title Competing Consumers really does illustrate how it works. You can have many worker processes (consumers), but only one of them will receive a given event from the server. Let's say we have 3 worker processes all connected to the same subscription. When an event is published from the server, only one worker process will receive that specific event, not all 3. In the catch-up subscription model, all 3 would receive the same event. What this allowed me to do in my email example is have multiple worker processes handle specific events. And if I wanted to create multiple subscriptions for a single event or a small subset of events, as I could with catch-up subscriptions, I can do the same with persistent subscriptions, but now with multiple worker processes as well.
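The difference from catch-up subscriptions can be sketched in a few lines. This is an illustrative simulation of competing consumers (the real server also considers buffer availability and consumer strategies; here a simple round-robin stands in for that):

```python
from itertools import cycle

def dispatch(events, consumer_names):
    """Deliver each event to exactly one consumer (round-robin)."""
    received = {name: [] for name in consumer_names}
    pick = cycle(consumer_names)
    for event in events:
        # only ONE consumer gets each event, unlike catch-up where
        # every subscriber would see every event
        received[next(pick)].append(event)
    return received

out = dispatch(["e1", "e2", "e3", "e4"], ["w1", "w2", "w3"])
# out["w1"] == ["e1", "e4"]; each event landed on exactly one worker
```

The key property is that the total number of delivered events equals the number of published events, no matter how many workers are connected.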
Groups
When you create a persistent subscription, you define a group name and the event stream you want to subscribe to. When your clients (consumers) want to use this subscription, they specify the same group name and stream.
I’ve rewritten the above thanks to Greg Young’s correction in the comments.
The server will dispatch up to N events to consumers at any given time. The number of events is configurable when connecting to the subscription: you define a buffer size, which is the number of in-flight messages the client is allowed. The client acknowledges each message once it is processed; it can also negatively acknowledge the message, or, if a timeout period expires without either, a retry can occur.
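The buffer/ack/retry mechanics described above can be modeled with a small sketch. This is a simplified simulation, not the actual Event Store implementation; `nak` here stands in for both an explicit negative acknowledgement and a timeout:

```python
from collections import deque

class SubscriptionDispatcher:
    """Toy model of server-side dispatch for a persistent subscription."""

    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.pending = deque()   # events waiting to be sent
        self.in_flight = {}      # event_id -> event awaiting ack/nak

    def publish(self, event_id, event):
        self.pending.append((event_id, event))

    def dispatch(self):
        # send events until the in-flight buffer is full
        sent = []
        while self.pending and len(self.in_flight) < self.buffer_size:
            event_id, event = self.pending.popleft()
            self.in_flight[event_id] = event
            sent.append(event_id)
        return sent

    def ack(self, event_id):
        del self.in_flight[event_id]  # done; never redelivered

    def nak(self, event_id):
        # also models a timeout: put the event back for a retry
        self.pending.append((event_id, self.in_flight.pop(event_id)))

d = SubscriptionDispatcher(buffer_size=2)
for i in range(4):
    d.publish(i, f"event-{i}")
d.dispatch()   # sends events 0 and 1; buffer is now full
d.ack(0)       # frees one in-flight slot
d.nak(1)       # event 1 goes back in the queue for retry
d.dispatch()   # sends events 2 and 3; event 1 awaits redelivery
```

The larger the buffer, the less the per-message send/ack latency dominates, which is the performance point Greg makes in the comments below.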
5 thoughts to “Event Store Persistent Subscriptions”
One slight correction:
“The server will dispatch only one event to a consumer at any given time. The event at this point is marked as in-process. The client will acknowledge it has processed the message. It can also not acknowledge the message, or if a timeout period expires without either a retry can occur.”
There are N messages in flight at any given time (configurable); this is crucial for performance (otherwise you pick up the latency of the send->ack per message). Also, you don’t need to synchronously ack/nak; you can, for example, push them to async operations and ack/nak on completion.
Thanks Greg, appreciate the correction. I’ve rewritten that section to clarify.
Also interested in the quirks in the API, maybe we can fix some of them!
Well, it’s not so much the API per se, more to do with debugging in VS and the message timeout.