Let’s try batch completion of messages on Azure Service Bus

In the last post, we deferred the message completion task into the background to remove the additional latency from the message receive callback. This time we want to save even more latency by batching multiple message completions together into a single Azure Service Bus broker call. It turns out this is possible with the Azure SDK.

The MessageReceiver has a method called CompleteBatchAsync which accepts an IEnumerable<Guid> called lockTokens. But where do we get these lock tokens from? It turns out the BrokeredMessage type that is passed into the OnMessageAsync callback has a GUID property called LockToken. The lock token is a GUID generated by the service bus which uniquely identifies a message on the broker. So if we track the lock tokens of the messages we receive, we can execute CompleteBatchAsync for all of them at a later stage. Let’s see how this changes our message receive logic.

var lockTokensToComplete = new ConcurrentStack<Guid>();
 
receiveClient.OnMessageAsync(async message =>
{
   try {
      await DoSomethingWithTheMessageAsync().ConfigureAwait(false);
      lockTokensToComplete.Push(message.LockToken);
   }
   catch(Exception) {
      // in case of an exception make the message available again immediately
      await message.AbandonAsync().ConfigureAwait(false);
   }
},..)

With this small change, we push the lock tokens of successfully processed messages onto a ConcurrentStack. Only when a message fails do we abandon it directly inside the message receive callback. The assumption we are making here is that contention on the ConcurrentStack will cost far less than the latency of a remote call to complete a message. But at some point, we have to complete the messages. Can we do it directly in the callback, as illustrated in the following code?


receiveClient.OnMessageAsync(async message =>
{
   // same as before
   if(lockTokensToComplete.Count > 100) {
      var lockTokens = new Guid[100];
      // use the returned count: a concurrent callback may have popped
      // tokens between the Count check and this call, leaving fewer than 100
      var poppedTokens = lockTokensToComplete.TryPopRange(lockTokens);
      if(poppedTokens > 0) {
         await receiveClient.CompleteBatchAsync(lockTokens.Take(poppedTokens)).ConfigureAwait(false);
      }
   }
   // same as before
},..)

We’ve chosen a ConcurrentStack here so we can pop ranges of lock tokens in a single call. This might seem a bit strange, since we’d be completing messages in a different order than we received them. Normally that is not a big deal: in concurrent scenarios we should not make any assumptions about ordering anyway.
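To see the last-in-first-out behavior of TryPopRange in isolation, here is a small standalone sketch (the console program scaffolding is mine, not from the post):

```csharp
using System;
using System.Collections.Concurrent;

class TryPopRangeDemo
{
    static void Main()
    {
        var stack = new ConcurrentStack<Guid>();
        var first = Guid.NewGuid();
        var second = Guid.NewGuid();

        stack.Push(first);
        stack.Push(second);

        // TryPopRange fills the buffer starting with the most recently
        // pushed item, so tokens come out in reverse receive order.
        var buffer = new Guid[2];
        int popped = stack.TryPopRange(buffer);

        Console.WriteLine(popped);              // 2
        Console.WriteLine(buffer[0] == second); // True: LIFO order
    }
}
```

Since the broker does not care in which order lock tokens arrive in a batch completion call, this reversed order is harmless here.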

With a lot of messages in the queue, and a steady stream of new messages coming in, this code improves latency considerably. We only make a remote call every hundred messages, so in theory we save 99% of the latency we’d have when completing every message individually. What a success! But wait, there is a problem in this code: what if we only receive one message every ten seconds?
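To make the 99% claim concrete, here is a back-of-the-envelope calculation. The ten-millisecond round trip is an assumed number for illustration, not a measurement:

```csharp
const double assumedRoundTripMs = 10; // hypothetical broker round-trip latency
const int batchSize = 100;

// completing each message individually: one remote call per message
double individualMs = batchSize * assumedRoundTripMs; // 1000 ms per 100 messages

// batch completion: a single remote call per hundred messages
double batchedMs = assumedRoundTripMs; // 10 ms per 100 messages

double savings = 1 - batchedMs / individualMs; // 0.99, i.e. 99% of the completion latency saved
```

The saving is independent of the actual round-trip time; what matters is that we replace a hundred remote calls with one.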

Assuming a peek lock duration of roughly thirty seconds, at one message every ten seconds it would take over sixteen minutes to collect a hundred lock tokens. The locks would almost always expire before we get to complete the messages, so the messages would be processed over and over again until they eventually get dead-lettered by Azure Service Bus. That’s some bad hat harry!

So how can we make this troublemaker better? Well, I think you know it already: that’s a topic for another post.

About the author

Daniel Marbach
