Improved batch completion loop for Azure Service Bus

In the last post, we created a dedicated completion loop inside a worker thread to complete messages in batches. There are a few obvious and not so obvious things we can improve in our code. But first, what did it look like again?

// same as before
var batchCompletionTask = Task.Run(async () => {
   while(!token.IsCancellationRequested) {
      var lockTokens = new Guid[100];
      int numberOfItems = lockTokensToComplete.TryPopRange(lockTokens);
      if(numberOfItems > 0) {
         // only pass the tokens that were actually popped; the rest of the
         // array still contains Guid.Empty entries
         await receiveClient.CompleteBatchAsync(lockTokens.Take(numberOfItems)).ConfigureAwait(false);
      }
      await Task.Delay(TimeSpan.FromSeconds(5), token).ConfigureAwait(false);
   }
});

We could make a simple assumption: if the numberOfItems returned by TryPopRange equals the maximum lock token range we want to complete in batches (here one hundred), then we potentially have more things to complete, and we can try to avoid the delay.

// same as before
var batchCompletionTask = Task.Run(async () => {
   while(!token.IsCancellationRequested) {
      // same as before
      if(numberOfItems == lockTokens.Length) {
         continue;
      }
      await Task.Delay(TimeSpan.FromSeconds(5), token).ConfigureAwait(false);
   }
});

So if we were able to fill the lock token array, we assume there is more to fetch and continue the while loop. If the ConcurrentStack happens to be empty after we continue, TryPopRange returns zero and we fall through to the Task.Delay. If there are still lock tokens to fetch, we will keep fetching for at least another round. What else could we improve?

We randomly picked a Guid array size of one hundred. An Azure Service Bus request can be up to 256 KB in size. A Guid has a size of 16 bytes, so one hundred Guids amount to roughly 1.6 KB. We could easily increase the array size to a much larger number. For example, 5000 Guids would mean a payload of approximately 80 KB, which is roughly a third of the maximum request size. This seems to be a reasonable trade-off. Of course, you'd need to base your own analysis on your application's needs, your Service Bus tier, and the data center you are connecting to.
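The arithmetic above can be written down as a quick back-of-the-envelope check (the 256 KB figure is the request size limit mentioned above; treat the numbers as approximate):

// back-of-the-envelope payload math for the lock token batch
const int GuidSizeInBytes = 16;
const int MaxRequestSizeInBytes = 256 * 1024;

// 100 tokens:  100  * 16 =  1,600 bytes (~1.6 KB)
// 5000 tokens: 5000 * 16 = 80,000 bytes (~80 KB, about a third of the limit)
int PayloadSize(int numberOfTokens) => numberOfTokens * GuidSizeInBytes;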

We could also start tweaking the delay interval. Are five seconds too long when we want messages completed faster? Or should we increase the interval under lighter load? These questions are hard to answer without the concrete non-functional requirements of your application or system.
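One way to sidestep a fixed answer is to make the interval adaptive: poll again quickly while batches are full, and back off while the stack stays empty. The following is only a sketch; minDelay and maxDelay are illustrative tuning knobs, not values from this series, and receiveClient and lockTokensToComplete are the same fields used in the loop above:

// sketch: adaptive delay between completion rounds
var minDelay = TimeSpan.FromMilliseconds(500);
var maxDelay = TimeSpan.FromSeconds(10);
var delay = minDelay;

while(!token.IsCancellationRequested) {
   var lockTokens = new Guid[5000];
   int numberOfItems = lockTokensToComplete.TryPopRange(lockTokens);
   if(numberOfItems > 0) {
      await receiveClient.CompleteBatchAsync(lockTokens.Take(numberOfItems)).ConfigureAwait(false);
      delay = minDelay; // work was available, poll again soon
   }
   else {
      // idle round: double the delay up to the maximum to reduce empty polls
      delay = TimeSpan.FromTicks(Math.Min(delay.Ticks * 2, maxDelay.Ticks));
   }
   await Task.Delay(delay, token).ConfigureAwait(false);
}

Whether such backoff is worth the added complexity again depends on your load profile and latency requirements.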

We have shed light on message receiving and mostly batch completion outside the receive loop. In the next installment, we will circle back to the message receive loop and see how we can get even more receive performance and how this potentially impacts our completion logic.


About the author

Daniel Marbach
