Azure Service Bus .NET SDK Deep Dive – Sender side batching

Shows how to batch multiple messages together into a single operation against Azure Service Bus. For more posts in this series, go to Contents.

In the article about atomic sends, we talked about the fact that each send operation executed against Azure Service Bus can fail and potentially has to be retried. Another important aspect that wasn't covered in that article is that every send operation also has a latency cost associated with it. Let's assume we want to send ten messages to a destination, like the following:

var client = new QueueClient(connectionString, destination);
for (var i = 0; i < 10; i++)
{
    var message = new Message();
    message.Body = Encoding.UTF8.GetBytes($"Deep Dive{i}");
    await client.SendAsync(message);
}

From my crappy home DSL over Wi-Fi, I get the following latencies to the closest Azure datacenters. Of course, these figures are not realistic, because you wouldn't run your code over Wi-Fi; ideally your code runs as PaaS directly in Azure, where the latency would be much lower.

But for the sake of this example, let's take the average of all those latencies, which is a nicely rounded 50 ms (keep in mind that averages are fairly meaningless in real-world scenarios, but I don't want to complicate things here unnecessarily). Sending the above messages will therefore take roughly 500 ms, because we are "charged" the average latency cost on every send operation. But I have good news for you. In cases where we know we are going to send multiple messages to the same destination, we can collect them together and send them in a single operation to Azure Service Bus.

var client = new QueueClient(connectionString, destination);
var messages = new List<Message>();
for (var i = 0; i < 10; i++)
{
    var message = new Message();
    message.Body = Encoding.UTF8.GetBytes($"Deep Dive{i}");
    messages.Add(message);
}
await client.SendAsync(messages);

Now all ten messages are batched together into a single operation. That operation might still need to be retried, but if it succeeds we pay the latency cost only once: 50 ms in our example instead of 500 ms, a saving of 450 ms in total. You might be thinking: great, let's throw as many messages as we can into a single batch. I have some news for you, though. There are limits:

  • You can batch up to 256 KB of messages into a single operation on the standard tier and up to 1 MB of messages into a single operation on the premium tier
  • For transactional sends (atomic operations within a TransactionScope) you can batch up to 100 messages into a single operation
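When the total payload isn't known up front, you can respect the size limit by chunking the messages into multiple batches. Here is a minimal sketch of that idea; the `MaxBatchSizeInBytes` constant, the safety margin, and the chunking logic are my own assumptions, not SDK features, though the `Message.Size` property it relies on is part of the client library:

```csharp
// Hypothetical chunking sketch: split messages into batches that stay
// under the standard-tier limit. MaxBatchSizeInBytes is an assumed value,
// deliberately below 256 KB to absorb per-message protocol overhead that
// Message.Size (body plus user properties) does not account for.
const long MaxBatchSizeInBytes = 192 * 1024;

var currentBatch = new List<Message>();
long currentBatchSize = 0;

foreach (var message in messages)
{
    // Flush the current batch before it would grow past the budget.
    if (currentBatch.Count > 0 && currentBatchSize + message.Size > MaxBatchSizeInBytes)
    {
        await client.SendAsync(currentBatch);
        currentBatch = new List<Message>();
        currentBatchSize = 0;
    }

    currentBatch.Add(message);
    currentBatchSize += message.Size;
}

// Send whatever is left over.
if (currentBatch.Count > 0)
{
    await client.SendAsync(currentBatch);
}
```

Each `SendAsync` call here is still an independent operation that can fail and be retried individually; the chunking only keeps every individual batch within the quota.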

So on the standard tier, the following code throws a MessageSizeExceededException because the total number of bytes we are sending to Azure Service Bus exceeds 256 KB.

var messages = new List<Message>();
for (var i = 0; i < 6500; i++)
{
    var message = new Message();
    message.Body = Encoding.UTF8.GetBytes($"Deep Dive{i}");
    messages.Add(message);
}

// will throw MessageSizeExceededException 
await client.SendAsync(messages);

And for transactional sends, the following code throws a QuotaExceededException regardless of the tier being used, because it batches more than 100 messages.

var messages = new List<Message>();
for (var i = 0; i < 101; i++)
{
    var message = new Message();
    message.Body = Encoding.UTF8.GetBytes($"Deep Dive{i}");
    messages.Add(message);
}

// will throw QuotaExceededException
using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    await client.SendAsync(messages);
    scope.Complete();
}
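If you need to send more than 100 messages transactionally, one option is to split them into chunks of at most 100. The sketch below is my own illustration, not an SDK feature, and it assumes `using System.Linq;` and `using System.Transactions;`; note that each chunk gets its own TransactionScope, so atomicity applies per chunk, not across all messages:

```csharp
// Hypothetical sketch: send a large message list transactionally in
// chunks of at most 100 messages. Each chunk is its own transaction,
// so a failure only rolls back the chunk it occurred in.
const int MaxMessagesPerTransaction = 100;

for (var offset = 0; offset < messages.Count; offset += MaxMessagesPerTransaction)
{
    var chunk = messages
        .Skip(offset)
        .Take(MaxMessagesPerTransaction)
        .ToList();

    using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
    {
        await client.SendAsync(chunk);
        scope.Complete();
    }
}
```

Whether losing cross-chunk atomicity is acceptable depends on your scenario; if it isn't, you have to stay within the 100-message limit per transaction.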

In the video below you can see the code in action against a standard tier.

By default, the underlying AMQP client implementation also does some batching, even if you are not using the API shown in this blog post. The difference, though, is that the batching over AMQP happens implicitly, based on predefined parameters, while here we explicitly instruct the client to batch multiple well-known operations together.

About the author

Daniel Marbach
