
Don’t spoil your Thanksgiving sales with naive Azure Service Bus client code

Edit: The initial title was: “Naive receiving of messages on Azure Service Bus destroys Thanksgiving sales”. I changed it because I don’t want to imply that Azure Service Bus as a service is not behaving correctly. Sorry if this caused any inconvenience.

The Azure Service Bus SDK provides a lot of very smart functionality to build robust libraries, frameworks and application code on top of Azure Service Bus. In the most simplistic scenario, you write code like this:

var receiveClient = QueueClient.CreateFromConnectionString(connectionString, queueName, ReceiveMode.PeekLock);
receiveClient.OnMessageAsync(async message =>
{
   // do something with the message
   await message.CompleteAsync().ConfigureAwait(false);
},
new OnMessageOptions { AutoComplete = false, MaxConcurrentCalls = 1 });

This code instructs the SDK to push messages to the delegate passed into OnMessageAsync as soon as they become available, with a maximum concurrency of one, meaning your delegate will be executed at most once concurrently. So if more messages are available in the queue, the OnMessageAsync delegate will only be invoked again once the previous invocation has finished. Since AutoComplete is set to false, it is the responsibility of the delegate implementation to call CompleteAsync (completes the message) or AbandonAsync (puts the message back into the queue) on the message.

The code sample shown here uses the PeekLock mode. In PeekLock mode, the client sends a request to the server to receive a message and sends another request to complete it. During the lock duration (which can be configured) the message is not visible to other clients connected to the same queue. It becomes visible again if a client fails to complete or abandon the message within the given PeekLock duration.

Even for this simplistic sample we already need to know quite a bit. Assuming the delegate takes one second to execute, the code above can only handle a message every second. This means we have a throughput of one message per second (1 msg/s).

Imagine the receive client consumed our Thanksgiving special sales order queue and our customers sent in 2000 orders per second (yeah, our imaginary products are that amazing 😉 ). In one minute of our special Thanksgiving sale we would receive 120,000 orders, but we would only be able to consume 60 of them. Working through that one-minute backlog of 120K orders would take us roughly 33 hours! Now imagine we had a lucky Thanksgiving rush and actually sold products at that rate for ten minutes. We would need almost 14 days to process those orders. I’m sure this code wouldn’t please our bosses, and it certainly wouldn’t please our imaginary customers!
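To make the arithmetic concrete, here is a quick back-of-the-envelope sketch (my own illustration, not code from the original post):

// Rough throughput math for the naive receiver above (1 msg/s vs. 2000 orders/s).
const double incomingOrdersPerSecond = 2000;
const double processedOrdersPerSecond = 1; // MaxConcurrentCalls = 1, ~1 second per message

var rush = TimeSpan.FromMinutes(10);
var backlog = (incomingOrdersPerSecond - processedOrdersPerSecond) * rush.TotalSeconds;
var catchUp = TimeSpan.FromSeconds(backlog / processedOrdersPerSecond);

Console.WriteLine($"Orders left in the queue after the rush: {backlog:N0}");   // ~1.2 million
Console.WriteLine($"Time needed to drain them: {catchUp.TotalDays:N1} days");  // ~13.9 days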

With our snail-paced order processing we’d probably lose all of those orders and get really bad reviews online. We can certainly do better, but that is the topic of the next post.

Context Matters

Async/await makes asynchronous code much easier to write because it hides away a lot of the details. Many of these details are captured in the SynchronizationContext which may change the behavior of your async code entirely depending on the environment where you’re executing your code (e.g. WPF, Winforms, Console, or ASP.NET). By ignoring the influence of the SynchronizationContext you may run into deadlocks and race conditions.

The SynchronizationContext controls how and where task continuations are scheduled, and there are many different contexts available. If you’re writing a WPF application, or building a website or an API with ASP.NET, you’re already using a special SynchronizationContext that you should be aware of.
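To illustrate the deadlock risk with a hedged example (mine, not from the post; the handler and method names are made up), blocking on asynchronous code from within such a context is the classic trap:

// Hypothetical UI or classic ASP.NET handler. Calling .Result blocks the context's
// only thread, while the continuation after 'await' is queued back onto that same
// captured SynchronizationContext, so neither side can make progress.
public string LoadGreeting()
{
    return LoadGreetingAsync().Result; // sync-over-async: deadlocks on WPF/WinForms/legacy ASP.NET
}

private async Task<string> LoadGreetingAsync()
{
    await Task.Delay(100); // without ConfigureAwait(false) the continuation needs the captured context
    return "Hello";
}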

SynchronizationContext in a console application

To make this less abstract, let’s have a look at some code from a console application:

public class ConsoleApplication
{
    public static void Main()
    {
        Console.WriteLine($"{DateTime.Now.ToString("T")} - Starting");
        var t1 = ExecuteAsync(() => Library.BlockingOperation());
        var t2 = ExecuteAsync(() => Library.BlockingOperation());
        var t3 = ExecuteAsync(() => Library.BlockingOperation());

        Task.WaitAll(t1, t2, t3);
        Console.WriteLine($"{DateTime.Now.ToString("T")} - Finished");
        Console.ReadKey();
    }

    private static async Task ExecuteAsync(Action action)
    {
        // Execute the continuation asynchronously
        await Task.Yield();  // The current thread returns immediately to the caller
                             // of this method and the rest of the code in this method
                             // will be executed asynchronously

        action();

        Console.WriteLine($"{DateTime.Now.ToString("T")} - Completed task on thread {Thread.CurrentThread.ManagedThreadId}");
    }
}

Where Library.BlockingOperation() may be a third-party library we’re using that blocks the thread. It can be any blocking operation, but for testing purposes you can use Thread.Sleep(TimeSpan.FromSeconds(2)) as an implementation.
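For completeness, a minimal stand-in for that library (an assumption for testing, not the actual dependency) could look like this:

public static class Library
{
    // Simulates a blocking call such as synchronous I/O or CPU-bound work.
    public static void BlockingOperation()
    {
        Thread.Sleep(TimeSpan.FromSeconds(2));
    }
}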

When we run the application, the output looks like this:

16:39:15 - Starting
16:39:17 - Completed task on thread 11
16:39:17 - Completed task on thread 10
16:39:17 - Completed task on thread 9
16:39:17 - Finished

In the sample, we create three tasks that block the thread for some period of time. Task.Yield forces a method to be asynchronous by scheduling everything after it (called the _continuation_) for execution while immediately returning control to the caller. As you can see from the output, thanks to Task.Yield all three operations ended up executing in parallel, resulting in a total execution time of just two seconds.

Now let’s port this code over to ASP.NET.

Continue reading

Avoid ThreadStatic, ThreadLocal and AsyncLocal. Float the state instead!

In the article on The dangers of ThreadLocal I explained how the introduction of async/await forces us to unlearn what we perceived to be true in the “old world”, when threads dominated our software lingo. We’re now at a point where we need to re-evaluate how we approach thread safety in our codebases while using the async/await constructs. In today’s post-thread era, we should strive to remove all thread-local (task-local) state and let the state float into the code which needs it. Let’s take a look at how we can implement the idea of floating state and make our code robust and ready for concurrency.

Continue reading
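To make the idea of floating state a bit more tangible before you click through, here is a minimal sketch (my own illustration; Order, the correlation id, and the helper methods are made up):

// The per-operation state (here a correlation id) floats through the call chain
// as a parameter instead of living in ThreadStatic/ThreadLocal/AsyncLocal storage.
public class Order { }

public class OrderProcessor
{
    public async Task ProcessAsync(Order order)
    {
        var correlationId = Guid.NewGuid(); // created where the operation starts

        await SaveAsync(order, correlationId).ConfigureAwait(false);
        await PublishEventAsync(order, correlationId).ConfigureAwait(false);
    }

    private Task SaveAsync(Order order, Guid correlationId) { /* persist */ return Task.FromResult(0); }
    private Task PublishEventAsync(Order order, Guid correlationId) { /* notify */ return Task.FromResult(0); }
}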

The dangers of ThreadLocal

Languages and frameworks evolve. We as developers have to learn new things constantly and unlearn already-learned knowledge. Speaking for myself, unlearning is the most difficult part of continuous learning. When I first came into contact with multi-threaded applications in .NET, I stumbled over the ThreadStatic attribute. I made a mental note that this attribute is particularly helpful when you have static fields that should not be shared between threads. Around the time the .NET Framework 4.0 was released, I discovered the ThreadLocal<T> class and how it does a better job of assigning default values to thread-specific data. So I unlearned the ThreadStaticAttribute, favoring ThreadLocal<T> instead.

Fast forward to some time later, when I started digging into async/await. I fell victim to the belief that thread-specific data would still work the same way. So I was wrong, again, and had to unlearn, again! If only I had known about AsyncLocal earlier.

Let’s learn and unlearn together!
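Here is a minimal sketch (my own, not from the post) of the kind of surprise async/await springs on thread-specific data:

// ThreadLocal<T> is tied to the physical thread, so its value is typically gone
// after an await that resumes on a different thread pool thread.
// AsyncLocal<T> flows with the ExecutionContext and survives the await.
static readonly ThreadLocal<string> ThreadLocalValue = new ThreadLocal<string>();
static readonly AsyncLocal<string> AsyncLocalValue = new AsyncLocal<string>();

static async Task RunAsync()
{
    ThreadLocalValue.Value = "set before await";
    AsyncLocalValue.Value = "set before await";

    await Task.Delay(100).ConfigureAwait(false); // may resume on another thread

    Console.WriteLine($"ThreadLocal: {ThreadLocalValue.Value ?? "<not set>"}");
    Console.WriteLine($"AsyncLocal:  {AsyncLocalValue.Value}");
}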

If you want to learn more about async/await and why you shouldn’t use ThreadStatic or ThreadLocal with async/await, I suggest you read the full blog post originally posted on the Particular blog.

TransactionScope and Async/Await. Be one with the flow!

I’m doing a series of async/await-related blog posts on the Particular blog. This one might also be interesting for you.

You might not know this, but the 4.5.0 version of the .NET Framework contains a serious bug regarding System.Transactions.TransactionScope and how it behaves with async/await. Because of this bug, a TransactionScope can’t flow through into your asynchronous continuations. This potentially changes the threading context of the transaction, causing exceptions to be thrown when the transaction scope is disposed.

This is a big problem, as it makes writing asynchronous code involving transactions extremely error-prone.

The good news is that as part of the .NET Framework 4.5.1, Microsoft released the fix for that “asynchronous continuation” bug. The catch is that developers like us now need to explicitly opt in to get this new behavior. Let’s take a look at how to do just that.

TL;DR

  • If you are using TransactionScope and async/await together, you should really upgrade to .NET 4.5.1 right away.
  • A TransactionScope wrapping asynchronous code needs to specify TransactionScopeAsyncFlowOption.Enabled in its constructor, as sketched below.
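A minimal sketch of the opt-in (the TransactionScopeAsyncFlowOption constructor overload is the real API; the order-storing methods are made up for illustration):

public async Task StoreOrderAsync(Order order)
{
    // TransactionScopeAsyncFlowOption.Enabled lets the ambient transaction flow
    // across awaits; without it, Dispose can throw after an asynchronous continuation.
    using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
    {
        await SaveToDatabaseAsync(order).ConfigureAwait(false);
        await WriteAuditRecordAsync(order).ConfigureAwait(false);

        scope.Complete();
    }
}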

If you want to learn more about async/await and how to flow TransactionScopes over continuations, I suggest you read the full blog post originally posted on the Particular blog.

Async/Await: It’s time

I wanted to briefly mention a blog post I wrote on the Particular blog. It is a product-centric announcement about the Particular Platform’s move towards an async-only API. Although it is product-centric, I believe this post contains a lot of valuable information about async/await and its benefits and caveats. In my biased opinion, it is definitely worth a read.

Async/Await is a language feature introduced in C# 5.0 together with Visual Studio 2012 and the .NET 4.5 runtime. With Visual Studio 2015 almost ready to be shipped to end-users, we can see that async/await has been around for quite some time now. Yet NServiceBus hasn’t been exposing asynchronous APIs to its users. Why the await?

We have been carefully observing the adoption of async/await in the market and weighing its benefits against its perceived complexity. Over the years async/await has matured and proven its worth. Now the time has come to discuss our plans regarding the async/await adoption in NServiceBus and our platform.

TL;DR

  • Future versions of the NServiceBus API will be async only.
  • Synchronous versions of the API will be removed in future versions.
  • Microsoft and the community have made this decision for you, by moving toward making all APIs which involve IO operations async-only.
  • Don’t panic! We will help you through this difficult time.

You’re going to be blown away by the additional benefits that asynchronous NServiceBus APIs will provide. But first, let’s review why asynchronous APIs are such a big deal in the first place.

If you want to learn more about async/await, I suggest you read the full blog post originally posted on the Particular blog.