In my last post I showed you the TransactionScope class and how you can write your own enlistments to participate in transactions. The code we wrote was all synchronous. This time we are going deep into the abyss and changing our code sample to a completely asynchronous API. Let’s explore what the code could look like: Continue reading
In my last installment I gave a brief overview of Service Bus for Windows Server. In this post I’m going to look at high availability and why it is important. In my last project my job was to help a team build a robust and reliable infrastructure that leverages Service Bus for Windows Server. On the first day we sat together and discussed the various questions the team had regarding reliability and availability. The first question went like this: “What can we do in code when Service Bus for Windows Server is not available?” My answer was the following: “In code? Besides retrying the connection to the Service Bus a configurable number of times, you cannot really do much. If your most important communication layer is down for a longer period of time, your system should detect that problem and gracefully shut down its services. Unless you are specifically building for an occasionally connected system, your infrastructure needs to be made reliable and available. Trying to solve those concerns in your system’s code is a waste of time and money.” We shook hands, the customer said thank you, and I went home. Problem solved without writing a single line of code. Just joking!
“Use Messaging to transfer packets of data frequently, immediately, reliably, and asynchronously, using customizable formats. [..]” This quote from the Enterprise Integration Patterns book by Gregor Hohpe and Bobby Woolf shows that one of the fundamental principles of messaging is that messages need to be transferred immediately and reliably. In order to achieve this, our Service Bus infrastructure needs to be reliable and highly available. Because Service Bus for Windows Server is a broker-based transport, producers and consumers rely on the availability of a centralized infrastructure. But what could possibly go wrong? Continue reading
I would consider this blog post unnecessary knowledge for most programming tasks you will do during your lifetime as a developer. But if you are involved in building libraries or tools used in integration scenarios, you might find it helpful. The .NET platform has a very nifty class called TransactionScope under System.Transactions. The TransactionScope allows you to wrap your database code, your messaging code and sometimes even third-party code (if supported by the third-party library) inside a transaction, and only perform the actions when you actually want to commit (or complete) the transaction. As long as all the code inside the TransactionScope is executed on the same thread, all the code on the call stack can participate in the TransactionScope defined in your code. You can nest scopes, create new independent scopes inside a parent transaction scope, or even create clones of a TransactionScope, pass the clone to another thread and join back onto the calling thread, but all of this is not part of this blog post. In this blog post I will cover how to write so-called enlistments which allow your own code to participate in a TransactionScope, and in the next post how to overcome the challenges you’ll face in asynchronous scenarios. Continue reading
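To make the idea of an enlistment concrete, here is a minimal sketch of a volatile enlistment joining an ambient TransactionScope. The `InMemoryResource` class and its members are invented for illustration; only `TransactionScope`, `Transaction.Current.EnlistVolatile` and `IEnlistmentNotification` are actual System.Transactions APIs.

```csharp
using System;
using System.Transactions;

// Hypothetical resource that participates in a TransactionScope
// via a volatile (non-durable) enlistment.
class InMemoryResource : IEnlistmentNotification
{
    public bool Committed { get; private set; }

    public void EnlistInCurrentTransaction()
    {
        // Joins the ambient transaction created by the enclosing TransactionScope.
        Transaction.Current.EnlistVolatile(this, EnlistmentOptions.None);
    }

    public void Prepare(PreparingEnlistment preparingEnlistment)
        => preparingEnlistment.Prepared(); // vote "yes" in the prepare phase

    public void Commit(Enlistment enlistment)
    {
        Committed = true;  // perform the actual work only on commit
        enlistment.Done();
    }

    public void Rollback(Enlistment enlistment)
    {
        Committed = false; // undo when the scope was not completed
        enlistment.Done();
    }

    public void InDoubt(Enlistment enlistment) => enlistment.Done();
}

class Program
{
    static void Main()
    {
        var resource = new InMemoryResource();
        using (var scope = new TransactionScope())
        {
            resource.EnlistInCurrentTransaction();
            scope.Complete(); // without this call, disposing the scope rolls back
        }
        Console.WriteLine(resource.Committed);
    }
}
```

If `scope.Complete()` is omitted, disposing the scope triggers `Rollback` instead of `Commit`, which is exactly the hook point the enlistment gives you.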
Today I read this blog post about how to simplify test data preparation.
The author of the blog post states that setting up test data for tests is sometimes difficult and bloats up the test code, resulting in bad readability and maintainability. I completely agree with that.
The author continues by solving this problem by loading the test data from a file and using it in the test. That minimizes the code needed to set up the test data, but results in a disconnect between the test and the data or example used for it, leaving us with an obscure unit test.
We solve this problem differently.
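One common way to keep test data both short and visible inside the test is a test data builder; this sketch is my illustration of that pattern, not necessarily the approach the post goes on to describe, and the `Customer`/`CustomerBuilder` names are invented.

```csharp
using System;

// Illustrative domain object (invented for this example).
class Customer
{
    public string Name { get; set; }
    public string Country { get; set; }
}

// A test data builder: sensible defaults keep setup code short,
// while the test spells out only the values relevant to the scenario.
class CustomerBuilder
{
    string name = "Any Name";
    string country = "Switzerland";

    public CustomerBuilder Named(string value) { name = value; return this; }
    public CustomerBuilder From(string value) { country = value; return this; }

    public Customer Build() => new Customer { Name = name, Country = country };
}

class Program
{
    static void Main()
    {
        // The test reads as a sentence; the relevant example stays in the test.
        var customer = new CustomerBuilder().From("Germany").Build();
        Console.WriteLine(customer.Country);
        Console.WriteLine(customer.Name);
    }
}
```

Unlike data loaded from a file, the scenario-relevant value ("Germany") stays right in the test, while everything irrelevant is hidden behind defaults.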
This is a really quick announcement. I recently released Machine.Specifications 0.9.0. With that release I introduced a breaking change: I accidentally disabled console output capturing. If you use console output in your specs and need to see it, I strongly advise you to upgrade to Machine.Specifications 0.9.1. You only need to upgrade the Machine.Specifications NuGet package in your solution. None of the other components are affected. This is the beauty of the new architecture!
Have fun and sorry for the inconvenience!
I updated my clean code cheat sheet.
This time there are just minor changes:
- Principles: mind-sized components
- Class Design: do stuff or know others, but not both
- Maintainability killers: tangles
- Refactoring patterns: refactor before adding functionality, small refactorings
- removed duplication regarding one test asserts one thing
- TDD principles: Test domain specific languages
- fixed a bug in the ATDD/TDD cycle (run all acceptance tests)
If you are missing something or think that something is just plain wrong, please write a comment below.
Link: Clean Code V2.4
Today we released the next version of Machine.Specifications. This release implements an important feature that lets us move forward in the future: a complete runner dependency abstraction. What does that mean? Let me take a step back.
The picture above shows the state of Machine.Specifications prior to v0.9.0. The console runner, the ReSharper runner, the TDnet runner and more were directly dependent upon the same Machine.Specifications version. This meant that whenever we released a new version of MSpec, you also had to use the new versions of the ReSharper runner, console runner and the others. This was cumbersome not only for you as a user but also for us as the maintainers of the library. We had a massive repository with everything in it and released it all as one “big chunk”. That made working, forking and all other git operations heavyweight because the repository was quite large. Continue reading
I attended TechEd Barcelona with a coworker. The venue was just amazing: TechEd was hosted in the Fira Barcelona, which has 400,000 m² of floor space. You really have to walk from session to session, but I think that has a very positive influence on the conference experience. Usually, when you stay in the same location, you get more and more mentally tired after each session. With the “long” walks between sessions (up to 10 minutes, depending on your walking speed), you grab a coffee or tea on the way to the next hall. This gives you time to think about what you’ve heard in the sessions and also time to “exercise” your body, a good contrast to the sitting-only experience of the sessions.
This is an overview of the sessions I visited:
- Day 1
- Day 2
  - The Next Generation of Microsoft .NET (Link)
  - TWC | A Game of Clouds: Black Belt Security for the Microsoft Cloud (Link)
  - Entity Framework Now and Later (Link)
  - Windows PowerShell Unplugged with Jeffrey Snover (Link)
  - Introduction to NoSQL in Azure (Link)
  - Architecting Secure Microsoft .NET Applications (Link)
  - Country Drinks Party
- Day 3
- Day 4