I wrote code without tests that ran in production without defects, and I wrote buggy code with TDD (Test-Driven Development). Time to look back at 35 years of coding: when do tests help, when is there something better, and what exactly are these better things?
In this part: What would you do if you weren’t allowed to write automated tests?
Take a moment and think about what you would do to still deliver quality software if you weren’t allowed to write automated tests.
I would do the following three things:
- Make it hard to make a mistake
- Make it easy to recover from a defect
- Make it easy to test manually
What would you do if you weren’t allowed to write automated tests?
The ideal
The ideal would be to build software by composing small, simple pieces of code that are easy to grasp. So simple and easy that no defects can hide: the code is obviously correct.

Let’s tackle the reasons for defects with simplicity
The first two ideas force us to deal with non-happy-path scenarios. We can no longer simply ignore unhappy paths and then encounter defects because of that ignorance.
Nulls
Let’s use a programming language that has no nulls (F#, Rust, OCaml, Haskell, …) or at least one with non-nullable reference types (as they are called in C#, Kotlin, …). The compiler then prevents us from accidentally forgetting about null and running into NullReferenceExceptions. When we have an absent value (an Option.None, an Either.Left, or a Maybe.Nothing), we have to deal with it explicitly, which prevents defects.
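In F#, for example, a lookup that can fail returns an Option, and the compiler forces the caller to handle both cases (a small sketch; tryFindUser is a made-up function):

```fsharp
// A lookup that may fail returns an Option, not null
let tryFindUser (id: int) : string option =
    if id = 1 then Some "Ada" else None

// The compiler forces us to handle both cases explicitly
let greet (id: int) =
    match tryFindUser id with
    | Some name -> sprintf "Hello, %s!" name
    | None -> "Unknown user"
```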
Exceptions vs Results
Similarly, we should not use exceptions for non-exceptional issues. Using exceptions for exceptional cases, like a database being down or a full file system, is a good decision. However, for business rule violations (“No, this order can’t be processed.”), it is better to use Results because we can return results from methods and functions, and the caller needs to address both the good and the bad cases. Error cases cannot be ignored; they have to be handled explicitly.
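A small F# sketch of the idea (processOrder and its rules are made up):

```fsharp
type OrderError =
    | Empty
    | TooLarge

// A business rule violation becomes a value, not an exception
let processOrder (items: int list) : Result<int, OrderError> =
    if List.isEmpty items then Error Empty
    elif List.length items > 100 then Error TooLarge
    else Ok (List.sum items)

// The caller must address both the good and the bad case
let describe order =
    match processOrder order with
    | Ok total -> sprintf "Order total: %d" total
    | Error Empty -> "No, this order can't be processed: it is empty."
    | Error TooLarge -> "No, this order can't be processed: too many items."
```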
The next two ideas aim to make it easier to reason about our codebase. Simpler reasoning leads to fewer wrong assumptions and, therefore, fewer defects.
No shared mutable state ➡️ Immutability
Having immutable data makes it easy to share it without worrying about other code changing the shared state. What you get as input into your method or function can’t “magically” change. The same holds inside a single method or function: when you look at a value, it can’t change halfway through the method or function. This also makes refactoring much safer because a variable change can’t be accidentally lost.
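For example, with F# records (a minimal sketch):

```fsharp
type Address = { City: string; Zip: string }

let original = { City = "Bern"; Zip = "3000" }

// "Changing" a field creates a new record; the original is untouched
let moved = { original with City = "Zurich" }

// original.City is still "Bern" - code sharing the original can't be surprised
```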
Composability
Functions are easier to compose than methods inside objects or classes. When you combine two or more functions, you simply get a new function. We can build large functions by repeatedly composing smaller functions. This way, we can achieve the ideal described above.
Grouping functions into modules does not hinder composition; we can still compose two functions to produce a new one.
Composing methods always needs an object as a host, which makes composition harder: you have to compose the objects that host the methods, which leads to dependency injection or ad-hoc creation of instances. Putting everything into a single god class is obviously not a good solution. (Of course, you can go with static methods, but then you are conceptually using functions 😉 )
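A small F# sketch of composing functions into a new function (all functions are made up):

```fsharp
let trim (s: string) = s.Trim()
let toLower (s: string) = s.ToLower()
let firstWord (s: string) = s.Split(' ').[0]

// Composing functions simply yields a new function - no host object needed
let normalize = trim >> toLower >> firstWord
```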
There are more ideas to help prevent defects:
Tools
Obviously, there are many tools to prevent defects, such as compilers, linters, and analysers.
They look for common misuses and traps and warn us about potential problems.
Type System
The above tools work better when paired with a programming language with a strong type system.
Types help prevent the assignment of incorrectly “structured” values to variables or values.
We can also prevent primitive obsession (using only basic types for everything) by using types that encapsulate, wrap, or annotate basic types to give them more meaning.
Instead of a plain GUID, we can have an EmployeeId*; instead of a plain int, we can have an int<minutes>**, for example.
* an encapsulated GUID
** units of measure (example in F#)
Using more meaningful types, we can also prevent defects caused by ambiguous method or function arguments. E.g., a method that takes two GUIDs is harder to use than one that takes an EmployeeId and an OrderId when the compiler guarantees correct assignment.
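A sketch in F# using single-case unions and a unit of measure (all names are illustrative):

```fsharp
open System

[<Measure>] type minutes

// Single-case unions give each GUID a meaning of its own
type EmployeeId = EmployeeId of Guid
type OrderId = OrderId of Guid

// The compiler rejects swapped arguments: an OrderId is not an EmployeeId
let assignOrder (EmployeeId employee) (OrderId order) =
    sprintf "employee %O gets order %O" employee order

// int<minutes> cannot be mixed up with a plain int
let duration : int<minutes> = 90<minutes>
```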
No statements ➡️ only expressions
Using a programming language that has no statements, only expressions, forces you to handle every return value and thus prevents defects caused by “forgotten” return values. If a method returns a Result (either Ok or Error), we must handle both cases; we can’t ignore the Error case, which would likely lead to a future defect.
Once one is used to thinking in expressions rather than statements, code becomes easier to reason about, reducing the risk of defects. Yes, folds are easier to get correct than loops, once you are used to the syntax.
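For example, a fold versus a loop in F#:

```fsharp
// A fold has no mutable counter or index to get wrong
let total = [1; 2; 3; 4] |> List.fold (fun acc x -> acc + x) 0

// Compare with the loop version: mutable state and off-by-one risks
let mutable sum = 0
for x in [1; 2; 3; 4] do
    sum <- sum + x
```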


Parse don’t validate
A common source of errors is input data that is accepted as is, without being validated (or, better, parsed*).
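Such a pair of types can be sketched like this (the names follow the example; the parsing rule is simplified):

```fsharp
type UnvalidatedEmailAddress = UnvalidatedEmailAddress of string
type EmailAddress = EmailAddress of string

// Parsing turns raw input into a value we can trust everywhere else
let parseEmail (UnvalidatedEmailAddress raw) : Result<EmailAddress, string> =
    if raw.Contains "@" then Ok (EmailAddress raw)
    else Error (sprintf "'%s' is not a valid email address" raw)
// (making the EmailAddress constructor private would prevent
//  creating an invalid EmailAddress by hand)
```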

The UnvalidatedEmailAddress accepts whatever the user enters; the EmailAddress is created through validation/parsing and therefore always holds valid data. Wherever in the code we have an EmailAddress, we know it is valid. (I know that in the above example, one could create an invalid EmailAddress. There are ways to solve this.)
Using a discriminated union for the Contact instead of a record with both Email and Letter as optional or nullable fields prevents errors caused by data in an illegal state, by making that state unrepresentable.
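A sketch of such a discriminated union (names are illustrative):

```fsharp
type EmailAddress = EmailAddress of string
type PostalAddress = PostalAddress of string

// Exactly one contact channel exists - no "both missing" or
// "both set" states are representable
type Contact =
    | Email of EmailAddress
    | Letter of PostalAddress

// Compare: a record with two optional fields has four states,
// two of which are illegal
let contact = Email (EmailAddress "a@b.c")
```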
* validating: throw an error if the data is not valid.
parsing: take the input data and parse what is possible; return a set of parsing errors if it is not parsable.
Make the code obvious – where no bugs can hide
One of the best ways to prevent defects is to write code that is obviously correct. Sounds easy, but it isn’t.
Readability
Most code I’ve seen over the years looks like the code on the left. Not bad, but F# with its pipe operator ( |> ) makes reading code, especially longer business logic, much easier because the code can be read from top-left to bottom-right, rather than through nested calls or lots of local variables. Overall, there is less visual clutter when reading F# code than when reading C# (or Java, or C++, for example).

Obviously, the F# syntax takes some getting used to, but after a while, reading code – and spotting potential defects – becomes easier than in C# or Java code.
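A small illustration of the pipe operator (all functions and data are made up):

```fsharp
// Each step is a small function
let loadOrders (customer: string) = [ customer, 10; customer, 25 ]
let filterLarge orders = orders |> List.filter (fun (_, amount) -> amount > 20)
let formatReport orders =
    orders |> List.map (fun (c, a) -> sprintf "%s: %d" c a) |> String.concat "; "

// Nested calls read inside-out: formatReport (filterLarge (loadOrders "ACME"))
// With the pipe operator, the same logic reads top-left to bottom-right:
let report =
    "ACME"
    |> loadOrders
    |> filterLarge
    |> formatReport
```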
No Surprising Equality with Deep Equality
Another example is comparison:

All F# types provide deep equality out of the box, whereas in C# the default is reference equality. Even after years of using C#, this can still surprise you and lead to incorrect code. The tricky part is that equality can be hidden in code we use indirectly. Or how sure are you that a library call does not use some Contains, ContainsKey, [key], GroupBy, Union, or Distinct collection calls?
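For example (a minimal sketch):

```fsharp
type Point = { X: int; Y: int }

let a = { X = 1; Y = 2 }
let b = { X = 1; Y = 2 }

// F# records, tuples, lists, and unions compare by value out of the box
let equal = (a = b)                   // structural (deep) equality
let contains = List.contains a [ b ]  // also uses structural equality
// In C#, two distinct class instances would compare as not equal by default
```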
Non-cluttered business intent
The next sample is a bit long, but I need some build-up to explain why clutter-free business logic is so important in reducing possible defects – and, therefore, the number of needed tests.
In the sample, we have a very simple model of Customers and Data: Customer has an optional Name, Data has an Amount.
The method or function we need to implement gets a Customer-ID and a Data-ID as input. Then we need to do the following:
- Load the Customer by its ID
- Load the Data by its ID
- Get the Name of the Customer (if the Customer was found and the customer has a name)
- Get the Amount of the Data (if the Data was found)
- Return a tuple with the Name and Amount if all went well, otherwise return some kind of error
On the left, there is a typical OOP-ish implementation in C# – and on the right, there is an FP-ish F# implementation. This example is less about OOP vs. FP and more about language support for writing code with clear business intent.

Next, we implement the method and function to load a customer by its ID. We fake this and return a customer only when 42 is passed as the ID. Otherwise, we return an “error”. In C#, we return null as the error value; in F#, we return a Result with a meaningful error message. Using exceptions for non-exceptional errors is an anti-pattern in my world.
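A condensed sketch of the F# side, without the computation expression (types and values follow the sample):

```fsharp
type Customer = { Id: int; Name: string option }

// Fake: only ID 42 yields a customer; anything else is an Error value,
// not an exception and not null
let loadCustomer (customerId: int) : Result<Customer, string> =
    if customerId = 42
    then Ok { Id = 42; Name = Some "ACME" }
    else Error (sprintf "no customer with id %d" customerId)
```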

If the customerId is not 42, Result.requireTrue returns an Error with the passed error message (note that Result supports any value as the error value, not just strings). The do! then “aborts” the asyncResult block immediately, and the error becomes the result of the whole block.
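Result.requireTrue comes from the FsToolkit.ErrorHandling library; its behavior can be sketched as:

```fsharp
// A sketch of Result.requireTrue (FsToolkit.ErrorHandling):
// Ok () when the condition holds, otherwise an Error with the given value
let requireTrue (error: 'e) (condition: bool) : Result<unit, 'e> =
    if condition then Ok () else Error error
```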
Then, we load the data. I simply fake this and always return the same data – good enough for the sample. The C# implementation uses a nullable return value (null means error) again. The F# implementation again uses a Result, which is Ok in this case.

We also need to get the customer’s name. Remember that the name is optional.
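A sketch of that step (names follow the sample):

```fsharp
type Customer = { Name: string option }

// The optional name becomes a Result, so a missing name can
// short-circuit the surrounding block as an Error
let getName (customer: Customer) : Result<string, string> =
    match customer.Name with
    | Some name -> Ok name
    | None -> Error "customer has no name"
```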

Finally, we can see the big difference in the next snippet of code:

The C# code on the left becomes very cluttered due to error handling: we always have to check whether an error occurred. And no, using exceptions for these kinds of errors leads to much worse problems 🙂
The F# code on the right uses an asyncResult computation expression to hide the error handling. If anything inside the block returns an Error, that Error becomes the result of the whole block. By hiding tedious error handling and using pipes to make the code read from top-left to bottom-right, we get code that shows the business intent much more clearly. It almost reads like a specification (take this, spec-driven AI agents 😀 ).
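The shape of that block can be sketched synchronously, with a hand-rolled result builder standing in for FsToolkit.ErrorHandling’s asyncResult (types, fakes, and names follow the sample):

```fsharp
// Minimal computation-expression builder for Result
type ResultBuilder() =
    member _.Bind(r, f) = Result.bind f r
    member _.Return(x) = Ok x
let result = ResultBuilder()

type Customer = { Id: int; Name: string option }
type Data = { Amount: decimal }

let loadCustomer id =
    if id = 42 then Ok { Id = 42; Name = Some "ACME" }
    else Error "customer not found"
let loadData (_: int) = Ok { Amount = 12.5m }

// Error handling is hidden inside the computation expression:
// any Error short-circuits and becomes the result of the whole block
let getNameAndAmount customerId dataId =
    result {
        let! customer = loadCustomer customerId
        let! data = loadData dataId
        let! name =
            customer.Name
            |> Option.map Ok
            |> Option.defaultValue (Error "customer has no name")
        return name, data.Amount
    }
```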
This is not about C# vs F# (I have other blog posts about this topic), but about language syntax being more or less helpful when writing code with clear business intent.
To complete the sample, here is some code showing how to call the above method and function:

Next time
In this post, I’ve shown you how you can use the compiler, linters, the type system, etc., to reduce the number of needed tests. The next post is about how to make recovery easier if a defect slips into production anyway.
[…] To test, or not to Test? Part 2 – Make it hard to make mistakes (Urs Enzler) […]