Your first day on the team = releasing your first feature

Welcome to our team! Today is your first day, which means it’s the day you’ll release your first feature. You’ll see everything needed to design, implement, and release a feature in our system. We’ll touch on F# language features, our TDD style, and some architecture topics.

This blog post is part of the F# Advent Calendar 2024 (thanks, Sergey, for the organisation).

You might ask why this blog post is so incredibly long. I’m often asked where to find code showing F# in a real-world application, so this is my attempt at answering that question. I’ll show you a complete, no-shortcuts feature slice of our application—a feature that I implemented recently. I won’t answer all the questions you’ll probably have, but feel free to ask in the comments or reply on social media. Now, grab a warm drink of your choice and buckle in…

Step 1: Understand the domain

First, an ultra-short intro to our application: We build an attendance time system with duty planning and project time tracking. We’ll implement the feature to rename projects, showing you a typical command in our system. To simplify things, we’ll implement the feature for our public API (not our Web Client) so that we can stay 100% within F#.

Laica, our mascot, is happy to see you.

To better understand our task, we should first take a look at how a project is defined in the system. Because our clients can track more than just projects, we use a generalised term: Undertaking.

[<Measure>]
type UndertakingId
type Undertaking =
    {
        UndertakingId: Guid<UndertakingId>
        StructureId: Guid<StructureId>
        Label: LabelWithTranslations
        Actions: StructureOrProjectAction list
        FinishedActions: Guid<ActionId> distinctList
        FinishedActionsSettings: FinishedActionsSettings
        Groups: UndertakingGroupAssignments
        Status: UndertakingStatus
        BookableRange: Range<Workday>
        PermissionDefinitionIds: Guid<PermissionDefinitionId> distinctList
        UndertakingSpecificPermissions: UndertakingSpecificPermission list
        Managers: Guid<EmployeeId> distinctList
        Code: string<Code>
    }
Images work better than our syntax highlighter, so find the code in the alt text.

The property Label is the one relevant to our task. Its type looks like this:

type LabelWithTranslations = { Label: string; Translations: Map<Language, Translation> }

It’s not the best naming we ever did 😅. But that’s the property we need to update – or rather, the data in the database that we need to update.

Guid<Measure> is made possible by FSharp.UMX. It’s a lightweight way of giving basic types more meaning and type safety: you can’t assign a Guid<UndertakingId> to a Guid<StructureId>.
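A minimal sketch of how that looks in practice (the types and values here are only for illustration):

open System
open FSharp.UMX

[<Measure>]
type UndertakingId

[<Measure>]
type StructureId

// tag a plain Guid with a measure
let undertakingId: Guid<UndertakingId> = % Guid.NewGuid ()

// untag it again when a plain Guid is needed, e.g. at the database boundary
let raw: Guid = % undertakingId

// let structureId: Guid<StructureId> = undertakingId // does not compile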

Step 2: Domain Modelling

Once we understand our task, we can write the types that model the changes in the domain’s model (I explicitly don’t use the term domain model because that is a design pattern we don’t use). We don’t need perfect understanding. We can iterate on the solution and problem. Working on the solution often leads to a better understanding of the problem. If we think it helps us, we can show the types modelling the domain to our business experts; they can understand the F# syntax of types well enough to provide feedback.

For our task to rename an undertaking, we need to add a new event in our model:

type UndertakingRenamed = { Label: LabelWithTranslations }

Yes, you guessed right. We use event sourcing. Instead of persisting the current state, we persist all changes (events) that happened. When we need the current state, we project (also called replay) all relevant events to get it. Event sourcing gives us auditing, history, and a better understanding of what happened in the system – especially useful when clients call our support to revert changes they made by “accident”.

The above event states that we renamed an undertaking and contains the new label. But something is missing: the undertaking’s identification and some metadata. That data is kept here:

The type UndertakingEvent holds the data shared by all undertaking events. The discriminated union UndertakingEventData lists all possible event details. So we add Renamed of UndertakingRenamed there.

Yes, we could write it shorter as Renamed of Label: LabelWithTranslations, but the above is simpler to migrate if we ever need to add more properties besides the label.
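Since the post shows these types as an image, here is a condensed view (the full definitions appear in Step 8):

type UndertakingEventData =
    | UndertakingCreated of UndertakingCreated
    // ... the other existing event cases ...
    | Renamed of UndertakingRenamed // the new case we add

type UndertakingEvent =
    {
        UndertakingEventId: Guid<UndertakingEventId>
        UndertakingId: Guid<UndertakingId>
        StructureId: Guid<StructureId>
        Data: UndertakingEventData
        Application: Application
        TriggerId: Guid
    }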

Step 3: Let a test drive us

Now that we have modelled the domain, let’s start implementing the behaviour of renaming an undertaking. We start on the outside. In this case, the outside is the public API. We need to extend the API to rename the undertaking/project. Let’s look at the controller’s current state, providing endpoints for undertakings. We use controllers because when we started with this application, we started with C#, and using controllers in F# isn’t bad at all.

type UndertakingsController(factory) =
    inherit FsharpPublicApiController(factory)

    let insertTransactionPoints (factory: IPerRequestFactory) =
        TransactionAdapter.insertTransactionPointsForApi factory.Storages

    static let jsonOptions = Settings.createPublicApiOptions []
    static member val JsonOptions = jsonOptions with get

    [<HttpPost>]
    [<Route("v1.0/undertakings/structures/{structureId}/undertakings")>]
    member self.CreateUndertaking([<FromRoute>] structureId) =
        taskResult {
            let! json = self.GetRequestJson ()

Currently, there are endpoints to create an undertaking and to update the code of an undertaking, plus some helper stuff for JSON serialisation and for storing so-called transaction points (used to bill our clients when they use the public API). The base class FsharpPublicApiController provides some things commonly needed by public API controllers (authentication, authorization, and so on). The injected factory is the composition root used to create the things we need from our business logic. We use the ASP.NET service collection only for things injected directly into controllers; everything else is created by the factory.

But before we implement the endpoint, let’s consider testing. We use a flavour of TDD (a flavour because I don’t want to discuss the details of what TDD exactly is—do what works for you).

We write a test against the public API so that we can think about how we want to call it, and what we expect in the response:

module Calitime.TimeRocket.Core.Undertakings.PublicApi.UpdateUndertaking

open System.Net
open System.Text.Json.Serialization
open Calitime.TimeRocket
open Calitime.TimeRocket.Api
open Calitime.TimeRocket.Core.Undertakings.Features.Undertakings.CreateUndertaking
open Calitime.TimeRocket.Core.Undertakings.PermissionGroup
open Calitime.TimeRocket.Core.Undertakings.Undertakings
open Calitime.TimeRocket.Ranges
open FsToolkit.ErrorHandling
open FSharp.UMX
open Xunit
open Calitime.TimeRocket.Core.Undertakings.Facade
open Calitime.TimeRocket.Core.Undertakings.Structures
open Calitime.TimeRocket.Api.Undertakings.PublicApi
open Calitime.TimeRocket.Core.Undertakings

[<Fact>]
let ``update undertaking label`` () =
    task {
        use! bootstrapper = PublicApiSpec.create ()

        let url = "public/v1.0/undertakings/structures/E50B9571-6B39-4FBA-8E46-5DED4A1479AB/undertakings/EB394049-696D-4FB7-BD83-776151D97B69"
        let request =
            // lang=json
            """
            {
                "label": { "default": "Project", "de": "Projekt" }
            }
            """

        let operationId = Guid.generateNew ()
        bootstrapper.SetupSuccessfulUndertakingOperation (fun x ->
            match x with
            | RenameUndertaking d -> RenameUndertaking { d with OperationId = operationId }
            | _ -> failwith "unexpected operation"
        )

        let! response = bootstrapper.AuthorizedClient.PutToApi (request, url)

        test <@ response.StatusCode = HttpStatusCode.OK @>
    }

Okay, we define how we want the request to look, call the API, and assert the correct response.
We can use the already existing infrastructure:

Bootstrapper: the bootstrapper created by calling PublicApiSpec.create () gives us access to the whole system. It encapsulates the composition root of our application; more precisely: the core (think in Hexagonal architecture) with its ports to execute commands and queries.
We fake the real core so we don’t have to set up all the data needed to execute a command. We just tell the bootstrapper to accept the RenameUndertaking command (or operation, as we call it – yes, yes, naming is still hard). We also replace the operationId to have control over its value; in reality, this GUID is newly created.

AuthorizedClient: we can make the request with the AuthorizedClient, which is correctly configured to make test calls on our local machine.

Test: the test function is a pimped version of the Unquote test function to assert that the result is as expected. See here for details.

Of course, the test fails because we get a 404 - Not Found.

Step 4: Implement the Controller

The next step is to implement the controller so that the test has a chance to succeed. We can “copy and edit” most of it because we have many similar endpoints already. As long as we are confident in the code, our simple test is good enough for us. If confidence is low, we should add a test around the thing we are unsure of. That could be an additional test going through the API, or some small scoped test around a single pure function, for example.

Let me guide you through the endpoint code:

taskResult*: simplifies dealing with validation and other business-related error cases. When we use let! we can just continue with the happy path. If an error occurs, the block is left early (kind of an early return).
* from FsToolkit.ErrorHandling

GetRequestJson: we get the request body’s JSON our way because this gives us access to the JSON for better validation-error messages. Using the ASP.NET model binder makes this unnecessarily hard.

factory: We create a factory for this request, which ensures that we can only act on the data for the current tenant. Through this factory, we can create all the things we need to execute the business logic for this request.

deserialize: We try to deserialize the JSON from the request body into a type defined by us that gives us some structure but is as tolerant to bad input as possible. The reason for this is that error messages from System.Text.Json are too bad to be useful.

validate: We validate the input and return an operationData that reflects the command/operation to be executed. We return a list because a single update could result in several commands, as you will see later.

ExecuteOperationFromDevice: The call to the business logic for every operationData we got from the request body’s JSON.

insertTransactionPoints: We bill our client.

taskResult: ActionResult.taskResult UndertakingsError.mapErrorToActionResult allows us to convert an Error into a corresponding HTTP status code and response body. We don’t have to touch this because everything we need for updating the label is already present.

We skipped some code needed to make the above code run (just imagine red squiggly lines under the stuff not already coded).

We need the type to which we deserialize the request’s body:

type UpdateUndertaking = { Label: Skippable<Label'>; Code: Skippable<string> }

We added the Label property. The Code property was already there (to change the code of an undertaking). We use Skippable from FSharp.SystemTextJson to mark optional JSON values.
If the Label property is present, we will update it.
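A small, self-contained sketch of what Skippable buys us, assuming the JsonFSharpConverter from FSharp.SystemTextJson is registered (as our real JSON options presumably do):

open System.Text.Json
open System.Text.Json.Serialization

let options = JsonSerializerOptions ()
options.Converters.Add (JsonFSharpConverter ())

type Patch = { Code: Skippable<string> }

// an absent property deserializes to Skip ...
let absent = JsonSerializer.Deserialize<Patch> ("""{ }""", options)
// absent = { Code = Skip }

// ... while a present property deserializes to Include
let present = JsonSerializer.Deserialize<Patch> ("""{ "Code": "P-001" }""", options)
// present = { Code = Include "P-001" }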

Then, we need the deserialization function:

[<RequireQualifiedAccess>]
module UpdateUndertaking =
    let deserialize (jsonOptions: JsonSerializerOptions) (json: string) =
        try
            JsonSerializer.Deserialize<UpdateUndertaking> (json, jsonOptions) |> Ok
        with e ->
            match e.Message with
            | Regex @"Missing field for record type Calitime.TimeRocket.Api.Undertakings.PublicApi.UpdateUndertaking: (\w*)" [ fieldName ] ->
                Error (PublicApiValidation [ $"missing field {fieldName}" ])
            | Regex @"Missing field for record type Calitime.TimeRocket.Api.Undertakings.PublicApi.(.*): (\w*)" [ recordType
                                                                                                                  fieldName ] ->
                Error (
                    PublicApiValidation [ $"missing field {fieldName} in {recordType |> String.lowerFirstCharacter}" ]
                )
            | Regex @"The JSON value could not be converted to System.String." [] ->
                Error (
                    PublicApiValidation
                        [
                            "expected value as string. Check whether all string values are passed correctly."
                        ]
                )
            | _ -> Error (PublicApiValidation [ e.Message ])

After trying to deserialize the JSON into UpdateUndertaking, we pimp some deserialization exception messages and return them as Errors instead of exceptions so that we can take advantage of the taskResult in the calling method.

Regex is an active pattern to match values easily and make matched groups available as a list.
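Ours looks roughly like the well-known partial active pattern for regular expressions (a sketch, not our exact code):

open System.Text.RegularExpressions

let (|Regex|_|) pattern input =
    let m = Regex.Match (input, pattern)
    if m.Success then
        // skip group 0 (the whole match) and return the captured groups as a list
        Some (List.tail [ for g in m.Groups -> g.Value ])
    else
        None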

And finally, we validate the input and transform it into operation data:

validation: Computation expression from FsToolkit.ErrorHandling to easily work with validation, with the possibility to return multiple validation errors at once (by using the and! syntax, see below).

If a label is present, we call the validation logic for a label. If this validation is successful, we will create the operation data to rename the undertaking.

Finally, we only keep operation data for which data was present in the JSON (List.choose filters out all None values).

To see and! in action, let’s look at the already existing validation logic for a label:

The type Label’ defines the fields. A label can have translations in four languages.

And here is the code for the validation:

With the and! keyword we can collect all validation errors and return them as a list.
While proofreading this post, I found a mistake: before de there should be an and! instead of a let!. See, code reviews are useful!

The translations are mapped into a Map (F#’s immutable dictionary) by using a list comprehension with implicit yields.
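If you haven’t seen and! before, here is a tiny self-contained sketch of the idea (not our actual label validation):

open FsToolkit.ErrorHandling

let requireNonEmpty (field: string) (value: string) =
    if System.String.IsNullOrWhiteSpace value then
        Error $"{field} must not be empty"
    else
        Ok value

let validateLabel defaultLabel germanTranslation =
    validation {
        let! defaultLabel = requireNonEmpty "label.default" defaultLabel
        // and! does not short-circuit: even if the binding above failed,
        // this one is still evaluated and its error is collected as well
        and! germanTranslation = requireNonEmpty "label.de" germanTranslation
        return {| Default = defaultLabel; German = germanTranslation |}
    }

// validateLabel "" "" returns
// Error [ "label.default must not be empty"; "label.de must not be empty" ]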

In reality, all the above took several quick iterations, but we got a succeeding API test. So, we’re on to the next step.

If you wonder about our code formatting, we use Fantomas to lay out our code. Just write code that compiles and then let Fantomas lay it out. It’s not perfect, but close 😊.

Step 5: Drive the core implementation by a test

We want to drive the business logic implementation guided by a test. This gives us focus, lets us think outside-in by defining the “interface” first, and gives us confidence that the code works—especially when we want to refactor the code.

Luckily, there is already a lot in place that we can use. Instead of writing an individual test for every command possible, we often group them to reduce writing effort and duplication. In this case, we have a single test that executes all the different update commands on an undertaking. This way, we only have to set up everything needed once.

The TDD and unit-testing dogmatists typically object that a single test should have a single assertion. That is absolutely helpful when you don’t have the power of F# and Unquote, but we do, and you’ll see why multiple assertions aren’t a problem in this case (this is not a general statement, but in this case, this kind of test is good enough).

We call such tests lifetime tests because they simulate a possible lifetime of a thing – an undertaking in this case.

Let’s take a look at what is already there:

The existing test sets up a bootstrapper. Note that this differs from the one in the Web API test, but it serves the same purpose: giving us access to the whole system, with faked external dependencies (e.g. service bus, callers to external services). As a side note, we can run these tests against a manually written in-memory database or a real one, which is very handy.

Before we can create an undertaking, we need to set up the data needed.

We need a structure because undertakings live inside a structure that defines what undertakings look like. We like to use parsers (Structure.parse), builders, and runners (Structure.persist) to make writing the tests easier. The list of managers is updated so that we later have permission to update the undertaking.

Then we need some groups:

Here we have some fancy stuff going on:

%%: a custom operator to quickly generate GUIDs and Workdays for tests, like let someGuid: Guid<StructureId> = %%"structure" or let workday = %%"13.12.2024".

%: operator from FSharp.UMX to convert Guid into Guid<Measure> and vice-versa.

AsyncResult.assertSuccess: ensures the operation is successful; otherwise, it fails the test.

Async.sequential |> Async.Ignore: because we add multiple groups, we add them sequentially, and we ignore the result

Then we need some permission groups (who is allowed to read or write activities of an undertaking)

And then, we create the undertaking by executing the CreateUndertaking command/operation.

Now, it’s time to check whether the roundtrip of saving and loading the undertaking can be executed successfully:

Because we use Unquote (test) and give the value bound to the result a unique name, we will see this name in the error message, if the actual value differs from the expected one. So, no worries about having multiple assertions in a single test!
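For example (illustrative only; the real assertion compares a whole undertaking projection):

open Swensen.Unquote

// if the actual value ever differs from the expected one, the failure message
// contains the binding names (projectedUndertakingLabel, expectedLabel)
// together with their values, so we know exactly which assertion broke
let expectedLabel = "Projekt"
let projectedUndertakingLabel = "Projekt"
test <@ projectedUndertakingLabel = expectedLabel @>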

The existing test continues by manipulating the undertaking and making the corresponding assertions. But we can now add what is needed to test renaming:

The only new thing here is the copy-and-update record expression to update what the last assertion expects: let expected = { expected with Label = renameOperationData.Label } replaces the old label with the one we updated. expected was set by a step I skipped here.

Again, imagine some red squiggles under RenameUndertakingOperationData and RenameUndertaking, which we haven’t yet implemented.

Step 6: Extend the Business Logic facade

I mentioned earlier that we have a business logic core surrounded by the Public Web API Controller layer or adapters (whatever name you like better). Besides the Public Web API adapter, there are also the Web API adapters for our own client, and the Azure Functions adapter. All these adapters make calls to the business logic code through a facade. We have a facade per sub-system. A subsystem is a mostly independent part of our system, like attendance time, duty planning, or undertakings.
If you want to know more about our architecture, you can find a – also way too long – blog post here.

A facade defines all the operations that can be executed through it as a discriminated union:

type UndertakingOperationData =
    | CreateUndertaking of CreateUndertakingOperationData
    | ChangeGroupAssignments of ChangeGroupAssignmentsOperationData
    | FinishAction of FinishActionOperationData
    | RestartFinishedAction of RestartFinishedActionOperationData
    | UpdateStatus of UpdateStatusOperationData
    | UpdateBookableRange of UpdateBookableRangeOperationData
    | UpdateCode of UpdateCodeOperationData
    | UpdateFinishedActionsSettings of UpdateFinishedActionsSettingsOperationData
    | UpdateUndertakingManagers of UpdateUndertakingManagersOperationData
    | UpdateUndertakingActions of UpdateUndertakingActionsOperationData
    | UpdateUndertakingPermissions of UpdateUndertakingPermissionsOperationData
    | AddUndertakingSpecificPermission of AddUndertakingSpecificPermissionOperationData
    | UpdateUndertakingSpecificPermission of UpdateUndertakingSpecificPermissionOperationData
    | RemoveUndertakingSpecificPermission of RemoveUndertakingSpecificPermissionOperationData
    | SaveUndertakingPreset of SaveUndertakingPresetOperationData
    | DeleteUndertakingPreset of DeleteUndertakingPresetOperationData
    | RenameUndertaking of RenameUndertakingOperationData
    interface IOperationData

And provides a method to execute them:

type UndertakingsFeatures(startContext, storages: UndertakingsStorages, getNamedEmployees: Bridge.GetNamedEmployees) =

    member self.ExecuteOperation(operationData: UndertakingOperationData) =
        match operationData with
        | CreateUndertaking d ->
            createUndertaking
                storages.Structures.QueryEventsByStructureId
                storages.Undertakings.QueryEventsByUndertakingId
                storages.UndertakingGroups.QueryByStructureIdAndGroupIds
                storages.PermissionDefinitions.GetPermissionDefinitionEvents
                getNamedEmployees
                storages.Undertakings.PersistEventAndUndertaking
                startContext
                d
        | ChangeGroupAssignments d -> ...

The facade needs some things to be injected into it:

startContext: contains everything needed to execute workflows (out of scope for this blog post). Things like logger, message bus sender etc.

storages: contains the code used to access the database. We’ll see this later on.

getNamedEmployees: an adapter function to get data from another sub-system into the undertakings sub-system.

The facade is created by the PerRequestFactory we have seen earlier. That factory also creates the dependencies and passes them to the facade. In the tests, this is done by the bootstrapper.

The ExecuteOperation method matches on the operationData and executes the corresponding command/operation function by partially applying the functions accessing dependencies.

The above method is for our client, but there is an equivalent for the public API to call:

abstract member ExecuteOperationFromDevice:
        UndertakingOperationData * DeviceOperationMetadata -> Async<Result<unit, UndertakingsError>>
    default self.ExecuteOperationFromDevice(operationData, metadata) =
        self.ExecuteOperation operationData (DeviceOperationMetadata metadata)

It simply delegates the call to the method above. It has to be virtual so that we can fake this call in the Web API tests.

To support renaming undertakings, we need to add the discriminated union case (it’s already present in the picture further above) and implement the match case for it:

| RenameUndertaking d ->
            renameUndertaking
                storages.Undertakings.QueryEventsByUndertakingId
                storages.Undertakings.PersistEventAndUndertaking
                startContext
                d

At this point, we typically don’t know yet what storage functions we need, but this is a typical update, so we are confident that we will use a way to get events of a single undertaking and a way to persist an undertaking. If we are wrong, we’ll simply iterate on this code.

Step 7: Implement the command/operation

Now, it’s time to implement the actual command/operation.

First we need to define the operation data by specifying what the command needs. Sometimes, while implementing the command, we recognise that we missed stuff earlier. No problem, we go back to the code further out and add what is needed. Since we use vertical slices, all these things are close by and quick to navigate to. Yes, go read the blog post about the architecture stuff, too.

type RenameUndertakingOperationData =
    {
        OperationId: Guid<OperationId>
        StructureId: Guid<StructureId>
        UndertakingId: Guid<UndertakingId>
        Label: LabelWithTranslations
    }

The actual command implementation looks like this:

[<Literal>]
let RenameUndertakingOperationName = "RenameUndertaking"

let renameUndertaking
    queryUndertakingEvents
    persistEventAndUndertaking
    context
    (data: RenameUndertakingOperationData)
    metadata
    =
    asyncResult {
        let! undertaking =
            queryUndertakingEvents data.StructureId data.UndertakingId |!> UndertakingEvent.getUndertaking
            |> AsyncResult.requireSome UndertakingNotFound

        do! Authorization.validateUndertakingManager undertaking (metadata |> Requester.ofOperationMetadata)

        let renamed =
            {
                UndertakingEventId = Guid.generateNew ()
                UndertakingId = data.UndertakingId
                StructureId = data.StructureId
                Data = Renamed { Label = data.Label }
                Application = metadata |> OperationMetadata.getApplication
                TriggerId = %data.OperationId
            }

        let undertaking = undertaking |> UndertakingEvent.apply renamed

        do!
            OperationRunner.start context data.OperationId RenameUndertakingOperationName metadata
            |> OperationRunner.logAndPersistEventAndProjection
                persistEventAndUndertaking
                UndertakingEvent.getDomainDescription
                renamed
                undertaking
            |> OperationRunner.finish
    }

We use the asyncResult computation expression so that we can make asynchronous calls and use Results in case some business rule is violated and we must return an error. We use exceptions only for genuinely exceptional things like a lost database connection. Not-so-fun fact: database connection problems are not genuinely exceptional once a system runs 24/7 with thousands of users, so better protect all calls to external systems with Polly or something similar.

First, we get the undertaking from the database by querying the undertaking’s events and projecting them into the current state. AsyncResult.requireSome makes sure that we found the undertaking; otherwise, we return an UndertakingNotFound error.
The |!> operator is short for |> Async.map. We use it so often that we decided to introduce a dedicated operator.
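Its definition is presumably a one-liner; written without relying on an Async.map helper, it boils down to this sketch:

// x |!> f is the same as x |> Async.map f
let (|!>) (computation: Async<'a>) (mapping: 'a -> 'b) : Async<'b> =
    async {
        let! value = computation
        return mapping value
    }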

Second, we authorize this command. We check whether the caller of the operation is allowed to manipulate this undertaking. If validation fails, it will return an Error, so we have to use do!.

Third, we create the renamed event.

Now, a short detour:
We use read models for data that cannot be easily queried from the events alone. Undertakings are such a case because, for example, the status of an undertaking is often part of the query. Therefore, we must store the new event and update the read model. We made an architectural decision that we do this in the same database transaction at once – not eventually consistent over a service bus message, for example. It’s a trade-off decision with advantages (primarily simplicity) and disadvantages (data inconsistency when two commands update the same undertaking at the exact moment – one event would get lost; this is, however, very unlikely in this specific case).

To update the read model, we need the current state of the undertaking. Therefore, we apply the new renamed event onto the current undertaking.

Lastly, we use the OperationRunner – a helper module – to store the event and read-model entry.

The operation runner provides common functionality to commands, like logging, auditing, or notifying other system parts about changes in the data. For example, when we change data relevant to the accounting, we must call a variant of the OperationRunner.handleChangedWorkday function. The system will then recalculate the accounting numbers.

OperationRunner.finish logs the command as successful. If any error or exception occurs while the operation runner executes, a compensation message will be written to the service bus and the whole command will be compensated. This is achieved by removing all the events with a TriggerId = operationId. Compensation allows us to work with different data stores. We use transactions only for our SQL databases.

As a side note: The functions we call on other modules, like Authorization.validateUndertakingManager, or the OperationRunner‘s functions are tested by unit tests, which were written when these functions were introduced. So, whenever we program something more complicated than our confidence level, we split the code out into a dedicated function and write tests around it—outside-in TDD.

When executing the test – I hope you didn’t forget about it in the meantime – we still get a compiler warning because we don’t handle the Renamed event data in the event projection code. F#’s exhaustive checks on pattern matches on discriminated unions make it easy to spot what has to be done next.

As a side note, the command/operation function takes the role of the aggregate as known from DDD (Domain-Driven Design). It ensures the consistency of the data: after the function is executed, all business invariants hold (again).

Step 8: Implement the event projection

As a reminder, we add the Renamed event data to the existing list of event data:

type UndertakingRenamed = { Label: LabelWithTranslations }

type UndertakingEventData =
    | UndertakingCreated of UndertakingCreated
    | ActionFinished of ActionId: Guid<ActionId>
    | FinishedActionRestarted of ActionId: Guid<ActionId>
    | GroupAssignmentsUpdated of GroupAssignmentsUpdated
    | StatusUpdated of StatusUpdated
    | BookableRangeUpdated of BookableRangeUpdated
    | CodeUpdated of CodeUpdated
    | FinishedActionsSettingsUpdated of FinishedActionsSettingsUpdated
    | UndertakingManagersUpdated of UndertakingManagersUpdated
    | ActionsUpdated of ActionsUpdated
    | PermissionsUpdated of PermissionsUpdated
    | UndertakingSpecificPermissionAdded of UndertakingSpecificPermissionAdded
    | UndertakingSpecificPermissionUpdated of UndertakingSpecificPermissionUpdated
    | UndertakingSpecificPermissionRemoved of UndertakingSpecificPermissionRemoved
    | Renamed of UndertakingRenamed

[<Measure>]
type UndertakingEventId
type UndertakingEvent =
    {
        UndertakingEventId: Guid<UndertakingEventId>
        UndertakingId: Guid<UndertakingId>
        StructureId: Guid<StructureId>
        Data: UndertakingEventData
        Application: Application
        TriggerId: Guid
    }

The missing piece is to tell the event projector what to do when the Renamed event is projected. The “configuration” of the event projector is done in the getProjectionAction function:

module UndertakingEvent =

    let getProjectionAction event =
        match event.Data with
        | UndertakingCreated d ->
            Creates
                {
                    Undertaking.UndertakingId = event.UndertakingId
                    StructureId = event.StructureId
                    Label = d.Label
                    Actions = d.Actions
                    FinishedActions = DistinctList.empty
                    FinishedActionsSettings = d.FinishedActionsSettings
                    BookableRange = d.BookableRange
                    Groups = d.Groups
                    Status = d.Status
                    PermissionDefinitionIds = d.PermissionDefinitionIds
                    UndertakingSpecificPermissions = d.ProjectSpecificPermissions
                    Managers = d.Managers
                    Code = d.Code
                }
        | Renamed d -> Updates (fun undertaking -> { undertaking with Label = d.Label })

The function gets the event to project and returns an action describing what to do with it. In our case, we tell the projector to update the current value by changing the Label property to the value in the event data.
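To make the mechanics concrete, here is a minimal sketch of how a projector could fold such actions over the events of one undertaking – roughly what UndertakingEvent.getUndertaking presumably does under the hood (the real projector is shared infrastructure and looks different):

// illustrative only; Creates/Updates mirror the actions returned above
type ProjectionAction<'Projection> =
    | Creates of 'Projection
    | Updates of ('Projection -> 'Projection)

let project getProjectionAction events =
    (None, events)
    ||> List.fold (fun state event ->
        match getProjectionAction event, state with
        | Creates initial, _ -> Some initial
        | Updates update, Some current -> Some (update current)
        | Updates _, None -> None)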

You can find more info about how we do event sourcing here.

Now, the compiler warning is gone because we handle all possible cases (the code above is truncated; the other event cases are handled as well, just not shown). There is one thing missing to be able to run the test successfully: the database access.

Step 9: Implement the database access

Luckily for us, all we need is already implemented. So, we do a quick walkthrough.

We use an interface to define how the application can interact with the database:

type UndertakingsStorage =
    abstract member PersistEventAndUndertaking: UndertakingEvent -> Undertaking option -> unit async
    abstract member PersistEventsAndUndertakings: (UndertakingEvent * Undertaking option) list -> unit async
    abstract member QueryEventsByUndertakingId:
        Guid<StructureId> -> Guid<UndertakingId> -> EventsForProjection<UndertakingEvent> async

We use an interface because we have two implementations: a manually written in-memory simulation and the actual SQL Server access. I’ll show you only the SQL Server access:

module UndertakingsSqlStorage =

    [<CLIMutable>]
    type UndertakingEventRow =
        {
            Type: string
            EventId: Guid<UndertakingEventId>
            StructureId: Guid<StructureId>
            UndertakingId: Guid<UndertakingId>
            TriggerId: Guid
            Data: string
            Version: int
            Application: DateTime
        }

    let serialize, deserialize =
        SystemTextJson.Settings.create
            SystemTextJson.Settings.DatabaseOptions
            [ AsStringConverter (Workday.serialize, Workday.parse) ]

    let serializeUndertaking, deserializeUndertaking =
        SystemTextJson.Settings.create
            SystemTextJson.Settings.DatabaseOptions
            [ AsStringConverter (Workday.serialize, Workday.parse) ]

UndertakingEventRow: Matches the table holding the events for undertakings. CLIMutable is required so that Dapper can write the properties. Where possible, we only use basic types so that we don’t have to implement custom converters.

serialize, deserialize: We set up JSON serialization options to serialize the event data of the undertaking events. We need a custom JSON converter for Workday.

Then, we need mapper functions to map events to and from rows:

let mapToRow (event: UndertakingEvent) =
        {
            Type =
                match event.Data with
                | UndertakingCreated _ -> "cre"
                | ActionFinished _ -> "fin"
                | FinishedActionRestarted _ -> "far"
                | GroupAssignmentsUpdated _ -> "gau"
                | StatusUpdated _ -> "su"
                | BookableRangeUpdated _ -> "bru"
                | CodeUpdated _ -> "cod"
                | FinishedActionsSettingsUpdated _ -> "asu"
                | UndertakingManagersUpdated _ -> "umu"
                | UndertakingEventData.ActionsUpdated _ -> "au"
                | PermissionsUpdated _ -> "pu"
                | UndertakingSpecificPermissionAdded _ -> "upa"
                | UndertakingSpecificPermissionUpdated _ -> "upu"
                | UndertakingSpecificPermissionRemoved _ -> "upr"
                | Renamed _ -> "ren"

            EventId = event.UndertakingEventId
            StructureId = event.StructureId
            UndertakingId = event.UndertakingId
            TriggerId = event.TriggerId
            Data = event.Data |> serialize
            Application = event.Application |> unwrapApplication
            Version = 100
        }

    let mapToEvent (row: UndertakingEventRow) =
        {
            UndertakingEvent.UndertakingEventId = row.EventId
            StructureId = row.StructureId
            UndertakingId = row.UndertakingId
            TriggerId = row.TriggerId
            Data =
                match row.Version with
                | 100 -> row.Data |> deserialize
                | _ -> failwith "Unknown version"
            Application = row.Application |> wrapApplication
        }

Type: It makes debugging much easier when we store the type of the event in the database row. It is never read back, but when we look at the data in the database, it makes understanding the data easier.

Version: The version is used in migration scenarios. I’ll show you that another day.

Finally, we have the implementation of the database access interface for undertakings:

let createStorage (context: PersistencyContext) =

        { new UndertakingsStorage with
            member self.PersistEventAndUndertaking event undertaking =
                self.PersistEventsAndUndertakings [ event, undertaking ]

            member _.PersistEventsAndUndertakings data =
                let commands =
                    [
                        for event, undertaking in data do
                            """
                            INSERT INTO undertakings.undertakingEvents
                            (type, eventId, structureId, undertakingId, triggerId, data, version, application)
                            VALUES
                            (@type, @eventId, @structureId, @undertakingId, @triggerId, @data, @version, @application)
                            """,
                            event |> mapToRow :> obj

                            """
                            DELETE FROM undertakings.undertakingReadModel
                            WHERE tenantId = @tenantId AND structureId = @structureId AND undertakingId = @undertakingId
                            """,
                            {|
                                tenantId = context.TenantId
                                structureId = event.StructureId
                                undertakingId = event.UndertakingId
                            |}

                            match undertaking with
                            | Some undertaking ->
                                """
                                INSERT INTO undertakings.undertakingReadModel
                                (structureId, undertakingId, status, data, version, application)
                                VALUES
                                (@structureId, @undertakingId, @status, @data, @version, @application)
                                """,
                                {|
                                    structureId = undertaking.StructureId
                                    undertakingId = undertaking.UndertakingId
                                    status = mapStatus undertaking.Status
                                    data = serializeUndertaking undertaking
                                    version = 100
                                    application = event.Application |> unwrapApplication
                                |}
                            | None -> ()
                    ]
                Sql.ExecuteInTransaction (context, commands)

We use an object expression to create an interface instance and specify the implementation of the methods on it.

Sql is a wrapper around Dapper that sends telemetry data to Azure Application Insights. The ExecuteInTransaction function executes all passed commands in a single transaction.

And here is the query method that we used in our command code:

member _.QueryEventsByUndertakingId structureId undertakingId =
                async {
                    let! rows =
                        Sql.Query<UndertakingEventRow> (
                            context,
                            """
                            SELECT eventId, structureId, undertakingId, triggerId, data, version, application
                            FROM undertakings.undertakingEvents
                            WHERE tenantId = @tenantId
                            AND structureId = @structureId
                            AND undertakingId = @undertakingId
                            AND compensatedAt IS NULL
                            """,
                            {|
                                tenantId = context.TenantId
                                structureId = structureId
                                undertakingId = undertakingId
                            |}
                        )
                    return rows |> List.map mapToEvent |> EventsForProjection
                }

EventsForProjection is a single-case discriminated union that makes sure we don’t mix up events for a single projection (e.g., a single undertaking) with events for multiple projections (e.g., multiple undertakings).
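It is presumably as simple as this sketch:

type EventsForProjection<'TEvent> = EventsForProjection of 'TEvent list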

Step 10: Release it

Now, our test runs successfully. We glance through the code and check our confidence level. If it were low, we’d add more tests. But in this case, confidence levels are high.

Time to release the code change by pushing the code. Optionally, if you want somebody else’s feedback on your changes, you can push your changes to a feature branch and ask another developer to review them; otherwise push to the main branch directly.

Once the build and test pipeline has finished successfully, we press the release button and the changes are live.

Step 11: See your changes at work

As our final step, we want to see our new feature live in action. So, let’s call the public API.

The simplest way is to use an F# script file:

We load some NuGet packages with #r, define some values, and then we can make a call to the public API (imagining that we also replace localhost with the correct value for the live system):
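The script is shown as an image in the post; a rough, self-contained equivalent using only HttpClient from the BCL could look like this (host, IDs, token, and the bearer scheme are placeholders and assumptions):

open System.Net.Http
open System.Net.Http.Headers
open System.Text

// placeholders – look these up in the client / your environment
let structureId = "E50B9571-6B39-4FBA-8E46-5DED4A1479AB"
let undertakingId = "EB394049-696D-4FB7-BD83-776151D97B69"
let token = "<api token>"

let url =
    $"https://localhost/public/v1.0/undertakings/structures/{structureId}/undertakings/{undertakingId}"

let body =
    """
    {
        "label": { "default": "Project", "de": "Projekt" }
    }
    """

let client = new HttpClient ()
client.DefaultRequestHeaders.Authorization <- AuthenticationHeaderValue ("Bearer", token)

let response =
    client.PutAsync (url, new StringContent (body, Encoding.UTF8, "application/json"))
    |> Async.AwaitTask
    |> Async.RunSynchronously

printfn $"{int response.StatusCode} {response.StatusCode}"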

Let’s define the needed values for structureId, undertakingId and token by looking them up in the client. And finally, let’s run the script in FSI.

The request succeeds with a 200 OK. Woohoo!

Conclusions

We have built a small but complete feature in our application to rename a project (undertaking). We’ve seen all the layers (Web API ➡️ Facade ➡️ Core ⬅️ Database), and we applied many F# features to our benefit:

Computation expressions (taskResult, validation, asyncResult, async) to simplify working with business errors and asynchrony.

Discriminated unions for modelling the domain closely and to have exhaustive pattern matching with compiler warnings.

(Exhaustive) Pattern matching: to easily spot missing patterns.

Active patterns for more straightforward pattern matching.

Partial Application (and currying) to inject dependencies.

Copy-and-update record expressions to update record properties, and object expressions to create instances of interfaces.

Quotations for simple test assertions.

FSI to quickly execute some F# code.

I hope you enjoyed your first day on our team!

About the author

Urs Enzler
