This blog post is part of a larger series. Get an overview of the content here.
Unless you’re on a separate network that will never, ever be connected to anything else… Well, not even then. Viruses, Trojans, and other malware can still be brought in by users on CDs, DVDs, USB drives (DOKs), and so on. You can’t be 100% safe from everything.
Never underestimate security risks. Perform a threat model analysis, and then balance costs against risks: What is the risk involved when a security breach occurs? How much does it cost to fix the potential problem up front, and how much does it cost to fix it after a breach has occurred? Declare potential problems, talk about them, and involve the public relations and legal departments.
Topology never changes — until a server goes down and is replaced, or is moved to a different subnet for network reasons. Clients, of course, also connect and disconnect as needed. What happens to the application when those hard-coded or config-file values change?
Several solutions can be applied. The first and easiest step is to never hard-code any infrastructure configuration aspects. You could also use resilient protocols (e.g., multicast) or other discovery mechanisms (though these have their drawbacks too). If you want to make sure that your system can still meet its response time requirements when changes happen, consider introducing a chaos monkey.
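As a minimal sketch of the first step, an endpoint can be resolved at runtime instead of being hard-coded. The names here (`ORDER_SERVICE_URL`, the default address) are purely illustrative:

```python
import os

# Illustrative default; in practice even this could come from a config file.
DEFAULT_ENDPOINT = "http://orders.internal:8080"

def resolve_order_service() -> str:
    # Prefer an environment variable set by deployment tooling, so the
    # endpoint can change without rebuilding or redeploying the application.
    return os.environ.get("ORDER_SERVICE_URL", DEFAULT_ENDPOINT)
```

When the topology changes, operations updates the environment variable (or config source) and the application picks up the new address on its next resolution, instead of requiring a code change.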
In small systems the administrator usually has a good overview of the system. But what happens if he is hit by a bus? His replacement probably won’t know what to do. And if there are multiple admins, each rolling out various upgrades and patches, will everything grind to a halt? Will client software still be able to work with a new version of the server?
Automate and test deployment to the different environments right from the start of the project. Design your system so that multiple versions can run in multiple locations concurrently, and enable the admin to take parts of the system down for maintenance without adversely affecting the rest.
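One way to let multiple versions coexist is an explicit compatibility check during the client–server handshake. This is a hedged sketch, not a prescribed protocol; the semantic-versioning convention and the set of supported majors are assumptions for illustration:

```python
# Major versions this client knows how to talk to (illustrative values).
SUPPORTED_MAJOR_VERSIONS = {1, 2}

def is_compatible(server_version: str) -> bool:
    # Semantic-versioning style rule: minor and patch releases are assumed
    # backward compatible, so only the major version must be recognized.
    major = int(server_version.split(".")[0])
    return major in SUPPORTED_MAJOR_VERSIONS
```

A client using this check can keep working against a server that was patched from 2.3 to 2.4 during a rolling upgrade, and fail fast with a clear error when it meets a server it genuinely cannot talk to.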
Serialization adds communication overhead, which increases bandwidth usage and therefore has an impact on transport costs. Hardware and network infrastructure have upfront and ongoing costs (maintenance, support, licensing, power…). Even in large cloud datacenters, transferring data costs money.
The effect of serialization on performance further strengthens the argument to stay away from chatty communication over the network. Architects need to make trade-offs between infrastructure costs and development costs — upfront vs. ongoing.
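The size difference that serialization choices make can be seen even within a single format. This small sketch (with a made-up record) compares pretty-printed JSON to a compact encoding of the same data:

```python
import json

# Hypothetical payload; the field names are illustrative only.
record = {"order_id": 12345, "status": "shipped", "items": [1, 2, 3]}

pretty = json.dumps(record, indent=2)                 # human-readable, more bytes
compact = json.dumps(record, separators=(",", ":"))   # fewer bytes on the wire

print(len(pretty.encode()), len(compact.encode()))
```

The compact form carries exactly the same information in fewer bytes; multiplied across millions of chatty calls, that difference shows up directly in bandwidth and transport costs, which is why binary formats and coarser-grained messages are common trade-offs.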
The next post will cover even more about fallacies.