I have written code without tests that ran in production without defects, and I have written buggy code with TDD (Test-Driven Development). It's time to look back at 35 years of coding: when do tests help, when is there something better, and, especially, what are those better things?
In the final part, we examine how LLMs affect testing and conclude the series.
What about LLMs?
LLMs can generate a lot of code quickly. But quality cannot be tested into software. Automated tests can only check explicitly stated facts, and exploratory manual testing is only feasible to a very limited degree.
So the concepts I wrote about in the previous parts of this series help constrain the code generated by LLMs, leaving less room for defects to slip in.
IMHO, the current state of LLMs is still far from generating code that is good enough without human intervention (review and refactoring). But the concepts in this series help produce a better starting point.
I prefer using LLMs as sparring partners to find holes in my thinking and to review my code. And, of course, I asked an LLM about ideas to reduce the need for automated tests.
Here is its answer:
- Shift-Left Testing
- Utilize Static Analysis
- Apply Risk-Based Testing
- Embrace Property-Based Testing
- Use Test Automation Wisely
- Refactor Code for Testability
- Leverage Mutation Testing
- Adopt a Good CI/CD Pipeline
- Implement Contract Testing
- Use Parameterized Tests
- Optimize Test Suites by Removing Redundant Tests
- Utilize Code Coverage Smartly
- Leverage Consumer-Driven Contracts
- Reduce UI Tests Where Possible
- Use Monitoring and Observability Instead of Excessive Tests
- Encourage Exploratory Testing for High-Impact Areas
- Leverage AI-Powered Test Generation
- Implement Chaos Testing
- Adopt Feature Toggles for Controlled Releases
- Optimize Dependency Management
- Use Delta Testing
- Employ Smart Test Prioritization
- Design Systems for Fault Tolerance
- Monitor Production Instead of Over-Testing
- Implement Self-Healing Tests
- Use Model-Based Testing
- Apply Snapshot Testing
- Use Contracts for Third-Party Dependencies
- Reduce Overlapping Tests in Microservices
- Automate Test Environment Setup
- Validate with Canaries Instead of Excessive Staging Tests
- Reduce End-to-End Testing Bottlenecks
- Adopt Behavior-Driven Development (BDD)
- Use Dependency Injection for Easier Testing
- Optimize Assertions in Tests
- Adopt Specification-Based Testing
- Implement Feature Branch Testing
- Use Property-Based Testing Instead of Exhaustive Cases
- Design for Observability First
- Reduce Statefulness in Code
- Use Meta-Testing to Evaluate Effectiveness
- Combine Tests Where Possible
- Prioritize Happy Path Testing
- Explore Data-Driven Testing
- Use Golden Master Testing for Legacy Code
- Adopt Immutable Infrastructure
- Use Differential Testing
- Emphasize Static Typing and Compiler Guarantees
- Optimize Database Queries Instead of Testing Every Scenario
- Focus on Error Prevention Over Error Detection
- Implement Feature Flags for Gradual Rollouts
- Write Contracts for Third-Party Integrations
- Reduce System Complexity Through Modularization
- Use Time Travel Debugging Instead of Excessive Logging Tests
- Focus on Architectural Patterns That Minimize Testing Needs
- Apply Test-Impact Analysis
- Use Compiler Optimizations to Reduce Runtime Issues
- Apply Orthogonal Array Testing
- Emphasize Modular Design for Reusable Testing
- Build Resilient APIs Instead of Testing Every Scenario
- Reduce Duplication in Test Fixtures
- Implement Type-Driven Development (TDD)
- Use Continuous Deployment for Fast Feedback
- Reduce Test Combinations Using Pairwise Testing
- Optimize Integration Testing Through Contract-Driven Design
- Monitor Real-World Usage Instead of Hypothetical Edge Cases
- Favor Deterministic Code for Easier Testing
- Use Feature Deployment Strategies Instead of Excessive Pre-Release Testing
- Leverage Constraint-Based Testing
- Rely on Formal Verification for Critical Systems
- Favor Functional Programming Principles
- Use Static Code Contracts
- Implement Predictive Analytics for Failures
- Use Auto-Mocking for Faster Unit Testing
- Reduce UI Test Maintenance with Smart Locators
- Apply Hierarchical Test Design
- Use Database Constraints Instead of Tests
- Reduce Integration Testing for Stable APIs
- Validate Configuration Using Infrastructure-as-Code
- Reduce Manual Test Case Writing with Test Synthesis
- Favor Deterministic Builds
- Optimize Assertions to Focus on High-Impact Behavior
- Avoid Over-Mocking
- Use Cross-Browser Testing Selectively
- Implement Dynamic Test Configuration
It's an answer from an LLM, so the list contains many good ideas, along with some not-so-good ones and some that miss the question entirely. Still, it is useful as a source of inspiration.
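To make one of the ideas from the list concrete, here is a minimal sketch of property-based testing in Python. Real projects would typically use a library such as Hypothesis; this hand-rolled version with only the standard library illustrates the core idea: instead of asserting hand-picked examples, we assert a general property (here, that decoding an encoding returns the original string) against many random inputs. The `run_length_encode`/`run_length_decode` functions are hypothetical examples, not from the series.

```python
import random

def run_length_encode(s: str) -> list[tuple[str, int]]:
    """Encode a string as (character, count) pairs."""
    result: list[tuple[str, int]] = []
    for ch in s:
        if result and result[-1][0] == ch:
            result[-1] = (ch, result[-1][1] + 1)
        else:
            result.append((ch, 1))
    return result

def run_length_decode(pairs: list[tuple[str, int]]) -> str:
    """Invert run_length_encode."""
    return "".join(ch * count for ch, count in pairs)

# Property: decode(encode(s)) == s for any string.
# Checked against many random inputs instead of a few fixed cases.
for _ in range(1000):
    s = "".join(random.choice("ab") for _ in range(random.randint(0, 20)))
    assert run_length_decode(run_length_encode(s)) == s
```

One property test like this can replace dozens of hand-written example cases, which is exactly the kind of test-effort reduction the list is hinting at.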
Summary of the whole series
Depending on your programming language, you can replace part of your automated tests with better design and better use of language features.
By replacing tests with compilers, linters, design, and so on, we reduce the effort required to write and maintain those tests, leading to faster development and maybe even higher confidence.
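As one illustration of replacing tests with design and language features, here is a minimal sketch in Python (a hypothetical `OrderState` example, not from the series). By modeling a state as an `Enum` rather than a free-form string, invalid states become unrepresentable, so tests for "what if the state is a typo?" are no longer needed; a type checker such as mypy catches misuse before any test runs.

```python
from enum import Enum

class OrderState(Enum):
    PENDING = "pending"
    SHIPPED = "shipped"
    DELIVERED = "delivered"

def advance(state: OrderState) -> OrderState:
    # Because `state` can only be one of the three enum members,
    # there is no "unknown state" branch to write or to test.
    transitions = {
        OrderState.PENDING: OrderState.SHIPPED,
        OrderState.SHIPPED: OrderState.DELIVERED,
        OrderState.DELIVERED: OrderState.DELIVERED,
    }
    return transitions[state]
```

With a plain string parameter, we would need tests for misspellings and casing; the enum makes that whole class of defects impossible by construction.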
We apply all the concepts I presented in this series, but we still have many automated tests, and we use different kinds for different purposes. The following picture gives an overview:

I hope you can take some inspiration from this series of posts. If so, I’d like to hear from you in the comments here or in a reply on social media (it helps my motivation to continue blogging). Happy coding!