To test, or not to test? Part 5 – Final words

I wrote code without tests that ran in production without defects, and I wrote buggy code with TDD (Test-Driven Development). Time to look back at 35 years of coding: when do tests help, when is there something better, and especially, what are these better things?

In this final part, we examine how LLMs affect testing and conclude the series.

What about LLMs?

LLMs can generate a lot of code quickly. But quality cannot be tested into software: automated tests can only check explicitly stated facts, and exploratory manual testing can only cover a small fraction of the generated code.

So the concepts I wrote about in the previous parts of this series help constrain the code generated by LLMs, reducing the chance of introducing a defect.

IMHO, the current state of LLMs is still far from generating good enough code without human intervention (review and refactoring). But the concepts in this series help generate a better version.

I prefer using LLMs as sparring partners to find holes in my thinking and to review my code. And, of course, I asked an LLM about ideas to reduce the need for automated tests.

Here is its answer:

  1. Shift-Left Testing
  2. Utilize Static Analysis
  3. Apply Risk-Based Testing
  4. Embrace Property-Based Testing
  5. Use Test Automation Wisely
  6. Refactor Code for Testability
  7. Leverage Mutation Testing
  8. Adopt a Good CI/CD Pipeline
  9. Implement Contract Testing
  10. Use Parameterized Tests
  11. Optimize Test Suites by Removing Redundant Tests
  12. Utilize Code Coverage Smartly
  13. Leverage Consumer-Driven Contracts
  14. Reduce UI Tests Where Possible
  15. Use Monitoring and Observability Instead of Excessive Tests
  16. Encourage Exploratory Testing for High-Impact Areas
  17. Leverage AI-Powered Test Generation
  18. Implement Chaos Testing
  19. Adopt Feature Toggles for Controlled Releases
  20. Optimize Dependency Management
  21. Use Delta Testing
  22. Employ Smart Test Prioritization
  23. Design Systems for Fault Tolerance
  24. Monitor Production Instead of Over-Testing
  25. Implement Self-Healing Tests
  26. Use Model-Based Testing
  27. Apply Snapshot Testing
  28. Use Contracts for Third-Party Dependencies
  29. Reduce Overlapping Tests in Microservices
  30. Automate Test Environment Setup
  31. Validate with Canaries Instead of Excessive Staging Tests
  32. Reduce End-to-End Testing Bottlenecks
  33. Adopt Behavior-Driven Development (BDD)
  34. Use Dependency Injection for Easier Testing
  35. Optimize Assertions in Tests
  36. Adopt Specification-Based Testing
  37. Implement Feature Branch Testing
  38. Use Property-Based Testing Instead of Exhaustive Cases
  39. Design for Observability First
  40. Reduce Statefulness in Code
  41. Use Meta-Testing to Evaluate Effectiveness
  42. Combine Tests Where Possible
  43. Prioritize Happy Path Testing
  44. Explore Data-Driven Testing
  45. Use Golden Master Testing for Legacy Code
  46. Adopt Immutable Infrastructure
  47. Use Differential Testing
  48. Emphasize Static Typing and Compiler Guarantees
  49. Optimize Database Queries Instead of Testing Every Scenario
  50. Focus on Error Prevention Over Error Detection
  51. Implement Feature Flags for Gradual Rollouts
  52. Write Contracts for Third-Party Integrations
  53. Reduce System Complexity Through Modularization
  54. Use Time Travel Debugging Instead of Excessive Logging Tests
  55. Focus on Architectural Patterns That Minimize Testing Needs
  56. Apply Test-Impact Analysis
  57. Use Compiler Optimizations to Reduce Runtime Issues
  58. Apply Orthogonal Array Testing
  59. Emphasize Modular Design for Reusable Testing
  60. Build Resilient APIs Instead of Testing Every Scenario
  61. Reduce Duplication in Test Fixtures
  62. Implement Type-Driven Development (TDD)
  63. Use Continuous Deployment for Fast Feedback
  64. Reduce Test Combinations Using Pairwise Testing
  65. Optimize Integration Testing Through Contract-Driven Design
  66. Monitor Real-World Usage Instead of Hypothetical Edge Cases
  67. Favor Deterministic Code for Easier Testing
  68. Use Feature Deployment Strategies Instead of Excessive Pre-Release Testing
  69. Leverage Constraint-Based Testing
  70. Rely on Formal Verification for Critical Systems
  71. Favor Functional Programming Principles
  72. Use Static Code Contracts
  73. Implement Predictive Analytics for Failures
  74. Use Auto-Mocking for Faster Unit Testing
  75. Reduce UI Test Maintenance with Smart Locators
  76. Apply Hierarchical Test Design
  77. Use Database Constraints Instead of Tests
  78. Reduce Integration Testing for Stable APIs
  79. Validate Configuration Using Infrastructure-as-Code
  80. Reduce Manual Test Case Writing with Test Synthesis
  81. Favor Deterministic Builds
  82. Optimize Assertions to Focus on High-Impact Behavior
  83. Avoid Over-Mocking
  84. Use Cross-Browser Testing Selectively
  85. Implement Dynamic Test Configuration

It’s an answer from an LLM, so the list contains many good ideas, along with some not-so-good ones and some that miss the question entirely. Still, the list is useful as a source of inspiration.
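To make one of the good ideas concrete: property-based testing (items 4 and 38 in the list) means stating properties that must hold for all inputs and letting a library generate the concrete examples. Here is a minimal sketch using the fast-check library for TypeScript; the fullName function is an invented example, not code from this series:

    import fc from "fast-check";

    // Function under test: joins first and last name with a single space.
    function fullName(first: string, last: string): string {
      return `${first.trim()} ${last.trim()}`;
    }

    // Instead of hand-picking examples, we state a property that must
    // hold for ALL inputs; fast-check generates and shrinks the cases.
    fc.assert(
      fc.property(fc.string(), fc.string(), (first, last) => {
        const result = fullName(first, last);
        // The trimmed inputs must both appear in the result.
        return result.includes(first.trim()) && result.includes(last.trim());
      })
    );

A single property like this often replaces a handful of hand-written example tests, which is exactly the "reduce the need for automated tests" angle of the question.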

Summary of the whole series

Depending on your programming language, you can replace a part of the automated tests with better design and better use of language features.

By replacing tests with compilers, linters, design, etc., we reduce the effort required to write and maintain these tests, leading to faster development and maybe even higher confidence.
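As an illustration of what "replacing tests with the compiler" can look like, here is a minimal sketch; the newsletter domain and all names in it are invented for this example, not taken from the series. Instead of writing a test asserting that mail is only sent to verified addresses, the rule is encoded in the types, so the compiler rejects the faulty call on every build:

    // Two distinct types instead of one string plus a runtime check.
    type UnverifiedEmail = { kind: "unverified"; address: string };
    type VerifiedEmail = { kind: "verified"; address: string };

    // Verifying is the only way to obtain a VerifiedEmail value.
    function verify(email: UnverifiedEmail): VerifiedEmail | undefined {
      // ...real verification logic would go here (sketch only)...
      return { kind: "verified", address: email.address };
    }

    // The signature states the rule: no test needed to check that
    // unverified addresses are rejected -- the compiler enforces it.
    function sendNewsletter(recipient: VerifiedEmail): void {
      console.log(`Sending to ${recipient.address}`);
    }

    const signup: UnverifiedEmail = { kind: "unverified", address: "a@b.c" };
    // sendNewsletter(signup);  // <- compile error: wrong type
    const verified = verify(signup);
    if (verified !== undefined) {
      sendNewsletter(verified); // OK: the type proves verification happened
    }

The test "sendNewsletter rejects unverified addresses" simply disappears from the suite; the type checker runs it, in effect, on every compilation.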

We apply all the concepts I presented in this series. But we still have many automated tests, and we use different kinds for different purposes. The following picture gives an overview:

Different kinds of automated tests for different purposes in our codebase.

I hope you can take some inspiration from this series of posts. If so, I’d like to hear from you in the comments here or in a reply on social media (it helps my motivation to continue blogging). Happy coding!

About the author

Urs Enzler
