We received the following Ask-a-geek question from Patrick:
I have the following scenario: an MVC application using Ninject, and Moq for the unit tests.
We do some additional tests using Selenium, which tests the JavaScript etc. The scenario we want to test is: in a TestMethod we start IIS Express with the MVC project, start Selenium, do the clicks, and check the results. That all works perfectly.
Now we have a call to a third-party web service from our repository. In our unit tests we used Moq to mock that. In the Selenium tests we also want to mock it. Currently we have this solution:
– The MVC web.config has a setting; if that setting is present, we inject a test repository containing the Moq logic. That means when we start the test, MVC sees the setting and injects the Moq repository instead of the default one.
So far, all works well. But here are the concerns:
– It's not a nice solution, as all the Moq code is part of the production solution.
– The Moq logic and the test class are separated: the whole Moq logic lives in the MVC solution (repositories) and not in the test project.
Any ideas/suggestions/approaches for a nicer solution?
We use the following approach to test our software (covering only the automated testing part):
Unit Tests
Unit-test individual classes and mock (we say "fake" because we use FakeItEasy 😉) all their dependencies. This gives you very tight error localization.
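As a minimal sketch of such a unit test with FakeItEasy (the interface, service, and names here are invented for illustration, not taken from Patrick's solution):

```csharp
using FakeItEasy;
using NUnit.Framework;

// Hypothetical dependency and class under test.
public interface ICustomerRepository
{
    string GetName(int id);
}

public class GreetingService
{
    private readonly ICustomerRepository repository;

    public GreetingService(ICustomerRepository repository)
    {
        this.repository = repository;
    }

    public string Greet(int id)
    {
        return "Hello " + repository.GetName(id);
    }
}

[TestFixture]
public class GreetingServiceTests
{
    [Test]
    public void Greet_returns_greeting_with_customer_name()
    {
        // Fake the dependency so only GreetingService is exercised.
        var repository = A.Fake<ICustomerRepository>();
        A.CallTo(() => repository.GetName(42)).Returns("Patrick");

        var service = new GreetingService(repository);

        Assert.AreEqual("Hello Patrick", service.Greet(42));
    }
}
```

If this test fails, the defect is in `GreetingService` itself, not in the repository or anything behind it; that is what is meant by tight error localization.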
Specifications / Acceptance Tests
A specification tests from process boundary to process boundary. For example, from just behind the UI (the controller in the case of an MVC application), via the class calling a third-party service, to the data access class accessing the database. But note: we check only the internals of our own process. Therefore, we fake (or mock) the environment (UI, database, third-party service) in the specifications/acceptance tests.
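A sketch of such a specification, assuming hypothetical classes `OrdersController`, `OrderService`, and the boundary interfaces `IThirdPartyService` and `IOrderStore` (none of these come from Patrick's code). Only the boundaries are faked; the real classes in between are wired up as in production:

```csharp
using FakeItEasy;
using NUnit.Framework;

[TestFixture]
public class PlacingAnOrderSpecification
{
    [Test]
    public void Placing_an_order_stores_it_and_notifies_the_third_party_service()
    {
        // Fake only what lies outside the process boundary.
        var thirdParty = A.Fake<IThirdPartyService>();
        var store = A.Fake<IOrderStore>();

        // Use the real classes in between, wired as production would wire them.
        var controller = new OrdersController(new OrderService(thirdParty, store));

        controller.Place(new OrderModel { Article = "book", Quantity = 1 });

        // Verify the effects at the boundaries, not internal details.
        A.CallTo(() => store.Save(A<Order>.That.Matches(o => o.Article == "book")))
            .MustHaveHappened();
        A.CallTo(() => thirdParty.Notify(A<Order>._)).MustHaveHappened();
    }
}
```

Because everything between the two boundaries is real, such a test verifies the collaboration of several classes at once, while the fakes keep it fast and deterministic.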
With unit tests and specifications we have functionally tested our software. The only things that can still go wrong are the integrations with the environment (user, other services, database, …).
System Tests
System tests check the complete system as it would run in production, except for external services: we check only our own stuff and replace external parts with simulators. We use simulators for both hardware and external services. The only difference between real production and the test stage is configuration: instead of calling the real third-party service or the real hardware, the system calls the simulators.
For each test case, the simulator is told how to behave (in Patrick's case, a simple HTTP API on the simulator should work). Most of the time, a simple "for this input, return that" is enough. The simulator can grow together with your software: start with the absolute minimum, and think in terms of your software's use cases, not everything the third-party service offers, to keep the simulator as simple as possible.
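A minimal sketch of such a simulator using `HttpListener` from the .NET base class library; the URLs, the `/setup` endpoint, and the "one canned response" behavior are assumptions for illustration. The test first POSTs the response it wants to `/setup`, and the application under test then receives that response on every subsequent request:

```csharp
using System.IO;
using System.Net;
using System.Text;
using System.Threading.Tasks;

public class ThirdPartyServiceSimulator
{
    private readonly HttpListener listener = new HttpListener();
    private string cannedResponse = "";

    public void Start(string prefix) // e.g. "http://localhost:9000/"
    {
        listener.Prefixes.Add(prefix);
        listener.Start();
        Task.Run(() => Loop());
    }

    private void Loop()
    {
        while (listener.IsListening)
        {
            var context = listener.GetContext();

            if (context.Request.Url.AbsolutePath == "/setup")
            {
                // The test case tells the simulator how to behave.
                using (var reader = new StreamReader(context.Request.InputStream))
                {
                    cannedResponse = reader.ReadToEnd();
                }
            }

            // Every request (including /setup) gets the canned response back.
            var buffer = Encoding.UTF8.GetBytes(cannedResponse);
            context.Response.OutputStream.Write(buffer, 0, buffer.Length);
            context.Response.Close();
        }
    }

    public void Stop()
    {
        listener.Stop();
    }
}
```

In a Selenium-based system test like Patrick's, the test would start this simulator alongside IIS Express, and the MVC application's configuration on the test stage would point the third-party service URL at the simulator instead of the real service, so no Moq code needs to live in the production solution.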
Of course, implementing these simulators costs time and money. But they teach you a lot about your system, and they let you test as close to the real thing as possible. Using a real third-party service on the test stage is very painful: the tests then share state, and because of that internal state it is not guaranteed that the third-party service reacts the same way on every test run.
Maybe the third-party vendor already provides a simulator – that would be nice, but I have never seen it. They really should, though!