Your tests come with a glowing code coverage report, but I’m not convinced. A subtle rot spreads through a test suite without anyone noticing: the tests keep passing, the coverage report keeps claiming health, yet the tests drift further and further from demonstrating the system’s true behavior. Thankfully, we can fight back by identifying the root cause of this rot and addressing it with a simple assertion method.
Naive integration tests
Consider a basic Contact List application that tracks the names and email addresses of your contacts. We’d develop the Edit Contact feature along with an integration test: one that invokes the feature and confirms the behavior, all the way out to a real database. The test would begin by invoking the Edit Contact feature for a sample Contact and querying the database for the updated Contact’s actual state. We’d finish the test by asserting on the Contact’s individual properties, demonstrating all the expected updates took hold:
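A sketch of such a test follows. The names here (`EditContact`, `ShouldEditContact`, the sample values) are hypothetical stand-ins, and an in-memory dictionary stands in for the real database so the sketch is self-contained:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical Contact entity.
public class Contact
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

public static class EditContactTest
{
    // In-memory stand-in for the real database.
    static readonly Dictionary<Guid, Contact> Database = new Dictionary<Guid, Contact>();

    // Stand-in for the Edit Contact feature under test.
    static void EditContact(Guid id, string name, string email)
    {
        var contact = Database[id];
        contact.Name = name;
        contact.Email = email;
    }

    public static void ShouldEditContact()
    {
        var id = Guid.NewGuid();
        Database[id] = new Contact { Id = id, Name = "Jane Doe", Email = "jane@example.com" };

        EditContact(id, "Jane Smith", "jane.smith@example.com");

        var actual = Database[id];

        // Field-by-field assertions: complete today, silently incomplete the
        // moment a new property appears on Contact.
        AssertEqual(actual.Id, id);
        AssertEqual(actual.Name, "Jane Smith");
        AssertEqual(actual.Email, "jane.smith@example.com");
    }

    static void AssertEqual<T>(T actual, T expected)
    {
        if (!EqualityComparer<T>.Default.Equals(actual, expected))
            throw new Exception($"Expected {expected} but found {actual}.");
    }

    public static void Main()
    {
        ShouldEditContact();
        Console.WriteLine("Test passed.");
    }
}
```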
The rot sets in
This is great, but only for the next few hours. The next day, your teammate adds a new PhoneNumber property to the Contact entity. They diligently visit features like Add Contact and Edit Contact to make use of the new property, but they aren’t yet comfortable with maintaining test coverage, so they aren’t even aware of your integration test. They do know to modify the Edit Contact feature to set PhoneNumber, and they even run the build. The integration test has quietly become incomplete, but it still passes.
Your teammate thinks they’re done, but they’ve taken the first step on the path to subtly eroding the value of your system’s test coverage. The test is lying, the code coverage report is lying, and developers will come to distrust the growing set of passing-yet-incomplete tests.
The root of the problem is the reliance on field-by-field assertions. Our intent was to assert on the complete new state of the system, but that is not what the test actually does. You have to remember to reevaluate your tests every time you change the system, and inevitably you forget.
Expressing intent with ShouldMatch
The integration test’s entire reason for being is to demonstrate the true and complete effect of the Edit Contact feature. In a perfect world, our poor teammate would not have to be aware of the existing test. Instead, we’d rather have the system tell them right away that their work is not finished. We can do so by defining our own assertion helper, ShouldMatch, and an exception for describing mismatched objects:
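The helper might look something like this: a sketch using `System.Text.Json`, with the shape of `MatchException` as an assumption rather than a definitive design:

```csharp
using System;
using System.Text.Json;

// Describes two whole objects that were expected to match but did not.
public class MatchException : Exception
{
    public string Expected { get; }
    public string Actual { get; }

    public MatchException(string expected, string actual)
        : base($"Expected a match, but found a mismatch.{Environment.NewLine}" +
               $"Expected:{Environment.NewLine}{expected}{Environment.NewLine}" +
               $"Actual:{Environment.NewLine}{actual}")
    {
        Expected = expected;
        Actual = actual;
    }
}

public static class AssertionExtensions
{
    static readonly JsonSerializerOptions Options =
        new JsonSerializerOptions { WriteIndented = true };

    // Compare complete objects by their JSON representations: any property
    // present on the type participates in the comparison automatically.
    public static void ShouldMatch<T>(this T actual, T expected)
    {
        var actualJson = JsonSerializer.Serialize(actual, Options);
        var expectedJson = JsonSerializer.Serialize(expected, Options);

        if (actualJson != expectedJson)
            throw new MatchException(expectedJson, actualJson);
    }
}
```

Because serialization walks every public property, a newly added property joins the comparison with no change to the helper or to existing tests.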
Here, we get a plain text representation of our complete objects for comparison, using the JSON serialization support (System.Text.Json) introduced in .NET Core 3. Next, we rephrase our original naive assertion:
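The field-by-field assertions collapse into a single whole-object comparison. The sketch below repeats a minimal `ShouldMatch` so it stands on its own, and uses hypothetical sample values:

```csharp
using System;
using System.Text.Json;

public class Contact
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

public static class ShouldMatchAssertions
{
    static readonly JsonSerializerOptions Options =
        new JsonSerializerOptions { WriteIndented = true };

    public static void ShouldMatch<T>(this T actual, T expected)
    {
        var actualJson = JsonSerializer.Serialize(actual, Options);
        var expectedJson = JsonSerializer.Serialize(expected, Options);

        if (actualJson != expectedJson)
            throw new Exception(
                $"Expected:{Environment.NewLine}{expectedJson}{Environment.NewLine}" +
                $"Actual:{Environment.NewLine}{actualJson}");
    }
}

public static class EditContactTest
{
    public static void ShouldEditContact()
    {
        var id = Guid.NewGuid();

        // In the real test, `actual` would be queried back from the database
        // after invoking the Edit Contact feature.
        var actual = new Contact { Id = id, Name = "Jane Smith", Email = "jane.smith@example.com" };

        // One assertion on the complete state: any property the feature sets
        // but the test doesn't expect now causes a failure.
        actual.ShouldMatch(new Contact
        {
            Id = id,
            Name = "Jane Smith",
            Email = "jane.smith@example.com"
        });
    }
}
```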
This solves the immediate problem. The moment the developer adds PhoneNumber support to the Edit Contact feature, this test will begin to (rightly!) fail. The test fails because the developer hasn’t updated the test to match the true behavior of the system.
Enhancing ShouldMatch with your diff tool
ShouldMatch lets us better express our intent, but the test will fail with a rather lengthy message listing two large JSON strings, leaving the developer to eyeball where the strings differ.
To give the developer a fantastic development experience, telling them exactly what’s wrong with the test, we’ll modify our test runner’s own behavior. Using the Fixie test framework, we’ll define a convention: whenever the developer is running a single test, if the test fails due to a MatchException, launch the developer’s own diff tool with the Expected and Actual JSON strings:
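Fixie’s convention hooks have changed across versions, so rather than reproduce the exact wiring, here is the heart of the idea: a helper the convention can call when a MatchException escapes a single-test run. The `code --diff` command (VS Code) is an assumption; substitute your own diff tool:

```csharp
using System;
using System.Diagnostics;
using System.IO;

public static class DiffLauncher
{
    // Write both JSON strings to temp files so a diff tool can compare them.
    public static (string expectedPath, string actualPath) PrepareDiffFiles(
        string expected, string actual)
    {
        var directory = Path.GetTempPath();
        var expectedPath = Path.Combine(directory, "expected.json");
        var actualPath = Path.Combine(directory, "actual.json");

        File.WriteAllText(expectedPath, expected);
        File.WriteAllText(actualPath, actual);

        return (expectedPath, actualPath);
    }

    // Launch the developer's diff tool against the two files. "code --diff"
    // (VS Code) is an assumption; any two-file diff command works here.
    public static void Launch(string expected, string actual)
    {
        var (expectedPath, actualPath) = PrepareDiffFiles(expected, actual);

        Process.Start("code", $"--diff \"{expectedPath}\" \"{actualPath}\"");
    }
}
```

The convention would invoke `DiffLauncher.Launch(exception.Expected, exception.Actual)` only when a single test is being run, so a full-suite run never spawns a flurry of diff windows.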
With this infrastructure in place, the developer updates the Edit Contact feature, witnesses a surprising test failure right away, and then upon running that test in isolation, they see this:
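Illustratively (the values here are hypothetical), the diff tool highlights exactly the property the test failed to expect:

```diff
  {
    "Id": "a1b2c3d4-0000-0000-0000-000000000000",
    "Name": "Jane Smith",
    "Email": "jane.smith@example.com",
-   "PhoneNumber": null
+   "PhoneNumber": "555-0123"
  }
```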
The test is telling the developer that their job isn’t done yet, with exactly the guidance they need to finish the job, maintaining meaningful test coverage by default. For all those out there with a glowing code coverage report, ask yourself: How much of this coverage is even real?