Saturday, 12 May 2018

No unit tests? No problem!

A couple of weeks ago I created a Twitter poll about unit tests that asked:

"Is code without unit tests inherently bad code?" 
The conversations that emerged covered a number of interesting points, which challenged some of my assumptions about unit tests and how we evaluate code.

What is bad code?

When I framed my original question, I deliberately chose the phrase "inherently bad code". I was trying to emphasize that the code would be objectively bad. That the absence of unit tests would be a definitive sign, one of a set of impartial measures for assessing code.

In my organisation, most of our agile development teams include unit tests in their Definition of Done. Agile practitioners define a Definition of Done to understand what is required for a piece of work to be completed to an acceptable level of quality. In this context, the absence of unit tests is something that the agile development team have agreed would be bad.

A Definition of Done may seem like an unbiased measure, but it is still a list that is collectively agreed by a team of people. The code that they create isn't good or bad in isolation. It is labelled as good or bad based on the criteria that this group have agreed will define good or bad for them. The bad code of one team may be the good code of another, where the Definition of Done criteria differs between each.

Is the code inherently bad when it doesn't do what the end user wanted? Not necessarily. What if the unexpected behaviour is still useful? There are a number of famous products that were originally intended for a completely different purpose e.g. bubble wrap was originally marketed as wallpaper [Ref].

I believe there is no such thing as inherently bad code. It is important to understand how the people who are interacting with your code will judge its value.

Why choose to unit test?

Many people include unit testing in their test strategy as a default, without thinking much about what type of information the tests provide, the practices used to create them, or the risks that they mitigate.

Unit tests are usually written by the same developer who is writing the code. They may be written prior to the code, in a test-driven development approach, or after the code. Unit tests define how the developer expects the code to behave by coding the "known knowns" or "things we are aware of and understand" [Ref].
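As a minimal sketch of what "coding the known knowns" looks like in practice, the tests below capture a developer's expectations for a hypothetical `slugify` helper (the function name and behaviour are illustrative, not from the post). In a test-driven approach the tests would be written first and the implementation written to make them pass:

```python
# Tests that encode the developer's expectations ("known knowns")
# for a hypothetical slugify helper, written before the code itself.
def test_slugify_lowercases_and_joins_words():
    assert slugify("No Unit Tests") == "no-unit-tests"

def test_slugify_ignores_surrounding_whitespace():
    assert slugify("  Hello World  ") == "hello-world"

# The implementation is then written to make the tests pass.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

test_slugify_lowercases_and_joins_words()
test_slugify_ignores_surrounding_whitespace()
```

The tests double as documentation: anyone reading them can see exactly what the developer understood the function to do at the time it was written.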

By writing unit tests the developer has to think carefully about what their code should do. Unit tests catch obvious problems with an immediate feedback loop to the developer, by running the tests locally and through build pipelines. If the developer discovers issues and resolves them as the code is being created, this offers opportunities for other people to discover unexpected or interesting problems via other forms of testing.

Where there are different types of automated testing, across integration points or through the user interface, unit tests offer an opportunity to exercise a piece of functionality at the source. This is especially useful when testing a function that behaves differently as data varies. Rather than running all of these variations through the larger tests, you may be able to implement these checks at a unit level.

Unit tests require the developer to structure their code so that it is testable. These implementation patterns create code that is more robust and easier to maintain. Where a production problem requires refactoring of existing code, the presence of unit tests can make this a much quicker process by providing feedback that the code is still behaving as expected.
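One common example of such an implementation pattern is dependency injection: a dependency that is awkward to control in a test (the clock, a database, a network call) is passed in rather than hard-coded. The sketch below is illustrative, assuming a hypothetical `greeting` function that depends on the current time:

```python
# Structuring code for testability: the clock is injected rather than
# hard-coded, so a test can supply a fixed time instead of the real one.
from datetime import datetime, time
from typing import Callable

def greeting(now: Callable[[], datetime] = datetime.now) -> str:
    return "Good morning" if now().time() < time(12) else "Good afternoon"

# In a test, inject a fake clock to pin the behaviour down:
assert greeting(lambda: datetime(2018, 5, 12, 9, 0)) == "Good morning"
assert greeting(lambda: datetime(2018, 5, 12, 15, 0)) == "Good afternoon"
```

Production code simply calls `greeting()` and gets the real clock; only the tests supply a substitute. The same structure that makes the function testable also makes its dependencies explicit, which is part of why testable code tends to be easier to maintain.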

The existence of unit tests does not guarantee these benefits. It is entirely possible to have a lot of unit tests that add little value. The developer may have misunderstood how to implement the tests, worked in isolation, or designed their test coverage poorly. The merit of unit tests is often dependent on team culture and other collaborative development practices.

No unit tests? No problem!

Though there are some solid arguments for writing unit tests, their absence isn't always a red flag. In some situations we can realise the benefits of unit testing through other tools.

Clean implementation patterns that make code easier to maintain may be enforced by static analysis tools. These require code to follow a particular format and set of conventions, rejecting anything that deviates from the agreed norm before it is committed to the code base. These tools can even detect some of the same functional issues as unit tests.
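As a toy illustration of the idea (real tools like flake8, pylint, or ESLint are far more sophisticated), a static check inspects the code's structure without running it. This hypothetical check uses Python's standard `ast` module to flag bare `except:` clauses, a convention many teams enforce:

```python
# Toy static analysis check: flag bare "except:" clauses by inspecting
# the syntax tree, without ever executing the code under analysis.
import ast

def find_bare_excepts(source: str) -> list:
    """Return the line numbers of bare except handlers in the source."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]

snippet = """try:
    risky()
except:
    pass
"""
print(find_bare_excepts(snippet))  # prints [3]
```

A pre-commit hook running checks like this one rejects the deviation before it reaches the code base, which is the feedback loop the paragraph above describes.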

Rather than writing unit tests to capture known behaviour, you may choose to push this testing up into an integration layer. Where the data between dependent systems includes a lot of variation, shifting the tests can help to verify that the relationship between the systems is correct rather than focusing on the individual components. There is a trade-off in complexity and time to execution, but simple integrated tests can still provide fast feedback to the developers in a similar fashion to unit testing.
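A minimal sketch of that trade-off, using two hypothetical components: instead of unit-testing a parser and a formatter separately, a single integrated test feeds the output of one into the other and checks that the relationship between them holds:

```python
# Two hypothetical components that must agree with each other.
def parse_csv_line(line: str) -> list:
    return [field.strip() for field in line.split(",")]

def format_report_row(fields: list) -> str:
    return " | ".join(fields)

def test_parse_and_format_together():
    # The integrated check: one component's output feeds the other,
    # so the test exercises the relationship, not each part in isolation.
    row = format_report_row(parse_csv_line("alice , 42 ,nz"))
    assert row == "alice | 42 | nz"

test_parse_and_format_together()
```

Because both components run in-process with no external dependencies, this kind of integrated test can still execute in milliseconds, which is why the feedback loop remains comparable to unit testing.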

When dealing with legacy code that doesn't include unit tests, trying to retrofit this type of testing may not be worth the effort. Similarly if the code is unlikely to change in the future, the effort to implement unit tests might not provide a return through easy maintainability, as maintenance will not be required.

There may be a correlation between unit tests and code quality, but one doesn't cause the other. "Just because two trends seem to fluctuate in tandem ... that doesn’t prove that they are meaningfully related to one another" [Ref].

9 comments:

  1. Hi Katrina,
    this reminds me of that ancient scroll they found back in 2006, ‘The Way of Testivus’. Especially these two pieces of advice:
    * Don’t get stuck on unit testing dogma.
    * Embrace unit testing karma.
    Here’s a link to the scroll’s translation: http://www.agitar.com/downloads/TheWayOfTestivus.pdf

    1. I hadn't seen this before, thanks for sharing!

  2. Thanks for another great post Katrina, one problem that I've encountered is that developers confuse unit tests with product testing, in that they believe that by writing unit tests and achieving an arbitrary code coverage number the product is tested. Unit tests will only demonstrate that the code can function as intended. They are often a complex change detection mechanism that has no clear relationship to business value. This is particularly the case when unit tests are written after the fact. One change I'm trying to effect at the moment in my organization is to introduce a step where the dev & tester identify risks and how they plan to mitigate them before implementation begins. I'm hoping that this will allow us to have a more holistic test approach, with most of our acceptance tests being written at unit level.

    1. This is a great example of the thinking that I was trying to provoke through this article. I wish you luck in getting your team to identify risks collaboratively and determine how to mitigate them through different types of testing. Sounds like a great way to get everyone understanding *why* they are testing.

  3. Katrina, thanks for sharing, I always enjoy reading your posts. In this case we might have to agree to disagree on the "No unit tests? No problem!" concept.

    See my objections below, I would really appreciate your feedback.

    Everything can be done badly: unit tests, but also integration tests. The fact that some developers don't know how to write unit tests should not be an excuse for not having unit tests. Who guarantees that the integration tests will be well written?

    When I have to modify some code that has no unit tests, I am frightened. What do I do? Change it and push to master? Is it ok to break the pipeline continuously? Am I not disrupting the team?

    Also, having worked with a ton of teams that didn’t do unit tests and started doing them (because of me most of the time), I have observed a definite reduction in issues detected late. This is obviously only my experience and I don’t expect it to be a fact for everybody.

    1. Hey Gus! Thanks for leaving a comment. The title of the post was a little facetious, but the argument I was trying to make about unit tests is that "their absence isn't always a red flag".

      I'd argue against introducing unit tests purely because there are no unit tests. You first need to understand what they're for. If you don't have any alternative ways of receiving fast feedback on the code during development, through static analysis tools or other types of testing, then that sounds like a valid reason. But if the benefits of unit testing are delivered elsewhere, then I would challenge what additional benefit unit testing would provide.

  4. You've written a great thought-provoking piece. I've re-read it a couple times over the space of a few days to get my head around your points.

    To me, and the types of software engineers I vastly prefer to work with, unit tests aren't optional. Period. You make some fine points that software sans unit tests can still provide value, but I think that same argument can be made for skipping testing altogether--and the result is rarely, RARELY anywhere near as valuable as we'd like to delude ourselves. The skill level of our industry simply is not at the level of mastery regarding design, thinking, and execution. Ergo, we get teams thinking they can skip unit testing because "We really are good enough!" and in reality "No, no you're not anywhere near good enough."

    Moreover, I'm rather cautious about positions like this giving recalcitrant teams and managers yet another excuse to avoid doing the right thing. I run into this all the time for both developers and testers--seizing upon any rationalization to skip building high-quality, high-value software.

    I've written a number of articles and posts on skipping unit testing over the years--and far more brilliant, experienced folks have written as well--but you've prompted me to do some more writing on this as well.

    I don't agree with your position in general, but thank you for a thoughtful, well-written, provoking post!

    1. Hi Jim! Thanks for your comment. I think Sarah Mei has a tidy summary of what I was trying to communicate with this post:

      "The writing and running of tests is not a goal in and of itself - EVER. We do it to get some benefit for our team, or the business."
      https://twitter.com/sarahmei/status/868941512817557504

      Unit tests are one way to achieve fast feedback, testable code, and considered implementation. I don't believe that they are the only way in every situation. I'd rather that the team had a discussion about why they are unit testing, than adopt the practice by default without understanding the potential benefit.

  5. Hi Katrina,
    Thanks for another great article.
    Key takeaway for me was not that we can do away with unit tests, but that if any kind of tests (unit or integration) are merely present as a checklist item, then we may not get value from them.
    One of the things that interests me in this write-up is that you mention pushing the unit tests a bit to the right, into the integration tests space.
    In my opinion Integration tests can mean a lot of different things and I am keen to understand what you mean when you say integration tests.
    I see integration tests as api tests using frameworks like Frisby.js as well as e2e tests.
    Also keen to know where you would put Contract tests (considering I am talking about a microservices architecture)

    Thanks,
    Tapan
