Sunday 16 February 2014

Own it

It feels like testing suffers as a profession because we fail to own our failures. We are quick to point out the plethora of reasons that something is not our fault. Where a product is released with problems, we didn't have enough time, or we weren't listened to, and anyway, we didn't write the bugs into the code in the first place, so why are you blaming us?

Testing, perhaps more than any other discipline in software development, includes a number of pretenders. These people may be called fake testers, possums, or zombies; there is no shortage of names for a problem that is widely acknowledged. Yet they remain sheltered in software development teams throughout the world, pervasive in an industry that allows them to survive and thrive. Why?

We don't take the blame.

Think of a retrospective or post-project review where a tester took ownership of a problem, identified their behaviour as a cause, and actively worked to prevent recurrence in their work. Now think of the problems identified in testing that would be gone if the developer had delivered the code earlier, or if the project manager had allowed more time for defect fixing, or if the business analyst had identified the requirement properly. It seems that more often we attribute our problems elsewhere; fingers are pointed.

It is a brave thing to claim a failure. In doing so we acknowledge our imperfection and expose our flaws. I think that testers do not show this bravery enough. Instead, criticism of a poor product, or a failed project, is water off a tester's back. We escape the review unscathed. We cheer our unblemished test process. We see this as a victory to be celebrated with other testers.

This is what allows bad testers to hide. Where a tester leaves a review without ownership of any problems, they are warranted in considering their contribution successful. A tester may then consider themselves associated with any number of "successful" projects, simply because none of the failures were attributed to them.

How do we fix this? By considering how everything could be our fault.

Imagine a project that goes live in production with a high number of defects. In the review meeting, one tester claims that the project manager did not allow enough time in her schedule for defect fixing. An action is taken by the project manager to allow more time for this activity in the next project.

Another tester on the project thinks about how this same problem could be their fault, using the test of reasonable opposites: the idea that for every proposition you come up with, there are contrasting explanations that are just as plausible. In this example the proposition is that the project goes live with a high number of defects because the project manager did not allow enough time in her schedule for defect fixing. A reasonable opposite may be that the project goes live with a high number of defects because the testers raised many minor problems that the business did not need to see resolved before release.

From a reasonable opposite we now have an action to consider: should the testers treat these minor problems differently in future? The tester is prompted to think about how their behaviour could have contributed to a perceived failure. Once you start imagining ownership, it becomes easier to take it, where appropriate.

As good testers start to claim problems and action change, we erode the position of bad testers who consider themselves above reproach. When we stop finger pointing, we stop enabling others to do so. To change the culture of our industry and expose those who hide among us, we need to be comfortable in accepting that sometimes the things that go wrong on a project are because of things that we did badly.

The test of reasonable opposites: a good tool for testers in a review meeting.

9 comments:

  1. I've taken the approach of accepting responsibility for my failures for years now, in the hopes that others will do the same. For me this has worked very well in startups, and in tech-focussed companies, and within smaller departments of larger companies. Basically any time I've worked with people whose main focus matches mine, which is to deliver great software.

    Accepting blame proactively in a large organisation where finger-pointing is the norm and office politics are out of control... That I do not recommend. If you find yourself in that type of environment, however briefly and for whatever reason, I suggest taking this "acceptance of blame and plan of action" to a team level or personal level only. I don't join in with the finger-pointers or try to dodge responsibility for my own actions and omissions, but I do come to meetings prepared to stand up for all of the good work done by myself and my team.

    I agree with everything you've written about thinking, considering, imagining, all of which sounds like an internal thought process. Your mention of retrospective meetings implies an agile environment where teams are allowed to fail and learn. Post-project review sounds to me like a euphemism for a witch hunt, which probably tells you something about the places I've worked in. It was that and your comment about enabling others to finger point that had me considering the different environments I've worked in, and how I've had to adjust my approach accordingly.

    I like this post because you got me thinking :)

    Replies
    1. Thanks for your comment Kim.

      Your first two paragraphs are interesting. Where do you think the zombie testers are hiding? In those small organisations, or in the culture of finger-pointing? If the latter, then going in to a meeting "prepared to stand up for all of the good work done" may be contributing to a CYA environment, however good your intentions.

      I'm not saying that a tester should take on everything as being their fault. I'm suggesting that they consider how each problem could have been caused by them, and claim action where appropriate. In a politically charged environment you may not want to boldly claim a failure in a review meeting, but it seems wrong to entirely ignore an opportunity to improve.

      As I said on Twitter, I just feel that testers are too fond of ensuring no blame is laid at our feet.

    2. I'm totally with you. I'm currently in one of those places where finger pointing and politics are the norm (at higher levels - my team itself is great). While proactively accepting blame might sound like a great & noble thing to do, that's like voluntarily putting your head on the chopping block when no one is even asking that of you. Not gonna happen! We just talk amongst ourselves about what we can do better next time.

  2. I'm probably too quick to feel I've failed if a bad bug gets out to production. But I'm not sure that taking the blame or assigning blame helps in any way. It's good to learn from mistakes. But I'd rather focus on how, as a development team (including testers), we can prevent defects. Maybe we could have thought of more test cases up front to help guide development? These are good things to learn, but IME they only help if the whole team takes responsibility to take steps to prevent similar failures in the future.

    Replies
    1. Thanks for your comment Lisa.

      I'm not sure what the difference is between learning from the mistake of a bad bug going to production vs. focusing on how we can prevent defects. Don't you think that the learning from the first is an essential input to the latter?

      While I don't disagree that preventing similar failures is easier with the whole team taking responsibility, it's certainly not the *only* way to effect change. A tester who takes responsibility and champions a different way of doing things may have the same success. A tester who throws up their hands because not everybody is in agreement is unlikely to change much at all.

  3. Nice post Katrina,

    I've re-written this about three times, so I'm over-thinking things... or maybe I shouldn't read blog posts after 10PM. I do have one question, though... how would you go about enticing the "zombie testers" to step forward and start claiming responsibility? Is this even possible? That appears to be the main problem. There are a few of us who will step forward and take the hit when things go bad (that we were involved with, of course...), but most will continue to stay in the shadows.

    It would be great to say "lead by example", but from personal experience, that doesn't seem to work. I'm a big fan of improving my "testing game" by looking at my mistakes, and the best way to start that process is to own up to them, but I can't say that this appears to be the same for many that I've worked with*.

    The "reasonable opposite" is a great question to ask. But I'm not so sure that it may lure our not-so-forward compatriots from the shadows. And I think I've gone off on a tangent anyway...

    * To be fair, I can't possibly know how and whether others take on the lessons learned from failure. But there doesn't appear to be a learning process occurring with them from my perspective.

    Replies
    1. Thanks for your comment Dean.

      I think there's a culture where testers escape the review process unscathed, which exists because of tester behaviour in these reviews. Certainly, initially, the good testers stepping forward and starting to accept responsibility where appropriate will do nothing to the zombies. But thinking longer term, suddenly testers are not above reproach. The excuses that we hide behind now are no longer accepted at face value by those around us. We give others in our team permission and confidence to question what testers are doing. They then shine their light into the shadows and ferret out the undead among us. I think this is a way to change our culture so that the fakers are exposed as such.

  4. I was thinking about this last night - I remember reading in M. Scott Peck how we build a mental "map" or model of the world, and as much as possible we like things to "fit into that map". We really don't like it when they don't - some people will rant and rave that the world isn't behaving to their map, others will say it's time to redraw the map.

    For many it's easier to rant, and to keep their map sacred. I think empirical thinkers are always re-evaluating the models they look at the world with, and that can only come from a sense of ownership. As I've said on Twitter, the big danger with anything like this is that it's always easy to over-react to issues as well. Personally, if a largish defect has gotten through, the question I always ask myself is "how did our imagination fail?". We run the tests where we can imagine something going wrong - there are so many permutations in any system that exhaustive testing isn't possible - so we choose the ones which are most representative, or most compelling as far as our experience and imagination target danger areas. When something gets through, we have to adjust our experience and learn to reimagine.

    I was writing about models of thinking the other week ...

    http://testsheepnz.blogspot.co.nz/2014/02/revolutionary-thinking.html

  5. Thank you for a great post. It got me contemplating my responsibilities as a tester.

    I can’t stop wondering how the culture has become this “not taking the blame” culture you mention. I recognize it from many years in the testing business, in both large and smaller companies.

    Could it be related to the core discipline of testing, being an investigation to inform other (important) people of a certain situation? We don't write the code, we don't design the interfaces, we don't plan release cycles. Our job is "simply" to provide a service so others can make good decisions. Could it be that accountability and responsibility suffer because testing is inherently an evaluation, an investigation?

    What if testers were given more responsibility and were held accountable for their work in the same way developers are held accountable, with e.g. code review, source control, and a focus on working software delivered often? I believe testers can often "hide" because developers or managers don't know what we are doing, but just as it's nice to have insurance should a storm come, it is nice to have testers to (maybe) prevent critical bugs reaching the customers.

    It is the manager who makes the call on the release date and the developer who has written the code, so I understand why the responsibility is placed on their shoulders, not that I necessarily agree with this. I believe that if testers were given more responsibility in a project they would take more responsibility - and fake testers would have a hard time keeping up appearances.
