Recently I've been working with a group of testers on one of our online banking applications. They currently operate a monthly release cycle using a release process that takes about a week to complete. Most of that week is spent in manual release testing, which is a consistent source of frustration for the testers themselves and the people they work alongside.
My observation from a coaching perspective was that we had fallen into release testing theatre*. Our testers all had the script for every release. They dutifully played their parts and read their lines, but it all felt a bit empty. Unfortunately the playwright hadn't been evolving the play alongside other changes in our organisation. The testers were acting out a release process that no longer made much sense.
The testers all recognised a need to change what they were doing in the release. But instead of trying to edit what we already had, I wanted to question the rationale behind it.
Risk Appetite
I facilitated a workshop that was attended by all of the testers for the product, along with two of the delivery managers who have accountability for release testing sign off as part of our governance process.
I started the session by gauging the opinions of all the attendees about our current approach to release testing. I asked two questions that I adapted from The Risk Questionnaire by Adam Knight:
- Where do you think [product] currently stands in its typical level of rigour in release testing?
- Where do you think [product] should stand in its typical level of rigour in release testing?
I asked people to answer the questions by choosing a place to stand in the room: one wall represented low rigour and the opposite wall high. This gave a visual indicator of how people felt about the existing approach and the direction in which they felt we should be heading.
Interestingly, the testers and the delivery managers had quite different views, which was good to highlight and discuss early in the session.
Brainstorming Risk
Next I asked people to consider what risks we were addressing in our release testing, then write out one risk per post-it note. I emphasised that I wanted to focus on risk rather than activities. For example, instead of 'cross-browser testing' I would expect to see 'product may not work on different platforms'.
After five minutes of brainstorming, the attendees shared the risks that they had identified. As each risk was shared, other attendees identified where they held a duplicate risk. For example, when someone said 'product may not work on different platforms', we collected every post-it that said something similar and combined them into a single group.
We ended up with a list of 12 specific risks that spanned the broad categories of functionality, code merge, cross-browser compatibility, cross-platform compatibility, user experience, accessibility, security, performance, infrastructure, test data, confirmation bias and reputation.
Mitigating Risk
Between completion by a delivery team and release to our customers, the product is deployed through six different environments. The next activity was to determine where in the release process we would mitigate each of the risks that we'd collectively identified.
I stuck a label for each of our environments across the wall of the workshop room, creating column headings, then put the risk post-it notes into a backlog on the left. We worked through the backlog, discussing one risk at a time and moving it to the environment where it was best suited, or breaking the risk into parts that were mapped to separate environments if required.
The result was a matrix of environments and risk that looked like this:
Mapping risks to release environments
As you can see from the picture above, we realised that most of our risk was being mitigated early in our release process. As we get closer to the production environment, on the right-hand side of the visualisation, there are far fewer post-it notes.
Creating this mapping initially caused some confusion, as the testers were reluctant to say a risk had been mitigated at a particular point in the release process. Eventually I realised that there was a misunderstanding in terminology: I said mitigated, but they heard eliminated.
To explain the difference between mitigating and eliminating risk, I used an example from one of my volunteering roles as a Brownie Leader. In one of the lodges where we hold our overnight camps, the bathrooms are on a lower level, reached by a staircase. To mitigate the risk of a girl falling on the stairs at night, we leave the stairwell light switched on. This doesn't mean that nobody will ever fall on the stairs, but it significantly reduces the likelihood. The risk is mitigated but not eliminated.
Targeted Testing
At the conclusion of the workshop we hadn't talked specifically about test activities. However, the visual mapping of risks to environments raised a lot of questions for both the testers and the delivery managers about the validity of our existing release test process.
Having reached agreement with the delivery managers about the underlying purpose of each release environment, the testers reconvened in a later meeting to discuss how testing could mitigate the specific risks that had been identified. Again we did not reference the existing approach to release testing. Instead we collaboratively mapped out the scenes of a brand new play:
Brainstorming a new risk-based approach to release testing
Our new approach is very different to the old. It's less repetitive and quicker to execute. It's also truly a risk-based approach. The testers are excited about the possibilities in what we've agreed. I'm looking forward to seeing how it works too.
I also hope that our release testing for this product continues to evolve. This time around, all of the testers collaborated as playwrights and have shared ownership of the actions they will perform. As our organisation continues to change, we should continue to tweak our script to stay relevant. The alternative is a stale process that ends in empty pageantry.
* I'm not the first person to use the theatre analogy. Steve Smith wrote an article on a similar theme, titled Release testing is risk management theatre.