Tuesday, 22 April 2014

Test Strategy Retrospective

Once an agile team is established and has started delivering working software, a test strategy retrospective can help to determine whether everyone in the team knows what the test strategy is, and agrees on which aspects of the strategy have been implemented. 

Why do a test strategy retrospective?

When people talk about testing in agile they often refer to cross-functional teams. By definition a cross-functional team is a group of people with different functional expertise working toward a common goal [1].

Many agile practitioners include in their understanding of the term the idea that any individual within the team is capable of completing any task. They speak of resources becoming T-shaped, with a depth of skill in their specialist area and a breadth across disciplines other than their own [2][3]. As a simplified example, a tester may have depth of skill in testing with a breadth of skill across business analysis and development.



Although there is usually a specialist tester in a cross-functional team, they are not the only person doing testing. Instead, testing can be performed by anyone, which means that the quality of testing will vary depending on who is performing it.

Those from a development background may test that something works, often by creating an automated check, and consider testing complete. Those from the business are requirements-driven and may test only to confirm that their needs are met. Those who are not testers are generally less interested in thinking about the ways in which a function doesn't work or could be exploited, so testing becomes more confirmatory and less investigative.
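
To make the contrast concrete, here is a minimal sketch in Python; the function and the requirement are hypothetical. The automated check confirms the stated behaviour and nothing more, while the investigative questions remain unasked.

    # A confirmatory automated check: it verifies the requirement as written,
    # and nothing more. (apply_discount is a hypothetical example function.)
    def apply_discount(price, percent):
        """Return the price reduced by the given percentage."""
        return price * (1 - percent / 100)

    def test_ten_percent_discount():
        # Confirms the stated requirement: 10% off 100 gives 90.
        assert apply_discount(100, 10) == 90

    # Questions an investigative tester might still ask, none of which the
    # passing check above answers:
    # - What should happen with a negative percentage, or one above 100?
    # - Can rounding of fractional amounts be exploited across many orders?
    # - What if price arrives as None, a string, or an enormous number?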

It's apparent to a tester that a shift towards confirmation of requirements comes at the expense of other types of thinking. When faced with this eroding test coverage, the specialist tester has two options: alliance or surrender.

By alliance, the specialist tester implements practices that ensure critical thinking and interrogation of the application retain their place. They may institute peer review of the testing performed by non-specialist testers. They may adopt pair testing as a means of complementing the thinking of their colleagues.

By surrender, the specialist tester adopts the belief that testing is confirming that the requirements have been met. They may support automated checks as the primary means of testing an application. They may advocate for a minimum viable product, where the quality of the application is "good enough" for market and nothing more [4].

In either scenario, alliance or surrender, the specialist tester is making a conscious decision to alter the test strategy of the team. They are actively thinking about the trade-off in adopting one practice over another, the implications to test coverage and the impact on the overall quality of the product. But they are often thinking and deciding as an individual.

In a cross-functional team the performance of testing is considered open to all, yet strategic thought about testing is often not. This means that testers, in the loosest application of the word, may be adopting a practice without understanding why.

You may argue that the specialist tester is the only person in a cross-functional team with the ability to create a test strategy, given that testing is the area in which they have a depth of skill. I don't disagree, but counter that the method by which a strategy is decided and shared is important. A tester who fails to make their strategic decisions visible is adopting a high level of risk, taking ownership of choices that may not be theirs to make. And the benefits of a strategy are limited when the tester fails to communicate it to the team so that it is understood and widely adopted.

So, how can a tester determine whether their cross-functional team understands the test strategy that is in place and the decisions that underpin it? By leading a test strategy retrospective.

Creating a visualisation

A test strategy retrospective is designed to be engaging and interactive: to get people who are not testers to think about what types of testing are happening and why. It should take approximately one hour.

The specialist tester should lead this retrospective but not participate in creating the visualisation. This prevents the team from being led by the opinion of the tester, and ensures that others engage their brains.

To run a test strategy retrospective you will need post-it notes in four different colours and a large surface to apply them to. A large boardroom table is ideal, as it allows the team to gather around all four sides. A large, empty wall is a good second choice.

Start the retrospective by clearly stating its purpose to those gathered: to visualise your test strategy and check that there is a shared understanding in the team about what testing is happening.

Take two post-it notes, one labelled IDEA and the other labelled PRODUCTION. Place them at the top left and top right corners of the surface, creating a timeline that reflects the process of software development from an idea to a deployed application.




Within this timeline, different types of test activities can occur. Some of these activities will be part of the test strategy, and some will not. Ask each team member to think about the test activities that are happening in the project, and those that should be.

Allocate five minutes for each person to write a set of post-it notes, each naming one test activity. The colour of the post-it note shows whether the activity is part of the test strategy and, if so, whether it is being implemented.

In this example, purple, pink and yellow post-it notes are used to mean:

purple - an activity that is part of the test strategy and is being implemented
pink - an activity that is part of the test strategy but is not being implemented
yellow - an activity that is not part of the test strategy

Each individual should stick their post-it notes on to the timeline at the point they think the test activity will occur. At the end of the five minutes there should be a haphazard display of a large number of post-it notes.

Ask the team to collaboratively group activities with the same name, and agree on the placement of activities within the timeline. Where different names have been used to refer to the same concept, keep these items separate. Once the team are happy with their visualisation, or the conversation starts to circle, call a halt.
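
If it is useful to keep the finished board once the retrospective is over, the grouped post-it notes can be captured as simple records and tallied. Below is a minimal sketch in Python; the activity names, positions and tallying are illustrative assumptions, with the colours standing for the legend above.

    from collections import Counter

    # Each post-it as (activity name, timeline position, colour), where the
    # position runs from IDEA at 0.0 to PRODUCTION at 1.0. All entries here
    # are hypothetical examples.
    board = [
        ("unit checks", 0.2, "purple"),
        ("story acceptance", 0.5, "purple"),
        ("exploratory testing", 0.7, "yellow"),
        ("usability testing", 0.8, "yellow"),
        ("performance testing", 0.9, "pink"),
    ]

    # Tally colours on each half of the timeline. A cluster of yellow on the
    # right-hand side, for instance, suggests people want more investigative
    # testing close to release than the strategy currently includes.
    left = Counter(colour for _, position, colour in board if position < 0.5)
    right = Counter(colour for _, position, colour in board if position >= 0.5)
    print("left half:", dict(left))
    print("right half:", dict(right))

A record like this also gives the discussion that follows something durable to refer back to.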

An example of a test strategy retrospective visualisation is below.



Leading a discussion on strategy

If you've reached this point of the retrospective by killing a circular thread of conversation then that may be the first place to start a discussion. But there are a number of other questions to ask of this visualisation.

Are there groupings that include different coloured post-it notes? Why?

Have people used different terminology to refer to the same type of test activity? Why?

Why are there activities that are in the test strategy that aren't being implemented?

What are the activities that aren't in the strategy and should be? Do we want to include these?

Are there any activities that are missing from the visualisation altogether? What are they?

These questions not only uncover misunderstanding about the current state of testing, but they also surface the decisions that have been made in determining the strategy that is in place. The visualisation is a product of the whole team and they are invested in it, creating a catalyst for a deep discussion.

For example, the team above are in a surrender state: the test activities shown in purple are largely automated checking. This illustrates that testing is primarily confirmatory, with tools verifying that the requirements have been met. Yet, judging by the number of yellow post-it notes on the right-hand side of the timeline, a number of people in the team feel there should be more investigative testing. Who made the decision to focus on automation? It appears that this choice has not been widely publicised and agreed by the team as a whole. The retrospective offers an opportunity to discuss it.

In a cross-functional team where anyone can perform testing, it is important for there to be a shared understanding of both the practical approach to testing tasks and the underlying test strategy. By creating agreement about the type of test activities being performed, any person who picks up a testing task understands the wider context in which they operate. This helps them to make decisions about the boundaries of their task; what they should cover and what sits within another activity.

Tuesday, 15 April 2014

Evolution of a model

I've been meaning to share some examples of visual test coverage models using mind maps and how these may evolve over time. This example is from a group of testers working on a training project that I've blogged about previously. It is a tidy demonstration of the rapid learning that is common in software testing and clearly illustrates how a model changes over time.

The first version of the model was directed by what the Test Manager told the team was important to test. Though the team had access to the application, there was a limited testing time frame and they did not stray far from the functionality mentioned in their initial briefing. Test ideas within the model were grouped into sessions with a specific charter, which are represented as pale grey boxes.


The team working on this piece of functionality included three testers, who each executed a session. Every tester found at least one bug in their first session. They decided to reflect this in their model by colouring the completed sessions red.


Before executing their second session, the group re-evaluated. Their first set of sessions had exposed them to a wider variety of functionality than the Test Manager had mentioned. They questioned whether other aspects of the application under test were also important, and whether the focus of testing should change accordingly. As a result, the left-hand side of the model started to expand and, after discussion with the Test Manager, the priorities for their second set of sessions changed. To be clear about where their focus was, the team updated their model to show priority by adding numbering.

At the same time the wider project team adopted a common colour scheme for reporting, so that the Test Manager could expect models from different teams to adopt the same conventions. As a result, completed sessions were marked in green regardless of whether bugs had been discovered.


After the third set of sessions things changed again. The testers felt comfortable with the application, which had been new to them at the start of the process, and started questioning the Test Manager about the applicability of the originally proposed scope. As a result, the right-hand side of the diagram became significantly simpler.

In addition, scope added during exploration was refined into succinct, well-understood coverage. At the end of these sessions, testing against the left-hand side of the model was complete.


Once all of the scoped sessions had been completed, the model looked quite different to the initial version.


Though this project was slightly contrived for training purposes, I have observed the same rapid evolution of visual models in many client projects. By representing test ideas in a format that is easy to adapt, we make our testing more flexible and responsive to change.
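
As a small illustration of that flexibility, a coverage model can be held in a structure that is cheap to change. The sketch below, with hypothetical branch and session names, represents a mind map as nested Python dictionaries and shows scope being re-prioritised, completed and extended.

    # A visual coverage model as a nested structure: branches hold sessions,
    # and each session carries a priority and a status. All names here are
    # hypothetical, for illustration only.
    model = {
        "search": {
            "basic queries": {"priority": 1, "status": "complete"},
            "filters": {"priority": 2, "status": "planned"},
        },
        "checkout": {
            "payment": {"priority": 1, "status": "in progress"},
            "vouchers": {"priority": 3, "status": "planned"},
        },
    }

    # Each adaptation is a one-line change, which is what keeps the model
    # responsive as the team's understanding grows.
    model["checkout"]["vouchers"]["priority"] = 2      # re-prioritise
    model["search"]["filters"]["status"] = "complete"  # mark a session done
    model["accounts"] = {                              # add newly discovered scope
        "password reset": {"priority": 2, "status": "planned"},
    }

    for branch, sessions in model.items():
        for name, session in sessions.items():
            print(f"{branch} / {name}: priority {session['priority']}, {session['status']}")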


Wednesday, 9 April 2014

Test Manager in Agile

I was recently asked by a colleague "What is the role of a Test Manager in agile?". This was my response.

Test managers tend to be quite nervous about agile. As the focus of a testing team switches to collaboration on products and projects, rather than testing being an isolated phase or service, it may feel like the need for a test manager disappears. I think there is a place for Test Managers in agile, but their responsibilities and scope may look quite different from those in a waterfall environment.

Because testers should be communicating their progress directly within their project teams, providing their estimates as part of an agile methodology and using just-in-time test planning, there is no need for a Test Manager who acts as an intermediary or overseer at a project level. If a Test Manager is required in this capacity, it's a sign of dysfunction within the agile team.

Instead I see the Test Manager role as evolving to a higher-level position that includes:
  1. facilitation of inter-team communication across many agile projects within an organisation 
  2. presenting an aggregate view of testing utilisation to high-level management
  3. personal support, mentoring, and professional development for testers
  4. being an escalation point for testers
  5. budgeting or forecasting for testing as a service dependent on organisational process
I think the agile test manager operates at a programme level, with a view across many streams of work.

To those test managers who don't find this appealing: as a tester in an agile delivery team you operate with a higher degree of autonomy and freedom than in a traditional environment. You have ownership of test strategy and planning, are accountable for your own progress, and are responsible for reporting accurately to your team.

There is also an opportunity for test leadership, though not at the expense of hands-on testing. In a cross-skilled team, the agile tester must ensure that the quality of testing is good regardless of who in the team is completing it. The tester becomes the spokesperson for collaborative testing practices, and provides coaching via peer reviews or workshops to those without a testing background.