I recently exchanged a series of emails with an ex-colleague, who found them very helpful in starting to implement a different testing approach in her new organisation. She generously agreed to let me steal them for a blog post, as I thought they might also help others who aren't sure where to start. For context, this person comes from a formal testing background and is now wanting to implement a context-driven testing approach in an agile team that uses Scrum.
How do I use context-driven testing instead of structured formal testing? What tool do I use? How does this method fit into each sprint?
I'd recommend looking at the Heuristic Test Strategy Model, specifically pages 8–10 of this PDF (General Test Techniques, Project Environments, Product Elements). Using these three pages as a guide, I'd open up FreeMind (or similar) and create a mind map of everything that you think you could test, as if time were unlimited and there were seven of you! You'll find that there are a number of questions asked in the Heuristic Test Strategy Model that you just can't answer yet. I'd include these in your mind map too, with a question mark icon next to them.
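To make that concrete, here's a rough sketch of what such a starter map could look like as a FreeMind (.mm) file, generated with a few lines of Python. The branch names are paraphrased from the HTSM headings, the sub-items are placeholders of my own, and the file-format details (including the "help" question-mark icon) are based on FreeMind's XML layout, which may vary between versions:

```python
# Sketch only: build a starter FreeMind map with a branch per HTSM section,
# flagging unanswered questions with FreeMind's built-in "help" icon.
import xml.etree.ElementTree as ET

# Branch names paraphrased from the HTSM; the sub-items are invented examples.
SCOPE = {
    "Project Environments": ["Who are the customers?", "What equipment do we have?"],
    "Product Elements": ["Structure", "Function", "Data"],
    "General Test Techniques": ["Function testing", "Domain testing", "Stress testing"],
}

def build_map(root_text="Everything I could test"):
    mm = ET.Element("map", version="1.0.1")
    root = ET.SubElement(mm, "node", TEXT=root_text)
    for branch, items in SCOPE.items():
        parent = ET.SubElement(root, "node", TEXT=branch)
        for item in items:
            node = ET.SubElement(parent, "node", TEXT=item)
            if item.endswith("?"):  # an unanswered question for the team
                ET.SubElement(node, "icon", BUILTIN="help")  # question mark icon
    return ET.ElementTree(mm)

if __name__ == "__main__":
    build_map().write("test-scope.mm", encoding="UTF-8", xml_declaration=True)
```

In practice you'd just draw this directly in FreeMind, of course; the point is only that the map starts life as the HTSM skeleton with your unknowns clearly flagged.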
Then you need to grab your Product Owner and anyone else with an interest in testing (perhaps an architect, project manager or business analyst, depending on your team). I'm not sure what your environment is like; usually I'd book an hour-long meeting, print out my mind map on an A3 page, and take it into a meeting room with sticky notes and pens. First tackle anything that you've left a question mark next to, so that you've fleshed out the entire model, then get the group to prioritise the top five things they want you to test from everything that you could do.
Then you want to take all this information back to your desk and start processing it. I'd suggest that creating this huge mind map, having a meeting about it, and then deciding how to proceed is at least the first day of a week-long sprint, or the first two days of a fortnight-long sprint.
Once you are comfortable that there's shared understanding between you, the product owner, and whoever else attended about what you will and won't be doing, I'd start breaking up what you have to do into charters and using test sessions to complete the work; in agile there's really no need for scripted test cases. You can think of a charter like the one-line title you'd use to describe a test case (or group of test cases). It's the goal of what you want to test, something like "Test that the address form won't allow invalid input". I'd encourage you to assign yourself time-boxed testing sessions where you test to one goal. You can record what you've tested in a session report.
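A session report doesn't need to be elaborate. Here's a minimal sketch, loosely based on the session-based test management format; the headings are my own paraphrase rather than a standard, and the details are an invented example:

```
CHARTER
  Test that the address form won't allow invalid input.

SESSION
  Tester:   Jane (example name)
  Date:     2014-03-10
  Duration: 90 minutes (time-boxed)

NOTES
  - Tried empty mandatory fields: rejected with clear messages.
  - Pasted a 500-character string into the postcode field: accepted, raised a bug.

BUGS
  - Postcode field accepts arbitrary-length input.

ISSUES / QUESTIONS
  - Should a PO box be a valid address? Need to ask the Product Owner.
```

The charter keeps the session honest about its goal, and the issues section feeds the next conversation with the product owner.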
This probably all sounds totally foreign. This might help. I'd also definitely suggest reading this, and this.
Do you associate the user story to the identified features to be tested?
I usually keep my test structure similar to the application structure, so that for a user of the application the tests all look familiar. For example, my current application has three top-level navigation elements: Apples, Oranges and Pears. The test suite starts with the same three-way split.
I use mind maps to plan my testing in each space. So I have an Apples mind map with seven branches, one for each type of apple we have. Then, because those child branches grew too big, I have a separate mind map for each apple type where I actually scope the testing.
When we have a new user story go through the board, I assess which parts of my mind maps could be altered or added to. Then I update the mind maps accordingly to illustrate where the testing effort will occur (at least, where I think it will!).
I don't formally tie the story and features to be tested together, as this is rarely a 1-1 relationship, and there's some administrative overhead in tracking all this stuff that I don't think is very useful.
Currently our product owner provides very high-level business requirements, then the team create many user stories from these that are put in the backlog. So once I prepare the mind map for what I can test based on the given requirement, I could take this to the product owner. Is that what you would do? When you use this approach, do you normally get a relatively clear list of requirements?
If the product owner isn't helping create the stories, then I would definitely be asking lots of questions to be sure that what your team have guessed they want is what they actually want. I'd suggest this might be a separate meeting from the "what would you like me to test?" one, though.
I think the first meeting is like "I can test that it works when users behave themselves. I can test that it handles input errors. I can test that network communications are secure. I can test that the record is successfully written to the backend database. I can test that a colour blind person can use this. What's important to you in this list?" and they might say "Just the first two" and you say "GREAT!" and cross out a whole bunch of stuff.
The second meeting is "Ok, you want me to test that it works when users behave themselves. Can we talk through what you think that means? So, if I added a record with a name and address, which are the only mandatory inputs, that would work?" and then the product owner might say "no, we need a phone number there too" and you start to flesh those things out.
The second meeting is working from your test scope mind maps (in my case, Apples). The first meeting is working from a generic HTSM mind map (in my case, "what do you want me to do here?").
With this approach I usually do get a relatively clear list of requirements at the end of step 2. Then I also ask the BAs to review what I'm testing by looking at the mind maps and seeing if there are business areas I've missed.
How do we integrate this context-driven approach to automation or regression testing?
I use Concordion for my automated reporting, which is very flexible in what it allows you to include. I put mind map images into the results generated by running automation, e.g. the Apples mind map. I have little icons showing what, from all the things we talked about, has been included as automated checks, what I've tested manually, and what the team has decided is out of scope.
I find that in my team the Product Owner, Project Manager and BAs all go to the automated results when they want to know how testing is going. In order to show an overview of all testing in that single location, I pull all my mind maps in there too. I often find that the Product Owner and Project Manager don't drill down to the actual automated checks, they just look at the mind maps to get an idea of where we're at.
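To give a rough idea of how a mind map can double as a coverage summary, here's a sketch that tallies status icons in a FreeMind file. The icon-to-status mapping is an invented convention for this example, not something Concordion or FreeMind defines:

```python
# Sketch: summarise test scope by counting status icons in a FreeMind map.
# Which BUILTIN icon means automated / manual / out of scope is an
# arbitrary convention chosen for this example.
import xml.etree.ElementTree as ET
from collections import Counter

STATUS_ICONS = {
    "button_ok": "automated check",
    "pencil": "tested manually",
    "button_cancel": "out of scope",
}

def coverage_summary(mm_xml: str) -> Counter:
    """Count node statuses in a FreeMind map supplied as an XML string."""
    root = ET.fromstring(mm_xml)
    counts = Counter()
    for node in root.iter("node"):
        for icon in node.findall("icon"):
            status = STATUS_ICONS.get(icon.get("BUILTIN"))
            if status:
                counts[status] += 1
    return counts

# A tiny hypothetical Apples map with one status icon per child node.
example = """<map version="1.0.1">
  <node TEXT="Apples">
    <node TEXT="Add apple"><icon BUILTIN="button_ok"/></node>
    <node TEXT="Remove apple"><icon BUILTIN="pencil"/></node>
    <node TEXT="Print apple"><icon BUILTIN="button_cancel"/></node>
    <node TEXT="Rename apple"><icon BUILTIN="button_ok"/></node>
  </node>
</map>"""

print(coverage_summary(example))
```

A summary like this is roughly what stakeholders get at a glance from the icons on the embedded mind map images, without drilling into individual checks.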
When you are doing time-boxed testing (session based?), do you record all the sessions? If so, do you normally attach the recorded session?
I don't record them with a screen recorder. I do record what I did in a document, using essentially the same structure as this.