Wednesday 21 August 2013

Where to begin?

I recently exchanged a series of emails with an ex-colleague, who found them very helpful in starting to implement a different testing approach in her new organisation. She generously agreed to let me steal them for a blog post, as I thought they might also help others who aren't sure where to start. For context, she comes from a formal testing background and now wants to implement a context-driven testing approach in an agile team that uses Scrum.


How do I use context-driven testing instead of structured formal testing? What tool do I use? How does this method fit into each sprint?


I'd recommend looking at the Heuristic Test Strategy Model, specifically pages 8 to 10 of this PDF (General Test Techniques, Project Environments, Product Elements). Using these three pages as a guide, I'd open up FreeMind (or similar) and create a mind map of everything that you think you could test, as if time were unlimited and there were seven of you! You'll find that the Heuristic Test Strategy Model asks a number of questions that you just don't know the answers to. I'd include these in your mind map too, with a question mark icon next to them.
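
For illustration, FreeMind saves maps as a simple XML file (.mm), so the skeleton of such a mind map might look something like this minimal sketch. The node text here is just placeholder content drawn from the HTSM headings, and the "help" icon is FreeMind's built-in question mark:

```xml
<!-- test-scope.mm: a minimal HTSM-based FreeMind map (illustrative only).
     Nodes marked with the built-in "help" icon are open questions. -->
<map version="1.0.1">
  <node TEXT="Product test scope">
    <node TEXT="General Test Techniques">
      <node TEXT="Domain testing: boundaries and invalid input"/>
      <node TEXT="Stress testing: what load levels are expected?">
        <icon BUILTIN="help"/>
      </node>
    </node>
    <node TEXT="Project Environments">
      <node TEXT="Who are the customers of this project?">
        <icon BUILTIN="help"/>
      </node>
    </node>
    <node TEXT="Product Elements">
      <node TEXT="Structure: code, interfaces, hardware"/>
    </node>
  </node>
</map>
```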

Then you need to grab your Product Owner and anyone else with an interest in testing (perhaps an architect, project manager, or business analyst, depending on your team). I'm not sure what your environment is like; usually I'd book an hour-long meeting, print out my mind map on an A3 page, and take it into a meeting room with sticky notes and pens. First tackle anything that you've left a question mark next to, so that you've fleshed out the entire model, then get them to prioritise the top five things they want you to test out of everything that you could do.

Then you want to take all this information back to your desk and start processing it. I'd suggest that creating this huge mind map, having a meeting about it, and then deciding how to proceed takes at least the first day of a week-long sprint, or the first two days of a fortnight-long sprint.

Once you are comfortable that there's shared understanding between you, the product owner, and whoever else attended about what you will and won't be doing, I'd start breaking up what you have to do into charters and using test sessions to complete the work; in agile there's really no need for scripted test cases. You can think of a charter as the one-line title you'd use to describe a test case (or group of test cases). It's the goal of what you want to test, something like "Test that the address form won't allow invalid input". I'd encourage you to assign yourself time-boxed testing sessions where you test to one goal. You can record what you've tested in a session report.
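
To make that concrete, a session report might look something like the sketch below. This is loosely based on the session-based test management sheet format, and all of the content is illustrative:

```
CHARTER
  Test that the address form won't allow invalid input.

START:    21/08/2013 10:00am
DURATION: 90 minutes
TESTER:   (your name)

TEST NOTES
  Tried empty, oversized, and non-ASCII values in each field.
  Checked that mandatory fields are enforced on save.

BUGS
  Postcode field accepts alphabetic characters.

ISSUES
  Unsure what the maximum length of the address lines should be.
```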

This probably all sounds totally foreign. This might help. I'd also definitely suggest reading this, and this.


Do you associate the user story with the identified features to be tested?


I usually keep my test structure similar to the application structure, so that the tests all look familiar to a user of the application. For example, my current application has three top-level navigation elements: Apples, Oranges, and Pears. The test suite starts with the same three-way split.

I use mind maps to plan my testing in each space. So I have an Apples mind map with seven branches, one for each type of apple we have. Then, because those branches grew too big, I have a separate mind map for each apple type where I actually scope the testing.
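
So the planning artefacts end up organised something like this (file names illustrative):

```
test-scope/
  Apples.mm             seven branches, one per apple type
  Apples-Braeburn.mm    detailed test scope for one apple type
  Apples-Fuji.mm
  ...
  Oranges.mm
  Pears.mm
```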

When we have a new user story go through the board, I assess which parts of my mind maps could be altered or added to. Then I update the mind maps accordingly to illustrate where the testing effort will occur (at least, where I think it will!).

I don't formally tie the story and the features to be tested together, as this is rarely a one-to-one relationship, and there's an administrative overhead in tracking it all that I don't think is very useful.


Currently our product owner provides very high-level business requirements, then the team create many user stories from these that are put in the backlog. So once I prepare the mind map of what I can test based on the given requirement, I could take this to the product owner. Is that what you would do? When you use this approach, do you normally get a relatively clear list of requirements?


If the product owner isn't helping create the stories, then I would definitely be asking lots of questions to be sure that what your team have guessed they want is what they actually want. I'd suggest this might be a separate meeting from the "what would you like me to test?" meeting, though.

I think the first meeting is like "I can test that it works when users behave themselves. I can test that it handles input errors. I can test that network communications are secure. I can test that the record is successfully written to the backend database. I can test that a colour blind person can use this. What's important to you in this list?" and they might say "Just the first two" and you say "GREAT!" and cross out a whole bunch of stuff.

The second meeting is "Ok, you want me to test that is works when users behave themselves. Can we talk through what you think that means? So, if I added a record with a name and address, which are the only mandatory inputs, that would work?" and then the product owner might say "no, we need a phone number there too" and you start to flesh those things out. 

The second meeting works from your test scope mind maps (in my case, Apples). The first meeting works from a generic HTSM mind map (in my case, "what do you want me to do here?").

With this approach I usually do get a relatively clear list of requirements at the end of step 2. Then I also ask the BAs to review what I'm testing by looking at the mind maps and seeing if there are business areas I've missed.


How do we integrate this context-driven approach with automation or regression testing?


I use Concordion for my automated reporting, which is very flexible in what it allows you to include. I put mind map images into the results generated by running automation, e.g. the Apples mind map. I have little icons showing which of all the things we talked about have been included as automated checks, which I've tested manually, and which the team has decided are out of scope.
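
As a sketch of how this can look: a Concordion specification is just an HTML page, so the mind map image can sit alongside the executable examples. The file names and fixture below are hypothetical:

```html
<!-- Apples.html: a Concordion specification page (illustrative).
     The mind map image is embedded like any other HTML content. -->
<html xmlns:concordion="http://www.concordion.org/2007/concordion">
<body>
  <h1>Apples</h1>
  <img src="apples-mindmap.png" alt="Apples test scope mind map"/>
  <p>
    Adding an apple of variety
    <b concordion:set="#variety">Braeburn</b>
    <span concordion:assertEquals="addApple(#variety)">succeeds</span>.
  </p>
</body>
</html>
```

```java
// ApplesFixture.java: the JUnit 4 fixture backing the page above
// (a minimal sketch; a real fixture would call the application).
import org.concordion.integration.junit4.ConcordionRunner;
import org.junit.runner.RunWith;

@RunWith(ConcordionRunner.class)
public class ApplesFixture {
    public String addApple(String variety) {
        // Hard-coded for illustration; exercise the system under test here.
        return "succeeds";
    }
}
```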

I find that in my team the Product Owner, Project Manager and BAs all go to the automated results when they want to know how testing is going. In order to show an overview of all testing in that single location, I pull all my mind maps in there too. I often find that the Product Owner and Project Manager don't drill down to the actual automated checks, they just look at the mind maps to get an idea of where we're at.


When you are doing time-boxed testing (session based?), do you record all the sessions? If so, do you normally attach the recorded session?


I don't record them with a screen recorder. I do record what I did in a document, using essentially the same structure as this.

7 comments:

  1. This comment has been removed by the author.

  2. Oh Katrina.

    You have to deny the premise of that first question. Since when is Context-Driven Testing not formal and structured?

    All testing is structured. Most testing is at least somewhat formalized. Context-Driven doesn't mean that it somehow lacks structure. In the Context-Driven world we study and control our structures. The traditional approach is to do what you are told by faceless gurus. In the Context-Driven world we formalize whenever that is needed. Traditional testing promotes formalization for its own sake.

  3. Thanks for your comment James.

    I agree with you on content, but I don't think that I had anything to gain at that point in the conversation by challenging their language. The intent was still clear: they wanted to do things in a different way. I want to encourage that. I feel that picking at how people have asked can make them nervous about asking more questions in future. I wanted the rest of the conversation to happen.

    My previous blog post, which I wrote as a result of this conversation, touched on this topic: http://katrinatester.blogspot.co.nz/2013/08/communicating-kindly.html

  4. Hello Katrina,

    I'm with James on this one. You mention that there is such a thing as structured formal testing. Apparently, then, there is also such a thing as unstructured formal testing? Would it be possible to be more specific about this formal method? As background information, I think it is important to know what pains are caused by one testing methodology and are apparently fixed or eased by the other.

    Context-driven software testing is not the holy grail; it is a theory on how to do testing. It may be a theory that stands a better chance of surviving than other theories. As yet, I think our natural skepticism should tell us that the jury is still out on this one.

    Therefore, before advising people to start using, for example, the Heuristic Test Strategy Model, there should be a clear and apparent need in the client's situation to use heuristics, and some knowledge of what heuristics are and why they are used in thinking.

    I would like to ask a question about your approach to communicating testing. I have been in situations where bringing testing to the attention of the product owner, the architect, or anyone else, for that matter, was a difficult job, one that required care and precision. I am inclined to extrapolate from that that there are more contexts in which the architect's or project manager's knowledge or awareness of testing is, at the very least, very different from the tester's than contexts in which there is ample (and shared!) awareness and knowledge of testing. I think in the former set of contexts the display of an A3 page filled with testing ideas might be met with horror, apathy, or ridicule. I do not applaud any of these reactions, but I think they are within the range of reactions that we should take into account. How would you handle such a situation from a context-driven point of view?

    Kind regards,

    Joris

  5. Hi Joris,

    Thanks for your comment.

    I would argue that there's always a clear need to use heuristics, by the definition of heuristics as techniques for problem solving, learning, and discovery. I think that in any situation a starting point is required, and that's all the HTSM is: a point from which to start your own journey of learning within your project, and a helping hand for some questions to ask that you *may* find helpful.

    With regard to your communication question, I'm afraid I'm not speaking from experience in my reply. I've had business-oriented people who were more resistant to contributing than others, but never reactions that I would class as horror, apathy, or ridicule. I guess in that situation, regardless of the type of testing you're doing, it's important to educate these people about why you are asking questions and the impact their lack of co-operation will have.

  6. Hi Katrina

    Great article!

    One thing that is missing for me is how you help make sure the developers build it right the first time.

    For example, in the second meeting, the product owner decides "no, we need a phone number there too". How does this filter through to the devs?

    thanks
    Nigel

  7. Hi Nigel,

    Good point. I would normally invite the Lead Developer into that second type of meeting, to shorten the feedback loop.
