Wednesday, 21 August 2013

Where to begin?

I recently exchanged a series of emails with an ex-colleague, who found them very helpful in starting to implement a different testing approach in her new organisation. She generously agreed to let me steal them for a blog post, as I thought they may also help others who aren't sure where to start. For context, this person comes from a formal testing background and is now wanting to implement a context-driven testing approach in an agile team that uses Scrum.


How do I use context-driven testing instead of structured formal testing? What tool do I use? How does this method fit into each sprint?


I'd recommend looking at the Heuristic Test Strategy Model, specifically pages 8-10 of this PDF (General Test Techniques, Project Environments, Product Elements). Using these three pages as a guide, I'd open up FreeMind (or similar) and create a mind map of everything that you think you could test based on this, if time were unlimited and there were seven of you! You'll find that there are a number of questions asked in the Heuristic Test Strategy Model that you just won't know the answers to yet. I'd include these in your mind map too, with a question mark icon next to them.
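To make that concrete, the top level of the map might look roughly like this sketch. The three branch names come straight from the model; the items underneath are only examples of the kind of thing you might capture, including the question-mark items you can't answer yet:

  Product test strategy
    General Test Techniques
      - function testing
      - domain testing
      - stress testing
      - ...
    Project Environments
      - [?] who are the customers of our testing?
      - [?] what tools and equipment do we have available?
      - ...
    Product Elements
      - structure
      - function
      - data
      - interfaces
      - ...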

Then you need to grab your Product Owner and anyone else with an interest in testing (perhaps an architect, project manager or business analyst, depending on your team). I'm not sure what your environment is like; usually I'd book an hour-long meeting to do this, print out my mind map on an A3 page and take it into a meeting room with sticky notes and pens. First tackle anything that you've left a question mark next to, so that you've fleshed out the entire model, then get them to prioritise the top five things that they want you to test out of everything that you could do.

Then you want to take all this information back to your desk and start processing it. I'd suggest that creating this huge mind map, having a meeting about it, and then deciding how to proceed, is at least the first day of a week-long sprint, or the first two days of a fortnight-long sprint.

Once you are comfortable that there's shared understanding between you, the product owner, and whoever else attended about what you will and won't be doing, then I'd start breaking up what you have to do into charters and using test sessions to complete the work; in agile there's really no need for scripted test cases. You can think of a charter like the one-line title you'd use to describe a test case (or group of test cases). It's the goal of what you want to test. Something like "Test that the address form won't allow invalid input". I'd encourage you to assign yourself time-boxed testing sessions where you test to one goal. You can record what you've tested in a session report.
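To give you a feel for it, a session report can be as lightweight as the sketch below. The headings are just one common layout for a session sheet, and the content is invented purely for illustration; adapt both to whatever your team finds useful.

  CHARTER:   Test that the address form won't allow invalid input
  TESTER:    <your name>
  DATE:      <date>        DURATION: 90 minutes (time-boxed)
  TEST NOTES:
    - tried empty mandatory fields, over-long values and unusual characters
    - checked the error message shown for each rejection
  BUGS:
    - postcode field accepts letters
  ISSUES:
    - couldn't reach the test database, so saving the address wasn't checked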

This probably all sounds totally foreign. This might help. I'd also definitely suggest reading this, and this.


Do you associate the user story with the identified features to be tested?


I usually keep my test structure similar to the application structure, so that for a user of the application the tests all look familiar. For example, my current application has three top-level navigation elements: Apples, Oranges and Pears. The test suite starts with the same three-way split.

I use mind maps to plan my testing in each space. So I have an Apples mind map, which has seven branches, one for each type of apple we have. Then, because those child branches grew too big, I have a separate mind map for each apple type where I actually scope the testing.
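So the planning structure ends up looking something like this (the apple varieties here are placeholders, not my real navigation):

  Apples.mm      - top-level scope, seven branches, one per apple type
    Braeburn.mm  - detailed test scope for that type
    RoyalGala.mm
    ...
  Oranges.mm
  Pears.mm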

When we have a new user story go through the board, I assess which parts of my mind maps could be altered or added to. Then I update the mind maps accordingly to illustrate where the testing effort will occur (at least, where I think it will!)

I don't formally tie the story and features to be tested together, as this is rarely a 1-1 relationship, and there's some administrative overhead in tracking all this stuff that I don't think is very useful.


Currently our product owner provides very high-level business requirements, then the team create many user stories from this that are put in the backlog. So once I prepare the mind map for what I can test based on the given requirement, I could take this to the product owner. Is that what you would do? When you use this approach, do you normally get a relatively clear list of requirements?


If the product owner isn't helping create the stories, then I would definitely be asking lots of questions to be sure that what your team have guessed they want is what they actually want. I'd suggest this might be a separate meeting from the "what would you like me to test?" conversation, though.

I think the first meeting is like "I can test that it works when users behave themselves. I can test that it handles input errors. I can test that network communications are secure. I can test that the record is successfully written to the backend database. I can test that a colour blind person can use this. What's important to you in this list?" and they might say "Just the first two" and you say "GREAT!" and cross out a whole bunch of stuff.

The second meeting is "Ok, you want me to test that is works when users behave themselves. Can we talk through what you think that means? So, if I added a record with a name and address, which are the only mandatory inputs, that would work?" and then the product owner might say "no, we need a phone number there too" and you start to flesh those things out. 

The second meeting is working from your test scope mind maps (in my case, Apples). The first meeting is working from a generic HTSM mind map (in my case, "what do you want me to do here?").

With this approach I usually do get a relatively clear list of requirements at the end of step 2. Then I also ask the BAs to review what I'm testing by looking at the mind maps and seeing if there are business areas I've missed.


How do we integrate this context-driven approach with automation or regression testing?


I use Concordion for my automated reporting, which is very flexible in what it allows you to include. I put mind map images into the results generated by running automation, e.g. the Apples mind map. I have little icons showing what, from all the things we talked about, has been included as automated checks, what I've tested manually, and what the team has decided is out of scope.

I find that in my team the Product Owner, Project Manager and BAs all go to the automated results when they want to know how testing is going. In order to show an overview of all testing in that single location, I pull all my mind maps in there too. I often find that the Product Owner and Project Manager don't drill down to the actual automated checks, they just look at the mind maps to get an idea of where we're at.
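If it helps to picture this, here's a minimal sketch of the sort of thing I mean; the file, class and method names are invented for illustration rather than lifted from my project. A Concordion specification is plain HTML, so a mind map exported from FreeMind as an image can sit right alongside the instrumented checks:

  <!-- Apples.html -->
  <html xmlns:concordion="http://www.concordion.org/2007/concordion">
  <body>
    <h1>Apples</h1>
    <!-- test scope mind map; icons on the map mark what is automated,
         what was tested manually and what is out of scope -->
    <img src="apples-mindmap.png" alt="Apples test scope mind map"/>
    <p>
      Adding an apple named <span concordion:set="#name">Braeburn</span>
      should <span concordion:assertTrue="addApple(#name)">succeed</span>.
    </p>
  </body>
  </html>

The matching JUnit fixture drives the application and feeds the result back into that HTML output:

  // ApplesTest.java
  import org.concordion.integration.junit4.ConcordionRunner;
  import org.junit.runner.RunWith;

  @RunWith(ConcordionRunner.class)
  public class ApplesTest {
      public boolean addApple(String name) {
          // call the real application here; returning true is just a placeholder
          return true;
      }
  }

Because the output is an ordinary web page, the stakeholders who only want the big picture can stop at the mind map image, while anyone who is interested can drill into the individual checks underneath it.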


When you are doing time-boxed testing (session-based?), do you record all the sessions? If so, do you normally attach the recorded session?


I don't record them with a screen recorder. I do record what I did in a document, using essentially the same structure as this.

Wednesday, 14 August 2013

Communicating Kindly

Starting to blog has made me realise that testing makes me a ranty person. These rants are generally released in cathartic blog format, but occasionally an innocent tester will fall victim to a spiel as they touch upon a frustration that others have created. Here's how I try to keep my conversations friendly and my ideas accessible to others.

Answer the question

If you've been asked a question that has been phrased in a way that makes you want to start by correcting the question itself, stop. Take a deep breath. Just answer the question. There's a time and a place for correcting misconceptions and it's not at the start of the journey. In all likelihood, the mistake in asking is due to ignorance that you could help correct, if you choose to answer the question in the first place.

Share your experience

We test in different ways, interacting with different applications, in a variety of project teams. Though you may need the context of a situation to give good advice, you could start with some general advice by providing the basics. Share what has worked in your experience. Offer links to the good content that lurks in the shadows of the internet. Sow the seeds for a beginning; give people something to start from.

It's not their fault

I have trigger words or phrases that really fire me up. I like to think this is a common foible. It's really important not to go into Hyde mode, because I don't think anything productive comes of it. Your audience tune out and label you as unhelpful. You end up feeling guilty for releasing the beast on those who don't deserve it.

Stay tuned

If someone is asking a question, you're unlikely to be able to resolve anything in a single conversation. A question is like an iceberg: you only see 10% of what the person wants to know. Do your best to start a dialogue where further questions are encouraged. Give yourself the opportunity to build a relationship, rather than attempting to solve every problem in one reply. The more sharing that occurs, the better the advice becomes.


What would you add?

Wednesday, 7 August 2013

Stupid Silos

Division is part of identity. Finding a group of people to belong to, who share distinctive characteristics or thinking, helps to shape our view of ourselves. But I get frustrated by the repercussions of how we group ourselves as testers.

Functional vs. Technical

A pet peeve of mine is the division creeping into the New Zealand testing community between a test analyst and a test engineer, where the latter is used to distinguish those who are capable of writing automated checks or using automated tools. It annoys me because I feel the industry is lowering its expectations of testers. A test analyst is given permission to be "non-technical". A test engineer is given permission to switch off their brain; no analysis required, just code what you're told to!

Why has this distinction arisen? Because organisations place higher value on the automated checks produced by an engineer than they do on the information available from an analyst. How is this possible? Because many engineers genuinely believe that their checks are testing and many analysts are producing terrible information. How depressing.

I wish we hadn't created this divide. A tester should know the value and the limits of an automated suite, be capable of test analysis and provide quality information to the business for their decision making. A tester should be encouraged to have a breadth of skill. The silos created by our titles are preventing these types of people from developing.

Attaching the blinders

Similarly, I am annoyed by job titles that add specificity about the type of test activity occurring: security analyst, performance engineer, usability analyst, etc. Certainly these activities require specialist skill. But in deploying someone into a project specifically to test one aspect of the application, you give them permission to ignore all others. Mandated inattentional blindness; count those passes!

All of these specialists will interact with the application in creative ways, which may result in execution of different functionality. If the tester is looking specifically at how fast the application responds or how usable the interface is, they may miss functional issues that their activities reveal. And even if they do see something strange, often these specialists are given little opportunity to understand how the application should behave, so they may not even understand that what they observe is a problem.

Barriers to collaboration

At CITCON in Sydney earlier this year, Jeff challenged us to think about whether we are truly collaborative in our agile teams; particularly across the traditional boundaries of developer, tester, operations and management. People rarely question the thinking of team members outside their role, because titles give people ownership of a particular aspect of the project. The challenge to establishing collaboration is in changing the professional identity of these people from the individual focus to the collective.

At the same conference, Rene shared a story about a team deployed to resolve high priority production issues. They were known as ‘the red team’ and included people from multiple professional disciplines; development, testing, etc. This team was focused on finding a solution and their collaboration was excellent. When asked about their role in the organisation, members of this team would proudly identify as being “on the red team”. This is a shift from how people usually identify themselves, by claiming a role within a group; “I’m a test analyst on Project Z”.

Testing is testing

Our challenge is to take the change in identity that occurs in this single focus, high pressure situation and apply it to our testing teams. By populating a team with people who have different skills, but asking them to think together, we can achieve better collaboration and create testers who have a broader skill set.

An engineer should not be permitted to build a fortress of checks that is indecipherable to other testers in the team. Analysts should be interrogating what is being coded, offering suggestions on what is included and demanding transparency in reporting. Similarly, an analyst should not be allowed to befuddle with analysis and bamboozle with metrics without an engineer calling their purpose and statistics to account. Performance, security and usability specialists will all benefit from exposure to a broader set of interactions with the application, so that they can help the team by identifying issues they observe that are outside their traditional remit. Challenging each other allows transfer of knowledge, making us all better testers.

Titles acknowledge that we are different and that we have strengths in a particular area. But we should all try to identify as being part of a testing team rather than claiming an individual role within it. We should not allow a label to prevent us from delivering a good testing outcome to our client. We should not allow our titles to silo our thinking.

Thursday, 1 August 2013

Metric Rhetoric

I meet very few testers who argue that there is value in metrics like the number of test cases executed as a measure of progress. The rhetoric for metrics such as these seems to have changed from defiant obliviousness to a disillusioned acceptance: "I know it's wrong, but I don't want to fight".

The pervasive view is that, no matter what, project management will insist upon metrics in test reporting. Testers look on in awe at reporting in other areas of the project, where conversations convey valuable information. We are right to feel sad when the reporting sought is a numeric summary.

But they want numbers...

Do they? Do your project management really want numbers? I hear often that testers have to report metrics because that is what is demanded by management; that this is how things are in the real world. I don't think this is true at all. Managers want the numbers because that's the only way they know to ask "How is testing going?". If you were to answer that question, as frequently as possible, making testing a transparent activity, much of the demand for numbers would disappear.

"Even if I convince my manager, they still need the numbers for reporting further up the chain". If your manager has a clear picture of where testing is at, then they can summarise this for an executive audience without resorting to numbers. Managers are capable communicators.

Meh. It's not doing any harm.

Every time that you give a manager a number that you know could be misleading or misinterpreted, you are part of the problem. You may argue that the manager never reads the numbers, that they always ask "How's it going?" as you hand them the report, that they'll copy and paste those digits to another set of people who don't really read the numbers either, so where's the harm in that? Your actions are the reason that I am constantly re-educating people about what test reporting looks like.

Snap out of it!

We need to move past disillusioned acceptance to the final stage and become determined activists. A tester is hired to deliver a service as a professional. We should retain ownership of how our service is reported. If your test reporting troubles your conscience, or if you don't believe those metrics are telling an accurate story, then change what you are doing. When asked to provide numbers, offer an alternative. You may be surprised at how little resistance you encounter.