Thursday, 28 May 2015

Dominos to illustrate communication in pair testing

I recently ran a one hour workshop to introduce pair testing to my team. I wanted to make the session interactive rather than theoretical; however, having done the research, I struggled to find any practical tips for training people in how to pair effectively. Having created something original to suit my purpose, I thought I would share my approach in case it is useful for others.

I coach a large team of 20 agile testers who are spread across several different teams, testing different applications and platforms. Though I wanted the workshop to be hands-on, the logistics of 10 pairs performing software testing against our real systems were simply too challenging. I needed to go low-tech, while still emulating the essence of what happens in a pair testing session.

So, what is the essence of pair testing? I spent several days thinking on this and, in the end, it wasn't until I bounced ideas around with a colleague that I realised. Communication.

Most people understand the theory of pairing immediately. Two people, one machine, sharing ideas and tackling a single task together. It's not a difficult concept. But the success of pairing hinges on the ability of those who are paired to communicate effectively with one another. How we speak to each other impacts both our enjoyment and our output.

With this goal in mind I started to research communication exercises, and found this:

Dominos

One of the listening skills activities that I do is that you have people get in groups of 2, you give one of them a pack of 8 dominos and the other a shape diagram of rectangles (dominos) in a random pattern. Only the person without the dominos should see the pattern. They sit back to back on the floor or the one with the dominos at a table and the other in a chair back to back. The one with the diagram instructs the other on placing the dominos to match the diagram. The one with the dominos cannot speak. They get 2 min. I usually do this in a big group where they are all working in pairs at once.
Then they switch roles, get a new pattern and do the exercise again, this time the person with the dominos is allowed to speak. 2 min. usually successful.
Then we debrief looking at challenges, jargon words used, analyze how they provided instructions without being able to watch the person, tone, questions asked, etc. ( I have this all in a document if you want it) It is quite fun and enlightening for those who are training to be able to be in a support role with technology.


Though it wasn't quite right for my workshop, this was an exercise for pairs that was interactive, communication-focused, and involved toys. I decided to adapt it for my purpose and use dominos to illustrate two different types of knowledge sharing -- "follow me" and "flashlight" -- that I hoped to see occur in our real-life pair testing sessions.

Follow Me

The workshop participants were placed in pairs. One person in the pair was given a packet of dominos and a diagram of 8 dominos in a pattern. They were given 2 minutes to arrange their dominos to match the diagram while their partner observed.

I asked each pair to push all their dominos back into a pile. The person who had arranged the dominos was asked to pick up the instruction diagram and hold it out of view of their partner. The person without the instructions was then given 2 minutes to repeat the same domino arrangement with limited assistance from their partner, who was forbidden from touching the dominos!

Though the person with the dominos had seen the puzzle completed and knew its broad shape, it was clear that they would need to talk to their partner and ask a lot of questions about the diagram in order to repeat the arrangement precisely. It was interesting to observe the different approaches; not every pair successfully completed the second part of this exercise within the 2 minute time frame.

After the exercise we had a short debrief. The participants noticed that:

  • pairs who talked more were able to complete the task quicker,
  • there were advantages to using non-verbal communication, particularly pointing and nodding, to help the person arranging the dominos, 
  • though it seemed easy when observing the task, attempting to repeat the same steps without the diagram was more challenging than people expected, 
  • it was frustrating for the person with the instructions to be unable to touch the dominos, and
  • keeping an encouraging tone when giving instructions helped people to focus on the task rather than feel stressed by the short deadline.


I felt that there were clear parallels between this activity and a pair testing scenario in which a tester is exploring a completely unfamiliar domain with guidance from a domain expert. I emphasised the importance of being honest when help is required, and of keeping up a constant dialogue when people are uncertain.

Flashlight

In the same pairs, one person was given a diagram of 8 dominos while the other was given a partial diagram that included only four. The person with access to only the smaller diagram was given 2 minutes to arrange the full set of 8 dominos.

Example of a full map of 8 dominos (left) next to a corresponding partial map of 4 dominos (right)

In this iteration the person who was arranging the dominos was given some understanding of what was required, but still needed to ask their partner for assistance to complete the entire puzzle. As previously, the person with the complete picture was not permitted to touch the dominos and kept their instructions hidden from their partner.

Again we had a short debrief. The participants felt that this exercise was much easier than the first. Because the person arranging the dominos brought their own knowledge to the task, almost every pair completed the arrangement within the 2 minutes.

As a facilitator I noticed that this little bit of extra knowledge changed the communication dynamics of some pairs quite dramatically. Instead of talking throughout, the observers remained silent as their partner completed the arrangement of the first four dominos. Only once the person with the dominos had completed the task to the extent of their abilities did they ask their pair for input.

The pairs who worked in this way were ultimately slower than their colleagues who kept talking to one another. One way that talking made things quicker was in eliminating double-handling of dominos -- "You'll need that one later".

After I shared this reflection with the group, the two people in each pair switched roles and, with new diagrams, repeated the activity. With the expectation set that communication should remain continuous, it seemed that the pairs worked quicker together. The second iteration was certainly noisier!

I felt that there were clear parallels between this activity and one in which a tester is exploring a domain where they have some familiarity but are not an expert. It's important to remember that there is always something to learn, or opportunities to discover the ways in which the maps of others differ from our own. This exercise illustrated how important it is to continue communicating even when we feel comfortable in our own skills.

I was happy with how the dominos activities highlighted some important communication concepts for effective pair testing. If you'd like to repeat this workshop in your own workplace, I would be happy to share my domino diagrams to save you some time; please get in touch.

Friday, 15 May 2015

Pair Testing

I'm currently working on defining a pair testing experiment to share testing knowledge across the agile teams within my organisation. What follows is my aggregated research on pair testing, which may be useful to others who are looking to implement pairing in their workplace.

Approach to pairing

Pair testing is a way of approaching a test design process by having two people test the same thing at the same time and place, continuously exchanging ideas. [1]

When paired, two people use a single machine or device. One has the keyboard, though it may pass back and forth in a session, while the other suggests ideas or tests, pays attention and takes notes, listens, asks questions, grabs reference material, etc. [2]

The pair should tackle a single testing task, so that they have a shared and specific goal in mind. Though the pair will work together, one person must own the responsibility for getting the task done. The person with ownership of the task may do some preparation, but pairing will be more successful if there is flexibility. A simple checklist or set of test ideas is likely to be a good starting point. [3]
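
To make that concrete, here is a purely hypothetical starting point for a one hour session (the goal and test ideas are invented for illustration, not taken from any of the references):

  Goal: explore the redesigned search results page for usability and data problems
  Test ideas: empty and very long queries, sorting and paging, behaviour on a mobile device, searches that should return no results
  Notes / questions / bugs: captured by the observer as the session runs

A list like this gives the pair a shared direction without scripting the session, leaving room for the ideas that emerge from their conversation.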

During a pairing session the testers should talk at least as much as they test so that there is shared understanding of what they're doing and, more importantly, why they are doing it. [4]

Benefits of pairing

These benefits have been taken from the listed references and grouped into three themes:

High creativity

Working in a pair forces each person to explain their ideas and react to the ideas of others. The simple process of phrasing ideas seems to bring them into better focus and naturally triggers more ideas. 

Applying the information and insight of two people to a problem reveals how easily a person working alone can fall victim to tunnel vision.

Pairing brings people into close enough contact to learn about each other and practice communicating and resolving problems.

The camaraderie and the running commentary about the process, necessarily maintained by the pair in order to coordinate their efforts, tends to increase the positive energy in the process. 

High productivity

Each person must stay focused on the task or risk letting their partner down. 

Pairing allows the person at the keyboard to follow their train of thought without pausing to take notes or locate reference information. It encourages dogged pursuit of insights.

Two people working together limits the willingness of others to interrupt them.

Training technique

A strong pairing is one where people are grouped so that each person's strengths complement the other's weaknesses. This presents an opportunity for people to learn from one another.

Pairing is a good way for novices to keep learning by testing with others. It's also useful for experienced testers who are new to a domain, as it helps them to quickly pick up business knowledge.

Experience Reports

A selection of referenced extracts from other blogs about pair testing that I found useful:


How do I know if I'm pairing or doing a demo? This is an important distinction to be aware of. If you're sitting with someone, and one of you is controlling all the conversation for the whole session, then you are not pairing.

Pairing is an interactive partnership. There is a certain level of inquiry and challenge, without feeling of threat or accusation. This is important - another party is essentially reviewing your testing as you perform it, and it's important to respond to these comments. Some of this feedback may suggest other tests, which if feasible you should attempt to include. Sometimes if your session has a goal, and the suggestion would take you off topic, you might want to leave it until the end or schedule another session.



Remember to narrate as you code. What are you thinking? Are you hunting for a file? What’s the test you’re writing now? Why that test? As I was coding, I was often silent. I knew what I was trying to do, but since the code was unfamiliar, I was spending a lot of time hunting. What I discovered was that my partner was feeling a bit useless because he felt he couldn’t contribute. As soon as he told me this, I started describing what I was trying to do and he was immediately able to start pointing me to sections of the code that he had fresh in his mind. 

As a tester, be sure to ask questions. It can be hard to ask questions that you think are dumb – especially when starting out. When I first started pairing as a tester, I felt reluctant to speak up because I didn’t want the programmer to feel like I was telling them how to do their job. I also didn’t want them to think I was stupid. I’ve not had any of the programmers I’ve worked with get defensive or treat me like an idiot. In fact, many things that I thought were stupid questions led to a discussion where we decided to use a different strategy than the one the programmer initially chose. 



If one or the other goes in with the idea that it is a one-way learning experience, the experience will fail. Pair testing is only effective in an environment of mutual respect and trust. 

Whoever is “driving” during pair testing must ensure that the other party is actively participating and understands what is going on. Encourage thinking and talking aloud, keeping the other person informed on the motivation behind your actions.



You have to trust them to light the way and they have to trust you to send them a signal the moment you are aware that you are over your head. A good pair will tell you that it’s ok and you will get back on track together. 

Trust, vulnerability and communication in this moment is the bedrock of pairing. It is also the bedrock of building great software. 
The Moment, Marlena Compton



I do think there are some times when it does make sense to pair test. For example, if you have a new hire who just doesn't know the system or how to test it, you might have him ride along to learn the system by testing it with a buddy. Likewise, if you are coaching someone new to testing, (or teaching an old dog new tricks), it might make sense to sit down and do real, serious, mission-important test work with two people at one keyboard for an extended period of time, say over an hour. Third, if you notice that you and a peer are finding different kinds of bugs, you might pair test just to learn about each other's testing styles -- to see how the other works, and to gain the skills to put on the 'hat' that finds that other category of defect.

Notice all of these situations are about learning.



... it removes so much fear of failure, which removes a blame culture ... If something gets missed, it’s not one person’s fault.



... testing together you will hit issues neither of you have hit [alone]. This is because you are both different testers and will test differently and so together you will try things neither have thought of. Also you will be able to track down more detail since you both have different ways to figure out the issue.
Pair Testing, QA Hipster



Increasingly, organizations are bringing people with visual challenges or other disabilities into their accessibility test effort, but these testers still work in silos. Pairing testers with disabilities with non-disabled testers yields valuable results.



Do you know of any other resources that might be useful to add to this list?

Saturday, 9 May 2015

Collecting domain-specific test heuristics

In any testing team there are varying degrees of knowledge about the application under test. One of the things on my mind recently has been how we transfer domain knowledge between testers with varied experience.

In my team, some of the testers have been with the organisation for a long time and hold a great deal of institutional knowledge about the systems and how to test them. They are capable of finding bugs that others in the team are not, because they have years of experience in the domain and often know exactly where to look for problems.

We also have a number of new people in the testing team, myself included, who don't yet know the intricacies of the business. Though we have strong testing ability, we haven't yet built our personal catalogue of domain-specific test heuristics that help us to find the type of bugs that are unique to this organisation.

The demographics of the team offer a strong opportunity for transfer of domain knowledge. When I map out the number of people in the team against their years of experience, there's a skyscraper of new staff and a long tail of domain experts, as shown below:




Having experienced some cumbersome knowledge sharing initiatives, I wanted to create something lightweight to transfer information across the team. Something that would be easy to use and remain relevant through the team taking ownership of regular updates.

I am a big fan of the format and content of Elisabeth Hendrickson's Test Heuristics Cheat Sheet as a means for capturing test heuristics. I decided that I would like to try to create a domain-specific test heuristics cheat sheet, using the same quick reference format, for a page in our wiki. Given my own lack of domain knowledge, and my desire for this to ultimately be owned by the team, I decided to kick this off by facilitating a workshop session for all our testers.

I scheduled one hour with the team. Prior to the session, I asked them to read my Heuristics and Oracles post, which includes simple definitions for each term and links to popular resources and articles.

We began the session with a few minutes of discussion so that those who did not have the opportunity to read the post could talk to the people around them about what it contained. This meant that everyone had some theoretical knowledge before we began, and it also gave me some time as the facilitator to finish arranging the resources required for our first exercise, the marshmallow challenge.

I decided to run a condensed version of the marshmallow challenge, which differed slightly from the version described on the official website. We split into four groups, with a time limit of 10 minutes, and each group had to create the tallest tower possible from the limited resources provided to them (bamboo skewers, tape, string and Blu-Tack). As is traditional, the tower had to support a single marshmallow on the top.

This was a fun activity to loosen people up at the start of the session. It also provided a non-technical opportunity to talk about how we think. Once the winner had been established, we ran a team brainstorming activity to identify the heuristics in use during the challenge.

Each person who contributed a heuristic to the list was given a marshmallow, as I had leftovers from the challenge activity. Perhaps due to this extra incentive, the group generated a great list of heuristics including:

  • Start with a stable base
  • Test early - don't wait to put the marshmallow on top
  • Support the structure with your hands as it is being built
  • Don't be scared to use the resources even though they are limited
  • Blu-Tack is the best connecting adhesive (better than tape or string)


At this point we were about halfway through the session. Everyone had a reasonable theoretical and practical understanding of heuristics in general. For the second half of the session, we switched to talking about test heuristics in our domain.

I ran a team brainstorming activity to generate test heuristics in eight categories. I had identified four functional areas that were common across our various front-end applications, then added the four different channels in which we deliver.

Each category had a large piece of paper, from a flip chart, with the words "How do I test this?" in the top left corner and the words "How do I know it's a bug?" in the bottom right. Alongside each piece of paper were some printed screenshots from the applications as a visual prompt for thinking.

I gave the team 20 minutes to collaboratively brainstorm the test heuristics and oracles in each category. As they ran out of ideas in one area, they were encouraged to switch location to add their thoughts in another.

In the final 10 minutes of the session we went around the room and whoever was sitting in front of a piece of paper read out what was written on it. Though this was a little dry after a very interactive workshop, it allowed everyone to hear what other people had identified, giving them some initial visibility of the type of information that our own domain-specific test heuristics cheat sheet would contain.

After the session, I took the sheets from the brainstorming exercise and transcribed them into our organisation wiki. I adopted the same format as Elisabeth, so our domain-specific test heuristics cheat sheet looks something like:




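For anyone reading without the screenshot, the wiki page keeps the same two prompts that appeared on the brainstorming sheets, grouped by category. As a purely illustrative, made-up entry (not one of our real heuristics), a category might read:

  Search
  How do I test this? Try an empty query, very long search terms, special characters, and the same query in each delivery channel.
  How do I know it's a bug? A record that is known to exist isn't returned, result counts differ between channels, or the page errors instead of showing an empty results message.

The real entries are, of course, specific to our own applications and channels.
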
I had some great feedback from the workshop session. I feel confident that everyone in the test team now has some understanding of what heuristics are and why it is useful for us to identify the testing heuristics that we use in our domain. Time will tell whether the cheat sheet is truly useful. The real test will be in how it evolves from this starting point. I know that we've barely scratched the surface of capturing all the domain knowledge in the team, but I hope we've taken a step in the right direction.