
Wednesday, 11 April 2018

Setting strategy in a Test Practice

Part of my role is figuring out where to lead testing within my organisation. When thinking strategically about testing I consider:

  • how testing is influenced by other activities and strategies in the organisation,
  • where our competitors and the wider industry are heading, and
  • what the testers believe to be important.

I prefer to seek answers to these questions collaboratively rather than independently and, having recently completed a round of consultation and reflection, I thought it was a good opportunity to share my approach.

Consultation

In late March I facilitated a series of sessions for the testers in my organisation. These were opt-in, small group discussions, each with a collection of testers from different parts of the organisation.

The sessions were promoted in early March with a clear agenda. I wanted people to understand exactly what I would ask so that they could prepare their thoughts prior to the session if they preferred. While the sessions were primarily an information gathering exercise, I also listed the outcomes for the testers in attending and explained how their feedback would be used.

***
AGENDA
Introductions
Round table introduction to share your name, your team, and whereabouts in the organisation you work

Transformation Talking Points
8 minute time-boxed discussion on each of the following questions:
  • What is your biggest challenge as a tester right now?
  • What has changed in your test approach over the past 12 months?
  • How are the expectations of your team and your stakeholders changing?
  • What worries you about how you are being asked to work differently in the future?
  • What have you learned in the past 12 months to prepare for the future?
  • How do you see our competitors and industry evolving?
  
If you attend, you have the opportunity to connect with testing colleagues outside your usual sphere, learn a little about the different delivery environments within <organisation>, discover how our testing challenges align or differ, understand what the future might look like in your team, and share your concerns about preparation for that future. The Test Coaches will be taking notes through the sessions to help shape the support that we provide over the next quarter and beyond.
***

The format worked well to keep the discussion flowing. The first three questions targeted current state and recent evolution, to focus testers on present and past. The second three questions targeted future thinking, to focus testers on their contribution to the continuous changes in our workplace and through the wider industry. Every session completed on time, though some groups had more of a challenge with the 8 minute limit than others!

Just over 50 testers opted to participate, which is roughly 50% of our Test Practice. There were volunteers from every delivery area that included testing, which meant that I could create the cross-team grouping within each session as I intended. There were ten sessions in total. Each ran with the same agenda, a maximum of six testers, and two Test Coaches.

Reflection

From ten hours of sessions with over 50 people, there were a lot of notes. The second phase of this exercise was turning this raw input into a set of themes. I used a series of questions to guide my actions and prompt targeted thinking.

What did I hear? I browsed all of the session notes and pulled out general themes. I needed to read through the notes several times to feel comfortable that I had included every piece of feedback in the summary.

What did I hear that I can contribute to? With open questions, you create an opportunity for open answers. I filtered the themes to those relevant to action from the Test Practice, and removed anything that I felt was beyond the boundaries of our responsibilities or that we were unable to influence.

What didn't I hear? The Test Coaches regularly seek feedback from the testers through one-on-one sessions or surveys. I was curious about what had come up in previous rounds of feedback that wasn't heard in this round. An absence could reflect success in our past activities, or indicate changes elsewhere in the organisation that should influence our strategy too.

How did the audience skew this? Because the sessions were opt-in, I used my map of the entire Test Practice to consider whose views were missing from the aggregated summary. There were particular teams who were represented by individuals and, in some instances, we may seek a broader set of opinions from that area. I'd also like to seek non-tester opinion, as in parts of the organisation there is shared ownership of quality and testing by non-testers that makes a wider view important.

How did the questions skew this? You only get answers to the questions that you ask. I attempted to consider what I didn't hear by asking only these six questions, but I found that the other Test Coaches, who didn't write the questions themselves, were much better at identifying the gaps.

From this reflection I ended up with a set of about ten themes that will influence our strategy. The themes will be present in the short-term outcomes that we seek to achieve, and the long-term vision that we are aiming towards. The volume of feedback against each theme, along with work currently in progress, will influence how our work is prioritised.

I found this whole process energising. It refreshed my perspective and reset my focus as a leader. I'm looking forward to developing clear actions with the Test Coach team and seeing more changes across the Test Practice as a result.

Sunday, 17 September 2017

How to start a Test Coach role

I received an email this morning that said, in part:

I've been given the opportunity to trial a test coaching approach in my current employer (6-7 teams of 4-5 devs). 

I had a meeting with the head of engineering and she wants me to act almost like a test consultant in that I'm hands off. She expects me to be able to create a system whereby I ask the teams a set of questions that uncover their core testing problems. She's also looking for key metrics that we can use to measure success.

My question is whether you have a set of questions or approach that allows teams to uncover their biggest testing problems? Can you suggest reading material or an approach?

On this prompt, here is my approach to starting out as a Test Coach.

Avoid Assessment

A test coach role is usually created by an organisation who are seeking to address a perceived problem. It may be that the testers are slower to respond to change, or that testers are less willing to engage in professional development, or that the delivery team does not include a tester and the test coach is there to introduce testing in other disciplines. 

Generally, the role is created by a manager who sits beyond the delivery teams themselves. They have judged that there is something missing. I think it is a bad idea to start a test coach role with a survey of testing practices intended to quantify that judgement. You might represent a management solution to a particular problem that does not exist in the eyes of the team. 

Your first interaction will set the tone of future interactions. If you begin by asking people to complete a survey or checklist, you pitch your role as an outsider. Assessments are generally a way to claim power and hierarchy, neither of which will benefit a test coach. You want to work alongside the team as a support person, not over them.

Assessment can also be dangerous when you enter the role with assumptions of what your first actions will be. If you think you know where you need to start, it can be easy to interpret the results of an assessment so that it supports your own bias.

But if not by assessment, how do you begin?

Initiation Interviews

Talk to people. One-on-one. Give them an hour of your time and really listen to what they have to say. I try to run a standard set of questions for these conversations, to give them a bit of structure, but they are not an assessment. Example questions might include:

  • Whereabouts do you fit in the organisation and how long have you been working here?
  • Can you tell me a bit about what you do day-to-day?
  • What opportunities for improvement do you see?
  • What expectations do you have for the coaching role? How do you think I might help you?
  • What would you like to learn in the next 12 months?

I don't ask many questions in the hour that I spend with a person. Mostly I listen and take notes. I focus on staying present in the conversation, as my brain can tend to wander. I try to keep an open mind to what I am hearing, and avoid judgement in what I ask.

In this conversation I consciously try to adopt a position of ignorance, even if I think that I might know what a person does, what improvements they should be targeting, or where they should focus their own development. I start the conversation with a blank slate. Some people have said "This is silly, you know what I do!", to which I say "Let's pretend that I don't". This approach almost always leads me to new insights.

This is obviously a lot slower than sending out a bulk survey and asking people to respond. However, it gives you the opportunity as a coach to do several important things. You demonstrate to the people that you'll be working with that you genuinely want their input and will take the time to properly understand it. You start individual working relationships by establishing an open and supportive dialogue. And you give yourself an opportunity to challenge your own assumptions about why you've been brought into the test coach role.

Then how do you figure out where to start?

Finding Focus

When my role changed earlier in the year, I completed 40 one-on-one sessions. This generated a lot of notes from a lot of conversations, and initially the information felt a little overwhelming. However, when I focused on the opportunities for improvement that people spoke about, I started to identify themes.

I grouped the one-on-one discussions by department, then created themed improvement backlogs for each area. Each theme included a list of anonymous quotes from the conversations that I had, which together gave a rounded picture of the opportunities for improvement that the team could see.
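As a rough illustration of the structure, here is one way the grouping could be scripted in Python; the departments, themes, and quotes below are invented:

    from collections import defaultdict

    # Each record from a one-on-one: (department, theme, anonymous quote).
    # All values here are invented for illustration.
    notes = [
        ("Payments", "Test environments", "Our test data is stale and hard to refresh."),
        ("Payments", "Automation", "We only automate after release, never during."),
        ("Channels", "Automation", "Nobody maintains the regression pack any more."),
    ]

    # Build a themed improvement backlog per department.
    backlogs = defaultdict(lambda: defaultdict(list))
    for department, theme, quote in notes:
        backlogs[department][theme].append(quote)

    for department, themes in backlogs.items():
        print(department)
        for theme, quotes in themes.items():
            print(f"  {theme} ({len(quotes)} mention(s))")
            for quote in quotes:
                print(f'    - "{quote}"')

The count of quotes per theme is a handy proxy for how widely felt an opportunity is, which later helps with prioritisation.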

I shared these documents back with the teams so that they had visibility of how I was presenting their ideas, then went to have conversations with the management of each area to prioritise the work that had been raised.

What I like about this approach is that I didn't have to uncover the problem myself. There was no detective work. I simply asked the team what the problems were, but instead of framing it negatively I framed it positively. What opportunities for improvement do you see? Generally people are aware of what could be changed, even when they lack the time or skills to drive that change themselves.

Then asking for management to prioritise the work allows them to influence direction, but without an open remit. Instead of asking "What should I do?", I like to ask "What should I do first?".

Measuring Success

The final part of the question I received this morning was about determining success of the test coach role. As with most measures of complex systems, this can be difficult.

I approach this by flipping the premise. I don't want to measure success, I want to celebrate it.

If you see something improve, I think part of the test coach role is to make sure that the people who worked towards that improvement are being recognised. If an individual steps beyond their comfort zone, call it out. If a team have collectively adopted and embedded a new practice, acknowledge it.

Make success visible.

I believe that people want to measure success when they are unable to see the impact of an initiative. As a test coach, your success is in the success of others. Take time to reflect on where these successes are happening and celebrate them appropriately.


That's my approach to starting in a test coach role. Avoiding assessment activities. Interviewing individuals to understand their ideas. Finding focus in their responses, with prioritisation from management. Celebrating success as we work on improvements together.

Wednesday, 5 April 2017

Test Coaching Competency Framework

A few weeks ago I was wondering how to explain the skills required for a Test Coach role. Toby Sinclair introduced me to Lyssa Adkins' Agile Coaching Competency Framework. It's a simple diagram of the core experience and skills that an agile coach might have:

Ref: Hiring or Growing Right Agile Coach by Lyssa Adkins and Michael Spayd

An individual can take this diagram and complete a relative assessment of themselves or others by shading in each slice, for example:

Ref: Hiring or Growing Right Agile Coach by Lyssa Adkins and Michael Spayd

In a presentation about the framework, Lyssa Adkins and Michael Spayd explain how it can be used as a self-assessment tool, for hiring and developing agile coaches, or to help communities of agile coaches to grow.

I like the simplicity of the framework and could see an opportunity to apply it to the testing domain to explain test coaching. As a small disclaimer, I have never heard Lyssa and Michael explain their framework in person, so I'm not entirely sure how true my interpretation is to their original intent.

Test Coaching Competency Framework

The test version of the framework is almost identical to the original:


Starting at the top, testing practitioner reflects the relevant experience that a person has. Different people will have different interpretations of what this means. Shading of this slice might reflect number of years in industry, number of organisations, variety of roles, etc.

If occupying a role is reflected at the top of the framework, the knowledge that you build through that experience is reflected at the bottom as technical, business, and transformation mastery. In simple terms, shading of the practitioner slice shows what you've done, while shading in the mastery slices shows what you've learned as a result.

Technical mastery in testing might include:
  • test automation frameworks
  • coding skills
  • continuous integration and delivery tools
  • development practices e.g. source control, code review
  • basic understanding of solution architecture
Business mastery in testing might include:
  • shift-left testing skills e.g. Lean UX, BDD
  • user experience, usability, and accessibility testing
  • targeted exploratory testing and development of test strategy
  • business intelligence and analytics
  • relevant domain knowledge
Transformation mastery in testing might include being a leader or participant in:
  • switching test approach e.g. waterfall to agile
  • continuous improvement in development practices
  • engaging beyond the development team e.g. DevOps
  • change management

On the left of the framework are the skills that are important to communicate mastery to others: training and mentoring. The distinction is in the size of the audience. Training is prepared material that is delivered to a group of people. Mentoring is transfer of knowledge from one individual to another. Both training and mentoring require the coach to have knowledge of the topic that they're helping someone with.

On the right of the framework are the skills that are important to enable others to find their own solutions: facilitating and coaching. The same distinction in size is present: facilitating is to a group of people and coaching is between individuals. A person conducting these activities is guiding a group or an individual towards the discovery of their own solutions. Skill in facilitation and coaching does not come from personal mastery of a topic, but instead in developing the ability to draw the best from people through questioning.
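The shaded diagram is easy to draw by hand, but if you prefer a digital version, a polar bar chart gives a similar picture. A minimal sketch in Python with matplotlib, using invented shading values (0 = unshaded, 1 = fully shaded):

    import numpy as np
    import matplotlib.pyplot as plt

    # The eight slices of the framework; the shading values are
    # invented for illustration.
    slices = ["Testing practitioner", "Training", "Mentoring", "Facilitating",
              "Coaching", "Technical", "Business", "Transformation"]
    shading = [0.9, 0.6, 0.7, 0.4, 0.5, 0.8, 0.6, 0.3]

    angles = np.linspace(0, 2 * np.pi, len(slices), endpoint=False)
    ax = plt.subplot(projection="polar")
    ax.bar(angles, shading, width=2 * np.pi / len(slices),
           edgecolor="black", alpha=0.6)
    ax.set_xticks(angles)
    ax.set_xticklabels(slices, fontsize=8)
    ax.set_yticklabels([])  # hide the radial scale; shading is relative
    plt.savefig("competency_wheel.png")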

Using the framework

I needed to be able to explain the skills of a Test Coach in order to recruit. I have used my version of this framework as an activity in eight Test Coach interviews so far. I draw the framework on a piece of paper, explain what is included in each slice as above, then ask the candidate to assess themselves and talk about their decisions.

Here's an example of how one candidate responded:



I've found the test coaching competency framework is an excellent alternative to the general prompt of "tell us a bit about yourself". I use it at the start of the interview to give more structure to the opening conversation and clearly illustrate what I'm interested in hearing from the candidate. 

I've found that it empowers people to talk more freely about both their achievements and opportunities for growth. I like that it gives control of the conversation to the candidate for a period of time rather than asking them to respond to specific questions for the entire interview.

What do I do with the results? I haven't been seeking an individual who is strong in every area, but I am looking to build a team of coaches with complementary skills and experiences. The shaded frameworks are helpful in determining how a group of individuals might become a team. Once a team is formed, I imagine that this information could help guide initial allocation of work and allow me to identify some opportunities for cross-training.

I have found the test coaching competency framework useful and would recommend it to others who are looking to build test coaching teams.

Sunday, 19 February 2017

Test Manager vs. Test Coach

I've been working as a Test Coach for almost two years and I will soon be seeking to hire people into the role. One common attitude that I hear is:

"Test Coach. Oh, that's just the agile name for a Test Manager."

It's true that if you work in an organisation that uses a traditional delivery model with separate design, analysis, development and testing teams, then you are more likely to see the title Test Manager for the person who leads testing. If you work in an agile model with cross-functional teams, then you are more likely to see the title Test Coach for the person who leads testing. But the two roles are not synonymous; it isn't simply a rebranding exercise.

Here are some key differences from my experience.

A Test Manager has a team of testers who report directly to them. They are responsible for hiring the team, determining and recording individual development goals, approving time sheets and leave requests.

A Test Coach has no line management responsibilities. The testers will be part of a cross-functional team that is managed by a team leader or delivery manager. A coach has a limited ability to lead through authority, instead their role is a service position. The Test Coach can influence hiring decisions and support testers to achieve their individual development goals. The language of coaching is different and it requires a different approach.

A Test Manager leads a group of testers who primarily identify as testers. If asked what they do, the tester's response would be that they're a tester. The testing community is inherent, created by co-location and day-to-day interaction with testing peers.

A Test Coach leads a group of testers who primarily identify as contributors to an agile team. If asked what they do, the tester's response would be that they're part of a particular delivery team. They might mention that their main role in that team is testing, but to identify their place in the organisation it's often the delivery team that is mentioned first. The testing community must be fostered through planned social and knowledge sharing activities for testers who work in different areas, which are often activities led or championed by the coach.

A Test Manager has ownership of the testing that their team undertake. They are likely to be accountable for test estimates, test resourcing, the quality of test documentation, and may be involved in release governance or sign-off procedures. 

A Test Coach has none of these responsibilities. In an agile environment these are owned by the delivery team, who estimate together, review each other's work, and collectively determine their readiness for release. The decision making sits outside of the Test Coach role, though they might be called on for counsel in the event of team uncertainty, disagreement, or dysfunction.

A Test Manager drives their testers. They're active participants in their day-to-day work, with hands-on involvement in tracking and reporting testing activities.

A Test Coach serves their testers. They usually won't get involved in specific testing activities unless they are asked to do so. Coaching interactions are driven by the person who needs support with testing, which is a wider group than only testers. The coach is proactive if they identify a particular need for improvement, but their intervention may be with a softer approach than that of a manager.

The Test Manager will know the solution under test inside-out. In order to properly meet their accountabilities they need to be involved in some degree of detail with the design and build of the software. Test Managers are also adept in identifying opportunities for improvement within the processes and practices of their team, or the products that they work with.

The Test Coach is unlikely to be an expert in the product under development or the wider system architecture. They will have some knowledge of these aspects, but as they are removed from the day-to-day detail their understanding is likely to be shallower than the testers who are constantly interacting with the system. A coach generally has a more holistic view for identifying opportunities for improvement that span multiple teams and disciplines.

A Test Manager is the escalation point for testers. Regardless of the problem that a tester is unable to resolve, the Test Manager is the person who will support them. The issues may span administrative tasks, interpersonal communication, professional development, delivery practices, project management, or testing.

A Test Coach is an escalation point for testing-related problems only. The types of issues that come to a coach are generally those that impact multiple delivery teams e.g. refactoring of test automation frameworks or stability of test environments, or those within a team that require specialist testing input to solve e.g. improving the unit test review process or fostering a culture where quality is owned by the team not the tester. These issues may not be raised by a tester, but can come from anyone within the delivery teams, or beyond them. A Test Coach might also be asked to contribute to the resolution of a non-testing problem, but these discussions are usually led by another role.

A Test Manager will identify training opportunities that are aligned with the development goals of their staff and arrange their attendance. They will be abreast of workshops and conferences in the area that may be useful to their team.

A Test Coach will do the same, but they are more likely to identify opportunities to deliver custom training material too. The coach has the capacity to create content, the knowledge to make the material valuable, along with some understanding of teaching to engage participants effectively e.g. learning styles, lesson planning, and facilitation.

Both roles are leading testing in their organisation. The roles are different because the context in which they operate is different. In a nutshell, a Test Manager leads testers and a Test Coach leads testing. The focus shifts from people to discipline.



I hope that this explanation offers clarity, both for leaders who are looking to change their role and for testers who are working within a different structure.

These observations are based on my own experience. If your experience differs, I would welcome your feedback, questions or additions in the comments below.

Friday, 25 November 2016

Finding the vibe of a dispersed team

Recently there has been an unexpected change in my work environment. Just after midnight on the 14th of November, an earthquake with a magnitude of 7.8 struck New Zealand. The earthquake caused significant damage across the upper South Island and lower North Island, including in Wellington where I am based. My work building is currently closed due to earthquake damage.

I work with over 30 testers who are spread across 18 delivery teams. In a co-located environment that's a challenging number of people to juggle. Now that everyone is working from home, there are new obstacles in trying to lead, support and coach the testers that I work with.

In the past fortnight I've been doing a lot of reading about distributed teams. Though some of the advice is relevant, most of it doesn't apply to our situation. We're not distributed in the traditional sense, across multiple cities, countries and timezones. Though we're set up for remote work, it hasn't been our go-to model. We're still close enough that relatively regular face-to-face meetings are possible.

Instead of distributed, I've started to think of us as dispersed.

The biggest challenge so far, in our first fortnight as a dispersed team, has been in determining the vibe of the testing community. The vibe of the team is the atmosphere they create: what is communicated to and felt by others. The vibe comes from the feelings of the individuals within the team.

In a co-located environment, there are a lot of opportunities to determine the vibe. The most obvious is our weekly Testing Afternoon Tea. This is a purely social gathering every Tuesday afternoon at 3pm. We have a roster for who provides the afternoon tea, all of the testers meet in the kitchen area, and spend around 15 minutes catching up. The meeting is unstructured, the conversations are serendipitous.

When everyone turns up to afternoon tea, stays for the entire 15 minutes, and there is a hum of conversation, the vibe of the team feels happy and relaxed. When it is difficult to detach people from their desks, people grab food then leave, and the conversations are mostly cathartic, the vibe of the team feels stressed and frustrated. Often, there's a mixture of both.

But even when the testing team are not together, I am reading the vibe from our co-location. For example, I'll often wander the floor in the morning when stand ups are happening. I look at how many people from outside the team are attending. When I spot multiple delivery managers and product owners with a single team, that may be a sign that the team is under pressure or suffering dysfunction. If it seems like the testers are not contributing, or they have closed body language, that may be a sign of frustration or despondence.

The vibe helps me determine where to focus my attention. It's important to be able to offer timely support to the people who need it, even if they may not think to ask. It's important to determine whether it's an appropriate time to think about formal learning, or if it's better to give people space to focus on their delivery demands. It's important to recognise when to push people and when to leave them alone.

Facing the reality of coaching a dispersed team feels a little bit like being blindfolded. The lack of co-location has removed one of my senses. How do I find the vibe of a dispersed team?

I find working at home quite isolating, so the first action I took was to try and reduce the feeling of being far away from everybody else. Though our communication is now primarily through online channels, we are only dispersed and not distributed.

At the start of this week, I asked the testers to check in and tell me which suburb of the city they were working from and whether they had all the equipment they needed to work effectively. Through the morning I received responses that I used to create a map of our locations. We are now spread across an area of approximately 600 square kilometres or 230 square miles:

Working locations of testers before and after the earthquake

The information in the map is specific enough to be useful but general enough to be widely shared. Markers are by suburb, not personal address, and are labelled by first name only. Tribe groupings are shown by colour and in separate layers that can be toggled, so that it's possible to see, for example, where all our mobile testers are located.
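If you want to build something similar, one option is the Python folium library. A minimal sketch, with invented names, suburbs, tribes, and coordinates:

    import folium

    # First names, suburbs, coordinates, and tribe groupings are
    # invented for illustration.
    testers = [
        ("Alex", "Karori", -41.285, 174.734, "Mobile", "blue"),
        ("Sam", "Lower Hutt", -41.209, 174.908, "Payments", "red"),
        ("Ria", "Porirua", -41.133, 174.850, "Mobile", "blue"),
    ]

    wellington = folium.Map(location=[-41.25, 174.85], zoom_start=10)

    # One toggleable layer per tribe, markers placed by suburb only.
    layers = {}
    for name, suburb, lat, lon, tribe, colour in testers:
        layer = layers.setdefault(tribe, folium.FeatureGroup(name=tribe))
        folium.Marker(
            location=[lat, lon],
            tooltip=f"{name} ({suburb})",
            icon=folium.Icon(color=colour),
        ).add_to(layer)

    for layer in layers.values():
        layer.add_to(wellington)
    folium.LayerControl().add_to(wellington)  # toggle tribes on and off
    wellington.save("dispersed_testers.html")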

Creating the map was a way to re-assert that we are still a community. I felt this was a prerequisite for keeping the testers connected to each other and mindful of the support available from their peers.

The check-in format that I used to gather the information at the start of the week worked well. It meant that everyone contributed to the discussion. I plan to start each week with a check-in of some description while we remain dispersed.

Next I started to consider how to create an environment for the informal gathering and conversation that would usually happen at our weekly afternoon tea. November is traditionally a busy time of year for our delivery as we work to release before the holiday period. Even when we're co-located, it can be hard to get people together. Any distraction from delivery had to have an element of purpose.

Communication was emphasised in everything that I read about distributed teams, with the message that more is better when people are working remotely. I wanted a daily rather than a weekly pulse, but it had to be designed for asynchronous communication. It wasn't feasible to attempt to book a daily appointment and gather people together.

I decided to make use of a book of objective thinking puzzles that I purchased some time ago but never completed. The puzzles are relatively quick, have a purpose in expanding thinking skills, are well suited to remote asynchronous communication, create enough interest that people participate, and offer the opportunity for some conversation outside of their core purpose.

The hardest Puzzle of the Day so far!

I've started to share a puzzle each morning with the testers via an online chat channel. This is keeping the channel active with conversation, which is essential for me to determine the vibe. I've yet to determine the importance of the patterns that I see. I don't assume that silence is bad. I don't assume that people who are active aren't under pressure. But I hope that encouraging informal conversations will start to provide rich information about how people are feeling, just as it did in the office.
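Sharing the puzzle can be automated if your chat tool supports incoming webhooks, as most do. A minimal sketch using only the Python standard library; the webhook URL and puzzle file are placeholders, and the payload assumes a Slack-style webhook:

    import json
    from datetime import date
    from urllib.request import Request, urlopen

    # Placeholder webhook URL; the {"text": ...} payload assumes a
    # Slack-style incoming webhook.
    WEBHOOK_URL = "https://hooks.example.com/services/CHANNEL/TOKEN"

    # puzzles.json: a JSON list of puzzle strings.
    with open("puzzles.json") as f:
        puzzles = json.load(f)

    # Rotate through the list, one puzzle per day.
    puzzle = puzzles[date.today().toordinal() % len(puzzles)]
    payload = json.dumps({"text": f"Puzzle of the Day: {puzzle}"}).encode()

    request = Request(WEBHOOK_URL, data=payload,
                      headers={"Content-Type": "application/json"})
    urlopen(request)  # schedule this script to run each morning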

Finally, I've started to attend meetings that I would usually skip in our co-located environment. This week the coaching team that I belong to attended two of our product tribe gatherings. These focus on sharing information that delivery teams need to succeed and recognising achievements in what we've already released to our customers.

The content is not directly relevant to me, but these events were a great opportunity to determine the vibe of those tribal groups and the testers within them. Having the ability to sense the atmosphere was worth the hassle of arranging transport and balancing calendar conflicts to attend. It was also a way to be visible, so that people remember to call on us for help too.

It's still early days for our dispersed team. These are just a few things that I've done this week to try to lift the blindfold. I'm curious to hear from other people who coach across dispersed or distributed teams. How do you determine the atmosphere of your team? Where do you discover opportunities to support people? What suggestions do you have that I could try to apply?


Wednesday, 2 November 2016

Stay Interviews for Testers

Recently one of my colleagues sent out an article about stay interviews. Basically, a stay interview is the opposite of an exit interview. Instead of waiting for people to resign then asking them why they are leaving the organisation, you try to determine what is making them stay.

Stay interviews are primarily a retention tool. They're a means of staying connected with the people who work with you, and maintaining an environment that they're happy to be contributing in.

I was interested in this idea, so I decided to try it out. I met one-on-one with every permanent tester in my department to ask a set of questions that touched on motivation, happiness, unused talents and learning opportunities. The answers that I collected provided me with a pulse of the team as a whole, and valuable insights into the individuals who I coach too.

The Questions

I pulled all of my stay interview questions from across the internet; there are a lot of articles around that will give you examples.

The ten questions that I chose as being most relevant to my organisation and the purpose of my discussions were:
  1. The last time you went home and said, "I had a great day, I love my job," what had happened?
  2. The last time you went home and said, "That's it, I can't take it anymore," what had happened?
  3. How happy are you working here on a scale of 1-10 with 10 representing the most happy?
  4. What would have to happen for that number to become a 10?
  5. What might tempt you to leave?
  6. What existing talents are not being used in your current role?
  7. What would you like to learn here?
  8. What can I do to best support you?
  9. What do you think is the biggest problem in [our testing team] at the moment?
  10. What else should I be asking you?

The Answers

All of these individual conversations were in confidence. However I did create a high-level document to share with other leaders in my department, which summarised key themes through illustrations, graphs, and anonymised comments. What follows is a subset of that, suitable for sharing.

I took the answers to the first two questions, categorised the responses, then created word clouds that demonstrated the common topics. An awesome day for a tester is one in which they are discovering new things to learn, have released software to production, or are simply enjoying the momentum of completing their work at a steady pace:

"I had a great day, I love my job"

An awful day for a tester is one in which their delivery team is in conflict or has a misunderstanding, where they’re in the midst of our release process, or when they’ve encountered issues with our test environments.

"That's it, I can't take it anymore"

What I found particularly interesting about these responses was how general they were. There were not many comments that were specific to testing alone. Rather, I believe that these themes would be consistent for any of our delivery team members: business analysts, developers, or testers.

The question around happiness prompted for a numeric response, so I was able to graph the results:

How happy are you working here on a scale of 1-10 with 10 representing the most happy?

This data was interesting in that the unhappiest testers were mostly from the same area. This was a clear visual to share with the leadership in that particular part of the department, to help drive discussion around specific changes that I hope will improve the testing habitat.
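Graphing the scores takes only a few lines once the anonymised numbers are collected. A minimal sketch with matplotlib, using invented scores:

    from collections import Counter
    import matplotlib.pyplot as plt

    # Anonymised happiness scores, invented for illustration.
    scores = [7, 8, 6, 9, 4, 7, 8, 5, 3, 8, 7, 9, 6, 4, 7]

    counts = Counter(scores)
    plt.bar(range(1, 11), [counts.get(s, 0) for s in range(1, 11)])
    plt.xticks(range(1, 11))
    plt.xlabel("Happiness (10 = most happy)")
    plt.ylabel("Number of testers")
    plt.savefig("happiness.png")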

When asked what would improve happiness, salary was an unsurprising response. But other themes emerged with almost equal weighting: time to deliver more automation, a consistent workflow for testers, and the ability to pick up and learn new tools.

In response to existing talents that are not being used, the most prevalent skills were those that sit within the Testing Coach role: automation frameworks, leadership and training. This was a strong indicator to me that I need to delegate even more frequently to provide these opportunities.

Frustratingly, but not unusually, the requests for learning were fragmented. The lack of a consistent response makes it difficult to arrange knowledge sharing sessions that will appeal to a wide audience. But it does allow people to specialise in areas that are of interest to them rather than pursuing shallow general learning.

Overall I found the activity of stay interviews very useful. The structured set of questions helped me to have a purposeful and productive conversation with each of the permanent testers that I work with. I learned a lot from the information that was gathered; each set of responses was interesting for different reasons. The results have helped me to shape my actions over the coming months. I'm hoping to create outcomes from these conversations that will continue to keep our testing team happy.

Sunday, 25 September 2016

Why don't you just

I'm solution oriented. If I hear about a problem, I like to make suggestions about how it can be resolved. Sometimes before people have even stopped speaking, my brain is spinning on ideas.

As a coach and mentor, this trait can be a problem. Thinking about solutions interferes with my active listening. I can't hear someone properly when I'm planning what I'll say next. I can neglect to gather all the context to a situation before jumping in with my ideas. And when I offer my thoughts before acknowledging those of the person who I'm talking to, I lack empathy.

Earlier in my career I was taught the GROW model, which is a tool that has been used to aid coaching conversations since the 1980s. GROW is an acronym that stands for goal, reality, options, way forward. It gives a suggested structure to a conversation about goal setting or problem solving.

When I jump to solutions, I skip straight to the end of the GROW model. I'm focusing on the way forward. While I do want my coaching conversations to end in action, I can end up driving there too fast.

Pace of conversation is a difficult thing to judge. I've started to use a heuristic to help me work out when I'm leaping ahead. If I can prefix a response with "Why don't you just" then it's likely that I've jumped into solution mode alone, without the person that I'm speaking to.

Why don't you just ask Joan to restart the server?

Why don't you just look through the test results and see how many things failed?

Why don't you just buy some new pens?

"Why don't you just" is the start of a question, which indicates I'm not sure that what I'm about to say is a valid way forward. If I'm uncertain, it's because I don't have enough information. Instead of suggesting, I loop back and ask the questions that resolve my uncertainty.

"Why don't you just" indicates an easy option. It's entirely likely that the person has already identified the simplest solutions themselves. Instead of offering an answer that they know, I need to ask about the options they've already recognised and dismissed. There are often many.

"Why don't you just" can also help me identify when I'm frustrated because the conversation is stuck. Perhaps the other person is enjoying a rant about their reality or cycling through options without choosing their own way forward. Then I need to ask a question to push the conversation along, or abandon it if they're simply talking as a cathartic outlet.

This prompt helps me determine the pace of a conversation. I can recognise when I need to slow down and gather more information, or when a conversation has stalled and I need to push the other person along. Perhaps "Why don't you just" will help others who are afflicted with a need for action.

Thursday, 11 August 2016

Fostering professional development

One of the sessions that I attended at CAST2016 was titled "How do I reach the congregation when I'm preaching to the choir?" presented by Rob Bowyer and Erik Davis. One of the main themes of discussion focused on whether people should "sell" professional development to their colleagues or team.

In the introduction to the session, Rob and Erik spoke a little bit about their own contexts. They shared some of the challenges that they've encountered in trying to foster a culture of professional development in both their organisations and their local testing community.

Two particular challenges stuck out for me and I noted them down. Firstly, that "most of the people didn't care" about professional development. Secondly, that "I've been struggling to get people to see the value" in professional development. It struck me that these two challenges in creating a culture of learning could be related.

Do I see value?

I have an ISTQB Foundation certificate. I did this early in my career because I believed that getting this qualification was necessary to find employment in the software testing field. I could see the certificate being mentioned in a lot of job advertisements for testers. 

I saw a clear benefit to me in downloading the syllabus, doing some independent study and taking an exam. This activity was going to open up opportunities in a field of work that I might otherwise be unable to enter. I wanted to be a tester, so I wanted to get the certification.

At that time, I saw the value in this professional development for my career.

On the other hand, I have never completed the BBST Foundation course. I have heard a lot about this qualification and investigated the material that is available online. I have advocated for people in my team to attend this course and published the business benefits I used to argue for this opportunity. But I have not completed the course personally.

I did not learn about BBST Foundation until I had reached a point where I had learned many, but not all, of the concepts in the course via other means. I had heard a lot about the time investment required to complete the course successfully. When given the opportunity to take the class, I decided not to.

At that time, I did not see the value in this professional development for my career.

Do I care?

In the case of ISTQB, a manager might have assumed by my actions that I cared about my professional development. In the case of BBST, a manager might have assumed by my actions that I did not care about my professional development. Both conclusions are reached by assumptions, which are present in any communication.

The Satir Interaction Model describes what happens inside of us as we communicate - the process we go through as we take in information, interpret it, and decide how to respond [Ref].

Ref: "I think we have an issue" -- Delivering unwelcome messages
Fiona Charles

The steps in the Satir Interaction Model between intake and response are hidden. This means that the end result of the process that assigns meaning and significance can be quite surprising to the recipient, which can be a catalyst for conflict.

For example, imagine that I give a manager an input of "I do not want to take the BBST Foundation course". I would be surprised by a response from that manager expressing disappointment that I don't care about my professional development.

We can also climb a Ladder of Inference in our interactions, which refers to the idea that there's "a common mental pathway of increasing abstraction, often leading to misguided beliefs". In essence, this is about leaping to conclusions.

For example, imagine the same manager who received my negative response to the BBST Foundation course receiving a promotional email for the RST course. They might extrapolate from my previous negative response that I will not want to attend RST, that I don't want to take any training courses, and that it would be a waste of time to forward me an email that describes this opportunity. I haven't had any input into this flow of reasoning. The manager has independently climbed a ladder of inference.

I think we need to be aware of both of these communication models when assessing an apparent lack of interest in professional development - particularly when we're labeling what we see as "most of the people didn't care".

Empathy & Understanding

Let's return to the question of whether there is a need to sell professional development. I don't think so. However, I agree with an alternative phrasing suggested in the session: that we should foster professional development.

When I sell, I am trying to be persuasive and articulate the merits of an activity. My communication is broadcast oriented. I want to share my reasoning and rationale. I try to explain why people should participate. My intent is to advertise.

That "I've been struggling to get people to see the value" is a failure to sell.

When I foster, I am seeking to encourage the development of an activity through understanding the obstacles that prevent it from happening. I want to be mindful of the ladder of inference and the judgments that I am applying to the responses of my colleagues and team. I want to be aware of how the significance and meaning I've assigned might have distorted the message that I have been given, particularly when people are saying "no" to an opportunity.

That "most of the people didn't care" is a failure to foster.

I believe that there are relatively few people who truly don't care about their professional development. If there are people around you who you would label in this manner, I'd challenge you to think about how you have communicated and the responses that you've received.

What have they actually said? What meaning have you ascribed? Have you really understood?

I believe that in reflection and inquiry there is opportunity to successfully foster professional development.




Saturday, 2 April 2016

Lightning Talks for Knowledge Sharing

The end of March is the halfway point of the financial year in my organisation. It's also the time of mid-year reviews. I don't place much emphasis on the review process that is dictated, but I do see this milestone as a great opportunity to reflect on what I've learned in the past six months and reassess what I'd like to achieve in the remainder of the year.

I wanted to encourage the testers in my team to undertake this same reflection and assessment. I was also very curious about what they would identify as having learned over the past six months. I wanted to see where people were focusing their self-development time and understand what they had achieved.

Then I thought that if I was curious about what everyone was doing, perhaps the testers would feel the same way about each other. So I started to think about how we could effectively share what had been learned by everyone across the team, without overloading people with information.

One of the main facets of my role as a Testing Coach is to facilitate knowledge sharing. I like to experiment with different ways of propagating information like our pairing experiment, coding dojos, and internal testing conference. None of these felt quite right for what I wanted to achieve this time around. I decided to try a testing team lightning talks session.

I was first exposed to the idea of lightning talks at Let's Test Oz. The organisers called for speakers who would stand up and talk for up to five minutes on a topic of their choice. A couple of my colleagues took up this challenge and I saw first-hand the satisfaction they had from doing so. I also observed that the lightning talk format created a one hour session that was diverse, dynamic and fun.

So in December last year I started to warn the testers that at the end of March they would be asked to deliver a five minute lightning talk on something that they had learned in the past six months. This felt like a good way to enforce some reflection and spread the results of this across the team.

I scheduled a half day in our calendars and booked a large meeting room. Three weeks out from the event I asked each tester to commit to a title for their talk along with a single sentence that described what they would speak about. I was really impressed by the diversity of topics that emerged, which reflected the diversity of activities in our testing roles.

One week ahead I asked those who wished to use PowerPoint slides to submit them so that I could create collated presentations. Only about half of the speakers chose to use slides, which I found a little surprising, but it helped create some variety in presentation styles.

Then the day prior I finalised catering for afternoon tea and borrowed a set of 'traffic lights' from our internal Toastmasters club so that each presenter would know how long they had spoken for.

On the day itself, 27 testers delivered lightning talks. 

The programme was structured into three sessions, each with nine speakers, that were scheduled for one hour. This meant that there was approximately 50 minutes of talks, then a 10 minute break, repeated three times.



Having so many people present in such a short space of time meant that there was no time for boredom. I found the variety engaging and the succinct length kept me focused on each individual presentation. I also discovered a number of things that I am now curious to learn more about myself!

There were some very nervous presenters. To alleviate some of the stress, the audience was restricted to only the testing team and a handful of interested managers. I tried to keep the tone of the afternoon relaxed. I acted as MC and operated the lights to indicate how long people had been speaking for, keeping both tasks quite informal. 

There was also a good last minute decision to put an animated image of people applauding in the PowerPoint deck so that it would display between each speaker. This reminded people to recognise each presenter and got a few giggles from the audience.

After the talks finished, I asked the audience to vote on their favourite topic and favourite speaker of the day. I also asked for some input into our team plan for the next six months with particular focus on the topics that people were interested in learning more about. Though I could sense that people were tired, it felt like good timing to request this information and I had a lot of feedback that was relatively cohesive.

Since the session I've had a lot of positive comments from the testers who participated that it was a very interesting way to discover what their peers in other teams had been learning about. I was also pleased to hear from some of those who were most reluctant to participate that many of their fears were unfounded. 

From a coaching perspective, I was really proud to see how people responded to the challenge of reflecting on their own progress, identifying a piece of learning that they could articulate to others in a short amount of time, then standing up and delivering an effective presentation.

I'll definitely be using the lightning talks format for knowledge sharing again.

Friday, 13 November 2015

Using strong-style pairing and a coding dojo for test automation training

At work we're implementing a brand new automation suite for one of our internet banking applications. This is the first framework that I've introduced from a coaching perspective as opposed to being the tester implementing automation day-to-day within a delivery team.

Aside from choosing tools and developing a strategy for automation, I've discovered that a large proportion of the coaching work required is to train the testers within the teams in how to install, use and extend the new suite.

I've done a lot of classroom training and workshops before, but I felt that these formats weren't well suited to teaching automation. Instead I've used two practices that are traditionally associated with software development rather than testing: strong-style pairing and a coding dojo.

I've been surprised at how well these practices have worked for our test automation training and thought I would share my experience.

Strong-style pairing

After a series of introductory meetings to explain the intent of the new suite and give a high-level overview of its architecture, each tester worked independently using the instructions on our organisation wiki to get the tests running on their local environment.

As the testers were completing their installations, I worked in parallel to create skeleton tests with simple assertions in different areas of the application, one area per tester. To keep the training as simple as possible I wanted to split out distinct areas of focus for individual learning and reduce the potential for merge conflicts of our source code.
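Our actual tool choices aren't important here. As an illustration only, a skeleton test with a simple assertion might look like the following in Python with pytest and Selenium; the URL and locator are placeholders:

    import pytest
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    APP_URL = "https://test.example.com/banking"  # placeholder environment URL

    @pytest.fixture
    def browser():
        driver = webdriver.Chrome()
        yield driver
        driver.quit()

    def test_payments_page_loads(browser):
        """Skeleton test for one area: navigate there and assert a landmark."""
        browser.get(APP_URL + "/payments")  # placeholder area of the application
        heading = browser.find_element(By.TAG_NAME, "h1")
        assert heading.text  # simple assertion: the page rendered a heading

Each tester then had one of these skeletons as a safe, isolated starting point to expand during their pairing session.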

As they were ready, I introduced an area to each tester via individual one hour pairing sessions using strong-style pairing. The golden rule of strong-style pairing is:

"for an idea to go from your head into the computer it MUST go through someone else's hands"

For these sessions I acted as the navigator and the tester who I was training acted as the driver. As the testers were completely unfamiliar with the new automation suite, strong-style pairing was a relatively comfortable format. I did a lot of talking, while the testers themselves worked hands-on, and together we expanded the tests in their particular area of the application.

As the navigator, I prepared for each pairing session by thinking up a series of objectives at varying degrees of difficulty to accommodate different levels of skill. My overarching goal was to finish the hour with a commit back to the repository that included some small change to the suite, which was achieved in two-thirds of the sessions.

As a coach, I found these sessions really useful to judge how much support the testers will require as we progress from a prototype stage and attempt to fulfill the vision for this suite. I now have a much more granular view of where people have strengths and where they may require some help.

I had a lot of positive feedback from the testers themselves. For me the success was that many were able to continue independently immediately following the session and make updates to the tests on their own.

Coding Dojo

At this point everyone had installed the suite individually, then had their pairing session to get a basic understanding of how to extend an existing test. The next step was to learn how to implement a new test within the framework.

I felt that a second round of individual pairing would involve a lot of needless repetition on my part, explaining the same things over and over again. Ultimately I wanted the testers in the team to start pairing with each other to learn collaboratively as part of our long-running pairing experiment.

I found a "how do you put on a coding dojo?" video and decided to try it out.

I planned the dojo as a two hour session for six testers. I decided to allow 90 minutes for coding, with 15 minutes on each side for introduction and closing activities. Within the 90 minutes, each of the six testers would have 15 minutes in the navigator/co-pilot role, and 15 minutes at the keyboard in the driver/pilot role.

I thought carefully about the order in which to ask people to act in these roles. I wanted to start with a confident pilot who would put us on the right course. I also wanted the testers to work in the pairs that they would work in immediately following the session to tackle their next task. So I created a small timetable. To illustrate with fictitious testers:
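One way to lay this out is for each pair to take two consecutive 15-minute slots, swapping driver and navigator, so that everyone holds both roles. A minimal sketch that prints such a timetable, with invented names:

    from datetime import datetime, timedelta

    # Fictitious testers, seated in the pairs they'll keep after the session.
    pairs = [("Alice", "Bob"), ("Carol", "Dave"), ("Erin", "Frank")]

    start = datetime(2015, 11, 13, 13, 15)  # coding begins after the intro
    slot = timedelta(minutes=15)

    print("Start  Driver  Navigator")
    for first, second in pairs:
        # Each pair takes two consecutive slots, swapping roles.
        for driver, navigator in ((first, second), (second, first)):
            print(f"{start:%H:%M}  {driver:<7} {navigator}")
            start += slot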



On the morning of the session I sent an email out to all the participants that reiterated our objective, shared the timetable, and explained that they would not require their own laptops to participate.

We started the session at 1pm. I had my laptop prepared, with only the relevant applications open and all forms of communication with the outside world (email, instant messaging, etc.) switched off. The laptop was connected to a projector and we had a large flipchart with markers to use as a shared notes space.

I reiterated the content of the morning email and shared our three rules:

  • The facilitator asks questions and doesn't give answers
  • Everyone must participate in the code being written
  • Everyone must take a turn at the keyboard

Then I sat back and watched the team work together to create a new test!

Though I found it quite challenging to keep quiet at times, I could see that the absence of a single authority was getting the group to work together. It was really interesting to see the approach taken, which differed from how I thought they might tackle the problem. I also learned a lot more about the personalities and social dynamics within the team by watching the way they interacted.

It took almost exactly 90 minutes to write a new test that executed successfully and commit it back to the repository. Each tester had the opportunity to contribute and there was a nice moment when the test passed for the first time and the room collectively celebrated!

I felt that the session achieved the broader objective of teaching all the testers how to implement a new test, and provided enough training so that they can now work in their own pairs to repeat the exercise for another area of the application.

I intend to continue to use both strong-style pairing and coding dojos to teach test automation.

Friday, 7 August 2015

How do you become a great tester?

At the fifth annual Kiwi Workshop for Software Testing (KWST5), held earlier this week, James Bach asked a seemingly simple question during one of the open season discussions. I have been thinking about it ever since.

"How do you know you're a good tester?"

Since the conference I've had a number of conversations about this, with testers and non-testers, in person and on Twitter. During these conversations I've found it much easier to think of ways to challenge the responses provided by others than to think of an answer to the original question for myself.

Today I asked a colleague in management how they knew that the testers within the team they managed were good testers. We spent several minutes discussing the question in person then, later in the morning, they sent me an instant message that said "... basically a good tester knows they are not a great tester." 

This comment shunted my thinking in a different direction. I agree that most of the people who I view as good testers have a degree of professional uncertainty about their ability. But I don't think that it is this in isolation that makes them a good tester; rather, it's the actions that are driven by this belief. And this led me to my answer.

"How do you know you're a good tester?"

I know I'm a good tester because I want to become a great tester. In order to do this, I actively seek feedback on my contribution from my team members, stakeholders and testing peers. I question my testing and look for opportunities to improve my approach. I imagine how I could achieve better outcomes by improving my soft skills. I constantly look to learn and broaden my testing horizons.

What would you add?

Thursday, 16 July 2015

Mobile Testing Taster

I recently ran a one-hour hands-on workshop to give a group of 20 testers a taste of mobile application testing. This mobile testing taster included brainstorming mobile-specific test ideas, sharing some mobile application testing mnemonics, hands-on device testing, and a brief demonstration of device emulation.

Preparation

In advance of the session, I asked the participants to bring along a smartphone or tablet, either Apple or Android, with the chosen test application installed. I selected a test product with an iOS app, an Android app, and a website optimised for mobile. I also asked those who were able to bring a laptop, so that they could compare mobile and web functionality.

I set up the room so that participants were seated in small groups of 3 – 4 people. Each table had one large piece of flipchart paper and three different coloured markers on it. The chairs were arranged along two adjacent sides of the table so that participants within each small group could collaborate closely together.

Brainstorming

After a brief outline of what the session would cover, I asked participants to start brainstorming their test ideas for the chosen test application that they had available on the devices in front of them. They were allowed to use the device as a reference, and I asked them to choose one coloured marker to note down their ideas as a group.

Five of the six groups of participants started with a feature tour of the application. Their brainstorming started with the login screen, then moved through the main functionality of the application. The sixth group took a mobile-focused approach from the very beginning of the session.

After five minutes, I paused the activity. I wanted to switch the thinking of everyone in the room from functionality to mobile-specific test ideas. I encouraged every team to stop thinking about features and instead to start thinking about what was unique about the application on mobile devices.

To aid this shift, I handed out resources for popular mobile testing mnemonics: the full article text for I SLICED UP FUN from Jonathan Kohl and the mind map image of COP FLUNG GUN from Dhanasekar Subramanian. These resources are full of great prompts to help testers think of tests that may apply for their mobile application. I also encouraged the groups to make use of their laptops to spot differences between the web and mobile versions of the software.

The participants had a further 15 minutes to brainstorm from this fresh perspective using a different coloured marker. For a majority of groups this change in colour emphasised a noticeable change in approach.

At the end of the brainstorming session there was quite a variety in the nature and number of test ideas generated in each small group. I asked the participants to stand up, walk around the room, look at the work of other groups, and read the ideas generated by their peers.

Testing

Armed with ideas, the next phase of the workshop was to complete ten minutes of hands-on device testing. I asked each tester to pick a single test idea for this period of time, so that they focused on exploring a particular aspect of the application. 

Each group was asked to use the final coloured marker to note any problems they found in their testing. There were relatively few problems, but they were all quite interesting quirks of the application.

Though ten minutes was a very short period of time, it was sufficient to illustrate that testing a mobile application feels very different to testing on a computer. The participants were vocal about enjoying the experience. As a facilitator I noticed that this enjoyment made people more susceptible to distraction.

It was also interesting to see how much functionality was covered despite the testing being focused on the mobile-specific behaviours of the application. For example, one tester navigated through the product looking at responsive design when switching between portrait and landscape view, which meant that she completed a quick visual inspection of the entire application.

Emulation

While discussing ideas for this session, Neil Studd introduced me to the Device Mode function available in Chrome Developer Tools. During the last part of the workshop I played a five minute video about device mode, then showed a quick live demonstration of how our test application rendered in various devices through this tool.

Device mode is well documented. I presented it as an option for getting an early sense of how new features will behave without having to track down one of our limited number of test devices. Emulators are not a substitute for physical devices, but they may help us consider responsive design earlier in our development process.
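
Device mode itself is point-and-click, but the same kind of emulation can also be scripted. As a minimal sketch, assuming Python with Selenium WebDriver and ChromeDriver (neither of which featured in the workshop, and the metrics are invented to approximate a small phone), Chrome can be launched with an emulated device:

# A minimal sketch of scripted device emulation, assuming Selenium and
# ChromeDriver are installed; metrics approximate a small Android phone.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_experimental_option("mobileEmulation", {
    "deviceMetrics": {"width": 360, "height": 640, "pixelRatio": 3.0},
    "userAgent": "Mozilla/5.0 (Linux; Android 5.0) AppleWebKit/537.36",
})
driver = webdriver.Chrome(options=options)
driver.get("https://example.com")  # substitute your own test application
print(driver.execute_script("return [window.innerWidth, window.innerHeight];"))  # emulated viewport
driver.quit()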

As facilitator I did feel like this was a lot to cover in an hour. However, the session fulfilled its purpose of giving the attendees a relatively rounded introduction to mobile testing. Perhaps you'll find that a similar mobile testing taster is useful in your organisation.

Monday, 6 July 2015

Testing Coach Cafe Service Menu

One of the things I've been thinking about is how I can get more involved with the work of individuals in my team without being a nuisance. I have deliberately avoided scheduling recurring one-on-ones or continually dropping by desks to see how people are doing, but I do want to be more actively involved in helping people tackle their testing problems and improve their skills.

At the recent Nordic Testing Days conference I had the opportunity to speak with Stephen Janaway, a former Test Coach based in the UK. I mentioned my conundrum and he shared a solution from his organisation. They moved to a pull system where their coaches created a service menu for the development teams that explained what the coaches are available to help with. Stephen spoke about this system during his presentation at the conference and posted a real example of A Coaching Cafe Service Menu on his blog.

I really liked this idea and decided to adopt the same approach with my team. With a little re-use and a bit of fresh thinking, I created a Testing Coach Cafe Service Menu for my organisation:

The A3 poster version of the Testing Coach Cafe Service Menu

The menu provides an overview of some of the ways that I'd like to be working with each of the testers in my team. I hope it will prompt them to ask me for assistance -- a pull system rather than me imposing myself on them -- and clarify my role as their Testing Coach.

I'm keen to do more individual coaching sessions that are focused on what people really want. If a number of people are requesting similar things, I plan to start running small group sessions. If I don't have the skills requested, I can find resources in the community, call on others in the team, or use external providers who may be able to help. And, if there's something that people want that isn't listed, then I've encouraged them to ask for that too!

To share the menu with my team I created a printed brochure for every individual and an A3 poster that has been posted on our Testing Wall. I like the tactile nature of physical information; I think it helps emphasise important messages and creates serendipitous, continued reminders. I also added the content to our organisation wiki and shared a soft copy of the brochure version via email.

Brochures and A3 poster versions of the Testing Coach Cafe Service Menu

Alongside the information, I've made it clear that people can ask for these services anytime - online or in person. As well as seeking new skills, I've encouraged people to start "putting their paddles in the air" based on a recent post from Lillian Grace:

... in the back of my mind I felt a bit guilty, like I shouldn’t be asking for help unless it was absolutely critical - and then I quickly realised, but that’s not how I like to be treated. Asking for help isn’t what you should do when you’re desperate, it’s literally when you would like help. I dearly appreciate it when someone surfaces an inkling of a concern in time for me to deal with it.

The initial reaction to the Testing Coach Cafe Service Menu has been very positive and I hope that it will help me better serve my team.

Wednesday, 24 June 2015

A pairing experiment for sharing knowledge between agile teams

Over the past month I've started running a pairing experiment in my organisation. The primary purpose of this experiment is to share knowledge between testers who are working in different agile teams, testing different applications and platforms.

The Experiment Framework

After researching pair testing, I decided to create a structured framework for experimenting with pairing. I felt there was a need to set clear expectations in order for my 20+ testers to have a consistent and valuable pairing experience.

This did feel a little dictatorial, so I made a point of emphasising the individual responsibility of each tester to arrange their own sessions and control what happened within them. There has been no policing or enforcement of the framework, though most people appear to have embraced the opportunity to learn beyond the boundaries of their own agile team.

I decided that our experiment would run for three one-month iterations. Within each month, each pair will work together for one hour per week, alternating each week between the project team of each person in the pair. As an example, imagine that Sandi in Project A is paired with Danny in Project B. In the first week of the iteration they will pair test Project A at Sandi's desk, then in the second week they will pair test Project B at Danny's desk, and so on. At the end of the monthly iteration each pair should have completed four sessions, two in each project environment.
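
To make the alternation concrete, here is a minimal sketch in Python using the fictitious pair above; it illustrates the scheduling logic only, since in practice the testers arrange the sessions themselves:

# A minimal sketch of one pair's monthly iteration: four weekly sessions,
# alternating between each tester's project environment.
pair = [("Sandi", "Project A"), ("Danny", "Project B")]
for week in range(1, 5):
    host, project = pair[(week - 1) % 2]
    print(f"Week {week}: pair test {project} at {host}'s desk")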

In between iterations, the team will offer their feedback on the experiment itself and the pairing sessions that they have completed. As we are yet to complete a full iteration I'm looking forward to receiving this first round of feedback shortly. I intend to adapt the parameters of the experiment before switching the assigned pairs and starting the second iteration.

At the end of the three months I hope that each person will have a rounded opinion about the value of pairing in our organisation and how we might continue to apply some form of pairing for knowledge sharing in future. At the end of the experiment, we're going to have an in-depth retrospective to determine what we, as a team, want to do next.


An example of how one tester might experience the pairing experiment

A Sample Session

In our pair testing experiment, both the participants are testers. To avoid confusion when describing a session, we refer to the testers involved as a native and a visitor.

The native hosts the session at their work station, selects a single testing task for the session, and holds accountability for the work being completed. The native may do some preparation, but pairing will be more successful if there is flexibility. A simple checklist or set of test ideas is likely to be a good starting point.

The visitor joins the native to learn as much as possible, while contributing their own ideas and perspective to the task.

During a pairing session there is an expectation that the testers should talk at least as much as they test so that there is shared understanding of what they're doing and, more importantly, why they are doing it.

When we pair, a one hour session may be broken into the following broad sections:

10 minutes – Discuss the context, the story and the task for the session.

The native will introduce the visitor to the task and share any test ideas or high-level planning they have prepared. The visitor will ask a lot of questions to be sure that they understand what the task is and how they will test it.

20 minutes – Native testing, visitor suggesting ideas, asking questions and taking notes.

The native will be more familiar with the application and will start the testing session at the keyboard. The native should talk about what they are doing as they test. The visitor will make sure that they understand every action taken, ask as many questions as they have, and note down anything of interest in what the native does including heuristics and bugs.

20 minutes – Visitor testing, native providing support, asking questions and taking notes.

The visitor will take the keyboard and continue testing. The visitor should also talk about what they are doing as they test. The native will stay nearby to verbally assist the visitor if they get confused or lost. Progress may be slower, but the visitor will retain control of the work station through this period for hands-on learning.

10 minutes – Debrief to collate bug reports, reflect on heuristics, update documentation.

After testing is complete it's time to share notes. Be sure that both testers understand and agree on any issues discovered. Collate the bugs found by the native with those found by the visitor and document them according to the traditions of the native team (post-it, Rally, etc.). Agree on what test documentation to update and what should be captured in it. Discuss the heuristics listed by each tester and add any that were missed.

After the session the visitor will return to their workstation and the pair can update documentation and the wiki independently.
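
For facilitators who like to keep a visible clock on these phases, the structure is easy to express as data. A minimal sketch in Python, with the phase names paraphrased from the breakdown above:

# A minimal sketch that turns the one hour session structure into a
# printed schedule of phase start and end times.
agenda = [
    (10, "Discuss the context, the story and the task"),
    (20, "Native testing, visitor suggesting ideas and taking notes"),
    (20, "Visitor testing, native supporting and taking notes"),
    (10, "Debrief: collate bugs, reflect on heuristics, update documentation"),
]
elapsed = 0
for minutes, phase in agenda:
    print(f"{elapsed:02d}-{elapsed + minutes:02d} min: {phase}")
    elapsed += minutes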

To support this sample structure and emphasise the importance of communication, every tester was also given the following graphic, which includes potential questions to ask in each phase:

Questions to ask when pair testing

I can see possibilities for this experiment to work for other disciplines - developers, business analysts, etc. I'm looking forward to seeing how the pairing experiment evolves over the coming months as it moulds to better fit the needs of our team.

Thursday, 28 May 2015

Dominos to illustrate communication in pair testing

I recently ran a one hour workshop to introduce pair testing to my team. I wanted to make the session interactive rather than theoretical; however, having done the research, I struggled to find any practical tips for training people in how to pair effectively. Having created something original to suit my purpose, I thought I would share my approach in case it is useful for others.

I coach a large team of 20 agile testers who are spread across several different teams, testing different applications and platforms. Though I wanted the workshop to be hands-on, the logistics of 10 pairs performing software testing against our real systems were simply too challenging. I needed to go low-tech, while still emulating the essence of what happens in a pair testing session.

So, what is the essence of pair testing? I spent several days thinking on this and, in the end, it wasn't until I bounced ideas around with a colleague that I realised. Communication.

Most people understand the theory of pairing immediately. Two people, one machine, sharing ideas and tackling a single task together. It's not a difficult concept. But the success of pairing hinges on the ability of those who are paired to communicate effectively with one another. How we speak to each other impacts both our enjoyment and our output.

With this goal in mind I started to research communication exercises, and found this:

Dominos

One of the listening skills activities that I do is that you have people get in groups of 2, you give one of them a pack of 8 dominos and the other a shape diagram of rectangles (dominos) in a random pattern. Only the person without the dominos should see the pattern. They sit back to back on the floor or the one with the dominos at a table and the other in a chair back to back. The one with the diagram instructs the other on placing the dominos to match the diagram. The one with the dominos cannot speak. They get 2 min. I usually do this in a big group where they are all working in pairs at once.
Then they switch roles, get a new pattern and do the exercise again, this time the person with the dominos is allowed to speak. 2 min. usually successful.
Then we debrief looking at challenges, jargon words used, analyze how they provided instructions without being able to watch the person, tone, questions asked, etc. ( I have this all in a document if you want it) It is quite fun and enlightening for those who are training to be able to be in a support role with technology.


Though it wasn't quite right for my workshop, this was an exercise for pairs that was interactive, communication focused, and involved toys. I decided to adapt it for my purpose and use dominos to illustrate two different types of knowledge sharing -- "follow me" and "flashlight" -- that I hoped to see occur in our real-life pair testing sessions.

Follow Me

The workshop participants were placed in pairs. One person in the pair was given a packet of dominos and a diagram of 8 dominos in a pattern. They were given 2 minutes to arrange their dominos to match the diagram while their partner observed.

I asked each pair to push all their dominos back into a pile. The person who had arranged the dominos was asked to pick up the instruction diagram and hold it out of view of their partner. The person without the instructions was then given 2 minutes to repeat the same domino arrangement with limited assistance from their partner who was forbidden from touching the dominos!

Though the person with the dominos had seen the puzzle completed and knew its broad shape, it was clear that they would need to talk to their partner and ask a lot of questions about the diagram in order to repeat the arrangement precisely. It was interesting to observe the different approaches; not every pair successfully completed the second part of this exercise within the 2 minute time frame.

After the exercise we had a short debrief. The participants noticed that:

  • pairs who talked more were able to complete the task quicker,
  • there were advantages to using non-verbal communication, particularly pointing and nodding, to help the person arranging the dominos, 
  • though it seemed easy when observing the task, attempting to repeat the same steps without the diagram was more challenging than people expected, 
  • it was frustrating for the person with the instructions to be unable to touch the dominos, and
  • keeping an encouraging tone when giving instructions helped to focus people on the task rather than feel stressed by the short deadline.


I felt that there were clear parallels between this activity and a pair testing scenario in which a tester is exploring a completely unfamiliar domain with guidance from a domain expert. I emphasised the importance of being honest when help is required, and of keeping up a constant dialogue when people are uncertain.

Flashlight

In the same pairs, one person was given a diagram of 8 dominos while the other was given a partial diagram that included only four. The person with access to only the smaller diagram was given 2 minutes to arrange the full set of 8 dominos.

Example of a full map of 8 dominos (left) next to a corresponding partial map of 4 dominos (right)

In this iteration the person who was arranging the dominos was given some understanding of what was required, but still needed to ask their partner for assistance to complete the entire puzzle. As previously, the person with the complete picture was not permitted to touch the dominos and kept their instructions hidden from their partner.

Again we had a short debrief. The participants felt that this exercise was much easier than the first. Because the person arranging the dominos brought their own knowledge to the task, almost every pair completed the arrangement within the 2 minutes.

As a facilitator I noticed that this little bit of extra knowledge changed the communication dynamics of some pairs quite dramatically. Instead of talking throughout, the observers remained silent as their partner completed the arrangement of the first four dominos. Only once the person with the dominos had completed the task to the extent of their abilities did they ask their pair for input.

The pairs who worked in this way were ultimately slower than their colleagues who kept talking to one another. One way that talking made things quicker was in eliminating double-handling of dominos -- "You'll need that one later".

Having shared this reflection, the two people switched roles and, with new diagrams, repeated the activity. With the expectation set that communication should remain continuous, it seemed that the pairs worked quicker together. The second iteration was certainly noisier!

I felt that there were clear parallels between this activity and one in which a tester is exploring a domain where they have some familiarity but are not an expert. It's important to remember that there is always something to learn, and opportunities to discover the ways in which the maps of others differ from our own. This exercise illustrated how important it is to continue communicating even when we feel comfortable in our own skills.

I was happy with how the dominos activities highlighted some important communication concepts for effective pair testing. If you'd like to repeat this workshop in your own workplace I would be happy to share my domino diagrams to save you some time; please get in touch.