Wednesday 13 December 2017

Pairing for skill vs. Pairing for confidence

I went to a WeTest leadership breakfast this morning. These sessions run in a Lean Coffee format, and today we had a conversation about how to build confidence in people who have learned basic automation skills but seem fearful of applying those skills in their work.

I was fortunate to be sitting in a group with Vicki Hann, a Test Automation Coach, who had a lot of practical suggestions. To build confidence she suggested asking people to:
  • Explain a coding concept to a non-technical team mate
  • Be involved in regular code reviews
  • Practice the same type of coding challenge repeatedly

Then she talked about how she buddies these people within her testing team.

Traditionally when you have someone who is learning you would buddy them with someone who is experienced. You create an environment where the experienced person can transfer their knowledge or skill to the other.

In a situation where the person who is learning has established some basic knowledge and skills, their requirements for a buddy diversify. The types of activities that build confidence can be different to those that teach the material.

Confidence comes from repetition and experimentation in a safe environment. The experienced buddy might not be able to create that space, or the person who is learning may have their own inhibitions about making mistakes in front of their teacher.

Vicki talked about two people in her organisation who are both learning to code. Rather than pairing each person with someone experienced, she paired them with each other. Not day-to-day in the same delivery team, but they regularly work together to build confidence in their newly acquired automation skills.

In their buddy session, each person explains a piece of code that they’ve written to the other. Without an experienced person in the pair, both operate on a level footing. Each person has strengths and weaknesses in their knowledge and skills. They feel safe to make mistakes, correct each other, and explore together when neither knows the answer.

I hadn’t considered that there would be a difference in pairing for skill vs. pairing for confidence. In the past, I have attempted to address both learning opportunities in a single pairing by putting the cautious learner with an exuberant mentor. I thought that confidence might be contagious. Sometimes this approach has worked well, and other times it has not.

Vicki gave me a new approach to this problem, switching my thinking about confidence from something that is contagious to something that is constructed. I can imagine situations where I’ll want to pair two people who are learning, so that they can build their confidence together. Each person developing a belief in their ability alongside a peer who is going through the same process.


Wednesday 6 December 2017

Conference Budgets

There has been conversation on Twitter recently about conferences that do not offer speaker compensation. If you haven't been part of this discussion, I would encourage you to read Why I Don't Pay to Speak by Cassandra Leung, which provides a detailed summary.

I take an interest in these conversations from two perspectives: I regularly speak at international conferences and I co-organise the annual WeTest conferences for the New Zealand testing community.

As an organiser, the events that I help to run cover all speaker travel and accommodation. We make a special effort to care for our conference speakers and have built a reputation in the international testing community as being an event that is worth presenting at.

WeTest is a not-for-profit company that is entirely driven by volunteers. How do we afford to pay all of our speakers?

Humble Beginnings

Our 2014 WeTest conference was a half-day event in a single city.

We had 80 participants who paid $20 per person. They received a conference t-shirt along with a catered dinner of pizza and drinks.

All of our speakers were local to the city, so there were no travel or accommodation expenses. Ticket sales contributed roughly $1,600 (80 participants at $20 each), and the remainder of our budget was balanced by the support of our primary sponsor, Assurity.

Our total budget for this event was approximately $3,000. Our income and expenses were:

WeTest Budget 2014

Stepping Up

By 2016 we felt that we had built an audience for a more ambitious event. We embarked on a full-day conference tour with the same program running in two cities within New Zealand.

We had 150 participants in each city who paid $150 per person. This was a significant jump in scale from our previous events, so we had to establish a formal scaffold for our organisation. WeTest was registered as a company, we created a dedicated bank account, and launched our own website.

This was also the first year that we invited international speakers. 25% of our speaker line-up, or three out of twelve speakers, traveled to New Zealand from overseas. Covering their travel and accommodation costs significantly altered the dynamics of our budget. Running the conference in two different cities meant that there were travel and accommodation costs for our New Zealand based speakers and organisers too.

Our total budget for this event was approximately $50,000. Our income and expenses were:

WeTest Budget 2016

The Big League

Our 2016 events sold out quickly and we had long waiting lists. To accommodate a larger audience, we grew again in 2017. This meant securing commercial venues, signing contracts, paying deposits, registering for a new level of tax liability and formalising our not-for-profit status.

In 2017 we had around 230 participants in each city. We introduced an early bird ticket at $150 per person, so that our loyal supporters would not experience a price hike and we could collect some early revenue to cover upfront costs. Our standard ticket was $250 per person.

40% of our speaker line-up, or four out of ten speakers, traveled to New Zealand from overseas. We incurred similar speaker travel and accommodation expenses to the previous year.

Our total budget for this event was approximately $100,000. Our income and expenses were:

WeTest Budget 2017

To re-iterate, WeTest is a not-for-profit organisation that is volunteer-led. The profit of our 2017 events will be reinvested into the testing community and help us to launch further events in the New Year.

Discussions about speaker reimbursement often happen in the abstract. I hope that these examples provide specific evidence of how a conference might approach speaker reimbursement, whether it is a small community event or a larger endeavour.

At WeTest we have consistently balanced our budget without asking speakers to pay their own way. We are proud of the diverse speaker programs that have been supported by this approach. In 2018 we look forward to continuing to provide a free platform for our speakers to deliver great content.

Sunday 29 October 2017

Strategies for automated visual regression

In my organisation we have adopted automated visual regression in the test strategy for three of our products. We have made different choices in implementing our frameworks, as we use automated visual regression testing for a slightly different purpose in each team. In this post I introduce the concept of automated visual regression then give some specific examples of how we use it.

What is visual regression?

The appearance of a web application is usually defined by a cascading style sheet (CSS) file. Your product might use a different flavour of CSS like SCSS, SASS, or LESS. They all describe the format and layout of your web-based user interface.

When you make a change to your product, you are likely to change how it looks. You might intentionally be working on a design task, e.g. fixing the display of a modal dialog. Or you might be working on a piece of functionality that is driven through the user interface, which means that you need to edit the content of a screen, e.g. adding a nickname field to a bank account. In both cases you probably need to edit the underlying style sheet.

A problem can arise when the piece of the style sheet that you are editing is used in more than one place within the product, which is often the case. The change that you make will look great in the particular screen that you're working in, but cause a problem in another screen in another area of the application. We call these types of problems visual regression.

It is not always easy to determine where these regression issues might appear because style sheets are applied in a cascade. An element on your page may inherit a number of display properties from parent elements. Imagine a blue button with a label in Arial font where the colour of the button is defined for that element but the font of the button label is defined for the page as a whole. Changing the font of that button by editing the parent definition could have far-reaching consequences.

We use automated visual regression to quickly identify differences in the appearance of our product. We compare a snapshot taken prior to our change with a snapshot taken after our change, then highlight the differences between the two. A person can look through the results of these image comparisons to determine what is expected and what is a problem.

Manufactured example to illustrate image comparison
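To make the mechanics concrete, here is a minimal sketch of this kind of image comparison using the Pillow library in Python. It illustrates the general technique rather than the tool that any of our teams use, and the file names are hypothetical.

```python
from PIL import Image, ImageChops  # Pillow imaging library

def visual_diff(before_path, after_path):
    """Return the bounding box of any visual change, or None if the images match."""
    before = Image.open(before_path).convert("RGB")
    after = Image.open(after_path).convert("RGB")
    # difference() produces an image that is black wherever the pixels match
    # (this sketch assumes both snapshots have the same dimensions)
    diff = ImageChops.difference(before, after)
    # getbbox() is None when every pixel matches, otherwise it frames the changed region
    return diff.getbbox()

# Hypothetical usage: compare snapshots taken before and after a change
region = visual_diff("homepage_before.png", "homepage_after.png")
if region:
    print("Visual difference detected in region:", region)
else:
    print("No visual difference detected")
```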

Team One Strategy

The first team to adopt automated visual regression in my organisation was the team behind our public website, a product with a constantly evolving user interface.

The test automation strategy for this product includes a number of targeted suites. There are functional tests written in Selenium that examine the application forms, calculators, and other tools that require user interaction. There are API tests that check the integration of our website to other systems. We have a good level of coverage for the behaviour of the product.

Historically, none of our suites specifically tested the appearance of the product. The testers in the team found it frustrating to repetitively tour the site, in different browsers, to try to detect unexpected changes in how the website looked. Inattentional blindness meant that problems were missed.

The team created a list of the most popular pages in the site based on our analytics. The list was extended to include at least one page within each major section of the website, defining an application tour for the automated suite to capture screenshots for comparison.

The automated visual regression framework was implemented to complete this tour of the application against a configurable list of browsers. It launches browsers through BrowserStack, which means that it can capture images from desktop, tablet, and mobile browsers. The automated checks replace a large proportion of the cross-browser regression testing that the testers were performing themselves.
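As a rough sketch of how such a tour can be driven, the Python snippet below loops over a configurable list of pages and browsers using Selenium's Remote WebDriver and saves a screenshot of each page. The page list, base URL, hub address, and capability details are placeholders rather than our real configuration; the exact capabilities to use would come from your remote provider's (for example BrowserStack's) documentation.

```python
from selenium import webdriver

BASE_URL = "https://www.example.com"        # placeholder for the product under test
HUB_URL = "https://hub.example.com/wd/hub"  # placeholder remote grid, e.g. BrowserStack
PAGES = ["/", "/calculators/home-loan", "/contact-us"]  # hypothetical application tour
BROWSERS = [{"browserName": "chrome"}, {"browserName": "firefox"}]

def capture_tour(label):
    """Save a screenshot of every page in the tour, in every configured browser."""
    for capabilities in BROWSERS:
        driver = webdriver.Remote(command_executor=HUB_URL,
                                  desired_capabilities=capabilities)
        try:
            for page in PAGES:
                driver.get(BASE_URL + page)
                page_name = page.strip("/").replace("/", "_") or "home"
                driver.save_screenshot(
                    "{}_{}_{}.png".format(label, capabilities["browserName"], page_name))
        finally:
            driver.quit()

# Capture a 'baseline' set from the current production release, then a 'candidate'
# set from the release under test, and feed both sets into the image comparison.
capture_tour("baseline")
```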

The team primarily use the suite at release, though occasionally make use of it during the development process. The tool captures a set of baseline images from the existing production version of the product and compares these to images from the release candidate. The image comparison is made at a page level: a pixel-by-pixel comparison with a fuzz tolerance for small changes.
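The snippet below is a minimal sketch of what a pixel-by-pixel comparison with a fuzz tolerance might look like, again using Pillow. The tolerance value is illustrative only; the real framework's threshold and comparison details differ.

```python
from PIL import Image, ImageChops

def pages_match(baseline_path, candidate_path, tolerance=0.001):
    """Treat a page as unchanged if fewer than `tolerance` of its pixels differ."""
    baseline = Image.open(baseline_path).convert("RGB")
    candidate = Image.open(candidate_path).convert("RGB")
    if baseline.size != candidate.size:
        return False  # a change in page dimensions always counts as a difference
    diff = ImageChops.difference(baseline, candidate)
    # Count pixels where any colour channel differs between the two snapshots
    changed_pixels = sum(1 for pixel in diff.getdata() if pixel != (0, 0, 0))
    total_pixels = baseline.size[0] * baseline.size[1]
    return changed_pixels / total_pixels <= tolerance
```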

Team Two Strategy

The second team to adopt automated visual regression was our UI toolkit team. This team develop a set of reusable user interface components so that all of our products have a consistent appearance. The nature of their product means that display problems are important. Even a difference of a single pixel can be significant.

The tester in this team made automated visual regression the primary focus of their test strategy. They explored the solution that the first team had created, but decided to implement their own framework in a different way.

In our toolkit product, we have pages that display a component in different states e.g. the button page has examples of a normal button, a disabled button, a button that is being hovered on, etc. Rather than comparing the page as a whole with a fuzz tolerance, this tester implemented an exact comparison at a component level. This meant that the tests were targeted and would fail with a specific message e.g. the appearance of the disabled button has changed.
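To illustrate the contrast with Team One's page-level approach, here is a hedged sketch of a component-level, exact comparison. It assumes a Selenium driver that supports element screenshots, and the element ids, URLs, and baseline paths are invented for the example; it is not the team's actual framework.

```python
from io import BytesIO
from PIL import Image, ImageChops
from selenium import webdriver

def check_component(driver, url, element_id, baseline_path):
    """Compare one component against its baseline image with no fuzz tolerance."""
    driver.get(url)
    element = driver.find_element_by_id(element_id)  # one element per component state
    candidate = Image.open(BytesIO(element.screenshot_as_png)).convert("RGB")
    baseline = Image.open(baseline_path).convert("RGB")
    # An exact comparison: a single pixel difference fails with a targeted message
    if baseline.size != candidate.size or ImageChops.difference(baseline, candidate).getbbox():
        raise AssertionError("The appearance of the {} component has changed".format(element_id))

# Hypothetical usage against a toolkit page that renders each button state:
# check_component(driver, "https://toolkit.example.com/button", "disabled-button",
#                 "baselines/disabled-button.png")
```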

The initial focus for this framework was getting component level coverage across the entire toolkit with execution in a single browser. This suite was intended to run after every change, not just at release. The tester also spent some time refining the reporting for the suite, to usefully abstract the volume of image comparisons being undertaken.

Once the tests were reliable and the reporting succinct, the tester extended the framework to run against different browsers. Cross-browser capability was a lower priority than it was in Team One.

Team Three Strategy

A third team are starting to integrate automated visual regression into their test strategy. They work on one of our authenticated banking channels, a relatively large application with a lot of different features.

This product has mature functional test automation. There are two suites that execute through the user interface: a large suite with mocked back-end functionality and a small suite that exercises the entire application stack.

For this product, implementing automated visual regression as a simple application tour is not enough. We want to examine the appearance of the application through different workflows, not just check the display of static content. Rather than repeating the coverage provided by the existing large test suite, the team extended the existing functional framework to add automated visual regression.

This suite is still under development and, of the three solutions, it is the largest, the slowest, and requires the most intervention by people to execute. The team created a configuration option to switch on screenshot collection as part of the existing functional tests. This generates a set of images that will either represent the 'before' or the 'after' state, depending on which version of the application is under test.

Separate to the collection of images is a comparison program that takes the two sets of screenshots and determines where there are differences. The large suite of functional tests means that there are many images to compare, so the developers came up with an innovative approach to perform these comparisons quickly. They first compare a hash string of the image then, in the event that these differ, they perform the pixel-by-pixel comparison to determine what has changed.
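A simplified sketch of that two-stage comparison is shown below: hash the image files first and only fall back to the more expensive pixel-by-pixel diff when the hashes differ. The hash function and diff routine here are my own illustration, not the team's implementation.

```python
import hashlib
from PIL import Image, ImageChops

def file_fingerprint(path):
    """A quick hash of the image file; identical files always share a fingerprint."""
    with open(path, "rb") as image_file:
        return hashlib.sha256(image_file.read()).hexdigest()

def compare_screenshots(before_path, after_path):
    """Return the changed region, or None when the screenshots match."""
    # Cheap check first: byte-identical files cannot differ visually
    if file_fingerprint(before_path) == file_fingerprint(after_path):
        return None
    # Only decode and diff the images when the fingerprints disagree
    before = Image.open(before_path).convert("RGB")
    after = Image.open(after_path).convert("RGB")
    if before.size != after.size:
        return (0, 0) + after.size  # treat a size change as a whole-image difference
    return ImageChops.difference(before, after).getbbox()
```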

In this team the automated visual regression has a fractured implementation. The collection and comparison happen separately. The focus remains on a single browser and the team continue to iterate their solution, particularly by improving the accuracy and aesthetics of their reporting.

Conclusion

We use automated visual regression to quickly detect changes in the appearance of our product. Different products will require different strategies, because we are looking to address different types of risk with this tool.

The three examples that I've provided, from real teams in my organisation, illustrate this variety in approach. We use visual regression to target:
  • cross-browser testing, 
  • specific user interface components, and
  • consistent display of functional workflows. 
As with any test automation, if you're looking to implement automated visual regression consider the problem that you're trying to solve and target your framework to that risk.

Thursday 12 October 2017

Identifying and influencing how people in your team contribute to test automation

This is a written version of my keynote at The Official 2017 European Selenium Conference in Berlin, Germany. 
If you'd prefer to watch the talk, it is available on the Selenium YouTube channel.


How do your colleagues contribute to test automation?

Who is involved in the design, development and maintenance of your test suites?

What would happen if people in your team changed how they participate in test automation?

How could you influence this change?

This article will encourage you to consider these four questions.

Introduction

When I was 13 years old I played field hockey. I have a lot of fond memories of my high school hockey team. It was a really fun team to be part of and I felt that I really belonged.

When I was 10 years old I really wanted to play hockey. I have clear memories of asking my parents about it, which is probably because the conversation happened more than once. I can remember how much I wanted it.

What happened in those three years, between being a 10 year old who wanted to play hockey and a 13 year old creating fond memories in a high school hockey team? Three things.

The first barrier to playing hockey was that I had none of the gear. I lived in a small town in New Zealand, both my parents were teachers, and hockey gear was a relatively large investment for my family. To participate in the sport I needed a stick, a mouth guard, shin guards, socks. Buying all this equipment gave me access to the sport.

Once I had the gear, I needed to learn how to use it. It turned out that my enthusiasm for getting onto the field did not translate into a natural ability. In fact, initially I was quite scared of participating. I had to learn to hit the ball and trap it, the different positions on the field, and what to do in a short corner. Learning these skills gave me the confidence to play the game, which meant that I started to enjoy it.

The third reason that I ended up in a hockey team when I was 13 years old was because that is where my friends were. As a teenager, spending time with my friends after school was excellent motivation to be part of a hockey team.

Access, skills, and motivation. These separated me at 10 years of age from me at 13 years of age. These separated a kid who really wanted to participate in a sport from someone who felt like they were part of a team.

This type of division is relatable. Team sports are an experience that many of us share, both the feelings of belonging and those of exclusion. Access, skills, and motivation also underpin other types of division in our lives.

Division in test automation

If you look at a software development team at a stand-up meeting, they are all standing together. People are physically close to each other, not on opposite sides of a chasm. But within that group are divisions, and different divisions depending on the lens that you use.

If we think about division in test automation for a software development team then, given what I’ve written about so far, you might imagine something like this:

A linear diagram of division

People are divided into categories and progress through these pens from left to right. To be successful in test automation I need access to the source code, I need the skills to be able to write code, and I need to be enthusiastic about it. Boom!

Except, perhaps it’s not that simple or linear.

What if I’m a tester who is new to a team, and I have the coding skills but not the permissions to contribute to a repository? What if I’m enthusiastic, but have no idea what I’m doing? It’s not always one, then the other, then the other. I am not necessarily going to acquire each attribute in turn.

Instead, I think the division looks something like this.

A Venn diagram of division

A Venn diagram of division by access, skills and motivation. An individual could have any one of these, any combination of the three, or none at all.

To make sense of this, I’d like to talk in real-life examples from teams that I have been part of, which feature five main characters:

Five characters of an example team

The gray goose represents the manager. The burgundy red characters represent the business: the dog is the product owner, the horse is the business analyst. Orange chickens are the developers, and the yellow deer are the testers.

Team One

Team One

This was an agile development team in a large financial institution. I was one of two testers. We were the only two members of the team who were committing code to the test automation suite. We are the two yellow deer right in the middle of the Venn diagram with access, skills and motivation.

The developers in this team could have helped us, they had all the skills. They didn’t show any interest in helping us, but also we didn’t give them access to our code. The three orange chickens at the top of the Venn diagram show that the developers had skills, but no motivation or access.

The business analyst didn’t even know that we had test automation, and there was no product owner in this team. However, there was a software development manager who was a vocal advocate for the test automation to exist, though they didn’t understand it. The burgundy red horse at the top right is outside of the diagram; the grey goose is in motivation alone.

The test automation that this farmyard created was low-level; it executed queries against a database. As we were testing well below the user interface, where the business felt comfortable, they were happy to have little involvement. The code in the suite was okay. It wasn’t as good as it could have been if the developers were more involved, but it worked and the tests were stable.

Team Two

Team Two

This was a weird team for me. You can see that as a tester, the yellow deer, I had the skills and motivation to contribute to test automation, but no access. I was brought in as a consultant to help the existing team of developers and business analysts to create test automation.

The developers and business analysts had varying skills. There were a couple of developers who were the main contributors to the suites. The business analysts had access and were enthusiastic, but they didn’t know how to write or read code. Then there were a couple of developers who had the access and skills, but firmly believed that test automation was not their job; they’re the chickens on the left without motivation.

This team built a massive backlog of technical debt in their test automation because the developers who were the main contributors preferred to spend their time doing development. The test code was elegant, but the test coverage was sparse.

Team Three

Team Three

In this team everyone had access to the code except the project manager, but skills and motivation created division.

I ended up working on this test suite with one of the business analysts. He brought all the domain and business knowledge, helped to locate test data, and made sure that the suites had strong test coverage across all of the peculiar scenarios. I brought the coding skills to implement the test automation.

In this team I couldn’t get any of the developers interested in automation. Half had the skills, half didn’t, but none of them really wanted to dive in. The product owner and the other BA who had access to the code were not interested either. They would say that they trusted what the two of us in the middle were producing, so they felt that they didn’t need to be involved.

I believe that the automation we created was pretty good. We might have improved with the opportunity to do peer review within a discipline. The business analyst reviewed my work, and I reviewed his, but we didn’t have deep cross-domain understanding.

Team Four

Team Four

This was a small team where we had no test automation. We had some unit tests, but there wasn’t anything beyond that. This meant that we did a lot of repetitive testing that, in retrospect, seems a little silly.

I was working with two developers. We had a business analyst and a product owner, but no other management alongside us. The technical side of the delivery team all had access to the code base and the skills to write test automation, but we didn’t have time or motivation to do so. The business weren’t pushing for it as an option.

You may have recognised similarities to your own situation in these experiences. Take a moment to consider your current team. Where would you put your colleagues in a test automation farmyard?

Contributors to test automation

Next, think about how people participate in test automation depending on where they fall within this model. Originally I labelled the parts of the diagram as access, skills, and motivation:


If I switch these labels to roles they might become:


A person who only has access to test automation is an observer. They're probably a passive observer, as they don't have the skills or the motivation to be more involved.

A person who only has skills is a teacher. They don't have access or motivation, but they can contribute their knowledge.

A person who only has motivation is an advocate. They're a source of energy for everyone within the team.

Where these boundaries overlap:


A problem solver is someone with access and skills who is not motivated to get involved with test automation day-to-day. These people are great for helping to debug specific issues, reviewing pull requests, or asking specific questions about test coverage. Developers often sit in this role.

Coaches have skills and motivation, but no access. They’re an outside influence to offer positive and hopefully useful guidance. If you consider a wider set of colleagues, you might treat a tester from another development team as a coach.

Inventors are those who have access and motivation, but no skills. These are the people who can see what is happening and get super excited about it, but don’t have the skills to participate directly. In my experience they’ll throw out ideas; some are crazy and some are genius. These people can be a source of innovation.

And in the middle are the committers. These are generally the people who keep the test suite going. They have access, skills, and motivation.

How people contribute to test automation

Changing roles

Now that you have labels for the way that your colleagues contribute to your test automation, consider whether people are in the "right" place.

I’m not advocating for everyone in your team to be in the middle of the model. Being a committer is not necessarily a goal state; there is value in having people in different roles. However, there might be specific people you can shift within the model in a way that would create a big impact for your team.

Consider how people were contributing to test automation in the four teams that I shared earlier.

Team One

In team one, all the developers were teachers. They had skills, but nothing else. In retrospect, if I were choosing one thing to change, we should have given at least one of the developers access to the code so that they could step into a problem-solver role and provide more hands-on help.

Team Two

In team two, I found it frustrating to coach the team without being able to directly influence the code. In retrospect, I could have fought harder for access to the code base. I think that as a committer I would have had greater impact on the prioritisation of testing work and the depth of test coverage provided by automation.

Team Three

In team three, it would have been good to have a peer reviewer from the same discipline. Bringing in a developer to look at the implementation of tests, and/or another business analyst to look at business coverage of the tests, could have made our test automation more robust.

Team Four

In team four, we needed someone to advocate for automation. Without a manager, I think the product owner was the logical choice for this. If they’d been excited about test automation and created an environment where it had priority, I think it would have influenced the technical team members towards automation.

Think about your own team. Who would you like to move within this model? Why? What impact do you think that would have?

Scope of change

To influence, you first need to think about what specifically you are trying to change. Let's step back out to the underlying model of access, skills, and motivation. These three attributes are not binary. If you are trying to influence change for a person in one of these dimensions, then you need to understand what exactly you are targeting, and why.

Access

What does it mean to have access to code? Am I granted read-only permissions to a repository, or can I edit existing files, or even create new ones? Does access include having licenses to tools, along with permission to install and set up a development environment locally?

In some cases, perhaps access just means being able to see a report of test results in a continuous integration server like Jenkins. That level of access may be enough to involve a business analyst or a product owner in the scope of automated test coverage.

When considering access, ask:
  • What are your observers able to see?
  • What types of problems can your problem solvers respond to?
  • How does access limit the ideas of your inventors?

Skill

Skill is not just coding skill. Ash Winter has developed a wheel of testing which I think is a useful prompt for thinking more broadly about skill:

Wheel of Testing by Ash Winter

Coding is one skill that helps someone contribute to test automation. Skill also includes test design, the ability to retrieve different types of test data, creating a strategy for test automation, or even generating readable test reports.

How do the skills of your teachers, coaches, and problem solvers differ? Where do you have expertise, and where is it lacking? What training should your team seek?

Motivation

Motivation is not simply "I want test automation" or "I don’t want test automation". There's a spectrum - how much does a person want it? You might have a manager who advocates for 100% automated test coverage, or a developer who considers anything more than a single UI test to be a waste of time.

How invested are your advocates? Should they be pushing for more or backing off a little?

How engaged are your coaches and inventors?

Wider Perspective

The other thing to consider is who isn’t inside the fences at all. The examples that I shared above featured geese, dogs, horses, chickens, deer. Who is not in this list? Are there other animals around your organisation who should be part of your test automation farmyard?

Test automation may be helpful for your operations team to understand the behaviour of a product prior to its release. If you develop executable specifications using BDD, or something similar, could they be shared as a user manual for call centre and support staff?

A wider perspective can also provide opportunities for new information to influence the design of your test automation. Operations and support staff may think of test scenarios that the development team did not consider.

Conclusion

Considering division helps us to feel empathy for others and to more consciously split ourselves in a way that is "right" for our team. Ask whether there are any problems with the test automation in your team. If there are no problems, do you see any opportunities?

Next, think about what you can do to change the situation. Raise awareness when people around you don't have access that would be useful to them. Support someone who is asking for training or time to contribute to automation. Ask or persuade a colleague to move themselves within the model.

Testers have a key skill required to be an agent of change: we ask questions daily.

How do your colleagues contribute to test automation? 

Who is involved in the design, development and maintenance of your test suites?

What would happen if people in your team changed how they participate in test automation?

How could you influence this change?

Sunday 17 September 2017

How to start a Test Coach role

I received an email this morning that said, in part:

I've been given the opportunity to trial a test coaching approach in my current employer (6-7 teams of 4-5 devs). 

I had a meeting with the head of engineering and she wants me to act almost like a test consultant in that I'm hands off. She expects me to be able to create a system whereby I ask the teams a set of questions that uncover their core testing problems. She's also looking for key metrics that we can use to measure success.

My question is whether you have a set of questions or approach that allows teams to uncover their biggest testing problems? Can you suggest reading material or an approach?

On this prompt, here is my approach to starting out as a Test Coach.

Avoid Assessment

A test coach role is usually created by an organisation who are seeking to address a perceived problem. It may be that the testers are slower to respond to change, or that testers are less willing to engage in professional development, or that the delivery team does not include a tester and the test coach is there to introduce testing in other disciplines. 

Generally, the role is created by a manager who sits beyond the delivery teams themselves. They have judged that there is something missing. I think it is a bad idea to start a test coach role with a survey of testing practices intended to quantify that judgement. You might represent a management solution to a particular problem that does not exist in the eyes of the team. 

Your first interaction will set the tone of future interactions. If you begin by asking people to complete a survey or checklist, you pitch your role as an outsider. Assessments are generally a way to claim power and hierarchy, neither of which will benefit a test coach. You want to work alongside the team as a support person, not over them.

Assessment can also be dangerous when you enter the role with assumptions of what your first actions will be. If you think you know where you need to start, it can be easy to interpret the results of an assessment so that it supports your own bias.

But if not by assessment, how do you begin?

Initiation Interviews

Talk to people. One-on-one. Give them an hour of your time and really listen to what they have to say. I try to run a standard set of questions for these conversations, to give them a bit of structure, but they are not an assessment. Example questions might include:

  • Whereabouts do you fit in the organisation and how long have you been working here?
  • Can you tell me a bit about what you do day-to-day?
  • What opportunities for improvement do you see?
  • What expectations do you have for the coaching role? How do you think I might help you?
  • What would you like to learn in the next 12 months?

I don't ask many questions in the hour that I spend with a person. Mostly I listen and take notes. I focus on staying present in the conversation, as my brain can tend to wander. I try to keep an open mind to what I am hearing, and avoid judgement in what I ask.

In this conversation I consciously try to adopt a position of ignorance, even if I think that I might know what a person does, what improvements they should be targeting, or where they should focus their own development. I start the conversation with a blank slate. Some people have said "This is silly, you know what I do!", to which I say "Let's pretend that I don't". This approach almost always leads me to new insights.

This is obviously a lot slower than sending out a bulk survey and asking people to respond. However, it gives you the opportunity as a coach to do several important things. You demonstrate to the people that you'll be working with that you genuinely want their input and will take the time to properly understand it. You start individual working relationships by establishing an open and supportive dialogue. And you give yourself an opportunity to challenge your own assumptions about why you've been brought into the test coach role.

Then how do you figure out where to start?

Finding Focus

When my role changed earlier in the year, I completed 40 one-on-one sessions. This generated a lot of notes from a lot of conversations, and initially the information felt a little overwhelming. However, when I focused on the opportunities for improvement that people spoke about, I started to identify themes.

I grouped the one-on-one discussions by department, then created themed improvement backlogs for each area. Each theme included a list of anonymous quotes from the conversations that I had, which fortuitously gave a rounded picture of the opportunities for improvement that the team could see.

I shared these documents back with the teams so that they had visibility of how I was presenting their ideas, then went to have conversations with the management of each area to prioritise the work that had been raised.

What I like about this approach is that I didn't have to uncover the problem myself. There was no detective work. I simply asked the team what the problems were, but instead of framing it negatively I framed it positively. What opportunities for improvement do you see? Generally people are aware of what could be changed, even when they lack the time or skills to drive that change themselves.

Then asking for management to prioritise the work allows them to influence direction, but without an open remit. Instead of asking "What should I do?", I like to ask "What should I do first?".

Measuring Success

The final part of the question I received this morning was about determining success of the test coach role. As with most measures of complex systems, this can be difficult.

I approach this by flipping the premise. I don't want to measure success, I want to celebrate it.

If you see something improve, I think part of the test coach role is to make sure that the people who worked towards that improvement are being recognised. If an individual steps beyond their comfort zone, call it out. If a team have collectively adopted and embedded a new practice, acknowledge it.

Make success visible.

I believe that people want to measure success when they are unable to see the impact of an initiative. As a test coach, your success is in the success of others. Take time to reflect on where these successes are happening and celebrate them appropriately.


That's my approach to starting in a test coach role. Avoiding assessment activities. Interviewing individuals to understand their ideas. Finding focus in their responses, with prioritisation from management. Celebrating success as we work on improvements together.

Thursday 24 August 2017

The pricing model for my book

I have encountered mixed opinion about whether I should continue to offer my book for free, along with curiosity as to whether I am earning any money by doing so. In this post I share some information about purchases of my book, rationale for my current pricing model, and my argument for why you should invest in a copy.

What do people pay?

At the time of writing this blog, my book has 1,719 readers.

On average, 1 in 5 people have chosen to pay for my book. Of those who pay, 53% pay the recommended price, 10% pay more, and 37% pay less.

Based on a rough estimate of the number of hours that I spent writing and revising the book, I calculate that my current revenue gives an income of $8 an hour.

I incurred minimal cost in the writing process: the LeanPub subscription, gifts for my reviewers and graphic designer, and a celebratory dinner after I published. If I subtract these expenses, my income drops to $5 an hour.

The minimum wage in New Zealand is $15.75 per hour. This means that based on current sales, I would have been better paid by working in a fast food outlet than in writing my book. I don't think that's a particularly surprising outcome for a creative pursuit.

Why offer it for free?

I wrote the book to collate ideas and experiences about testing in DevOps into a single place. I wanted to encourage people to develop their understanding of an emerging area of our industry. My primary motivation was to share information and to do that I think it needs to be accessible.

I have been told that offering something for free creates a perception that it is not valuable. That people are likely to download the book, but not to read it, as they haven't invested in the purchase. That people won't trust the quality of a book that a writer is willing to offer for nothing.

I weigh these arguments against someone being unable to access the information because they cannot afford it, or someone who is unwilling to enter their payment details online, or someone using cost as an excuse to avoid learning. These reasons make me continue to offer free download as an option.

I recognise that being able to set a minimum price of zero is a privilege. If I were writing for a living, then this would not be an option available to me. I do worry that my actions impact those writers who would be unable to do the same.

Why pay?

If my book is available for free, then why should you pay for a copy?

I believe that I have written a useful resource, but every writer is likely to feel that way about their book! Rather than just taking my word for it, I have collected reader testimonials from around the world including the USA, Netherlands, UK, India, Pakistan, Ecuador, and New Zealand.

The testimonials endorse the practical content, industry examples, and the breadth of my research in my book. They compliment my writing style as clear and easy to read. They state that there is a wide audience for the book - anyone who holds a role in a software development team.

Alongside their opinions, you can preview a sample chapter and the table of contents prior to purchase. I hope that all of this information creates a persuasive case for the value contained within the book itself.

If you believe that the book will be useful to you, then I believe that the recommended price is fair.

A Practical Guide to Testing in DevOps is now available on LeanPub.

Tuesday 15 August 2017

Encouraging testers to share testing

In theory, agile development teams will work together with a cross-functional, collaborative approach to allow work to flow smoothly. In practice, I see many teams with a delivery output that is limited by the amount of work that their testers can get through.

If testers are the only people who test, they can throttle their team. This can happen because the developers and business people who are part of the team are unwilling to perform testing activities. It can also be the tester who is unwilling to allow other people to be involved in their work. I have experienced both.

There is a lot of material to support non-testers in test activities. I feel that there is less to support the tester so that they feel happy to allow others to help them. I'd like to explore three things that could prevent a tester from engaging other people to help with test activities.

Trust

Do you trust your team? I hope that most of you will work in a team where your instinct is to answer that question with 'yes'. What if I were to ask, do you trust your team to test?

In the past, I have been reluctant to hand testing activities to my colleagues for fear that they will miss something that I would have caught. I worried that they would let bugs escape. I trusted them in general, but not specifically with testing.

On reflection, I think my doubt centered on their mindset.

In exploratory testing, often a non-tester will take a confirmatory approach. This means that if the product behaves correctly against the requirements, it passes testing. It is easy for anyone, regardless of their testing experience, to fall into a habit of checking off criteria rather than interrogating the product for the ways in which it might fail.

In test automation, it is usually the developer who has the skill to assist. My observation is that developers will generally write elegant test automation. They can also fall into the trap of approaching the task from a confirmatory perspective. Automation often offers an opportunity to quickly try a variety of inputs, but developers don't always look at it from this perspective.

If you share these doubts, or have others that prevent you from trusting your team to test, how could you change your approach to allow others to help you?

One thing that I have done, and seen others do, is to have short handovers at either end of a testing task. If a non-tester is going to pick up a test activity, they first spend ten minutes with the tester to understand the test plan and scope. When the non-tester feels that they have finished testing, they spend a final ten minutes with the tester to share their coverage and results.

These short handovers can start to build trust as they pass the testing mindset to other people in the team. As time passes, the tester should find that their input in these exchanges decreases to a point where the handovers become almost unnecessary.

Identity

If your role in the team is a tester, this identity can be tightly coupled to test activities. What will you do if you don't test? If other people can test, then why are you there at all?

I particularly struggled with this as a consultant. I would be placed into an agile development team as a tester, but often the most value that I could deliver would be in encouraging other people to test. This felt a bit like cheating the system by getting other people to do my role. I believe that a lot of people write about the evolving tester role because of this dissonance.

The clearest way that I have to challenge the tester identity in a constructive way is the concept of elastic role boundaries that I co-developed with Chris Priest. This concept highlights the difference between tasks and enduring commitments. We can be flexible in taking ownership of small activities while still retaining a specialist identity in a particular area of the team.

In simpler terms, a colleague helping with a single testing activity does not make them a tester. I do not see a threat to our role in sharing test activities. I believe that a tester retains their identity and standing in the team as a specialist even when they share testing work.

Vulnerability

The third barrier that I see is that testers are unwilling to ask for help. This isn't an attribute that is unique to testers; many people are unwilling to ask for help. In an agile team, failing to pull your colleagues in to help with testing may limit your ability to deliver.

Even when you think that it is clear that you need help, don't assume that your colleagues understand when you are under pressure. In my experience, people are often blissfully unaware of the workload of others in the team. Even when the day begins with a stand-up meeting, it can be difficult to determine how much work sits behind each task that a person has in progress. Make it clear that you would welcome assistance.

You may be reluctant to ask for help because you see it as a sign of weakness. It might feel like everyone else in the team can maintain a certain pace while you are always the person who needs a hand. In my experience, asking for help is rarely perceived as weakness by others. More often, I have seen teams respond with praise for bravery, eagerness to alleviate stress, and relief that they have been given permission to help.

It can also be difficult to share testing when you work in a wider structure alongside other agile teams that have the same composition as your team. You might believe that your team is the only one where the testers cannot handle testing themselves. In this situation, remember the ratio myth. There are a lot of variables in determining the ratio of testers to the rest of the team. Sometimes a little bit of development can spin off a lot of testing, or vice versa. Test your assumptions about how other teams are operating.

If you are vulnerable, you encourage others to behave in the same way. A tester sharing testing can encourage others to seek the help that they need too. If you're unwilling to ask someone for help directly, start by making it clear to your team what your workload is and extend a general invitation for people to volunteer to assist.

***

If you work in an agile development team and feel reluctant to share test activities with your colleagues, you might be creating unnecessary pressure on both yourself and your team by limiting the pace of delivery. I'd encourage you to reflect on what is preventing you from inviting assistance.

If you doubt that your colleagues will perform testing to your standard, try a handover. If you worry about the necessity of your role if you share testing, perhaps elastic role boundaries will explain how specialists can share day-to-day tasks and retain their own discipline. If you are reluctant to ask for help directly, start by making your workload clear so that your team have the opportunity to offer.

I encourage you to reflect on these themes and how they influence the way that you work, to get more testers to share testing.

Sunday 23 July 2017

Exploring the top of the testing pyramid: End-to-end and user interface testing

A few weeks ago I found myself in a circular conversation about end-to-end testing. I felt like I was at odds with my colleague, but couldn't figure out why. Eventually we realised that we each had a different understanding of what end-to-end meant in the testing that we were discussing.

That conversation led to this poll on Twitter:


The poll results show that roughly a quarter of respondents considered end-to-end testing to be primarily about the infrastructure stack, while the remaining majority considered it from the perspective of their customer workflow. Odds are that this ratio means I'm not the only person who has experienced a confusing conversation about end-to-end tests.

I started to think about the complexity that is hidden at the top of the testing pyramid. The model states that the smallest number of tests to automate are those at the peak, labelled as end-to-end (E2E) or user interface (UI) tests.

Testing Pyramid by Mike Cohn

These labels are used in the test pyramid interchangeably, but end-to-end and user interface testing are not synonymous. I can think of seven different types of automation that might be labelled with one or both of those terms:

Seven examples of user interface and/or end-to-end testing

The table above might be confusing without examples, so here are a few from my own experience.

1. User interface; Not full infrastructure stack; Not entire workflow

In my current organisation we have a single page JavaScript application that allows the user to perform complex interactions through modal overlays. We run targeted tests against this application, through the browser, using static responses in place of our real middleware and mainframe systems. 

This suite is not end-to-end in any sense. We focus on the front-end of our infrastructure and test pieces of our customer workflows. We could call this suite user interface testing.

2. User interface; Full infrastructure stack; Not entire workflow

I previously worked with an application that was consuming data feeds from a third party provider. We were writing a user interface that displayed the results of calculations that relied on these feeds as input. We had a suite of tests that ran through the user interface to test each result in isolation.

Multiple calculations made up a workflow, so the tests did not cover an entire customer journey. However they did rely on the third-party feed being available to return test data to our application, so they were end-to-end from an infrastructure perspective. In this team we used the terms user interface tests and end-to-end tests interchangeably when talking about this suite.

3. User interface; Not full infrastructure stack; Entire workflow

In my current organisation we have an online banking application written largely in Java. Different steps of a workflow, such as making a payment, each display separately on a dedicated page. We have a suite of tests that run through the common workflows to test that everything is connected correctly.

This suite executes tests through the browser against the deployed web application, but uses mocked responses instead of calling the back-end systems. It is a suite that targets workflows. We could call this a user interface or end-to-end suite.

4. User interface; Full infrastructure stack; Entire workflow

In the same product as the first example, there is another suite of automation that runs through the user interface against the full infrastructure stack to test the entire customer workflow. We interact with the application using test data that is provisioned across all the dependent systems and the tests perform all the steps in a customer journey. We could call this suite user interface or end-to-end testing.

5. No user interface; Full infrastructure stack; Not entire workflow

In my current organisation we have a team working on development of open APIs. There is no user interface for the API as a product. The team have a test suite that interacts with their APIs and relies on the supporting systems to be active: databases, middleware, mainframe, and dependencies to other APIs.

These tests are end-to-end from an infrastructure perspective, but their test scope is narrow. They interrogate successful and failing responses for individual requests, rather than looking at the sequence of activities that would be performed by a customer.

6. No user interface; Not full infrastructure stack; Entire workflow

Earlier in my career I worked in telecommunications. I used to install, configure, and test the call control and charging software within cellphone networks. We had an in-house test tool that we could use to trigger scripted traffic through our software. This meant that we could bypass the radio network and use scripts to test that a call would be processed correctly, from when a person dialed the number to when they ended the call, without needing to use a mobile device.

These automated tests were end-to-end tests from a workflow perspective, even though there was no user interface and we weren't using the entire network infrastructure.

7. No user interface; Full infrastructure stack; Entire workflow

The API team in the fifth example have a second automated suite where a single test performs multiple API requests in sequence, passing data between calls to emulate a customer workflow. These tests are end-to-end from both the infrastructure and the customer experience perspective.
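For readers who haven't seen this style of test, here is a simplified sketch of a workflow-level API test written with Python's requests library. The endpoints, payloads, and field names are invented for illustration and do not describe the team's actual APIs.

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical API under test

def test_payment_workflow():
    """A single test that chains API calls, passing data between the steps."""
    # Step 1: authenticate and capture a token for the rest of the workflow
    auth = requests.post(BASE_URL + "/sessions",
                         json={"username": "test-user", "password": "secret"})
    assert auth.status_code == 201
    headers = {"Authorization": "Bearer " + auth.json()["token"]}

    # Step 2: retrieve an account using the session established in step 1
    accounts = requests.get(BASE_URL + "/accounts", headers=headers)
    assert accounts.status_code == 200
    account_id = accounts.json()[0]["id"]

    # Step 3: make a payment from that account, completing the customer journey
    payment = requests.post(BASE_URL + "/accounts/{}/payments".format(account_id),
                            json={"amount": 10.00, "reference": "workflow test"},
                            headers=headers)
    assert payment.status_code == 201
```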


As these examples illustrate, end-to-end and user interface testing can mean different things depending on the product under test and the test strategy adopted by a team. If you work in an organisation where you label your test automation with one of these terms, it may be worth checking that there is truly a shared understanding of what your tests are doing. Different perspectives of test coverage can create opportunities for bugs to be missed.

Wednesday 12 July 2017

Test Automation Canvas

Test automation frameworks grow incrementally, which means that their design and structure can change over time. As testers learn more about the product that they are testing and improve their automation skills, this can be reflected in their code.

Recently I've been working with a group of eight testers who belong to four different agile teams that are all working on the same set of products. Though the testers regularly meet to share ideas, their test automation code had started to diverge. The individual testers had mostly been learning independently.

A manager from the team saw these differences emerging and felt concerned that the automated test coverage was becoming inconsistent between the four teams. The differences they saw in testing made them question whether there were differences in the quality of delivery. They asked me to determine a common approach to automated test coverage by running a one-hour workshop.

I am external to the team and have limited understanding of their context. I did not want to change or challenge the existing approach or ideas from this position, particularly given the technical skills that I could see demonstrated by the testers themselves. I suspected that there were good reasons for what they were doing, but perhaps not enough communication.

I decided that a first step would be to create an activity that would get the testers talking to each other, gather information from these conversations, then summarise the results to share with the wider team.

To do this, I thought a bit about the attributes of a test automation framework. The primary reason that I had been engaged was to discuss test coverage. But coverage is a response to risk and constraints, so I wanted to know what those were too. I was curious about the mechanics of the suites: dependencies, test data, source control, and continuous integration. I had also heard varying reports about who was writing and reviewing automation in each team, so I wanted to talk about engagement and maintenance of code.

I settled on a list of nine key areas:

  1. RISKS - What potential problems does this suite mitigate? Why does it exist?
  2. COVERAGE - What does this suite do?
  3. CONSTRAINTS - What has prevented us from implementing this suite in an ideal way? What are our known workarounds?
  4. DEPENDENCIES - What systems or tools have to be functional for this suite to run successfully?
  5. DATA - Do we mock, query, or inject? How is test data managed?
  6. VERSIONING - Is there source control? What is the branching model for this suite?
  7. EXECUTION - Is the suite part of a pipeline? How often does it run? How long does it take? Is it stable?
  8. ENGAGEMENT - Who created the suite? Who contributes to it now? Who is not involved, but should be?
  9. MAINTAINABILITY - What is the code review process? What documentation exists?
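
For anyone who prefers to see the prompts as a single structure, here is one way a completed canvas could be captured in code. The suite and all of its answers below are invented for illustration; in the workshop itself we used pen and paper.

  # Illustrative only: a completed canvas for a hypothetical API regression suite.
  api_regression_canvas = {
      "risks": ["Broken contracts between the web front end and back-end services"],
      "coverage": ["Happy-path customer workflows via REST endpoints"],
      "constraints": ["No dedicated test environment; test data is shared"],
      "dependencies": ["Stubbed payment provider", "Continuous integration build agents"],
      "data": "Injected via fixtures before each run",
      "versioning": "Git, with short-lived feature branches",
      "execution": "Runs in the pipeline on every merge; takes around 12 minutes",
      "engagement": "Created by one tester, now maintained by two",
      "maintainability": "Pull request review by another tester; README documentation only",
  }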

I decided to put these prompts into an A3 canvas format, similar to a lean canvas or an opportunity canvas. I thought that this format would create a balance between conversation and written record, as I wanted both to happen simultaneously.

Here is the blank Test Automation Canvas that I created:

A blank Test Automation Canvas

On the day of the workshop, the eight testers identified four separate automation suites under active development. They then self-selected into pairs, with each pair taking a blank canvas to complete.

It took approximately 20 minutes to discuss and record the information in the canvas. I asked them to complete the nine sections in the order that they are numbered in the earlier list: risks, coverage, constraints, dependencies, data, versioning, execution, engagement, and maintainability.

Examples of completed Test Automation Canvas

Then I asked the pairs to stick their completed canvas on the wall. We spent five minutes circling the room, silently reading the information that each pair had provided. As everyone had been thinking deeply about one specific area, this time was to switch to thinking broadly.

In the last 15 minutes, we finished by visiting each canvas in turn as a group. At each canvas I asked two questions to prompt discussion: is anything unclear, and is anything missing? This raised a few new ideas, and surfaced some misunderstandings between teams, so notes were added to the canvases.

After the workshop, I took the information from the canvases to create a single A3 summary of all four automation frameworks, plus the exploratory testing that is performed using a separate tool:

Example of Test Automation Summary

In the image above, each row is a different framework. The columns are rationale, coverage, dependencies, mechanics, and improvement opportunities. Within mechanics are versioning, review, pipeline, contributors and data.
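
If a team wanted to keep this summary in version control rather than on an A3 sheet, the same layout could be generated by a small script like the sketch below. The framework name and entries are placeholders, not the teams' real content; my actual summary was a hand-made document.

  import csv

  # One row per framework, mirroring the columns of the A3 summary.
  columns = ["framework", "rationale", "coverage", "dependencies", "mechanics", "improvements"]

  frameworks = [
      {
          "framework": "API regression",
          "rationale": "Catch broken service contracts early",
          "coverage": "Happy-path customer workflows",
          "dependencies": "Stubbed payment provider, CI build agents",
          "mechanics": "Git; pull request review; pipeline on merge; two contributors; injected data",
          "improvements": "Reduce run time to under ten minutes",
      },
      # ...one entry per automation suite, plus a row for exploratory testing.
  ]

  with open("test_automation_summary.csv", "w", newline="") as summary_file:
      writer = csv.DictWriter(summary_file, fieldnames=columns)
      writer.writeheader()
      writer.writerows(frameworks)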

I shared this summary image in a group chat channel for the testers to give their feedback. This led to a number of small revisions and uncovered one final misunderstanding. Now I think that we have a reference point that clearly states the collective understanding of test automation among the testers. The next step is to share this information with the wider team.

I hope that having this information recorded in a simple way will create a consistent basis for future iterations of the frameworks. If the testers respect the underlying rationale of the suite and satisfy the high-level coverage categories, then slight differences in technical implementation are less likely to create the perception that there is a problem.

The summary should also support testers to give feedback in their code reviews. I hope that it provides a reference to aid constructive criticism of code that does not adhere to the statements that have been agreed. This should help keep the different teams on a similar path.

Finally, I hope that the summary improves visibility of the test automation frameworks for the developers, business people, and managers who work in these teams. I believe that the testers are doing some amazing work and hope that this reference will promote their efforts.

Friday 23 June 2017

The Interview Roadshow

Recently I have been part of a recruitment effort for multiple roles. In May we posted two advertisements to the market: automation tester and infrastructure tester. Behind the scenes we had nine vacancies to fill.

This was the first time that I had been involved in recruiting for such a large number of positions simultaneously. Fortunately I was working alongside a very talented person in our recruitment team, Trish Burgess, who had ideas about how to scale our approach.

Our recruitment process for testers usually includes five steps from a candidate perspective:
  1. Application with CV and cover letter
  2. Screening questions
  3. Behavioural interview
  4. Practical interview
  5. Offer
We left the start of the process untouched. There were just over 150 applications for the two advertisements that we posted and, after reading through the information provided, we sent three screening questions to a group of 50 candidates. We asked for responses to these questions by a deadline, at which point we selected who to interview.

Usually we would run the two interviews separately. Each candidate would be requested to attend a behavioural interview first then, depending on the feedback from that, a practical interview as a second step. Scheduling for the interviews would be agreed between the recruiter, the interviewers, and the candidates on a case-by-case basis.

As we were looking to fill nine vacancies, we knew that this approach wouldn't scale to the number of people that we wanted to meet. We decided to trial a different approach.

The Interview Roadshow

Trish proposed that we run six parallel interview streams. To achieve this we would need twelve interviewers available at the same time - six behavioural and six practical - to conduct the interviews in pairs.

The first hurdle was that we didn't have six people who were trained to run our practical interview, as we usually ran them one by one. I asked for volunteers to join our interview panel and was fortunate to have a number of testers come forward. I selected a panel of eight where four experienced interviewers were paired with four new interviewers. The extra pair gave us cover in case of unexpected absence or last-minute conflicts.

We assembled a larger behavioural interview panel too, which gave us a group of 16 interviewers in total. Several weeks in advance of the interview dates, while the advertisements were still live, Trish booked three half-day placeholder appointments into all their diaries:
  • Friday morning 9.30am - 12pm
  • Monday afternoon 1pm - 3.30pm
  • Wednesday morning 9.30am - 12pm

In the weeks leading up to the interviews themselves, the practical interviewer pairs conducted practice interviews with existing staff as a training exercise for the new interviewers. We also ran a session with all the behavioural interviewers to make sure that there was a consistent understanding of the purpose of the interview and that our questions were aligned.

From the screening responses I selected 18 people to interview. We decided to allocate the candidates by their experience into junior, intermediate, and senior streams, then look to run a consistent interview panel for each group. This meant that the same people met all of the junior candidates, and similarly at other levels.

The easiest way to illustrate the scheduling is through an example.

For the first session on Friday morning we asked the candidates to arrive slightly before 9.30am. Trish and I met them in the lobby, then took them to a shared space in our office to give them a short explanation of how we were running the interviews. I also took a photo of each candidate, which I used later in the process when collating feedback.

Then we delivered the candidates to their interviewers. We gave the interviewers a few minutes together prior to the candidate arriving, for any last-minute preparation, so the interviews formally began ten minutes after the start of their appointment (at 9.40am).

Here is a fictitious example schedule for the first set of interviews:



The first interviews finished by 10.40am, at which point the interviewers delivered the candidate back to the shared space. We provided morning tea and they had 20 minutes to relax prior to their next interview at 11am. Trish and I were present through the break and delivered the candidates back to the interviewers.

Here is a fictitious example schedule for the second interviews:



The second interview session finished by 12pm, at which point the interviewers would farewell the candidate and collate their feedback from both sessions.

Retrospective Outcomes

The main benefit to people involved in the interview roadshow was that it happened within a relatively short time frame. Within four working days we conducted 36 interviews. As a candidate, this meant fast feedback on the outcome. As an interviewer, it meant less disruption of my day-to-day work.

We were pleasantly surprised that all 18 candidates accepted the interview offer immediately. We had assumed that some people would be unavailable, as there is usually a lot of back-and-forth when we schedule individual interviews. Trish had given an indication of the interview schedule when asking the screening questions. The set times seemed to motivate candidates to make arrangements so that they could attend.

By running two interviews in succession, the candidate only had to visit our organisation once. In our usual recruitment process a candidate might visit twice: the first time for a behavioural interview and the second for a practical interview. One trip means fewer logistical concerns around transport, childcare, and leaving their current workplace.

On the flip side, running two interviews in succession meant that people had to take more time away from their current role in order to participate. We had feedback from one candidate that it was a long time for them to spend away from the office.

There were three areas that we might look to improve.

Having six candidates together in the pre-interview briefing and refreshment break was awkward. These were people who didn't know each other, were competing for similar roles, and were in the midst of an intense interview process. The conversation among the group was often stilted or non-existent - though perhaps this is a positive thing for candidates who need silence to recharge?

In our usual process the hiring manager would always meet the person applying for the vacancy in their team. In this situation, we had individual hiring managers who were looking to fill multiple roles at multiple levels - junior, intermediate, and senior. With the interview roadshow approach, some successful candidates were proposed for a role where the hiring manager hadn't met them. Though this worked well for us, as there was a high degree of trust among the interviewers, it may not work in other situations.

The other thing that became difficult in comparison to our usual approach was dealing with internal applicants. We had multiple applications from within the organisation and it was harder to handle these discreetly with such a large panel of interviewers. The roadshow approach to interviewing also made the aspirations of these people more visible, though we tried to place them in rooms that were away from busy areas.

Overall, I don't think that we could have maintained the integrity of our interview process for such a large group of candidates by any other means. The benefits of scaling to an interview roadshow outweigh the drawbacks and it is something that I think we will adopt again in future, as required.

I personally had a lot of fun collating the candidate feedback, seeing which candidates succeeded, and suggesting how we could allocate people to teams. Though it is always hard to decline the candidates who are unsuccessful, I think we have a great set of testers coming in to join us as a result of this process and I'm looking forward to working with them.

Thursday 8 June 2017

Using SPIN for persuasive communication

I can recall several occasions early in my career where I became frustrated by my inability to persuade someone to my way of thinking. Reflecting on these conversations now, I can still bring to mind the feelings of agitation as I failed. I thought I had good ideas. I would make my case, sometimes multiple times, to no avail. I was simply not very good at getting my way.

The frustration came from my own failure, but I was also frustrated by seeing others around me succeed. They could persuade people. I couldn't figure out why people were listening to them, but not me. I was unable to spot the differences in our approach, which meant that I didn't know what I should change.

Some years later, in my role as a test consultant, I had the opportunity to attend a workshop on the fundamentals of sales. The trainer shared an acronym, SPIN, which is a well-known sales technique developed in the late 1980s.

SPIN was a revelation to me and I believe that it has significantly improved my ability to persuade. In this post I'll explain what the acronym stands for and give examples of how I apply SPIN in a testing context.

What is SPIN?

SPIN stands for situation, problem, implication, and need.

A SPIN conversation starts with explaining what you see. Describe the situation and ask questions to clarify where you're unsure. Avoid expressing any judgement or feelings - this should be a neutral account of the starting point.

Then discuss the problems that exist in the current state. Where are the pain points? Share the issues that you see and draw out any that you have missed. Try to avoid making the problems personal, as this part of the conversation can be derailed into unproductive ranting.

Next, think about what the problems mean for the business or the team. Consider the wider organisational context and question how these problems impact key measures of your success. Where is the real cost? What is the implication of keeping the status quo?

Finally, describe what you think should happen next. This is the point of the conversation where you present your idea, or ideas, for the way forward. What do you think is needed?

To summarise in simple terms, the parts of SPIN are:
  • Situation - What I see
  • Problem - Why I care
  • Implication - Why you should care
  • Need - What I think we should do

A SPIN example

My first workplace application of SPIN was at a stand-up meeting. I was part of a team that was theoretically running a fortnightly scrum process. In reality it was water-scrum-fall, where testing was flooded with work at the end of each sprint.

I had been trying, unsuccessfully, to change our approach to work. Prior to this particular stand-up I sat down and noted some thoughts against SPIN. With my preparation in mind, at the stand-up I said something like:

"It seems like the work isn't being delivered to testing until really late in the sprint, and then everything arrives at once. This means that we keep running out of time for testing, or we make compromises to finish on time. 

If we run out of time, then we miss our sprint goal. If we compromise on test coverage, then we all doubt what we are delivering. Both of these outcomes have a negative impact on our team morale. At the end of each fortnight I feel like we are all pretty flat. 

I'd like us to try having developers work together on tasks so that we push work through the process, rather than individual developers tackling many tasks in the backlog at once. That way we should see work arrive in testing more regularly through the sprint. What do you think?"

To my amazement, this was the beginning of a conversation where I finally convinced the developers to change how they were allocating work.

Did you spot the SPIN in that example?

  • Situation - What I see - It seems like the work isn't being delivered to testing until really late in the sprint, and then everything arrives at once.

  • Problem - Why I care - This means that we keep running out of time for testing, or we make compromises to finish on time. 

  • Implication - Why you should care - If we run out of time, then we miss our sprint goal. If we compromise on test coverage, then we all doubt what we are delivering. Both of these outcomes have a negative impact on our team morale. At the end of each fortnight I feel like we are all pretty flat. 

  • Need - What I think we should do - I'd like us to try having developers work together on tasks so that we push work through the process, rather than individual developers tackling many tasks in the backlog at once. That way we should see work arrive in testing more regularly through the sprint.

In the first few conversations where I applied SPIN, I had to spend a few minutes preparing. I would write SPIN down the side of a piece of paper and figure out what I wanted to say in each point. This meant that I could confidently deliver my message without feeling like I was citing the different steps of a sales technique.

Preparing for a conversation using SPIN

SPIN in a retrospective

As I became confident with structuring my own conversations using SPIN, I started to observe the patterns of success for others. Retrospectives provided a lot of data points for both successful and unsuccessful attempts at persuasion.

Many retrospective formats encourage participants to write their thoughts on sticky notes. When prompted with a question like "What could we do differently" I noticed that different people would usually note down their ideas using a single piece of SPIN. Where an individual consistently chose the same piece of SPIN in their note-taking, they created a particular perception of their contributions among the audience.

Let me explain this with an example. Imagine a person who takes the prompt "What could we do differently" and writes three sticky notes:
  1. We all work from home on Wednesday
  2. The air conditioning is too cold
  3. Our product owner was sick this week
All three are observations - the 'situation' part of SPIN - that describe what the person sees. Though they might be thinking more deeply about each, without any additional information the wider team are probably thinking "so what?"

Similarly, if your sticky notes are mostly problems, then your team might think that you're whiny. If your sticky notes are mostly solutions, then your team might think that you're demanding. In the absence of a rounded explanation your contribution can be misinterpreted.

I'm not suggesting that you write every retrospective sticky note using the SPIN format!

I use SPIN in a retrospective in two ways. Firstly to remind myself to vary the type of written prompt that I give myself when brainstorming on sticky notes, to prevent the perception that can accompany a consistent approach. Secondly to construct a rounded verbal explanation of the ideas that I have, so I have the best chance of persuading my team.

SPIN with gaps

There may be cases where you cannot construct a whole SPIN.

Generally I consider the points of SPIN with an audience in mind. When I think about implication, I capture reasons that the person, or people, that I am speaking to should care about what I'm saying. If I'm unable to come up with an implication, this is usually an indicator that I've picked the wrong audience. When I can't think of a reason that they should care, then I need to pick someone else to talk to.

Sometimes I can see a problem but I'm not sure what to do about it. When this happens, I use the beginning of SPIN as a way to start a conversation. I can describe the situation, problems, and implications, then ask others what ideas they have for improvement. It can be a useful way to launch a brainstorming activity.

Conclusion

SPIN is one facet of persuasive communication. It offers guidance on what to say, but not how to say it. In addition to using SPIN, I spent a lot of time considering the delivery of my arguments in order to improve the odds of people accepting my ideas.

Though I rarely have to write notes in the SPIN format as I did originally, I still use SPIN as a guide to structure my thinking. SPIN stops me from jumping straight to solutions and helps me to consider whether I have the right audience for my ideas. I've found it a valuable technique to apply in a variety of testing contexts.