Wednesday, 27 June 2018

3 ways to define your role without a RACI matrix

I lead a team of Test Coaches, which is still a relatively unusual role in our industry. As a Test Coach I was part of a number of conversations about my role. As we scaled the team of coaches and I took a leadership position, these conversations became more frequent and reached a wider audience.

Uncertainty about roles can happen when there are:
  • new roles created, or 
  • multiple people performing the same role in different areas of an organisation, or 
  • a number of roles that interact closely with a shared purpose.

In all three situations there are alternatives to drawing up a traditional RACI matrix, which can feel like marking territory when executed poorly. I prefer collaborative approaches to role definition. This article explains how to use an interactive activity with practical scenarios, a role expectation mapping canvas, and a rainbow roles overview.

Interactive activity with practical scenarios

When our test coaching function grew there were questions about how we would operate. Some people had preconceptions about our role being similar to test management. Others had concerns that we might overstep boundaries toward people leadership or setting technical direction.

To kick off our expanded services we scheduled a session for all the testers in the organisation, along with their line managers and delivery leadership, to explain test coaching. The session was attended by over 120 people, and part of the agenda was an interactive activity to clarify and define our role.

We asked the audience to pair up with someone that they wouldn't normally work with, preferably someone from another department. We gave each pair a questionnaire that listed 13 scenarios and asked them to consider whether or not they would involve a Test Coach in each situation.

The scenarios in the activity were descriptive and practical. They were framed in the first person to encourage people to imagine themselves in each situation. Not just "bugs are being ignored", but "my bugs are being ignored".

Questionnaire for Test Coach role

Where a pair disagreed, they were allowed to record two responses; however, we encouraged discussion of differing opinions in the hope that a single answer could be found.

Once this task was complete, we handed out sets of coloured paper squares to each pair. Each set included green, yellow, and pink sheets to represent yes, maybe, and no respectively.

We used a set of slides to work through the questionnaire. Each question was displayed on a slide with a blue background, a neutral colour. I read out the question and each pair in the audience had to hold up a coloured card that corresponded to their answer. The card was green if they would engage a Test Coach, pink if they would not, and yellow if they were unsure.

Asking each pair to vote kept the audience engaged, and helped us to understand where there was confusion about our role. Where the responses were mixed, or where they consistently differed from what we expected, we gained insight into situations where there might be overlap with other roles or where we might have to clarify the intent of our involvement.

After each vote, the next slide had the same question with a background colour that revealed the answer: green, yellow or pink. I provided a short explanation of our rationale for involvement in each situation and gave the audience the opportunity to ask any questions that they had.

Slides that explain when to ask a Test Coach

This approach to defining our role was well received and created a clear set of expectations about how to interact with the new Test Coaches.

I have since applied the same type of exercise in a delivery tribe with five leadership roles. Both the team and the leaders were uncertain of who should be driving the outcome in some instances. I adapted the activity so that the columns of the questionnaire represented each role.


Questionnaire for support roles in agile delivery

The audience split into pairs who discussed a set of scenarios, choosing which role they would expect an action from in each situation. We used five different coloured cards for the audience to share their thoughts, one colour per role. With more roles in scope and a larger audience, there was robust conversation about some of these scenarios and the activity took longer than when it was focused on a single role.

Role expectation mapping canvas

The test coaches in my organisation are part of a wider team that includes technical and practices coaching. As the team grew there was confusion, particularly from newer coaches, about our responsibilities. During a team offsite day, we split into our specialist coaching disciplines for an activity to discuss this.

My manager facilitated the activity using the role expectation mapping canvas developed by Tony O'Halloran, based on ideas from Jimmy Janlen's description of role expectation mapping. Each subset of coaches in the team completed the activity together, i.e. all of the Test Coaches worked on a single canvas.

The canvas is separated into five parts:
  1. Roles that you work with
  2. Externally visible signs of success
  3. Things that you are responsible for
  4. Expectations of the people that you interact with
  5. Challenges that you face


Role expectation mapping canvas

As the manager of the test coaching function and the longest serving test coach, I decided to act as the scribe for this activity. I refrained from contributing to the conversation and instead focused on recording it. This was difficult, but my silence created space for the other test coaches to express their ideas, opinions and doubts.

As each test coach worked in a different neighbourhood, the purpose of the activity was to understand what was common. Across each department in our organisation there are consistent stakeholder roles, indicators of success, responsibilities, expectations and challenges. The canvas was useful to focus conversation and capture these, to create clarity and alignment in our function.

This canvas was designed to be applied in a delivery team to clarify relationships between different roles. Though I have yet to use it in that context, I can see how it might be successful.

Rainbow roles

A tribe that I interact with as a Test Coach has other test leadership roles at tribe and team level, along with delivery managers who have an interest in quality. This environment had created confusion within the test chapter of the tribe and conflicting direction for testing.

To resolve this, a meeting was scheduled for the people directly involved in test leadership. The facilitator drove the discussion towards a RACI matrix, which was distributed to the participants of the meeting after the session.

In this context the RACI matrix was usefully divisive for the leadership audience: it clearly showed which role was responsible for which task. However, I felt that we needed to present something else to the rest of the tribe, something that illustrated how we had agreed to work collaboratively towards a shared purpose rather than defining our separate territories.

I converted the information from the RACI matrix into a one-page overview with five streams that showed how the test leadership team would work together on common testing goals: strategy, standards, day-to-day support, recruitment, and personal development.

Rainbow Roles

It isn't miles away from the original, but the information is presented in simple language and intentionally illustrates our different contributions towards a single result. The rainbow colours started as a joke to further emphasise our aspirations of harmony, but became core to the identity of this diagram: rainbow roles.

It is not unusual for roles to become murky as an organisation evolves. In each example we found clarity through collaboration, using tools that emphasised how different people work together. I encourage you to treat confusion or conflict as an opportunity to positively engage with others about your role and theirs.

For another take on this topic, you might also want to read Riot's agile team leadership model: A story of challenging convention by Ahmed Sidky and Tricia Broderick.

Thursday, 7 June 2018

The world of test automation capability

Imagine a traditional capability matrix: a spreadsheet where each column is a skill, each row is the name of an individual, and the cells are populated by people rating their own competence. These spreadsheets are useful for seeing trends within a team, both strengths and opportunities for improvement, but there are limitations.

A matrix captures state, but not journeys. It shows skills, but not always how the skills are applied. It lists what is important in a specific domain, but not how other people are using different skills to solve the same problem. The limitations of the matrix can stifle thinking about capability where evolution, application, and diversity of skill are important.

We could try to capture this information in our spreadsheet, telling these stories by colour-coding or cross-referencing multiple tabs. Or we could think about capability from another angle.

The world of test automation capability is a model that illustrates the skills and experience of a test team using layers of the earth: core, mantle and crust. At the core are the testers with the least knowledge about coding and test automation frameworks. Paths to the surface travel through the mantle of programming languages towards the crust of applying coding skills.



Divisions exist within the layers. The mantle of programming languages might include Java, JavaScript, Visual Basic, Ruby, and Swift. The crust of test automation frameworks might include suites for web-based user interfaces, iOS and Android mobile applications, desktop applications, and APIs. Splitting each layer of the model proportionally allows us to see relative strengths and weaknesses in capability and tooling.

Divisions within the mantle and crust can be shared by different teams. In my organisation testers are split across four different departments. There is some overlap in the programming languages and automation frameworks in use across these different areas. In forming a strategy for training and framework evolution, we want to consider the distribution of testers within this model.

Part of the test coaching function in my organisation is to formalise paths between layers of the world, supporting testers to learn a programming language and apply that knowledge to the implementation of a test automation framework. These paths are created through training courses, coding dojos and knowledge sharing opportunities. Informal paths will exist where testers are self-taught or prototyping their own frameworks.

To illustrate, our current world of test automation capability can be visualised as:



The model shows testers at the core who have no coding or framework experience. In the mantle, the strongest programming language is Java, though capability also exists in four other languages. Our division in programming language is usually by department, with different areas adopting different solutions. At the crust, the largest population are working on WebDriver frameworks, which includes testers who work on web-based applications across all areas of the organisation. The crust also includes API frameworks as an emerging area, UFT as a legacy framework choice, and the mobile teams who have specialised tooling in Xcode and Espresso.

The world of test automation capability is a snapshot at a given point in time. Divisions within the model will shift as programming languages and automation frameworks evolve. People move through the model as their knowledge and experience develops. Paths will change as learning needs transform.

It is quite a different snapshot to a matrix though. It is one that includes relationships between training and skill level, between skills and frameworks, and between people across different departments. The evolution, application, and diversity of skill are easier to see.

Our Test Automation Strategy is formed within the context of this world, describing the forces that the Test Coaches will apply to the model to facilitate and direct movement. Our strategy differs by department, applying a different lens on the overarching visual.



Though rating our skill relative to others is useful, there is a richer narrative in mapping skills within an environment and then talking about how that environment is changing. The world of test automation capability is a new perspective that creates an opportunity for a new story.

Sunday, 20 May 2018

9 quick ideas for flexible testing

When you're a tester in an agile team, it can be easy to fall into a comfortable testing pattern. The transparency of daily stand-up and reflection of retrospectives can create an illusion of continuous improvement. These routines make us feel that we work in a flexible way but, if we dig a little deeper, we may not be as adaptable as we think.

If you think back to the last time that you felt uncomfortable at work, there's a strong probability that this feeling was associated with a change that you were experiencing. A flexible approach means that you are willing to accept and adopt change regularly, which means that you routinely experience discomfort.

When was the last time that you were surprised by the outcome of your retrospective or quizzed by a colleague in your stand-up? When was the last time that a stakeholder asked questions about your test artifacts? If you can't remember being challenged recently then you, and your team, might be stuck.

Being flexible is not just about activities or outcomes. Imagine that you used to plan your testing in an Excel spreadsheet and now you capture test ideas in a mind map. Does this make you a flexible tester? Not necessarily.

To be a versatile thinker you need to regularly inspect your own habits and create opportunities to collaborate with different people. If you cultivate flexibility as an attitude and make it part of the way that you work, you'll become more aware of how you think. You can change what you deliver, but a flexible tester will also challenge how they deliver.

How can you do that?

Here are nine quick, practical ideas that may help you develop your flexibility as a tester:

  1. Change the order of your test approach to break a routine.
  2. Ask for advice from a non-tester on how to diagnose a bug.
  3. Actively seek test ideas from non-testers outside your agile team e.g. UX, Ops.
  4. Copy the format of a colleague's test report to see your own results in a new light.
  5. Pair with a tester in another team to see a different test approach first-hand.
  6. Invite someone else to test your product, then debrief about what they found.
  7. Experiment with a tool that you haven't tried before.
  8. Take a second look at something that you thought was a bad suggestion.
  9. Ask for constructive feedback about your testing.

Being in an agile team does not guarantee that you are behaving in an agile way. Try to develop the habits that cultivate flexibility, so that you continue to learn and your testing continues to evolve.

Saturday, 12 May 2018

No unit tests? No problem!

A couple of weeks ago I created a Twitter poll about unit tests that asked:

"Is code without unit tests inherently bad code?" 
The conversations that emerged covered a number of interesting points, which challenged some of my assumptions about unit tests and how we evaluate code.

What is bad code?

When I framed my original question, I deliberately chose the phrase "inherently bad code". I was trying to emphasise that the code would be objectively bad. That the absence of unit tests would be a definitive sign, one of a set of impartial measures for assessing code.

In my organisation, most of our agile development teams include unit tests in their Definition of Done. Agile practitioners define a Definition of Done to understand what is required for a piece of work to be completed to an acceptable level of quality. In this context, the absence of unit tests is something that the agile development team have agreed would be bad.

A Definition of Done may seem like an unbiased measure, but it is still a list that is collectively agreed by a team of people. The code that they create isn't good or bad in isolation. It is labelled as good or bad based on the criteria that this group have agreed will define good or bad for them. The bad code of one team may be the good code of another, where the Definition of Done criteria differ between the two.

Is the code inherently bad when it doesn't do what the end user wanted? Not necessarily. What if the unexpected behaviour is still useful? There are a number of famous products that were originally intended for a completely different purpose e.g. bubble wrap was originally marketed as wallpaper [Ref].

I believe there is no such thing as inherently bad code. It is important to understand how the people who are interacting with your code will judge its value.

Why choose to unit test?

Many people include unit testing in their test strategy as a default, without thinking much about what type of information the tests provide, the practices used to create them, or the risks that they mitigate.

Unit tests are usually written by the same developer who is writing the code. They may be written prior to the code, in a test driven development approach, or after the code. Unit tests define how the developer expects the code to behave by coding the "known knowns" or "things we are aware of and understand" [Ref].

By writing unit tests the developer has to think carefully about what their code should do. Unit tests catch obvious problems with an immediate feedback loop to the developer, by running the tests locally and through build pipelines. When the developer discovers and resolves these issues as the code is being created, other people are free to discover unexpected or interesting problems via other forms of testing.

Where there are different types of automated testing, across integration points or through the user interface, unit tests offer an opportunity to exercise a piece of functionality at the source. This is especially useful when testing a function that behaves differently as data varies. Rather than running all of these variations through the larger tests, you may be able to implement these checks at a unit level.
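
As a rough sketch of what this can look like, the JUnit 5 parameterised test below exercises a hypothetical ShippingCalculator.costFor function across several data variations at the unit level; the class, method, and weight bands are all invented for illustration:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.CsvSource;

    class ShippingCostTest {

        // Each row is one data variation, checked at the source rather than
        // through a slower end-to-end test.
        @ParameterizedTest
        @CsvSource({
                "0.5, 5.00",   // light parcel, flat rate
                "2.0, 5.00",   // upper boundary of the flat rate band
                "2.1, 9.50",   // first step above the boundary
                "25.0, 30.00"  // heavy parcel, capped rate
        })
        void calculatesCostForEachWeightBand(double weightKg, double expectedCost) {
            assertEquals(expectedCost, ShippingCalculator.costFor(weightKg), 0.001);
        }
    }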

Unit tests require the developer to structure their code so that it is testable. These implementation patterns create code that is more robust and easier to maintain. Where a production problem requires refactoring of existing code, the presence of unit tests can make this a much quicker process by providing feedback that the code is still behaving as expected.

The existence of unit tests does not guarantee these benefits. It is entirely possible to have a lot of unit tests that add little value. The developer may have misunderstood how to implement the tests, worked in isolation, or designed their test coverage poorly. The merit of unit tests is often dependent on team culture and other collaborative development practices.

No unit tests? No problem!

Though there are some solid arguments for writing unit tests, their absence isn't always a red flag. In some situations we can realise the benefits of unit testing through other tools.

Clean implementation patterns that make code easier to maintain may be enforced by static analysis tools. These require code to follow a particular format and set of conventions, rejecting anything that deviates from the agreed norm before it is committed to the code base. These tools can even detect some of the same functional issues as unit tests.
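
To make that concrete, here is a small, invented Java fragment containing the kind of functional mistakes that many static analysis tools report at build time, before any unit test runs:

    public class DiscountBanner {

        public String bannerFor(String customerTier) {
            String message = null;

            if (customerTier == "gold") {          // strings compared with ==, not equals()
                message = "20% off today!";
            }

            return message.toUpperCase();          // possible null dereference for other tiers
        }
    }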

Rather than writing unit tests to capture known behaviour, you may choose to push this testing up into an integration layer. Where the data between dependent systems includes a lot of variation, shifting the tests can help to verify that the relationship between the systems is correct, rather than focusing on the individual components. There is a trade-off in complexity and time to execution, but simple integrated tests can still provide fast feedback to the developers in a similar fashion to unit testing.
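
A minimal sketch of such an integrated check, assuming a hypothetical pair of services in a test environment and using JUnit with Java's built-in HTTP client, might look like this:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    import org.junit.jupiter.api.Test;

    class PricingIntegrationTest {

        // The URLs are placeholders; the point is that the check exercises the
        // relationship between two deployed components rather than a single unit.
        @Test
        void quoteServiceAgreesWithCatalogueService() throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            HttpResponse<String> quote = client.send(
                    HttpRequest.newBuilder(URI.create("http://test-env.example/quotes?sku=ABC-123")).build(),
                    HttpResponse.BodyHandlers.ofString());
            HttpResponse<String> catalogue = client.send(
                    HttpRequest.newBuilder(URI.create("http://test-env.example/catalogue/ABC-123/price")).build(),
                    HttpResponse.BodyHandlers.ofString());

            // Both services should report the same price for the same item.
            assertEquals(catalogue.body(), quote.body());
        }
    }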

When dealing with legacy code that doesn't include unit tests, trying to retrofit this type of testing may not be worth the effort. Similarly if the code is unlikely to change in the future, the effort to implement unit tests might not provide a return through easy maintainability, as maintenance will not be required.

There may be a correlation between unit tests and code quality, but one doesn't cause the other. "Just because two trends seem to fluctuate in tandem ... that doesn’t prove that they are meaningfully related to one another" [Ref].

Wednesday, 25 April 2018

How do you choose a test automation tool?

When you’ve agreed what you want to automate, the next thing you’ll need to do is choose a tool. As a tester, most of the conversations that I observed from a distance, between managers and the people selecting a tool, focused on only one aspect.

Cost.

Managers do care about money, don’t get me wrong. But choosing a tool based on cost alone is foolish, and I believe that most managers will recognise this.

Instead of making cost the primary decision making attribute, where possible defer the question of cost until you’re asking for investment. Your role as the testing expert in a conversation about choosing a tool is to advocate for the other aspects of the tool, beyond cost, that are important to consider.

Support

You want to choose a tool that is well supported so that you know help is available if you encounter problems.

If you’re purchasing a tool from a vendor, is it formally supported? Will your organisation have a contractual arrangement with the tool vendor in the event that something goes wrong? What type of response time can you expect when you encounter issues, or have questions? Is the support offered within your time zone? In New Zealand, there is rarely support offered within our working hours, which makes every issue an overnight feedback loop. This has a big impact in a fast-paced delivery environment.

If it’s an open source tool, or a popular vendor tool, how is it supported by documentation and the user community? When you search for information about the tool online, do you see a lot of posts? Are they from people across the world? Are their posts in your language? Have questions posted in forums or discussion groups been responded to? Does the provided documentation look useful? Can you gauge whether there are people operating at an expert level within the toolset, or is everything you see online about people experimenting or encountering problems at an entry level?

The other aspect of support is to consider how often you’ll need it. If the tool is well designed, hopefully it’s relatively intuitive to use. A lack of online community may mean that the tool itself is of a quality where people don’t need to seek assistance beyond it. They know what to do. They can reach an expert level using the resources provided with the tool.

Longevity

How long has the tool been around? There’s no right or wrong with longevity. If you’re extending a legacy application you may require a similarly aged test tool. If you’re working in a new JavaScript framework you may require a test tool that’s still in beta. But it’s important to go into either situation with eyes open about the history of the tool and its possible future.

Longevity is not just about the first release, but how the tool has been maintained and updated. How often is the tool itself changing? When did the last update occur? Is it still under active development? As a general rule, you probably don’t want to pick a tool that isn’t evolving to test a technology that is.

Integration

Integration is a broad term and there are different things to consider here.

How does the tool integrate with the other parts of your technology? Depending on what you’re going to test, this may or may not be important. In my organisation, our web services automation usually sits in a separate code base to our web applications, but our Selenium-based browser tests sit in the same code base as the product. This means that it doesn’t really matter what technology our web services automation uses, but the implementation of our Selenium-based suite must remain compatible with the decisions and direction of developers and architects.

What skills are required to use the tool and do they align with the skills in your organisation? If you have existing frameworks for the same type of testing that use a different tool, consider whether a divergent solution really makes sense. Pause to evaluate how the tool might integrate with the people in your organisation, including training and the impact of shifting expectations, before you look further afield.

Integration is also about whether the test tool will integrate with the development environment or software development process that the team use. If you want the developers in your team to get involved in your test automation, then this is a really important factor to consider when choosing a tool. The smaller the context switch for them to contribute to test code, the better. If they can use the same code repository, the same IDE on their machine, and their existing skills in a particular coding language, then you’re much more likely to succeed in getting them active in your solution. Similarly, if the tool can support multiple people working on tests at the same time, merge cleanly from multiple authors, and offer a mechanism for review or feedback, these are all useful points to consider when integrating the tool with your team.

Analyse

In a conversation with management about choosing a tool, your success comes from your ability to articulate why a particular tool is better than another, not just in its technical solution. Demonstrate that you’ve thought broadly about the implications of your choice, and how it will impact your organisation now and in the future.

It’s worth spending time to prepare for a conversation about tools. Look at all the options available in the bucket of possible tools and evaluate them broadly. Make cost the lesser concern, by speaking passionately about the benefits of your chosen solution in terms of support, longevity and integration, along with any other aspects that may be a consideration in your environment.

You may not always get the tool that you’d like, but a decision made by a manager based on information you’ve shared, drawn from a well-thought-through analysis of the options available, is a lot easier to accept than a decision made blindly or based solely on cost.

Wednesday, 11 April 2018

Setting strategy in a Test Practice

Part of my role is figuring out where to lead testing within my organisation. When thinking strategically about testing I consider:

  • how testing is influenced by other activities and strategies in the organisation,
  • where our competitors and the wider industry are heading, and
  • what the testers believe to be important.

I prefer to seek answers to these questions collaboratively rather than independently and, having recently completed a round of consultation and reflection, I thought it was a good opportunity to share my approach.

Consultation

In late March I facilitated a series of sessions for the testers in my organisation. These were opt-in, small group discussions, each with a collection of testers from different parts of the organisation.

The sessions were promoted in early March with a clear agenda. I wanted people to understand exactly what I would ask so that they could prepare their thoughts prior to the session if they preferred. While primarily an information gathering exercise, I also listed the outcomes for the testers in attending the sessions and explained how their feedback would be used.

***
AGENDA
Introductions
Round table introduction to share your name, your team, whereabouts in the organisation you work

Transformation Talking Points
8 minute time-boxed discussion on each of the following questions:
  • What is your biggest challenge as a tester right now?
  • What has changed in your test approach over the past 12 months?
  • How are the expectations of your team and your stakeholders changing?
  • What worries you about how you are being asked to work differently in the future?
  • What have you learned in the past 12 months to prepare for the future?
  • How do you see our competitors and industry evolving?
  
If you attend, you have the opportunity to connect with testing colleagues outside your usual sphere, learn a little about the different delivery environments within <organisation>, discover how our testing challenges align or differ, understand what the future might look like in your team, and share your concerns about preparation for that future. The Test Coaches will be taking notes through the sessions to help shape the support that we provide over the next quarter and beyond.
***

The format worked well to keep the discussion flowing. The first three questions targeted current state and recent evolution, to focus testers on present and past. The second three questions targeted future thinking, to focus testers on their contribution to the continuous changes in our workplace and through the wider industry. Every session completed on time, though some groups had more of a challenge with the 8 minute limit than others!

Just over 50 testers opted to participate, which is roughly 50% of our Test Practice. There were volunteers from every delivery area that included testing, which meant that I could create the cross-team grouping within each session as I intended. There were ten sessions in total. Each ran with the same agenda, a maximum of six testers, and two Test Coaches.

Reflection

From ten hours of sessions with over 50 people, there were a lot of notes. The second phase of this exercise was turning this raw input into a set of themes. I used a series of questions to guide my actions and prompt targeted thinking.

What did I hear? I browsed all of the session notes and pulled out general themes. I needed to read through the notes several times to feel comfortable that I had included every piece of feedback in the summary.

What did I hear that I can contribute to? With open questions, you create an opportunity for open answers. I filtered the themes to those relevant to action from the Test Practice, and removed anything that I felt was beyond the boundaries of our responsibilities or that we were unable to influence.

What didn't I hear? The Test Coaches regularly seek feedback from the testers through one-on-one sessions or surveys. I was curious about what had come up in previous rounds of feedback that wasn't heard in this round. This reflected success in our past activities or indicated changes elsewhere in the organisation that should influence our strategy too.

How did the audience skew this? Because the sessions were opt-in, I used my map of the entire Test Practice to consider whose views were missing from the aggregated summary. There were particular teams who were represented by individuals and, in some instances, we may seek a broader set of opinions from that area. I'd also like to seek non-tester opinion, as in parts of the organisation there is shared ownership of quality and testing by non-testers that makes a wider view important.

How did the questions skew this? You only get answers to the questions that you ask. I attempted to consider what I didn't hear by asking only these six questions, but I found that the other Test Coaches, who didn't write the questions themselves, were much better at identifying the gaps.

From this reflection I ended up with a set of about ten themes that will influence our strategy. The themes will be present in the short-term outcomes that we seek to achieve, and the long-term vision that we are aiming towards. The volume of feedback against each theme, along with work currently in progress, will influence how our work is prioritised.

I found this whole process energising. It refreshed my perspective and reset my focus as a leader. I'm looking forward to developing clear actions with the Test Coach team and seeing more changes across the Test Practice as a result.

Friday, 16 February 2018

How do you decide what to automate?

When you start to think about a new automated test suite, the first conversations you have will focus on what you're going to automate. Whether it's your manager that is requesting automation or you're advocating for it yourself, you need to set a strategy for test coverage before you choose a tool.

There are a lot of factors that contribute to the decision of what to automate. If you try to decide your scope in isolation, you may get it wrong. Here are some prompts to help you think broadly and engage a wide audience.

Product Strategy

The strategy for the product under test can heavily influence the scope of the associated test automation solution.

Imagine that you're part of a team who are releasing a prototype for fast feedback from the market. The next iteration of the product is likely to be significantly different to the prototype that you're working on now. There is probably a strong emphasis on speed to release.

Imagine that you're part of a team who are working on the first release version of a product that is expected to become a flagship for your organisation over the next 5 - 10 years. The product will evolve, but in contrast to the first team it may be at a steadier pace and from a more solid foundation. There is probably a strong emphasis on technical design for future growth.

Imagine that you're part of a team who will deliver the first product in a line of new products that will all use the same technology and software development process. An example may be a program of work that switches legacy applications to a new infrastructure. The goal may be to shift each piece in a like-for-like manner. There is probably a strong emphasis on standard practices and common implementation.

What you automate and the way that you approach automation will vary in each case.

You need to think about, and ask questions about, what's happening at a high-level for your product so that you can align your test automation strategy appropriately. You probably won't invest heavily in framework design for a prototype. Your automation may be shallow, quick and dirty. The same approach would be inappropriate in other contexts.

At a slightly lower level of product strategy, which features in your backlog are most important to the business? Are they also the most important to your customers? There may be a list of features that drive sales and a different set that drive customer loyalty. Your automation may target both areas or focus only on the areas of the application that are most important to one set of stakeholders.

Can you map the upcoming features to your product to determine where change might happen in your code base? This can help you prepare appropriate regression coverage to focus on areas where there is a high amount of change.

Risk Assessment

When testers talk about the benefits of automation, we often focus on the number of defects it finds or the time that we’re saving in development. We describe the benefits in a way that matters to us. This may not be the right language to talk about the benefits of automation with a manager.

In the 2016 World Quality Report, the number one reason that managers invest in testing was to protect the corporate image. Testing is a form of reputation insurance. It’s a way to mitigate risk.

When you’re deciding what to automate, you need to think about the management perspective of your underlying purpose and spend your effort in automation in a way that directly considers risk to the organisation. To do this, you need to engage people in a conversation about what those risks are.

There are a lot of resources that can help support you in a conversation about risk. James Bach has a paper titled Heuristic Risk-Based Testing that describes an approach to risk assessment and offers common questions to ask. I've used this as a resource to help target my own conversations about risk.

Both the strategy and the risk conversation are removed from the implementation detail of automation, but they set the scene for what you will do technically and your ongoing relationships with management. These conversations help you to establish credibility and build trust with your stakeholders before you consider the specifics of your test automation solution. Building a strong foundation in these working relationships will make future conversations easier.

How do you consider what to automate at a more technical level?

Objective Criteria

As a general rule, start by automating a set of objective statements about your application that address the highest risk areas.

Objective measures are those that can be assessed without being influenced by personal feelings or opinion. They’re factual. The kind of measure that you can imagine in a checklist, where each item is passed or failed. For example, on a construction site a hard hat must be worn. When you look at a construction worker it is easy to determine whether they are wearing a hat or not.

On the flip side, subjective measures are where we find it difficult to articulate the exact criteria that we’re using to make a decision. Imagine standing in front of a completed construction project and trying to determine whether it should win the “Home of the Year”. Some of the decision making in this process is subjective.

If you don’t know exactly how you’re assessing your product, then a tool is unlikely to know how to assess the product. Automation in a subjective area may be useful to assemble a set of information for a person to evaluate, but it can’t give you a definitive result like an objective check will do.

A common practice to support conversations about objective criteria is the Gherkin syntax for creating examples using Given, When, Then. A team that develops these scenarios collaboratively, through practices like BDD or Three Amigos sessions, is likely to end up with a more robust set of automation than a person who identifies scenarios independently.
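
As a small, invented illustration of that syntax, a collaboratively agreed scenario might read:

    Feature: Overdraft protection

      Scenario: Withdrawal within the arranged overdraft is accepted
        Given an account with a balance of $50.00 and an arranged overdraft of $100.00
        When the customer withdraws $120.00
        Then the withdrawal is accepted
        And the account balance is -$70.00

Every line states something that can be checked as true or false, which is what makes the scenario a candidate for automation.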

Repetitive Tasks

Repetition can indicate an opportunity for automation, both repetition in our product code and repetition in our test activities.

In the product, you might have widgets that are reused throughout an application e.g. a paginated table. If you automate a check that assesses that widget in one place, you can probably reuse parts of the test code to assess the same widget in other places. This creates confidence in the existing use of the widgets, and offers a quick way to check behaviour when that same widget is used in future.
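
One way to get that reuse, sketched here with Selenium WebDriver in Java, is to wrap the widget in a small component class; the locators and method names are assumptions for the example:

    import java.util.List;

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;

    // A reusable check for a paginated table, wherever the widget appears.
    public class PaginatedTable {

        private final WebDriver driver;
        private final By tableRoot;

        public PaginatedTable(WebDriver driver, By tableRoot) {
            this.driver = driver;
            this.tableRoot = tableRoot;
        }

        public int visibleRowCount() {
            List<WebElement> rows = driver.findElement(tableRoot)
                    .findElements(By.cssSelector("tbody tr"));
            return rows.size();
        }

        public void goToNextPage() {
            driver.findElement(tableRoot)
                    .findElement(By.cssSelector(".pagination .next"))
                    .click();
        }
    }

A test for an orders page and a test for a reports page could then each construct a PaginatedTable with their own root locator and share the same checks.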

In our testing, you might perform repetitive data setup, execute the same tests against many active versions of a product, or repeat tests to determine whether your web application can be used on multiple browsers. These situations offer an opportunity to develop automation that can be reused for many scenarios.
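
For the browser example, a short sketch in Java with JUnit 5 and Selenium, assuming local Chrome and Firefox drivers are available, could repeat the same check once per browser:

    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.params.ParameterizedTest;
    import org.junit.jupiter.params.provider.ValueSource;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    class HomePageCrossBrowserTest {

        @ParameterizedTest
        @ValueSource(strings = {"chrome", "firefox"})
        void homePageLoadsInEachBrowser(String browser) {
            // The same test body runs once per supported browser.
            WebDriver driver = browser.equals("chrome") ? new ChromeDriver() : new FirefoxDriver();
            try {
                driver.get("https://example.org");          // placeholder URL
                assertTrue(driver.getTitle().length() > 0); // stand-in for a real assertion
            } finally {
                driver.quit();
            }
        }
    }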

By contrast, if we have a product that is unique, or an activity that’s a one-off, then we may not want to automate an evaluation that will only happen once. There are always exceptions, but as a general rule we want to seek out repetition.

There is benefit to collaboration here too, particularly in identifying repetitive process. Your colleagues may perform a slightly different set of tasks to what you do, which could mean that you're unaware of opportunities to automate within their area. Start conversations to discover these differences.

Opportunity Cost

When you automate it’s at the expense of something else that you could be doing. In 1998 Brian Marick wrote a paper titled When should a test be automated?. In his summary at the conclusion of the paper he says:

“The cost of automating a test is best measured by the number of manual tests it prevents you from running and the bugs it will therefore cause you to miss.”

I agree this is an important consideration in deciding what to automate and believe it is often overlooked.

When you decide what to automate, it’s important to discuss what proportion of time you should spend in automating and the opportunity cost of what you're giving up. Agreeing on how long you can spend in developing, maintaining, and monitoring your automation will influence the scope of your solution.

Organisational Change

So far we've considered five factors when deciding what to automate: product strategy, risk, objectivity, repetition, and opportunity cost. All five of these will change.

Product strategy will evolve as the market changes. Risks will appear and disappear. As the features of the product change, what is objective and repetitive in its evaluation will evolve. The time you have available, and your opportunity costs, will shift.

When deciding what to automate, you need to consider the pace and nature of your change.

Is change in your organisation predictable, like the lifecycle of a fish? You know what steps the change will take, and the cadence of it. In 2 weeks, this will happen. In 6 months, that will happen.

Or is change in your organisation unpredictable, like the weather? You might think that rain is coming, but you don’t know exactly how heavy it will be or when it will happen, and actually it might not rain at all.

The pace and nature of change may influence the flexibility of your scope and the adaptability of your implementation. You can decide what to automate now with a view to how regularly you will need to revisit these conversations to keep your solution relevant. In an environment with a lot of change, it may be less important to get it right the first time.

Collaboration

Deciding what to automate shouldn’t be a purely technical decision. Reach out for information from across your organisation. Establish credible relationships with management by starting conversations in their language. Talk to your team about objective measures and repetitive tasks. Discuss time and opportunity costs of automation with a wide audience. Consider the pace of change in your environment.

It is important to agree the scope of automation collaboratively.

Collaboration doesn’t mean that you call a meeting to tell people what you think. It isn't a one-way information flow. Allow yourself to be influenced alongside offering your opinion. Make it clear what you believe is suited to automation and what isn’t. Be confident in explaining your rationale, but be open to hearing different perspectives and discovering new information too.

You don't decide what to automate. The decision does not belong to you individually. Instead you lead conversations that create shared agreement. Your team decide what to automate together, and regularly review those choices.

Tuesday, 30 January 2018

A stability strategy for test automation

As part of the continuous integration strategy for one of our products, we run stability builds each night. The purpose is to detect changes in the product or the tests that cause intermittent issues, which can be obscured during the day. Stability builds give us test results against a consistent code base during a period of time that our test environments are not under heavy load.

The stability builds execute a suite of web-based user interface automation against mocked back-end test data. They run to a schedule and, on a good night, we see six successful builds:


The builds do not run sequentially. At 1am and 4am we trigger two builds in close succession. These execute in parallel so that we use more of our Selenium Grid, which can give early warning of problems caused by load and thread contention.

When things are not going well, we rarely see six failed builds. As problems emerge, the stability test result trend starts to look like this:


In a suite of over 250 tests, there might be a handful of failures. The number of failing tests, and the specific tests that fail, will often vary between builds. Sometimes there is an obvious pattern e.g. tests with an image picker dialog. Sometimes there appears to be no common link.

Why don't we catch these problems during the day?

These tests are part of a build pipeline that includes a large unit test suite. In the build that is run during the day, the test result trend is skewed by unit test failures. The developers are actively working on the code and using our continuous integration for fast feedback.

Once the unit tests are successful, intermittent issues in the user interface tests are often resolved in a subsequent build without code changes. This means that the development team are not blocked; once the build executes successfully they can merge their code.

The overnight stability build is a collective conscience for everyone who works on the product. When the build status changes state, a notification is pushed into the shared chat channel:


Each morning someone will look at the failed builds, then share a short comment about their investigation in a thread of conversation spawned from the original notification message. The team decide whether additional investigation is warranted and how the problem might be addressed.

It can be difficult to prioritise technical debt tasks in test automation. The stability build makes problems visible quickly, to a wide audience. It is rare that these failures are systematically neglected. We know from experience that ignoring the problems has a negative impact on cycle time of our development teams. When it becomes part of the culture to repeatedly trigger a build in order to get a clean set of test results, everything slows down and people become frustrated.

If your user interface tests are integrated into your pipeline, you may find value in adopting a similar approach to stability. We see benefits in early detection, raising awareness of automation across a broad audience, and creating shared ownership of issue resolution.

Thursday, 18 January 2018

Three types of coding dojo for test automation

The Test Coaches in my organisation provide support for our test automation frameworks. We create tailored training material, investigate new tools in the market, agree our automation strategy, monitor the stability of our suites, and establish practices that keep our code clean.

A lot of this work is achieved in partnership with the testers. We create opportunities for shared learning experiences. We facilitate agreement of standards and strategy that emerge through conversation. I like using coding dojos to establish these collaborative environments for test automation.

When I run a coding dojo, all the testers of a product gather with a single laptop that is connected to a projector. The group set a clear objective for a test automation task that they would like to complete together. Then everyone participates in the code being written, by contributing verbally and taking a turn at the keyboard. Though we do not adopt the strict principles of a coding dojo as they were originally defined, we operate almost exactly as described in this short video titled 'How to run a coding dojo'.

When using this format for three different types of test automation task - training, refactoring, and discovery - I've observed some interesting patterns in the mechanics of each dojo.

Three types of coding dojo for test automation


Training

Where there are many testers working on a product with multiple types of test automation, individuals may specialise, e.g. in user interface testing or API testing. It isn't feasible for every person to be an expert in every tool.

Often those who have specialised in one area are curious about others. A coding dojo in this context is about transfer of knowledge to satisfy this curiosity. It also allows those who don't regularly see the code to ask questions about it.

The participants vary in skill from beginner to expert. There may be a cluster of people at each end of the spectrum based on whether they are active in the suite day-to-day.

Training dojo

Communication in this dojo can feel like it has a single direction, from expert to learner. Though everyone participates, it is comfortable for the expert and challenging for the beginner. In supporting the people who are unfamiliar, the experts need to provide a lot of explanation and direction.

This can create quite different individual experiences within the same shared environment. A beginner may feel flooded by new information while an expert may become bored by the slow pace. Even with active participation and rotation of duties, it can be difficult to facilitate this session so that everyone stays engaged.
 

Refactoring

When many people contribute to the same automation suite, variation can emerge in coding practices through time. Occasionally refactoring is required, particularly where older code has become unstable or unreadable.

A coding dojo in this context is useful to agree the patterns for change. Rather than the scope and nature of refactoring being set by the first individual to tackle a particular type of problem, or dictated by a Test Coach, a group of testers collectively agree on how they would like to shape the code.

Though the skill of the participants will vary, they skew towards expert level. The audience for refactoring work is usually those who are regularly active in the code - perhaps people across different teams who all work with the same product.

Refactoring dojo

Communication in this dojo is convergent. There are usually competing ideas and the purpose of the session is to reach agreement on a single solution. As everyone participates in the conversation, the outcome will often include ideas from many different people.

In this example I've included one beginner tester, who might be someone new to the team or unfamiliar with the code. Where the context is refactoring, these people can become observers. Though they take their turn at the keyboard, the other people in the room become their strong-style pair, which means that "for ideas to reach the computer they must go through someone else's hands".

Discovery

As our existing suites become out-dated, or as people hear of new tools that they would like to experiment with, there is opportunity for discovery. An individual might explore a little on their own, then a coding dojo is an invitation to others to join the journey.

The participants of this dojo will skew towards beginner. The nature of prototyping and experimentation is that nobody knows the answer!

Discovery dojo

Communication in this dojo is divergent, but directed towards a goal. People have different ideas and want to try different things, but all suggestions are in the spirit of learning more about the tool.

The outcome of this dojo is likely to be greater understanding rather than shared understanding. We probably won't agree, but we'll all know a little bit more.

For training, refactoring, and discovery, I enjoy the dynamics of a coding dojo format. I would be curious to know how these experiences match your own, or where you've participated in a dojo for test automation work in a different context.

Thursday, 4 January 2018

30 articles for tech leaders written by women

When I was first promoted to a leadership role in tech, I looked for leadership resources that were written by women with advice targeted to a tech environment.

It took some time to discover these articles, which resonated with me and have each contributed to my leadership style in some way. They are written by a variety of women in the US, UK, Europe and New Zealand, and many have ties to the software testing community.

This list includes several themes: leadership, communication, learning, inclusion, and recruitment. I would love your recommendations for other articles that could be added.

Why we should care about doing better - Lynne Cazaly
Follow the leader - Marlena Compton
Dealing with surprising human emotions: desk moves - Lara Hogan
You follow the leader because you want to - Kinga Witko
Entering Groups - Esther Derby
Agile Managers: The Essence of Leadership - Johanna Rothman
Recovering from a toxic job - Nat Dudley
Yes, and... - Liz Keogh
Ask vs. Guess cultures - Katherine Wu
"I just can't get her to engage!" - Gnarly Retrospective Problems - Corinna Baldauf
Eight reasons why no one's listening to you - Amy Phillips
Don't argue with sleepwalkers - Fiona Charles
What learning to knit has reminded me about learning - Emily Webber
Effective learning strategies for programmers - Allison Kaptur
Five models for making sense of complex systems - Christina Wodtke
The comfort zone - Christina Ohanian
WTF are you doing? Tell your teams! - Cassandra Leung
We don't do that here - Aja Hammerly
Here's how to wield empathy and data to build an inclusive team - Ciara Trinidad
Tracking compensation and promotion inequity - Lara Hogan
The other side of diversity - Erica Joy
Hiring isn't enough - Catt Small
'Ladies' is gender neutral - Alice Goldfuss
Where does white privilege show up? - Kirstin Hull
Better hiring with less bias - Trish Khoo
1000 different people, the same words - Kieran Snyder
Why do women try to get ahead by pulling men down? - Missy Titus