Wednesday, 25 April 2018

How do you choose a test automation tool?

When you’ve agreed what you want to automate, the next thing you’ll need to do is choose a tool. As a tester, most of the conversations I’ve observed from a distance between managers and the people selecting a tool have focused on only one aspect.

Cost.

Managers do care about money, don’t get me wrong. But choosing a tool based on cost alone is foolish, and I believe that most managers will recognise this.

Instead of making cost the primary decision-making attribute, where possible defer the question of cost until you’re asking for investment. Your role as the testing expert in a conversation about choosing a tool is to advocate for the other aspects of the tool, beyond cost, that are important to consider.

Support

You want to choose a tool that is well supported so that you know help is available if you encounter problems.

If you’re purchasing a tool from a vendor, is it formally supported? Will your organisation have a contractual arrangement with the tool vendor in the event that something goes wrong? What type of response time can you expect when you encounter issues, or have questions? Is the support offered within your time zone? In New Zealand, there is rarely support offered within our working hours, which makes every issue an overnight feedback loop. This has a big impact in a fast-paced delivery environment.

If it’s an open source tool, or a popular vendor tool, how is it supported by documentation and the user community? When you search for information about the tool online, do you see a lot of posts? Are they from people across the world? Are their posts in your language? Have questions posted in forums or discussion groups been responded to? Does the provided documentation look useful? Can you gauge whether there are people operating at an expert level within the toolset, or is everything you see online about people experimenting or encountering problems at an entry level?

The other aspect of support is to consider how often you’ll need it. If the tool is well designed, hopefully it’s relatively intuitive to use. A lack of online community may mean that the tool itself is of a quality where people don’t need to seek assistance beyond it. They know what to do. They can reach an expert level using the resources provided with the tool.

Longevity

How long has the tool been around? There’s no right or wrong with longevity. If you’re extending a legacy application you may require a similarly aged test tool. If you’re working in a new JavaScript framework you may require a test tool that’s still in beta. But it’s important to go into either situation with eyes open about the history of the tool and its possible future.

Longevity is not just about the first release, but how the tool has been maintained and updated. How often is the tool itself changing? When did the last update occur? Is it still under active development? As a general rule, you probably don’t want to pick a tool that isn’t evolving to test a technology that is.
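
If the tool is open source and hosted on GitHub, one quick way to gauge whether it’s still evolving is to query the public releases API and look at the date of the most recent release. Here’s a minimal sketch in Python; the SeleniumHQ/selenium repository is just an example, so substitute the tool you’re evaluating:

    # Fetch the most recent release of a GitHub-hosted tool to gauge
    # how recently it has been updated. Uses only the standard library.
    import json
    import urllib.request

    def latest_release(owner: str, repo: str) -> dict:
        """Return metadata for the most recent release of a repository."""
        url = f"https://api.github.com/repos/{owner}/{repo}/releases/latest"
        with urllib.request.urlopen(url) as response:
            return json.load(response)

    if __name__ == "__main__":
        release = latest_release("SeleniumHQ", "selenium")  # example repository
        print(release["tag_name"], release["published_at"])

A long gap since the last release isn’t damning on its own, but it’s a useful prompt for further questions.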

Integration

Integration is a broad term and there are different things to consider here.

How does the tool integrate with the other parts of your technology? Depending on what you’re going to test, this may or may not be important. In my organisation, our web services automation usually sits in a separate code base from our web applications, but our Selenium-based browser tests sit in the same code base as the product. This means that it doesn’t really matter what technology our web services automation uses, but the implementation of our Selenium-based suite must remain compatible with the decisions and direction of developers and architects.
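
To make that concrete, here’s a minimal sketch of the kind of Selenium-based browser check that might sit alongside product code. It assumes the Python bindings and a locally available Chrome installation; the URL and locator are placeholders rather than anything from our actual suite:

    # A minimal Selenium-based browser check, written in Python.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_home_page_shows_heading():
        driver = webdriver.Chrome()  # assumes a local Chrome installation
        try:
            driver.get("https://example.com")  # placeholder URL
            heading = driver.find_element(By.TAG_NAME, "h1")
            assert heading.text  # the page renders a visible heading
        finally:
            driver.quit()

Because a test like this lives with the product, any change the developers make to page structure or locators has to be reflected in the test code too.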

What skills are required to use the tool and do they align with the skills in your organisation? If you have existing frameworks for the same type of testing that use a different tool, consider whether a divergent solution really makes sense. Pause to evaluate how the tool might integrate with the people in your organisation, including training and the impact of shifting expectations, before you look further afield.

Integration is also about whether the test tool will integrate with the development environment or software development process that the team use. If you want the developers in your team to get involved in your test automation, then this is a really important factor to consider when choosing a tool. The smaller the context switch for them to contribute to test code, the better. If they can use the same code repository, the same IDE on their machine, and their existing skills in a particular coding language, then you’re much more likely to succeed in getting them active in your solution. Similarly, if the tool can support multiple people working on tests at the same time, merge changes cleanly from multiple authors, and offer a mechanism for review or feedback, then these are all useful points to consider when weighing how the tool integrates with your team.
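
As an illustration of a small context switch, here’s a sketch of a web service check written with pytest and requests, the sort of test a developer could pick up from the same repository with familiar tooling. The base URL and endpoint are hypothetical:

    # A hypothetical web service check, runnable with pytest from the
    # same repository as the product code.
    import requests

    BASE_URL = "https://example.com/api"  # placeholder endpoint

    def test_health_endpoint_returns_ok():
        # Plain Python that a developer can read, run, and review with
        # their existing tools.
        response = requests.get(f"{BASE_URL}/health", timeout=5)
        assert response.status_code == 200

Tests written like this merge cleanly, suit a standard pull request review, and don’t ask developers to learn a separate toolset.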

Analyse

In a conversation with management about choosing a tool, your success comes from your ability to articulate why a particular tool is better than another, not just in its technical solution. Demonstrate that you’ve thought broadly about the implications of your choice, and how it will impact your organisation now and in the future.

It’s worth spending time to prepare for a conversation about tools. Look at all the options available in the bucket of possible tools and evaluate them broadly. Make cost the lesser concern by speaking passionately about the benefits of your chosen solution in terms of support, longevity and integration, along with any other aspects that may be a consideration in your environment.

You may not always get the tool that you’d like, but a decision made by a manager based on information you’ve shared, drawn from a well-thought-through analysis of the options available, is a lot easier to accept than one made blindly or solely on cost.

Wednesday, 11 April 2018

Setting strategy in a Test Practice

Part of my role is figuring out where to lead testing within my organisation. When thinking strategically about testing I consider:

  • how testing is influenced by other activities and strategies in the organisation,
  • where our competitors and the wider industry are heading, and
  • what the testers believe to be important.

I prefer to seek answers to these questions collaboratively rather than independently and, having recently completed a round of consultation and reflection, I thought it was a good opportunity to share my approach.

Consultation

In late March I facilitated a series of sessions for the testers in my organisation. These were opt-in, small group discussions, each with a collection of testers from different parts of the organisation.

The sessions were promoted in early March with a clear agenda. I wanted people to understand exactly what I would ask so that they could prepare their thoughts prior to the session if they preferred. While primarily an information gathering exercise, I also listed the outcomes for the testers in attending the sessions and explained how their feedback would be used.

***
AGENDA
Introductions
Round table introduction to share your name, your team, whereabouts in the organisation you work

Transformation Talking Points
8 minute time-boxed discussion on each of the following questions:
  • What is your biggest challenge as a tester right now?
  • What has changed in your test approach over the past 12 months?
  • How are the expectations of your team and your stakeholders changing?
  • What worries you about how you are being asked to work differently in the future?
  • What have you learned in the past 12 months to prepare for the future?
  • How do you see our competitors and industry evolving?
  
If you attend, you have the opportunity to connect with testing colleagues outside your usual sphere, learn a little about the different delivery environments within <organisation>, discover how our testing challenges align or differ, understand what the future might look like in your team, and share your concerns about preparation for that future. The Test Coaches will be taking notes through the sessions to help shape the support that we provide over the next quarter and beyond.
***

The format worked well to keep the discussion flowing. The first three questions targeted current state and recent evolution, to focus testers on present and past. The second three questions targeted future thinking, to focus testers on their contribution to the continuous changes in our workplace and through the wider industry. Every session completed on time, though some groups had more of a challenge with the 8 minute limit than others!

Just over 50 testers opted to participate, which is roughly 50% of our Test Practice. There were volunteers from every delivery area that included testing, which meant that I could create the cross-team grouping within each session as I intended. There were ten sessions in total. Each ran with the same agenda, a maximum of six testers, and two Test Coaches.

Reflection

From ten hours of sessions with over 50 people, there were a lot of notes. The second phase of this exercise was turning this raw input into a set of themes. I used a series of questions to guide my actions and prompt targeted thinking.

What did I hear? I browsed all of the session notes and pulled out general themes. I needed to read through the notes several times to feel comfortable that I had included every piece of feedback in the summary.

What did I hear that I can contribute to? With open questions, you create an opportunity for open answers. I filtered the themes to those relevant to action from the Test Practice, and removed anything that I felt was beyond the boundaries of our responsibilities or that we were unable to influence.

What didn't I hear? The Test Coaches regularly seek feedback from the testers through one-on-one sessions or surveys. I was curious about what had come up in previous rounds of feedback but wasn't heard in this round. These absences either reflected success in our past activities or indicated changes elsewhere in the organisation that should influence our strategy too.

How did the audience skew this? Because the sessions were opt-in, I used my map of the entire Test Practice to consider whose views were missing from the aggregated summary. There were particular teams who were represented by individuals and, in some instances, we may seek a broader set of opinions from that area. I'd also like to seek non-tester opinion, as in parts of the organisation there is shared ownership of quality and testing by non-testers that makes a wider view important.

How did the questions skew this? You only get answers to the questions that you ask. I attempted to consider what I didn't hear by asking only these six questions, but I found that the other Test Coaches, who didn't write the questions themselves, were much better at identifying the gaps.

From this reflection I ended up with a set of about ten themes that will influence our strategy. The themes will be present in the short-term outcomes that we seek to achieve, and the long-term vision that we are aiming towards. The volume of feedback against each theme, along with work currently in progress, will influence how our work is prioritised.

I found this whole process energising. It refreshed my perspective and reset my focus as a leader. I'm looking forward to developing clear actions with the Test Coach team and seeing more changes across the Test Practice as a result.