Monday, 25 April 2016

What problems do we have with our test automation?

One of the things I like about my role is that I get to work with different teams who are testing different products. The opportunities and challenges in each of our teams are very different, which gives me a lot of material for problem solving and experimenting as a coach.

I recently spoke with a colleague from another department in my organisation who wanted to know what problems we were currently experiencing with our test automation. It was something I hadn't had to articulate before. As I answered, I grouped my thoughts into four distinct contexts.

I'd like to share what we are currently struggling with to illustrate the variety of challenges in test automation, even within a single department of a single organisation.

Maintenance at Maturity

We have an automation suite that's over four years old. It has grown alongside the product under development, which is a single-page JavaScript web application. More than 50 people, including both testers and developers, have contributed to the test suite.

This suite is embedded in the development lifecycle of the product. It runs every time code is merged into the master branch of the application. Testers and developers are in the habit of contributing code as part of their day-to-day activities and examining the test results several times daily.

In the past four months we have made a concerted effort to improve our execution stability and speed. We undertook a large refactoring exercise to get the tests executing in parallel; they now take approximately 30 minutes to run.

We want to keep this state while continuing to adapt our coverage to the growing application. We want to continue to be sensible about what we're using the tool to check, to use robust coding practices that hold up when tests execute in parallel, and to keep good logging messages and screenshots of failures that help us accurately identify the causes.
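
To give a flavour of what these practices look like, here's a simplified sketch assuming Selenium WebDriver with JUnit 4: each test thread gets its own browser via a ThreadLocal, and a screenshot is captured when a check fails. The class and directory names are illustrative rather than taken from our actual suite.

    import org.junit.Rule;
    import org.junit.rules.TestWatcher;
    import org.junit.runner.Description;
    import org.openqa.selenium.OutputType;
    import org.openqa.selenium.TakesScreenshot;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    import java.io.File;
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class ParallelSafeTestBase {

        // Each test thread gets its own browser instance, so tests running
        // in parallel never share (and corrupt) each other's WebDriver state.
        private static final ThreadLocal<WebDriver> DRIVER =
                ThreadLocal.withInitial(FirefoxDriver::new);

        protected WebDriver driver() {
            return DRIVER.get();
        }

        // On failure, capture a screenshot named after the failing test so the
        // cause of a red build can be identified without re-running locally.
        @Rule
        public TestWatcher screenshotOnFailure = new TestWatcher() {
            @Override
            protected void failed(Throwable e, Description description) {
                File shot = ((TakesScreenshot) driver()).getScreenshotAs(OutputType.FILE);
                try {
                    Files.createDirectories(Paths.get("screenshots"));
                    Files.copy(shot.toPath(),
                            Paths.get("screenshots", description.getMethodName() + ".png"),
                            StandardCopyOption.REPLACE_EXISTING);
                } catch (IOException io) {
                    System.err.println("Could not save screenshot: " + io.getMessage());
                }
            }

            @Override
            protected void finished(Description description) {
                driver().quit();
                DRIVER.remove();
            }
        };
    }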

There's no disagreement on these points. The challenge is in continued collective ownership of this work. It can be hard to keep the bigger picture of our automation strategy in sight when working day-to-day on stories. And it's easy to think that you can be lazy just once.

To help, we try to keep our maintenance needs visible. Every build failure will create a message in the testing team chat. All changes to the test code go through the same code review mechanism as changes to the application code, but the focus is on sharing between testers rather than between developers.
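
To illustrate the chat notification, a simplified version of the idea might post to the chat tool's incoming webhook from a CI step, something like the sketch below; the webhook URL and payload format are placeholders rather than our real integration.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class BuildFailureNotifier {

        public static void main(String[] args) throws Exception {
            // Build detail passed in by the CI job, if any.
            String detail = args.length > 0 ? args[0] : "see the CI server for details";

            // Placeholder webhook URL: real chat tools provide their own
            // incoming webhook endpoints and expected payload formats.
            URL webhook = new URL("https://chat.example.com/hooks/testing-team");
            String payload = "{\"text\": \"Test build failed on master: " + detail + "\"}";

            HttpURLConnection connection = (HttpURLConnection) webhook.openConnection();
            connection.setRequestMethod("POST");
            connection.setRequestProperty("Content-Type", "application/json");
            connection.setDoOutput(true);

            try (OutputStream out = connection.getOutputStream()) {
                out.write(payload.getBytes(StandardCharsets.UTF_8));
            }

            System.out.println("Webhook responded with HTTP " + connection.getResponseCode());
        }
    }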

Keeping shared ownership of maintenance requires ongoing commitment from the whole team.

Targeted Tools

Another team is working with a dynamic website driven by a content management system. They have three separate tools that each provide a specific type of checking:

  1. Scenario-based tests that examine user flows through specific functions of the site
  2. A scanner that checks different pages for specific technical problems, e.g. JavaScript errors
  3. A visual regression tool that performs image comparisons on page layout

The information provided by each tool is very different, which means that each will detect different types of potential problems. Together they provide a useful coverage for the site.
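
To give a flavour of the second kind of check, here's a simplified sketch of scanning pages for JavaScript errors using Selenium WebDriver's browser log API; the page list and reporting style are illustrative rather than how our scanner actually works.

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.logging.LogEntries;
    import org.openqa.selenium.logging.LogEntry;
    import org.openqa.selenium.logging.LogType;
    import org.openqa.selenium.logging.LoggingPreferences;
    import org.openqa.selenium.remote.CapabilityType;
    import org.openqa.selenium.remote.DesiredCapabilities;

    import java.util.Arrays;
    import java.util.List;
    import java.util.logging.Level;

    public class JavaScriptErrorScan {

        public static void main(String[] args) {
            // Ask the browser to expose its console log to WebDriver.
            LoggingPreferences logPrefs = new LoggingPreferences();
            logPrefs.enable(LogType.BROWSER, Level.ALL);
            DesiredCapabilities caps = DesiredCapabilities.chrome();
            caps.setCapability(CapabilityType.LOGGING_PREFS, logPrefs);

            // Hypothetical list of pages to scan; a real scanner might crawl
            // the site or read the page list from the content management system.
            List<String> pages = Arrays.asList(
                    "https://www.example.com/",
                    "https://www.example.com/contact");

            WebDriver driver = new ChromeDriver(caps);
            try {
                for (String page : pages) {
                    driver.get(page);

                    // Uncaught JavaScript errors typically appear as SEVERE
                    // entries in the browser console log.
                    LogEntries logs = driver.manage().logs().get(LogType.BROWSER);
                    for (LogEntry entry : logs) {
                        if (entry.getLevel().intValue() >= Level.SEVERE.intValue()) {
                            System.out.println(page + " -> " + entry.getMessage());
                        }
                    }
                }
            } finally {
                driver.quit();
            }
        }
    }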

The scanner and visual regression tool are relatively quick to adapt to changes in the site itself. The scenario-based tests are targeted at very specific areas that rarely change. This means that this suite doesn't require a lot of maintenance.

Because the test code isn't touched often, it can be challenging when it does need to be updated. It's difficult to remember how the code is structured, how to run tests locally, and the idiosyncrasies in each of the three tools.

All of the tests are run frequently and are generally stable. When they do fail, it's often due to issues in the test environments. This means that when something really does go wrong, it takes time to work out what has happened.

It sounds strange, but part of the challenge is debugging unfamiliar code and interpreting unfamiliar log output. It's our code, but we are hands-on with it so infrequently that there's a bit of a learning curve every time.

Moving to Mock

In a third area of my department we've previously done a lot of full stack automation. We tested through the browser-based front-end, but then went through the middleware, down to our mainframe applications, out to databases, etc.

To see a successful execution in this full stack approach we needed everything in our test environment to be working and stable, not just the application being tested. This was sometimes a difficult thing to achieve.

In addition to occasionally flaky environments, there were challenges with test data. The information in every part of the environment had to be provisioned and aligned. Each year all of the test environments go through a mandatory data refresh, which means starting from scratch.

We're moving to a suite that runs against mocked data. Now when we test the browser-based front-end, that's all we're testing. This has been a big change in both mindset and implementation. Over the past six months we've slowly turned a prototype into a suite that's becoming more widely adopted.
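
To illustrate the general idea, here's a simplified sketch using WireMock to serve a canned response in place of the real middleware, so that a browser test only exercises the front-end; the endpoint, port and JSON body are invented for the example rather than taken from our suite.

    import com.github.tomakehurst.wiremock.WireMockServer;

    import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
    import static com.github.tomakehurst.wiremock.client.WireMock.get;
    import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

    public class MockedBackendExample {

        public static void main(String[] args) {
            // Stand up a local HTTP server that plays the role of the middleware.
            WireMockServer mockBackend = new WireMockServer(8089);
            mockBackend.start();

            // Hypothetical endpoint and payload: the front-end under test would be
            // configured to call http://localhost:8089 instead of the real services.
            mockBackend.stubFor(get(urlEqualTo("/api/accounts/12345"))
                    .willReturn(aResponse()
                            .withStatus(200)
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"id\":\"12345\",\"name\":\"Test Account\",\"balance\":100.00}")));

            // ... run browser-based tests against the front-end here ...

            mockBackend.stop();
        }
    }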

The biggest challenge has been educating the teams so that they feel comfortable with the new suite: how to install it, how to configure it, how to write tests, how to capture test data, how to troubleshoot problems, and so on. It's been difficult to capture all of this information in a way that's useful, then propagate it through the teams who work with this particular product.

Getting people comfortable isn't just about providing information. It's been challenging to persuade key people of the benefits of switching tack, to offer one-on-one support to people as they learn, and to embed this change in multiple development teams.

Smokin'

The final area where we are using automation is our mobile testing. We develop four native mobile applications: two on iOS and two on Android. In the mobile team the pace of change is astonishing. The platforms shift underneath our product on a regular basis due to both device and operating system upgrades.

We've had various suites in our mobile teams, but their shelf life seems to be very short. Rather than pour effort into maintenance, we've decided on more than one occasion to start again. Now our strategy in this space is driven by quick wins.

We're working to automate simple smoke tests that cover at least a "Top 10" of the actions our users complete in each of the applications, according to our analytics. These tests will then run against a set of devices, e.g. four different Android devices for tests of an Android application.
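
To give a flavour of what this could look like, here's a simplified sketch of a device-by-device smoke run using Appium's Java client; the device names, app path, and server URL are placeholders rather than our real configuration.

    import io.appium.java_client.android.AndroidDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;

    import java.net.URL;
    import java.util.Arrays;
    import java.util.List;

    public class AndroidSmokeRun {

        public static void main(String[] args) throws Exception {
            // Placeholder device pool; a real run would target the physical
            // devices or emulators available in the test lab.
            List<String> devices = Arrays.asList(
                    "Nexus 5", "Galaxy S6", "Moto G", "Nexus 7");

            for (String deviceName : devices) {
                DesiredCapabilities caps = new DesiredCapabilities();
                caps.setCapability("platformName", "Android");
                caps.setCapability("deviceName", deviceName);
                caps.setCapability("app", "/path/to/app-under-test.apk");

                AndroidDriver driver = new AndroidDriver(
                        new URL("http://127.0.0.1:4723/wd/hub"), caps);
                try {
                    // A "Top 10" smoke test would drive the most common user
                    // actions here, e.g. log in and view the landing screen.
                    System.out.println("Smoke test passed on " + deviceName);
                } finally {
                    driver.quit();
                }
            }
        }
    }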

Our challenge is alignment. We have four native mobile applications. At the moment the associated automation is in different stages of this boom-and-bust cycle. We have useful and fast feedback, but the coverage is inconsistent.

To achieve alignment, we need to be better at investing equal time in developing and maintaining each of these suites. Given the rate of change, this is an ongoing challenge.


*****

That's where we're at right now. I hasten to add that there are a lot of excellent things happening with our automation too, but that wasn't the question I was asked!

I'm curious as to whether any of these problems resonate with others, how the challenges you face differ, or whether you're trying solutions that differ from what we're attempting.

Thursday, 14 April 2016

An idea that didn't work

I've had a few encounters recently that reminded me how much people like to learn from the failures of others. So I thought I'd take this opportunity to share an idea that I thought was brilliant, then tell you why it instead turned out to be a real dud.

There is an expectation in my organisation that every tester will be capable of contributing to our automated checks. All of our automation frameworks are coded by hand; there are no recording tools or user-friendly interfaces to disguise the fact that code has to be written.

However, we hire people from a variety of backgrounds and experiences, which means that not everyone has the ability to write code. They are all willing to learn and generally enthusiastic about the prospect, but some of the testers don't have this particular skill right now.

Since I started with the organisation in a coaching role I've had one persistent request in our test team retrospectives. Whether I ask "What can we improve?" or "How can I help you in your role?" the answer is "I want to learn more about automation".

In December last year I decided to tackle this problem.

I sent out a recurring invite to an Automation Lab. For two hours each fortnight on a Friday afternoon, all of the testers were invited to study together. The invitation read:

This session is optional for permanent staff to make effective use of your self-development time and have a forum to ask for help in reaching your automation-related goal. This is a fortnightly opportunity to bring your laptop into a quiet lab environment and work with support from your Testing Coach and peers. Whether you're learning Java, scripting Groovy, mastering mobile, or tackling SoapUI, it doesn't matter. You could use this lab to learn any language or tool that is relevant.

I ran the Automation Lab for five sessions, which spanned early January through to mid March. Despite there being 30 people in the test team, the largest Automation Lab was attended by just four. Though I was disappointed, I assumed that this low attendance was because people were learning via other means.

In late March we ran another test team retrospective activity. When I asked people what training they needed to do their roles, the overwhelming majority were still telling me "I want to learn more about automation".

As I read through the feedback I felt grumpy. I was providing this opportunity to study in the Automation Lab and it wasn't being used, yet people were still asking for me to help them learn! Then I started thinking about why this had happened.

I had assumed that the blockers to people in my team learning automation were time and support. The Automation Lab was a solution to these problems. I booked a dedicated piece of time and offered direct assistance to learn.

Unfortunately I had assumed incorrectly and solved the wrong thing.

As someone who learned to code at university, I haven't experienced online learning materials as a student. However, the plethora of excellent resources available to people made me think that finding instruction wasn't a problem. Now I realise that without prior knowledge of coding, the resources aren't just available, they're overwhelming.

The real blocker to people in my team learning automation was direction. They didn't know where to begin, which resources were best for what they needed to know, or which aspects of coding they should focus on.

I had offered time and support without a clear direction. In fact, I had been intentionally broad in my invitation to accommodate the variety of interests in the team: "Whether you're learning Java, scripting Groovy, mastering mobile, or tackling SoapUI, it doesn't matter."

I've changed tack.

We're about to embark on a ten-week 'Java for Newbies' course. I've had ten testers register as students and another four volunteer as teaching assistants. I'm creating the course material a fortnight ahead of the participants consuming it by pulling theory, exercises and homework from the numerous providers of free online training.

I hope that this new approach to teaching will result in better attendance. I hope that the ten testers who have registered will be a lot more confident in Java at the end of ten weeks. I hope that giving a structured introduction to the basics will lay the foundation for future independent learning.

Most of all, I hope that I've learned from the idea that didn't work. 

Saturday, 2 April 2016

Lightning Talks for Knowledge Sharing

The end of March is the halfway point of the financial year in my organisation. It's also the time of mid-year reviews. I don't place much emphasis on the review process that is dictated, but I do see this milestone as a great opportunity to reflect on what I've learned in the past six months and reassess what I'd like to achieve in the remainder of the year.

I wanted to encourage the testers in my team to undertake this same reflection and assessment. I was also very curious about what they themselves would identify as having learned in the past six months. I wanted to see where people were focusing their self-development time and understand what they had achieved.

Then I thought that if I was curious about what everyone was doing, perhaps the testers would feel the same way about each other. So I started to think about how we could effectively share what had been learned by everyone across the team, without overloading people with information.

One of the main facets of my role as a Testing Coach is to facilitate knowledge sharing. I like to experiment with different ways of propagating information like our pairing experiment, coding dojos, and internal testing conference. None of these felt quite right for what I wanted to achieve this time around. I decided to try a testing team lightning talks session.

I was first exposed to the idea of lightning talks at Let's Test Oz. The organisers called for speakers who would stand up and talk for up to five minutes on a topic of their choice. A couple of my colleagues took up this challenge and I saw first-hand the satisfaction they had from doing so. I also observed that the lightning talk format created a one hour session that was diverse, dynamic and fun.

So in December last year I started to warn the testers that at the end of March they would be asked to deliver a five-minute lightning talk on something that they had learned in the past six months. This felt like a good way to enforce some reflection and spread the results across the team.

I scheduled a half day in our calendars and booked a large meeting room. Three weeks out from the event I asked each tester to commit to a title for their talk along with a single sentence that described what they would speak about. I was really impressed by the diversity of topics that emerged, which reflected the diversity of activities in our testing roles.

One week ahead, I asked those who wished to use PowerPoint slides to submit them so that I could create collated presentations. Only about half of the speakers chose to use slides, which I found a little surprising, but this helped create some variety in presentation styles.

Then the day prior I finalised catering for afternoon tea and borrowed a set of 'traffic lights' from our internal Toastmasters club so that each presenter would know how long they had spoken for.

On the day itself, 27 testers delivered lightning talks. 

The programme was structured into three sessions of nine speakers, each scheduled for one hour. This meant that there was approximately 50 minutes of talks, then a 10-minute break, repeated three times.



Having so many people present in such a short space of time meant that there was no time for boredom. I found the variety engaging and the succinct length kept me focused on each individual presentation. I also discovered a number of things that I am now curious to learn more about myself!

There were some very nervous presenters. To alleviate some of the stress, the audience was restricted to only the testing team and a handful of interested managers. I tried to keep the tone of the afternoon relaxed. I acted as MC and operated the lights to indicate how long people had been speaking for, keeping both tasks quite informal. 

There was also a good last-minute decision to put an animated image of people applauding in the PowerPoint deck so that it would display between each speaker. This reminded people to recognise each presenter and got a few giggles from the audience.

After the talks finished, I asked the audience to vote on their favourite topic and favourite speaker of the day. I also asked for some input into our team plan for the next six months with particular focus on the topics that people were interested in learning more about. Though I could sense that people were tired, it felt like good timing to request this information and I had a lot of feedback that was relatively cohesive.

Since the session I've had a lot of positive comments from the testers who participated that it was a very interesting way to discover what their peers in other teams had been learning about. I was also pleased to hear from some of those who were most reluctant to participate that many of their fears were unfounded. 

From a coaching perspective, I was really proud to see how people responded to the challenge of reflecting on their own progress, identifying a piece of learning that they could articulate to others in a short amount of time, then standing up and delivering an effective presentation.

I'll definitely be using the lightning talks format for knowledge sharing again.