Monday, 25 April 2016

What problems do we have with our test automation?

One of the things I like about my role is that I get to work with different teams who are testing different products. The opportunities and challenges in each of our teams are very different, which gives me a lot of material for problem solving and experimenting as a coach.

I recently spoke with a colleague from another department in my organisation who wanted to know what problems we were currently experiencing with our test automation. It was something I hadn't had to articulate before. As I answered, I grouped my thoughts into four distinct contexts.

I'd like to share what we are currently struggling with to illustrate the variety of challenges in test automation, even within a single department of a single organisation.

Maintenance at Maturity

We have an automation suite that's over four years old. It has grown alongside the product under development, a single-page JavaScript web application. More than 50 people, both testers and developers, have contributed to the test suite.

This suite is embedded in the development lifecycle of the product. It runs every time code is merged into the master branch of the application. Testers and developers are in the habit of contributing code as part of their day-to-day activities and examining the test results several times daily.

In the past four months we have made a concerted effort to improve our execution stability and speed. We undertook a large refactoring exercise to get the tests executing in parallel; they now take approximately 30 minutes to run.

We want to keep this state while continuing to adapt our coverage to the growing application. We want to continue to be sensible about what we're using the tool to check, to continue to use robust coding practices that succeed when tests execute in parallel, and to continue to keep good logging messages and screenshots of failures that help us accurately identify the reason for each failure.
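
To make "robust coding practices" a little more concrete, here's a minimal sketch of one of them: capturing a screenshot when a test fails, with a file name that's unique per test so that parallel executions don't overwrite each other. It assumes JUnit 4 and Selenium WebDriver, and the screenshots directory and naming scheme are illustrative choices rather than a description of our actual suite.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    import org.junit.rules.TestWatcher;
    import org.junit.runner.Description;
    import org.openqa.selenium.OutputType;
    import org.openqa.selenium.TakesScreenshot;
    import org.openqa.selenium.WebDriver;

    // JUnit 4 rule: when a test fails, save a screenshot named after the test
    // class and method, so parallel executions never clash on file names.
    public class ScreenshotOnFailure extends TestWatcher {

        private final WebDriver driver;

        public ScreenshotOnFailure(WebDriver driver) {
            this.driver = driver;
        }

        @Override
        protected void failed(Throwable cause, Description test) {
            try {
                byte[] png = ((TakesScreenshot) driver).getScreenshotAs(OutputType.BYTES);
                Path dir = Files.createDirectories(Paths.get("screenshots"));
                Files.write(dir.resolve(test.getClassName() + "." + test.getMethodName() + ".png"), png);
            } catch (Exception e) {
                // Never let screenshot capture hide the original test failure
                System.err.println("Could not capture screenshot: " + e.getMessage());
            }
        }
    }

A test class would expose this with a rule, e.g. @Rule public ScreenshotOnFailure screenshots = new ScreenshotOnFailure(driver);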

There's no disagreement on these points. The challenge is in continued collective ownership of this work. It can be hard to keep the bigger picture of our automation strategy in sight when working day-to-day on stories. And it's easy to think that you can be lazy just once.

To help, we try to keep our maintenance needs visible. Every build failure will create a message in the testing team chat. All changes to the test code go through the same code review mechanism as changes to the application code, but the focus is on sharing between testers rather than between developers.

Keeping shared ownership of maintenance requires ongoing commitment from the whole team.

Targeted Tools

Another team is working with a dynamic website driven by a content management system. They have three separate tools that each provide a specific type of checking:

  1. Scenario-based tests that examine user flows through specific functions of the site
  2. A scanner that checks different pages for specific technical problems, e.g. JavaScript errors (sketched in code after this list)
  3. A visual regression tool that performs image comparisons on page layout
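
To give a flavour of the scanner, here's a rough sketch that uses Selenium WebDriver's logging API to surface JavaScript errors from the browser console. The page list is hypothetical (a real scanner would crawl the CMS sitemap), and depending on your browser and driver versions you may also need to enable logging preferences in the capabilities.

    import java.util.Arrays;
    import java.util.List;
    import java.util.logging.Level;

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.logging.LogEntry;
    import org.openqa.selenium.logging.LogType;

    // Visit each page and report severe browser console entries,
    // which include uncaught JavaScript errors.
    public class JsErrorScanner {

        public static void main(String[] args) {
            // Hypothetical page list; a real scanner would crawl the CMS sitemap
            List<String> pages = Arrays.asList(
                    "https://example.com/",
                    "https://example.com/contact");

            WebDriver driver = new ChromeDriver();
            try {
                for (String url : pages) {
                    driver.get(url);
                    for (LogEntry entry : driver.manage().logs().get(LogType.BROWSER)) {
                        if (entry.getLevel().intValue() >= Level.SEVERE.intValue()) {
                            System.out.printf("JS error on %s: %s%n", url, entry.getMessage());
                        }
                    }
                }
            } finally {
                driver.quit();
            }
        }
    }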

The information provided by each tool is very different, which means that each will detect different types of potential problems. Together they provide a useful coverage for the site.

The scanner and visual regression tool are relatively quick to adapt to changes in the site itself. The scenario-based tests are targeted at very specific areas that rarely change. This means that the suite doesn't require a lot of maintenance.

Because the test code isn't touched often, it can be challenging when it does need to be updated. It's difficult to remember how the code is structured, how to run tests locally, and the idiosyncrasies in each of the three tools.

All of the tests are run frequently and are generally stable. When they do fail, it's often due to instability in the test environments. This means that when something really does go wrong, it takes time to work out what.

It sounds strange, but part of the challenge is debugging unfamiliar code and interpreting unfamiliar log output. It's our code, but we are hands-on with it so infrequently that there's a bit of a learning curve every time.

Moving to Mock

In a third area of my department we've previously done a lot of full stack automation. We tested through the browser-based front-end, but then went through the middleware, down to our mainframe applications, out to databases, etc.

To see a successful execution in this full stack approach we needed everything in our test environment to be working and stable, not just the application being tested. This was sometimes a difficult thing to achieve.

In addition to occasionally flaky environments, there were challenges with test data. The information in every part of the environment had to be provisioned and aligned. Each year all of the test environments go through a mandatory data refresh, which means starting from scratch.

We're moving to a suite that runs against mocked data. Now when we test the browser-based front-end, that's all we're testing. This has been a big change in both mindset and implementation. Over the past six months we've slowly turned a prototype into a suite that's becoming more widely adopted.
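
To illustrate the idea, though not our exact implementation: a stub server stands in for everything behind the front-end, and the application under test is pointed at it. This sketch uses WireMock, with a hypothetical endpoint and payload.

    import com.github.tomakehurst.wiremock.WireMockServer;

    import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
    import static com.github.tomakehurst.wiremock.client.WireMock.get;
    import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

    public class MockedBackend {

        public static void main(String[] args) {
            // The stub server stands in for middleware, mainframe and databases;
            // the front-end under test is configured to call this port instead.
            WireMockServer server = new WireMockServer(8089);
            server.start();

            // Hypothetical endpoint and payload, captured from a real interaction
            server.stubFor(get(urlEqualTo("/api/accounts/12345"))
                    .willReturn(aResponse()
                            .withHeader("Content-Type", "application/json")
                            .withBody("{\"id\":\"12345\",\"balance\":100.00}")));

            // ... browser-based tests of the front-end run here ...

            server.stop();
        }
    }

One attraction of this approach is that captured test data becomes a set of stubs that travel with the test code, rather than information that has to be provisioned across an entire environment.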

The biggest challenge has been educating the teams so that they feel comfortable with the new suite. How to install it, how to configure it, how to write tests, how to capture test data, how to troubleshoot problems, etc. It's been difficult to capture all of this information in a way that's useful, then propagate it through the teams who work with this particular product.

Getting people comfortable isn't just about providing information. It's been challenging to persuade key people of the benefits of switching tack, to offer one-on-one support to people as they learn, and to embed this change in multiple development teams.

Smokin'

The final area in which we're using automation is mobile testing. We develop four native mobile applications: two on iOS and two on Android. In the mobile team the pace of change is astonishing. The platforms shift underneath our product on a regular basis due to both device and operating system upgrades.

We've had various suites in our mobile teams but their shelf life seems to be very short. Rather than pour effort into maintenance we've decided on more than one occasion to start again. Now our strategy in this space is driven by quick wins.

We're working to automate simple smoke tests that cover at least a "Top 10" of the actions our users complete in each of the applications, according to our analytics. These tests will then run against a set of devices, e.g. four different Android devices for tests of an Android application.
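
As a sketch of what one of these smoke tests could look like, assuming Appium as the driver (the device name, build path, and element id below are all hypothetical):

    import java.net.URL;

    import io.appium.java_client.android.AndroidDriver;
    import org.openqa.selenium.By;
    import org.openqa.selenium.remote.DesiredCapabilities;

    // A "Top 10" smoke check: launch the app on a named device and confirm
    // that the landing screen renders its headline element.
    public class LaunchSmokeTest {

        public static void main(String[] args) throws Exception {
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("platformName", "Android");
            caps.setCapability("deviceName", "Galaxy S6");           // hypothetical device
            caps.setCapability("app", "/builds/latest/our-app.apk"); // hypothetical build path

            AndroidDriver driver = new AndroidDriver(
                    new URL("http://127.0.0.1:4723/wd/hub"), caps);
            try {
                // The element id is illustrative, not from our real application
                boolean landed = !driver.findElements(By.id("com.example.app:id/headline")).isEmpty();
                System.out.println(landed ? "PASS: app launched" : "FAIL: landing screen missing");
            } finally {
                driver.quit();
            }
        }
    }

The same test, pointed at different deviceName values, gives us coverage across the set of devices.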

Our challenge is alignment. We have four native mobile applications. At the moment the associated automation is in different stages of this boom-and-bust cycle. We have useful and fast feedback, but the coverage is inconsistent.

To achieve alignment, we need to invest time more evenly in developing and maintaining these suites. Given the rate of change, this is an ongoing challenge.


*****

That's where we're at right now. I hasten to add that there are a lot of excellent things happening with our automation too, but that wasn't the question I was asked!

I'm curious as to whether any of these problems resonate with others, how the challenges you face differ, or whether you're trying solutions that differ from what we're attempting.

Thursday, 14 April 2016

An idea that didn't work

I've had a few encounters recently that reminded me how much people like to learn from the failures of others. So I thought I'd take this opportunity to share an idea that I thought was brilliant, then tell you why it instead turned out to be a real dud.

There is an expectation in my organisation that every tester will be capable of contributing to our automated checks. All of our automation frameworks are coded by hand; there are no recording tools or user-friendly interfaces to disguise the fact that code has to be written.

However, we hire people from a variety of backgrounds and experiences, which means that not everyone has the ability to write code. They are all willing to learn and generally enthusiastic about the prospect, but some of the testers don't have this particular skill right now.

Since I started with the organisation in a coaching role I've had one persistent request in our test team retrospectives. Whether I ask "What can we improve?" or "How can I help you in your role?" the answer is "I want to learn more about automation".

In December last year I decided to tackle this problem.

I sent out a recurring invite to an Automation Lab. For two hours each fortnight on a Friday afternoon all of the testers were invited to study together. The invitation read:

This session is optional for permanent staff to make effective use of your self-development time and have a forum to ask for help in reaching your automation-related goal. This is a fortnightly opportunity to bring your laptop into a quiet lab environment and work with support from your Testing Coach and peers. Whether you're learning Java, scripting Groovy, mastering mobile, or tackling SoapUI, it doesn't matter. You could use this lab to learn any language or tool that is relevant.

I ran the Automation Lab for five sessions, which spanned early January through to mid March. Despite there being 30 people in the test team, the largest Automation Lab was attended by just four. Though I was disappointed, I assumed that this low attendance was because people were learning via other means.

In late March we ran another test team retrospective activity. When I asked people what training they needed to do their roles, the overwhelming majority were still telling me "I want to learn more about automation".

As I read through the feedback I felt grumpy. I was providing this opportunity to study in the Automation Lab and it wasn't being used, yet people were still asking for me to help them learn! Then I started thinking about why this had happened.

I had assumed that the blockers to people in my team learning automation were time and support. The Automation Lab was a solution to these problems. I booked a dedicated piece of time and offered direct assistance to learn.

Unfortunately I had assumed incorrectly and solved the wrong thing.

As someone who learned to code at university, I haven't experienced online learning materials as a student. However, the plethora of excellent resources that are available made me think that finding instruction wasn't a problem. Now I realise that without prior knowledge of coding the resources aren't just available, they're overwhelming.

The real blocker to people in my team learning automation was direction. They didn't know where to begin, which resources were best for what they needed to know, or which aspects of coding they should focus on.

I had offered time and support without a clear direction. In fact, I had been intentionally broad in my invitation to accommodate the variety of interests in the team: "Whether you're learning Java, scripting Groovy, mastering mobile, or tackling SoapUI, it doesn't matter."

I've changed tack.

We're about to embark on a ten-week 'Java for Newbies' course. I've had ten testers register as students and another four volunteer as teaching assistants. I'm creating the course material a fortnight ahead of the participants consuming it by pulling theory, exercises and homework from the numerous providers of free online training.

I hope that this new approach to teaching will result in better attendance. I hope that the ten testers who have registered will be a lot more confident in Java at the end of ten weeks. I hope that giving a structured introduction to the basics will lay the foundation for future independent learning.

Most of all, I hope that I've learned from the idea that didn't work. 

Saturday, 2 April 2016

Lightning Talks for Knowledge Sharing

The end of March is the halfway point of the financial year in my organisation. It's also the time of mid-year reviews. I don't place much emphasis on the review process that is dictated, but I do see this milestone as a great opportunity to reflect on what I've learned in the past six months and reassess what I'd like to achieve in the remainder of the year.

I wanted to encourage the testers in my team to undertake this same reflection and assessment. I was also very curious about what they would identify as having learned themselves in the past six months. I wanted to see where people were focusing their self-development time and understand what they had achieved.

Then I thought that if I was curious about what everyone was doing, perhaps the testers would feel the same way about each other. So I started to think about how we could effectively share what had been learned by everyone across the team, without overloading people with information.

One of the main facets of my role as a Testing Coach is to facilitate knowledge sharing. I like to experiment with different ways of propagating information like our pairing experiment, coding dojos, and internal testing conference. None of these felt quite right for what I wanted to achieve this time around. I decided to try a testing team lightning talks session.

I was first exposed to the idea of lightning talks at Let's Test Oz. The organisers called for speakers who would stand up and talk for up to five minutes on a topic of their choice. A couple of my colleagues took up this challenge and I saw first-hand the satisfaction they had from doing so. I also observed that the lightning talk format created a one hour session that was diverse, dynamic and fun.

So in December last year I started to warn the testers that at the end of March they would be asked to deliver a five minute lightning talk on something they had learned in the past six months. This felt like a good way to enforce some reflection and spread the results across the team.

I scheduled a half day in our calendars and booked a large meeting room. Three weeks out from the event I asked each tester to commit to a title for their talk along with a single sentence that described what they would speak about. I was really impressed by the diversity of topics that emerged, which reflected the diversity of activities in our testing roles.

One week ahead I asked those who wished to use PowerPoint slides to submit them so that I could create collated presentations. Only about half of the speakers chose to use slides, which I found a little surprising, but it helped create some variety in presentation styles.

Then the day prior I finalised catering for afternoon tea and borrowed a set of 'traffic lights' from our internal Toastmasters club so that each presenter would know how long they had spoken for.

On the day itself, 27 testers delivered lightning talks. 

The programme was structured into three one-hour sessions, each with nine speakers. This meant that there was approximately 50 minutes of talks, then a 10 minute break, repeated three times.

Having so many people present in such a short space of time meant that there was no time for boredom. I found the variety engaging and the succinct length kept me focused on each individual presentation. I also discovered a number of things that I am now curious to learn more about myself!

There were some very nervous presenters. To alleviate some of the stress, the audience was restricted to only the testing team and a handful of interested managers. I tried to keep the tone of the afternoon relaxed. I acted as MC and operated the lights to indicate how long people had been speaking for, keeping both tasks quite informal. 

There was also a good last-minute decision to put an animated image of people applauding in the PowerPoint deck so that it would display between each speaker. This reminded people to recognise each presenter and got a few giggles from the audience.

After the talks finished, I asked the audience to vote on their favourite topic and favourite speaker of the day. I also asked for some input into our team plan for the next six months with particular focus on the topics that people were interested in learning more about. Though I could sense that people were tired, it felt like good timing to request this information and I had a lot of feedback that was relatively cohesive.

Since the session I've had a lot of positive comments from the testers who participated that it was a very interesting way to discover what their peers in other teams had been learning about. I was also pleased to hear from some of those who were most reluctant to participate that many of their fears were unfounded. 

From a coaching perspective, I was really proud to see how people responded to the challenge of reflecting on their own progress, identifying a piece of learning that they could articulate to others in a short amount of time, then standing up and delivering an effective presentation.

I'll definitely be using the lightning talks format for knowledge sharing again.

Wednesday, 23 March 2016

How do you create a friendly conference?

One of the things that really impressed me about TestBash a few weeks ago was the warm and friendly tone of the event. I haven't been alone in my remarks on the environment created by Rosie Sherry, Vernon Richards and others.

As a conference organiser, I've been thinking a lot about what made each attendee feel this way. What were the specific actions that made such a noticeable difference when compared to other events? Here are five things that I've identified, which I'm hoping to try at the next event that I run.

Pre-Event Emails

Rosie sent an email per day to TestBash attendees in the week leading up to the event. These included the schedules for the workshops and conference, details of associated MeetUp events, invites to a slack channel, social media hashtags, information for a book swap, and a set of behavioural requests.

These emails created a sense of hype and expectation. They got everyone on the same page about the logistics of the event. They allowed people to start interacting online prior to the event itself, if they wished to do so.

These emails also served as the foundation for the community focus of the conference itself. In particular, here are the six things that Rosie asked from attendees in one of her pre-event messages:
  • I ask the old timers to reach out to those that look lost.
  • I ask for everyone to be brave and speak to anyone who looks like they could do with some company.
  • I ask you all to be human, kind, and helpful.
  • I ask you all to focus on making friends and having a good time.
  • I ask you all to create some incredible memories to remember, for yourselves and everyone else who attends.
  • I ask you all to speak to speakers. I can assure you they want to speak to you too.

This clearly set expectations for behaviour prior to the event.

Personalised Name Tags

The first task after registration on conference day was to create your own name tag using very large Ministry of Testing speech bubbles and a permanent marker. The name tags were handwritten. You could choose how to present yourself to others at the conference. First name only, full name or nickname. With or without social media details.

The name tags were large and colourful, which made them easy to locate on a person and easy to read. Though they clearly incorporated the Ministry of Testing brand, they didn't feel corporate. The variety of handwriting on display made something that is normally staid into something that felt informal and fun.

The name tags put a little piece of each personality on display.

No Tester Stands Alone

As the host of the day, Vernon Richards did a great job at specifically reminding people to interact with those who they didn't know. He reiterated the TestBash ethos from the pre-event emails that "No Tester Stands Alone".

This expectation was set at the start in his opening remarks and we were reminded of it prior to each break. Many of the people that I met were attending as the only tester from their company, yet I rarely encountered anyone by themselves.

Vernon demonstrated that you don't have to be afraid to remind people to be friendly.

Single Track

I had never been to a single track conference before. I was really amazed by how different it felt to be part of a large group of people who were all experiencing the same set of speakers. Having over 200 people in one place, focusing on one thing, for an entire day, creates a vibe in itself.

I also found that it changed the type of conversations I had in the breaks. Often at conferences the exchanges over tea and coffee are about what each person listened to during the last session. You'll hear snippets and impressions without really understanding what the other presentation was about. At TestBash we had all heard the same topics, so we had deeper conversations about the ideas, what could work in our own organisations, the doubts that we had, etc.

A set of shared experiences can offer opportunity to explore further together.

Open Social Events

There was a Pre-TestBash evening social and a Post-TestBash evening social. There was a Pre-TestBash run and a Post-TestBash brunch. All of these events were publicly advertised in the Software Testing Club MeetUp group.

Though they were located in a pub, the evening invitations weren't focused on drinking. They instead put emphasis on connecting with other testers. The invitations were open, inclusive, and there was room for everyone, even if it was sometimes slightly crowded!

By keeping those who wanted to socialise in one place, no one felt any fear of missing out. People were really present at these events instead of scanning social media in search of what else was happening.

There was a huge amount of support for people to connect outside of the event itself.


I'm hoping that these five ideas will help bring a little bit of the TestBash magic to my next testing event, and perhaps yours too?

Saturday, 19 March 2016

Use your stand up to make testing visible

Imagine that you're a tester in an agile stand up meeting with your development team. You have all gathered around your visual management board, each person is sharing an update and it's your turn to speak.

"Yesterday I tested story 19574 and didn't find any bugs." you say, moving that story to Done. 

"Today I'm just going to test story 19572." you continue, moving that story to Doing.

"There's nothing blocking me at the moment, so that's it." you finish.

Sound familiar?

Now consider whether, based on the information you've just provided, anyone in your development team actually understands what you've done and what you're planning to do.

If you were to ask them, they might say that you're testing. You are the tester. You just told them you were testing. They're not stupid. 

But what exactly does that mean? Have you been confirming acceptance criteria are met? Have you been investigating how this story handles error scenarios? Have you tested from the perspective of each of the business user personas? Have you tried some simple penetration tests to determine whether this story has covered the basics of security?

We can make testing more visible through the use of visual planning. Illustrating our thinking through visual test models or visual test ideas is a great way to engage our team in conversations about testing. But we can also make our testing more visible by being transparent in our stand ups. 

Start making a conscious effort to be more specific. Take the opportunity in your stand up to verbally create an image in someone's mind of the activities that you're undertaking. While you may not have time to tell the whole story, you can certainly share the blurb rather than the title alone.

Without this detail there's a good chance that what you do is a little bit of a mystery to your team. And if they don't really understand what you're doing, then it may be difficult for them to identify opportunities to collaborate with you or help to anticipate problems that could prevent you from completing a testing task.

Making testing visible is not just about changing the documents you produce. Take the time to prepare for your stand up so that you can briefly explain what you are actually doing. I'm sure that you are not "just testing".

Friday, 19 February 2016

How I explain software testing to people who don't work in IT

Can you think of a time when you've been frustrated while using your computer?

There are a lot of reasons that you might feel this way. A website that takes a long time to load. Error messages that stop you from doing what you want to do. Being unable to find that thing you need within all the options that are available in a menu.

Every time you get frustrated, you are encountering what software testers call a bug. Simply put, a bug is something that bugs you, and my job is to prevent bugs from reaching you.

To do this, I talk to the business people who ask for the software, the designers who decide what it will look like, and the analysts who specify how it should work. I think of things that I have seen cause problems in the past and try to prevent the same mistakes from being made again.

I sit alongside developers, the people who write the code that creates the software, and pick up problems while they work. Sometimes they miss a piece of what they're supposed to do, or I might disagree with the way that they've chosen to write something.

Once the software is finished, I check that it works. I think about all the ways that people might intentionally or accidentally break it, and make sure we handle that elegantly. I see if it works on all different sorts of computers and mobile devices. I check whether it is easy to use, responsive, and secure.

All through this process, I am testing. I test through conversation. I test by actually using the software on my computer. And I test by writing automated checks with tools that can run the same things over and over again, to make sure that specific bits of the software are working properly.

Testing helps to eliminate the things that bug you. It's an important part of creating software that people feel happy to use.

Thursday, 11 February 2016

Brave Questions

This year I did not make any New Year's resolutions. But something that I am trying to do more of is to ask brave questions. By brave questions, I mean the type of questions that make me uncomfortably nervous to voice. They are questions to request the things I really want to happen that are perhaps a little unusual or unexpected, so there's a higher probability of people saying 'no'.

I thought I'd share three brave questions I've asked in January in the hopes that it may inspire others to push themselves to ask for the things that they want too.

Asking to pair on a proposal

When proposals for the Conference of the Association for Software Testing (CAST) opened I didn't pay much attention. I'm travelling to Europe shortly for CopenhagenContext and TestBash, so it seemed a bit cheeky to submit for another overseas jaunt in August. I'm also trying to avoid paying to speak this year, and I knew that CAST don't cover speaker travel and accommodation.

Then a friend of mine decided to move to Vancouver in June, and suddenly a trip to Canada in August became a lot more appealing! Having decided to propose, I then started to think about topics that I could talk about and the work involved in creating a presentation from scratch. The more I thought, the larger the whole task seemed, and I spent a few days trying to decide what to do.

I've been facilitating a lot of pairing within my team over the past year, but haven't had the opportunity to do very much hands-on pairing myself. I've heard other presenters speak very positively about their experiences delivering paired presentations. I started to wonder whether pairing might be a way to submit to CAST and see my friend with just half of an idea and half of a talk.

I started to consider who to ask. Given that I'm based in New Zealand, I haven't had the opportunity to meet very many people in the wider testing community. This made it quite difficult to decide who to approach, as a lot of my opinions are formed by how people behave on Twitter!

I came to the conclusion that I wanted to pair with another woman in testing, who I felt that I would work well with, and who was a relatively inexperienced speaker with a lot of great ideas to share. I was also aware that I wanted to limit the financial burden on that person if our proposal was accepted, so I decided to look for someone who was based near Canada.

One person ticked all these boxes. Carol Brands.

After deciding to ask Carol, I let a period of days pass before making any kind of contact with her. When I did get in touch, due to the joy of timezones we played Skype tag a few times before we were both online simultaneously. Finally, it was time to ask:

[Image: a brave question in the real world]

It took almost 30 seconds of nervous anticipation for Carol's response to come through, and I was excited when she was instantly on board. We've since submitted a proposal and I'm keeping my fingers crossed that we are accepted to speak as I would like to meet Carol in person after all the discussions we've had so far!

I've found asking to pair on a proposal to be as fun and productive as it was advertised to be. It's been fantastic to have someone to bounce around ideas with. I've also enjoyed the collaborative writing process and the challenge that comes from being questioned on the way that I present my thoughts.

Asking for sponsorship for a conference

For the past three years we've run a WeTest Weekend Workshops conference in November. Pitched as an affordable testing conference for local talent, it's been a half-day event with a $20 ticket price that includes a multi-track speaking line-up, dinner and an event t-shirt.

[Image: three years of WeTest t-shirts]

We've had generous sponsorship throughout this time, particularly from Assurity, but we've also been operating on a shoestring. None of the speakers have been paid. In many cases, they've been out of pocket for their travel and accommodation. All of the speakers have been from New Zealand. Our format has been focused on discussion to share experiences rather than ideas that push at the boundary of our profession.

This year I decided that I'd like to do something different. I talked to my co-organisers, Aaron and Shirley, and pitched the idea of a national series of conferences. I'd like to create a repeatable full day event, with larger capacity, respectable ticket prices, and international speakers alongside the locals. I'd like us to be able to pay speakers, and cater food, and provide t-shirts, like a real testing conference. To do this, we need to ask for help.

So in January I approached four organisations to ask them to invest in this idea. We are asking for more money than we've ever sought for WeTest events in the past. From early December when all the organisers agreed that this was a good idea, it took me six weeks to muster the courage to even ask this question!

So far I've had a positive response from two of the four organisations, which is very exciting. I remain hopeful that we're going to end up with all four on board.

For me, asking this particular brave question has been a reminder that if you don't ask for something then you probably won't get it. And that sometimes when people say "no" it leaves you no worse off than you were originally.

Asking other people to propose to a conference

In mid-January, Rajesh Mathur, one of the organisers of Australian Testing Days, tweeted that their Call for Proposals had a relatively low response rate from within Australia.

I was surprised by this as I know a lot of great testers based in Australia. I was also a bit disappointed as I was planning to attend Australian Testing Days to hear some of these people and, based on this tweet, it seemed that they may not have proposed.

Over lunch, I was thinking about this situation and suddenly wondered whether I could do anything to help change it. If there were people who I wanted to hear from, perhaps I could ask them to submit a proposal?

I'm not part of the organising committee for this conference, and it seemed a bit of a bizarre notion to try and influence the program of an event based on what I was hoping it would be. On the other hand, the organisers can't choose people who haven't proposed, so perhaps I'd just be giving them more options?

I decided to send out an email to 30 Australian past and potential Testing Trapeze contributors to ask them to consider submitting. I made it really clear that I had no part in choosing speakers but that I'd love for them to put themselves forward. I shared the link to the CFP, then let Rajesh know on Twitter:

[Image: shoulder-tapping potential presenters for a CFP]

This was probably the most unusual brave question of January. Most of my associated nerves in asking this were because I wasn't sure what the reaction would be, from both the people who received the email and the organising committee.

Fortunately the responses that I did receive were all positive. Recently the programme was announced and I was delighted to see Catherine Karena in the line up, a speaker selected who I particularly wanted to listen to! I'm also keeping my fingers crossed for the selection of the Speak Easy slot, which is yet to be announced.

This experience has made me wonder why more people don't do this. If you feel like the events in your community could be better perhaps that's because you haven't made any efforts to change them? If there are people who you'd like to hear from, why not encourage them to propose? If there are topics that you'd like to learn about, why not suggest these to the conference organisers?

Brave questions make me uncomfortable. They are also how I stretch the bounds of what I believe to be possible. They are often a doorway into something new and different.

Increasingly I feel like the risk of asking these questions is not as great as my nerves would have me believe and that the opportunities behind each enquiry are worth the effort. Making a point of asking brave questions might be a way to expand your world in 2016. Good luck.