Tuesday, 18 August 2015

Elastic Role Boundaries

How do you explain roles in an agile team?

In this short presentation Chris Priest and Katrina Clokie explain a model of Elastic Role Boundaries to highlight the difference between flexible ownership of small activities and the enduring commitment of a role.

This presentation stemmed from collaborative discussion at the fifth annual Kiwi Workshop for Software Testing (KWST5) with James Bach, Oliver Erlewein, Richard Robinson, Aaron Hodder, Sarah Burgess, Andy Harwood, Adam Howard, Mark Boyt, Mike Talks, Joshua Raine, Scott Griffiths, John Lockhart, Sean Cresswell, Rachel Carson, Till Neunast, James Hailstone, and David Robinson.



Friday, 7 August 2015

How do you become a great tester?

At the fifth annual Kiwi Workshop for Software Testing (KWST5) that happened earlier this week, James Bach asked a seemingly simple question during one of the open season discussions that I have been thinking about ever since.

"How do you know you're a good tester?"

Since the conference I've had a number of conversations about this, with testers and non-testers, in person and on Twitter. During these conversations I've found it much easier to think of ways to challenge the responses provided by others than to think of an answer to the original question for myself.

Today I asked a colleague in management how they knew that the testers within the team they managed were good testers. We spent several minutes discussing the question in person then, later in the morning, they sent me an instant message that said "... basically a good tester knows they are not a great tester." 

This comment shunted my thinking in a different direction. I agree that most of the people who I view as good testers have a degree of professional uncertainty about their ability. But I don't think that it is this in isolation that makes them a good tester, rather it's the actions that are driven from this belief. And this led me to my answer.

"How do you know you're a good tester?"

I know I'm a good tester because I want to become a great tester. In order to do this, I actively seek feedback on my contribution from my team members, stakeholders and testing peers. I question my testing and look for opportunities to improve my approach. I imagine how I could achieve better outcomes by improving my soft skills. I constantly look to learn and broaden my testing horizons.

What would you add?

Wednesday, 5 August 2015

Formality in open season at a peer conference

I attended the fifth annual Kiwi Workshop for Software Testing (KWST5) this week. Overall, I really enjoyed spending two days discussing the role of testing with a group of passionate people.

I took a lot from the content. But it's not what we discussed that I want to examine here, instead it's how we discussed it. As I was sharing the events of the final day with my husband he made a comment that troubled me. I took to Twitter to gauge how other people felt about his statement:


Since this tweet created a fair amount of discussion, I thought I would take the time to gather my thoughts and those from others into a coherent narrative, and share some of the ways in which I would approach the same situation differently next time.

Who was there?

I found the dynamic at this year's conference different to previous years. It felt like I was going to spend two days with my friends. Among the attendees, there were only two people who I had never met before. Most of the people in the room were frequent attendees at KWST, or frequent attendees at WeTest, or people who help create or contribute to Testing Trapeze, or current colleagues, or former colleagues, or simply friends who I regularly chat to informally outside of work.

This meant that it was the first year that I wasn't nervous about the environment. It was also the first year that I didn't feel nervous about delivering a talk. Though I was a little anxious about the content of my experience report, overall I would say that I felt relatively relaxed.

So, who exactly was in the room? James Bach, Oliver Erlewein, Richard Robinson, Aaron Hodder, Sarah Burgess, Andy Harwood, Adam Howard, Mark Boyt, Chris Priest, Mike Talks, Joshua Raine, Scott Griffiths, John Lockhart, Sean Cresswell, Rachel Carson, Till Neunast, James Hailstone, David Robinson and Katrina Clokie.

What happened?

I was the first speaker on the second day of the conference. My experience report was the first set in an agile context. The topic of the role of testing in agile had been touched on through the first day, but not explored.

I knew that there was a lot of enthusiasm for diving in to a real discussion, and was expecting a robust open season. In fact, the passion for the topic far exceeded my expectations. The particular exchanges that my husband questioned were in one particular period of the open season of my experience report.

Oliver proposed a model to represent roles in agile teams that kicked off a period of intense debate. During this time the only cards in use by participants were red, the colour that indicates the person has something urgent to say that cannot wait. I believe this spell of red cards exceeded 30 minutes, based on a comment from Mike who, when called as the subsequent yellow card, said "hooray, I've been waiting almost 40 minutes".

During this period of red cards, there were several occasions where multiple people who were waiting to speak were actively waving red cards. There were people interrupting one another. There were people speaking out of turn, without waiting to be called upon.

There were specific exchanges within this particular period that my husband questioned. I'm going to share four examples that relate specifically to my own behaviour.

The first happened relatively early in the red card period. Aaron made a statement that I started to respond to. When he attempted to interrupt my response, and he was not the first to interrupt me, I replied by raising my voice and telling him to shut up, so that I could finish what I was saying.

Perhaps halfway through the red card period, I had stopped responding to the people who were raising the red cards and the conversation was flowing among the participants themselves. Rich asked, in his role as facilitator, whether I agreed with what people were saying. I replied that no, I thought they were full of sh*t.

Near the end of the exchange I was asked whether I believed, on reflection, that I had behaved as a moron during the first experience I shared in my experience report. As a caveat, my interpretation of this comment has been refuted in subsequent Twitter discussions.

Finally, there was a case where three people were speaking at once and none had used a card. I interjected with a comment that "we have cards for a reason" to shut down their conversation.

Was it a problem?

At the time, I didn't think there was a problem. I shared James' view that "it was an intense exchange done in a good and healthy spirit". I found that red card period of open season incredibly challenging, but I never felt unsafe.

On reflection though, I do think there was a problem.

Why?

My behaviour during open season contributed to an atmosphere where people were talking over one another and behaving informally. The lack of discipline in the heat of these exchanges meant that several people in the room withdrew from the discussion.

This goes directly against the spirit of a peer conference, which is designed for everyone to be included equally. I now feel that I was part of an exchange that excluded those who were unable or unwilling to voice an opinion in this atmosphere.

What would I do differently?

In future, I think that I need to remember to respect the formality of a peer conference. I felt that I was among friends and, because of this, I brought an informal attitude to my exchanges.

I believe this reflection is shared by some others who were present. On Twitter, Aaron said "I shouldn't interact with people I know during formal exchanges differently, and open season is a formal exchange". Sean said "Maybe we need to be more conscious of those relationship biases we bring to peer conferences? I'm guilty of it".

In future, if I felt overwhelmed by interruptions, I would stop and ask for support from the facilitator. On reflection, the very first time I felt compelled to raise my voice and start participating in the culture of talking across people would have been a good opportunity to pause and reset expectations for the discussion.

What do other people think?

What do you think? How formal are your peer conferences? How formal should they be?

Thursday, 16 July 2015

Mobile Testing Taster

I recently ran a one-hour hands-on workshop to give a group of 20 testers a taste of mobile application testing. This mobile testing taster included brainstorming mobile-specific test ideas, sharing some mobile application testing mnemonics, hands-on device testing, and a brief demonstration of device emulation.

Preparation

In advance of the session, I asked the participants to bring along a smartphone or tablet, either Apple or Android, with the chosen test application installed. I selected a test product with an iOS app, an Android app, and a website optimised for mobile. I also asked those who could to bring a laptop, so that we could compare mobile and web functionality.

I set up the room so that participants were seated in small groups of 3 – 4 people. Each table had one large piece of flipchart paper and three different coloured markers on it. The chairs were arranged along two adjacent sides of the table so that participants within each small group could collaborate closely together.

Brainstorming

After a brief outline of what the session would cover, I asked participants to start brainstorming their test ideas for the chosen test application that they had available on the devices in front of them. They were allowed to use the device as a reference, and I asked them to choose one coloured marker to note down their ideas as a group.

Five of the six groups of participants started a feature tour of the application. Their brainstorming started with the login screen, then moved through the main functionality of the application. The other group took a mobile-focused approach from the very beginning of the session.

After five minutes, I paused the activity. I wanted to switch the thinking of everyone in the room from functionality to mobile-specific test ideas. I encouraged every team to stop thinking about features and instead to start thinking about what was unique about the application on mobile devices.

To aid this shift, I handed out resources for popular mobile testing mnemonics: the full article text for I SLICED UP FUN from Jonathan Kohl and the mind map image of COP FLUNG GUN from Dhanasekar Subramanian. These resources are full of great prompts to help testers think of tests that may apply for their mobile application. I also encouraged the groups to make use of their laptops to spot differences between the web and mobile versions of the software.

The participants had a further 15 minutes to brainstorm from this fresh perspective using a different coloured marker. For a majority of groups this change in colour emphasised a noticeable change in approach.

At the end of the brainstorming session there was quite a variety in the nature and number of test ideas generated in each small group. I asked the participants to stand up, walk around the room, look at the work of other groups, and read the ideas generated by their peers.

Testing

Armed with ideas, the next phase of the workshop was to complete ten minutes of hands-on device testing. I asked each tester to pick a single test idea for this period of time, so that they focused on exploring a particular aspect of the application. 

Each group was asked to use the final coloured marker to note any problems they found in their testing. There were relatively few problems, but they were all quite interesting quirks of the application.

Though ten minutes was a very short period of time, it was sufficient to illustrate that testing a mobile application feels very different to testing on a computer. The participants were vocal about enjoying the experience. As a facilitator I noticed that this enjoyment made people more susceptible to distraction.

It was also interesting to see how much functionality was covered despite the testing being focused on the mobile-specific behaviours of the application. For example, one tester navigated through the product looking at responsive design when switching between portrait and landscape view, which meant that she completed a quick visual inspection of the entire application.

Emulation

While discussing ideas for this session, Neil Studd introduced me to the Device Mode function available in Chrome Developer Tools. During the last part of the workshop I played a five-minute video about device mode, then showed a quick live demonstration of how our test application rendered in various devices through this tool.

Device mode is well documented. I presented it as an option for getting an early sense of how new features will behave without having to track down one of our limited number of test devices. Emulators are not a substitute for physical devices, but they may help us consider responsive design earlier in our development process.
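One of the things an emulated viewport gives you early is a sense of which responsive layout a device will receive. The sketch below is purely illustrative (the device widths and breakpoint values are my own assumptions, not from this post or the Chrome documentation) and shows the kind of quick mapping a tester might reason through before picking up a physical device:

```python
# Illustrative viewport widths in CSS pixels; real devices vary.
DEVICE_WIDTHS = {
    "small phone": 320,
    "large phone": 414,
    "tablet": 768,
    "laptop": 1366,
}

# Hypothetical breakpoints, similar in spirit to common CSS frameworks,
# checked from widest to narrowest.
BREAKPOINTS = [(1024, "desktop layout"), (600, "tablet layout"), (0, "phone layout")]

def layout_for(width):
    """Return the layout bucket a responsive stylesheet would serve."""
    for min_width, layout in BREAKPOINTS:
        if width >= min_width:
            return layout
    return "phone layout"

for device, width in DEVICE_WIDTHS.items():
    print(f"{device} ({width}px): {layout_for(width)}")
```

A table like this is no substitute for checking real rendering, but it helps decide which emulated devices are worth demonstrating.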

As facilitator I did feel like this was a lot to cover in an hour. However, the session fulfilled its purpose of giving the attendees a relatively rounded introduction to mobile testing. Perhaps you'll find a similar mobile testing taster useful in your organisation.

Monday, 6 July 2015

Testing Coach Cafe Service Menu

One of the things I've been thinking about is how I can get more involved with the work of individuals in my team without being a nuisance. I have deliberately avoided scheduling recurring one-on-ones or continually dropping by desks to see how people are doing, but I do want to be more actively involved in helping people tackle their testing problems and improve their skills.

At the recent Nordic Testing Days conference I had the opportunity to speak with Stephen Janaway, a former Test Coach based in the UK. I mentioned my conundrum and he shared a solution from his organisation. They moved to a pull system where their coaches created a service menu for the development teams that explained what the coaches are available to help with. Stephen spoke about this system during his presentation at the conference and posted a real example of A Coaching Cafe Service Menu on his blog.

I really liked this idea and decided to adopt the same approach with my team. With a little re-use and a bit of fresh thinking, I created a Testing Coach Cafe Service Menu for my organisation:

The A3 poster version of the Testing Coach Cafe Service Menu

The menu provides an overview of some of the ways that I'd like to be working with each of the testers in my team. I hope it will prompt them to ask me for assistance -- a pull system rather than me imposing myself on them -- and clarify my role as their Testing Coach.

I'm keen to do more individual coaching sessions that are focused on what people really want. If a number of people are requesting similar things, I plan to start running small group sessions. If I don't have the skills requested, I can find resources in the community, call on others in the team, or use external providers who may be able to help. And, if there's something that people want that isn't listed, then I've encouraged them to ask for that too!

To share the menu with my team I created a printed brochure for every individual and an A3 poster that has been posted on our Testing Wall. I like the tactile nature of physical information, I think it helps emphasise important messages and creates serendipitous continued reminders. I also added the content to our organisation wiki and shared a soft copy of the brochure version via email.

Brochures and A3 poster versions of the Testing Coach Cafe Service Menu

Alongside the information, I've made it clear that people can ask for these services anytime - online or in person. As well as seeking new skills, I've encouraged people to start "putting their paddles in the air" based on a recent post from Lillian Grace:

... in the back of my mind I felt a bit guilty, like I shouldn’t be asking for help unless it was absolutely critical - and then I quickly realised, but that’s not how I like to be treated. Asking for help isn’t what you should do when you’re desperate, it’s literally when you would like help. I dearly appreciate it when someone surfaces an inkling of a concern in time for me to deal with it.

The initial reaction to the Testing Coach Cafe Service Menu has been very positive and I hope that it will help me better serve my team.

Wednesday, 24 June 2015

A pairing experiment for sharing knowledge between agile teams

Over the past month I've started running a pairing experiment in my organisation. The primary purpose of this experiment is to share knowledge between testers who are working in different agile teams, testing different applications and platforms.

The Experiment Framework

After researching pair testing, I decided to create a structured framework for experimenting with pairing. I felt there was a need to set clear expectations in order for my 20+ testers to have a consistent and valuable pairing experience.

This did feel a little dictatorial, so I made a point of emphasizing the individual responsibility of each tester to arrange their own sessions and control what happened within them. There has been no policing or enforcement of the framework, though most people appear to have embraced the opportunity to learn beyond the boundaries of their own agile team.

I decided that our experiment would run for three one-month iterations. Within each month, each pair would work together for one hour per week, alternating each week between the project team of each person in the pair. As an example, imagine Sandi in Project A is paired with Danny in Project B. In the first week of the iteration they will pair test Project A at Sandi's desk, then in the second week they will pair test Project B at Danny's desk, and so on. At the end of the monthly iteration each pair should have completed four sessions, two in each project environment.
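The weekly rotation is simple enough to sketch in code. This is a minimal Python illustration of the alternating schedule described above (the names and project labels match the Sandi and Danny example, which is itself hypothetical):

```python
# Sketch of one monthly iteration of the pairing rotation:
# odd weeks are hosted by the first tester, even weeks by the second.

def iteration_schedule(tester_a, project_a, tester_b, project_b, weeks=4):
    """Return (week, host, project) tuples for one iteration."""
    schedule = []
    for week in range(1, weeks + 1):
        if week % 2 == 1:
            host, project = tester_a, project_a
        else:
            host, project = tester_b, project_b
        schedule.append((week, host, project))
    return schedule

for week, host, project in iteration_schedule("Sandi", "Project A", "Danny", "Project B"):
    print(f"Week {week}: pair test {project} at {host}'s desk")
```

Each pair ends the month with four sessions, two in each project environment, which is exactly the balance the experiment is after.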

In between iterations, the team will offer their feedback on the experiment itself and the pairing sessions that they have completed. As we are yet to complete a full iteration I'm looking forward to receiving this first round of feedback shortly. I intend to adapt the parameters of the experiment before switching the assigned pairs and starting the second iteration.

At the end of the three months I hope that each person will have a rounded opinion about the value of pairing in our organisation and how we might continue to apply some form of pairing for knowledge sharing in future. At the end of the experiment, we're going to have an in-depth retrospective to determine what we, as a team, want to do next.


An example of how one tester might experience the pairing experiment

A Sample Session

In our pair testing experiment, both participants are testers. To avoid confusion when describing a session, we refer to the testers involved as a native and a visitor.

The native hosts the session at their work station, selects a single testing task for the session, and holds accountability for the work being completed. The native may do some preparation, but pairing will be more successful if there is flexibility. A simple checklist or set of test ideas is likely to be a good starting point.

The visitor joins the native to learn as much as possible, while contributing their own ideas and perspective to the task.

During a pairing session there is an expectation that the testers should talk at least as much as they test so that there is shared understanding of what they're doing and, more importantly, why they are doing it.

When we pair, a one hour session may be broken into the following broad sections:

10 minutes – Discuss the context, the story and the task for the session.

The native will introduce the visitor to the task and share any test ideas or high-level planning they have prepared. The visitor will ask a lot of questions to be sure that they understand what the task is and how they will test it.

20 minutes – Native testing, visitor suggesting ideas, asking questions and taking notes.

The native will be more familiar with the application and will start the testing session at the keyboard. The native should talk about what they are doing as they test. The visitor will make sure that they understand every action taken, ask as many questions as they have, and note down anything of interest in what the native does including heuristics and bugs.

20 minutes – Visitor testing, native providing support, asking questions and taking notes.

The visitor will take the keyboard and continue testing. The visitor should also talk about what they are doing as they test. The native will stay nearby to verbally assist the visitor if they get confused or lost. Progress may be slower, but the visitor will retain control of the work station through this period for hands-on learning.

10 minutes – Debrief to collate bug reports, reflect on heuristics, update documentation.

After testing is complete it’s time to share notes. Be sure that both testers understand and agree on any issues discovered. Collate the bugs found by the native with those found by the visitor and document according to the traditions of the native team (post-it, Rally, etc.). Agree on what test documentation to update and what should be captured in it. Discuss the heuristics listed by each tester, add any to the list that were missed.

After the session the visitor will return to their workstation and the pair can update documentation and the wiki independently.
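The timed structure above can be restated as data, which is a handy form if you want to print session agendas or tweak the split between iterations. The minutes are taken directly from the breakdown in this post; the code itself is just an illustrative sketch:

```python
# The one-hour pairing session, as described in the post.
SESSION = [
    (10, "Discuss the context, the story and the task"),
    (20, "Native testing; visitor suggests ideas, asks questions, takes notes"),
    (20, "Visitor testing; native supports, asks questions, takes notes"),
    (10, "Debrief: collate bugs, reflect on heuristics, update documentation"),
]

def print_agenda(session):
    """Print a running agenda with start times in minutes."""
    elapsed = 0
    for minutes, activity in session:
        print(f"{elapsed:02d}-{elapsed + minutes:02d} min: {activity}")
        elapsed += minutes
    return elapsed

total = print_agenda(SESSION)
print(f"Total: {total} minutes")
```

Keeping the phases explicit makes it easy to check that hands-on time is split evenly between native and visitor.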

To support this sample structure and emphasise the importance of communication, every tester was also given the following graphic, which includes potential questions to ask in each phase:

Questions to ask when pair testing

I can see possibilities for this experiment to work for other disciplines - developers, business analysts, etc. I'm looking forward to seeing how the pairing experiment evolves over the coming months as it molds to better fit the needs of our team.

Wednesday, 17 June 2015

Notes from Nordic Testing Days 2015

Nordic Testing Days 2015 was my first European testing conference, both as an attendee and a speaker. I really enjoyed my time in Estonia. It was fantastic to listen to, and learn from, a set of speakers that I have never heard before. I also enjoyed meeting a number of people that I have previously only known through Twitter.

As a speaker, I presented a two-hour workshop on the first day of the conference titled "Become someone who makes things happen". I was nervous about presenting this to an international audience. I needed people to interact with one another for the workshop to be successful, so it was a relief to have a group of participants who were prepared to discuss, debate and role-play scenarios.

On the second day of the conference I was a last minute replacement for a speaker who was ill, delivering a presentation titled "Sharing testing with non-testers in agile teams". This was a repeat of a talk I gave several times last year.

I'm particularly grateful to those who have included one of my sessions in their highlights of the conference:



Of the sessions that I attended, here are my key takeaways.

Security Testing

I started the conference by attending a full day tutorial titled Exploring Web Application (In)Security. Bill Matthews and Dan Billing presented some fundamentals of security testing by providing an intentionally vulnerable application. Each participant installed the application on a local virtual machine, which meant we could all independently exploit it - a fantastic learning environment.

This session was the first time I learned about the STRIDE security testing mnemonic, which I captured in a mind map format:
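For readers who haven't met it, STRIDE is a widely used threat-modelling mnemonic from Microsoft, where each letter names a category of threat and maps to the security property it violates. A small sketch of that mapping (the code is my own illustration, not from the tutorial):

```python
# STRIDE threat categories paired with the security property each violates.
STRIDE = [
    ("Spoofing", "Authentication"),
    ("Tampering", "Integrity"),
    ("Repudiation", "Non-repudiation"),
    ("Information disclosure", "Confidentiality"),
    ("Denial of service", "Availability"),
    ("Elevation of privilege", "Authorization"),
]

def mnemonic(threats):
    """Recover the mnemonic from the first letter of each threat name."""
    return "".join(threat[0] for threat, _ in threats)

for threat, prop in STRIDE:
    print(f"{threat}: violates {prop}")
print(mnemonic(STRIDE))  # STRIDE
```

Walking an application feature through each row is a quick way to generate security test ideas.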



Effective Facilitation

Neil Studd presented a session, which I only later realised was his first full-length conference talk, on Weekend Testing Europe: A Behind-the-scenes Guide to Facilitating Effective Learning.

As a co-founder and intermittent organiser of WeTest Workshops MeetUp in Wellington, it was great to listen to some real-life experiences of selecting and facilitating practical workshops. I particularly liked the reminders about the limits of the facilitation role including "each attendee has their own light bulb moment, don't try and manufacture one" and "attendees are there to learn from each other and not just the facilitator".

I took the opportunity to talk to Neil after his presentation, as I had some specific questions for him. As a result I'm definitely planning to use the Weekend Testing Europe archive as a resource in future, both for internal and external workshops.

Gamification

Gamification is something I'd heard of, but never dug in to. Kristoffer Nordström shared his experience of engaging end users using gamification: game techniques in non-game situations to motivate people and drive behaviours.

I found his experience report really interesting and would encourage you to look through his slides in the Nordic Testing Days Archive. Though I'm not sure whether gamification will work in my organisation, or where it might be applied, this talk certainly gave me a better understanding of how others use it.

Bad Work & Quitters

Rob Lambert delivered a keynote on Why remaining relevant is so important. The point that particularly resonated with me was perhaps tangential to the main topic, his view of "bad work".

Rob talked about how it seems that quitting has become trendy with many people voicing their opinions about leaving a place that does "bad work". He questioned the definition of "bad work" by challenging how much of the concept was based on perception.

Rob also said that "sometimes 'bad work' is where the real change happens". This made me reflect on the opportunities I've had to make change. Perhaps it is from the worst situations that we can make the biggest difference.

Reflection

Erik Brickarp delivered a really interesting experience report on Going Exploratory. As he spoke, Erik repeatedly emphasised that he learned from attempting to implement change through regular reflection. Only when he stopped and thought about how he was working did he have the opportunity to realise how he could have approached things differently.

Erik said "whenever I feel like I don't have time to reflect, that's a strong indication that I should stop and reflect." This was a good reminder to me. When I'm busy at work, that's when I need to take the time to pause and assess.

*****

I really enjoyed my experience at Nordic Testing Days. Thank you to Helena Jeret-Mäe for selecting my workshop as Content Owner, Kadri-Annagret Petersen for being my track co-ordinator, and Grete Napits for running a fantastic conference. I hope to be back again in the future.