Friday, 19 December 2014

Cereal Box Course Summary

When I return to work after attending a training course, my colleagues and my boss usually want to know how it went. The conversation might go something like:

Them: "How was your course?"
Me: "Good."
Them: "What did you learn?"
Me: "Oh, lots of stuff."
Them: "Okay. Cool."

Though this is a slightly exaggerated example, I often find it difficult to remember and describe what I have learned. When I leave a course, my brain feels full of new ideas and enthusiasm. But, by the next morning, I have usually returned to thinking about other things.

As a trainer, I don't want my course attendees to return to work and have the conversation I've described above. Instead I want them to be articulate and passionate. One of the ways that I have attempted this is by using a cereal box course summary.

This is not an original idea. I heard about it from my colleague Ceedee Doyle, who had heard it from someone else; unfortunately I don't know the original source. However, here's how it works.

Ask participants to construct a cereal box that represents everything they've learned on the course. What they put on the box follows the same conventions as a real packet of cereal.

The front of the box should show the name of the course, pictures, and a slogan.



The side of the box should list the ingredients of the course.



The back of the box should have highlights, reviews and testimonials.



I have used this activity during the last hour of a two-day training course. The participants had 30 minutes to make their cereal boxes, then we spent 30 minutes sharing their creations, reflections, and feedback on the course as a whole.

I found the cereal box course summary a creative and fun activity to finish off a course. People are relaxed and talkative as they work. The cereal box captures a positive and high-level view of the course overall, which creates a favourable tone to end on.

I also like including an opportunity for reflection as part of the course itself. One of our students summarised the benefit of this in their testimonial, saying  "I especially liked the final summing up which made me realise how much I’d learned." [1]

Finally, this activity gives each participant something quirky and concrete to take back to work with them. The appearance of the cereal box on their desk may initiate conversations about the training they attended. The writing on the box should support the conversation. Their colleagues can see what they have learned, and the format in which the information is presented is reflective of the interactive and engaging training environment that we work hard to create.

Sunday, 14 December 2014

Review the Future Retrospective

I was co-facilitating a client workshop recently, and I wanted to include an agile retrospective activity. It was my first introduction to the team and they were using a waterfall development methodology, so I didn't want to go too zany. However I wanted to know how they thought they could improve and, as a facilitator, I find the constant 'post-it notes then sharing' retrospective format to be quite boring to deliver.

I looked through Retr-O-Mat for inspiration and decided that the Remember the Future idea would form the basis of my activity:

Source: Retr-O-Mat

I liked that the premise of this idea put the team in a forward thinking mindset. However it wasn't quite the style I was after so, for the workshop, I chose to adapt the exercise in several ways. 

To get the team talking to one another so that I could observe their dynamics, I wanted to create a more interactive collection activity rather than an individual one.  I asked the team to break into groups of three people rather than working by themselves.

As the team weren't using iterations I changed the language to "Imagine your next release is the best ever!". To shift the focus from looking inwards, imagining only the team perspective, to looking outwards, imagining the reaction of others in the organisation, I asked each group to think about the reactions to their amazing release from their managers, business owners, end users and the team themselves.

Instead of jotting down notes or keywords, each group had to capture these reactions in a series of reviews that included a 5-star rating, a comment, and a name, e.g.

★★★★★ "Our key process is now 10 minutes faster! Amazing!" - Bob from Business

Once the reviews were complete, each group presented back to the team. It was interesting to see different themes emerge from each group, including feedback focused on successful delivery of key functional improvements to business people, unusually quick turnaround of tasks that improved flow of work, and simple recognition of the team's achievement.

After the presentations we returned to the activity as it was described in Retr-O-Mat. We asked the team to think about the changes they would have made to their process to receive these reviews. Suggestions for improvement appeared rapidly and, with this shared context, were relatively consistent across all the participants in the workshop.

I found this activity collected the type of information that we were seeking, while also keeping the team engaged and interactive in the workshop itself.

Tuesday, 2 December 2014

Conferences build community

Community is important to me. The primary reason that I volunteer my time to organise testing events is to offer people an opportunity to meet their peers and share ideas. It is the opportunity for interaction that I value, and I think that conferences are an essential part of this. Conferences build community.

A successful conference is not just about delivery of great content. It also provides a platform for every attendee to genuinely connect with another; an interaction that is the start of a supportive, inspiring or challenging professional relationship. When reflecting on a conference I find that the presentations may have faded from memory, but the conversations that spawned ongoing associations are much easier to recall.

As a conference organiser, the responsibility for setting the tone of the conference weighs more heavily on me than selecting the ideas. It seems that achieving success in the community aspect of a conference is much more difficult than succeeding with the content.

And everything begins with the speaker selection.

I get frustrated when I feel that a list of speakers isn't a true mirror of the community behind a conference, but instead a distorted reflection suited to a fairground Hall of Mirrors. As a conference organiser,  I am looking for strong presenters with innovative ideas who truly reflect the diversity of our profession. I am constantly conscious of creating a speaker line up that engages the brain and accurately shows who we are as a group.

This is a challenge and, when I consider diversity, I consider many attributes. As a woman in testing, I certainly think about the gender ratio in our speaker selection. But I also think about years of experience in testing, where people are employed, ethnicity, age and reputation. If I want the conference to offer something for everyone in the community, then I have to consider everyone in the community by focusing on what distinguishes us from each other.

I don't feel that I have ever had to select a speaker who didn't deserve their place. I simply consider diversity alongside the experiences and ideas that people bring. I think about the audience for a topic rather than the topic in isolation. There are instances when a proposal holds little appeal to me personally, but I feel it would be a great session for others within the community, both for its content and the opportunity to establish the presenter as an active voice.

Ideas are rarely innovative in every context. So considering ideas alone is an injustice to the community that the conference is for. I believe that every organiser should actively think about the people that they serve when selecting speakers.

When asked "What did you enjoy about the conference?", attendees at the recent WeTest Weekend Workshops referenced the topics, discussions, sessions and learning. I think we had fantastic content. However the strongest theme in responses to this question was the people. I believe this feedback reflects our effort as organisers to put the people of the community at the heart of our decisions on their behalf.



What did you enjoy about the conference?
WeTest Weekend Workshops 2014



Wednesday, 19 November 2014

Different Ideas for Defect Management

I believe that a good defect management process should encourage and facilitate discussion between developers and testers. Frustration with the way in which defects are reported and managed may be due to a lack, or absence, of conversation.

The way that you manage defects should depend on your development methodology, the location of team members, and company culture. Yet it seems that many testers adopt a bug tracking tool as the only way to communicate problems, with little consideration of establishing team relationships. Finding a bug is a perfect excuse for a tester to speak to a developer; utilise this opportunity!

Here are four different ideas for defect management, based on my own experiences, that pair the tools we use with developer interaction.

Bug tracking tool

I worked onsite with a client in South America, installing hardware and software then co-ordinating testing across the wider solution. The developers supporting this install were based in New Zealand, which meant that they were largely unavailable during South American business hours.

Though our software had a user interface, much of the functionality was performed by backend components. The information required to identify and fix problems was recorded in multiple log files stored in different locations.

During the day, I would identify problems, reproduce them, capture a clean set of log files, and then raise an appropriate defect in our bug tracking tool. The tool required a number of attributes to be set for each bug: priority, severity, component, etc. Each bug also had a title, a short description, and a set of attached logs that the developer could reference.
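As a rough sketch, the shape of each record was something like the structure below; the field names are illustrative rather than the actual schema of the tool we used:

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    """One record in the bug tracking tool; field names are hypothetical."""
    title: str
    description: str
    priority: str                 # e.g. "high"
    severity: str                 # e.g. "major"
    component: str                # e.g. "backend-messaging"
    attached_logs: list[str] = field(default_factory=list)  # captured log file paths
```

Attaching a clean set of logs to every record meant the developer could start investigating without waiting for my time zone to come online.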

In this environment, I felt that the tool was essential to manage a high volume of problems with associated logs. However, communicating via the tool alone was ineffective. When the New Zealand-based developer arrived at work, he would have an inbox full of notifications from the bug tracking system reflecting the problems resolved, remaining, and discovered during our day. The volume of these messages meant that he occasionally missed important information, or prioritised his time incorrectly.

I initiated a daily Skype session for defect triage to explain which bug tracking notifications he should focus on, and why. This happened at the end of my day and the beginning of his. During this time he would try to ask enough questions to understand the complexities of each problem, so that he could provide a timely fix. These conversations helped us to create a rapid and effective defect turnaround.


Visual management board

I worked in a team developing a web application using a kanban development methodology. We focused on flow through the process, which meant that stories were rarely owned by individuals. Instead tasks across multiple pieces of functionality were shared among all the developers.

The team used a visual management board that occupied a large whiteboard alongside our workspace. This board was frequently referred to and updated throughout the day, not just at our morning stand up meeting. Upon completing a task, both developers and testers would determine their next activity by visiting the board.

We used the same board for defect management. If issues were discovered during testing, I would write each one on a post-it note and attach it to the board. In this team, issues generally manifested in the user interface and were relatively simple to reproduce. A post-it note usually captured enough information for the problem to be understood by others.

New defects would be picked up as a priority when a developer visited the board in search of a new task. They would place their avatar on the defect, and then speak to me about anything that they didn't understand or whose validity they wanted to question.

As problems were resolved, the developer would commit their change to the repository and we would swap my avatar onto the task. Bugs would then move to “done” in the same manner as other tasks on the board.


Desk delivery

I worked in a team developing a web application using the scrum framework for agile development. In this team stories were adopted by one or two developers, who would work on tasks and take ownership of fixing any associated defects discovered during testing.

We had an electronic visual management board that was used regularly and updated often. There was also a physical visual management board, but this would only match the electronic version immediately after our daily stand up had occurred.

The piece of software that provided the electronic board also offered defect tracking functionality. In this organisation I was reluctant to use a defect tracking application, as I felt the team were at a real risk of communicating solely through tools despite being co-located. Instead I decided to record my bugs on post-it notes.

Unlike the previous scenario, in this situation there was little point in sticking these post-it notes on the physical visual management board. It wasn't being used often enough. Instead I would take them directly to the developer who was working on the story.

My post-it note delivery style varied depending on the developer. One developer was quite happy for me to arrive at his desk, read out what was written on each post-it, then talk about the problems. Another preferred that I show her each defect in her local version of the application so that she was sure she understood what to change.

The developers would work from a set of post-it note defects lined up along their desk. As they resolved problems they would return the associated post-it to me. Having to transfer the physical object increased the opportunity for conversation and helped create relationships. There was also a healthy level of competition in trying not to have the most post-it notes stuck to your desk!


Cloud-based visual model

I worked in a team developing a web application using an agile methodology. This team was co-located in one area of the office, as we all worked part time on other projects.

A portable visual management board was created and maintained by the business analyst using a large piece of cardboard. It was kept under his desk and only used during our daily stand up meeting to discuss our progress.

From a defect management perspective, this organisation prided itself on its trendy culture. Though a bug tracking tool existed, it was largely used by call centre staff to record customer issues in production.

In this team I decided to communicate information about my testing using a cloud-based visual model. Each person in the team had a MindMeister account. I used this software to create a mind map that showed my test ideas, reflected progress through testing, and recorded defects, which were highlighted in red with an attached note explaining each problem.

When I completed a testing task, I would send the developers a link via instant messaging to the cloud-based visual model. They could see where the problems were, and would respond with questions if anything was unclear. They seemed to like being able to see defects within a wider context, and were quite positive about a nifty new tool!

Tuesday, 11 November 2014

Women in Testing on Twitter

After speaking at a Girl Geek Dinner in mid October, I became really aware that my Twitter stream contained tweets that were mostly from men. Over the past month I have experimented with who I follow to try to correct this imbalance.

The categories are imperfect and the list is likely to be incomplete, but here are some of the women in testing that I would recommend following:

DISCLAIMER: I've grouped based on the reasons that I follow these women. Though there are many individuals who could appear in multiple categories, I wanted to share the primary reasons why I value their contributions on Twitter. This is one specific facet of their professional persona and, should you choose to follow them, you may see things quite differently.


Crème de la crème

“the best person or thing of a particular kind.”
Elisabeth Hendrickson and Lisa Crispin are among the most widely known and well regarded software testers in the world. With followers in the thousands, they are likely to be a part of your Twitter stream already. Though Elisabeth and Lisa often tweet about their lives outside of testing, they also share articles and tweets from their extensive professional networks that I would miss otherwise.

Community Leaders

“the person who leads a group or organization.”
As an organiser of WeTest and editor of Testing Trapeze, I like to tweet content from testers around New Zealand to promote the wonderful things that are happening in this part of the world. These are the women who adopt similar behaviour for the communities that they lead.

Alessandra Moreira is the only woman on the Association for Software Testing (AST) Board of Directors and a co-organiser for Weekend Testing Australia and New Zealand (WTANZ). Amy Phillips is the co-organiser for Weekend Testing Europe (WTE).

Anne-Marie Charrett was a co-organiser of Let's Test Oz, Anna Royzman was a co-organiser of the Conference of the Association for Software Testing (CAST) in 2014, Helena Jeret-Mäe is the content owner of the upcoming Nordic Testing Days in 2015, and Rosie Sherry organises TestBash annually as well as running the Ministry of Testing.

In initiatives that specifically focus on women, Jyothi Rangaiah is the editor of the Women in Testing magazine, and Lorinda Brandon runs the Women in Line initiative to get more women speaking at technology conferences.

Challengers

“a call to prove or justify something.”
There are those who generally seek to challenge opinion and question what they've heard. These women appear unafraid; they share and generate content to disrupt the status quo. Through these women I am exposed to new ideas and test my own assumptions.

Fiona Charles, Karen N Johnson, Trish Khoo, Lanette Creamer, Hilary Weaver, Kate Falanga and Natalie Bennett are all based in North America, Maaret Pyhäjärvi and Meike Mertsch are in Europe, and Kim Engel is in New Zealand.

Cheerleaders

“an enthusiastic and vocal supporter of someone or something.”
I was hesitant to use this term, as some may hold quite a negative connotation of a cheerleader, but I mean it in the sense of the definition above. These women are enthusiastic and vocal members of the testing community. They encourage, converse, and share information, usually in a friendly and upbeat way.

Parimala Hariprasad and Smita Mishra are in India, Anna Baik in the UK, Maria Kedemo in Europe, and Jean Ann Harrison and Teri Charles are in the US. They create a steady flow of positivity with links to a wide variety of content.

Constant

"occurring continuously over a period of time."
Claire Moss gets her own category as a live tweeter. Usually Claire is relatively quiet on Twitter. The exception is when she is attending testing events and her account erupts into a stream of constant activity. You'll almost feel like you've attended the conference itself.

Climbing

"to move to a higher position."
Finally there are the women who share and contribute excellent content on Twitter, but may be less well known than others listed here: Carol Brands, Jacky Fain, Alex Schladebeck, Elisabeth Hocke, Ioana Serban and Shirley Tricker.

*****

As with any post of this kind, I'm certain I've missed people who should be included. Please leave your recommendations in a comment below.

Thursday, 6 November 2014

Visual Test Ideas

There are some circumstances in which a mind map is not the best format to visualise your testing, in particular where there are complex relationships between data in the application under test.

Visual test ideas can be useful when it feels clumsy to explain what you plan to test in words. If you need the assistance of a whiteboard to outline your thinking to others, this might indicate that you should record your test ideas pictorially instead of in writing.

As with other visual approaches, visual test ideas are a great way to engage people in the activities of testing and encourage collaboration that improves test coverage.

Here are three examples of different ways to record your test ideas visually.

Timelines

Imagine the Transaction History for a bank account. When you load the page, you see the last 30 days of transactions by default. You can change the date range for transactions returned by updating the start and end dates:



Transactions are only returned for 90 days of the specified range; if the specified date range is greater than 90 days, then the oldest transactions are discarded. Data is only available up to 180 days in the past, so no transaction information will be returned prior to that date. And, of course, there are no transactions available in the future.

Given these requirements, a number of different test ideas spring to mind. As these ideas are discussed with other people, they might be captured in a timeline format:


Using a timeline to explain these test ideas clearly shows how they relate to one another in time, where a mind map format would not.
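The same boundaries can also be pinned down as date arithmetic when it comes time to execute. A minimal sketch, assuming an arbitrary "today" and the rules described above:

```python
from datetime import date, timedelta

today = date(2014, 12, 1)  # arbitrary reference date for illustration

# Boundary test ideas drawn from the timeline:
test_ideas = [
    ("default 30 day view",      today - timedelta(days=30),  today),
    ("range of exactly 90 days", today - timedelta(days=90),  today),
    ("range of 91 days",         today - timedelta(days=91),  today),                        # oldest day discarded
    ("oldest available data",    today - timedelta(days=180), today - timedelta(days=90)),
    ("beyond available data",    today - timedelta(days=210), today - timedelta(days=181)),  # expect no rows
    ("end date in the future",   today - timedelta(days=1),   today + timedelta(days=1)),    # expect no future rows
]

for name, start, end in test_ideas:
    print(f"{name}: {start} to {end}")
```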

Buckets

Imagine that you have taken out a loan where your repayment obligations are tied to your income. When you earn beyond specific thresholds, your repayment amount will change. These thresholds are dependent on the type of earning you receive, whether salary, wages or adjusted income.

The example below is from Aaron Hodder. Using buckets marked with different thresholds, Aaron clearly shows how his test ideas relate to one another:



Venn Diagrams

Imagine a school grading system. Within a given subject, you can receive credits across one or many topics. For example, in Mathematics you can gain credits that apply to Calculus, Algebra, or Calculus & Algebra combined. The number of credits you have in each topic, and in total, will determine whether you are awarded a given qualification for a subject.

Nigel Charman has written about a team operating in this domain:

"In this case, the team are visualising examples of these complex business rules using Venn diagrams. The richness of the visualisation helps us wrap our brains around the complexity, acting as a shorthand form for discussion." [1]

An accompanying picture shows a team working around a whiteboard that is filled with various Venn diagrams:

Source: Visual Specification by Example
Rather than attempting to reflect these scenarios in words, the same diagrams then become the documented test ideas:

Source: Visual Specification by Example

Using Venn diagrams to capture these test ideas clearly shows the relationships between topics within subjects, where a mind map format would not.

*****

There are many options for representing test ideas and the relationships between them by making use of engaging illustrations. Think laterally. Though a mind map is an obvious option to present visual information, it isn't always the best way for a tester to share their thoughts with their team.

Wednesday, 29 October 2014

Satir Interaction Model

Yesterday evening I had the opportunity to attend a Fiona Charles workshop on Delivering Difficult Messages [1]. Fiona spoke briefly about the Satir Interaction Model, which she presented as:

"I think we have an issue" -- Delivering unwelcome messages
Fiona Charles

The Satir Interaction Model describes what happens inside us as we communicate — the process we go through as we take in information, interpret it, and decide how to respond [2]. In the model above, there are four fundamental steps to going from stimulus to reply: intake, meaning, significance, then response [3].

These four steps are the Gerald Weinberg simplification of the original work of Virginia Satir. Weinberg has written about the Satir Interaction Model in the context of technical leadership, while Satir wrote for an audience of family therapists [4]. When compared side-by-side, the original work included some additional steps:

Satir Interaction Model
Steven M. Smith

Resolving communication problems

The Satir Interaction Model can be used to dissect communication problems. It can help us to identify what went wrong in an interaction and provides an approach to resolve the issue immediately [5].

Many communication problems occur when a response is received that is beyond the bounds of what was expected. Because the steps in the model between intake and response are hidden, the end result of the process that assigns meaning and significance can be quite surprising to the recipient, which can be a catalyst for conflict.

I like the J. B. Rainsberger example of applying the Satir Interaction Model to a conversation where someone is wished a "Happy Holidays". The responses to this intake may vary wildly based on the meaning and significance that people assign to this phrase. [3]

  • "How dare this person insult Christmas and deny the Christ...!"
  • "If only you'd bother to learn to pronounce 'Chanukah'..."
  • "Have you ever even heard of Candlemas...?!"

When applying the model to resolve misunderstanding, Dale H. Emery says:

I focus first on my own process, because the errors I can most easily correct are the ones that I make. When I see and hear clearly, interpret accurately, assign the right significance, and accept my feelings, I understand other people's messages better, and my responses are more effective and appropriate. And when I understand well, I am better able to notice when other people misinterpret my messages, and to correct the misunderstanding. [2]

Finally, Judy Bamberger offers a very useful companion resource for adopting the Satir Interaction Model in practice [5]. She provides ideas about what could go wrong at each step in the model and offers practical suggestions for how to recover from errors, problems, or misunderstandings.

Association with Myers-Briggs

Weinberg drew a link between the Myers-Briggs leadership styles and the Satir Interaction Model that may be useful for adapting communication styles with different types of people. He suggests that:

The NT visionaries and NF Catalysts, both being Intuitive, skip quickly over the Intake step. … NTs tend to go instantly to Meaning, while the NFs tend to jump immediately to Significance. … The SJ Organizers stay in Intake mode too long … The SP Troubleshooters actually use the whole process rather well … (pp 108 & 109.) [6]

Weinberg then offers the following questions to prompt each type of person to apply each step of the Satir Interaction Model:

For NTs/NFs ask, “What did you see or hear that led you to that conclusion?”
For SJs ask, “What can we conclude from the data we have so far?”
For SPs appeal to their desire to be clever and ask them to teach you how they did it. [7]

Distinguish reaction from response

Willem van den Ende draws the Satir Interaction Model so that both sides of the interaction are shown:

"Debugging" sessions
Willem van den Ende

Using this illustration he specifically differentiates between a reaction and a response. A reaction happens when a person skips the meaning and significance stages, jumping straight from intake to response. When both people in an interaction become reactive instead of responsive, a fight is the likely result [8]. Understanding that these missing steps may be the cause of a misunderstanding could help resolve the situation.

Unacceptable Behaviour

The Satir Interaction Model may also be useful in structuring conversations to address unacceptable behaviour. Esther Derby suggests that these conversations should begin by getting agreement that the behaviour happened, followed by discussion about the impact of the behaviour, then conclude by allowing the recipient of the message to realise that their behaviour is counter-productive [9].

References

[1] "I think we have an issue" - Delivering unwelcome messages, Fiona Charles

[2] Untangling Communication, Dale H. Emery

[3] Don't Let Miscommunication Spiral Out Of Control, J. B. Rainsberger

[4] Satir Interaction Model, Steven M. Smith

[5] The Satir Interaction Model, Judy Bamberger

[6] Debugging System Boundaries: The Satir Interaction Model, Donald E. Gray

[7] Why Not Ask Why? Get Help From the Satir Interaction Model, Donald E. Gray

[8] "Debugging" sessions, Willem van den Ende

[9] intake->meaning->feeling->response, Esther Derby


Sunday, 12 October 2014

Three examples of context-driven testing using visual test coverage modelling

I talk a lot about visual test coverage modelling. Sometimes I worry that people will think I am advocating it as a testing practice that can be used in the same way across any scenario; a "best practice". This is not the case.

A tester may apply the same practice in different contexts, yet still be context-driven. The crux is whether they have the ability to decompose and recompose the practice to fit the situation at hand [1].

To help illustrate this, here are three examples of how I have adapted visual test coverage modelling to my context, and one example where I chose not to adopt this practice at all.

Integrated with Automation

I was the only tester in an agile team within a large organisation. There were two agile teams operating in this organisation, everybody else followed a waterfall process.

The other agile team had a strong focus on test automation. When I joined, there was an expectation that I would follow the process pioneered in that team. I was asked to create an automated test suite on a shared continuous integration server.

Having previously worked in an organisation that put a high value on continuous integration, I wanted to be sure that its worth was properly understood. I decided to integrate visual test coverage models into the reports generated by the automated suites to illustrate the difference between testing and checking.

At this client I used FreeMind to create mind maps that showed the scope of testing. Icons were used against branches in the map to show where automated checks had been implemented, and where they had not. They also showed problems and where testing had been descoped.


Source: Mind Maps and Automation
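As a flavour of how such an integration might hang together (a speculative sketch, not the mechanism actually used at this client): FreeMind stores its maps as XML, so an automated run could stamp a pass or fail icon onto the branch for each check. The file name and branch text below are hypothetical:

```python
# A speculative sketch: marking a branch in a FreeMind map (.mm files are
# XML) with a pass or fail icon based on an automated check result.
# "button_ok" and "button_cancel" are FreeMind built-in icon names.
import xml.etree.ElementTree as ET

def mark_branch(map_file: str, branch_text: str, passed: bool) -> None:
    tree = ET.parse(map_file)
    for node in tree.iter("node"):
        if node.get("TEXT") == branch_text:
            icon = ET.SubElement(node, "icon")
            icon.set("BUILTIN", "button_ok" if passed else "button_cancel")
    tree.write(map_file)

mark_branch("coverage.mm", "Login accepts valid credentials", passed=True)
```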

The primary purpose of the visual test coverage model in this context was to provide a high-level electronic testing dashboard for the Product Owner that was integrated into the reporting from our automation.

Paper-based Thinking

I was a test consultant in an agile team where my role was to teach the developers and business analysts how to test. This team had no specialist testers. The existing team members wanted to become cross-functional to cover testing tasks.

The team had been working together for a while, with an attitude that testing and checking were synonymous. They were attempting "100% automation" and had started to require test sprints in their scrum process to achieve this.

At this client I wanted to separate the thinking about testing from the limits of what is possible with an automation tool. I wanted to encourage people in the team to think broadly about test ideas before they identified which were candidates for automation.

I asked the team to create their visual test coverage models on large pieces of paper; one per story. They would use one colour pen to note down the functionality, then another colour for test ideas. There was a process for review at several stages within the construction of the model.

Source: How to create a visual test coverage model

The primary purpose of the visual test coverage model in this context was to encourage collaborative lateral thinking about testing. By using paper-based models, each person was forced to think without first considering the capabilities of a tool.

Reporting Session Based Testing

I was one of two testers in an agile team, working in a department of many agile teams. I was involved in the project part time, with the other tester working full time.

The team was very small as most of the work in this particular project was testing. We had a lot of freedom in determining our test approach and the other tester was interested in implementing session based testing.

We used XMind to create a mind map that captured our test ideas and showed how the scope of our testing was separated into sessions. We updated the model to reflect the progress of testing and any issues discovered. It was also clear where there were still sessions to be executed, so the model helped us to divide up the work.

Source: Evolution of a Model

The primary purpose of the visual test coverage model in this context was quick communication of session based test progress between the two testers in the team. As someone who was only involved part time, the model served as a simple way to rapidly identify how the status of testing had changed while I was away from the client site.

A Quiet Team

I was a test consultant in an organisation where my role was to coach the existing testers to improve their confidence. The testers worked together in a single agile team. One of their doubts was that production issues were due to tests that they had missed; they weren't sure about their test coverage.

The testers were very quiet. The team in general conversed at a low volume and with reluctance. There were only a couple of people who were confident communicators and, as a result, they tended to dominate conversation, both informal and in meetings.

I wanted to shift ownership of testing from the testers to the team. I felt that scoping of test ideas and decisions about priority should be shared. It seemed that the lack of confidence was due, in part, to the testers often working in isolation.

Though I wanted to implement a collaborative practice for brainstorming, I felt that visual test coverage models wouldn't work in this team dynamic. I could imagine the dominant team members would have disproportionate input, while the testers may have ideas that were never voiced.

Instead I thought that the team could adopt a time-boxed, silent brainstorm where test ideas were written out on post-it notes. This allowed every person to share their ideas. Decisions about the priority of tasks, and candidates for automated checks, could be discussed once everyone had contributed, using the collective output of the group to guide conversation.

*****

Before anything else, ask yourself why the practice you'd like to implement is a good fit for your situation. Being able to articulate the underlying purpose is important, both for communicating to the wider team and for knowing what aspects of the practice must remain to meet that need.

I have found visual test coverage modelling useful in many scenarios, but have yet to implement it in exactly the same way twice. I hope that this illustrates how a tester who aims to be context-driven might adapt a practice they know to suit the specific environment in which they are operating.

Sunday, 5 October 2014

Let's Test Oz

My third and final Let's Test Oz post; three experiences that left a lasting impression.

Cognitive Dissonance

Margaret Dineen gave a presentation where she spoke about cognitive dissonance*, the mental stress that people experience when they encounter things that contradict their beliefs. 

As an example, you might believe that your test environments have been configured correctly and continue to believe this despite repeated evidence to the contrary, perhaps because your test environment administrator is usually so reliable. However, observing the signs of poor configuration will create cognitive dissonance; a cerebral discomfort that tells you something is not quite right.

Margaret shared how she had learned to acknowledge her distress signals, defocus, and complete a self-assessment. She writes an entry in her "Notebook of Woe" to locate the source of cognitive dissonance that she is experiencing. This notebook is a tool to ask herself a series of questions. How do I feel about my work? What deviations am I experiencing from my model?

I love the idea of this notebook, and the message that Margaret delivered alongside it. "Failure provides an opportunity for learning and growth, but it's not comfortable and it's not easy". This constant and deliberate self-assessment is a method for continuous personal improvement, capturing our discomfort before failure has an opportunity to take root.

Bugs in Brains

I ate lunch with Anna Royzman and Aaron Hodder one day. Our conversation meandered through many testing topics, then Anna said something that really struck me.

"I keep finding bugs in people's brains, where do I report those?"

She was speaking about the way in which we, as testers, start to learn how to interrogate a piece of software based on the developer who coded it. When we've worked in a team for a long time, our heuristics may become incredibly specialised.
 
Aaron concurred and provided an example. In a previous workplace, he knew that the root cause analysis would lead in very different directions dependent on the developer that had introduced a bug. One developer would usually introduce bugs that were resolved by a configuration change for an edge case scenario. Another developer would usually introduce bugs that were resolved by a complete functional rewrite for a core system flow.

This was something that I could also relate to, but had never considered as anything unique. I'm now thinking more about whether testers should raise these underlying differences between developers and how we might do so.

Talking to Developers

Keith Klain gave a keynote address in which he spoke about the ways to successfully speak to management about testing. I found his advice just as applicable to the interactions between testers and developers.

Enter conversations with an outcome in mind. Manage your own expectations by questioning whether the outcome you are seeking is a reasonable one. There is a common misconception that testers are wasting developers' time. Having one specific goal for every interaction is likely to keep your conversations focused, succinct and valuable, which will help to build rapport.

Know your audience and target your terminology accordingly. Even if you don't have the same skills that they have, you can still learn their language; interactional expertise. You can talk like a developer without actually being able to do development. For example, learn what third party libraries are used by the developers, and for what purpose, so that you can question their function as a tester.

Work to build credibility with people who matter. If you don't join a team with instant status, remember that this can be built by both direct and indirect means. Cultivating a good reputation with people in other roles may help create respect with those developers who are initially dismissive of you as a tester.


* By the way, Margaret has an excellent article on this topic in the upcoming October edition of Testing Trapeze.

Tuesday, 23 September 2014

The context driven testing community

At the recent Let's Test Oz conference, James Bach presented a model of the context driven testing community. His diagram showed the community split across three levels of engagement where an "inner circle" contained those capable of deep intellectual exchange; committed innovators and philosophers.




James talked about the community in his signature blunt manner, with straightforward language of cliques, pretenders and lobbyists. He assigned the task of niceties to a greeter, the friendly face of welcome, and coupled these people with guides, who identify and elevate people with potential.

James spoke plainly of an exclusive and elitist culture; by definition 'a select group that is superior in terms of ability or qualities to the rest of a group or society'. I believe that James is comfortable in this type of environment, which is similar to the way that he described the ISST on Twitter earlier this year:


My concern is that this rhetoric of exclusion and elitism creates the impression that the context driven testing community is actually a crowd, a commotion, or even a cult.

I believe the community has grown beyond a central clique. I would like to see it represented in a more inclusive way. I see a group of people that are interested in furthering their professional skills; where intent rather than commitment is the ticket to entry.

As such, I'd like to see us adopt an inclusive model:



If it looks a bit like I turned things inside out, then you're on the right track. Let me explain.

Someone with no knowledge of context driven testing is likely to encounter a leader of the community first. These are the people with the greatest reputation and professional presence. To a newcomer, these leaders may appear to be offering a lone dissenting opinion.

Consequently, the leaders of the community are not hidden at its centre. They are the public face of context driven testing, and most likely to be approached by those who are eager to learn more. I am certain that James is dealing with more enquiries about context driven testing every day than I am!

Rather than expecting a greeter to welcome people, a path should be marked so that anyone in the community can simply direct the newcomers to this route. I believe there are a common set of first steps that any person with an intent to learn more about context driven testing may take. These should be known and accessible.

This model suggests six ways that people become involved with the community:

  • Readers - start reading testing articles and magazines, e.g. Testing Trapeze
  • Followers - subscribe to context driven testing blogs or follow the Twitter accounts of people who are active in the community
  • Viewers - watch a testing presentation or conference talk online
  • Event Attendees - participate in a local testing MeetUp group or attend a testing conference, e.g. Let's Test 
  • Students - attend a training course on context driven testing, e.g. Rapid Software Testing
  • Inexperienced Practitioners - try a new testing practice in your workplace, e.g. visual test coverage modelling

Newcomers should feel from the very beginning that they have walked into the middle of an inclusive environment. Rather than joining the outer edge of an intellectual clique, they are in the midst of a sphere of possibility. This model offers clarity in the growth and progression that is possible within the context driven testing community.

Commitment and reputation are implicit in the layers of the model. From the centre, where people are consumers of information, a person may progress to participating in the community with an active voice.

Contribution is naturally associated with challenge, as by expressing an opinion there's a chance that others will disagree with it. The community ethos is that "no one is entitled to an unchallenged opinion". I simply suggest we move the challenge from our doormat and place it at a point where people are better prepared to respond appropriately.

Finally, the strength of the community is wider than researchers, philosophers, and innovators. Those who are truly committed will naturally aspire to the highest levels, and can do so in a variety of contexts. There are many opportunities, and most of those who operate at this level will happily assist others who want to develop as leaders.

The context driven testing community should articulate the ways in which people can join, market the opportunities for personal development, and encourage newcomers to grow the craft. Creating an inclusive model of the community is a first step in demonstrating the nature of a group that has grown beyond an elite club.

Friday, 19 September 2014

Test Leadership

Fiona Charles began her Test Leadership tutorial at Let's Test Oz by sharing, in her own words, a definition of leadership from Jerry Weinberg:

Leaders create a space where people are empowered to do their best work 

She then offered her own version of this definition, which differed only by one word:

Leaders create a space where people are inspired to do their best work 

This set the scene for an interactive tutorial that provided clarity about how leaders are defined, how they act, and how they grow.

Defining a leader

The first exercise asked us to reflect on our experiences with testing leaders. In small groups, we were asked to share our stories, and discuss the skills and strengths that these leaders possessed.

I heard about:
  • a leader who could accurately assess the abilities of those around them and determine which tasks each person would find interesting, challenging, or rewarding.
  • a leader who worked out how to successfully motivate people so that they became driven to seek knowledge and achieve independently.
  • a leader who could link people together, within organisations and further afield, creating mutually beneficial relationships where none had previously existed.

I thought that the experiences of my group could be summarised by one word; connection. In each example it seemed that the leader was creating connection; between people and tasks, between people and what motivated them, or simply between people.

When it came time for the class to share, it became apparent that we had focused on identifying the actions of leaders rather than the attributes that they possess. Fiona prompted us to think more about the characteristics being demonstrated.

In my group, connection was what these leaders did, but how did they do it? With thought, I saw that the three stories shared above had illustrated leadership that was perceptive, persuasive, and astute.

From the long wishlist of attributes for leadership that emerged from the class, I noted three other traits that resonated with me; courage, intuition, and flexibility.

Being a leader

The next exercise saw the class split into two groups that were each set a challenge: to invent a test leadership problem that the other team would be required to solve. We were given 45 minutes, with one member of each group acting as an observer.

Fiona then led a classroom discussion on the group dynamics that had appeared within each team during the exercise. The team members had an opportunity to share their observations, followed by the nominated observers. It was interesting to hear the class reflect on how leaders had emerged.

In both groups, the people that the group identified as leaders were those who spoke first. They were also the people who were first to pick up a pen to start recording the thoughts of the group. I found myself being labelled as a leader, but felt cheated that this label came simply from a series of actions rather than from any specific personal attributes that I held.

In the second half of this exercise each team received their problem, then worked to solve it while the other team observed. In both teams the leadership dynamic shifted from the first half of the exercise as those who had originally been labelled as leaders, myself included, made a conscious effort to avoid adopting the role again.

My most striking observation from the latter part of this exercise was that the environment in which we collaborate is very important.

The first team set up two lines of chairs that faced towards an individual at a flip chart who took notes. Communication in this group largely travelled back and forth through the single individual at the front of the room. The leader was the person who literally led the discussion from the front.

The second team set up a circle of chairs so that everyone faced each other. Communication between this group was more collaborative, in that people felt they were speaking to each other instead of the note taker. A single leader was harder to identify, as many people had equal contributions to the conversation and conclusions of the team.

Growing a leader

We finished the tutorial with an opportunity to reflect on what we had learned. As we sat in silence I realised something that had been eluding me for months: why people have started to call me a leader. I have felt confused that, even though I haven't consciously changed anything about myself, this label has appeared.

This tutorial clearly demonstrated to me that people see leadership as actions. Speaking first. Taking the pen. To be seen as a leader, all you need to do is start doing. When you act like a leader, the label of leader naturally follows.

However, Fiona led me to realise that it is the characteristics of a leader that distinguish good from bad. Our personal attributes are what separate a courageous action from a stupid one, an intuitive response from an indecipherable one, or a flexible plan from a fickle one.

To grow as a leader, I need to identify the personal attributes behind my leadership actions. It is those characteristics that I should look to develop further in order to feel truly comfortable in a leadership role.

Friday, 12 September 2014

Heuristics and Oracles

Heuristics and oracles may seem like inaccessible concepts for new testers. The words sound academic, removed from the reality of what a tester does every day. In fact they are immensely useful tools for critical thinking.

What are heuristics and oracles, and why should you learn more about them?

Heuristics

Imagine that I want to eat a pickle. My pickles are stored in a large glass jar. In my household the last person to eat a pickle was my husband. He has closed the jar tight. On my first attempt I fail to open it.

What do I do next?

I check that I'm turning towards the left to loosen the lid and try again. Then I retrieve a tea towel to establish a better grip when twisting the lid of the jar. Finally, in some frustration, I go and locate my husband. He successfully opens the jar.

When faced with a jar that will not open there are a number of things that I know are worth trying. These are my jar opening heuristics. When I am instructed to test a software application there are a number of things that I know are worth trying. These are my test heuristics.

Heuristics are simply experience-based techniques for problem solving, learning, and discovery. [Ref.]

Every tester will have their own set of heuristics that guide their testing every day. These are innate and developed through experience. The value of learning more about heuristics is in discovering how other people think, and becoming capable of describing our own thinking.

When I run out of inspiration during my testing, there are numerous heuristics that might prompt my next move. Rather than relying on my own brain, two of my favourite resources list a variety of techniques to apply based on the experiences of other testers. These are:

  • The Test Heuristics Cheat Sheet by Elisabeth Hendrickson
  • The Heuristic Test Strategy Model by James Bach
This insight into how others think allows me to introduce variety in my own approach. Instead of consistently finding the same sort of bugs, I broaden my horizons. It's like a single person adopting the mantra of "two heads are better than one". James Lyndsay illustrates this difference with a nifty visualisation in his blog post Diversity matters, and here's why.
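As a concrete illustration of how a heuristic prompts test inputs, here is a minimal sketch of what a "data type attack" on a free-text field might suggest; the values are illustrative, not a definitive list:

```python
# Illustrative inputs that a "data type attack" heuristic might prompt
# when testing a single free-text field (not a definitive list):
text_field_attacks = [
    "",                            # empty input
    "   ",                         # whitespace only
    "a" * 10000,                   # a very long string
    "Ω≈ç√ 日本語",                  # non-ASCII characters
    "<script>alert(1)</script>",   # markup where plain text is expected
    "'; DROP TABLE users; --",     # a classic SQL injection string
    "0", "-1", "3.14159",          # numbers where words are expected
]

for value in text_field_attacks:
    print(repr(value))  # each one becomes a test idea for the field
```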

Heuristics also give me the words to describe my testing. When questioned about how I discovered a bug my response had always been a nonchalant shrug; "I just played with it". Heuristics changed the way I communicated my testing to others. Once I could clearly articulate what I was doing I gained credibility.

Oracles

Imagine that I go to lunch with a friend. I enter a restaurant at 12pm on Thursday. After an hour enjoying a meal, I leave the restaurant at 1pm on Friday. Although I have experienced only one hour, the world around me has shifted by a day.

How do I know there's a problem here? 

I may have several notifications on my mobile phone from friends and family wondering where I am. I may have a parking ticket. I may spot somebody reading a Friday newspaper.

There are a number of ways in which I might determine that I have skipped a day of my life. These are my time travelling oracles. There are a number of ways in which I might determine that I have discovered a bug in a software application. These are my test oracles.

Oracles are simply the principles or mechanisms by which we recognise a problem. [Ref.]

The test oracles that I consciously use most often are those described by Michael Bolton in Testing without a Map. This article describes the original mnemonic of HICCUPPS; history, image, comparable products, claims, user expectations, product, purpose, and statutes. The list has since been extended, which Michael describes in his blog post FEW HICCUPPS.

When I find a bug during my testing I always consider why I think that I have found a bug. I don't like to cite my "gut feeling" or claim "it's a bug because I said so!". Oracles help me to discover the real reason that I think there is a problem.

Knowing my oracle means that I can clearly explain to developers and business stakeholders why the users of the application may agree that I have found a bug. This means that I am much more effective in advocating for the bugs I raise, so they usually result in a change being made.

If I cannot associate the problem with an oracle, then it makes me question whether I have really found a problem at all. I believe this self-imposed litmus test removes a lot of noise from my bug reporting.
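To show how an oracle might look in code, here is a minimal sketch of a "comparable products" check, where a trusted reference implementation tells us what the function under test should return; my_sort is a hypothetical function under test:

```python
def my_sort(items):
    """Hypothetical function under test: a simple insertion sort."""
    result = []
    for item in items:
        i = 0
        while i < len(result) and result[i] <= item:
            i += 1
        result.insert(i, item)
    return result

def test_sort_agrees_with_reference():
    data = [3, 1, 2, 1]
    # Oracle: Python's built-in sorted() is the comparable product.
    # A mismatch is the principle by which we recognise a problem.
    assert my_sort(data) == sorted(data)

test_sort_agrees_with_reference()
```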

Thursday, 4 September 2014

Ten things to read after Agile NZ

I've just spent two great days at Agile NZ, where plenty of excellent speakers gave me a lot to ponder. Here's a collection of ten articles that reflect the stories, tools, and ideas that I will take away from the conference.

Note that although I've referenced the speaker who introduced me to the material, in most cases they didn't specifically endorse the articles that I have linked to.

The US B-2 bomber crash in Guam [3 minute read]
A cautionary tale of edge case system use and communication failure.
Speaker: Gojko Adzic

Single-Loop and Double-Loop Learning Model [3 minute read]
An explanation of the two ways that we can learn from our experiences.
Speaker: Steph Cooper

The Ladder of Inference [10 minute read]
Examines how we reach conclusions, can be used to improve communication.
Speaker: Steph Cooper

Shifting from unilateral control to mutual learning [20 minute read]
Explains the characteristics of two mental models; unilateral control and mutual learning.
Speaker: Steph Cooper

Shu-Ha-Ri [3 minute read]
A way of thinking about learning techniques.
Speaker: Craig McCormick

Visual Test Modelling [10 minute read]
An examination of how to approach the creation and evolution of a Visual Test Model.
Speakers: Adam Howard & Joanna Yip

OODA Loop [10 minute read]
The process that defines how humans react to stimulus; observe, orient, decide, act.
Speaker: Bruce Keightley

The Palchinsky Principles [3 minute read]
Three principles for innovation.
Speaker: Gojko Adzic

Digital by Default Service Standard [10 minute read]
A set of criteria for digital teams building UK government services.
Speaker: Ben Hayman

6 Days to Air [3 minute read | 40 minute watch]
Learn about the six-day production schedule for writing, recording, and animating a South Park episode.
Speaker: Ben Ross

Wednesday, 27 August 2014

How to create a visual test coverage model

Creating a visual test coverage model to show test ideas in a mind map format is not a new idea. However it can be a challenging paradigm shift for people who are used to writing test cases that contain linear test steps. Through teaching visual modelling I have had the opportunity to observe how some people struggle when attempting to use a mind map for the first time.

Though there is no single right way to create a visual test coverage model, I teach a simple approach to help those testers who want to try using mind maps but aren't sure where to begin. I hope that, from this seed, as people become confident in using a mind map to visualise their test ideas, they will adapt this process to suit their own project environment.

Function First

The first step when considering what to test for a given function is to try and understand what it does. A good place to start a mind map is with the written requirements or acceptance criteria.

Imagine a story that includes developing the add, edit, view, and delete operations for a simple database table. The first iteration of the visual test coverage model might look like this:


Collaborate

Next consider whether all the behaviour of this function is captured in the written requirements. There are likely to be items that have not been explicitly listed. The UI may provide a richer experience than was originally requested. The business analyst may think that "some things just go without saying". There may be application level requirements that apply to this particular function.

Collaboration is the key to discovering what else this function can do. Ask a business analyst and a developer to review the mind map to be sure that every behaviour is captured. This review generally doesn't take long, and a quick conversation early in the process can prevent a lot of frustration later on.

Imagine that the developer tells us that the default design for view includes sort, filter, and pagination. Imagine that the business analyst mentions that we always ask our users to confirm before we delete data. The second iteration of the visual test coverage model might look like this:
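
Continuing the same illustrative sketch, the behaviours discovered through collaboration become sub-branches of the operations they belong to:

    # Second iteration: behaviours surfaced by the developer and the business
    # analyst are nested under the relevant operations.
    model = {
        "Simple table": {
            "Add": {},
            "Edit": {},
            "View": {"Sort": {}, "Filter": {}, "Pagination": {}},
            "Delete": {"Confirmation": {}},
        }
    }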


Think Testing

With a rounded understanding of what the function does, the next thing to consider is how to test it.

For people who are brand new to using a mind map, my suggestion is to start by thinking of the names of the test cases that they would traditionally scope. Instead of writing down the whole test case name, just note the key word or phrase that differentiates that case from the others. This is a test idea.

Test ideas are written against the behaviour to which they apply. This means that tests and requirements are directly associated, which supports traceability for audit purposes.

Imagine that the tester scopes a basic set of test ideas. The third iteration of the visual test coverage model might look like this:
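
In the same hypothetical sketch, each test idea becomes a leaf beneath the behaviour it exercises. The idea names below are invented examples rather than a recommended set.

    # Third iteration: test ideas as leaf nodes attached to behaviours.
    # An empty dict marks a node with no further breakdown.
    model = {
        "Simple table": {
            "Add": {"mandatory fields only": {}, "all fields": {}, "duplicate record": {}},
            "Edit": {"change every field": {}, "cancel without saving": {}},
            "View": {
                "Sort": {"ascending": {}, "descending": {}},
                "Filter": {"no matching records": {}},
                "Pagination": {"single page of results": {}},
            },
            "Delete": {"Confirmation": {"accept": {}, "decline": {}}},
        }
    }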


Expand your horizons

When inspiration evaporates, the next challenge is to consider whether the test ideas captured in the model are sufficient. There are some excellent resources to help answer this question.

The Test Heuristics Cheat Sheet by Elisabeth Hendrickson is a quick document to scan through, and there is almost always a Data Type Attack that I want to add to my model. The Heuristic Test Strategy Model by James Bach is longer, but I particularly like the Quality Criteria Categories that prompt me to think of non-functional test ideas that may apply. Considering common test heuristics can help achieve better test coverage than when we think alone.

Similarly, if there are other testers working in the project team, ask them to review the model. A group of testers with shared domain knowledge and varied thinking is an incredibly valuable resource.

Imagine that referring to test heuristic resources and completing a peer review provides plenty of new test ideas. The fourth iteration of the visual test coverage model would have a lot of extra branches!
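
To give a flavour of that growth in the running sketch, a single Data Type Attack prompt might spawn a cluster of new leaves under one behaviour. As before, the specific ideas are invented for illustration.

    # Fourth iteration (excerpt): heuristic prompts add extra test ideas
    # beneath an existing behaviour in the model sketched earlier.
    model["Simple table"]["Add"].update({
        "very long strings": {},
        "special characters": {},
        "leading and trailing whitespace": {},
    })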

Lift Off!

From this point the visual test coverage model can be used in a number of ways: as a base for structured exploratory testing using session-based testing, as a visual representation of a test case suite, as a tool to demonstrate whether test ideas are covered by automated checks or testing, or as a radar to report the progress and status of test execution. Regardless of use, the model is likely to evolve over time.
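
As a small sketch of the radar idea, assuming the team keeps a record of which leaf-level ideas have been executed (the record format here is my assumption), a short traversal of the nested-dict model can summarise progress:

    # A sketch of progress reporting: walk the model, collect every leaf
    # (test idea), and compare against a hypothetical set of executed ideas.
    def leaves(node, path=()):
        """Yield the path to every leaf node in the nested-dict model."""
        if not node:
            yield path
        for name, child in node.items():
            yield from leaves(child, path + (name,))

    executed = {("Simple table", "Add", "all fields")}  # ideas marked as done
    ideas = list(leaves(model))
    done = sum(1 for idea in ideas if idea in executed)
    print(f"{done}/{len(ideas)} test ideas executed")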

I hope that this process encourages those who are new to visual test coverage modelling to try it out.

Tuesday, 12 August 2014

Context Driven Change Leadership

I spent my first day at CAST2014 in a tutorial facilitated by Matt Barcomb and Selena Delesie on Context Driven Change Leadership. I thoroughly enjoyed the session and wanted to share three key things that I will take from this workshop and apply in my role.

Change models as a mechanism for feedback

The Satir Change Model shows how change affects team performance over time.


Selena introduced this model at the end of an exercise designed to demonstrate organisational change. She asked each of us to mark where we felt our team was by placing an X at the appropriate point on the line.

Most of the marks were clustered in the integration phase. There were a couple of outliers in new status quo and a single person in resistance. It was a quick and informative way to gauge the feeling of a room full of people who had been asked to implement change.

I often talk about change in the context of a model, but had never thought to use it as a mechanism for feedback; this is definitely something I will try in future.

Systems Thinking

Matt introduced systems thinking by talking about legs. If we were to consider each bone or muscle in the leg in isolation, then they would mean less than if we considered the leg as a whole.

Matt then drew a parallel to departments within an organisation. When people focus on their individual pieces, but not on the system as a whole, there is opportunity for failure.


Matt spoke about containers, differences, and exchanges (the CDE model by Glenda Eoyang [1]). These help identify the opportunities to manipulate connections within a complex system.

Containers may be physical, like a room, but they can also be implicit. One example of an implicit container that was discussed in depth was performance reviews, which may drive behaviour that impacts connections between individuals, teams and departments in both positive and negative ways.

Differences may be obvious, like gender, race, culture, or language, or subtle, like the level of skill within a team. To manipulate connections you could amplify a difference to create innovation, dampen a difference to remove harmful behaviour, or choose to ignore a difference that is not important.

Exchanges are the interactions between people. One example is how communication flows within the organisation: is it hierarchical via a management structure, or does it move freely among employees? Another is mood: when someone comes to work in a bad mood, they can lower the morale of those around them. Conversely, when one person is excited and happy, they can improve the mood of the whole team.

In our end-of-day retrospective, Betty took the idea of exchanges further:


How will I apply all this?

I have spent a lot of time recently focused on my team. When I return to work, I'd like to step back and attempt to model the wider system within my own organisation. Within this system I want to identify what containers, differences, and exchanges are present. From this I will have the information to influence change through connections, instead of solely within my own domain.

Fantastic Facilitation

Matt and Selena had planned a full-day interactive exercise to take a group of 30 people through a simulated organisational change.

We started by becoming employees of a company tasked with creating wind catchers. The first round of the exercise was designed to show the chaos of change. I was one of many who spent much of this period feeling frustrated at a lack of activity.

At the start of round two, Erik Davis pulled a bag of Lego from his backpack. He usurped the existing chain of command in our wind catcher organisation to ask Matt, in his role as "the market", whether he had any interest in wind catchers made from Lego. As a result, a small group of people who just wanted to do something started to build Lego prototypes.

Matt watched the original wind catcher organisation start to splinter, and came over to Erik to suggest that the market would also be interested in housing. Since houses were a much more appropriate and easier item to build from Lego, there was a rapid revolt. Soon I was one of around seven people working in a new start-up, located in the foyer of the workshop area, building houses from Lego.


There were a lot of interesting observations from the exercise itself, but as someone who also teaches, I was really impressed by the adaptability of the facilitators. Having met Matt at TestRetreat on Saturday, I knew that he had specially purchased a large quantity of pipe cleaners for the workshop. Now here we were using Lego to build houses, which was surely not what he had in mind!

When I raised this during the retrospective, both Matt and Selena gave excellent teaching advice.

Selena said that when she first started to teach workshops, she wanted them to go as she had planned. What she had since discovered was that if she gave people freedom, within certain boundaries, then the participants often had a richer experience.

Matt expanded this point by detailing how to discover those boundaries that should not move. He tests the constraints of an exercise by removing and adding rules, thinking in turn about how each affects the ultimate goal of the activity.

As a result of this workshop I intend to challenge some of the exercises that I run. I suspect that I am not providing enough freedom for my students to discover their own lessons within the learning objective I am ultimately aiming to achieve.

Sunday, 3 August 2014

Creating a common test approach across multiple teams

I was recently involved in documenting a test strategy for a technology department within a large organisation. This department runs an agile development process, though the larger organisation does not, and it has around 12 teams working across four different applications.

Their existing test strategy document was over five years old and no longer reflected the way that testers were operating. A further complication was that every team had moved away from the original strategy in a different direction and, as a result, there was a lack of consistent delivery from testers across the department.

To kick things off, the Test Manager created a test strategy working group with a testing representative from each application. I was asked to take a leadership role within this group as an external consultant, with the expectation that I would drive the delivery of a replacement strategy document. After an initial meeting of the group, we scheduled our first one-hour workshop session.

Before the workshop

I wanted to use the workshop for some form of Test Strategy Retrospective activity, but the one I had used before didn't quite suit. In the past I was seeking feedback from people with different disciplines in a single team. This time the feedback would be coming from a single discipline across multiple teams.

As preparation for the workshop, each tester from the working group was asked to document the process that was being followed in their team. Though each documented process looked quite different, there were some commonalities. Upon reading through these, I decided that the core of the new test strategy was the high-level test process that every team across the department would follow, and that finding this would be the basis of our workshop.

I wanted to focus conversation on the testing activities that made up this core process without the group feeling that other aspects of testing were being forgotten. I decided to approach this by starting the workshop with an exercise in broad thinking, then leading the group towards specific analysis.

When reflecting on my own observations of the department, and reading through the documented process from each application, I thought that test activities could be categorised into four groups.

  1. Every tester, in every team in the department, does this test activity, or should.
  2. Some testers do this activity, but not everyone.
  3. This test activity happens, but the testers don't do it. It may be done by other specialists within the team, other departments within the organisation, or a third party.
  4. This test activity never happens.

I wrote these categories up on four coloured pieces of paper:



At the workshop

To start the workshop I stuck these categories across a large wall from left to right.

I asked the group to reflect on what they did in their roles and write each activity on an appropriately coloured post-it note. For example, if I wrote automated checks for the middleware layer, but thought that not everyone would do so, then I would write this activity on a blue post-it note, blue being the colour of the 'some testers' category.

After five minutes of thinking, everyone got up and stuck their post-it notes on the wall under the appropriate heading. We stayed on our feet through the remainder of the session.

The first task with this information was to identify activities that appeared in multiple categories. There were three or four instances of disagreement. It was interesting to talk through the reasoning behind the choices and determine the final location of each activity.

Once every testing activity appeared in only one place we worked across the wall backwards, from right to left. I wanted to discuss and agree on the areas that I considered to be noise in the wider process so that we could concentrate on its heart.

NEVER
The never category made people quite uncomfortable. The test activities in this category were being consistently descoped, even though the group felt that they should be happening in some cases. There was a lot of discussion about moving these activities to the sometimes category. Ultimately we didn't: we wanted to reflect to the business that these activities were consistently being treated as unimportant.

OTHERS
As we talked through what others were doing, we annotated each activity with those responsible for it. The level of tester input was also discussed, as this category included tasks happening within the team. For example, unit testing was determined to be the developer's responsibility, but the tester would be expected to understand the coverage it provided.

SOMETIMES
When we spoke about what people might do, most activities ended up shifting to the left or the right. Either they were items that were sometimes completed by the tester when they should have been handled elsewhere, or they were activities that should always be happening.

EVERYONE
Finally we talked through what everyone was doing. We agreed on common terminology where people had referred to the same activities using different labels. We moved the activities into an agreed end-to-end test process. Then we talked through that process to assess whether anything had been forgotten.

After the workshop

At the end of the hour, the group had clarity about how their individual approaches to testing would fit together in a combined vision. The test activities that weren't common were still captured, and those activities outside the tester's area of responsibility were still articulated. This workshop created a strong common understanding within the working group, which made the process of formalising the discussion in a document relatively easy. I'd recommend this approach to others tasked with a similar brief.