Wednesday, 28 January 2015

Visual Test Models & State Transition Diagrams

This article was originally printed in the November edition of Testing Circus

Visual test models and state transition diagrams are two means of visualising information. Though they may appear similar at a glance, each has its own structure and purpose. When used in software testing, they act as tools to guide entirely different test techniques.

This article compares these two types of visualisation in the context of a real-world example based on purchasing a book from e-commerce retailer Amazon.

Visual Test Models

Testing often begins by focusing on function first. Sometimes this is because testing is being driven by a requirements document or the acceptance criteria of a user story. Sometimes it’s because not all of the functionality is present in an iterative delivery model, so the testers have to concentrate on one piece at a time. Sometimes it is just due to personal preference.

When focusing on function, we are testing to see what the software can do. James Bach lists four key points of function testing in the Heuristic Test Strategy Model:

  1. Identify things that the product can do (functions and sub-functions). 
  2. Determine how you’d know if a function was capable of working. 
  3. Test each function, one at a time. 
  4. See that each function does what it’s supposed to do and not what it isn’t supposed to do.

A visual test model begins as a useful way to quickly capture the things that a product can do, acting as a note-taking tool during investigation of a function.

Imagine that we have received the Amazon Shopping Cart function to test. A screenshot of a cart with one item awaiting purchase is shown below:




A visual test model that identifies things that the product can do might look like this:


Once the visual test model shows what the product can do, it can then be used to capture how the tester could know whether each function was capable of working. Test ideas are added to each branch and might include negative tests – those that check the function doesn't do things it shouldn't.

When switching between documenting function and capturing test ideas, it may be useful to switch colours to clearly distinguish between the two types of information. A continued example for the Amazon Shopping Cart is shown below:



From this point the continued use of the model will vary depending on the test approach. It may be extended to show exploratory test sessions and the progress of testing. It may drive discussion around which ideas are suited to automated checking. It may be used as a reference to create pre-scripted test cases in a test case management tool.

A visual test model is a good way to capture quick thinking about what a product can do. It is a simple mechanism to capture test ideas that drive function testing.
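
For testers who prefer plain text to a mind-mapping tool, the same information can be captured in a lightweight structure. Below is a minimal sketch in Python; the functions and test ideas shown are illustrative assumptions about the cart rather than a complete model:

    # A visual test model captured as a nested structure: each function of the
    # shopping cart maps to a list of test ideas, including negative tests.
    shopping_cart_model = {
        "Change quantity": [
            "Increase the quantity and check the subtotal updates",
            "Set the quantity to zero - is the item removed?",
            "Enter a non-numeric quantity - is it rejected?",
        ],
        "Delete item": [
            "Delete the only item - is an empty cart message shown?",
            "Delete one of several items - do the others remain?",
        ],
        "Save for later": [
            "Save an item - does it move out of the active cart?",
            "Check the subtotal no longer includes the saved item",
        ],
    }

    # Print the model as an indented outline, one branch per function.
    for function, test_ideas in shopping_cart_model.items():
        print(function)
        for idea in test_ideas:
            print(f"  - {idea}")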

State Transition Diagrams

Once the pieces of the product have been functionally verified, the tester will want to examine how they interact. The shopping cart, the checkout and the order review page may all work in isolation, but unless they also integrate correctly with each other a customer will not be able to purchase a book.

Flow testing is where a tester does one thing after another. The Heuristic Test Strategy Model lists three key points about the flow testing technique:

  1. Perform multiple activities connected end-to-end; for instance, conduct tours through a state model.
  2. Don’t reset the system between actions.
  3. Vary timing and sequencing, and try parallel threads.

State transition diagrams offer a way to visualise paths through a product. Determining how activities are connected end-to-end will help the tester to identify varied flows through the application that might otherwise be missed.

To create a state transition diagram, start by identifying the scenario for a successful interaction. When purchasing a book using Amazon, the simplest flow through the product may look like this:



Then investigate the other scenarios that a user could encounter at each point in this simple flow. An expanded diagram may look like this:



Because the state transition diagram is focused on interaction, it is a good visualisation for flow testing. Traversing the diagram through varied paths, while varying the timing and the number of users, is a good way to explore a piece of software through a different lens.
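
A diagram like this can also be expressed as a transition table, which makes it easy to generate varied paths programmatically. Here is a rough sketch in Python; the states and transitions are assumptions based on the simple purchase flow above, not Amazon's actual behaviour:

    import random

    # The purchase flow as a state transition table: each state maps to the
    # actions available from it and the state that each action leads to.
    transitions = {
        "Browsing": {"add to cart": "Cart"},
        "Cart": {"proceed to checkout": "Checkout", "remove item": "Browsing"},
        "Checkout": {"confirm payment": "Order placed", "return to cart": "Cart"},
        "Order placed": {},  # terminal state
    }

    def random_flow(start="Browsing", max_steps=10):
        """Take a random walk through the state model to suggest one flow test."""
        state, path = start, [start]
        for _ in range(max_steps):
            if not transitions[state]:  # stop once a terminal state is reached
                break
            action, state = random.choice(list(transitions[state].items()))
            path.append(f"--{action}--> {state}")
        return " ".join(path)

    print(random_flow())

A traversal like this won't replace a tester's judgement about timing or parallel users, but it is a cheap way to surface sequences that a hand-drawn happy path can miss.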

Monday, 19 January 2015

Generic Testing Personas

Personas are a tool to consciously adopt the habits and feelings of different people. When used during testing, they can help us to discover different types of problems.

Traditionally, personas are developed for a specific application, each describing a rich background and context that shape the actions of a user. I think there is value in using the same concept to model stereotypical software users for testing purposes alone.

Here are six generic testing personas that you could adopt when completing a testing task.

Manager Maria

Maria is a busy executive who interacts with the application between meetings. She is impatient and often not focused on her task, completing activities in haste. Maria will:

  • Stick to the quickest workflow through the application
  • Use shortcut keys
  • Fill in the minimum number of fields to get a result
  • Make mistakes in her efforts to get things completed quickly
  • Require fast responses and may repeat an action if the application takes too long to respond
  • Often be called to a meeting midway through a task

Hipster Hilary

Hilary likes to investigate new functionality and areas of the application that are outside of the mainstream. She is an early adopter and an avid explorer. Hilary will:

  • Investigate new features as soon as they become available
  • Explore all possible paths through a workflow to determine which she prefers
  • Frequently use areas of the application that are less popular
  • Have unusual data input compared to other clients e.g. different units of measure
  • Be accessing the application from an unusual browser, operating system or device

Careful Claire

Claire enjoys routine. She uses the same workflows each time that she interacts with the application, taking care to ensure that she is consistent and the information she provides is complete. Claire will:

  • Stick to popular features of the application
  • Notice and investigate any visible changes to these features, e.g. a new button is added
  • Complete every field possible when entering information
  • Be verbose when asked to enter notes of her own, e.g. a reason for editing a record
  • Be patient with long response times

Sneaky Shirley

Shirley likes to break things. She knows about common security holes in software and likes to explore the applications that she uses to feel confident about their ability to protect her information. Shirley will:

  • Enter SQL and JavaScript injection into application input fields (a few example probes are sketched after this list)
  • Manipulate URLs to attempt to access private information
  • Violate constraints on input fields by entering invalid information
  • Try to generate as many error messages as possible
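
As a rough illustration of the kind of input Shirley might try, here are a few classic probe strings sketched in Python; these are generic textbook payloads rather than a checklist, and real security testing goes far deeper:

    # Classic probe strings a Shirley-style tester might paste into input fields.
    # A robust application should treat all of these as inert text.
    injection_probes = [
        "' OR '1'='1",                     # SQL injection: always-true condition
        "'; DROP TABLE users; --",         # SQL injection: destructive statement
        "<script>alert('xss')</script>",   # JavaScript injection (reflected XSS)
        '"><img src=x onerror=alert(1)>',  # XSS that breaks out of an HTML attribute
    ]

    for probe in injection_probes:
        print(repr(probe))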

International Ioana

Ioana is on an overseas vacation. She periodically uses the application for specific reasons, e.g. to retrieve a piece of information or complete a single task. Ioana will:

  • Use the application outside of local business hours
  • Be accessing the application from multiple locations and time zones
  • Use a variety of browsers, operating systems and devices
  • Occasionally have poor network connectivity that is slow and unreliable
  • Be using a variety of keyboard layouts
  • Enter personal information that includes foreign language characters

Elder Elisabeth

Elisabeth is of an older generation with relatively little knowledge of computing. She has trouble understanding many software applications. Elisabeth will:

  • Use the application slowly, taking time to read each screen
  • Frequently use the 'Back' button to remind herself of previous information
  • Have the interface font of the application enlarged via browser settings or zoom
  • Require simple and clear interfaces in order to successfully complete a task
  • Seek out online help to assist her
  • Be using outdated technology including an older browser and operating system


Who else would you add to this list?

Wednesday, 7 January 2015

Behind the Scenes: Editor of Testing Trapeze

Our June edition of Testing Trapeze clashes with my European holiday plans, so Shirley Tricker has kindly agreed to take the reins as Guest Editor. This has prompted me to think about and record what I actually do as Editor of Testing Trapeze.

Testing Trapeze started in a rapid fashion. I had the idea on the 7th of January last year and our first edition was published on the 15th of February. I didn’t really know what I was getting myself into when we launched, but I had a clear vision of what I wanted to create. My role as Editor has evolved as I learned what was involved in realising my vision. Though I expect it will continue to change, this is a snapshot of how things currently run.

Planning

The first step in creating a magazine is finding people to write. I like to approach potential contributors at least three months in advance of our submission deadline.

Testing Trapeze follows a consistent structure. There are always five articles written by two New Zealanders, two Australians and one international contributor. There are other informal criteria that I try to meet in each edition - people from different cities, at least two women, no more than one person from my organisation, some people with a strong Twitter following, someone who is writing for a magazine for the first time, etc.

Delivering a quality line up within these parameters does require some forward planning. I use a spreadsheet to track past, present and potential contributors, as shown below:



Confirming Contributors

We share who will be contributing to our next edition in a ‘Next time in Testing Trapeze’ teaser. This serves a dual purpose by promoting our writers and making a public commitment on their behalf.

Prior to publishing this teaser, I send an email to all the people who will appear to confirm that they are actually still happy to write for our next edition. They usually are, though on occasion there is a late withdrawal and I have to find an alternative writer.

Submission Review

In the week prior to our submission deadline the writers send in their articles. I think it is important to gratefully acknowledge the receipt of each article immediately. I also remind the writer of our review process, and tell them to expect a reviewed version of their article within a week.

I then pair the writer and their article with a reviewer. I find this the trickiest part of my role as Editor: matching personalities and material so that the review process is a positive experience for both sides.

When I send the article to a reviewer, I ask them to comment on, or modify, the article directly. There is an expectation that any comments or changes will have an appropriate tone, so that their review output can be shared directly with the writer.

Sometimes the reviewer will know who wrote the article. If there are existing relationships between writer and reviewer that may cloud honesty then I generally keep it anonymous. When the writer is a first time contributor, I might ask the reviewer to focus on selecting a few pieces of feedback that are framed in an encouraging way.

In every instance I try to remain a middleman in the review process. There are many reasons for this. The reviewer may make comments they believe are fair and constructive that the writer interprets as harsh criticism. The sheer volume of review comments may be overwhelming and disheartening for the writer. The writer may become confused where the reviewer hasn't provided enough detail to guide them in making changes. I read through each review to create a consistent experience for contributors - reframing, adding or removing feedback as I feel is appropriate.

When sending a reviewed article back to the writer I always position the feedback as a set of suggestions. The writer has the final say in how many changes they'd like to make after a review. Ultimately it's their article and they need to be happy with what is published. I request a final version of the article within a week and often have to plead for author biographies and photographs too, as people are notoriously reluctant to provide these.

Design and Layout

Adam Howard does the bulk of the work to create the layout and design of Testing Trapeze. I kick off the process for a new edition by choosing a colour palette, then providing a set of images for Adam to select from. This is one of my favourite things to do.

I find our images using Google Image search, which includes tools to filter by colour palette and licensing. For example, if I want images about flying that are primarily red and labelled for non-commercial reuse:




I also share with Adam an initial opinion on the order that I would like our writers to appear on the cover and in the magazine. Often this changes when I complete a final review, but Adam requires a starting point to lay out a draft.

Halfway through last year I changed the logic I applied to ordering contributors on our cover. Our earlier editions featured the international contributor in the headline position. I started to feel that this undermined our focus as a magazine for Australia and New Zealand. Now the international contributor is always listed last. For other names, the order is loosely based on how well known they are in the local testing community.

I choose the order of articles in the magazine based on their topic and tone. I apply some general rules to this. The first article often has a broad appeal. Sometimes a pair of articles cover a similar area, despite there being no theme to our editions. These are split across the second and fourth positions in the magazine so that they are separated in the reader’s mind. I often place the longest article in the edition in the middle.

Editorial

Though it is a relatively easy task, I dread writing the editorial. I think this stems from my own view when reading a magazine - the editorial is in the way of what I really want to read. As Editor, I don’t feel like the star of the magazine. I’m simply creating a platform for others to shine. I try to keep my editorial short and focused on acknowledging all our contributors. I don’t want to distract from our consistently amazing content that speaks for itself.

Final Review

As I receive the final submissions from writers, they are saved into a shared Google Drive folder. Adam creates a draft version of the magazine, then I do a final review. When we published our first edition this was an intense process. Now that we’re established, this review usually includes a handful of cosmetic changes.

This is also the point at which I change my mind about the ordering of our articles, though I can rarely articulate the reasons why I want to shuffle things around. I feel that a rhythm appears when the articles are strung together. Our final note is static: it's always our international contributor. Sometimes the other notes have to move for the magazine to really sing.

Publish

Ajoy Singha, the Editor of Testing Circus, agreed to host Testing Trapeze on the Testing Circus website as an associated publication. He did this without seeing a single edition, and his instant support of my idea to create a new magazine was incredibly encouraging.

When we have a final edition I log in to the Testing Circus site to create a draft post that includes a photo of our cover and a list of the articles that are included. Initially we adopted the same format as Testing Circus for these posts and, now that we’ve published a few editions, I simply copy and paste from previous issues of Testing Trapeze to create new ones.

I email the final PDF to Ajoy for him to upload to the Testing Circus site. He updates the draft post with a link, then leaves it for me to review and publish. This works well, particularly as we are working across different timezones. We want to publish when our readers in Australia and New Zealand are actually awake!

Marketing

Once we publish, the final task is to let people know that our latest edition is available. I email all the contributors to the edition, both writers and reviewers, to let them know we have published. I tweet and update our Facebook page.

As people start to read the edition, I amplify any positive feedback that we receive. This echo usually lasts about a week, by which point I assume that we have reached everyone who is interested. I try to avoid generating noise in our social media accounts.

This approach to marketing relies heavily on Twitter, so I have recently set up a mailing list for Testing Trapeze subscriptions to capture readers who don't use this platform. This list will send one email per edition, or just six emails per year. I hope that we will see many people choose to use this option.

Testing Trapeze is a magazine that I am really proud to be a part of.

Friday, 19 December 2014

Cereal Box Course Summary

When I return to work after attending a training course, my colleagues and my boss usually want to know how it went. The conversation might go something like:

Them: "How was your course?"
Me: "Good."
Them: "What did you learn?"
Me: "Oh, lots of stuff."
Them: "Okay. Cool."

Though this is a slightly exaggerated example, I often find it difficult to remember and describe what I have learned. When I leave a course, my brain feels full of new ideas and enthusiasm. But, by the next morning, I have usually returned to thinking about other things.

As a trainer, I don't want my course attendees to return to work and have the conversation I've described above. Instead I want them to be articulate and passionate. One of the ways that I have attempted this is by using a cereal box course summary.

This is not an original idea. I heard about it from my colleague Ceedee Doyle, who had heard it from someone else; unfortunately, I don't know the original source. However, here's how it works.

Ask participants to construct a cereal box that captures everything they've learned on the course. What they put on the box follows the same conventions as a real packet of cereal.

The front of the box should show the name of the course, pictures, and a slogan.



The side of the box should list the ingredients of the course.



The back of the box should have highlights, reviews and testimonials.



I have used this activity during the last hour of a two-day training course. The participants had 30 minutes to make their cereal box, then we spent 30 minutes sharing their creations, reflections and feedback on the course as a whole.

I found the cereal box course summary a creative and fun activity to finish off a course. People are relaxed and talkative as they work. The cereal box captures a positive and high-level view of the course overall, which creates a favourable tone to end on.

I also like including an opportunity for reflection as part of the course itself. One of our students summarised the benefit of this in their testimonial, saying "I especially liked the final summing up which made me realise how much I’d learned." [1]

Finally, this activity gives each participant something quirky and concrete to take back to work with them. The appearance of the cereal box on their desk may initiate conversations about the training they attended. The writing on the box should support the conversation. Their colleagues can see what they have learned, and the format in which the information is presented is reflective of the interactive and engaging training environment that we work hard to create.

Sunday, 14 December 2014

Review the Future Retrospective

I was co-facilitating a client workshop recently, and I wanted to include an agile retrospective activity. It was my first introduction to the team and they were using a waterfall development methodology, so I didn't want to go too zany. However I wanted to know how they thought they could improve and, as a facilitator, I find the constant 'post-it notes then sharing' retrospective format to be quite boring to deliver.

I looked through Retr-O-Mat for inspiration and decided that the Remember the Future idea would form the basis of my activity:

[Image: the 'Remember the Future' activity description. Source: Retr-O-Mat]

I liked that the premise of this idea put the team in a forward-thinking mindset. However it wasn't quite the style I was after so, for the workshop, I chose to adapt the exercise in several ways.

To get the team talking to one another so that I could observe their dynamics, I wanted to create a more interactive collection activity rather than an individual one. I asked the team to break into groups of three people rather than working by themselves.

As the team weren't using iterations I changed the language to "Imagine your next release is the best ever!". To shift the focus from looking inwards, imagining only the team perspective, to looking outwards, imagining the reaction of others in the organisation, I asked each group to think about the reactions to their amazing release from their managers, business owners, end users and the team themselves.

Instead of jotting down notes or keywords, each group had to capture these reactions in a series of reviews that included a 5-star rating, a comment, and a name, e.g.

★★★★★ "Our key process is now 10 minutes faster! Amazing!" - Bob from Business

Once the reviews were complete, each group presented back to the team. It was interesting to see different themes emerge from each group, including feedback focused on successful delivery of key functional improvements to business people, unusually quick turnaround of tasks that improved flow of work, and simple recognition of the team's achievement.

After the presentations we returned to the activity as it was described in Retr-O-Mat. We asked the team to think about the changes they would have made to their process to receive these reviews. Suggestions for improvement appeared rapidly and, with this shared context, were relatively consistent across all the participants in the workshop.

I found this activity collected the type of information that we were seeking, while also keeping the team engaged and interactive in the workshop itself.

Tuesday, 2 December 2014

Conferences build community

Community is important to me. The primary reason that I volunteer my time to organise testing events is to offer people an opportunity to meet their peers and share ideas. It is the opportunity for interaction that I value, and I think that conferences are an essential part of this. Conferences build community.

A successful conference is not just about delivery of great content. It also provides a platform for every attendee to genuinely connect with another; an interaction that is the start of a supportive, inspiring or challenging professional relationship. When reflecting on a conference I find that the presentations may have faded from memory, but the conversations that spawned ongoing associations are much easier to recall.

As a conference organiser, I find that the responsibility for setting the tone of the conference weighs more heavily on me than selecting the ideas. Achieving success in the community aspect of a conference seems much more difficult than succeeding with the content.

And everything begins with the speaker selection.

I get frustrated when I feel that a list of speakers isn't a true mirror of the community behind a conference, but instead a distorted reflection suited to a fairground Hall of Mirrors. As a conference organiser, I am looking for strong presenters with innovative ideas who truly reflect the diversity of our profession. I am constantly conscious of creating a speaker line up that engages the brain and accurately shows who we are as a group.

This is a challenge and, when I consider diversity, I consider many attributes. As a woman in testing, I certainly think about the gender ratio in our speaker selection. But I also think about years of experience in testing, where people are employed, ethnicity, age and reputation. If I want the conference to offer something for everyone in the community, then I have to consider everyone in the community by focusing on what distinguishes us from each other.

I don't feel that I have ever had to select a speaker who didn't deserve their place. I simply consider diversity alongside the experiences and ideas that people bring. I think about the audience for a topic rather than the topic in isolation. There are instances when a proposal holds little appeal to me personally, but I feel it would be a great session for others within the community, both for its content and the opportunity to establish the presenter as an active voice.

Ideas are rarely innovative in every context. So considering ideas alone is an injustice to the community that the conference is for. I believe that every organiser should actively think about the people that they serve when selecting speakers.

When asked "What did you enjoy about the conference?", attendees at the recent WeTest Weekend Workshops referenced the topics, discussions, session and learning. I think we had fantastic content. However the strongest theme in responses to this question was the people. I believe this feedback reflects our effort as organisers to put the people of the community at the heart of our decisions on their behalf.



[Image: survey responses to "What did you enjoy about the conference?", WeTest Weekend Workshops 2014]



Wednesday, 19 November 2014

Different Ideas for Defect Management

I believe that a good defect management process should encourage and facilitate discussion between developers and testers. Frustration with the way in which defects are reported and managed may be due to a lack, or absence, of conversation.

The way that you manage defects should depend on your development methodology, the location of team members, and company culture. Yet it seems that many testers adopt a bug tracking tool as the only way to communicate problems, with little consideration given to establishing team relationships. Finding a bug is a perfect excuse for a tester to speak to a developer; utilise this opportunity!

Here are four different ideas for defect management, drawn from my own experiences, that pair the tools we use with developer interaction.

Bug tracking tool

I worked onsite with a client in South America, installing hardware and software then co-ordinating testing across the wider solution. The developers supporting this install were based in New Zealand, which meant that they were largely unavailable during South American business hours.

Though our software had a user interface, much of the functionality was performed by backend components. The information required to identify and fix problems was recorded in multiple log files stored in different locations.

During the day, I would identify problems, reproduce them, capture a clean set of log files, and then raise an appropriate defect in our bug tracking tool. The tool required a number of attributes to be set for each bug: priority, severity, component, etc. Each bug also had a title, a short description, and a set of attached logs that the developer could reference.
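
As a rough sketch of the shape of each report (the field names and values here are illustrative assumptions, not the actual schema of the tool we used):

    from dataclasses import dataclass, field

    @dataclass
    class BugReport:
        """Illustrative shape of a defect raised in the bug tracking tool."""
        title: str
        description: str
        priority: str                  # e.g. "High"
        severity: str                  # e.g. "Major"
        component: str                 # the backend component at fault
        attached_logs: list[str] = field(default_factory=list)

    # A hypothetical report; real titles, components and logs varied per defect.
    bug = BugReport(
        title="Transaction feed stalls after network drop",
        description="Steps to reproduce, expected and actual behaviour...",
        priority="High",
        severity="Major",
        component="Message broker",
        attached_logs=["broker.log", "gateway.log"],
    )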

In this environment, I felt that the tool was essential to manage a high volume of problems with associated logs. However communicating via the tool alone was ineffective. When the New Zealand based developer arrived at work, he would have an inbox full of notifications from the bug tracking system reflecting the problems resolved, remaining and discovered during our day. The volume of these messages meant that he occasionally missed information that was important, or prioritised his time incorrectly.

I initiated a daily Skype session for defect triage to explain which bug tracking notifications he should focus on, and why. This happened at the end of my day and the beginning of his. During this time he would try to ask enough questions to understand the complexities of each problem, so that he could provide a timely fix. These conversations helped us to create a rapid and effective defect turnaround.


Visual management board

I worked in a team developing a web application using a kanban development methodology. We focused on flow through the process, which meant that stories were rarely owned by individuals. Instead tasks across multiple pieces of functionality were shared among all the developers.

The team used a visual management board that occupied a large whiteboard alongside our workspace. This board was frequently referred to and updated throughout the day, not just at our morning stand up meeting. Upon completing a task, both developers and testers would determine their next activity by visiting the board.

We used the same board for defect management. If issues were discovered during testing, I would write each one on a post-it note and attach it to the board. In this team, issues generally manifested in the user interface and were relatively simple to reproduce. A post-it note usually captured enough information for the problem to be understood by others.

New defects would be picked up as a priority when a developer visited the board in search of a new task. They would place their avatar on the defect, and then speak to me about anything that they didn’t understand, or wanted to question the validity of.

As problems were resolved, the developer would commit their change to the repository and we would swap my avatar onto the task. Bugs would then move to “done” in the same manner as other tasks on the board.


Desk delivery

I worked in a team developing a web application using the scrum framework for agile development. In this team stories were adopted by one or two developers, who would work on tasks and take ownership of fixing any associated defects discovered during testing.

We had an electronic visual management board that was used regularly and updated often. There was also a physical visual management board, but this would only match the electronic version immediately after our daily stand up had occurred.

The piece of software that provided the electronic board also offered defect tracking functionality. In this organisation I was reluctant to use a defect tracking application, as I felt the team were at a real risk of communicating solely through tools despite being co-located. Instead I decided to record my bugs on post-it notes.

Unlike the previous scenario, in this situation there was little point in sticking these post-it notes on the physical visual management board. It wasn't being used often enough. Instead I would take them directly to the developer who was working on the story.

My post-it note delivery style varied depending on the developer. One developer was quite happy for me to arrive at his desk, read out what was written on each post-it, then talk about the problems. Another preferred that I show her each defect in her local version of the application so that she was sure she understood what to change.

The developers would work from a set of post-it note defects lined up along their desk. As they resolved problems they would return the associated post-it to me. Having to transfer the physical object increased the opportunity for conversation and helped create relationships. There was also a healthy level of competition in trying not to have the most post-it notes stuck to your desk!


Cloud-based visual model

I worked in a team developing a web application using an agile methodology. This team was co-located in one area of the office, as we all worked part time on other projects.

A portable visual management board was created and maintained by the business analyst using a large piece of cardboard. It was kept under his desk and only used during our daily stand up meeting to discuss our progress.

From a defect management perspective, this organisation prided itself on its trendy culture. Though a bug tracking tool existed, it was largely used by call centre staff to record customer issues in production.

In this team I decided to communicate information about my testing using a cloud-based visual model. Each person in the team had a MindMeister account. I used this software to create a mind map showing my test ideas, to reflect progress through testing, and to record defects, which were highlighted in red with an attached note explaining each problem.

When I completed a testing task, I would send the developers a link via instant messaging to the cloud-based visual model. They could see where the problems were, and would respond with questions if anything was unclear. They seemed to like being able to see defects within a wider context, and were quite positive about a nifty new tool!