Tuesday, 2 December 2014

Conferences build community

Community is important to me. The primary reason that I volunteer my time to organise testing events is to offer people an opportunity to meet their peers and share ideas. It is the opportunity for interaction that I value, and I think that conferences are an essential part of this. Conferences build community.

A successful conference is not just about delivery of great content. It also provides a platform for every attendee to genuinely connect with another; an interaction that is the start of a supportive, inspiring or challenging professional relationship. When reflecting on a conference I find that the presentations may have faded from memory, but the conversations that spawned ongoing associations are much easier to recall.

As a conference organiser, the responsibility for setting the tone of the conference weighs heavier on me than the responsibility for selecting ideas. Achieving success in the community aspect of a conference seems much more difficult than succeeding with the content.

And everything begins with the speaker selection.

I get frustrated when I feel that a list of speakers isn't a true mirror of the community behind a conference, but instead a distorted reflection suited to a fairground Hall of Mirrors. As a conference organiser, I am looking for strong presenters with innovative ideas who truly reflect the diversity of our profession. I am constantly conscious of creating a speaker line-up that engages the brain and accurately shows who we are as a group.

This is a challenge and, when I consider diversity, I consider many attributes. As a woman in testing, I certainly think about the gender ratio in our speaker selection. But I also think about years of experience in testing, where people are employed, ethnicity, age and reputation. If I want the conference to offer something for everyone in the community, then I have to consider everyone in the community by focusing on what distinguishes us from each other.

I don't feel that I have ever had to select a speaker who didn't deserve their place. I simply consider diversity alongside the experiences and ideas that people bring. I think about the audience for a topic rather than the topic in isolation. There are instances when a proposal holds little appeal to me personally, but I feel it would be a great session for others within the community, both for its content and the opportunity to establish the presenter as an active voice.

Ideas are rarely innovative in every context. So considering ideas alone is an injustice to the community that the conference is for. I believe that every organiser should actively think about the people that they serve when selecting speakers.

When asked "What did you enjoy about the conference?", attendees at the recent WeTest Weekend Workshops referenced the topics, discussions, sessions and learning. I think we had fantastic content. However, the strongest theme in responses to this question was the people. I believe this feedback reflects our effort as organisers to put the people of the community at the heart of our decisions on their behalf.



What did you enjoy about the conference?
WeTest Weekend Workshops 2014



Wednesday, 19 November 2014

Different Ideas for Defect Management

I believe that a good defect management process should encourage and facilitate discussion between developers and testers. Frustration with the way in which defects are reported and managed may be due to a lack, or absence, of conversation.

The way that you manage defects should depend on your development methodology, the location of team members, and your company culture. Yet it seems that many testers adopt a bug tracking tool as the only way to communicate problems, with little thought given to establishing team relationships. Finding a bug is a perfect excuse for a tester to speak to a developer; utilise this opportunity!

Here are four different ideas for defect management, drawn from my own experiences, that pair the tools we use with developer interaction.

Bug tracking tool

I worked onsite with a client in South America, installing hardware and software then co-ordinating testing across the wider solution. The developers supporting this install were based in New Zealand, which meant that they were largely unavailable during South American business hours.

Though our software had a user interface, much of its functionality was performed by backend components. The information required to identify and fix problems was recorded in multiple log files stored in different locations.

During the day, I would identify problems, reproduce them, capture a clean set of log files, and then raise an appropriate defect in our bug tracking tool. The tool required a number of attributes to be set for each bug: priority, severity, component, etc. Each bug also had a title, a short description, and a set of attached logs that the developer could reference.
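For illustration, a record in a tool like this might be modelled as the sketch below; the field names and enum values are hypothetical rather than the schema of the actual tool we used:

```python
from dataclasses import dataclass, field
from enum import Enum

class Priority(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

class Severity(Enum):
    MINOR = 1
    MAJOR = 2
    CRITICAL = 3

@dataclass
class Defect:
    """A single record in the bug tracking tool."""
    title: str
    description: str
    priority: Priority
    severity: Severity
    component: str
    logs: list[str] = field(default_factory=list)  # paths to the attached log files
```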

In this environment, I felt that the tool was essential for managing a high volume of problems with associated logs. However, communicating via the tool alone was ineffective. When the New Zealand-based developer arrived at work, he would have an inbox full of notifications from the bug tracking system reflecting the problems resolved, remaining and discovered during our day. The volume of these messages meant that he occasionally missed important information, or prioritised his time incorrectly.

I initiated a daily Skype session for defect triage to explain which bug tracking notifications he should focus on, and why. This happened at the end of my day and the beginning of his. During this time he would try to ask enough questions to understand the complexities of each problem, so that he could provide a timely fix. These conversations helped us to create a rapid and effective defect turnaround.


Visual management board

I worked in a team developing a web application using a kanban development methodology. We focused on flow through the process, which meant that stories were rarely owned by individuals. Instead, tasks across multiple pieces of functionality were shared among all the developers.

The team used a visual management board that occupied a large whiteboard alongside our workspace. This board was frequently referred to and updated throughout the day, not just at our morning stand up meeting. Upon completing a task, both developers and testers would determine their next activity by visiting the board.

We used the same board for defect management. If issues were discovered during testing, I would write each one on a post-it note and attach it to the board. In this team, issues generally manifested in the user interface and were relatively simple to reproduce. A post-it note usually captured enough information for the problem to be understood by others.

New defects would be picked up as a priority when a developer visited the board in search of a new task. They would place their avatar on the defect, and then speak to me about anything that they didn't understand or wanted to question the validity of.

As problems were resolved, the developer would commit their change to the repository and we would swap my avatar onto the task. Bugs would then move to “done” in the same manner as other tasks on the board.


Desk delivery

I worked in a team developing a web application using the scrum framework for agile development. In this team stories were adopted by one or two developers, who would work on tasks and take ownership of fixing any associated defects discovered during testing.

We had an electronic visual management board that was used regularly and updated often. There was also a physical visual management board, but this would only match the electronic version immediately after our daily stand up had occurred.

The piece of software that provided the electronic board also offered defect tracking functionality. In this organisation I was reluctant to use a defect tracking application, as I felt the team were at a real risk of communicating solely through tools despite being co-located. Instead I decided to record my bugs on post-it notes.

Unlike the previous scenario, in this situation there was little point in sticking these post-it notes on the physical visual management board. It wasn't being used often enough. Instead I would take them directly to the developer who was working on the story.

My post-it note delivery style varied depending on the developer. One developer was quite happy for me to arrive at his desk, read out what was written on each post-it, then talk about the problems. Another preferred that I show her each defect in her local version of the application so that she was sure she understood what to change.

The developers would work from a set of post-it note defects lined up along their desk. As they resolved problems they would return the associated post-it to me. Having to transfer the physical object increased the opportunity for conversation and helped create relationships. There was also a healthy level of competition in trying not to have the most post-it notes stuck to your desk!


Cloud-based visual model

I worked in a team developing a web application using an agile methodology. This team was co-located in one area of the office, as we all worked part time on other projects.

A portable visual management board was created and maintained by the business analyst using a large piece of cardboard. It was kept under his desk and only used during our daily stand up meeting to discuss our progress.

From a defect management perspective, this organisation prided itself on its trendy culture. Though a bug tracking tool existed, it was largely used by call centre staff to record customer issues in production.

In this team I decided to communicate information about my testing using a cloud-based visual model. Each person in the team had a MindMeister account. I used this software to create a mind map that showed my test ideas, reflected progress through testing, and recorded defects, which were highlighted in red with an attached note that explained each problem.

When I completed a testing task, I would send the developers a link via instant messaging to the cloud-based visual model. They could see where the problems were, and would respond with questions if anything was unclear. They seemed to like being able to see defects within a wider context, and were quite positive about a nifty new tool!

Tuesday, 11 November 2014

Women in Testing on Twitter

After speaking at a Girl Geek Dinner in mid October, I became acutely aware that my Twitter stream contained tweets that were mostly from men. Over the past month I have experimented with who I follow to try and correct this imbalance.

The categories are imperfect and the list is likely to be incomplete, but here are some of the women in testing that I would recommend following:

DISCLAIMER: I've grouped based on the reasons that I follow these women. Though there are many individuals who could appear in multiple categories, I wanted to share the primary reasons why I value their contributions on Twitter. This is one specific facet of their professional persona and, should you choose to follow them, you may see things quite differently.


Crème de la crème

“the best person or thing of a particular kind.”
Elisabeth Hendrickson and Lisa Crispin are among the most widely known and well regarded software testers in the world. With followers in the thousands, they are likely to be a part of your Twitter stream already. Though Elisabeth and Lisa often tweet about their lives outside of testing, they also share articles and tweets from their extensive professional networks that I would miss otherwise.

Community Leaders

“the person who leads a group or organization.”
As an organiser of WeTest and editor of Testing Trapeze, I like to tweet content from testers around New Zealand to promote the wonderful things that are happening in this part of the world. These are the women who adopt similar behaviour for the communities that they lead.

Alessandra Moreira is the only woman on the Association for Software Testing (AST) Board of Directors and a co-organiser for Weekend Testing Australia and New Zealand (WTANZ). Amy Phillips is the co-organiser for Weekend Testing Europe (WTE).

Anne-Marie Charrett was a co-organiser of Let's Test Oz, Anna Royzman was a co-organiser of the Conference of the Association for Software Testing (CAST) in 2014, Helena Jeret-Mäe is the content owner of the upcoming Nordic Testing Days in 2015, and Rosie Sherry organises TestBash annually as well as running the Ministry of Testing.

In initiatives that specifically focus on women, Jyothi Rangaiah is the editor of the Women in Testing magazine, and Lorinda Brandon runs the Women in Line initiative to get more women speaking at technology conferences.

Challengers

“a call to prove or justify something.”
There are those who generally seek to challenge opinion and question what they've heard. These women appear unafraid; they share and generate content to disrupt the status quo. Through these women I am exposed to new ideas and test my own assumptions.

Fiona Charles, Karen N Johnson, Trish Khoo, Lanette Creamer, Hilary Weaver, Kate Falanga and Natalie Bennett are all based in North America, Maaret Pyhäjärvi and Meike Mertsch are in Europe, and Kim Engel is in New Zealand.

Cheerleaders

“an enthusiastic and vocal supporter of someone or something.”
I was hesitant to use this term, as some may hold quite a negative connotation of a cheerleader, but I mean it in the sense of the definition above. These women are enthusiastic and vocal members of the testing community. They encourage, converse, and share information, usually in a friendly and upbeat way.

Parimala Hariprasad and Smita Mishra are in India, Anna Baik in the UK, Maria Kedemo in Europe, and Jean Ann Harrison and Teri Charles are in the US. They create a steady flow of positivity with links to a wide variety of content.

Constant

"occurring continuously over a period of time."
Claire Moss gets her own category as a live tweeter. Usually Claire is relatively quiet on Twitter. The exception is when she is attending testing events and her account erupts into a stream of constant activity. You'll almost feel like you've attended the conference itself.

Climbing

"to move to a higher position."
Finally there are the women who share and contribute excellent content on Twitter, but may be less well known than others listed here: Carol Brands, Jacky Fain, Alex Schladebeck, Elisabeth Hocke, Ioana Serban and Shirley Tricker.

*****

As with any post of this kind, I'm certain I've missed people who should be included. Please leave your recommendations in a comment below.

Thursday, 6 November 2014

Visual Test Ideas

There are some circumstances in which a mind map is not the best format to visualise your testing; in particular, where there are complex relationships between data in the application under test.

Visual test ideas can be useful when it feels clumsy to explain what you plan to test in words. If you need the assistance of a whiteboard to outline your thinking to others, this might indicate that you should record your test ideas pictorially instead of in writing.

As with other visual approaches, visual test ideas are a great way to engage people in the activities of testing and encourage collaboration that improves test coverage.

Here are three examples of different ways to record your test ideas visually.

Timelines

Imagine the Transaction History for a bank account. When you load the page, you see the last 30 days of transactions by default. You can change the date range for transactions returned by updating the start and end dates:



Transactions are only returned for the first 90 days of the specified range. If the specified date range is greater than 90 days, then the oldest transactions are discarded. Data is only available up to 180 days in the past; no transaction information is returned prior to that date. And, of course, there are no transactions available in the future.
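To make these rules concrete, here is a minimal sketch of the date-range logic as I read it. It assumes that the 90-day rule keeps the most recent 90 days of the range, since the oldest transactions are the ones discarded:

```python
from datetime import date, timedelta

def effective_range(start: date, end: date, today: date) -> tuple[date, date]:
    """Clamp a requested transaction date range to the stated rules."""
    # No transactions exist in the future.
    end = min(end, today)
    # Data is only available up to 180 days in the past.
    start = max(start, today - timedelta(days=180))
    # Ranges longer than 90 days discard the oldest transactions.
    if (end - start).days > 90:
        start = end - timedelta(days=90)
    return start, end

# Boundary test ideas fall straight out of the rules:
today = date(2014, 12, 2)
print(effective_range(today - timedelta(days=30), today, today))   # default range
print(effective_range(today - timedelta(days=91), today, today))   # just over 90 days
print(effective_range(today - timedelta(days=200), today, today))  # beyond the 180-day history
print(effective_range(today, today + timedelta(days=1), today))    # end date in the future
```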

Given these requirements, a number of different test ideas spring to mind. As these ideas are discussed with other people, they might be captured in a timeline format:


Using a timeline to explain these test ideas clearly shows how they relate to one another in time, in a way that a mind map format would not.

Buckets

Imagine that you have taken out a loan where your repayment obligations are tied to your income. When you earn beyond specific thresholds, your repayment amount will change. These thresholds depend on the type of earning you receive, whether salary, wages or adjusted income.

The example below is from Aaron Hodder. Using buckets marked with different thresholds, Aaron clearly shows how his test ideas relate to one another:



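As a sketch of the kind of threshold logic being exercised here (the thresholds, rates and boundary semantics below are invented for illustration, not the real scheme):

```python
import bisect

# Hypothetical annual income thresholds and the repayment rate
# that applies once earnings cross each one.
THRESHOLDS = [20_000, 48_000, 70_000]
RATES = [0.00, 0.10, 0.12, 0.15]

def repayment_rate(income: int) -> float:
    """Return the repayment rate for the bucket a given income falls into."""
    # bisect_right puts an income exactly on a threshold into the higher
    # bucket; whether "beyond" should include the exact threshold amount
    # is itself a test idea worth raising.
    return RATES[bisect.bisect_right(THRESHOLDS, income)]

# Test ideas cluster at the bucket boundaries:
for income in (19_999, 20_000, 20_001, 48_000, 70_001):
    print(income, repayment_rate(income))
```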
Venn Diagrams

Imagine a school grading system. Within a given subject, you can receive credits across one or many topics. For example, in Mathematics you can gain credits that apply to Calculus, Algebra, or Calculus & Algebra combined. The number of credits you have in each topic, and in total, will determine whether you are awarded a given qualification for a subject.
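To make the rules concrete, here is a small sketch using set operations; the credit values and the qualification rule are invented for illustration:

```python
calculus = {"C1", "C2", "C3"}       # credits that apply to Calculus
algebra = {"A1", "A2", "C2", "C3"}  # credits that apply to Algebra; C2 and C3 count for both

# The regions of the Venn diagram map directly onto set operations:
only_calculus = calculus - algebra
only_algebra = algebra - calculus
both = calculus & algebra
total = calculus | algebra

# A hypothetical qualification rule: at least two credits per topic
# and four credits in total across the subject.
qualified = len(calculus) >= 2 and len(algebra) >= 2 and len(total) >= 4
print(only_calculus, only_algebra, both, len(total), qualified)
```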

Nigel Charman has written about a team operating in this domain:

"In this case, the team are visualising examples of these complex business rules using Venn diagrams. The richness of the visualisation helps us wrap our brains around the complexity, acting as a shorthand form for discussion." [1]

An accompanying picture shows a team working around a whiteboard that is filled with various Venn diagrams:

Source: Visual Specification by Example
Instead of attempting to translate these scenarios into words, the same diagrams become the documented test ideas:

Source: Visual Specification by Example

Using Venn diagrams to capture these test ideas clearly shows the relationships between topics within subjects, in a way that a mind map format would not.

*****

There are many options for representing test ideas and the relationships between them by making use of engaging illustrations. Think laterally. Though a mind map is an obvious option to present visual information, it isn't always the best way for a tester to share their thoughts with their team.

Wednesday, 29 October 2014

Satir Interaction Model

Yesterday evening I had the opportunity to attend a Fiona Charles workshop on Delivering Difficult Messages [1]. Fiona spoke briefly about the Satir Interaction Model, which she presented as:

"I think we have an issue" -- Delivering unwelcome messages
Fiona Charles

The Satir Interaction Model describes what happens inside us as we communicate — the process we go through as we take in information, interpret it, and decide how to respond [2]. In the model above, there are four fundamental steps to going from stimulus to reply: intake, meaning, significance, then response [3].

These four steps are the Gerald Weinberg simplification of the original work of Virginia Satir. Weinberg has written about the Satir Interaction Model in the context of technical leadership, while Satir wrote for an audience of family therapists [4]. When compared side-by-side, the original work included some additional steps:

Satir Interaction Model
Steven M. Smith

Resolving communication problems

The Satir Interaction Model can be used to dissect communication problems. It can help us to identify what went wrong in an interaction and provides an approach to resolve the issue immediately [5].

Many communication problems occur when a response is received that is beyond the bounds of what was expected. Because the steps in the model between intake and response are hidden, the end result of the process that assigns meaning and significance can be quite surprising to the recipient, which can be a catalyst for conflict.

I like the J. B. Rainsberger example of applying the Satir Interaction Model to a conversation where someone is wished a "Happy Holidays". The responses to this intake may vary wildly based on the meaning and significance that people assign to this phrase. [3]

  • "How dare this person insult Christmas and deny the Christ...!"
  • "If only you'd bother to learn to pronounce 'Chanukah'..."
  • "Have you ever even heard of Candlemas...?!"

When applying the model to resolve misunderstanding, Dale H. Emery says:

I focus first on my own process, because the errors I can most easily correct are the ones that I make. When I see and hear clearly, interpret accurately, assign the right significance, and accept my feelings, I understand other people's messages better, and my responses are more effective and appropriate. And when I understand well, I am better able to notice when other people misinterpret my messages, and to correct the misunderstanding. [2]

Finally, Judy Bamberger offers a very useful companion resource for adopting the Satir Interaction Model in practice [5]. She provides ideas about what could go wrong at each step in the model and offers practical suggestions for how to recover from errors, problems, or misunderstandings.

Association with Myers-Briggs

Weinberg drew a link between the Myers-Briggs leadership styles and the Satir Interaction Model that may be useful for adapting communication styles with different types of people. He suggests that:

The NT visionaries and NF Catalysts, both being Intuitive, skip quickly over the Intake step. … NTs tend to go instantly to Meaning, while the NFs tend to jump immediately to Significance. … The SJ Organizers stay in Intake mode too long … The SP Troubleshooters actually use the whole process rather well … (pp 108 & 109.) [6]

Weinberg then offers the following questions to prompt each type of person to apply each step of the Satir Interaction Model:

For NTs/NFs ask, “What did you see or hear that led you to that conclusion?”
For SJs ask, “What can we conclude from the data we have so far?”
For SPs appeal to their desire to be clever and ask them to teach you how they did it. [7]

Distinguish reaction from response

Willem van den Ende draws the Satir Interaction Model so that both sides of the interaction are shown:

"Debugging" sessions
Willem van den Ende

Using this illustration he specifically differentiates between a reaction and a response. A reaction happens when a person skips the meaning and significance stages, and simply jumps straight from intake to response. When both people in an interaction become reactive instead of responsive, a fight is the likely result [8]. Understanding that these missing steps may be the cause of a misunderstanding could help resolve the situation.
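To make the shape of this distinction concrete, here is a minimal structural sketch; the step functions are placeholders of my own, not part of Satir's or Weinberg's work:

```python
def intake(message):
    """Step 1: what did I actually see or hear? (Perception may be partial.)"""
    return f"heard {message!r}"

def meaning(observation):
    """Step 2: what do I think it means?"""
    return f"interpreted {observation}"

def significance(interpretation):
    """Step 3: how much does it matter to me, and how do I feel about it?"""
    return f"weighed {interpretation}"

def respond(message):
    """A response passes through all four steps of the model."""
    return f"reply based on {significance(meaning(intake(message)))}"

def react(message):
    """A reaction skips meaning and significance, jumping from intake to reply."""
    return f"reply based on {intake(message)}"

print(respond("Happy Holidays"))
print(react("Happy Holidays"))
```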

Unacceptable Behaviour

The Satir Interaction Model may also be useful in structuring conversations to address unacceptable behaviour. Esther Derby suggests that these conversations should begin by getting agreement that the behaviour happened, followed by discussion about the impact of the behaviour, then conclude by allowing the recipient of the message to realise that their behaviour is counter-productive [9].

References

[1] "I think we have an issue" - Delivering unwelcome messages, Fiona Charles

[2] Untangling Communication, Dale H. Emery

[3] Don't Let Miscommunication Spiral Out Of Control, J. B. Rainsberger

[4] Satir Interaction Model, Steven M. Smith

[5] The Satir Interaction Model, Judy Bamberger

[6] Debugging System Boundaries: The Satir Interaction Model, Donald E. Gray

[7] Why Not Ask Why? Get Help From the Satir Interaction Model, Donald E. Gray

[8] "Debugging" sessions, Willem van den Ende

[9] intake->meaning->feeling->response, Esther Derby


Sunday, 12 October 2014

Three examples of context-driven testing using visual test coverage modelling

I talk a lot about visual test coverage modelling. Sometimes I worry that people will think I am advocating it as a testing practice that can be used in the same way across any scenario; a "best practice". This is not the case.

A tester may apply the same practice in different contexts, yet still be context-driven. The crux is whether they have the ability to decompose and recompose the practice to fit the situation at hand [1].

To help illustrate this, here are three examples of how I have adapted visual test coverage modelling to my context, and one example where I chose not to adopt this practice at all.

Integrated with Automation

I was the only tester in an agile team within a large organisation. There were two agile teams operating in this organisation, everybody else followed a waterfall process.

The other agile team had a strong focus on test automation. When I joined, there was an expectation that I would follow the process pioneered in that team. I was asked to create an automated test suite on a shared continuous integration server.

Having previously worked in an organisation that put a high value on continuous integration, I wanted to be sure that its worth was properly understood. I decided to integrate visual test coverage models into the reports generated by the automated suites to illustrate the difference between testing and checking.

At this client I used FreeMind to create mind maps that showed the scope of testing. Icons were used against branches in the map to show where automated checks had been implemented, and where they had not. They also showed problems and where testing had been descoped.


Source: Mind Maps and Automation
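As a rough sketch of what this kind of integration could look like, a reporting step might generate a FreeMind (.mm) map directly from check results. The structure, outcome categories and icon choices below are my assumptions rather than the exact setup at this client, though button_ok and button_cancel are genuine FreeMind built-in icon names:

```python
import xml.etree.ElementTree as ET

# Map a check outcome to a FreeMind built-in icon.
ICONS = {"passed": "button_ok", "failed": "button_cancel", "descoped": "stop"}

def build_map(title, results):
    """Write a minimal FreeMind (.mm) file marking each branch with its outcome."""
    root = ET.Element("map", version="1.0.1")
    top = ET.SubElement(root, "node", TEXT=title)
    for name, outcome in results.items():
        branch = ET.SubElement(top, "node", TEXT=name)
        ET.SubElement(branch, "icon", BUILTIN=ICONS.get(outcome, "help"))
    ET.ElementTree(root).write(f"{title}.mm", encoding="utf-8")

build_map("Login", {"valid credentials": "passed",
                    "locked account": "failed",
                    "password reset": "descoped"})
```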

The primary purpose of the visual test coverage model in this context was to provide a high-level electronic testing dashboard for the Product Owner that was integrated into the reporting from our automation.

Paper-based Thinking

I was a test consultant in an agile team where my role was to teach the developers and business analysts how to test. This team had no specialist testers. The existing team members wanted to become cross-functional to cover testing tasks.

The team had been working together for a while, with an attitude that testing and checking were synonymous. They were attempting "100% automation" and had started to require test sprints in their scrum process to achieve this.

At this client I wanted to separate the thinking about testing from the limits of what is possible with an automation tool. I wanted to encourage people in the team to think broadly about test ideas before they identified which were candidates for automation.

I asked the team to create their visual test coverage models on large pieces of paper; one per story. They would use one colour pen to note down the functionality, then another colour for test ideas. There was a process for review at several stages within the construction of the model.

Source: How to create a visual test coverage model

The primary purpose of the visual test coverage model in this context was to encourage collaborative lateral thinking about testing. By using paper-based models, each person was forced to think without first considering the capabilities of a tool.

Reporting Session Based Testing

I was one of two testers in an agile team, working in a department of many agile teams. I was involved in the project part time, with the other tester working full time.

The team was very small as most of the work in this particular project was testing. We had a lot of freedom in determining our test approach and the other tester was interested in implementing session based testing.

We used XMind to create a mind map that captured our test ideas and showed how the scope of our testing was separated into sessions. We updated the model to reflect the progress of testing and the issues discovered. It was also clear where there were still sessions to be executed, so the model helped us to divide up the work.

Source: Evolution of a Model

The primary purpose of the visual test coverage model in this context was for quick communication of session based test progress between two testers in a team. As someone who was only involved part time, the model served as a simple way to rapidly identify how the status of testing had changed while I was away from client site.

A Quiet Team

I was a test consultant in an organisation where my role was to coach the existing testers to improve their confidence. The testers worked together in a single agile team. One of their doubts was that they thought production issues were due to tests that they had missed. They weren't sure about their test coverage.

The testers were very quiet. The team in general conversed at a low volume and with reluctance. There were only a couple of people who were confident communicators and, as a result, they tended to dominate conversation, both informal and in meetings.

I wanted to shift ownership of testing from the testers to the team. I felt that scoping of test ideas and decisions about priority should be shared. It seemed that the lack of confidence was due, in part, to the testers often working in isolation.

Though I wanted to implement a collaborative practice for brainstorming, I felt that visual test coverage models wouldn't work in this team dynamic. I could imagine the dominant team members would have disproportionate input, while the testers may have ideas that were never voiced.

Instead I thought that the team could adopt a time-boxed, silent brainstorm where test ideas were written out on post-it notes. This allowed every person to share their ideas. Decisions about the priority of tasks, and candidates for automated checks, could be discussed once everyone had contributed, using the collective output of the group to guide conversation.

*****

Before anything else, ask yourself why the practice you'd like to implement is a good fit for your situation. Being able to articulate the underlying purpose is important, both for communicating to the wider team and for knowing what aspects of the practice must remain to meet that need.

I have found visual test coverage modelling useful in many scenarios, but have yet to implement it in exactly the same way twice. I hope that this illustrates how a tester who aims to be context-driven might adapt a practice they know to suit the specific environment in which they are operating.

Sunday, 5 October 2014

Let's Test Oz

My third and final Let's Test Oz post; three experiences that left a lasting impression.

Cognitive Dissonance

Margaret Dineen gave a presentation where she spoke about cognitive dissonance*, the mental stress that people experience when they encounter things that contradict their beliefs. 

As an example, you might believe that your test environments have been configured correctly and continue to believe this despite repeated evidence to the contrary, perhaps because your test environment administrator is usually so reliable. However, observing the signs of poor configuration will create cognitive dissonance; a cerebral discomfort that tells you something is not quite right.

Margaret shared how she had learned to acknowledge her distress signals, defocus, and complete a self-assessment. She writes an entry in her "Notebook of Woe" to locate the source of cognitive dissonance that she is experiencing. This notebook is a tool to ask herself a series of questions. How do I feel about my work? What deviations am I experiencing from my model?

I love the idea of this notebook, and the message that Margaret delivered alongside it. "Failure provides an opportunity for learning and growth, but it's not comfortable and it's not easy". This constant and deliberate self-assessment is a method for continuous personal improvement, capturing our discomfort before failure has an opportunity to take root.

Bugs in Brains

I ate lunch with Anna Royzman and Aaron Hodder one day. Our conversation meandered through many testing topics, then Anna said something that really struck me.

"I keep finding bugs in people's brains, where do I report those?"

She was speaking about the way in which we, as testers, start to learn how to interrogate a piece of software based on the developer who coded it. When we've worked in a team for a long time, our heuristics may become incredibly specialised.
 
Aaron concurred and provided an example. In a previous workplace, he knew that the root cause analysis would lead in very different directions dependent on the developer that had introduced a bug. One developer would usually introduce bugs that were resolved by a configuration change for an edge case scenario. Another developer would usually introduce bugs that were resolved by a complete functional rewrite for a core system flow.

This was something that I could also relate to, but had never considered as anything unique. I'm now thinking more about whether testers should raise these underlying differences between developers and how we might do so.

Talking to Developers

Keith Klain gave a keynote address in which he spoke about the ways to successfully speak to management about testing. I found his advice just as applicable to the interactions between testers and developers.

Enter conversations with an outcome in mind. Manage your own expectations by questioning whether the outcome you are seeking is a reasonable one. There is a common misconception that testers waste developers' time. Having one specific goal for every interaction is likely to keep your conversations focused, succinct and valuable, which will help to build rapport.

Know your audience and target your terminology accordingly. Even if you don't have the same skills that they have, you can still learn their language: interactional expertise. You can talk like a developer without actually being able to do development. For example, learn which third party libraries are used by the developers, and for what purpose, so that you can question their function as a tester.

Work to build credibility with people who matter. If you don't join a team with instant status, remember that this can be built by both direct and indirect means. Cultivating a good reputation with people in other roles may help create respect with those developers who are initially dismissive of you as a tester.


* By the way, Margaret has an excellent article on this topic in the upcoming October edition of Testing Trapeze.