Wednesday, 27 August 2014

How to create a visual test coverage model

Creating a visual test coverage model to show test ideas in a mind map format is not a new idea. However, it can be a challenging paradigm shift for people who are used to writing test cases that contain linear test steps. Through teaching visual modelling I have had the opportunity to observe how some people struggle when attempting to use a mind map for the first time.

Though there is no single right way to create a visual test coverage model, I teach a simple approach to help those testers who want to try using mind maps but aren't sure where to begin. I hope that, from this seed, as people become confident in using a mind map to visualise their test ideas, they will adapt this process to suit their own project environment.

Function First

The first step when considering what to test for a given function is to try and understand what it does. A good place to start a mind map is with the written requirements or acceptance criteria.

Imagine a story that includes developing the add, edit, view, and delete operations for a simple database table. The first iteration of the visual test coverage model might look like this:
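A minimal sketch of that first iteration, assuming nothing beyond the four operations named in the story and shown here as a nested Python dictionary purely for illustration:

    # First iteration: the function at the root, with one branch per
    # behaviour taken straight from the written requirements.
    coverage_model = {
        "Simple database table": {
            "Add": {},
            "Edit": {},
            "View": {},
            "Delete": {},
        }
    }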


Collaborate

Next consider whether all the behaviour of this function is captured in the written requirements. There are likely to be items that have not been explicitly listed. The UI may provide a richer experience than was originally requested. The business analyst may think that "some things just go without saying". There may be application level requirements that apply to this particular function.

Collaboration is the key to discovering what else this function can do. Ask a business analyst and a developer to review the mind map to be sure that every behaviour is captured. This review generally doesn't take long, and a quick conversation early in the process can prevent a lot of frustration later on.

Imagine that the developer tells us that the default design for view includes sort, filter, and pagination. Imagine that the business analyst mentions that we always ask our users to confirm before we delete data. The second iteration of the visual test coverage model might look like this:
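Continuing the same illustrative sketch, the behaviours discovered through collaboration hang off the branches they extend:

    # Second iteration: detail from the developer (sort, filter,
    # pagination under view) and the business analyst (confirmation
    # before delete) added beneath the relevant behaviours.
    coverage_model = {
        "Simple database table": {
            "Add": {},
            "Edit": {},
            "View": {"Sort": {}, "Filter": {}, "Pagination": {}},
            "Delete": {"Confirm before delete": {}},
        }
    }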


Think Testing

With a rounded understanding of what the function does, the next thing to consider is how to test it.

For people that are brand new to using a mind map, my suggestion is to start by thinking of the names of the test cases that they would traditionally scope. Instead of writing down the whole test case name, just note the key word or phrase that differentiates that case from others. This is a test idea.

Test ideas are written against the behaviour to which they apply. This means that tests and requirements are directly associated, which supports audit requirements.

Imagine that the tester scopes a basic set of test ideas. The third iteration of the visual test coverage model might look like this:
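In the same sketch, each test idea becomes a leaf beneath the behaviour it applies to; the specific ideas below are hypothetical examples rather than a recommended set:

    # Third iteration: test ideas attached to the behaviours they
    # exercise, so every idea traces back to a requirement. The ideas
    # themselves are hypothetical placeholders.
    coverage_model = {
        "Simple database table": {
            "Add": ["mandatory fields", "duplicate record", "maximum field length"],
            "Edit": ["cancel without saving", "concurrent edit"],
            "View": {
                "Sort": ["each column", "sort order persists"],
                "Filter": ["no matching results", "special characters"],
                "Pagination": ["single page of data", "navigate to last page"],
            },
            "Delete": {"Confirm before delete": ["accept", "cancel"]},
        }
    }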


Expand your horizons

When inspiration evaporates, the next challenge is to consider whether the test ideas captured in the model are sufficient. There are some excellent resources to help answer this question.

The Test Heuristics Cheat Sheet by Elisabeth Hendrickson is a quick document to scan through, and there is almost always a Data Type Attack that I want to add to my model. The Heuristic Test Strategy Model by James Bach is longer, but I particularly like the Quality Criteria Categories that prompt me to think of non-functional test ideas that may apply. Considering common test heuristics can help achieve better test coverage than when we think alone.

Similarly, if there are other testers working in the project team, ask them to review the model. A group of testers with shared domain knowledge and varied thinking is an incredibly valuable resource.

Imagine that referring to test heuristic resources and completing a peer review provides plenty of new test ideas. The fourth iteration of the visual test coverage model would have a lot of extra branches!

Lift Off!

From this point the visual test coverage model can be used in a number of ways: as a base for structured exploratory testing using session-based testing, as a visual representation of a test case suite, as a tool to demonstrate whether test ideas are covered by automated checks or testing, or as a radar to report progress and status of test execution. Regardless of use, the model is likely to evolve over time.
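To make the radar idea concrete: if each test idea in a sketch like the ones above carried an execution status, a few lines of code could tally progress across the model. This is a rough sketch under that assumption, with a hypothetical summarise helper and made-up statuses:

    # Walk a nested coverage model whose leaves are lists of
    # (test idea, status) pairs and tally the statuses.
    from collections import Counter

    def summarise(node, counts=None):
        counts = Counter() if counts is None else counts
        if isinstance(node, dict):
            for child in node.values():
                summarise(child, counts)
        else:
            for _idea, status in node:
                counts[status] += 1
        return counts

    progress = {
        "Add": [("mandatory fields", "passed"), ("duplicate record", "not run")],
        "Delete": {"Confirm before delete": [("accept", "passed"), ("cancel", "failed")]},
    }
    print(summarise(progress))
    # Counter({'passed': 2, 'not run': 1, 'failed': 1})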

I hope that this process encourages those who are new to visual test coverage modelling to try it out.

Tuesday, 12 August 2014

Context Driven Change Leadership

I spent my first day at CAST2014 in a tutorial facilitated by Matt Barcomb and Selena Delesie on Context Driven Change Leadership. I thoroughly enjoyed the session and wanted to share three key things that I will take from this workshop and apply in my role.

Change models as a mechanism for feedback

The Satir Change Model shows how change affects team performance over time, as a team moves from its late status quo, through resistance and chaos, to integration and a new status quo.


Selena introduced this model at the end of an exercise designed to demonstrate organisational change. She asked each of us to mark where we felt our team was by placing an X at the appropriate point on the line.

Most of the marks were clustered in the integration phase. There were a couple of outliers in new status quo and a single person in resistance. It was a quick and informative way to gauge the feeling of a room full of people who had been asked to implement change.

I often talk about change in the context of a model, but had never thought to use it as a mechanism for feedback; this is definitely something I will try in future.

Systems Thinking

Matt introduced systems thinking by talking about legs. If we were to consider each bone or muscle in the leg in isolation, then they would mean less than if we considered the leg as a whole.

Matt then drew a parallel to departments within an organisation. When people focus on their individual pieces rather than the system as a whole, there is opportunity for failure.


Matt spoke about containers, differences, and exchanges (the CDE model by Glenda Eoyang [1]). These help identify the opportunities to manipulate connections within a complex system.

Containers may be physical, like a room, but they can also be implicit. One example of an implicit container that was discussed in depth was performance reviews, which may drive behaviour that impacts connections between individuals, teams and departments in both positive and negative ways.

Differences may be obvious, like gender, race, culture, or language, or subtle, like the level of skill within a team. To manipulate connections you could amplify a difference to create innovation, dampen a difference to remove harmful behaviour, or choose to ignore a difference that is not important.

Exchanges are the interactions between people. One example is how communication flows within the organisation: is it hierarchical, via a management structure, or does it flow freely among employees? Another is the way someone who comes to work in a bad mood can lower the morale of those around them. Conversely, when one person is excited and happy they can improve the mood of the whole team.

In our end of day retrospective, Betty took the idea of exchanges further:


How will I apply all this?

I have spent a lot of time recently focused on my team. When I return to work, I'd like to step back and attempt to model the wider system within my own organisation. Within this system I want to identify what containers, differences, and exchanges are present. From this I have information to influence change through connections instead of solely within my domain.

Fantastic Facilitation

Matt and Selena had planned a full day interactive exercise to take a group of 30 people through a simulated organisational change.

We started by becoming employees of a company tasked with creating wind catchers. The first round of the exercise was designed to show the chaos of change. I was one of many who spent much of this period feeling frustrated at a lack of activity.

At the start of round two, Erik Davis pulled a bag of Lego from his backpack. He usurped the existing chain of command in our wind catcher organisation to ask Matt, in his role as "the market", whether he had any interest in wind catchers made from Lego. As a result, a small group of people who just wanted to do something started to build Lego prototypes.

Matt watched the original wind catcher organisation start to splinter, and came over to Erik to suggest that the market would also be interested in housing. Since houses were a far more appropriate and easier thing to build from Lego, there was a rapid revolt. Soon I was one of around seven people working in a new start-up, located in the foyer of the workshop area, building houses from Lego.


There were a lot of interesting observations from the exercise itself, but as someone who also teaches I was really impressed by the adaptability of the facilitators. Having met Matt at TestRetreat on Saturday, I knew that he had specially purchased a large quantity of pipe cleaners for the workshop. Now here we were using Lego to build houses, which was surely not what he had in mind!

When I raised this during the retrospective, both Matt and Selena gave excellent teaching advice.

Selena said that when she first started to teach workshops, she wanted them to go as she had planned. What she had since discovered was that if she gave people freedom, within certain boundaries, then the participants often had a richer experience.

Matt expanded this point by detailing how to discover those boundaries that should not move. He tests the constraints of an exercise by removing and adding rules, thinking in turn about how each affects the ultimate goal of the activity.

As a result of this workshop I intend to challenge some of the exercises that I run. I suspect that I am not providing enough freedom for my students to discover their own lessons within the learning objective I am ultimately aiming to achieve.

Sunday, 3 August 2014

Creating a common test approach across multiple teams

I was recently involved in documenting a test strategy for a technology department within a large organisation. This department runs an agile development process, though the larger organisation does not, and they have around 12 teams working across four different applications.

Their existing test strategy document was over five years old and no longer reflected the way that testers were operating. A further complication was that every team had moved away from the original strategy in a different direction and, as a result, there was a lack of consistent delivery from testers across the department.

To kick things off, the Test Manager created a test strategy working group with a testing representative from each application. I was asked to take a leadership role within this group as an external consultant, with the expectation that I would drive the delivery of a replacement strategy document. After an initial meeting of the group, we scheduled our first one hour workshop session.

Before the workshop

I wanted to use the workshop for some form of Test Strategy Retrospective activity, but the one I had used before didn't quite suit. In the past I was seeking feedback from people with different disciplines in a single team. This time the feedback would be coming from a single discipline across multiple teams.

As preparation for the workshop, each tester from the working group was asked to document the process that was being followed in their team. Though each documented process looked quite different, there were some commonalities. Upon reading through these, I decided that the core of the new test strategy was the high-level test process that every team across the department would follow, and that finding this would be the basis of our workshop.

I wanted to focus conversation on the testing activities that made up this core process without the group feeling that other aspects of testing were being forgotten. I decided to approach this by starting the workshop with an exercise in broad thinking, then leading the group towards specific analysis.

When reflecting on my own observations of the department, and reading through the documented process from each application, I thought that test activities could be categorised into four groups.

  1. Every tester, in every team in the department, does this test activity (or should).
  2. Some testers do this activity, but not everyone.
  3. This test activity happens, but the testers don't do it. It may be done by other specialists within the team, other departments within the organisation, or a third party.
  4. This test activity never happens.

I wrote these categories up on four coloured pieces of paper:



At the workshop

To start the workshop I stuck these categories across a large wall from left to right.

I asked the group to reflect on what they did in their roles and write each activity on an appropriately coloured post-it note. For example, if I wrote automated checks for the middleware layer, but thought that not everyone would do so, then I would write this activity on a blue post-it note.

After five minutes of thinking, everyone got up and stuck their post-it notes on the wall under the appropriate heading. We stayed on our feet through the remainder of the session.

The first task using this information was to identify where there were activities that appeared in multiple categories. There were three or four instances of disagreement. It was interesting to talk through the reasoning behind choices and determine the final location of each activity.

Once every testing activity appeared in only one place we worked across the wall backwards, from right to left. I wanted to discuss and agree on the areas that I considered to be noise in the wider process so that we could concentrate on its heart.

NEVER
The never category made people quite uncomfortable. The test activities in this category were being consistently descoped, even though the group felt that they should be happening in some cases. There was a lot of discussion about moving these activities to the sometimes category. Ultimately we didn't. We wanted to reflect to the business that these activities were consistently being considered as unimportant.

OTHERS
As we talked through what others were doing, we annotated the activities with those who were responsible for them. The level of tester input was also discussed, as this category included tasks happening within the team. For example, unit testing was determined to be the developer's responsibility, but the tester would be expected to understand the coverage provided.

SOMETIMES
When we spoke about what people might do, most activities ended up shifting to the left or the right. Either they were items that were sometimes completed by the tester when they should have been handled elsewhere, or they were activities that should always be happening.

EVERYONE
Finally we talked through what everyone was doing. We agreed on common terminology where people had referred to the same activities using different labels. We moved the activities into an agreed end-to-end test process. Then we talked through that process to assess whether anything had been forgotten.

After the workshop

At the end of the hour, the group had clarity about how their individual approaches to testing would fit together into a combined vision. The test activities that weren't common were still captured, and those activities outside the tester's area of responsibility were still articulated. This workshop created a strong common understanding within the working group, which made the process of formalising the discussion in a document relatively easy. I'd recommend this approach to others tasked with a similar brief.

Saturday, 12 July 2014

#KWST4

I made it along to Day Two of the fourth Kiwi Workshop for Software Testing (#KWST4) this weekend. The theme was "How to speed up testing, and why we shouldn't". It was great to see many new faces, and hear some very interesting experiences. I'd like to share three topics of conversation that resonated with me.

Fast go, throw away stuff

The morning opened with Viktoriia Kuznetcova, who spoke about her three strategies for speeding up testing in an environment with time pressure: cluster, prioritise, and parallelise.

Viktoriia spoke of an environment where release dates were often determined in advance, then extensively advertised to the existing user base. This meant that there was little opportunity to amend project timelines when understanding of the solution changed. She said that she felt, at times, the release dates were "not possible". Yet because the sales department had set an expectation in the market that the feature would be delivered on a certain date, the team would occasionally be asked to release on deadline without completing testing.

For me, this experience illustrated the perception that stakeholders outside of testing often have: to speed testing up, we just cut down the timeline and stop testing earlier. I feel that this view is born of observing that testers are naturally cautious, and of experience with successful releases where testing was incomplete.

On the topic of "fast go, throw away stuff", people spoke about:

  • Are other teams dependent on a certain level of testing being completed? It's important to consider this when prioritising your own test activities with a view to eliminating the least important.
  • If testing a release for a specific client, and testing is being targeted to meet their specific needs, will the cost of regression testing for later releases with a general audience then be inflated?
  • If you can't quantify the change in quality associated with a change in test scope, then it's hard to offer evidence to support a longer testing time frame. Without this, managers may not see any difference in the delivery to the client.
  • For management, testing is often about building confidence. Once an expectation has been set that testing can be completed in X number of weeks, they may then take some confidence in future purely from the same period of time elapsing. "You've had four weeks of testing. We released after four weeks last time and nothing went wrong."
  • Where permitted, testers may choose to continue testing beyond a release date. Stating that testing will continue introduces doubt into stakeholder decision making about the release; if the test team plan to continue regardless, it sends a very clear message that testing is not done.


Bug reports are not the output of testing

The second experience report of the day was from Rachel Carson. She spoke about her organisation shifting from a waterfall to an agile development methodology, and how this had resulted in a faster test process.

Rachel talked about how her bug reporting style had changed. She used to raise every bug she found in the bug tracking system of her organisation. With the shift to agile, she found that her approach became a lot more pragmatic. With frequent conversation between developers and testers, Rachel didn't always need to use the tool. When she did have to raise a bug, she thought pragmatically about what would realistically be fixed.

For me, this experience illustrated how we can speed up testing when we stop thinking that bug reports are our output. The measure of a good tester is not the number of tickets against their name in a bug tracking system.

On this topic, people spoke about:

  • It is a big mindset shift to move from a blame culture, where the tester wants a written record of every issue they have observed in the system, to one in which problems are discovered and resolved without a paper trail.
  • As testers, our job is not to generate information, it's to provide useful information. More is not always better, especially for written bug reports.
  • In an agile environment, bug triage is owned by the team. Where there is a decision not to fix a problem, this doesn't necessarily need to be documented.


Testers becoming BAs

The third and final experience report of the day was by Adam Howard. He spoke of his experiences in implementing visual modelling and session based testing in a challenging project environment.

Adam spoke about working on a defect-fix release. The defects were focused in a specific area of the application, but were so high in number that they essentially represented a complete re-write of that piece of the system. Adam used visual test coverage models to build a holistic view of the collection of defects, as developing and testing each defect in isolation would have resulted in a fragmented end product.

For me, this experience illustrated how we can speed up testing by taking ownership of some business analysis activities. The tester should not be the first person to visualise a model of the solution, yet often we are. By leading a collaborative approach to create a visual document, the team develops a shared understanding of what is being built, which can make the job of the tester much easier.

Check out

My final and most practical take-home was a tip from Sean Cresswell. He spoke of a useful method for determining whether there was shared understanding of a technical solution across a team. Place a developer and a tester on opposite sides of a free standing whiteboard and ask them to draw a diagram of how they think the system works. I thought this was a quick, easy and brilliant way to spot any discrepancies in thinking.

I enjoyed my day at KWST4. Special thanks to Oliver Erlewein for organising and Richard Robinson for facilitating.

Thoughts shared here are as a result of group discussion between all KWST4 attendees; (pictured below from left to right) Katrina Clokie, Chris Rolls, Aaron Hodder, Parvathy Muraleedharan, Joshua Raine, Adam Howard, Richard Robinson, Andrew Robins, Oliver Erlewein, Viktoriia Kuznetcova, Thomas Recker, Rachel Carson, James Hailstone, Ben Cooney, Till Neunast, Nigel Charman & Sean Cresswell.

Wednesday, 9 July 2014

Open to Feedback

I spoke at AgileWelly last night in order to practice my talk for CAST2014. It was the first time that I'd presented in an auditorium, to a very large audience, from behind a podium, under a spotlight, with a microphone. The environment was certainly intimidating.

Before beginning, I emphasised to the audience that I was keen to receive feedback on my presentation, so that I could improve it before repeating myself on an international stage. I was nervous about asking people to critique my work because I was worried about what they would say, but I was also worried that they might not say anything! Indifference is the worst reaction.

As I finished speaking, I had already self-identified a couple of areas that I wanted to improve.

I thought that my introduction and conclusion were weak. These were the areas in which I was least prepared, and it wasn't as easy as I had imagined to improvise the content.

Additionally, when I checked the clock at the end of my first section, I realised that I was speaking far too quickly. I was so nervous that I had flown through my slides, and I had to consciously collect myself in order to continue at my planned, and more sedate, pace.

Given that it was so easy for me to identify these two changes, even in overwhelming-post-presentation-brain-overload, my feedback fears intensified. No longer did I think that people wouldn't have anything to say. Instead I thought that they'd have so much constructive criticism that I would be overwhelmed!

In the last 24 hours I have been privileged to receive a great deal of feedback from a number of different people. Thankfully, it's been largely positive. The suggestions offered have been constructive, and I've seen some consistent themes emerge.

Many people have endorsed my self assessment. But in addition to identifying these same issues, they've also offered helpful and specific suggestions as to how I might change my approach. These have included both general presentation skills, and ways to expand particular pieces of my content.

I've also had feedback that I never would have thought of myself. Great ideas for how I might add content to the presentation based on the questions I received at the end, tips to promote my associated blog posts, and terminology for concepts that I was describing.

The feedback I've been given is a reflection on the strong IT community in Wellington. Thank you Aaron, Sarah, Craig, Ben, Stu, Larrie, Nigel, William, Adrian, Shaun, Yvonne, and others.

Though the whole experience was really challenging, both presenting in a difficult environment and opening myself to critique, I really have learned a lot from it. I would have felt annoyed to be leaving the stage in New York thinking that I could have done it better. I want to deliver the best talk that I am capable of.

If you're yet to open your next presentation to feedback, I would encourage you to do so.

Wednesday, 2 July 2014

How to make a workshop hum

On Monday I co-facilitated a two hour workshop. It was good, but not great. I felt that the room was a bit flat, and wasn't entirely sure that my message had been conveyed effectively.

Today I had the opportunity to run the same content again in a different city. Thirty minutes before the first student arrived, I decided to tweak the material. I wanted this version of the workshop to hum.

And hum it did. Though I can't be sure whether it was the people in the room or the material, I thought I'd share the three things I changed. I certainly think they contributed to the shift in outcome.

Give and Take

The first change was near the beginning of the session. We had asked the participants to do a series of activities in fairly quick succession, which came about as a result of attempting to condense some old material into a smaller time frame. We had valued our interactive exercises above our static slides, which meant that there was no longer any significant presenter content between some of our activities.

On Monday, it felt like we were taking a lot from our students and giving very little in return. We wanted them to think about this, and then think about that, and then think about something else, with very little time in between for them to pause, digest and absorb. Though we ultimately wove all the pieces together, the balance felt wrong.

Today I re-instated one slide. Just one. Doing this created a few minutes of space between two activities that asked people to think pretty hard. By adding this piece of content, I gave something back as the presenter. I gave people enough time to feel that the output of the first activity was acknowledged, and gave their brains time to recover!

This change altered the attitude of participants. The second activity in this series was tackled with enthusiasm instead of reluctance.

Grow the Numbers

The second change was in the classroom dynamics for our exercises. In the Monday session we jumped between asking people to work as groups, as individuals, as groups, then as individuals again. Working individually makes people introspective, sombre and comparatively withdrawn. Working in groups is collaborative, dynamic and engaging. Because we mixed the numbers in each of our activities, classroom participation see-sawed.

Today I changed the activities specifically to create an increasing momentum through the module of material. I started with an informal group conversation, which worked as an icebreaker to have people comfortable with those around them. Then I ran an individual exercise, an exercise in pairs, then finally an exercise in larger groups.

Creating this progression significantly altered communication through the workshop. Student engagement evolved in a much more cohesive fashion; I had attention and participation to the very end.

Set up for Success

The third change was to our final exercise. It was migrated from another area of our training, but on Monday we found that it was a much harder problem in its new context than in its original one. In addition, we asked students to complete this last exercise alone. Having a final exercise that was both challenging and silent meant that the class finished on quite a flat, serious note.

Today I re-designed this exercise so that the students had a better chance of success. I left the answers from the previous problem on the whiteboard, as a prompt for their thinking. I provided an expanded mnemonic as a reference. I also switched to a group format so that they could use one another as resources, and actively encouraged them to collaborate.

These changes meant that the answers provided by each group at the end of the session reflected a real understanding of the concepts that I was trying to teach. Further, the students themselves recognised that they had grasped the material, and the room was buzzing with their shared success.


In the coming weeks I plan to revisit the rest of my training material and apply these same three principles across each module; give and take, grow the numbers, set up for success. Today makes me believe that this is how to make a workshop hum.

Sunday, 29 June 2014

The ratio myth

Recently I've been hearing a lot about the developer to tester ratio for agile teams. This is usually evangelised as an ideal of 4:1, four developers to one tester.

Having worked in many different agile teams, this ratio doesn't align with my experience. Here are a few examples of successful teams I have worked in with structures outside the norm.

More Developers [6:1]

In one of my earliest roles within agile development, I worked as the sole tester in a team with six developers. We were tasked with developing a system from scratch that would replace a legacy application.

The business had a clear vision, were engaged in what the team were doing, and sought to provide timely feedback on early versions of new features. Their involvement brought the users' expectations to bear on new features as soon as practical.

The developers had a lot of infrastructure oriented work to do, as they were setting up an entirely new application stack. In addition to technical stories, the team always carried at least one with a customer focus, to provide continuous delivery of business value. Features were small and well-defined from a business perspective, but usually involved the interaction of multiple systems behind-the-scenes, which meant they took time to develop.

As a tester, my primary focus was developing and maintaining a tidy suite of automated regression checks. The testing I did was either quick and sympathetic, so that simple issues were discovered and resolved before early engagement of the business, or an aggressive search for interesting problems.

In this team I often felt bored. Though I had six developers, it took a lot of programming to produce small pieces of front-end function. Fast feedback from the business reduced the amount of testing I had to do. I didn't feel challenged.

Fewer Developers [2:1]

In a different organisation, I worked in a team tasked with re-designing an existing purchasing workflow in an online marketplace. The team included two designers, two developers and myself as the sole tester.

In this team, the pace of development was determined by the rate at which the designers could agree on the updated user interface and workflow. Programming changes were to reflect paper prototypes: updating stylesheets, JavaScript, and links between pages.

Though development was relatively simplistic, the testing required was significant. The existing workflow included a number of different business rules, which were recorded in varying detail across multiple documents. There were a vast number of variables to consider in purchasing an item, both obvious variables and those that were indirectly accessible. There was no automated checking available in this domain.

In this team I was busy. Due to the financial impact of failure in this workflow, testing had to be comprehensive and meticulous. Proportionally, much more time was spent in testing, so the output of two developers was enough to keep me occupied.

More Testers [4:2]

Finally, in a third organisation, I worked in a team asked to fix a production issue in a complex shared web service. This team included four developers from four specialist areas: mainframe, database, integration and front-end. I was one of two testers.

In this environment, the complexity of development varied depending on the system. Most changes were isolated to a specific aspect of functionality, which could only be observed when users followed a specific workflow.

From a test perspective there was a lot of work to do. As in the previous team, the code change was in an important area of the application with complex business rules. Unlike the previous team, there were automated regression checks available at multiple layers of the architecture.

In this team I was challenged. Testing had to occur on each piece of development in isolation, then via multiple front-end consumers of the web service. In addition, due to their specialist focus, a large component of testing in this team was to facilitate communication among developers. With a tight deadline, and proportionally more testing than development, we needed two testers to get through everything.

*****

There are a lot of variables in determining the best ratio of developers to testers in an agile team. A ratio of 4:1 may be a good place to start, but inflexibility in resourcing is folly. There is no hard and fast rule.

If you encounter a scrum master who attempts to enforce a ratio at the expense of common sense, perhaps you can remind them of the first value of the agile manifesto: individuals and interactions over processes and tools. A ratio is just a tool, so be sure to listen to your people if they tell you it's not working.