Friday 20 December 2013

Presentation Purpose

As speakers, we get varying periods of time in which to make an impression. Over the past three months I have often been standing at the front of the room. When I speak I am usually talking about an approach to testing that most in the audience do not follow. I've been thinking about how to best present a new idea within a given time and what outcome I should try to achieve.

10 minutes - Doubt

This week I was invited to speak at a mini-conference run within a large organisation. Each speaker had 20 minutes; we were asked to present for 10 minutes then lead an open discussion on our chosen topic. I decided to frame my slides within the context of a problem that I was fairly confident at least some of the audience would be facing. My presentation included functional coverage models using mind maps, session-based test management, automated checking, and how to combine the three effectively in a Scrum development framework to pop the testing bubble.

On the day I was nervous. I wasn't sure that I had pitched the material at the right level to accommodate the varied audience. As I watched the first presenter I realised it wouldn't matter what I said. We didn't have enough time to explain a concept properly. My goal changed. It wouldn't be what I said, it would be how I said it.

People remember passion. It makes them wonder why they don't feel that way about their work. It makes them wonder what they're missing. This is one reason that trainers like James Bach and Brian Osman are often cited in the journey of software testers in New Zealand. A presenter who speaks with passion sows the seeds of doubt. As I stood up to present, I decided that was how I could win the room. Speak memorably and create uncertainty. It worked.

"Wow, I had no idea you were so passionate about testing".

"You gave me a lot to think about... and I'm glad my boss was here for that".

"We have that exact problem in our team right now. I'll be bringing this up in our next retrospective."

When you don't have long to make an impression, you want to give some key information and terminology that people can latch on to and research. But more importantly, be excited about what you are saying. Engage the audience and make them question what they are doing. If you only have 10 minutes, the best thing you can do is create doubt that leads people on their own journey of discovery.

60 minutes - Curiosity

During December, I have been teaching one-hour testing sessions with a practical focus. These have included:

  • Hendrickson variables and combinatorial testing using Hexawise
  • Scenario touring and flow testing using state transition modelling
  • Using oracles to identify bugs

When you have an hour, you don't focus on what you'll present or even how you say it. When you have an hour, you get your audience to do. You want people to leave the room eager to repeat the activity in their own role. 'I want to do that again'.

I had an attendee from my first class show up in the second week with a Hexawise screenshot. His testing nightmare had been simplified from several thousand possible variable combinations to fewer than 50. "Everyone should use this tool!" he said. I was a little concerned that he'd latched on to the tool alone. However, he had done the reading too, and understood pairwise testing and its benefits well enough to speak about the practice to his boss.

In an hour, you can run a hands-on activity. You can demonstrate a practice. You can make people think and discuss. But, most importantly, you can make people curious enough to repeat what they've been shown, to continue to discover and learn on their own.

A day - Comfort

A day is a gift (or a crushing responsibility). I try to use a day to take people on a journey. I still want to sow a seed of doubt and create curiosity, but I want to go further. I try to anticipate what people will want to know next so that I can lead the expedition of discovery. I answer a lot of questions, and we work until the class feels comfortable with a new idea.

Yesterday morning I ran a session about test cases. We started with test case execution: 6 pairs of testers simultaneously executing an identical set of 8 test cases against an online auction site (Adam wrote about this). The test results were incredibly varied, coverage was intentionally patchy, and a number of bugs were missed. We talked about inattentional blindness, losing sight of our mission and the limitations of test cases.

"But those test cases weren't detailed enough!" claimed one student. "If the test cases were better, then we would have been fine". Excellent. I was hoping you'd claim that.

The next exercise asked each student to write a test case to verify one particular function of our national weather forecasting website, which they could interact with as they wrote. Handwriting a single test case took a remarkably long time. We then rotated the test cases through the group. Each tester had 2 minutes to complete the test case they held and record whether it passed, failed, or could not be executed. As the test cases circled the room, the mood went from frenetic energy to barely contained boredom. The results were as baffling as in the first exercise. We talked about procedural memory and change blindness, but I could see that some were still not convinced.

I had one more trick up my sleeve. An alien meets you and asks how to brush its teeth. Hilarity ensued as people attempted to brush their teeth following the instructions of their peers. I wanted to re-iterate the message that better test cases would not solve the problems we were seeing. I talked about having an external locus of control and how we absolve ourselves of personal responsibility when we see incredibly detailed instructions.

When you have time, you can take your audience on a journey and solidify your point of view. You can present an idea from multiple angles and address concerns of those in the audience. You can give enough information to make people comfortable with a different opinion, creating a base from which they can action change.

Doubt, curiosity and comfort. Are your goals the same?

Friday 13 December 2013

Challenges, Puzzles & Games


It seems to me that one of the most difficult things about preparing training material is finding applications that students can explore; a set of public applications that can help us to teach software testing. These may be applications based on traditional testing exercises, applications written specifically to challenge or puzzle testers, games that highlight a particular aspect of software testing or an application with unintended and interesting behaviours.

Here are some of the things I have found or been pointed towards:

A simple parking calculator, hosted by Adam Goucher for testing practice, based on the original from the Gerald R. Ford Airport. The challenge I was given was to find the maximum dollar figure I could be charged for parking.

The horse lunge game, which is a nice application for state transition modelling.

The Triangle Tester exercise from Elisabeth Hendrickson, with background information that can point you to specific bugs for investigation.

The Black Box Software Testing machines from James Lyndsay offer a number of Flash games for practising exploratory testing.

A simple Javascript implementation of Escapa, linked from a (now closed) testing challenge to find the shortest and longest game times.

James Bach recommends the White Box exercise and Lye Calculator.

I'm very interested to learn what else is out there; please leave a comment.




Monday 9 December 2013

Call for Proposals

It seems to be conference proposal season. I decided to submit to three conferences last week: Agile 2014, CAST 2014 and Let's Test Oz. Each had a very different process for submission.

Agile 2014

Agile 2014 is a large conference and the call for proposals is designed to support a large response. There are clear guidelines as to what a submission should include, with an explanatory paragraph against each of:
  • Title
  • Introduction
  • Audience
  • Presentation format
  • History of the presentation
  • Experience of the speaker
  • Learning outcomes
In addition, there is a full review system in place. The earlier you submit, the more feedback cycles you hit to evolve your proposal into the best possible candidate for selection to the programme.

Surprisingly, access within the submissions system is open so that anyone who logs in can view all proposals. Thankfully I had submitted my attempt prior to reading a review comment that included these scathing words: "The abstract, on the other hand, made my ears bleed. I strongly recommend rewriting it from scratch with clearer, simpler phrases. I felt trapped in a Dilbert cartoon as I read it."

Let's Test Oz

Let's Test Oz provides a comparatively sparse level of detail. Proposals should include:
  • Title
  • Summary of your presentation’s key points, including a brief explanation of the context
  • Key learning outcomes that you hope your audience will walk away with
The submission is sent via email; its receipt is not acknowledged and there is no review loop. You are simply notified whether you have been accepted or not.

CAST 2014

CAST 2014 requests abstracts that align with the theme of "The Art and Science of Software Testing". There are no guidelines on what an abstract should contain; the submissions form requests the abstract as a file upload but doesn't specify which file format. Submissions are acknowledged via an automated mail receipt. There is no formal review process, though Paul Holland seems very friendly.



Why?

It seems to me that each successive call for proposals was less specific than the last. I understand that some of these differences will be driven by the size of the audience, the size of the conference organising committee and cost, but I'm wondering why the context driven community can't take a few leaves out of the agile book.

Without direction there must be huge variation in the content of proposals received. Without review there is no opportunity to refine the quirky idea into a solid submission. Where rejection is uniform across all who are unsuccessful, where is the opportunity to learn and refine?

As someone with little experience in submitting to conferences, it would be nice to feel that the system supported my venture into the unknown. Twitter has been awash with pleas to submit, yet on reading the details for the proposal calls I had no idea where to begin. Without guidance on what a proposal should contain, or feedback on what to change, those attempting to enter the arena are blindfolded.

Though there is help around if you ask for it, perhaps we could improve the process too? What do you think?

Wednesday 20 November 2013

Mind Maps and Automation

I've written in the past about the risk of relying solely on automated checking in release decisions. On a recent project I had great success in changing how the automated test results were used by including mind maps in the reports generated.

Technical background

We were developing a web application in Java, built with Maven. I created a separate module in the same source repository as the application code for the automated checks. It used Concordion to execute JUnit tests against the application in a browser using Selenium WebDriver. The suite executed from a Maven target, or tests could run individually via the IDE. I had a test job in our Jenkins continuous integration server that executed the tests regularly and produced an HTML report of the results using the HTML Publisher Plugin.
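
To make that setup more concrete, here is a minimal, invented sketch of the kind of Concordion fixture that module might have contained. None of it comes from the real project: the class name, URL and element id are placeholders, and the specification snippet in the comment is simplified.

    // SampleFeatureFixture.java - a Concordion fixture, run as a JUnit test,
    // that drives the application in a browser via Selenium WebDriver.
    import org.concordion.integration.junit4.ConcordionRunner;
    import org.junit.runner.RunWith;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    @RunWith(ConcordionRunner.class)
    public class SampleFeatureFixture {

        // Called from the matching SampleFeature.html specification, e.g.
        //   <span concordion:set="#name">Jane</span>
        //   <span concordion:assertEquals="greetingFor(#name)">Hello Jane</span>
        public String greetingFor(String name) {
            WebDriver driver = new FirefoxDriver();
            try {
                driver.get("http://localhost:8080/sample-app/greeting?name=" + name);
                return driver.findElement(By.id("greeting")).getText();
            } finally {
                driver.quit();
            }
        }
    }

Running the Maven test goal executes fixtures like this one and leaves behind the HTML output that Jenkins then publishes.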

What does it look like?

Below are the results for a Sample Application*. The job in Jenkins is green and more information about this success can be found by hovering on the job and clicking the HTML report.



My past experience was that management engage with the simple green / red feedback from Jenkins more than any other type of test reporting. Rather than continuing to fight this, I decided to change what it could tell them. There will always be aspects of functionality where it does not make sense to add an automated check, bugs that fall beyond the reach of automation, and decisions about scope that cannot be easily captured by automation alone. I wanted to communicate that information in the only place that management were listening.

The report is designed to be accessible to a non-technical audience and includes a preamble to explain the report structure. The entry point is designed to provide a high level visual overview of what testing has occurred, not just the results of automation. I found that the scrum master, product owner and project manager didn't drill further into the report than this. This is essentially a living test status report that does not contain any metrics.


Each feature was highlighted to reflect a high level status of the checks executed within that space (that's why Sample Feature above is green). It linked to living documentation, as we used Specification by Example to define a set of examples in the Gherkin format for each story.



Although written in plain English with a business focus, these specifications were rarely accessed by management. Rather they were used extensively by the business analysts and developers to review the behaviour of the application and the automated checks in place to verify it. The business analysts in particular would regularly provide unsolicited feedback on these examples, which is indicative of their engagement.

Having different levels of detail accessible in a single location worked extraordinarily well. The entire team were active in test design and interested in the test results. I realised only recently that this approach creates System 1 reporting for mind maps without losing the richer content. It's an approach that I would use again.

What do you think?

__

* This post was a while in coming because I cannot share my actual test reports. This is a doctored example to illustrate what was in place. Obviously...

Saturday 16 November 2013

Tell me quick

I've been thinking a lot about what stops change in test practice. One of the areas where I think we encounter the most resistance is in altering the type of reporting used by management to monitor testing.

There's a lot of talk in the testing community about our role being provision of information about software. Most agree that metrics are not a good means of delivering information, yet management seem to feel they have to report upwards with percentages and graphs. Why do they feel this way?

When testers start richer reporting, managers then have to make time to think about and engage with the information. By contrast, numbers are easy to draw quick conclusions from, be they correct conclusions or not. It doesn't take long to scan through a set of statistics then forward the report on.

I have recently switched to a role with a dual focus in delivery and people management. I've been surprised by how many additional demands there are on my time. Where a task requires me to flip into System 2 thinking, a slower and more deliberate contemplation, that task gets parked until I know I have a period of time to focus on it.

When I don't get that period of time, these types of task build up. I end up with emails from days ago that await my attention. They sit in my inbox, quietly mocking me. I don't enjoy it; it makes me feel uncomfortable and somewhat guilty.

Imagine those feelings being associated with a new test practice.

In my local community there is currently a focus on using mind mapping software to create visual test reporting (shout out to Aaron Hodder). Having used this approach in my delivery of testing, I find it a fantastic way to show coverage against my model and I feel a level of freedom in how I think about a problem.

For managers involved in delivery, a mind map allows for fast and instinctive assessment of the progress and status of testing (System 1 thinking). But for people not involved in the project day-to-day it is not that easy.

Every tester will use mind mapping software differently. The structure of their thinking will differ from yours. The layout won't be as you expect. The way they represent success, failure and blockers will vary. Further, if you haven't seen a mind map evolve and you don't know the domain, it's going to be a challenge to interpret. You'll need to think about it properly, and so the report gets filed in that System 2 pile of guilt.

I don't want to report misleading metrics, but I don't think we've found the right alternative yet. I don't want to change the new way that we are working, but one reason it is failing is that we don't have a strong System 1 reporting mechanism for external stakeholders.

To make change stick we need to make it accessible to others. We must be able to deliver test information in a way that it can be processed and understood quickly. How?

I'm thinking about it, I'd love your help.

Thursday 14 November 2013

BBST Business Benefits

Numerous people have blogged about their experiences doing the BBST Foundations course. I recently wrote a successful proposal for five testers in my organisation to complete the BBST Foundations course in 2014. Below are a couple of generic pieces from the proposal that may help others who need to make a similar request.

The executive summary is largely pulled from the AST website; references are included. The benefits section is a subset of all the benefits I had listed, capturing only those that I believe may be applicable across many organisations. Hopefully this post will save somebody time in requesting the testing education of themselves or others.

Executive Summary

The Association for Software Testing offers a series of four week online courses in software testing that attempt to foster a deeper level of learning by giving students more opportunities to practice, discuss, and evaluate what they are learning. [ref] The first of these courses is Black Box Software Testing (BBST) Foundations.

BBST Foundations includes video lectures, quizzes, homework, and a final exam. Every participant in the course reviews work submitted by other participants and provides feedback and suggests grades. [ref]

Too many testing courses emphasize a superficial knowledge of basic ideas. This makes things easy for novices and reassures some practitioners to falsely believe that they understand the field. However, it’s not deep enough to help students apply what they learn to their day-to-day work. [ref]

[organisation] seek deep-thinking testers with a breadth of knowledge. The BBST series of courses is an internationally recognised education path supporting that goal.

Benefits 

The immediate benefits to [organisation], which are realised as soon as the course starts, include:

  • Supporting testers who are passionate about extending their learning, which keeps these people invested in [organisation]
  • Connecting [organisation] testers with the international software testing community 

In the three months following course completion, the benefits include:

  • Re-invigorating participants to think deeply about the testing service they offer
  • Sharing knowledge with the wider organisation
  • Sharing knowledge with the market via thought pieces on the [organisation] website 

Within a year, the benefits of this proposal are supported by further investment, including:

  • Offering the BBST Foundations course to a second group of participants, to retain momentum for the initiative and continue the benefits above
  • Extending the original BBST Foundations course participants to an advanced topic, to grow the skills of [organisation] testers to deliver high value testing services



Saturday 9 November 2013

Evolution of Testing

I've been involved in a twitter discussion about the evolution of testing. Aleksis has been pushing me to think more about the comments I've made and now I find I need a whole blog post to gather my thoughts together.

Scary thoughts

The discussion began with me claiming it was a little scary that a 2008 Elisabeth Hendrickson presentation is so completely relevant to my job in 2013. Aleksis countered with a 1982 Gerald Weinberg extract that was still relevant to him. This caused me to rue the continued applicability of these articles. "I think testing is especially stuck. Other thinking professions must have a faster progression?"

It feels to me that innovative ideas in testing gain very little traction. Thoughts that are years or decades old are still referred to as new techniques. The dominant practice in the industry appears to be one that has changed very little: writing manual test cases prior to interaction with the application, then reporting progress as a percentage of test cases executed.

Changing programming

But we're not alone, right? Aleksis said "I don't know what thinking professions have progressed better. Programming?"

Yes, I think programming is evolving faster than testing. I'm not just talking about the tools or programming languages that are in use. I believe there has also been a shift in how code is written that represents a fundamental change in thinking about a problem.

Programming began in machine language at a very granular level, or even earlier with punch cards. When I worked as a developer 10 years ago the abstraction had reached a point where the focus was on object oriented programming techniques that created siloed chunks that interacted with each other. Now I see a step beyond that in the extensive use of third party libraries (Boost C++) and frameworks (Spring). The level of abstraction has altered such that the way you approach solving a programming problem must fundamentally change.

Why do I call this evolution? Because I don't believe people are standing up and talking about object oriented technique at programming conferences anymore. There are swathes of literature about this, and many people have achieved mastery of these skills. The problem space has evolved and the content of programming conferences has changed to reflect this. The output from thought leaders has moved on.

Dogma Danger

I think "evolution is when the advice of 30 years ago is redundant because it has become practice and new ideas are required." Aleksis countered "That's how you end up growing up with dogma."

I don't want people to take ideas on as an incontrovertible truth. But I do want people to change how they go about testing because they think an idea is better than what they currently do and they are willing to try it. I am frustrated because it feels that in testing we aren't getting anywhere.

Why aren't we seeing the death of terrible testing practice? Why is a model of thinking conceived in 1996 still seen as new and innovative? Why is the best paper from EuroSTAR and STARWEST in 2002 still describing a test management technique that isn't in wide use?

Moving the minority

James Bach chimed in on Twitter saying "Testing has evolved in the same way as programming: a small minority learns". Fair, but then everyone else needs to catch up, so that the minority can continue their journey before we're worlds apart. I haven't been in testing long, and I'm already tired of having the same conversations over and over again. I want to move on.

Is anyone else frustrated about this? How can we create sweeping change? How can we be sure the test profession will be genuinely different in another 10 years?

Friday 25 October 2013

Catalyst for Curiosity

Yesterday I was asked to speak to a room of testers who all work at the same organisation. They were on a training course, and I was asked to visit for half an hour to speak about "Testing Mind Maps".

The room was varied. Four people did not know what a mind map was. One guy claimed to have been using mind maps for over 10 years and expressed some bitterness that the Bach brothers were internationally recognised for doing things that he had done first. With 30 minutes to speak and such a wide breadth of skill, it was difficult to say something valuable for everyone. So, I spoke for a bit, with a handful of people nodding along and others looking stricken.

Then my colleague posed a question to the group. "How is it that there's such a wide variety of knowledge in this room when you all work in the same place? What's your internal process for knowledge sharing?"

There was a moment of silence, followed by a flood of excuses.

"We used to do lunch time sessions, but then people got too busy and no one came"
"We have a wiki, but no one really uses it"
"We used to pair up junior and senior testers, but now our focus is delivery"
"We sit in project teams instead of a test team, so we don't see each other often"

This made me question my own experiences with knowledge sharing: pockets of knowledge spread across multiple repositories, accompanied by individual processes for access and sharing. Though rarely ignored entirely, this scatter-gun approach means that knowledge transfer is usually left to those who are eager to learn. A pull rather than a push. If we want to see better transfer of knowledge, we need to create curiosity.

When we talked about change at KWST3, there was a common experience among attendees of a catalyst that set them on a path of education. Curiosity comes from someone or something that makes you want to learn more. I like to imagine that a half-hour chat about mind maps will be a catalyst for some of my audience yesterday.

Who is creating the catalyst for curiosity where you work? How is your knowledge being shared?

Sunday 13 October 2013

TDD on Twitter

Sometimes people ask me things on Twitter and it's too tricky to answer in 140 characters. Even though I'm not entirely sure how I ended up in this particular discussion thread, nor am I an expert in any of the areas I'm being asked about, I do have thoughts to share (as I often do). So let's talk TDD, or test driven development.


Question for all. I have been hearing about TDD and unit testing a lot these days. Has the bug count reduced? I have to streamline the application I am working on. I am finding ways to do test automation. I am working on Selenium. But from the application design perspective I am also looking at ways with TDD. So the question: do you see any change in the issue count, if you have worked with a team following TDD?

My first thought is to wonder what we're comparing the issue count to. Presumably this is the first time we've written the software that's being tested, so it's difficult to know for sure whether a practice that includes TDD has resulted in fewer issues.

TDD is a practice that mandates that developers write tests before they write code. It is a fundamental mindset shift in the way someone goes about their development. I don't believe testers can practice TDD unless they are writing the actual application code.

TDD and unit testing are not the same thing. Unit tests are a result of TDD, but the practice of TDD is not the only way to get unit tests. Unit tests can be written after the code too. 
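
For anyone who hasn't seen the mechanics, here is a minimal sketch of the test-first rhythm using JUnit. The class and numbers are invented for illustration; the point is only the order of events: the test exists, and fails, before the production code does.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class ParkingFeeTest {

        // Step 1 (red): write this test before ParkingFee exists.
        // It won't compile, let alone pass, until the code is written.
        @Test
        public void chargesTheDailyRateForTwentyFourHours() {
            assertEquals(12.00, new ParkingFee().costForHours(24), 0.001);
        }
    }

    // Step 2 (green): write the simplest production code that makes the
    // test pass, then refactor with the test as a safety net.
    class ParkingFee {
        double costForHours(int hours) {
            return 12.00;
        }
    }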

As testers, we should know, to a certain extent, what unit testing is in place, and what coverage these tests provide. And, generally, a good base of unit tests will give us a better starting point. The issues found in testing are fewer, because the developers have thought more about the code themselves. Generally.


I am implementing the best I can with regards to design and testing. Testing is worthless if design is not correct. So design is what I am looking at.

I'm not quite sure why TDD is being cited as a design practice, unless we're speaking specifically about the design of code. If you want to look at "are we building it right?" then TDD is the practice for you. If you want to look at "are we building the right thing?" then it is not (verification and validation).


I have reservations with #TDD. Implementing TDD, the project deadlines also need to be stretched. Right?

I think this depends on what your current development practices are. If unit tests are already in place, but they're being written after the code rather than before it, then implementing this practice shouldn't stretch out deadlines. If there's no unit testing, then development may start to take longer, but in theory at least you should see some corresponding benefits in testing as the code is delivered in a much better state.


Finally, if you're really curious about TDD, the Wikipedia page looks pretty comprehensive.

Experts of TDD, please voice any disagreement below...


Tuesday 8 October 2013

Speak easy

A junior tester at work asked if I would mentor her. It's pretty exciting to be on the experienced side of a mentoring relationship and, simultaneously, terrifying that someone trusts your opinion. It was a little bizarre the first time she took notes as I was speaking...

Our session this month took an interesting turn when she asked for some tips on communicating in her team. It's her first placement in a team environment, as opposed to working as an individual or in a pair. The team she has gone into is primarily populated with men, all of whom are significantly older than her. As a recently graduated woman, she's a little uncertain. I remember the feeling well.

As I started to suggest strategies she could use in her attempts to communicate more often and with value, I realised how many processes I have for dealing with similar situations in my own environment. Here are some of my suggestions for communicating with people who have more experience, more confidence and a louder voice.

Post It Note Questions

Someone just said something I don't understand. I don't want to interrupt them immediately to ask what it means, but I also don't want to forget to ask. What to do?

I like to write the word or phrase on a post-it note. At the conclusion of the conversation, or meeting, I approach the individual who used the term and ask them to explain it to me. I then re-frame what they said, by writing my understanding of the term on a second post-it note, and have this checked by the person who delivered the explanation. If I've got it, I stick the definition on my monitor.

I find at the start of a project, my monitor gets pretty full. Sometimes the area between my keyboard and the monitor fills up too. But after a week or two, the terms have become familiar enough that I can start to weed out post-it notes; things I now know can go in the bin.

I like this as a process for discovering what things mean, and not having to ask twice.

Stupid Questions

No one likes asking a stupid question. "There's no such thing!" you cry. Sometimes it feels like there is.

One way I give myself confidence to start asking questions, particularly in meetings with a number of people in attendance, is to monitor how many of the questions that I wanted to ask end up being raised by someone else. I note down questions as they occur to me, and then listen as the topic evolves. Any of my questions that are asked by someone else, I put a star next to.

At the end of my first meeting I attend on a project, I may have 20 questions and only two have stars. This indicates to me that the majority of my questions are learning that can happen outside of a meeting environment.

At the end of my fourth meeting, I may have 30 questions and 20 have stars. I feel this is positive feedback from the team (though they don't know they're providing it!) that I'm ready to start asking questions that will assist understanding and provoke discussion, rather than frustrate attendees and derail the agenda. Sure, I still send things off-track every now and then, but I have more confidence at this point that asking immediately is the best option.

Dealing with Dismissal

It can be easy to dismiss those who are young and new to the industry. Dismissal can also feel like the default reaction when ideas are confronting or uncomfortable for people to consider. I can get quite worked up when I feel that my voice isn't heard, but can be quite terrible at thinking on my feet under pressure. There's nothing worse than thinking of the comeback two hours later.

One strategy I use to combat dismissal is to prepare my arguments in advance. I like to float an idea to one person first, and note down their criticisms. Then I retreat to my desk and think about how I can address those criticisms, either in the initial idea or in rebuttal, and go try out my refined spiel on someone else. By the time I'm voicing an opinion in a meeting, I've run it past several individuals and prepared responses to a variety of reactions. I've got the tools to express myself confidently.

After a few iterations of this, I can start to imagine the criticisms without visiting people at their desks. The way I present my ideas becomes more compelling as I learn how the people in my team are most receptive to hearing them.


Would you have any other tips?

Wednesday 25 September 2013

Process vs Purpose

I have just returned from a two-day training experience that focused on the people management skills I will require in my new role. One of the interesting exercises within this was completing a Lominger sort: taking a set of 67 leadership competencies and sorting them into categories based on what skills the attendees felt were required to succeed in our positions.

The process

We completed the activity in groups of three. Each group was asked to sort the Lominger card deck into three categories: essential, very important, nice to have. Our 67 cards had to be split as 22 cards in the first category, 23 cards in the second category and 22 cards in the third category.

Upon completing the sort, each group was given 22 red and 22 blue dots. These were used for dot voting on a wall chart against the full list of key competencies. Blue marked what was essential and red what was nice to have (the top and bottom categories). When all six groups had completed this task, we gathered around this chart to talk through what was discovered.

The results

When looking at what people considered essential, there were a number of competencies that everyone agreed upon: listening, understanding others, approachability, motivating others, managing diversity, integrity and trust. Our facilitator picked these from the chart and it was clear that with six votes against each there would be no argument.

The next item that the facilitator selected only had three votes. This caused a fair bit of confusion, as there were a number of other things that had five or four votes, yet we appeared to have jumped to discussing a much lower ranked competency. Why?

The purpose

Prior to this activity, the group had identified a set of key challenges in the role. A strong theme that emerged was the lack of time available to deliver on both our management responsibilities and our delivery responsibilities to clients. The competency with three votes was time management. Our facilitator argued that although the skill appeared to have been rated lower, the absence of any red dots meant that everybody considered the skill to be either essential or very important, plus it had been identified as a key challenge and was worthy of consideration.

It was interesting to me to observe how thrown everybody was by this shift. Our focus as a group was on identifying the competencies that had the most votes, but the purpose of the exercise was to identify the skills we felt were required to succeed in our positions. In executing the process we had lost our purpose.

In testing

When I test, I can find myself getting caught up in reporting things that I perceive as a problem, but the client sees as an enhancement. I do this because one outcome of my test process is logging defects and I want to record that these things were discussed. When I think about the wider purpose of my testing, I'm not sure whether this activity adds value. If the client accepts the behaviour, is this just noise?

When examining the outcomes of our test process, it's important that we remember to take a step back from what we have produced, or are expected to produce, and think about the purpose of what we're doing. What was the process put in place to achieve? Does the outcome meet the goal?

Saturday 21 September 2013

A Simple Service

Today I had the pleasure of speaking for the first time with Betty Zakheim, my mentor from Line at the Ladies Room. As we are both very busy women, working in entirely different time zones, today was the first opportunity we found to chat.

I wasn't sure what to expect from a mentor. I had some things in mind that Betty might be able to help me with, but I didn't know how she wanted to approach our session. We started the call with introductions and Betty's friendly American accent calmed my nerves, then Betty asked what I'd like to talk about.

Service Offering

As I knew that Betty had a background in both Computer Science and Marketing, the first thing I wanted to ask her about was writing a service offering. I have recently been asked to lead an initiative in my organisation that means I have to define a new service and then write presentation material to share my vision with others. I've never had to do that before, so I asked Betty for some advice on how I could approach this. She gave me the following tips.

Elevator Pitch

Start by creating a crisp definition for the new service. What is it? How is it different to other testing? Why does it give a better outcome? Keep this definition short, quick and simple, so that a salesperson could hear it, remember it and repeat it without fault. This definition gets your foot in the door.

Speak Plainly

Write your pitch the same way you'd speak to somebody. Keep the language clear and avoid too much technical jargon. As you write, in cases where you can't think of an appropriate word, mark the point with ??? and move on. You can return to these points when the passage is complete; often the right word will appear given time.

Senior Management

When presenting to senior management, it's important to frame your argument in terms of cost and risk. This audience wants to know whether what you're offering is valuable to them. Often salespeople argue cost alone, but a proposition that reduces risk too is even more powerful.

Presentations

A good pitch caters to oral, visual and written learners, using PowerPoint slides and a strong script. As a rule of thumb, for a one-hour meeting bring 30 minutes of prepared material that explains your services, then be ready to answer questions for the remainder of the appointment. When you are selling professional services the product is you; be ready to prove your expertise.


Betty also gave me some great feedback on the content of my material, which was a real eye-opener for me. It made me realise that I need to differentiate between what I create to sell this service to testers and what I create to sell this service to managers. Betty really got my brain buzzing on how I can speak to the latter category successfully.

It was an incredibly helpful 45 minutes and I'm tremendously grateful to Betty for giving up her time on a Friday evening. I'm looking forward to our next session in a few weeks.

Thursday 5 September 2013

Community Question

Last night I slammed my finger in a door. It really hurt and it made me pretty grumpy. This happened as I was setting up a room for a WeTest Workshop that I was supposed to be facilitating. I decided to delegate facilitation to my friend Damian, who I was confident could do an excellent job of it. He did. 

A year ago, Damian had never participated in a LAWST-style conference. Last night was the second time he facilitated one of our workshop events in Wellington. I believe this happened because he was asked: to attend, to present, to facilitate.

I am of the opinion that there are a number of people in software testing who are waiting for an opportunity. I hear people lamenting how difficult it is to create a community and this always makes me wonder, who have you asked?

When Aaron and I kicked off WeTest, we felt that there wasn't a strong community in Wellington. The opportunities available for testers didn't allow for passionate discussion of our industry; they were occasions that allowed presenters to escape unchallenged. We knew very few people to approach to change this situation. We started with the Wellington-based folk who attended KWST2.

It took weeks for our first event to fill. We asked everyone we knew. Then we asked them to ask people. Then we asked on mailing lists and twitter. I remember how excited we were to finally find 20 people who wanted to talk about testing. 

After our first workshop the group started to grow. Those who attended recommended the event to others. A recommendation is not dissimilar to an invitation. In saying "you should come along to one of these" you're letting someone know that you believe they have the skills and potential to participate in, and enjoy, a challenging workshop environment. 

We now have over 100 members and our workshop events can fill within hours.

We're still asking, but now the questions have changed. Do you want to start a WeTest in Auckland? Do you want to lead a community of new testers? Do you want to console a community of battered warriors? Would you be willing to sponsor a new testing conference?

You can create a community. Ask someone to help you. Ask people to join you. Ask people to share their ideas. 

Wednesday 21 August 2013

Where to begin?

I recently exchanged a series of emails with an ex-colleague, who found them very helpful in starting to implement a different testing approach in her new organisation. She generously agreed to let me steal them for a blog post, as I thought they may also help others who aren't sure where to start. For context, this person comes from a formal testing background and is now wanting to implement a context-driven testing approach in an agile team that uses Scrum.


How do I use context-driven testing instead of structured formal testing? What tool do I use? How does this method fit in to each sprint?


I'd recommend looking at the Heuristic Test Strategy Model, specifically pages 8 - 10 of this PDF (General Test Techniques, Project Environments, Product Elements). Using these three pages as a guide, I'd open up FreeMind (or similar) and create a mind map of everything that you think you could test based on this, if time was unlimited and there were seven of you! You'll find that there are a number of questions asked in the Heuristic Test Strategy Model that you just don't know the answers to. I'd include these in your mind map too, with a question mark icon next to them.

Then you need to grab your Product Owner and anyone else with an interest in testing (perhaps an architect, project manager or business analyst, depending on your team). I'm not sure what your environment is like; usually I'd book an hour-long meeting to do this, print out my mind map on an A3 page and take it into a meeting room with sticky notes and pens. First tackle anything that you've left a question mark next to, so that you've fleshed out the entire model, then get them to prioritise their top 5 things that they want you to test based on everything that you could do.

Then you want to take all this information back to your desk and start processing it. I'd suggest that creating this huge mind map, having a meeting about it, and then deciding how to proceed, is at least the first day of a week-long sprint, or the first two days of a fortnight-long sprint.

Once you are comfortable that there's shared understanding between you, the product owner, and whoever else attended about what you will and won't be doing, then I'd start breaking up what you have to do into charters and using test sessions to complete the work; in agile there's really no need for scripted test cases. You can think of a charter like the one line title you'd use to describe a test case (or group of test cases). It's the goal of what you want to test. Something like "Test that the address form won't allow invalid input". I'd encourage you to assign yourself time-boxed testing sessions where you test to one goal. You can record what you've tested in a session report.

This probably all sounds totally foreign. This might help. I'd also definitely suggest reading this, and this.


Do you associate the user story to the identified features to be tested?  


I usually keep my test structure similar to the application structure, so that for a user of the application the tests all look familiar. For example, my current application has three top-level navigation elements: Apples, Oranges and Pears. The test suite starts with the same three-way split.

I use mind maps to plan my testing in each space. So I have an Apples mind map that has 7 branches, one for each type of apple we have. Then, because those child branches were too big, I have a separate mind map for each apple type where I actually scope their testing.

When we have a new user story go through the board, I assess which parts of my mind maps could be altered or added to. Then I update the mind maps accordingly to illustrate where the testing effort will occur (at least, where I think it will!)

I don't formally tie the story and features to be tested together, as this is rarely a 1-1 relationship, and there's some administrative overhead in tracking all this stuff that I don't think is very useful.


Currently our product owner provides very high-level business requirements, then the team create many user stories from this that are put in the backlog. So once I prepare the mind map for what I can test based on the given requirement, I could take this to the product owner. Is it what you would do? When you use this approach, do you normally get a relatively clear list of requirements?


If the product owner isn't helping create the stories, then I would definitely be asking lots of questions to be sure that what your team have guessed that they want is what they actually want. I'd suggest this might be a separate meeting from "what would you like me to test" though.

I think the first meeting is like "I can test that it works when users behave themselves. I can test that it handles input errors. I can test that network communications are secure. I can test that the record is successfully written to the backend database. I can test that a colour blind person can use this. What's important to you in this list?" and they might say "Just the first two" and you say "GREAT!" and cross out a whole bunch of stuff.

The second meeting is "Ok, you want me to test that is works when users behave themselves. Can we talk through what you think that means? So, if I added a record with a name and address, which are the only mandatory inputs, that would work?" and then the product owner might say "no, we need a phone number there too" and you start to flesh those things out. 

The second meeting is working from your test scope mind maps (in my case, Apples). The first meeting is working from a generic HTSM mind map (in my case, what do you want me to do here?).

With this approach I usually do get a relatively clear list of requirements at the end of step 2. Then I also ask the BAs to review what I'm testing by looking at the mind maps and seeing if there are business areas I've missed.


How do we integrate this context-driven approach to automation or regression testing?


I use Concordion for my automated reporting, which is very flexible in what it allows you to include. I put mind map images into the results generated by running automation, i.e. the Apples mind map. I have little icons showing what, from all the things we talked about, has been included as automated checks, what I've tested manually, and what the team has decided is out of scope.

I find that in my team the Product Owner, Project Manager and BAs all go to the automated results when they want to know how testing is going. In order to show an overview of all testing in that single location, I pull all my mind maps in there too. I often find that the Product Owner and Project Manager don't drill down to the actual automated checks; they just look at the mind maps to get an idea of where we're at.


When you are doing time-boxed testing (session based?), do you record all the sessions? If so, do you normally attach the recorded session?


I don't record them with a screen recorder. I do record what I did in a document, using essentially the same structure as this.

Wednesday 14 August 2013

Communicating Kindly

Starting to blog has made me realise that testing makes me a ranty person. These rants are generally released in cathartic blog format, but occasionally an innocent tester will fall victim to a spiel as they touch upon a frustration that others have created. Here's how I try to keep my conversations friendly and my ideas accessible to others.

Answer the question

If you've been asked a question that has been phrased in a way that makes you want to start by correcting the question itself, stop. Take a deep breath. Just answer the question. There's a time and a place for correcting misconceptions and it's not at the start of the journey. In all likelihood, the mistake in asking is due to ignorance that you could help correct, if you choose to answer the question in the first place.

Share your experience

We test in different ways, interacting with different applications, in a variety of project teams. Though you may need the context of a situation to give good advice, you could start with some general advice by providing the basics. Share what has worked in your experience. Offer links to the good content that lurks in the shadows of the internet. Sow the seeds for a beginning; give people something to start from.

It's not their fault

I have trigger words or phrases that really fire me up. I like to think this is a common foible. It's really important not to go into Hyde mode, because I don't think anything productive comes of it. Your audience tune out and label you as unhelpful. You end up feeling guilty for releasing the beast on those who don't deserve it.

Stay tuned

If someone is asking a question, you're unlikely to be able to resolve anything in a single conversation. A question is like an iceberg: you only see 10% of what the person wants to know. Do your best to start a dialogue where further questions are encouraged. Give yourself the opportunity to build a relationship, rather than attempting to solve every problem in one reply. The more sharing that occurs, the better the advice becomes.


What would you add?

Wednesday 7 August 2013

Stupid Silos

Division is part of identity. Finding a group of people to belong to, who share distinctive characteristics or thinking, helps to shape our view of ourselves. But I get frustrated by the repercussions of how we group ourselves as testers.

Functional vs. Technical

A pet peeve of mine is the division creeping into the New Zealand testing community between a test analyst and a test engineer, where the latter is used to distinguish those who are capable of writing automated checks or using automated tools. It annoys me because I feel the industry is lowering its expectations of testers. A test analyst is given permission to be "non-technical". A test engineer is given permission to switch off their brain; no analysis required, just code what you're told to!

Why has this distinction arisen? Because organisations place higher value on the automated checks produced by an engineer than they do on the information available from an analyst. How is this possible? Because many engineers genuinely believe that their checks are testing and many analysts are producing terrible information. How depressing.

I wish we hadn't created this divide. A tester should know the value and the limits of an automated suite, be capable of test analysis and provide quality information to the business for their decision making. A tester should be encouraged to have a breadth of skill. The silos created by our titles are preventing this type of person from developing.

Attaching the blinders

Similarly, I am annoyed by job titles that add specificity on the type of test activity occurring: security analyst, performance engineer, usability analyst, etc. Certainly these activities require specialist skill. But in deploying someone into a project specifically to test one aspect of the application, you give them permission to ignore all others. Mandated inattentional blindness: count those passes!

All of these specialists will interact with the application in creative ways, which may result in the execution of different functionality. If the tester is looking specifically at how fast the application responds or how usable the interface is, they may miss functional issues that their activities reveal. And even if they do see something strange, often these specialists are given little opportunity to understand how the application should behave, so they may not even realise that what they observe is a problem.

Barriers to collaboration

At CITCON in Sydney earlier this year, Jeff challenged us to think about whether we are truly collaborative in our agile teams; particularly across the traditional boundaries of developer, tester, operations and management. People rarely question the thinking of team members outside their role, because titles give people ownership of a particular aspect of the project. The challenge to establishing collaboration is in changing the professional identity of these people from the individual focus to the collective.

At the same conference, Rene shared a story about a team deployed to resolve high priority production issues. They were known as ‘the red team’ and included people from multiple professional disciplines: development, testing, etc. This team was focused on finding a solution and their collaboration was excellent. When asked about their role in the organisation, members of this team would proudly identify as being “on the red team”. This is a shift from how people usually identify themselves, by claiming a role within a group: “I’m a test analyst on Project Z”.

Testing is testing

Our challenge is to take the change in identity that occurs in this single-focus, high-pressure situation and apply it to our testing teams. By populating a team with people who have different skills, but asking them to think together, we can achieve better collaboration and create testers who have a broader skill set.

An engineer should not be permitted to build a fortress of checks that is indecipherable to other testers in the team. Analysts should be interrogating what is being coded, offering suggestions on what is included and demanding transparency in reporting. Similarly an analyst should not be allowed to befuddle with analysis and bamboozle with metrics without an engineer calling their purpose and statistics to account. Performance, security and usability specialists will all benefit from exposure to a broader set of interactions with the application, so that they can help the team by identifying issues they observe that are outside their traditional remit. Challenging each other allows transfer of knowledge, making us all better testers.

Titles acknowledge that we are different and that we have strengths in a particular area. But we should all try to identify as being part of a testing team rather than claiming an individual role within it. We should not allow a label to prevent us from delivering a good testing outcome to our client. We should not allow our titles to silo our thinking.

Thursday 1 August 2013

Metric Rhetoric

I meet very few testers who argue that there is value in metrics like number of test cases executed as a measure of progress. The rhetoric for metrics such as these seems to have changed from defiant oblivion to a disillusioned acceptance: "I know it's wrong, but I don't want to fight".

The pervasive view is that, no matter what, project management will insist upon metrics in test reporting. Testers look on in awe at reporting in other areas of the project, where conversations convey valuable information. We are right to feel sad when the reporting sought is a numeric summary.

But they want numbers...

Do they? Do your project management really want numbers? I hear often that testers have to report metrics because that is what is demanded by management; that this is how things are in the real world. I don't think this is true at all. Managers want the numbers because that's the only way they know to ask "How is testing going?". If you were to answer that question, as frequently as possible, making testing a transparent activity, much of the demand for numbers would disappear.

"Even if I convince my manager, they still need the numbers for reporting further up the chain". If your manager has a clear picture of where testing is at, then they can summarise this for an executive audience without resorting to numbers. Managers are capable communicators.

Meh. It's not doing any harm.

Every time you give a manager a number that you know could be misleading or misinterpreted, you are part of the problem. You may argue that the manager never reads the numbers, that they always ask "How's it going?" as you hand them the report, that they'll copy and paste those digits to another set of people who don't really read the numbers either, so where's the harm in that? Your actions are the reason that I am constantly re-educating people about what test reporting looks like.

Snap out of it!

We need to move past disillusioned acceptance to the final stage and become determined activists. A tester is hired to deliver a service as a professional. We should retain ownership of how our service is reported. If your test reporting troubles your conscience, or if you don't believe those metrics are telling an accurate story, then change what you are doing. When asked to provide numbers, offer an alternative. You may be surprised at how little resistance you encounter.


Wednesday 31 July 2013

Ignoring the liver

I had the privilege of hearing Dr Ingrid Visser speak at the Altrusa International Convention this weekend. Dr Visser is a marine biologist who specialises in orca research.

Orca & Stingray

Orca hunt stingray in a group. When a stingray is successfully caught, the group gather at the surface to share the meal, ripping the stingray apart. Dr Visser observed that, when sharing a meal of stingray, the orca avoided eating the liver. She published a paper stating that New Zealand orca did not eat stingray liver, potentially due to the toxins the liver contains.

Since this publication, Dr Visser has observed New Zealand orca eating stingray liver. This revelation led her to ask the following three questions of her previous research:

  • What did I miss?
  • What did I do wrong?
  • What has changed?

Dr Visser stated that being open to challenging what is believed to be true is essential for scientists; she uses these three questions to examine her thinking.

Challenging our thinking

As testers, we too should feel obligated to question what we believe to be true. I believe the context driven school of testing was born of frustration with those who have stopped asking these questions. Each question acknowledges a degree of failure; in asking, I acknowledge that I have made a mistake. They are difficult questions to pose, but those who stop asking them are at risk of becoming zombies.

The ability to identify new ideas, and to truly question whether they should supplant your own thinking, is necessary to grow our skills as testers, regardless of school. Those of us who identify as context driven testers should not become complacent in our thinking, or restrict the opportunity for critique to only those who identify as part of our community. It can be easy to dismiss ideas that challenge us as being from a different school of thought, but in doing so we deprive ourselves of the opportunity to learn.

I believe that these questions should be called upon by every tester when they feel confronted. What did I miss? What did I do wrong? What has changed?


Monday 22 July 2013

The fly in the room

I'm not sure how I feel about the Line at the Ladies Room and the Women in Testing issue of Tea Time with Testers. The noise about women in IT isn't a new thing, but it's like a large blowfly in my lounge that I wish would either die or disappear, instead of making that persistent and annoying buzzing sound.

My initial reaction to these new noises stems from the fact that I don't like being told what to do. "Speak Up!" they say. This is, perversely, much more likely to make me purse my lips and refuse to say a word.

Yet, having made the decision to contribute to the testing community, I feel a little resentment at the implication by TTwT that I could only be heard in a forum specifically requesting the opinions of women; that my thoughts would be drowned out or ignored among the general populace.

The mentoring program seems like a great idea, with benefits for both the aspiring speakers and mentors, but I still have not filled out a form to participate. The sales pitch starts with "We all have something to share with others", and that might be the crux of the issue. Perhaps we don't?

Just excuses and doubts perhaps.

Finally, I'm not convinced that either measure is going to kill that blowfly in my lounge. We create a testing magazine with articles written by women, we orchestrate a 50/50 gender ratio in presenters at a testing conference, then what? Creating strong female role models is great, but who are we leading?

Instead of preaching to the converted at testing conferences and in testing magazines, perhaps we should focus more attention on attracting women into testing in the first place? Why aren't we making our voices heard at high schools, colleges and universities first and foremost?

Wednesday 10 July 2013

Bugs on Post-It notes

I like to write bugs on post-it notes. Simple ones, where I don't need a supporting screenshot or a log. It seems wasteful to put such a bug into a bug tracking system and nurse it through a process. I'd rather write it on a small, brightly coloured piece of paper, walk over to the developers and hand it to them, with a quick chat to check that they understand my scrawl.

The reaction to this approach from my project team has been mixed.

The developers seem to quite like it, as much as any developer likes receiving bugs, though they do joke about "death by a thousand paper cuts". Though it's harder to escape defects when they're lined up like soldiers along your desk, awaiting your attention, this approach feels like a collaborative process rather than a combative one. Having to hand over a physical object increases the opportunity for conversation, and the communication between development and test is excellent. As the developers resolve each problem they take great pleasure in returning the associated post-it to me with a confident tick on the bottom. There's also a healthy level of competition in trying not to have the most post-it notes stuck to your desk.

Occasionally the developers will ask me for a defect number to include in their Subversion commit comment. "Oh, I didn't raise that one in the tool," I say, "I just wrote it down". It turns out this isn't a problem; they describe the change instead of referencing a defect ID, and now the commit messages don't rely on a third party system.

The project manager was initially a bit miffed. "How many bugs did we fix in this sprint?" he would ask. Though this could be tracked by counting the post-it notes marked as completed on the visual management board, as time passed he realised he didn't need that number as much as he thought. In an environment where bugs enter a tracking system it's important to track their life span, because they often live for a long time. It's pretty easy to ignore things that cannot be easily seen; it's much harder for a developer to ignore a multi-coloured desk. My experience is that faults reported on post-it notes are fixed startlingly fast, and as the answer to "How many bugs did we fix?" starts to become "All of them", the actual number loses its significance.

In the small number of cases where I want to raise a problem that requires supporting evidence, I put the information into the tracking system. I also write up a post-it with the defect ID on it and hand it to the developer. I want to keep as many of the good things about post-it defects as possible, while still giving the developer enough detail to understand and resolve the bug. But the overhead of using the (very simple) tool convinces me that it's for special cases only.

Post-It notes are the best.

Saturday 6 July 2013

KWST3

I spent the last two days at KWST3 and I wanted to capture my thoughts from the event before they escape. I am unsure whether others will get any value from this brain dump!


Education

You can't teach how to test, but you can teach how to ask.

The path to education: Conflict -> Curiosity -> Critical Thinking -> Networking -> Community. When a tester hasn't experienced a conflict that sets them on this path, how can we create a catalyst?


How do we offer experiential learning outside of a project context? In a classroom or training course the opportunities for hands-on learning may be limited by the number of applications available for testing. Suggestions included basic applications like WordPad, online mazes, games and puzzles, or asking someone with an interest in coding to create a small application for this purpose.
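
As a rough sketch of that last suggestion, here is the kind of tiny, deliberately flawed program that someone with an interest in coding could knock together for a practice session. The shipping-cost scenario and the seeded bugs below are my own invention for illustration; they are not something produced at KWST3.

  # A small application for experiential testing practice.
  # The bugs are seeded deliberately so that learners can practise
  # exploring, questioning and reporting what they find.

  def shipping_cost(order_total, country_code):
      """Intended rules (as a learner might be told them):
      - orders of 100.00 or more ship free
      - domestic (NZ) orders under 100.00 cost 5.00
      - international orders under 100.00 cost 15.00
      """
      if order_total > 100:       # seeded bug: an order of exactly 100.00 should ship free
          return 0.0
      if country_code == "nz":    # seeded bug: "NZ" in capitals is charged the international rate
          return 5.0
      return 15.0

  if __name__ == "__main__":
      total = float(input("Order total: "))
      country = input("Country code: ")
      print("Shipping: {:.2f}".format(shipping_cost(total, country)))

A learner can be given only the intended rules, or no rules at all, and asked to explore; the seeded mistakes give the debrief something concrete to discuss.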


There is risk associated with testing activities; should we allocate resource in a project based on this risk? Although in a scientific context the risk of failure can be great, such as experimenting with corrosive acid, in the world of software the risks can be smaller. When we assign low-risk tasks to juniors, are we robbing them of an opportunity to learn? Perhaps the benefit in them learning from failure far outweighs the risk? Have we become too risk averse at the expense of education?


BBST has a Creative Commons license, so not only is it a great course for learners, it is also one that's accessible to educators for use in their own teaching.

uTest and Weekend Testing are worth investigating!



Behaviour

Passion and aggression are not the same thing. We need to be aware of how our rhetoric, and that of our community, is perceived by others, so it remains challenging without being off-putting.

How do you escape a situation where you've reached a "quasi-agreement": a manager stops fighting you, but instead of accepting your approach they choose to ignore it? When being ignored is jarring and you believe others could benefit from what you're doing, what can you do?

  1. Inception - present the idea to the person with whom you've reached an impasse and then make them think it's their idea. Get them to take ownership and champion further adoption. Requires an element of selflessness, as you will no longer get any credit for the idea taking hold.
  2. Challenging with kindness - question the person until they start to draw their own conclusions. Pull them towards the same answer as you, but get them to take the journey and reach the conclusion themselves, rather than presenting only the conclusion to them.

How do you address a situation where you believe a person is unaware of beliefs that are holding them back, or that could be harnessed to make them a better tester? Be kind, but confront them about it. Often sharing something about yourself is a good way of prompting honesty in others. Identifying these beliefs and challenging someone to dispel or harness them can be a way of breaking people out of their ruts and setting them on a path to learning.

Testers who do not challenge, question and criticize may be constrained by their culture.



Shifting to CDT

What's more important: a) maintaining the test scripts, or b) doing the testing? When the answer is always b), then perhaps you need to focus on doing the best testing possible without the scripts!

When shifting to a CDT approach you may notice that:

  • Testing outcomes improve despite deterioration of testing scripts.
  • Testing without scripts finds the issues that actually get fixed.
  • Staff turnover drops.

Stay tuned for the case study...


These thoughts were formed with / stolen from the following amazing people: Aaron Hodder, Oliver Erlewein, Rich Robinson, Brian Osman, Anne Marie Charrett, Jennifer Hurrell, Erin Donnell, Katrina McNicholl, Andrew Robins, Mike Talks, Tessa Benzie, Alessandra Moreira, James Hailstone, Lee Hawkins, Damian Glenny, Shirley Tricker, Joshua Raine and Colin Cherry. A handful are my own.