Wednesday, 20 November 2013

Mind Maps and Automation

I've written in the past about the risk of relying solely on automated checking in release decisions. On a recent project I had great success in changing how the automated test results were used by including mind maps in the generated reports.

Technical background

We were developing a web application in Java, built with Maven. I created a separate module for the automated checks in the same source repository as the application code. It used Concordion to execute JUnit tests against the application in a browser via Selenium WebDriver. The suite executed from a Maven goal, or tests could be run individually via the IDE. A test job in our Jenkins continuous integration server executed the suite regularly and published an HTML report of the results using the HTML Publisher Plugin.
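
For anyone curious about the wiring, a minimal fixture might have looked something like the sketch below. The fixture class, page URL and element locators are invented for illustration, not taken from the real project; the pattern is simply a Concordion JUnit fixture driving the application through Selenium WebDriver.

    import org.concordion.integration.junit4.ConcordionRunner;
    import org.junit.runner.RunWith;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    // Concordion binds this fixture to a matching HTML specification
    // (here, Login.html in the same package) and writes results into it.
    @RunWith(ConcordionRunner.class)
    public class LoginFixture {

        // Called from the specification. The URL and element ids are
        // hypothetical placeholders, not from the real application.
        public String messageShownAfterLoginAs(String username, String password) {
            WebDriver driver = new FirefoxDriver();
            try {
                driver.get("http://localhost:8080/sample-app/login");
                driver.findElement(By.id("username")).sendKeys(username);
                driver.findElement(By.id("password")).sendKeys(password);
                driver.findElement(By.id("login")).click();
                return driver.findElement(By.id("message")).getText();
            } finally {
                driver.quit();
            }
        }
    }

Jenkins then only needed to run the module's tests (mvn test) on a schedule and publish the generated report.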

What does it look like?

Below are the results for a Sample Application*. The job in Jenkins is green and more information about this success can be found by hovering over the job and clicking the HTML report.



My past experience was that management engage with the simple green / red feedback from Jenkins more than any other type of test reporting. Rather than continuing to fight this, I decided to change what it could tell them. There will always be aspects of functionality where it does not make sense to add an automated check, bugs that fall beyond the reach of automation, and decisions about scope that cannot be easily captured by automation alone. I wanted to communicate that information in the only place that management were listening.

The report is designed to be accessible to a non-technical audience and includes a preamble to explain the report structure. The entry point provides a high-level visual overview of what testing has occurred, not just the results of automation. I found that the scrum master, product owner and project manager didn't drill further into the report than this. This is essentially a living test status report that does not contain any metrics.


Each feature was highlighted to reflect a high-level status of the checks executed within that space (that's why Sample Feature above is green). It linked to living documentation, as we used Specification by Example to define a set of examples in Gherkin format for each story.
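
The examples themselves read something like the snippet below. This one is invented for the sample application rather than taken from a real story, but it shows the Given / When / Then shape that each specification followed:

    Feature: Sample Feature

      Scenario: Successful login with valid credentials
        Given a registered user "kate" with the password "secret"
        When she logs in with those credentials
        Then she is shown the welcome page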



Although written in plain English with a business focus, these specifications were rarely accessed by management. Rather, they were used extensively by the business analysts and developers to review the behaviour of the application and the automated checks in place to verify it. The business analysts in particular would regularly provide unsolicited feedback on these examples, which is indicative of their engagement.

Having different levels of detail accessible in a single location worked extraordinarily well. The entire team were active in test design and interested in the test results. I realised only recently that this approach creates System 1 reporting for mind maps without losing the richer content. It's an approach that I would use again.

What do you think?

__

* This post was a while in coming because I cannot share my actual test reports. This is a doctored example to illustrate what was in place. Obviously...

Saturday, 16 November 2013

Tell me quick

I've been thinking a lot about what stops change in test practice. One of the areas where I think we encounter the most resistance is in altering the type of reporting used by management to monitor testing.

There's a lot of talk in the testing community about our role being the provision of information about software. Most agree that metrics are not a good means of delivering information, yet management seem to feel they have to report upwards with percentages and graphs. Why do they feel this way?

When testers introduce richer reporting, managers have to make time to think about and engage with the information. By contrast, numbers are easy to draw quick conclusions from, be they correct conclusions or not. It doesn't take long to scan through a set of statistics and then forward the report on.

I have recently switched to a role with a dual focus on delivery and people management. I've been surprised by how many additional demands there are on my time. Where a task requires me to flip into System 2 thinking, a slower and more deliberate contemplation, that task gets parked until I know I have a period of time to focus on it.

When I don't get that period of time, these types of task build up. I end up with emails from days ago that await my attention. They sit in my inbox, quietly mocking me. I don't enjoy it; it makes me feel uncomfortable and somewhat guilty.

Imagine those feelings being associated with a new test practice.

In my local community there is currently a focus on using mind mapping software to create visual test reporting (shout out to Aaron Hodder). Having used this approach in my delivery of testing, I find it a fantastic way to show coverage against my model and I feel a level of freedom in how I think about a problem.

For managers involved in delivery, a mind map allows for a fast, instinctive assessment of the progress and status of testing (System 1 thinking). But for people not involved in the project day-to-day it is not that easy.

Every tester will use mind mapping software differently. The structure of their thinking will differ from yours. The layout won't be as you expect. The way they represent success, failure and blockers will vary. Further, if you haven't seen a mind map evolve and you don't know the domain, it's going to be a challenge to interpret. You'll need to think about it properly, and so the report gets filed in that System 2 pile of guilt.

I don't want to report misleading metrics, but I don't think we've found the right alternative yet. I don't want to change the new way that we are working, but one reason it is failing is that we don't have a strong System 1 reporting mechanism for external stakeholders.

To make change stick we need to make it accessible to others. We must be able to deliver test information in a way that can be processed and understood quickly. How?

I'm thinking about it, I'd love your help.

Thursday, 14 November 2013

BBST Business Benefits

Numerous people have blogged about their experiences doing the BBST Foundations course. I recently wrote a successful proposal for five testers in my organisation to complete the BBST Foundations course in 2014. Below are a couple of generic pieces from the proposal that may help others who need to make a similar request.

The executive summary is largely pulled from the AST website, references are included. The benefits section is a subset of all benefits I had listed, capturing only those that I believe may be applicable across many organisations. Hopefully this post will save somebody time when requesting testing education for themselves or others.

Executive Summary

The Association for Software Testing offers a series of four-week online courses in software testing that attempt to foster a deeper level of learning by giving students more opportunities to practice, discuss, and evaluate what they are learning. [ref] The first of these courses is Black Box Software Testing (BBST) Foundations.

BBST Foundations includes video lectures, quizzes, homework, and a final exam. Every participant in the course reviews work submitted by other participants and provides feedback and suggests grades. [ref]

Too many testing courses emphasize a superficial knowledge of basic ideas. This makes things easy for novices and reassures some practitioners to falsely believe that they understand the field. However, it’s not deep enough to help students apply what they learn to their day-to-day work. [ref]

[organisation] seek deep-thinking testers with a breadth of knowledge. The BBST series of courses is an internationally recognised education path supporting that goal.

Benefits 

The immediate benefits to [organisation], which are realised as soon as the course starts, include:

  • Supporting testers who are passionate about extending their learning, which keeps these people invested in [organisation]
  • Connecting [organisation] testers with the international software testing community 

In the three months following course completion, the benefits include:

  • Re-invigorating participants to think deeply about the testing service they offer
  • Sharing knowledge with the wider organisation
  • Sharing knowledge with the market via thought pieces on the [organisation] website 

Within a year, the benefits of this proposal can be extended through further investment, including:

  • Offering the BBST Foundations course to a second group of participants, to retain momentum for the initiative and continue the benefits above
  • Extending the original BBST Foundations course participants to an advanced topic, to grow the skills of [organisation] testers in delivering high-value testing services



Saturday, 9 November 2013

Evolution of Testing

I've been involved in a Twitter discussion about the evolution of testing. Aleksis has been pushing me to think more about the comments I've made and now I find I need a whole blog post to gather my thoughts together.

Scary thoughts

The discussion began with me claiming it was a little scary that a 2008 Elizabeth Hendrickson presentation is so completely relevant to my job in 2013. Aleksis countered with a 1982 Gerald Weinberg extract that was still relevant to him. This caused me to rue the continued applicability of these articles. "I think testing is especially stuck. Other thinking professions must have a faster progression?"

It feels to me that innovative ideas in testing gain very little traction. Thoughts that are years or decades old are still referred to as new techniques. The dominant practice in the industry appears to have changed very little: writing manual test cases prior to any interaction with the application, then reporting progress as a percentage of test cases executed.

Changing programming

But we're not alone, right? Aleksis said "I don't know what thinking professions have progressed better. Programming?"

Yes, I think programming is evolving faster than testing. I'm not just talking about the tools or programming languages that are in use. I believe there has also been a shift in how code is written that represents a fundamental change in thinking about a problem.

Programming began in machine language at a very granular level, or even earlier with punch cards. When I worked as a developer 10 years ago the abstraction had reached a point where there was a focus on object-oriented programming techniques that created siloed chunks that interacted with each other. Now I see a step beyond that in the extensive use of third-party libraries (Boost C++) and frameworks (Spring). The level of abstraction has altered such that the way you approach solving a programming problem must fundamentally change.

Why do I call this evolution? Because I don't believe people are standing up and talking about object-oriented technique at programming conferences anymore. There are swathes of literature about this, and many people have achieved mastery of these skills. The problem space has evolved and the content of programming conferences has changed to reflect this. The output from thought leaders has moved on.

Dogma Danger

I think "evolution is when the advice of 30 years ago is redundant because it has become practice and new ideas are required." Aleksis countered "That's how you end up growing up with dogma."

I don't want people to take ideas on as an incontrovertible truth. But I do want people to change how they go about testing because they think an idea is better than what they currently do and they are willing to try it. I am frustrated because it feels that in testing we aren't getting anywhere.

Why aren't we seeing the death of terrible testing practice? Why is a model of thinking conceived in 1996 still seen as new and innovative? Why is the best paper from EuroSTAR and STARWEST in 2002 still describing a test management technique that isn't in wide use?

Moving the minority

James Bach chimed in on Twitter saying "Testing has evolved in the same way as programming: a small minority learns". Fair, but then everyone else needs to catch up, so that the minority can continue their journey before we're worlds apart. I haven't been in testing long, and I'm already tired of having the same conversations over and over again. I want to move on.

Is anyone else frustrated about this? How can we create sweeping change? How can we be sure the test profession will be genuinely different in another 10 years?

Friday, 25 October 2013

Catalyst for Curiosity

Yesterday I was asked to speak to a room of testers who all work at the same organisation. They were on a training course, and I was asked to visit for half an hour to speak about "Testing Mind Maps".

The room was varied. Four people did not know what a mind map was. One guy claimed to have been using mind maps for over 10 years and expressed some bitterness that the Bach brothers were internationally recognised for doing things that he had done first. With 30 minutes to speak and such a wide breadth of skill, it was difficult to say something valuable for everyone. So, I spoke for a bit, with a handful of people nodding along and others looking stricken.

Then my colleague posed a question to the group. "How is it that there's such a wide variety of knowledge in this room when you all work in the same place? What's your internal process for knowledge sharing?"

There was a moment of silence, followed by a flood of excuses.

"We used to do lunch time sessions, but then people got too busy and no one came"
"We have a wiki, but no one really uses it"
"We used to pair up junior and senior testers, but now our focus is delivery"
"We sit in project teams instead of a test team, so we don't see each other often"

This made me question my own experiences with knowledge sharing: pockets of knowledge spread across multiple repositories, accompanied by individual processes for access and sharing. Though rarely ignored entirely, this scatter-gun approach means that knowledge transfer is usually left to those who are eager to learn. A pull rather than a push. If we want to see better transfer of knowledge, we need to create curiosity.

When we talked about change at KWST3, there was a common experience among attendees of a catalyst that set them on a path of education. Curiosity comes from someone or something that makes you want to learn more. I like to imagine that a half-hour chat about mind maps will be a catalyst for some of my audience yesterday.

Who is creating the catalyst for curiosity where you work? How is your knowledge being shared?

Sunday, 13 October 2013

TDD on Twitter

Sometimes people ask me things on Twitter and it's too tricky to answer in 140 characters. Even though I'm not entirely sure how I ended up in this particular discussion thread, nor am I an expert in any of the areas I'm being asked about, I do have thoughts to share (as I often do). So let's talk TDD, or test-driven development.


Question for all. I have been hearing about TDD and unit testing a lot these days. Has the bug count reduced? I have to streamline the application I am working on. I am finding ways to do test automation. I am working on Selenium. But from the application design perspective I am also looking at ways with TDD. So the question: do you see any change in the issue count, if you have worked with a team following TDD?

My first thought is to wonder what we're comparing the issue count to. Presumably this is the first time we've written the software that's being tested, so it's difficult to know for sure whether a practice that includes TDD has actually resulted in fewer issues.

TDD is a practice that mandates that developers write tests before they write code. It is a fundamental mindset shift in the way someone goes about their development. I don't believe testers can practice TDD unless they are writing the actual application code.

TDD and unit testing are not the same thing. Unit tests are a result of TDD, but the practice of TDD is not the only way to get unit tests. Unit tests can be written after the code too. 
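
To make that distinction concrete, here is a minimal sketch of the TDD cycle in Java with JUnit. The class and behaviour are invented for illustration; the point is only the order of events, with the test written first and the production code written afterwards to make it pass.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Step one: write the test. At this point PriceCalculator does not
    // exist yet, so this cannot even compile, let alone pass.
    public class PriceCalculatorTest {

        @Test
        public void appliesTenPercentDiscountToOrdersOverOneHundred() {
            PriceCalculator calculator = new PriceCalculator();
            assertEquals(99.0, calculator.totalFor(110.0), 0.001);
        }
    }

    // Step two: write the simplest production code that makes the test pass.
    class PriceCalculator {
        double totalFor(double orderValue) {
            return orderValue > 100.0 ? orderValue * 0.9 : orderValue;
        }
    }

Exactly the same test written after PriceCalculator already existed would still be a perfectly good unit test; it just wouldn't be TDD.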

As testers, we should know, to a certain extent, what unit testing is in place, and what coverage these tests provide. And, generally, a good base of unit tests will give us a better starting point. The issues found in testing are fewer, because the developers have thought more about the code themselves. Generally.


I am implementing the best I can with regards to design and testing. Testing is worthless if design is not correct. So design is what I am looking at.

I'm not quite sure why TDD is being cited as a design practice, unless we're speaking specifically about the design of code. If you want to ask "are we building it right?" then TDD is the practice for you. If you want to ask "are we building the right thing?" then it is not (the classic distinction between verification and validation).


I have reservations with #TDD. Implementing TDD, the project deadlines also needs to be stretched. Right?

I think this depends on what your current development practices are. If unit tests are already in place, but they're being written after the code rather than before it, then implementing this practice shouldn't stretch out deadlines. If there's no unit testing, then development may start to take longer, but in theory at least you should see some corresponding benefits in testing as the code is delivered in a much better state.


Finally, if you're really curious about TDD, the Wikipedia page looks pretty comprehensive.

Experts of TDD, please voice any disagreement below...


Tuesday, 8 October 2013

Speak easy

A junior tester at work asked if I would mentor her. It's pretty exciting to be on the experienced side of a mentoring relationship and, simultaneously, terrifying that someone trusts your opinion. It was a little bizarre the first time she took notes as I was speaking...

Our session this month took an interesting turn when she asked for some tips on communicating in her team. It's her first placement in a team environment, as opposed to working as an individual or in a pair. The team she has gone into is primarily populated with men, all of whom are significantly older than her. As a recently graduated woman, she's a little uncertain. I remember the feeling well.

As I started to suggest strategies she could use in her attempts to communicate more often and with value, I realised how many processes I have for dealing with similar situations in my own environment. Here are some of my suggestions for communicating with people who have more experience, more confidence and a louder voice.

Post It Note Questions

Someone just said something I don't understand. I don't want to interrupt them immediately to ask what it means, but I also don't want to forget to ask. What to do?

I like to write the word or phrase on a post-it note. At the conclusion of the conversation, or meeting, I approach the individual who used the term and ask them to explain it to me. I then re-frame what they said, by writing my understanding of the term on a second post-it note, and have this checked by the person who delivered the explanation. If I've got it, I stick the definition on my monitor.

I find at the start of a project, my monitor gets pretty full. Sometimes the area between my keyboard and the monitor fills up too. But after a week or two, the terms have become familiar enough that I can start to weed out post-it notes; things I now know can go in the bin.

I like this as a process for discovering what things mean, and not having to ask twice.

Stupid Questions

No one likes asking a stupid question. "There's no such thing!" you cry. Sometimes it feels like there is.

One way I give myself confidence to start asking questions, particularly in meetings with a number of people in attendance, is to monitor how many of the questions that I wanted to ask end up being raised by someone else. I note down questions as they occur to me, and then listen as the topic evolves. Any of my questions that are asked by someone else, I put a star next to.

At the end of the first meeting I attend on a project, I may have 20 questions and only two have stars. This indicates to me that the majority of my questions represent learning that can happen outside of a meeting environment.

At the end of my fourth meeting, I may have 30 questions and 20 have stars. I feel this is positive feedback from the team (though they don't know they're providing it!) that I'm ready to start asking questions that will assist understanding and provoke discussion, rather than frustrate attendees and derail the agenda. Sure, I still send things off-track every now and then, but I have more confidence at this point that asking immediately is the best option.

Dealing with Dismissal

It can be easy to dismiss those who are young and new to the industry. Dismissal can also feel like the default reaction when ideas are confronting or uncomfortable for people to consider. I can get quite worked up when I feel that my voice isn't heard, but I can be quite terrible at thinking on my feet under pressure. There's nothing worse than thinking of the comeback two hours later.

One strategy I use to combat dismissal is to prepare my arguments in advance. I like to float an idea to one person first, and note down their criticisms. Then I retreat to my desk and think about how I can address those criticisms, either in the initial idea or in rebuttal, and go try out my refined spiel on someone else. By the time I'm voicing an opinion in a meeting, I've run it past several individuals and prepared responses to a variety of reactions. I've got the tools to express myself confidently.

After a few iterations of this, I can start to imagine the criticisms without visiting people at their desks. The way I present my ideas becomes more compelling as I learn how the people in my team are most receptive to hearing ideas.


Would you have any other tips?