Wednesday, 20 November 2013

Mind Maps and Automation

I've written in the past about the risk of relying solely on automated checking in release decisions. On a recent project I had great success in changing how the automated test results were used by including mind maps in the generated reports.

Technical background

We were developing a web application using Java, which was built using Maven. I created a separate module in the same source repository as the application code for the automated checks. It used Concordion to execute JUnit tests against the application in a browser using Selenium WebDriver. The suite executed from a Maven target, or tests could be run individually via the IDE. I had a test job in our Jenkins continuous integration server that executed the tests regularly and produced an HTML report of the results using the HTML Publisher Plugin.
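
As a rough sketch of how these pieces fitted together (the class name, package, element IDs and URL below are invented for illustration, not taken from the real project), a Concordion fixture driving the application through WebDriver might look something like this:

    package spec;

    import org.concordion.integration.junit4.ConcordionRunner;
    import org.junit.runner.RunWith;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    // Concordion pairs this fixture with a Login.html specification in the
    // same package; instrumented HTML in the spec calls the method below.
    @RunWith(ConcordionRunner.class)
    public class LoginFixture {

        public String logInAs(String username, String password) {
            WebDriver driver = new FirefoxDriver(); // drives a real browser
            try {
                driver.get("http://localhost:8080/sample-app/login"); // illustrative URL
                driver.findElement(By.id("username")).sendKeys(username);
                driver.findElement(By.id("password")).sendKeys(password);
                driver.findElement(By.id("login")).click();
                // The specification asserts against the text returned here.
                return driver.findElement(By.id("welcome")).getText();
            } finally {
                driver.quit();
            }
        }
    }

The matching Login.html specification sits alongside the fixture on the classpath and calls logInAs through Concordion's instrumentation; the Jenkins job simply runs the module's Maven test target and publishes the resulting HTML.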

What does it look like?

Below are the results for a Sample Application*. The job in Jenkins is green, and more information about this success can be found by hovering over the job and clicking through to the HTML report.



My past experience was that management engage with the simple green/red feedback from Jenkins more than with any other type of test reporting. Rather than continuing to fight this, I decided to change what it could tell them. There will always be aspects of functionality where it does not make sense to add an automated check, bugs that fall beyond the reach of automation, and decisions about scope that cannot be easily captured by automation alone. I wanted to communicate that information in the only place where management were listening.

The report is designed to be accessible to a non-technical audience and includes a preamble to explain the report structure. The entry point provides a high-level visual overview of what testing has occurred, not just the results of automation. I found that the scrum master, product owner and project manager didn't drill further into the report than this. This is essentially a living test status report that does not contain any metrics.


Each feature was highlighted to reflect a high-level status of the checks executed within that space (that's why Sample Feature above is green). It linked to living documentation, as we used Specification by Example to define a set of examples in the Gherkin format for each story.
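
For illustration only (this feature and its behaviour are invented, not drawn from the real application), one such example might read:

    Feature: Sample Feature
      In order to access my account
      As a registered user
      I want to log in to the application

      Scenario: Successful login with valid credentials
        Given I am a registered user
        When I log in with a valid username and password
        Then I am greeted by name on my account page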



Although written in plain English with a business focus, these specifications were rarely accessed by management. Rather, they were used extensively by the business analysts and developers to review the behaviour of the application and the automated checks in place to verify it. The business analysts in particular would regularly provide unsolicited feedback on these examples, which is indicative of their engagement.

Having different levels of detail accessible in a single location worked extraordinarily well. The entire team were active in test design and interested in the test results. I realised only recently that this approach creates System 1 reporting for mind maps without losing the richer content. It's an approach that I would use again.

What do you think?

__

* This post was a while in coming because I cannot share my actual test reports. This is a doctored example to illustrate what was in place. Obviously...

Saturday, 16 November 2013

Tell me quick

I've been thinking a lot about what stops change in test practice. One of the areas where I think we encounter the most resistance is in altering the type of reporting that management use to monitor testing.

There's a lot of talk in the testing community about our role being the provision of information about software. Most agree that metrics are not a good means of delivering information, yet management seem to feel they have to report upwards with percentages and graphs. Why do they feel this way?

When testers start richer reporting, managers have to make time to think about and engage with the information. By contrast, numbers are easy to draw quick conclusions from, whether those conclusions are correct or not. It doesn't take long to scan through a set of statistics and then forward the report on.

I have recently switched to a role with a dual focus on delivery and people management. I've been surprised by how many additional demands there are on my time. Where a task requires me to flip into System 2 thinking, a slower and more deliberate contemplation, that task gets parked until I know I have a period of time to focus on it.

When I don't get that period of time, these types of task build up. I end up with emails from days ago that await my attention. They sit in my inbox, quietly mocking me. I don't enjoy it; it makes me feel uncomfortable and somewhat guilty.

Imagine those feelings being associated with a new test practice.

In my local community there is currently a focus on using mind mapping software to create visual test reporting (shout out to Aaron Hodder). Having used this approach in my delivery of testing, I find it a fantastic way to show coverage against my model and I feel a level of freedom in how I think about a problem.

For managers involved in delivery, a mind map allows for a fast and instinctive assessment of the progress and status of testing (System 1 thinking). But for people not involved in the project day-to-day it is not that easy.

Every tester will use mind mapping software differently. The structure of their thinking will differ from yours. The layout won't be as you expect. The way they represent success, failure and blockers will vary. Further, if you haven't seen a mind map evolve and you don't know the domain, it's going to be a challenge to interpret. You'll need to think about it properly, and so the report gets filed in that System 2 pile of guilt.

I don't want to report misleading metrics, but I don't think we've found the right alternative yet. I don't want to abandon the new way that we are working, but one reason it is failing is that we don't have a strong System 1 reporting mechanism for external stakeholders.

To make change stick we need to make it accessible to others. We must be able to deliver test information in a way that can be processed and understood quickly. How?

I'm thinking about it; I'd love your help.

Thursday, 14 November 2013

BBST Business Benefits

Numerous people have blogged about their experiences doing the BBST Foundations course. I recently wrote a successful proposal for five testers in my organisation to complete the BBST Foundations course in 2014. Below are a couple of generic pieces from the proposal that may help others who need to make a similar request.

The executive summary is largely pulled from the AST website; references are included. The benefits section is a subset of all the benefits I had listed, capturing only those that I believe may be applicable across many organisations. Hopefully this post will save somebody time when requesting testing education for themselves or others.

Executive Summary

The Association for Software Testing offers a series of four-week online courses in software testing that attempt to foster a deeper level of learning by giving students more opportunities to practice, discuss, and evaluate what they are learning. [ref] The first of these courses is Black Box Software Testing (BBST) Foundations.

BBST Foundations includes video lectures, quizzes, homework, and a final exam. Every participant in the course reviews work submitted by other participants and provides feedback and suggests grades. [ref]

Too many testing courses emphasize a superficial knowledge of basic ideas. This makes things easy for novices and leads some practitioners to falsely believe that they understand the field. However, it's not deep enough to help students apply what they learn to their day-to-day work. [ref]

[organisation] seek deep-thinking testers with a breadth of knowledge. The BBST series of courses is an internationally recognised education path supporting that goal.

Benefits 

The immediate benefits to [organisation], which are realised as soon as the course starts, include:

  • Supporting testers who are passionate about extending their learning, which keeps these people invested in [organisation]
  • Connecting [organisation] testers with the international software testing community 

In the three months following course completion, the benefits include:

  • Re-invigorating participants to think deeply about the testing service they offer
  • Sharing knowledge with the wider organisation
  • Sharing knowledge with the market via thought pieces on the [organisation] website 

Within a year, the benefits of this proposal are supported by further investment, including:

  • Offering the BBST Foundations course to a second group of participants, to retain momentum for the initiative and continue the benefits above
  • Extending the original BBST Foundations course participants to an advanced topic, to grow the skills of [organisation] testers to deliver high-value testing services



Saturday, 9 November 2013

Evolution of Testing

I've been involved in a Twitter discussion about the evolution of testing. Aleksis has been pushing me to think more about the comments I've made, and now I find I need a whole blog post to gather my thoughts together.

Scary thoughts

The discussion began with me claiming it was a little scary that a 2008 Elizabeth Hendrickson presentation is so completely relevant to my job in 2013. Aleksis countered with a 1982 Gerald Weinberg extract that was still relevant to him. This caused me to rue the continued applicability of these articles. "I think testing is especially stuck. Other thinking professions must have a faster progression?"

It feels to me that innovative ideas in testing gain very little traction. Thoughts that are years or decades old are still referred to as new techniques. The dominant practice in the industry appears to be one that has changed very little: a practice of writing manual test cases prior to interaction with the application, then reporting progress as a percentage of test cases executed.

Changing programming

But we're not alone, right? Aleksis said "I don't know what thinking professions have progressed better. Programming?"

Yes, I think programming is evolving faster than testing. I'm not just talking about the tools or programming languages that are in use. I believe there has also been a shift in how code is written that represents a fundamental change in thinking about a problem.

Programming began in machine language at a very granular level, or even earlier with punch cards. When I worked as a developer 10 years ago, the abstraction had reached a point where the focus was on object oriented programming techniques that created siloed chunks that interacted with each other. Now I see a step beyond that in the extensive use of third party libraries (Boost C++) and frameworks (Spring). The level of abstraction has altered such that the way you approach solving a programming problem must fundamentally change.
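
To make that shift concrete, here is a small, hypothetical Java sketch (invented for this post, unrelated to any project mentioned here) that reads the lines of a file by hand and then delegates the same job to a library call:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.List;

    public class AbstractionShift {

        // Older style: you own the resource handling and the loop yourself.
        static List<String> readLinesByHand(String path) throws IOException {
            List<String> lines = new ArrayList<String>();
            BufferedReader reader = new BufferedReader(new FileReader(path));
            try {
                String line;
                while ((line = reader.readLine()) != null) {
                    lines.add(line);
                }
            } finally {
                reader.close();
            }
            return lines;
        }

        // Library style: state what you want; the plumbing is abstracted away.
        static List<String> readLinesWithLibrary(String path) throws IOException {
            return Files.readAllLines(Paths.get(path), StandardCharsets.UTF_8);
        }
    }

Neither version is difficult, but the second frames the problem as a single intention rather than a sequence of mechanics, and frameworks like Spring extend that framing across an entire application.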

Why do I call this evolution? Because I don't believe people are standing up and talking about object oriented technique at programming conferences anymore. There are swathes of literature about this, and many people have achieved mastery of these skills. The problem space has evolved and the content of programming conferences has changed to reflect this. The output from thought leaders has moved on.

Dogma Danger

I think "evolution is when the advice of 30 years ago is redundant because it has become practice and new ideas are required." Aleksis countered "That's how you end up growing up with dogma."

I don't want people to take ideas on as an incontrovertible truth. But I do want people to change how they go about testing because they think an idea is better than what they currently do and they are willing to try it. I am frustrated because it feels that in testing we aren't getting anywhere.

Why aren't we seeing the death of terrible testing practice? Why is a model of thinking conceived in 1996 still seen as new and innovative? Why is the best paper from EuroSTAR and STARWEST in 2002 still describing a test management technique that isn't in wide use?

Moving the minority

James Bach chimed in on Twitter saying "Testing has evolved in the same way as programming: a small minority learns". Fair, but then everyone else needs to catch up, so that the minority can continue their journey before we're worlds apart. I haven't been in testing long, and I'm already tired of having the same conversations over and over again. I want to move on.

Is anyone else frustrated about this? How can we create sweeping change? How can we be sure the test profession will be genuinely different in another 10 years?