Commercial viability

At the Canterbury Software Summit in Christchurch, Shaun Maloney talked about how technical excellence does not guarantee commercial success. You may have a beautifully implemented piece of software, but if you can't sell it then it may all be for naught.
Shaun shared his method for determining the commercial viability of an idea using three questions: Is it busted? Can we fix it? Should we fix it?
If the answer to all three questions is 'yes' then he believes that there is merit in pursuing the idea. If, at any stage, the answer is 'no' then the idea is abandoned.
Shaun visualised this in a very kiwi way using stockyard gates.
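The gate sequence can be sketched as a series of sequential go/no-go checks. This is my own illustration of the logic, not anything Shaun presented:

```python
# A minimal sketch of the three stockyard-gate questions as sequential
# go/no-go checks: a 'no' at any gate abandons the idea.

def commercially_viable(is_busted: bool, can_fix: bool, should_fix: bool) -> bool:
    """Return True only if the idea passes all three gates in order."""
    gates = [
        ("Is it busted?", is_busted),
        ("Can we fix it?", can_fix),
        ("Should we fix it?", should_fix),
    ]
    for question, answer in gates:
        if not answer:
            return False  # idea is abandoned at this gate
    return True

print(commercially_viable(True, True, True))   # all 'yes': pursue the idea
print(commercially_viable(True, True, False))  # a 'no': abandon
```

The point of the ordering is that each gate is only worth asking if the previous one was passed.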
Moments of doubt, desire and dissatisfaction

Also at the Canterbury Software Summit, Andy Lark spoke about reimagining business by addressing the moments of doubt, desire and dissatisfaction experienced by customers. He gave an example of how Uber succeeded in the taxi market by solving a moment of doubt: showing users the location of their taxi. [ref]
I think testing is ripe for reinvention. If talking about how things are broken isn't working, perhaps we'd have better success in focusing on the areas where people experience moments of doubt, desire or dissatisfaction?
Strategic priorities

At the Agile Testing Alliance Global Gathering in Bangalore, Renu Rajani shared some information from the World Quality Report 2015-2016. This report is compiled by Capgemini, HP and Sogeti. This year they interviewed 1,560 CIOs and IT and testing leaders from 32 countries.
The results of one particular question struck me. When asked "What is the highest QA and testing strategic priority?" on a scale of 1 to 7, with 7 being the most important, they responded:
Source: World Quality Report 2015-2016
For this group, detecting software defects before go-live was the fifth highest priority.
When I reflect on how I frame change to management, I often talk about how:
- we will find more defects,
- the defects we discover will be more diverse and of a higher severity,
- testing will be faster, and
- we will reduce the cost of testing through a more pragmatic approach.
I have never spoken about corporate image, how we increase quality awareness among all disciplines, or how we improve end-user satisfaction.
My arguments have always been based on assumptions about what I believe to be important to my audience. This data from the World Quality Report has made me question whether my assumptions have been incorrect.
Sound bites

At WeTest Weekend Workshops in Auckland, John Lockhart gave an experience report titled "Incorporating traditional and CDT in an agile environment". During his talk I was fascinated by the way he summarised the different aspects of his test approach. To paraphrase based on my notes:
"ISTQB gives you the background and history of testing, along with testing techniques like boundary analysis, state transition diagrams, etc. CDT gives you the critical thinking side. Agile gives you the wider development methodology."
Would "the critical thinking side" be something close to your single sentence statement about context-driven testing? What would you say instead? John's casual remark made me realise that I may be diminishing the value of my ideas when I summarise them to others.
Putting the pieces together

I see an opportunity to create a more compelling narrative about change in testing.
I'm planning to stop arguing that testing is broken. Instead I'm going to start thinking about the moments of doubt, desire and dissatisfaction that exist in our test approach.
I'm planning to stop talking about bugs, time and money. Instead I'm going to start framing my reasoning in terms of corporate image, increasing quality awareness among all disciplines, and improving end-user satisfaction.
I'm planning to stop using impromptu sentences to summarise. Instead I'm going to start thinking about a sound bite that doesn't diminish or oversell.
Will you do the same?
Good post, very thoughtful.
If you look at the report you'll see the opinions of a specific audience. The groups mentioned here are senior management, who have different interests from the people carrying out the testing. Plus they're likely to work in large organisations, as Capgemini et al. tend to work with those. So there's a filter we're looking through.
However, this is where a lot of the money in testing goes, so we cannot ignore it.
What I find even more important than talking about testing in a different way is thinking about it from another perspective. Depending on our audience, our language can and should differ.
I like your description about desire, doubt and dissatisfaction as triggers for new thoughts.
To answer your last question: I won't do exactly the same, but something very similar.
Thanks for making me think.
As Thomas suggests, there are many conversations about testing, and many audiences.
Critical analysis of the World Quality Report is one of my favourite annual tasks, and a source of continuing entertainment. It includes news like these pull-quotes: "We have decided to go for a structured approach in order to gain full control of all our testing data. —QA Manager, Telecom, Germany" and "We have taken huge efforts to generate a suitable testing environment. —IT Director, Retail, Germany". Wolfgang Pauli would describe stuff like this as "not even wrong". I'd treat the World Quality Report with a grain, nay, a substantial dose of salt and critical thinking. Ask "What would Laurent Bossavit or Edward Tufte make of this graphic?"
Notice that these are "executive management objectives with QA and testing", which conflates many different sets of ideas, mandates, and tasks. Notice the mischief in the illustration as it emphasizes the ranking of the six categories with the filled-in magnifying glasses, not the values beside the images—all of which are clustered within four percentage points (from 5.8 to 6.1 on a 1-7 scale), well within the statistical margin of error for a sample of this size. In other words, the respondents to the survey are giving motherhood and apple pie answers; all six of the cited categories are, in essence, "really important" without differentiation between any of them. Note that all of the organizations surveyed employed 1,000 people or more—and the source of the data is "self-completed Web-based interviews" (so far as I can tell, questionnaires on a Web page) and "telephone interviews...assisted by a professional market researcher" ("one for completely agree, two for strongly agree..."). Notice that the companies that organize this survey are vendors of expensive tools and consulting services in the "QA and Testing" space—and are locked into the very kind of testing that I suspect you would agree is overdue for a change. In other words: the World Quality Report is a marketing document. That's Part One.
Here's Part Two. Now, it's true that CIOs care about things that will get the organization into the newspapers; that will affect profit and loss; or that will make them appear to be behind the competitive curve. At the same time, CIOs are rarely interested in the problems and threats to value that line managers have to deal with. Jerry Weinberg, in Quality Software Management, Vol. 2: First Order Measurement (or the e-book version of the first half, entitled How to Observe Software Systems) presents an interesting little exercise. He asked the same set of self-assessment survey questions (this set from the Software Engineering Institute) to VPs, directors, managers, and engineers. The difference in positive answers is remarkable, showing a profound difference in perspective on quality matters. On the self-assessment questions, the highest-level managers supplied 65 Yes (positive) answers; the engineers supplied 12. As Jerry points out, "the managers may believe, for example, that issuing orders for 'regular technical interchanges with the customer' means that those interchanges will actually take place. The engineers know whether these exchanges really happen."
With reference to what you're planning to talk about: it seems to me that your categories are not mutually exclusive, and it's good to think and talk about them all. I can say this with great assurance, having been a project manager myself: on a project, bugs, time and money are specific and serious concerns of line management. So if you're a tester working on a project, it would probably be a good idea to keep talking about bugs. Everyone on the team has a role to play in producing a quality product. You've probably been hired for your skill in finding and reporting bugs that threaten the quality of the product. While talking about those, it might also be a good idea to talk about ways to find them early on, when they're not so serious, in order to prevent them from becoming more serious. It would probably be a really good idea in general to frame your reasoning about bugs in terms of corporate image, increasing quality awareness among all disciplines, and improving end-user satisfaction. Similarly, you can practice sound bites that don't diminish or oversell, and also practice using impromptu sentences to summarise; and you can provide evidence that testing is broken by talking about the moments of desire, doubt and dissatisfaction that exist in our test approach. And you can tailor all of this discourse to the appropriate audience: to your co-workers, your colleagues, your manager, and your executives.
Really good points. It's all about the context. There's a great presentation that Keith Klain does called 'How to Talk to a CIO About Testing...' that really brought that home to me. Different messages for different people.
That said, I do also love a good read and critique of the World Quality Report as much as the next man :)