Wednesday 16 December 2015

Hiring for skills and team diversity

Over the past six months, I've been involved in a lot of recruitment activities. To illustrate, when I look at the length of time that our testers have been in their roles, just over half have been hired very recently:


The majority of this recruitment has been due to growth. In April 2015 we had 20 testers, while in February 2016 we'll have 29 testers. We've also had a bit of churn due to internal promotions, maternity leave and people leaving the organisation.

I enjoy getting involved in recruitment, because I enjoy shaping a team. There are a couple of community building strategies that I've advocated for during our recruitment over the past few months that I think have had a positive impact on our testing culture.


Skills Diversity

Prior to my arrival, the hiring managers had collaboratively created a list of skills and experience for a tester. These included a minimum number of years in a testing role, a minimum number of years' experience using our specific automation tools, experience in an agile development team, and preferably banking domain knowledge too.

In the New Zealand market, people who fit all of these criteria are very rare. The kakapo of testing.

Though I agreed that we needed all the skills we were seeking as part of the testing competency, I didn't think we needed them all to sit with each individual. Instead I wanted to broaden our hiring scope and choose people with complementary strengths. I felt this would still achieve the same end goal of a cross-functional testing team.

I examined the testing capabilities within each tribe (a collection of agile teams who work on the same product) against four experience criteria: testing, automation, agile and banking. Through consultative subjective assessment of the existing testers, we identified the most important skills that we needed to look for in each hiring iteration. We've started to select candidates based on these priorities rather than looking for a person who has everything.



Team Diversity

As editor of Testing Trapeze and as a co-organiser of WeTest, I'm constantly conscious of accurately representing the community behind the forums through the voices we elevate. Similarly, in my coaching role, I feel a responsibility to foster diversity in the team that I support.

In February 2016, with all our new starters on board, our testing team should look something like this:




It may not be perfect, but I'm proud of the diversity that image reflects.

Prior to my arrival, there were very few testers with fewer than three years' testing experience. I've been vocal in my arguments for hiring juniors as part of a balanced recruitment strategy. I'm happy with how the Testing Experience graph has shifted over the past six months, with approximately a quarter of the team now considered to be junior staff.

It's probably also worth noting that the testers aged in their 20s and the testers with 0 - 3 years of experience are not the same group of people, despite their identical pie graph segment sizing. We've hired inexperienced testers across a wide age range.

I don't use any conscious strategies for gender or ethnicity, and our ratios in these areas are largely unchanged, but I feel they're relatively representative of our local community.

*****

The biggest benefit that I've observed in hiring for skills and team diversity is that it promotes a culture of learning. Where people bring different strengths and different experiences, it creates opportunities to learn from one another.

The exchange of knowledge is personal and hands-on, rather than coming from online courses or printed materials. This means that, in addition to imparting new skills or ideas, we improve collaboration between testers across teams and foster a sense of testing community.

I believe in hiring for a team, not a role. Look at the whole picture and consider the tenets of diversity.

Thursday 10 December 2015

Why you should hire junior testers

I am a vocal advocate for hiring junior testers into our team. By junior, I mean a person with no experience in testing, regardless of age or other work history.

I've been having a lot of conversations recently about why I believe we should hire juniors as part of a balanced hiring strategy. I'm curious to know how these points align to the thoughts of others who are involved in recruitment.

Attitude

I look for junior applicants who want to get into a testing role, in an agile environment, where they'll have the opportunity to pick up some automation skills, in the financial sector. In other words, I look for junior applicants who want the role that I'm advertising as opposed to any role at all.

These juniors distinguish themselves by being prepared for their interviews, by having a series of questions about the role that indicate they've considered what the position will require them to do, and by demonstrating the ways that they have started their own study towards entering the profession. They make their desire plain to see.

Hiring this sort of junior brings a driven individual into your organisation who is motivated and passionate about learning. In a supportive environment, this kind of attitude is infectious. Though juniors may not bring many skills, I believe they bring a unique thirst for knowledge that can revitalise the desire to learn in the people around them.

Challenge

Hiring a junior into a role that they don't yet have the skills to perform will obviously provide them with a huge number of challenges. However it also introduces challenge to the roles of those who will support the junior in their learning. I believe this is a good thing.

When a junior starts with the organisation, we pair them with a senior buddy. The pair usually work in the same agile development team, sitting alongside one another day-to-day, for easy and contextual knowledge transfer.

In a relatively flat organisation structure, being a buddy to a new starter is one way for our senior testers to get experience in mentoring and teaching. It's a responsibility that challenges our seniors to take ownership of developing a junior, by offering them new experiences and imparting their knowledge. Without juniors, we cannot offer this leadership challenge to our senior staff.

Loyalty

I don't like hiring people who can already do a job comfortably on Day One. I think these are the people who get bored and leave within a relatively short period of time or, worse, they are happy to stagnate and occupy their role without developing themselves.

The learning opportunities for a junior are the greatest of any type of hire. Their development path should be long and rewarding. It's growth that makes a role exciting, and creates loyalty between an employee and their organisation. A junior will feel grateful for being given the chance and support to shine.

I believe that loyalty, paired with a healthy work environment, leads to lower staff turnover. I want to retain our testers. Hiring juniors feels like a great way to extend the period of time that people stay with our team.

Potential

A junior candidate is, in many respects, an empty vessel. There are no bad habits to break, no misconceptions to correct, no terminology to redefine. Starting from scratch can be easier than cleaning up before you begin.

Juniors are comfortable asking questions, because they know that there's an expectation that they will have to ask to learn. They bring very few assumptions, because they don't have any prior experience that taints their perspective. They are great at thinking laterally, because they've never had their ideas confined to a particular box.

I believe that junior candidates have massive potential to be amazing testers. Providing an environment to turn this potential into a reality is essential, but their clean slate can be viewed as an asset for a testing role.


I'd like to see more organisations hiring junior testers, not as a strategy to cheapen or deskill their workforce but rather as an investment to develop the next generation of testers. There is potential for junior hires to have vast positive impact and shape the future. Let's give them the opportunity.

Tuesday 24 November 2015

Testing for Non-Testers Pathway

This pathway is designed to help non-testers tackle testing activities. If you're asked to test something in your team, this is a set of practical resources to help you.

There are a variety of steps that you may approach linearly or by hopping about to those that interest you most. It is a little different to the other testing pathways in that it:
  • Is specifically for non-testers
  • Is designed to help people with immediate questions like "How do I decide what to test?"
  • Does not include exercises, instead assuming that the provided resources will be applied in practical situations within your development team

STEP - Why non-testers should be involved in testing

Agile development teams generally seek shared ownership of quality. In order to achieve this, the tester may have to yield some control and others in the team may need to be more willing to pick up testing activities. These teams want to move away from the tester as the only person who tests, towards an environment where the tester leads testing and empowers others to contribute too. These articles describe the shift in thinking towards testing as a shared activity:

STEP - How to make an application testable

For developers and business analysts, the easiest first step in aiding testing of the product is to understand the attributes that make it testable. Changing the way you specify solutions and design software can have a significant impact on a tester's ability to verify and explore what is delivered. Learn to consider the various facets of testability:

STEP - How to decide what to test

When asked to pick up a testing task, the non-tester may wonder where to begin. Testers are often poor at explaining how they test an application, which can make testing seem like magic. In fact, testers will have their own set of testing heuristics, whether they can articulate them or not! Fortunately there are resources that provide common test heuristics to help you determine which tests you'd like to perform, or prompt you to think of your own ideas beyond these boundaries:

STEP - How to document your test ideas

One deterrent to picking up testing may be your perception of the amount of documentation required. Within your development team, the testers are likely to have an approach that you can adopt. However, there may be freedom for you to use lightweight documentation to map out your thinking; to simplify what you need to do, a tester might transfer your results into the wider ecosystem of test artifacts afterwards. These articles give practical examples of using mind mapping software for test planning and execution:

STEP - What to think about while testing

Testing isn't just about picking up and blindly applying the heuristics of others. When interacting with the software you may also want to consider what user persona to adopt, what bugs to raise, and what test evidence to collect. Though many of these decisions will be driven by collaborative interaction with your development team, these resources may help you understand the possible compromises being made and approaches that are possible:

STEP - How to debrief

A key aspect of getting non-testers to pick up test activities is following up with a post-testing debrief. This provides an opportunity for the tester and the non-tester to sit together and spend a few minutes discussing the testing activity that has occurred. These resources provide common questions that may be asked during a debrief:

STEP - Where to automate

Finally, you may be asked to contribute towards automation. Agile teams will usually require automated checks to support their rapid delivery cycles so that testers have time to understand new functionality, explore the application and find interesting problems. These articles describe factors to consider when determining what to automate, where to implement and how to interpret the results:
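
If you haven't seen one before, an automated check in its simplest form looks something like the sketch below, written with pytest against an invented function. The business rule and values are placeholders for illustration; your own checks will target your product's code or interfaces.

# A minimal automated check, assuming pytest is installed (pip install pytest).
# The function under test and its expected behaviour are hypothetical examples.

def calculate_discount(order_total, is_loyalty_member):
    """Hypothetical business rule: loyalty members get 10% off orders over $100."""
    if is_loyalty_member and order_total > 100:
        return round(order_total * 0.9, 2)
    return order_total


def test_loyalty_member_discount_applied():
    # A check encodes one specific expectation so it can run on every build.
    assert calculate_discount(200.00, is_loyalty_member=True) == 180.00


def test_no_discount_for_small_orders():
    assert calculate_discount(50.00, is_loyalty_member=True) == 50.00

A check like this encodes a single expectation that can run on every build, which is part of what frees testers to spend their time exploring.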

Friday 13 November 2015

Using strong-style pairing and a coding dojo for test automation training

At work we're implementing a brand new automation suite for one of our internet banking applications. This is the first framework that I've introduced from a coaching perspective as opposed to being the tester implementing automation day-to-day within a delivery team.

Aside from choosing tools and developing a strategy for automation, I've discovered that a large proportion of the coaching work required is to train the testers within the teams in how to install, use and extend the new suite.

I've done a lot of classroom training and workshops before, but I felt that these formats weren't well suited to teaching automation. Instead I've used two practices that are traditionally associated with software development rather than testing: strong-style pairing and a coding dojo.

I've been surprised at how well these practices have worked for our test automation training and thought I would share my experience.

Strong-style pairing

After a series of introductory meetings to explain the intent of the new suite and give a high-level overview of its architecture, each tester worked independently using the instructions on our organisation wiki to get the tests running on their local environment.

As the testers were completing their installations, I worked in parallel to create skeleton tests with simple assertions in different areas of the application, one area per tester. To keep the training as simple as possible I wanted to split out distinct areas of focus for individual learning and reduce the potential for merge conflicts of our source code.
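
Our actual tools aren't the point here, but as an illustration only, a skeleton test with a simple assertion might look something like this sketch. The tooling (pytest and Selenium), the URL, the locator and the expected text are all stand-ins rather than details of our real suite.

# A sketch of a skeleton test with a single simple assertion, in the spirit of the
# starting points described above. Assumes pytest and Selenium are installed; the URL
# and expected heading are placeholders, not the real banking application.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By


@pytest.fixture
def driver():
    browser = webdriver.Chrome()
    yield browser
    browser.quit()


def test_payments_page_loads(driver):
    # One area of the application per tester: this skeleton covers "Payments".
    driver.get("https://test.example.com/payments")  # placeholder environment URL
    heading = driver.find_element(By.TAG_NAME, "h1")
    assert heading.text == "Payments"  # simple assertion to expand on in the pairing session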

As they were ready, I introduced an area to each tester via individual one hour pairing sessions using strong-style pairing. The golden rule of strong-style pairing is:

"for an idea to go from your head into the computer it MUST go through someone else's hands"

For these sessions I acted as the navigator and the tester who I was training acted as the driver. As the testers were completely unfamiliar with the new automation suite, strong-style pairing was a relatively comfortable format. I did a lot of talking, while the testers themselves worked hands-on, and together we expanded the tests in their particular area of the application.

As the navigator, I prepared for each pairing session by thinking up a series of objectives at varying degrees of difficulty to accommodate different levels of skill. My overarching goal was to finish the hour with a commit back to the repository that included some small change to the suite, which was achieved in two-thirds of the sessions.

As a coach, I found these sessions really useful to judge how much support the testers will require as we progress from a prototype stage and attempt to fulfill the vision for this suite. I now have a much more granular view of where people have strengths and where they may require some help.

I had a lot of positive feedback from the testers themselves. For me the success was that many were able to continue independently immediately following the session and make updates to the tests on their own.

Coding Dojo

At this point everyone had installed the suite individually, then had their pairing session to get a basic understanding of how to extend an existing test. The next step was to learn how to implement a new test within the framework.

I felt that a second round of individual pairing would involve a lot of needless repetition on my part, explaining the same things over and over again. Ultimately I wanted the testers in the team to start pairing with each other to learn collaboratively as part of our long-running pairing experiment.

I found a "how do you put on a coding dojo?" video and decided to try it out.

I planned the dojo as a two hour session for six testers. I decided to allow 90 minutes for coding, with 15 minutes on each side for introduction and closing activities. Within the 90 minutes, each of the six testers would have 15 minutes in the navigator/co-pilot role, and 15 minutes at the keyboard in the driver/pilot role.

I thought carefully about the order in which to ask people to act in these roles. I wanted to start with a confident pilot who would put us on the right course. I also wanted the testers to work in the pairs that they would work in immediately following the session to tackle their next task. So I created a small timetable. To illustrate with fictitious testers:



On the morning of the session I sent an email out to all the participants that reiterated our objective, shared the timetable, and explained that they would not require their own laptops to participate.

We started the session at 1pm. I had my laptop prepared, with only the relevant applications open and all forms of communication with the outside world (email, instant messaging, etc.) switched off. The laptop was connected to a projector and we had a large flipchart with markers to use as a shared notes space.

I reiterated the content of the morning email and shared our three rules:

  • The facilitator asks questions and doesn't give answers
  • Everyone must participate in the code being written
  • Everyone must take a turn at the keyboard

Then I sat back and watched the team work together to create a new test!

Though I found it quite challenging to keep quiet at times, I could see that the absence of a single authority was getting the group to work together. It was really interesting to see the approach taken, which differed from how I thought they might tackle the problem. I also learned a lot more about the personalities and social dynamics within the team by watching the way they interacted.

It took almost exactly 90 minutes to write a new test that executed successfully and commit it back to the repository. Each tester had the opportunity to contribute and there was a nice moment when the test passed for the first time and the room collectively celebrated!

I felt that the session achieved the broader objective of teaching all the testers how to implement a new test, and provided enough training so that they can now work in their own pairs to repeat the exercise for another area of the application.

I intend to continue to use both strong-style pairing and coding dojos to teach test automation.

Tuesday 10 November 2015

How to develop into a great speaker

Your first conference talk will give you exposure to writing an abstract, marketing your ideas, creating engaging slides, structuring a talk, speaking clearly, keeping to time, and so on. The more talks you do, the more experience you gain in those same activities.

But developing as a speaker is not just about opportunity for repetition. I see a change in speakers when they stop thinking so much about what they're going to say during their talk and start concentrating on how they're going to say it.

By that I mean, when a great speaker is on the stage their content is almost on autopilot. They're not worried about the points they need to cover on the next slide. Rather they're more aware of their delivery. They're operating at a meta-level.

So, what sort of things are these great speakers thinking about?

The audience

A new speaker is learning to feel comfortable making eye contact with the audience. A great speaker is learning to anticipate mood, read body language, understand the response of the audience, and adapt their presentation to react to that environment.

Look at the schedule for the event. How long ago did people have food? How many concurrent sessions have the participants sat through? Whereabouts in the day are you? Whereabouts in the conference? Think about these factors as you watch the audience arrive in the room. Is there energy or will you have to create it?

How do people sit down? Do you see many people with their arms folded or their legs crossed? How many people have an open posture and are tilted forwards in their seats? Are people receptive to your ideas or will you have to establish your credibility and use persuasive language?

When you change the slide in your presentation, how many eyes travel to the slide and then back to you as the speaker? Who isn't making eye contact with you at all, looking downwards, or out the window? Are people engaged or do you have to ask for their participation to draw them in?

Are there people who are still reading a slide when you switch to the next one? Are people zoning out between slide transitions? Look for signs of frustration, like people sighing or pulling out their phones. These may be indicators that you need to change your pace.

The audience won't change the key messages of a presentation, but a great presenter will allow them to massively influence its delivery.

The commentary

A new speaker will often include remarks in their own presentations that are about their delivery rather than about their content. It's analogous to playing the director's commentary over a movie; however, in a conference presentation these remarks are often apologetic or self-deprecating.

Do you start your talk by undermining your own credibility or wondering aloud whether you're really an authority on your topic? Do you question the choices of the organisers who granted you a spot on the stage, or the choices of the audience for selecting your session to attend over other alternatives? Do you tell people how inexperienced you are at speaking or how nervous you feel about this presentation? Do you apologise for fumbling content or narrate your disappointment in failed technology?

I believe that eliminating this sort of commentary from your presentation creates a perception that you're a very confident speaker, regardless of how confident you actually feel.

I have a heuristic to mute this doubt track within my own presentations. When thinking about whether or not to vocalise something I like to first consider who I'm saying it for. Often the commentary is stuff I say to make myself more comfortable, or to settle myself in to the beginning of my presentation. It's not for the audience. And if it's not for the audience, I shouldn't say it.

The language

Related to audience, the same presentation may differ wildly in delivery through choice of language. Can you interact informally, use colloquialisms, and make jokes? Or should you take a more professional tone?

Choice of words is influenced by environment. Are you speaking at a MeetUp event or a formal conference? It's also influenced by culture. Your references and examples may change between a talk in New Zealand versus a talk in India.

You might also consider whether jargon is appropriate for your audience. What terms will they be familiar with, and which should you explain rather than leaving them as acronyms?

Adjusting to these factors can make a big difference in how accessible and relatable your presentation is. A great speaker can deliver the same slides twice, in two different contexts, with what may feel like an entirely different speech. The key messages don't change, yet the words are altered for the greatest impact.


If you're feeling ready to tackle the next challenge in public speaking, start practicing these meta skills. Think about your audience, your commentary and your language in your next presentation.

What would you add?

Monday 26 October 2015

How I write my blog

Over the past week, I've had a few conversations about writing a blog. As a result, I thought I'd record my approach to blogging, to try and encourage a wider group of people to write. Here's how I get from idea to tweet.

No backlog

I prefer to write about things that I care about right now rather than work from a backlog of ideas. I tried keeping a backlog once, but having a bucket of potential topics to choose from didn't really work for me. I found the choice to be paralysing rather than enabling.

Write to a person

I like to think of a real person who would potentially get value from my thoughts, then write the post for them. This helps me to pitch the tone of my writing - not too condescending and not too complex. I also find writing to an individual easier than writing to a generic group e.g. Bob vs. "testers who want to start blogging".

Refining loop

I write my posts by paragraph. I'll type out my ideas, almost in a stream of consciousness, then go back through the words and refine them into something that reads nicely. Realising that my thoughts don't have to come out perfectly the first time has really helped me to write more freely.

Proofread in context

When I finish a post, I read through it in my blog editor. Then I also read through it in the preview version to see it in the layout that will appear on my blog. Even when I think I'm done, looking at the words in a different format will often prompt me to change phrasing and pick up spelling mistakes. It's a fresh perspective for my brain.

Practice

When I look back at my earlier blog posts, I find them pretty embarrassing. I imagine that when I look back on this post, and others of this era, I will find them embarrassing too. I feel that my writing is improving the more that I blog, so I try to keep practicing to maintain this evolution.

Set goals

I have a self-imposed target of three blog posts per month. I don't always hit that target, but I find that it motivates me to write. Without it, I am prone to getting stuck in writing ruts where my head won't settle on a topic and doubt creeps in about whether anyone will care about what I have to say.

Pleasing everyone

What I write doesn't have to be something that everyone likes, or universally useful, or shared widely across the world. I figure, at a minimum, it's valuable to me to write my blog. I get better at writing, I work out how to articulate my ideas, I develop a voice. I find it easier to consider pleasing others as a bonus.

Sharing

I always tweet when I write a blog. It's difficult for people to get any value from what I'm writing if they don't know it's there. I also like getting feedback from the community. Because sharing is circular, I also try to promote writing from others by tweeting content that I enjoy and having a list of blogs I follow on my site.


If blogging is something that you'd like to start, or you'd like to do more of, it's likely that the only thing stopping you is yourself. I hope these tips encourage you to write. I look forward to reading what you have to say.

Thursday 22 October 2015

Feedback for Conference Speakers

I spoke at a number of conferences over the past month or so. After each talk I received a variety of feedback, from a variety of channels. The genesis for this post is two pieces of feedback I received for the same talk at the Canterbury Software Summit.

From the conference survey responses:
"Good coverage of the topic; however: agile teams/tribes should be self-sustained. Katrina's presentation though was explaining management activities. What I missed was how the agile approach really works for BNZ, how they constantly improve, what issue and challenges they faced and face etc. Missing enthusiasm. Average slide quality."

From a direct message on Twitter:
"One of the guys at work I talked to today, appreciated your talk at Canterbury software summit. We're thinking of now trying some of the ideas you talked about. Katrina please keep using your gift of inspiring the testing community, as it makes our jobs more enjoyable and fruitful."

As you can see, one person was utterly underwhelmed while the other felt inspired and motivated to make changes in his organisation.

As a new speaker I had no frame of reference for feedback, or any notion of what to expect back from the audience when I delivered a talk. Had I received that first piece of feedback for my first presentation I would have been entirely disheartened.

Now that I've presented a few times, I'm starting to see patterns in when I receive feedback, what type of feedback it is, and how I can use it. To illustrate, here's the feedback I received after my 'Diversify' keynote talk at the recent WeTest Weekend Workshops.

Verbal

I find public speaking a taxing activity. At the end of the talk, my adrenaline is racing - I know it's all over and I am looking to get away from people for a few minutes to calm down. However, there is usually at least one person who comes to the front of the room to speak to me. 

I like that people do this. The things they wish to say are usually positive and it's good to get immediate validation that it all went okay. Unfortunately I usually don't remember the nice things that they've said, because my brain isn't working properly yet!

Occasionally I get immediate feedback of a different kind. At WeTest Weekend Workshops someone approached to suggest how I could improve my use of the handheld microphone. Strangely I always remember this sort of feedback, the things that aren't entirely positive, despite being in the same agitated mental state.

I consider the number of people who come to the front of the room after my talk to be a loose indicator of the emotional response of my audience. The more people, the more I feel like I spoke about something that really resonated with them.

Social Media

After my talk I like to find a quiet spot, take a few deep breaths, and then check the reaction from Twitter. I see three broad categories of feedback in my Twitter timeline.

Announcements

The first tweets are the people who simply say that they are attending my talk. Announcement tweets contain no judgement and no content. Often they contain a photo from near the start of the presentation.



As I've started to gain a wider following on Twitter, I think the number of people who announce that they're at my talk has increased. As a new speaker, very few people got excited about merely attending my sessions! I consider announcement tweets a loose indicator of my reputation in the community behind the conference.

Ideas

The next tweets will be the ideas from my talk. These might be pieces of content that resonated with people, summaries of my main points, or tweets that let people who are not in the audience know that they've been mentioned.



I consider idea tweets a loose indicator of how engaged people are in the content. In some respects I prefer that there are fewer of this type of tweet, as I believe that most people find it difficult to actively listen while also composing the perfect 140 characters on Twitter.

Reaction

Finally there are the tweets that come at the end of the talk. Reaction tweets are all about judgement, though on Twitter you're usually just going to get the happy vibes from people who loved it and felt inspired.


Reaction tweets are about the buzz. I consider these a replacement for coming to the front of the room after the presentation, and so treat them as the same loose indicator of the emotional response of my audience. The more reaction tweets I get, the better. Even if they're not all positive, at least I touched a nerve!

Event

If the event information has been published online, via Meet Up, Facebook, or some other alternative, there is usually an opportunity to post feedback.

I find that the feedback I receive via social media comes from people who feel a connection to me as an individual, or who are confident about expressing their opinions to a wide audience. By contrast the feedback I receive via the event page comes from people who I do not know well, those who need longer to process their reaction to the presentation, or those who are not on Twitter!

There is also a shift in language. People have had time to reflect, so their reaction is less emotive and more analytical. On Twitter people "love" the presentation while on Meet Up it's "great".


Providing feedback via Meet Up requires effort beyond the time frame of the event itself. I consider this feedback a loose indicator of how I've improved my standing in the community behind the conference.

Survey

Many conferences send out a post-event survey to all the attendees to help them improve their format, content and structure for the following year.

Survey feedback is anonymous and, of all the forms of feedback, gives the widest spectrum. It seems that once there is no association between your feedback and your name, people become remarkably honest.

Here's a selection of survey comments about the speakers at the WeTest Weekend Workshops event to illustrate this:
  • Katrina's presentation was awesome. Very motivating 
  • Keynote was useful and aligned with the theme. 
  • Might have been even better if we had a more diverse speakers. 
  • Would be lovely to see more "activity" type events over the vanilla "here's a talk" type events 
  • The talks I attended were average from my perspective.
  • Did not find it as useful as I thought it would be.

Suddenly there's a much richer picture that includes those who had a less enjoyable experience. At conferences without a survey form, the only negative feedback you receive may be the absence of positive feedback.

I consider survey feedback a loose indicator of what I can improve in my presentations. I don't listen to everything, and where there are clearly other factors at play I take the criticism with a grain of salt, but overall I find it a valuable source of information to help me refine my content and delivery.

Blogs

Finally, there are people who want to share the talk with others. I take blogs, and other post-event activities of this nature, as a form of feedback. I treat these as a loose indicator of lasting impact.

After WeTest, the following resources have appeared that referenced my keynote:

The Big Picture

The volume and type of feedback I get varies greatly between presentations. It's taken time to establish my own interpretations of an influx of information that might otherwise feel overwhelming. I use the types of feedback I've described to determine:
  • The emotional response from my audience
  • How engaged people are in my content
  • My existing reputation in the community behind the conference
  • Whether I've improved my reputation in the community behind the conference
  • What I can improve in my presentations
  • Whether I've had a lasting impact

Sunday 18 October 2015

Changing the conversation about change in testing

Over the past couple of weeks I've been challenged to rethink how I advocate for change in testing. Here are four ideas, from three different conferences, that I hope will improve my powers of persuasion.

Commercial viability

At the Canterbury Software Summit in Christchurch, Shaun Maloney talked about how technical excellence does not guarantee commercial success. You may have a beautifully implemented piece of software, but if you can't sell it then it may all be for naught.

Shaun shared his method of determining commercial viability of an idea using the following questions. Is it busted? Can we fix it? Should we fix it?

If the answer to all three questions is 'yes' then he believes that there is merit in pursuing the idea. If, at any stage, the answer is 'no' then the idea is abandoned.

Shaun visualised this in a very kiwi way using stockyard gates:



This made me wonder, when I advocate for change, how often do I fail at this first gate? If the people I'm talking to just don't think that the testing we do now is busted, perhaps they mentally kill the conversation before it even begins?

Moments of doubt, desire and dissatisfaction

Also at the Canterbury Software Summit, Andy Lark spoke about reimagining business by looking to address moments of doubt, desire and dissatisfaction as experienced by customers. He gave an example of how Uber succeeded in the taxi market by solving a moment of doubt, showing users the location of their taxi. [ref]

I think testing is ripe for reinvention. If talking about how things are broken isn't working, perhaps we'd have better success in focusing on the areas where people experience moments of doubt, desire or dissatisfaction?

Strategic priorities

At the Agile Testing Alliance Global Gathering in Bangalore, Renu Rajani shared some information from the World Quality Report 2015-2016. This report is compiled by Capgemini, HP and Sogeti. This year they interviewed 1,560 CIOs and IT and testing leaders from 32 countries.

The results of one particular question struck me. When asked "What is the highest QA and testing strategic priority?" on a scale of 1 to 7, with 7 being the most important, they responded:

Source: World Quality Report 2015-2016

For this group, detecting software defects before go-live was the fifth highest priority.

When I reflect on how I frame change to management, I often talk about how:

  • we will find more defects,
  • the defects we discover will be more diverse and of a higher severity,
  • testing will be faster, and 
  • we will reduce the cost of testing through a more pragmatic approach.

I have never spoken about corporate image, how we increase quality awareness among all disciplines, or how we improve end-user satisfaction.

My arguments have always been based on assumptions about what I believe to be important to my audience. This data from the World Quality Report has made me question whether my assumptions have been incorrect.

Sound bites

At WeTest Weekend Workshops in Auckland, John Lockhart gave an experience report titled "Incorporating traditional and CDT in an agile environment". During his talk I was fascinated by the way he summarised the different aspects of his test approach. To paraphrase based on my notes:

"ISTQB gives you the background and history of testing, along with testing techniques like boundary analysis, state transition diagrams, etc. CDT gives you the critical thinking side. Agile gives you the wider development methodology."

Is "the critical thinking side" close to the single sentence you would use to describe context-driven testing? What would you say instead? John's casual remark made me realise that I may be diminishing the value of my ideas when I summarise them to others.

Putting the pieces together

I see an opportunity to create a more compelling narrative about change in testing.

I'm planning to stop arguing that testing is broken. Instead I'm going to start thinking about the moments of desire, doubt and dissatisfaction that exist in our test approach.

I'm planning to stop talking about bugs, time and money. Instead I'm going to start framing my reasoning in terms of corporate image, increasing quality awareness among all disciplines, and improving end-user satisfaction.

I'm planning to stop using impromptu sentences to summarise. Instead I'm going to start thinking about a sound bite that doesn't diminish or oversell.

Will you do the same?

Tuesday 29 September 2015

Performance Testing in Continuous Delivery

I went to Test Professionals Network in Wellington tonight and heard Diana Omuoyo speaking about Performance Testing in Continuous Delivery. Diana shared a really interesting story of her time working at Expedia in the US where she was a member of a small performance testing team. She focused on the changes made in performance testing to help support a shift from monthly product releases towards continuous delivery.

The boundaries of performance testing

Before beginning the journey towards continuous delivery, the Expedia performance testing team had two conversations with the wider organisation to decide:

  1. When is the application ready for performance testing?
  2. What performance testing results mean the application is ready for production?

These questions generated healthy discussion as there was a lot of diversity in opinion. The agreed answers established the boundaries of focus for Diana's team to implement change in their approach to performance testing.

Looking at their existing approach, the performance testers felt that most of their time was being lost in the slow feedback loop between development and performance testing. These two activities are so far removed from one another in a traditional development lifecycle, with several phases of testing happening between them, that raising and remedying performance problems takes a long time. They felt that where performance testing delayed the release to production, it was often due to the time being spent in these interactions.

The performance team decided to introduce performance testing of individual components in a continuous integration pipeline to help reduce the number of problems being discovered during integrated performance testing later in the lifecycle. Diana observed that as the team established this continuous integration pipeline they started to work "more with developers than testers".

Creating a performance pipeline

The "functional folk" had already built a pipeline that compiled the web application, ran unit tests, deployed to a functional test environment, ran basic regression tests and created a release build. Diana's team decided to create a separate performance pipeline that logically followed on from the functional pipeline.

The performance pipeline took the release build from the functional pipeline and deployed it to an environment, then discovered the version of the same application that was currently in production and deployed it to a different environment with the same hardware specification. A two hour performance test was then run in parallel against both versions of the application and a comparative report generated.

The team was extremely fortunate to have access to a large pool of production-like servers on which to run their performance tests, so being able to run tests in parallel and at scale wasn't an issue. When asked whether the same approach would work on smaller test environments, Diana felt that it would, so long as the production traffic profile was appropriately scaled to the test environment hardware specification.

The two-hourly builds generated a lot of information - in fact, too much to be useful. The performance team decided to save the data from the two-hour performance tests and then run automated analysis that detected trending degradation across the combined results of three builds.

The thresholds at which to report performance decay were set in consultation with the business and were high enough to alleviate the risk of false positives in the report caused by developers simply adding functionality to the application. Diana noted that it was ultimately a business decision whether to release, and that where a new piece of functionality caused a performance degradation that resulted in failing performance tests the business could still opt to release it.
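
Diana didn't share implementation details, but the idea of trend analysis against a business-agreed threshold could be sketched roughly as follows. The data shape, the response-time metric and the 10% threshold are my own assumptions for illustration, not Expedia's implementation.

# Rough sketch of trend-based degradation analysis across the last three builds.
# The metric, data shape and threshold value are illustrative assumptions only.
from statistics import mean

DEGRADATION_THRESHOLD = 0.10  # agreed with the business; high enough to avoid false positives


def is_degraded(production_p95_ms, recent_builds_p95_ms):
    """Flag a trend: the average 95th-percentile response time across the last
    three builds exceeds the production baseline by more than the threshold."""
    trend = mean(recent_builds_p95_ms[-3:])
    return trend > production_p95_ms * (1 + DEGRADATION_THRESHOLD)


if __name__ == "__main__":
    baseline = 420.0                    # current production build, in milliseconds
    last_three = [430.0, 465.0, 510.0]  # results saved from the two-hourly builds
    print("Degradation detected" if is_degraded(baseline, last_three) else "Within threshold")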

Even with trend-based analysis, there was some balancing required to get the email notifications from these two-hourly builds right. They were initially sent only to people within the development team who subscribed. As the performance analysis was improved and refined, the notifications became increasingly relevant and started to be delivered to more people.

Performance testing the pieces

The performance pipeline was designed to test the deployed web application in isolation rather than in an integrated environment. It made use of a number of stubs so that any degradation detected would likely relate to changes in the application rather than instability in third party systems.

In addition to testing the web application, the performance team created continuous integration pipelines for the database and services layer underneath the UI. Many of these lower level performance tests used self-service Jenkins jobs that the developers could use to spin up a cloud instance suited to the size of the component, deploy the component in isolation, run a performance test, tear down the environment and provide a report.

Diana also mentioned A/B performance testing where the team would deploy a build with a feature flag switched on, and the same build with the same feature flag switched off, then run a parallel performance test against each to determine whether the flag caused any significant performance problems.

Integrated performance

The performance testing team retained their traditional integration performance tests as part of the release to production, but with the presence of earlier performance testing in the development process this became more of a formality. Fewer problems were discovered late in the release process.

Diana estimated that these changes were about 2 years of work for 2 - 3 people. She commented that it was relatively easy to set up the tests and pipelines, but difficult to automate analysis of the results.

Ultimately Diana was part of taking the Expedia release process from monthly to twice per week. I imagine that their journey towards continuous delivery continues!

Sunday 27 September 2015

Security Testing Pathway

This pathway is a tool to help guide your self development in security testing. It includes a variety of steps that you may approach linearly or by hopping about to those that interest you most.

Each step includes:
  • links to a few resources as a starting point, but you are likely to need to do your own additional research as you explore each topic.
  • a suggested exercise or two, which focus on reflection, practical application and discussion, as a tool to connect the resources with your reality.

Take your time. Dig deep into areas that interest you. Apply what you learn as you go.

This pathway was developed in conjunction with Daniel Billing & Sarah Burgess


STEP - Introduction to OWASP

The Open Web Application Security Project (OWASP) is a worldwide not-for-profit charitable organisation focused on improving the security of software. Understand the breadth of information and resources available on the OWASP site:
EXERCISE
[1 hour] The volume of information available on the OWASP site can be overwhelming. The resources on the site are a product of thousands of active wiki users; however, the aspects of security that your organisation prioritises will depend on the views of individuals. To bring your focus back to what is relevant for your context, talk to someone in your team about security. You may like to find out:
  • What security testing does your organisation currently prioritise? Why? 
  • Have you been attacked in the past? In what way? 
  • How are developers preventing vulnerabilities in your applications?


STEP - Threat Modelling

Understand how threat modelling can help clarify risks to the organisation:
EXERCISE
[1 hour] Use the STRIDE model to think about threats to your application. Try to get specific about the ways in which your organisation is vulnerable to each threat. Share your thinking with a security specialist or technical lead and see if you can add anything extra to your threat model with their help.
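
If you're unfamiliar with the acronym, STRIDE covers six threat categories. The snippet below lists them alongside invented example threats for a web application, purely as prompts for your own model.

# The six STRIDE categories with hypothetical example threats for a web application.
# The examples are invented prompts for the exercise, not findings from any real system.
STRIDE = {
    "Spoofing": "An attacker logs in with stolen or guessed credentials.",
    "Tampering": "Request parameters are modified to change a payment amount.",
    "Repudiation": "A user denies performing a transfer and no audit log exists to prove it.",
    "Information disclosure": "Error pages leak stack traces or another customer's details.",
    "Denial of service": "Login endpoints are flooded so genuine customers cannot sign in.",
    "Elevation of privilege": "A standard user reaches an administration-only function.",
}

for category, example in STRIDE.items():
    print(f"{category}: {example}")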


STEP - Approach to security testing

Learn about how others plan security testing and integrate it into their development process:
EXERCISE
[1 hour] Talk to your team about who is currently responsible for security testing and how it is integrated into your existing development process. Talk to a technical lead or coach about the opportunities for improving what you do now.


WARNING: In many countries it is illegal to perform the following attacks. Please make sure that you practise your security testing skills only in the demonstration environments specified in the exercises.


STEP - SQL injection attack

Injection flaws occur when untrusted data is sent to an interpreter as part of a command or query. The attacker’s hostile data can trick the interpreter into executing unintended commands or accessing data without proper authorization. Understand more about SQL injection:
EXERCISES
[1 hour] Work through the exercises in how to exploit a SQL injection attack. Using this intentionally vulnerable demonstration site, you should learn how to gain unauthorised access to an application, find user account and password details, and discover details of the underlying database.

[1 hour] The Altoro Mutual website is published by Watchfire, Inc. for the sole purpose of demonstrating the effectiveness of Watchfire products in detecting web application vulnerabilities and website defects. Based on what you’ve learned, how many ways can you gain access to the application using SQL injection? If you’re unsure where to begin, you may wish to try the Online Banking Login form as a starting point for your attack.
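
To see why the attack works at the code level, here is a minimal sketch using Python's built-in sqlite3 module and an invented users table. It shows a query built by string concatenation falling to a classic payload, and the parameterised alternative treating the same input as plain data.

# Minimal illustration of a SQL injection flaw and its fix, using the standard
# library's sqlite3 module and an invented users table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

attacker_input = "' OR '1'='1"

# Vulnerable: untrusted data is concatenated straight into the query, so the
# attacker's quote characters change the meaning of the SQL.
vulnerable_sql = f"SELECT * FROM users WHERE username = '{attacker_input}'"
print("Injected query returns:", conn.execute(vulnerable_sql).fetchall())  # every row

# Safer: the same input passed as a bound parameter is treated as data, not SQL.
safe_rows = conn.execute(
    "SELECT * FROM users WHERE username = ?", (attacker_input,)
).fetchall()
print("Parameterised query returns:", safe_rows)  # no rows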


STEP - Cross site scripting attack

Cross site scripting, or XSS, flaws occur whenever an application takes untrusted data and sends it to a web browser without proper validation or escaping. XSS allows attackers to execute scripts in the victim’s browser which can hijack user sessions, deface web sites, or redirect the user to malicious sites. Understand more about XSS:
EXERCISE
[2 hours] Gruyere is an application with a number of security vulnerabilities for use as a learning tool. Using a Firefox browser, start up your own instance of Gruyere and try to complete the XSS Challenges. Each challenge includes hints to help you expose the vulnerabilities independently, then explains one way to exploit and fix the problems in the application.
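
As a companion to the Gruyere challenges, this small sketch shows the root cause of XSS: untrusted input placed into HTML without escaping. The payload and page snippet are invented examples.

# Minimal illustration of a reflected XSS flaw: untrusted input is placed into
# HTML without escaping. The payload and page template are invented examples.
import html

untrusted_comment = '<script>alert("session cookie: " + document.cookie)</script>'

# Vulnerable rendering: the script tag survives and would execute in the victim's browser.
unsafe_page = f"<p>Latest comment: {untrusted_comment}</p>"
print(unsafe_page)

# Escaped rendering: the same input is displayed as text instead of executed.
safe_page = f"<p>Latest comment: {html.escape(untrusted_comment)}</p>"
print(safe_page)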


STEP - Exploiting authentication and session management

Application functions related to authentication and session management are often not implemented correctly, allowing attackers to compromise passwords, keys, or session tokens, or to exploit other implementation flaws to assume other users’ identities.
EXERCISE
[3 hours] For this exercise you’ll need a proxy application that allows you to capture, edit and resubmit network traffic. You may wish to download Fiddler, ZAP, Burp Suite, or use another tool of your choice.

Using the deliberately insecure Supercar Showdown website, complete the following challenges:
  1. Spoof another user’s session and perform actions against their user account
  2. Elevate your access privileges from a standard user to an administrative user
  3. Attempt to reset the password of another user
Some assistance for these challenges may be found in the Hack Yourself First course materials in the section on Account Management.
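
A proxy tool is the right way to tackle these challenges, but the core idea behind spoofing a session can also be sketched in a few lines of Python. The URL, cookie name and token value below are placeholders, the requests library is assumed to be installed, and this should only ever be pointed at a deliberately vulnerable practice site.

# Sketch of the idea behind session spoofing: if a session token is captured or
# predictable, anyone who presents it is treated as that user. URL, cookie name
# and token value are placeholders; requests must be installed (pip install requests).
import requests

CAPTURED_TOKEN = "9f2b7c..."  # a session token observed in proxied traffic (placeholder value)

session = requests.Session()
session.cookies.set("SessionId", CAPTURED_TOKEN)  # cookie name is a placeholder too

# The server has no way to tell this request apart from one made by the legitimate user.
response = session.get("https://vulnerable.example.com/account")  # placeholder URL
print(response.status_code)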


STEP - Practice makes perfect

The previous steps give information and exercises for the Top 3 attacks in the OWASP Top 10. All of the vulnerable learning environments provided so far are susceptible to these attacks in some way.

EXERCISE
[8 hours] Take the opportunity to practise the three types of exploit that have been introduced in this pathway: SQL injection attacks, XSS attacks, and authentication exploitation. Re-visit each learning environment and see how many additional vulnerabilities you can discover:


STEP - Serious security

Security testing is a rich, specialised discipline. Beyond this taster of what’s possible, there are a number of other aspects of application security to consider:
EXERCISE
[8 hours] If you’d like to get deeper into security testing, the next logical step is to run your own vulnerable learning environment. OWASP WebGoat aims to provide a de facto interactive teaching environment for web application security. The application must be downloaded and installed on a local web server; the user guide includes instructions. When running the application, your machine will be very susceptible to attack and should be disconnected from the internet. WebGoat is intended for educational purposes and includes a number of lesson plans covering the different aspects of application security.

Tuesday 15 September 2015

Continuous Delivery Testing Pathway

This pathway is a tool to help guide your self development in continuous delivery testing. It includes a variety of steps that you may approach linearly or by hopping about to those that interest you most.

Each step includes:
  • links to a few resources as a starting point, but you are likely to need to do your own additional research as you explore each topic.
  • a suggested exercise or two, which focus on reflection, practical application and discussion, as a tool to connect the resources with your reality.

Take your time. Dig deep into areas that interest you. Apply what you learn as you go.


STEP - Removing release testing

Why does this pathway exist? Understand the key reasons to significantly shorten a release process, the arguments against release testing and why organisations aim to avoid batched releases in agile environments:
EXERCISE
[2 hours] Research your existing release process and talk to people within your organisation to find out whether there are any current initiatives to improve it.


STEP - Introduction to continuous delivery

What is the end goal? Discover the basics of continuous delivery and the theory of how it can be implemented in organisations.
EXERCISE
[1 hour] Based on what you've read, try to explain the theory of continuous delivery in your own words to someone in your team. Describe what appeals to you about continuous delivery, what you disagree with, and things that you think will be difficult to implement in your organisation. Afterwards, if you have any remaining questions, raise these with a technical lead or coach for further discussion.


STEP - Experiences in continuous delivery

How are other organisations doing continuous delivery? There is a lot of variance in implementation and differing opinions about how to approach the theory. Understand the realities of the people, processes and tools of teams doing continuous delivery:
EXERCISE
[2 hours] Compare the experiences shared in the links above and the theory of continuous delivery. Identify common themes, and areas where ideas or implementation details differ. Discuss your analysis with a technical lead or coach.


STEP - Starting with continuous integration

What is the first step? Understand the concept of continuous integration:
EXERCISE
[3 hours] At the start of this talk transcript, Jez Humble points out that most people aren't doing continuous integration. How does the approach to continuous integration in your team differ to the theory? Talk to a developer to confirm your understanding of your branching strategy, the way you use source control management tools, and how you manage merging to master. If you use a continuous integration tool, create a list of the jobs that are used by your team during development, and be sure that you understand what each one does. Reflect on how quickly your team respond to build failures in these jobs, and who takes ownership for resolving these. Discuss this exercise with a technical lead or coach to collaboratively identify opportunities for improvement, then raise these ideas at your next team retrospective.


STEP - Theory of test automation

Continuous delivery puts a lot of focus on test automation. In order to support development of an effective pipeline it's important to understand common strategies for automation, and the distinction between checking and testing:
EXERCISE
[1 hour] Read through the automation strategy for your product. How well does your existing strategy for automation support your delivery pipeline? What opportunities exist to improve this strategy? Discuss your thoughts with a technical lead or coach.


STEP - A delivery pipeline

Understand how to construct a delivery pipeline and the role of automation:
EXERCISE
[3 hours] Create a visual representation of the current delivery pipeline for your product. Use a timeline format that shows the build jobs in your continuous integration tool at every stage from development through to production deploy, any test jobs that execute automated suites, and points where the tester is hands-on, exploring the product. Compare your pipeline to the simplified images by Yassal Sundman for continuous delivery and continuous deployment, then reflect on the following questions:
  1. How would your approach to testing change, or not, if you were able to deploy to production 10 times a day? How about 100 times a day?
  2. Does the coverage provided by your automation give you a degree of comfort or confidence? If not, what needs to change?
  3. Does your automation execute fast enough? How fast do you think it should be? How can you achieve this?
  4. Where in the pipeline would you want to retain hands-on testing? How would you justify this?
Discuss your ideas with a technical lead or coach. Work together to identify actions from your thinking and determine how to proceed in implementing change.


STEP - Non-functional testing in continuous delivery

Learn more about integrating security, performance, and other non-functional testing in a continuous delivery pipeline:
EXERCISE
[2 hours] Does your organisation have a non-functional testing "sandwich"? Having read more about organisations who integrate these activities earlier in the process, what opportunities do you see to improve the way that you work? What would the first steps be? Talk to a technical lead or coach about what you'd like to see change.


STEP - Cross-browser testing

For continuous delivery of a web application, it's important to include cross-browser testing in the delivery pipeline. Discover strategies for cross-browser testing and the tools available to support it:
EXERCISE
[8 hours] Learn more about the common cross-browser tools that are available, understand the advantages and disadvantages of each option, then select a tool to trial. Create a prototype to execute existing browser-based automation for your product across multiple browsers. If successful on your local environment, attempt to create a prototype job in your continuous integration tool to verify that your chosen solution works as part of your pipeline. Discuss what you learned about the tool and the results of your experiment with a technical lead or coach.
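
As one possible shape for that prototype, the sketch below parameterises a single check across two browsers using pytest and Selenium. The URL and expected title are placeholders, and it assumes Chrome and Firefox drivers are available locally.

# Sketch of a cross-browser prototype: the same check parameterised over browsers.
# Assumes pytest, Selenium and local Chrome/Firefox drivers; the URL and expected
# title are placeholders for your own product.
import pytest
from selenium import webdriver


@pytest.fixture(params=["chrome", "firefox"])
def driver(request):
    browser = webdriver.Chrome() if request.param == "chrome" else webdriver.Firefox()
    yield browser
    browser.quit()


def test_home_page_title(driver):
    driver.get("https://test.example.com")  # placeholder environment URL
    assert "Example" in driver.title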


STEP - Test data & databases

Discover the additional considerations around test data in continuous delivery:
EXERCISES
[1 hour] Data is a constant headache for testers. Consider the limitations of the test data in use by your automation. How could you improve the data within your delivery pipelines? How could you improve the way that you locate data for testing? Talk through your ideas with a technical lead or coach.
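
One idea worth raising in that conversation is building data on demand instead of relying on shared, hand-maintained records. The sketch below shows a hypothetical test data builder; the fields and defaults are invented for illustration.

# Sketch of a test data builder: each check constructs the data it needs rather
# than depending on shared records. Fields and defaults are invented examples.
from dataclasses import dataclass, field
import uuid


@dataclass
class CustomerBuilder:
    name: str = "Test Customer"
    email: str = field(default_factory=lambda: f"{uuid.uuid4().hex[:8]}@example.com")
    balance: float = 0.0

    def with_balance(self, amount):
        self.balance = amount
        return self

    def build(self):
        return {"name": self.name, "email": self.email, "balance": self.balance}


# Usage: a unique customer with exactly the state the check needs.
overdrawn_customer = CustomerBuilder().with_balance(-50.0).build()
print(overdrawn_customer)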


STEP - Configuration management & environments

An effective delivery pipeline is supported by multiple test environments. Learn more about configuration management, environments and infrastructure services in continuous delivery:
EXERCISES
[1 hour] Talk to your operations or support team about how they provide test environments for continuous integration, the infrastructure required to support a delivery pipeline, and what their plans are for future changes in this space.


STEP - Testing in production

Understand A/B testing and feature toggles:
EXERCISE
[1 hour] Talk to people in your organisation to find out how you currently use feature toggles and how you make decisions about what to keep based on user analytics. Could your approach be more responsive through targeted use of a monitoring tool like Splunk? Share your thoughts with a technical lead or coach.
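
If feature toggles are new to you, at their simplest they are just a conditional around a flag that can be changed without redeploying. The flag name and behaviours in this sketch are invented.

# A feature toggle in its simplest form: behaviour switches on a flag that can be
# changed without redeploying. The flag source, name and behaviours are invented.
FEATURE_FLAGS = {"new_payments_screen": False}  # in practice read from config or a toggle service


def payments_screen(flags):
    if flags.get("new_payments_screen"):
        return "render redesigned payments screen"  # variant B, shown to a subset of users
    return "render existing payments screen"        # variant A, the control


print(payments_screen(FEATURE_FLAGS))
FEATURE_FLAGS["new_payments_screen"] = True
print(payments_screen(FEATURE_FLAGS))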