Sunday, 12 March 2017

How do you hire a junior tester?

I recently participated in a workshop that was co-hosted by the New Zealand Qualifications Authority (NZQA) and New Zealand IT Professionals (ITP). They're exploring the idea of creating a new qualification for software testing within our tertiary education system.

One of the topics of conversation that I found interesting was the question of how employers currently recruit junior testers. By junior, I mean a person with no experience in testing, regardless of age or other work history. As there are no dedicated testing qualifications* at present, where do new testers come from?

I sat in a discussion group with Aaron Hodder, Adam Howard, and Anna Marshall. We came up with a list of ways that we found the junior testers in our organisations that included, in no particular order:
  • the business - subject matter experts who show aptitude for testing
  • overseas - new arrivals to New Zealand may start in a junior role
  • internships - through programmes like Summer of Tech
  • consulting firms - local consultancies who run their own graduate training and placement
  • graduates - those who are fresh from study

In my experience, finding candidates for a junior testing role is not a problem. When I've advertised a role that is suited to the wide audience defined above, I've had a lot of applicants. Testing is seen as a pathway into the IT industry, so there is a lot of interest.

I think that the workshop question is more interesting when it is considered in a slightly different way. As there are no dedicated testing qualifications at present, by what criteria do you recruit junior testers? In other words, how do you decide which person to hire?

I assess the testers who work in our agile delivery teams by six criteria that can be broadly summarised as:
  1. Testing knowledge and experience
  2. Automation knowledge and experience
  3. Agile knowledge and experience
  4. Domain knowledge
  5. People skills
  6. Growth mindset

I'm not looking to hire people who hit all of those criteria. I am looking to create testing teams with complementary individual strengths that mean we are collectively strong in all of those criteria. This applies across all testers, not just juniors.

When I hire a junior, I'm generally looking at the latter criteria in the list. While a junior may have knowledge of testing, automation and agile, it is likely to be entirely theoretical. I probably have strong testing, automation, and agile skills in the existing testing team anyway. The strengths that a junior might bring are their domain knowledge, people skills, and growth mindset.

I've talked previously about why you should hire junior testers. The benefits that I see in making junior testers part of the team, which largely focus on their attitude and behaviour, can be difficult to quantify. Similarly, the attributes that I am looking for in order to realise those benefits can be difficult to quantify, which means that they generally aren't assessed in an IT qualification.

I look for juniors who can communicate and work well in a team. People who are eager to learn, pick up new ideas quickly, and constantly search for a better way of doing things. Perhaps people with these traits are more likely to pursue higher education, but not necessarily. A person can pass a qualification without possessing these particular skills.

So, would it be useful to create a software testing qualification? Perhaps. If the qualification had a strong syllabus that was delivered in a practical manner, then hiring someone with this qualification might save some time when explaining basic concepts.

Would the presence of a software testing qualification change how I hire junior testers? Perhaps. The presence of such a qualification might increase the chance that I interview a candidate as I would spot it when screening CVs, but I'm not sure that it would have a significant bearing on the final criteria by which I hire.

Does New Zealand need a new software testing qualification? Perhaps. When I received the invitation to the workshop I thought it sounded like a great idea. As a trainer I was excited about the challenge of creating a syllabus. But the more that I've thought about it from the perspective of an employer, the less sure I become.

Now I'm curious. How do you hire a junior tester? By what criteria do you choose a candidate? Is there a software testing qualification in your country? Do you think there should be? I welcome your comments below.


*****

* ISTQB is a certification, not a qualification. A certification is an official document attesting to a status or level of achievement. A qualification is a pass of an examination or an official completion of a course, especially one conferring status as a recognized practitioner of a profession or activity.

Sunday, 5 March 2017

Test Leadership Breakfast

At our usual WeTest MeetUp events we get a wide variety of attendees from all levels of experience in testing, as well as those from other disciplines of software development with an interest in the topic of the day. This creates an environment for a varied conversation and questions, but often the discussion feels as though it dances across the surface of what it might be possible to talk about.

A diverse audience has advantages, but for a long time I've thought that there might be an opportunity for more focused conversation if we restricted the audience of an event. If we could create an audience with a specific similarity - whether that was years of experience, domain of employment, gender, or some other attribute - then people would have the opportunity to dig into topics of interest based on their shared experiences.

In 2013, our second year of WeTest, we tried to launch two initiatives to test our theory. WeTest Warriors was aimed at experienced testers and WeTest Green was aimed at those who were new to the industry. We scheduled an evening event for each group to socialise and share their stories over an informal pub dinner.

Both events were poorly attended and the initiatives were ultimately abandoned.

At the time I was a little perplexed by this. When I spoke with individuals during our normal events they would offer feedback and commentary that seemed to validate the idea of having separate discussions with a targeted audience. I thought that these people would jump at the chance to speak with their peers. They hadn't, but I wasn't sure why.

This year I decided to tackle the same idea in a slightly different way. Selfishly.

As I've moved through the testing profession into a leadership role, I have found that I have fewer and fewer peers within my organisation. Test leadership can become relatively lonely. I could see people out in the community who were tackling similar challenges to me, but I struggled to identify close colleagues who were doing the same.

I spoke to a few people in similar roles, then decided to launch a monthly Test Leadership Breakfast. I wanted to create a forum to bring together a specific collection of people that I was eager to have conversations with. From the event description:

This breakfast is exclusively for people who are currently working in Test Leadership roles to discuss their challenges and share their successes. It will be small, just 10 seats at the table, and will run in a Lean Coffee format.

The response was immediate. The ten seats were claimed for our first breakfast in January within a day of the event being advertised. The same happened in February and March too.

The feedback from events so far has been positive. Through the Lean Coffee format we've covered topics of conversation that have been useful to the people who have gathered. I've personally gained a lot of ideas and insight from the other attendees, and hope to have contributed to their thinking too.

I see this as proof of the value of gathering like minds. In comparison to the Lean Coffee sessions I have attended with a varied audience, or the discussion at a usual WeTest event, every topic of conversation has had some relevance to my own role. Our discussions have been focused and productive, delving into the details.

The contrast with our previous attempt to bring experienced testers together has been stark. Why did WeTest Warriors fail where the Test Leadership Breakfast has succeeded? On reflection, I see five key areas where our approach changed.

Naming is important. Test Leadership Breakfast clearly identifies the audience and nature of the event with simple, gender-neutral language. WeTest Warriors was alliterative, but in retrospect it was a rather intimidating and unclear name for what we were trying to do.

Evenings are busy. People have evening commitments to their family and friends, sports, hobbies, and other entertainment activities. Perhaps there are less intense demands on people in the early morning? A breakfast event might mean setting an earlier alarm and making alternative arrangements for family transport, but not forgoing another commitment.

Time is precious. For the evening events we had a relaxed approach to time. There was a start time listed, but no end time. Our assumption was that people would stay for however long they wished. By contrast, our breakfast events run for exactly an hour from 8am to 9am. People know the commitment that they are making.

Scarcity creates demand. WeTest Warriors was an open invite to a pub environment that could accommodate flexible attendance. By contrast, the breakfast event requires a booking at a local cafe that considers ten people to be a large group. It seems that the limited number of spaces drives a quicker response as people don't want to miss out.

Structure is reassuring. The Lean Coffee format offers a framework for productive conversation with strangers in a relatively informal setting. It helps focus the topics and manage individual contributions. I can see how this might be less intimidating than navigating totally informal conversations in a pub environment.

Beyond these details, the community has evolved over those four years. Many WeTest attendees received promotions, which changed the group of people who make up the target audience of these events. We have grown and now have over 700 members in Wellington, significantly more than in 2013. This means that a smaller proportion of members need to attend an event to make it successful.

This experience has also been a general reminder that a solution that fails doesn't mean that the premise is invalid. Approaching the same problem in two different ways can yield entirely different outcomes. Sometimes you have to step back, think about an alternative, then try again.

Sunday, 19 February 2017

Test Manager vs. Test Coach

I've been working as a Test Coach for almost two years and I will soon be seeking to hire people into the role. One common attitude that I hear is:

"Test Coach. Oh, that's just the agile name for a Test Manager."

It's true that if you work in an organisation that uses a traditional delivery model with separate design, analysis, development and testing teams, then you are more likely to see the title Test Manager for the person who leads testing. If you work in an agile model with cross-functional teams, then you are more likely to see the title Test Coach for the person who leads testing. But the two roles are not synonymous; it isn't simply a rebranding exercise.

Here are some key differences from my experience.

A Test Manager has a team of testers who report directly to them. They are responsible for hiring the team, determining and recording individual development goals, and approving time sheets and leave requests.

A Test Coach has no line management responsibilities. The testers will be part of a cross-functional team that is managed by a team leader or delivery manager. A coach has a limited ability to lead through authority; instead, their role is a service position. The Test Coach can influence hiring decisions and support testers to achieve their individual development goals. The language of coaching is different and it requires a different approach.

A Test Manager leads a group of testers who primarily identify as testers. If asked what they do, a tester would respond that they're a tester. The testing community is inherent, created by co-location and day-to-day interaction with testing peers.

A Test Coach leads a group of testers who primarily identify as contributors to an agile team. If asked what they do, the tester's response would be that they're part of a particular delivery team. They might mention that their main role in that team is testing, but to identify their place in the organisation it's often the delivery team that is mentioned first. The testing community must be fostered through planned social and knowledge sharing activities for testers who work in different areas, which are often activities led or championed by the coach.

A Test Manager has ownership of the testing that their team undertake. They are likely to be accountable for test estimates, test resourcing, the quality of test documentation, and may be involved in release governance or sign-off procedures. 

A Test Coach has none of these responsibilities. In an agile environment they are owned by the delivery team, who estimate together, review each other's work, and collectively determine their readiness for release. The decision making sits outside of the Test Coach role, though they might be called on for counsel in the event of team uncertainty, disagreement, or dysfunction.

A Test Manager drives their testers. They're active participants in the testers' day-to-day work, with hands-on involvement in tracking and reporting testing activities.

A Test Coach serves their testers. They usually won't get involved in specific testing activities unless they are asked to do so. Coaching interactions are driven by the person who needs support with testing, which is a wider group than only testers. The coach is proactive if they identify a particular need for improvement, but their intervention may take a softer approach than that of a manager.

The Test Manager will know the solution under test inside out. In order to properly meet their accountabilities they need to be involved in some degree of detail with the design and build of the software. Test Managers are also adept at identifying opportunities for improvement within the processes and practices of their team, or the products that they work with.

The Test Coach is unlikely to be an expert in the product under development or the wider system architecture. They will have some knowledge of these aspects, but as they are removed from the day-to-day detail their understanding is likely to be shallower than the testers who are constantly interacting with the system. A coach generally has a more holistic view for identifying opportunities for improvement that span multiple teams and disciplines.

A Test Manager is the escalation point for testers. Regardless of the problem that a tester is unable to resolve, the Test Manager is the person who will support them. The issues may span administrative tasks, interpersonal communication, professional development, delivery practices, project management, or testing.

A Test Coach is an escalation point for testing-related problems only. The types of issues that come to a coach are generally those that impact multiple delivery teams e.g. refactoring of test automation frameworks or stability of test environments, or those within a team that require specialist testing input to solve e.g. improving the unit test review process or fostering a culture where quality is owned by the team not the tester. These issues may not be raised by a tester, but can come from anyone within the delivery teams, or beyond them. A Test Coach might also be asked to contribute to the resolution of a non-testing problem, but these discussions are usually led by another role.

A Test Manager will identify training opportunities that are aligned with the development goals of their staff and arrange their attendance. They will keep abreast of workshops and conferences in the area that may be useful to their team.

A Test Coach will do the same, but they are more likely to identify opportunities to deliver custom training material too. The coach has the capacity to create content, the knowledge to make the material valuable, along with some understanding of teaching to engage participants effectively e.g. learning styles, lesson planning, and facilitation.

Both roles are leading testing in their organisation. The roles are different because the context in which they operate is different. In a nutshell, a Test Manager leads testers and a Test Coach leads testing. The focus shifts from people to discipline.



I hope that this explanation offers clarity, both for leaders who are looking to change their role and for testers who are working within a different structure.

These observations are based on my own experience. If your experience differs, I would welcome your feedback, questions or additions in the comments below.

Wednesday, 1 February 2017

Not right now

I spent an hour of my Wednesday morning participating in Tuesday Night Testing, an online Lean Coffee discussion run by Simon Tomes who is based in the UK. There were a number of great conversations, but one in particular has stayed with me through the day.

Can you think of a time that you've been really busy at work? A day when it seems like every task you complete generates two more? Where you're ruthlessly prioritising and snatching every available moment between meetings?

Now imagine one of the rare occasions on that day where you're at your desk with a solid hour to focus. From the corner of your eye, you see one of your colleagues approaching and you realise that it's because they need something from you. It may be a quick question, it may be a request for a longer piece of your time.

Your heart sinks.

You want to help them, but you also want to finish what you were doing. You don't want to be rude, but you also don't want to allow them to interrupt your train of thought.

"Do you have a minute?" they ask.

What do you do?

This scenario was posed during our discussion and there were a few different strategies shared. As a person who struggles to say 'no', this topic made me realise that I've evolved an alternative approach to an outright refusal. Here's how you might approach this situation in the same way that I do.

Don't say 'no'. Instead say 'not right now'. You're still being upfront and stopping the interaction in its tracks. But instead of refusing to help, you're simply deferring the conversation to a later time.

Follow up the 'not right now' by taking ownership of resuming the conversation yourself. Don't ask your colleague to come back later. That creates another opportunity for them to interrupt you at an inopportune moment and force you to context-switch. Instead, offer to go to their desk or office.

Provide specific information about when that later time will be, so that it's clear to your colleague that you're not just attempting to avoid the discussion entirely. You might get back to them within the next hour, after lunch, before the end of the day, or the end of the week. Whatever the period is, be specific about when you're available to chat.

"Do you have a minute?" they ask.

"Not right now. I'll come and find you after lunch, will you be at your desk about 1pm?" you reply.

I don't use this type of response all the time. Often I am interruptible and, at these times, I really enjoy having spontaneous conversations with colleagues. But on occasions where I am particularly busy, I find this method of deferring, along with ownership of rescheduling, is one that works well for me.

Thank you Amit Wertheimer, Andrew Morton, Cassandra Leung, Claire Reckless, Tracey Baxter and our organiser Simon Tomes for an interesting discussion. I'd recommend getting involved in future Tuesday Night Testing sessions.

Monday, 23 January 2017

Sometimes you are wrong

I quite often get asked the same type of question in the Q&A sessions at the end of my talks:

"How can I convince my manager that we should be doing test automation?"

"My developer used to be a tester, how can I convince them that my test approach is right?"

"How can I convince my team to allow more time for testing?"

What each of these boils down to is persuasion. How can I persuade someone else to adopt my viewpoint? How can I turn them from a hindrance to a helper? How can I make them see the light?

I think there's value in learning how to construct a persuasive argument. A tool like SPIN selling can help you structure what you're saying, to stop you from jumping straight into solutions or getting mired in explaining problems repetitively.

I think there's value in learning to be mindful of how you're presenting yourself. Being conscious of your tone, body language, eye contact etc. can help you improve how you convey your message. I occasionally make use of a self-assessment worksheet to reflect on conversations that haven't been successful and identify opportunities to improve my delivery:



I think there's value in learning some basic influence techniques: the rule of reciprocation, reject and retreat, social proof, commitment and consistency, etc. [ref] These strategies can help you to position your argument in the best possible environment for success.

But there's another side to persuasion that I think testers don't talk about enough.

Sometimes you are wrong.

If you continually focus on how you can convince another person of your viewpoint, then you might have become blind to that possibility. I'm sure you'll agree that obstinate people are frustrating to work with. Have you become one of them?

Before you ask for help on how to convince someone, consider how long you have been trying to persuade them without success. Is it starting to feel like a never-ending battle? Do you feel like you've presented your position in depth, but it has fallen on deaf ears?

Perhaps it's time to stop asking questions about being persuasive. Instead, start asking how you can understand their viewpoint and accept their opinions. Turn the conversation around.

What might this person know that you don't? Are they active in a different layer of the organisation hierarchy that might give them visibility of information that you don't have? Do they come from a different background that might give them skills or perspective that you lack? What questions can you ask to help discover these differences?

Who else is part of making this decision? Perhaps the person who you are trying to persuade isn't the sole decision maker? If they are deciding individually, do you know who else is influencing them? Can you talk with a wider group of people to understand a broader range of perspectives on the matter?

Why is their solution the best? Step into their shoes and look only for the positive outcomes in what they are proposing. How does it address the current problems? What constructive implications does it have, both for you personally and in a wider scope? If you've been stuck in a persuasive argument, you've probably formed a habit of seeking out the holes in what they propose. Switch your mindset to focus on the benefits.

What have you got to lose? We can get bogged down in arguments that, in the greater scheme of things, are not that important. Ask yourself, if you were to accept the other person's position what would you lose? It may not be as great a loss as you imagine, particularly if they're offering to compromise.

Sometimes you are wrong.

Don't forget to question whether now is that time.

Sunday, 8 January 2017

Writing a book

For a while now, I've thought about writing a book.

I've seen it as an intimidating endeavour. A blog post might take an hour or two, but a book is a long term commitment. I didn't think it would be impossible, but I wondered whether I had enough to say or whether it would be of value to others.

For a while now, I've really wished there was a book about testing in a DevOps environment.

My current organisation is starting this culture shift and the material available for testers is limited. I have formed opinions on how to approach testing in DevOps, based on my own experience and heavily influenced by following the experiences of others in the industry. I thought that I could coach my team toward my thinking, but I wished for a book. Something that looked official and had weight. Something that aligned to my views and expanded on them. Something written by an authoritative source.

The New Year rolled around and a friend of mine, Lisa, posted about New Year's Resolutions. Her resolutions were brave, humorous and shared openly on social media. "What about you?" she asked, which made me realise that I hadn't made any.

Lisa got me thinking, and as I thought I remembered how a former manager, Noel, used to challenge me to dream in bold goals. I thought that it was time for a bold New Year's Resolution: "the thing that you don't really want to voice because there's a risk you won't achieve it".

I decided to write a book. I decided that if I wanted a book on testing in DevOps, then I should write it. I decided to make myself accountable by telling people that I was going to do this:


And now it's happening.

I am still in the honeymoon phase of deciding to write a book, but so far it has been unexpectedly positive. People have been very supportive, both in my personal and professional life. I've worked out how to navigate LeanPub. My brain is cooperatively serving up plenty of material that I want to include, although occasionally at unhelpful hours of the day:


I'm less nervous about failure too. Will I have enough to say? As I start to write, I've realised that I'll stop once it is enough for me. Will it be valuable to others? If no one else, I hope it will be useful to my team, which is why I wanted a book.

I've also remembered that "if you want something to exist, you should create it". This is a mantra I've used in the past. I wished there was a discussion-based testing community, so we made WeTest. I wished there was a quality testing magazine for New Zealand and Australia, so we made Testing Trapeze. I wished there was a book about testing in DevOps, so I'm writing one.

It's scary to make a bold goal, but it's also rewarding. I've already learned a lot, in just a few days, and I'm excited to see what the rest of this adventure will be like.

Writing a book. That's my New Year's Resolution. What about you?

Friday, 16 December 2016

The Testing Pendulum: Finding balance in exploration

How detailed should exploratory testing be?

I spotted this question in the Cambridge Lean Coffee topics that James Thomas collated in his blog. It's a question that I'm often asked, and I consistently use the same analogy in my response: the testing pendulum.

The Testing Pendulum

Imagine a pendulum at rest. The space in which the pendulum can swing is our test approach. At the left apex we are going too deep in our testing and at the right apex we are staying too shallow. Initially, the testing pendulum sits directly in the middle of these two extremes:

The Testing Pendulum

When a tester starts in a new organisation or a new project, we apply the first movement to our testing pendulum. Think of it as lifting the pendulum up to the highest point and letting go. The swing starts from the shallow side of the spectrum to reflect the limited knowledge that we have as we enter a new situation. As our experience deepens, so too does our testing.

Starting the pendulum

"When given an initial push, [the pendulum] will swing back and forth at a constant amplitude. Real pendulums are subject to friction and air drag, so the amplitude of their swings declines." [Ref]

Similarly in testing, the pendulum will swing backwards and forwards. When our testing has become too intensive, we switch direction. When our testing has become ineffectual, we switch again.

Pendulum changing direction at the peak of its swing

The skill in knowing how detailed testing should be is to recognise the indicators that tell you when the pendulum is at the top of its swing. You need to be able to identify when you've gone too deep or stayed too shallow, so that you can adjust your approach appropriately.

Indicators

Indicators help us determine the position of our pendulum in the testing spectrum. I see three categories of indicator: bug count, team feedback and management feedback.

Bug count

If you are not finding many bugs in your testing, but your customers are reporting a lot of problems in production, then your testing may be too shallow. On the flip side, if you raise a lot of bugs but not many are being fixed, or your customers are reporting zero problems in production, then your testing may be too deep.

As a caveat, the zero problems in production measure may not apply in some industries. A web-based application may allow some user interface problems to be released where the economics of fixing these does not make sense, but a company that produces medical hardware may seek to release a product that they believe is perfect no matter how long it takes to test. Apply the lens of your own organisation.
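If it helps to make this indicator concrete, it could be sketched as a simple heuristic in code. This is an illustration only; the function name and every threshold below are invented for the example, and, as with all of these indicators, they are heuristics rather than rules:

    def depth_signal(found_in_testing, fixed, reported_in_production):
        # An illustrative heuristic, not a rule. All thresholds are
        # placeholders; replace them with values from your own context.
        if found_in_testing > 0 and fixed / found_in_testing < 0.5:
            return "too deep? many bugs raised, few considered worth fixing"
        if found_in_testing > 20 and reported_in_production == 0:
            return "too deep? customers report no problems at all"
        if reported_in_production > found_in_testing:
            return "too shallow? customers find more than testing does"
        return "no strong signal from bug counts alone"

    print(depth_signal(found_in_testing=30, fixed=5, reported_in_production=0))

Remember the caveat above: in some industries zero problems in production is exactly the goal, so the second check would not apply there.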

Team feedback

Whether you're working in a waterfall testing team or an agile delivery team, you are likely to receive feedback from those around you. Be open to those opinions and use them to adjust your behaviour.

If your colleagues frequently add scope to your testing, question whether you've spent enough time testing, or perform testing themselves that you think is unnecessary, then your testing may be too shallow. On the flip side, if your colleagues frequently remove scope from your testing, question what else you could be doing with the time that you spend testing, or perform no testing themselves, then your testing may be too deep.

On the point of colleagues doing testing, this is a particularly useful indicator in agile teams. In the extreme case, if no unit tests are being written and your developers are outsourcing their testing to you, or if the business trust you to do their user acceptance testing, then it's likely that you're testing too deeply. If you want to cultivate an environment with shared ownership of quality then you have to allow room for that to happen.

Management feedback

If your testing pendulum is sitting at a point in the spectrum where your team are unhappy, it's likely that your manager will have a direct conversation with you about your test approach.

If you're testing too much, your manager will probably feel comfortable telling you this directly. If your testing is too shallow, you might be asked for more detail about what you're testing, be questioned about bugs reported by users, or have to explain where you're spending your time.

Indicators are heuristics, not rules. They include subjective language, e.g. "many", "not many", "frequently" or "a lot", which will mean different things in different situations. As always, apply the context of your organisation to your decision making.

The indicators that I've described can be summarised by opposing statements that represent the extremes of the pendulum swing:
  • Too shallow: you find few bugs while customers report many problems in production, colleagues add scope to your testing or test things you consider unnecessary, and your manager asks for more detail about where your time is going.
  • Too deep: you raise many bugs that are never fixed, customers report zero problems in production, colleagues remove scope from your testing or do no testing themselves, and your manager tells you directly that you're testing too much.


Finding equilibrium

Eventually, most testers will end up at an equilibrium where the pendulum hangs at rest, halfway between the two extremes. The comfort of this can be problematic. Once we have confidence in our approach we can become blind to the need to adjust it.

I believe the state that we want to strive for lies slightly towards the left of the spectrum, on the too-deep side. In order to keep a pendulum positioned off-centre, we have to regularly apply small amounts of pressure: a bump!

Pushing the boundaries

If you've been testing a product for a long time and wish to avoid becoming stale, give your testing a bump towards greater depth. Apply some different test heuristics. Explore using different customer personas. Alter your test data. Any variation that could provide a fresh crop of potential problems whose validity you can explore with your team.

You can use the outcome of a bump to calibrate your test approach within a smaller range of pendulum movement. Which changes are too much? Which are welcome and should be permanently added to your testing repertoire? Continued small experiments help to confirm that you are in the right place.
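As a hypothetical sketch of what a bump might look like in practice, you could keep small pools of variations and pick one for each new testing session. The pools below are examples of the kinds of heuristics, personas, and data mentioned above, not a canonical list:

    import random

    # Example pools of variations to bump a stale test approach.
    # The entries are illustrative; build your own from your context.
    heuristics = ["CRUD", "boundary values", "interruptions", "timeouts"]
    personas = ["new user", "power user", "screen reader user"]
    data_sets = ["empty records", "maximum-length fields", "legacy imports"]

    def next_bump():
        # Choose one small variation to apply in the next session.
        label, pool = random.choice(
            [("heuristic", heuristics), ("persona", personas), ("data", data_sets)]
        )
        return f"Try a different {label}: {random.choice(pool)}"

    print(next_bump())

Whether a particular variation earns a permanent place in your repertoire is exactly the calibration question described above.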

Conclusion

The testing pendulum is a useful analogy for describing how to find balance in a test approach. When entering a new team or project, it can be used to illustrate how we experience large initial swings in the depth of our testing as we learn the indicators of our environment. For those who have been testing the same product for a while, it can remind us to continuously verify our approach through bumps.

There is no one right answer to "How detailed should exploratory testing be?", but I hope the testing pendulum will help you to determine and describe the right level of detail for you.