Saturday, 12 July 2014

#KWST4

I made it along to Day Two of the fourth Kiwi Workshop for Software Testing (#KWST4) this weekend. The theme was "How to speed up testing, and why we shouldn't". It was great to see many new faces, and hear some very interesting experiences. I'd like to share three topics of conversation that resonated with me.

Fast go, throw away stuff

The morning opened with Viktoriia Kuznetcova, who spoke on her three strategies for speeding up testing in an environment with time pressure: cluster, prioritise, and parallelise.

Viktoriia spoke of an environment where release dates were often determined in advance, then extensively advertised to the existing user base. This meant that there was little opportunity to amend project timelines when understanding of the solution changed. She said that she felt, at times, the release dates were "not possible". Yet because the sales department had set an expectation in the market that the feature would be delivered on a certain date, the team would occasionally be asked to release on deadline without completing testing.

For me, this experience illustrated the perception that stakeholders outside of testing often have; to speed testing up, we just cut down the timeline and stop testing earlier. I feel that this view is born from the observation that testers are naturally cautious, and from experience with successful releases where testing was incomplete.

On the topic of "fast go, throw away stuff", people spoke about:

  • Are other teams dependent on a certain level of testing being completed? It's important to consider this when prioritising your own test activities with a view to eliminating the least important. 
  • If testing a release for a specific client, and testing is being targeted to meet their specific needs, will the cost of regression testing for later releases with a general audience then be inflated?
  • If you can't quantify the change in quality associated with a change in test scope, then it's hard to offer evidence to support a longer testing time frame. Without this, managers may not see any difference in the delivery to the client.
  • For management, testing is often about building confidence. Once an expectation has been set that testing can be completed in X number of weeks, they may then take some confidence in future purely from the same period of time elapsing. "You've had four weeks of testing. We released after four weeks last time and nothing went wrong."
  • Where permitted, testers may choose to continue testing beyond a release date. Stating that testing will continue introduces doubt into stakeholder decision making about the release. If the test team are planning to continue regardless, this sends a very clear message that testing is not done.


Bug reports are not the output of testing

The second experience report of the day was from Rachel Carson. She spoke about her organisation shifting from a waterfall to an agile development methodology, and how this had resulted in a faster test process.

Rachel talked about how her bug reporting style had changed. She used to raise every bug she found in the bug tracking system of her organisation. With the shift to agile, she found that her approach became a lot more pragmatic. With frequent conversation between developers and testers, Rachel didn't always need to use the tool. When she did have to raise a bug, she thought pragmatically about what would realistically be fixed.

For me, this experience illustrated how we can speed up testing when we stop thinking that bug reports are our output. The measure of a good tester is not the number of tickets against their name in a bug tracking system.

On this topic, people spoke about:

  • It is a big mindset shift to move from a blame culture, where the tester wants a written record of every issue they have observed in the system, to one in which problems are discovered and resolved without a paper trail.
  • As testers, our job is not to generate information, it's to provide useful information. More is not always better, especially for written bug reports.
  • In an agile environment, bug triage is owned by the team. Where there is a decision not to fix a problem, this doesn't necessarily need to be documented.


Testers becoming BAs

The third and final experience report of the day was by Adam Howard. He spoke of his experiences in implementing visual modelling and session based testing in a challenging project environment.

Adam spoke about working on a defect-fix release. The defects were focused in a specific area of the application, but were so high in number that they essentially represented a complete re-write of that piece of the system. Adam used visual test coverage models to build a holistic view of the collection of defects, as developing and testing each defect in isolation would have resulted in a fragmented end product.

For me, this experience illustrated how we can speed up testing by taking ownership of some business analysis activities. The tester should not be the first person to visualise a model of the solution, yet often we are. By leading a collaborative approach to create a visual document, the team develops a shared understanding of what is being built, which can make the job of the tester much easier.

Check out

My final and most practical take-home was a tip from Sean Cresswell. He spoke of a useful method for determining whether there was shared understanding of a technical solution across a team. Place a developer and a tester on opposite sides of a free-standing whiteboard and ask them each to draw a diagram of how they think the system works. I thought this was a quick, easy and brilliant way to spot any discrepancies in thinking.

I enjoyed my day at KWST4. Special thanks to Oliver Erlewein for organising and Richard Robinson for facilitating.

Thoughts shared here are as a result of group discussion between all KWST4 attendees; (pictured below from left to right) Katrina Clokie, Chris Rolls, Aaron Hodder, Parvathy Muraleedharan, Joshua Raine, Adam Howard, Richard Robinson, Andrew Robins, Oliver Erlewein, Viktoriia Kuznetcova, Thomas Recker, Rachel Carson, James Hailstone, Ben Cooney, Till Neunast, Nigel Charman & Sean Cresswell.

Wednesday, 9 July 2014

Open to Feedback

I spoke at AgileWelly last night in order to practice my talk for CAST2014. It was the first time that I'd presented in an auditorium, to a very large audience, from behind a podium, under a spotlight, with a microphone. The environment was certainly intimidating.

Before beginning, I emphasised to the audience that I was keen to receive feedback on my presentation, so that I could improve it before repeating myself on an international stage. I was nervous about asking people to critique my work because I was worried about what they would say, but I was also worried that they might not say anything! Indifference is the worst reaction.

As I finished speaking, I had already self-identified a couple of areas that I wanted to improve.

I thought that my introduction and conclusion were weak. These were the areas in which I was least prepared, and it wasn't as easy as I had imagined to improvise the content.

Additionally, when I checked the clock at the end of my first section, I realised that I was speaking far too quickly. I was so nervous that I had flown through my slides, and I had to consciously collect myself in order to continue at my planned, and more sedate, pace.

Given that it was so easy for me to identify these two changes, even in overwhelming-post-presentation-brain-overload, my feedback fears intensified. No longer did I think that people wouldn't have anything to say. Instead I thought that they'd have so much constructive criticism that I would be overwhelmed!

In the last 24 hours I have been privileged to receive a great deal of feedback from a number of different people. Thankfully, it's been largely positive. The suggestions offered have been constructive, and I've seen some consistent themes emerge.

Many people have endorsed my self-assessment. But in addition to identifying these same issues, they've also offered helpful and specific suggestions as to how I might change my approach. These have included both general presentation skills, and ways to expand particular pieces of my content.

I've also had feedback that I never would have thought of myself. Great ideas for how I might add content to the presentation based on the questions I received at the end, tips to promote my associated blog posts, and terminology for concepts that I was describing.

The feedback I've been given is a reflection on the strong IT community in Wellington. Thank you Aaron, Sarah, Craig, Ben, Stu, Larrie, Nigel, William, Adrian, Shaun, Yvonne, and others.

Though the whole experience was challenging, both presenting in a difficult environment and opening myself to critique, I really have learned a lot from it. I would have felt annoyed to be leaving the stage in New York thinking that I could have done it better. I want to deliver the best talk that I am capable of.

If you're yet to open your next presentation to feedback, I would encourage you to do so.

Wednesday, 2 July 2014

How to make a workshop hum

On Monday I co-facilitated a two hour workshop. It was good, but not great. I felt that the room was a bit flat, and wasn't entirely sure that my message had been conveyed effectively.

Today I had the opportunity to run the same content again in a different city. Thirty minutes before the first student arrived, I decided to tweak the material. I wanted this version of the workshop to hum.

And hum it did. Though I can't be sure whether it was the people in the room or the material, I thought I'd share the three things I changed. I certainly think they contributed to the shift in outcome.

Give and Take

The first change was near the beginning of the session. We had asked the participants to do a series of activities in fairly quick succession, which came about as a result of attempting to condense some old material into a smaller time frame. We had valued our interactive exercises above our static slides, which meant that there was no longer any significant presenter content between some of our activities.

On Monday, it felt like we were taking a lot from our students and giving very little in return. We wanted them to think about this, and then think about that, and then think about something else, with very little time in between for them to pause, digest and absorb. Though we ultimately wove all the pieces together, the balance felt wrong.

Today I re-instated one slide. Just one. Doing this created a few minutes of space between two activities that asked people to think pretty hard. By adding this piece of content, I gave something back as the presenter. I gave people enough time to feel that the output of the first activity was acknowledged, and gave their brains time to recover!

This change altered the attitude of participants. The second activity in this series was tackled with enthusiasm instead of reluctance.

Grow the Numbers

The second change was in the classroom dynamics for our exercises. In the Monday session we jumped between asking people to work as groups, as individuals, as groups, then as individuals again. Working individually makes people introspective, sombre and comparatively withdrawn. Working in groups is collaborative, dynamic and engaging. Because we mixed the numbers in each of our activities, classroom participation see-sawed.

Today I changed the activities specifically to create an increasing momentum through the module of material. I started with an informal group conversation, which worked as an icebreaker to have people comfortable with those around them. Then I ran an individual exercise, an exercise in pairs, then finally an exercise in larger groups.

Creating this progression significantly altered communication through the workshop. Student engagement evolved in a much more cohesive fashion; I had attention and participation to the very end.

Set up for Success

The third change was to our final exercise. It was migrated from another area of our training, but on Monday we found that it was a much harder problem in its new context than in its original one. In addition, we asked students to complete this last exercise alone. Having a final exercise that was both challenging and silent meant that the class finished on quite a flat, serious note.

Today I re-designed this exercise so that the students had a better chance of success. I left the answers from the previous problem on the whiteboard, as a prompt for their thinking. I provided an expanded mnemonic as a reference. I also switched to a group format so that they could use one another as resources, and actively encouraged them to collaborate.

These changes meant that the answers provided by each group at the end of the session reflected a real understanding of the concepts that I was trying to teach. Further, the students themselves recognised that they had grasped the material, and the room was buzzing with their shared success.


In the coming weeks I plan to revisit the rest of my training material and apply these same three principles across each module; give and take, grow the numbers, set up for success. Today makes me believe that this is how to make a workshop hum.