Wednesday, 31 July 2013

Ignoring the liver

I had the privilege of hearing Dr Ingrid Visser speak at the Altrusa International Convention this weekend. Dr Visser is a marine biologist who specialises in orca research.

Orca & Stingray

Orca hunt stingray in a group. When a stingray is successfully caught, the group gather at the surface to share the meal, ripping the stingray apart. Dr Visser observed that, when sharing a meal of stingray, the orca avoided eating the liver. She published a paper stating that New Zealand orca did not eat stingray liver, potentially due to the toxins the liver contains.

Since this publication, Dr Visser has observed New Zealand orca eating stingray liver. This revelation led her to ask herself the following three questions of her previous research:

  • What did I miss?
  • What did I do wrong?
  • What has changed?

Dr Visser stated that being open to challenging what is believed to be true is essential for scientists; she uses these three questions to examine her thinking.

Challenging our thinking

As testers, we too should feel obligated to question what we believe to be true. I believe the context-driven school of testing is born of frustration with those who have stopped asking these questions. Each question acknowledges a degree of failure; in asking, I acknowledge that I have made a mistake. They are difficult questions to pose, but those who stop asking them are at risk of becoming zombies.

The ability to identify new ideas and then truly question whether they should supplant our own thinking is necessary to grow our skills as testers, regardless of school. Those of us who identify as context-driven testers should not become complacent in our thinking, or restrict the opportunity for critique only to those who identify as being part of our community. It can be easy to dismiss ideas that challenge you as being from a different school of thought. But in doing so, we deprive ourselves of the opportunity to learn.

I believe that these questions should be called upon by every tester when they feel confronted. What did I miss? What did I do wrong? What has changed?


Monday, 22 July 2013

The fly in the room

I'm not sure how I feel about the Line at the Ladies Room and the Women in Testing issue of Tea Time with Testers. The noise about women in IT isn't a new thing, but it's like a large blowfly in my lounge that I wish would either die or disappear, instead of making that persistent and annoying buzzing sound.

My initial reaction to these new noises stems from the fact that I don't like being told what to do. "Speak Up!" they say. This is, perversely, much more likely to make me purse my lips and refuse to say a word.

Yet, having made the decision to contribute to the testing community, I feel a little resentment at the implication by TTwT that I could only be heard in a forum specifically requesting the opinions of women; that my thoughts would be drowned out or ignored among the general populace.

The mentoring program seems like a great idea, with benefits for both the aspiring speakers and mentors, but I still have not filled out a form to participate. The sales pitch starts with "We all have something to share with others", and that might be the crux of the issue. Perhaps we don't?

Just excuses and doubts perhaps.

Finally, I'm not convinced that either measure is going to kill that blowfly in my lounge. We create a testing magazine with articles written by women, we orchestrate a 50/50 gender ratio in presenters at a testing conference, then what? Creating strong female role models is great, but who are we leading?

Instead of preaching to the converted at testing conferences and in testing magazines, perhaps we should focus more attention on attracting women into testing in the first place? Why aren't we making our voices heard at high schools, colleges and universities first and foremost?

Wednesday, 10 July 2013

Bugs on Post-It notes

I like to write bugs on post-it notes. Simple ones, where I don't need a supporting screenshot or a log. It seems wasteful to put such a bug into a bug tracking system and nurse it through a process. I'd rather write it on a small, brightly coloured piece of paper, walk over to the developers and hand it to them, with a quick chat to check that they understand my scrawl.

The reaction to this approach from my project team has been mixed.

The developers seem to quite like it, as much as any developer likes receiving bugs, though they do joke about "death by a thousand paper cuts". It's harder to escape defects when they're lined up like soldiers along your desk, awaiting your attention, yet this approach feels like a collaborative process rather than a combative one. Having to hand over a physical object increases the opportunity for conversation, and the communication between development and test is excellent. As the developers resolve each problem they take great pleasure in returning the associated post-it to me with a confident tick on the bottom. There's also a healthy level of competition in trying not to have the most post-it notes stuck to your desk.

Occasionally the developers will ask me for a defect number to include in their subversion commit comment for changes. "Oh, I didn't raise that one in the tool," I say, "I just wrote it down." It turns out this isn't a problem: they describe the change instead of referencing a defect ID, and now the commit messages don't rely on a third-party system.
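As a purely illustrative sketch (these messages are invented, not taken from the project), the shift in commit style looks something like this:

    Before:  svn commit -m "Fix DEF-1042"
    After:   svn commit -m "Trim leading whitespace from the surname field so pasted names no longer fail validation"

The second message stands on its own; nobody needs to open a defect tracker to understand why the change was made.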

The project manager was initially a bit miffed. "How many bugs did we fix in this sprint?" he would ask. Though this could be tracked by counting the post-it notes marked as completed on the visual management board, as time passed he realised he didn't need that number as much as he thought. In an environment where bugs enter a tracking system it's important to track their life span, as they often live for a long time. It's pretty easy to ignore things that cannot be easily seen; it's much harder for a developer to ignore a multi-coloured desk. My experience is that faults reported on post-it notes are fixed startlingly fast, and as the answer to "How many bugs did we fix?" starts to become "All of them", the actual number loses significance.

In the small number of cases where I want to raise a problem that requires supporting evidence, I put this information into the tracking system. I also write up a post-it with the defect ID on it, and go and give it to the developer. I want to keep as many of the good things about post-it defects as possible, while still providing enough detail for the developer to understand and resolve the bug. But the overhead of using the (very simple) tool convinces me that it's for special cases only.

Post-It notes are the best.

Saturday, 6 July 2013

KWST3

I spent the last two days at KWST3 and I wanted to capture my thoughts from the event before they escape. I am unsure whether others will get any value from this brain dump!


Education

You can't teach how to test, but you can teach how to ask.

The path to education: Conflict -> Curiosity -> Critical Thinking -> Networking -> Community. When a tester hasn't experienced a conflict that sets them on this path, how can we create a catalyst?


How do we offer experiential learning outside of a project context? In a classroom or training course the opportunities for hands-on learning may be limited by the number of applications available for testing. Suggestions included basic applications like WordPad, online mazes / games / puzzles, or asking someone with an interest in coding to create a small application for this purpose.


There is risk associated with testing activities; should we allocate resources in a project based on this risk? Although in a scientific context the risk of failure can be great, such as experimenting with corrosive acid, in the world of software the risks can be smaller. When we assign low-risk tasks to juniors, are we robbing them of an opportunity to learn? Perhaps the benefit of letting them learn from failure far outweighs the risk? Have we become too risk averse at the expense of education?


BBST has a Creative Commons licence, so not only is it a great course for learners, it is also one that's accessible to educators for use in their own teaching.

uTest and Weekend Testing are worth investigating!



Behaviour

Passion and aggression are not the same thing. We need to be aware of how our rhetoric, and that of our community, is perceived by others, so it remains challenging without being off-putting.

How do you escape a situation where you've reached a "quasi-agreement": a manager stops fighting you, but instead of accepting your approach they choose to ignore it? When being ignored is jarring and you believe others could benefit from what you're doing, what can you do?

  1. Inception - present the idea to the person with whom you've reached an impasse and then make them think it's their idea. Get them to take ownership and champion further adoption. Requires an element of selflessness, as you will no longer get any credit for the idea taking hold.
  2. Challenging with kindness - question the person until they start to draw their own conclusions. Pull them towards the same answer as you, but get them to take the journey and reach the conclusion themselves, rather than presenting only the conclusion to them.

How do you address a situation where you believe a person is unaware of beliefs that are holding them back, or that could be harnessed to make them a better tester? Be kind, but confront them about it. Often sharing something about yourself is a good way of prompting honesty in others. Identifying these beliefs and challenging someone to dispel or harness them can be a way of breaking people out of their ruts and setting them on a path to learning.

Testers who do not challenge, question and criticize may be constrained by their culture.



Shifting to CDT

What's more important: a) maintaining the test scripts, or b) doing the testing? When the answer is always b), then perhaps you need to focus on doing the best testing possible without the scripts!

When shifting to a CDT approach you may notice that:

  • Testing outcomes improve despite deterioration of testing scripts.
  • Testing without scripts finds the issues that actually get fixed.
  • Staff turnover drops.

Stay tuned for the case study...


These thoughts were formed with / stolen from the following amazing people: Aaron Hodder, Oliver Erlewein, Rich Robinson, Brian Osman, Anne Marie Charrett, Jennifer Hurrell, Erin Donnell, Katrina McNicholl, Andrew Robins, Mike Talks, Tessa Benzie, Alessandra Moreira, James Hailstone, Lee Hawkins, Damian Glenny, Shirley Tricker, Joshua Raine and Colin Cherry. A handful are my own.