The process

We completed the activity in groups of three. Each group was asked to sort the Lominger card deck into three categories: essential, very important, and nice to have. Our 67 cards had to be split as 22 cards in the first category, 23 in the second, and 22 in the third.
Upon completing the sort, each group was given 22 red and 22 blue dots. These were used for dot voting on a wall chart against the full list of key competencies. Blue marked what was essential and red what was nice to have (the top and bottom categories). When all six groups had completed this task, we gathered around this chart to talk through what was discovered.
The results

When looking at what people considered essential, there were a number of competencies that everyone agreed upon: listening, understanding others, approachability, motivating others, managing diversity, and integrity and trust. Our facilitator picked these from the chart, and with six votes against each it was clear there would be no argument.
The next item that the facilitator selected had only three votes. This caused a fair bit of confusion: a number of other competencies had four or five votes, yet we appeared to have jumped to discussing a much lower ranked one. Why?
The purpose

Prior to this activity, the group had identified a set of key challenges in the role. A strong theme that emerged was the lack of time available to deliver on both our management responsibilities and our delivery responsibilities to clients. The competency with three votes was time management. Our facilitator argued that although the skill appeared to have been rated lower, the absence of any red dots meant that everybody considered it to be either essential or very important; given that it had also been identified as a key challenge, it was worthy of consideration.
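The facilitator's selection rule can be made concrete. Because each group had to place a red dot against every card in its nice-to-have pile, a competency with no red dots was never ranked bottom by anyone. The sketch below illustrates this with hypothetical tallies (the competency names and vote counts are illustrative, not the actual workshop data):

```python
from collections import Counter

# Hypothetical dot tallies from six groups: blue = essential,
# red = nice to have. Counts here are invented for illustration.
blue = Counter({"listening": 6, "integrity and trust": 6,
                "presentation skills": 5, "time management": 3})
red = Counter({"presentation skills": 1})

def never_ranked_bottom(competency):
    # No red dots means no group placed this card in "nice to have",
    # so every group considered it essential or very important.
    return red[competency] == 0

# Ranking by blue votes alone buries time management (3 votes)...
by_blue_votes = sorted(blue, key=blue.get, reverse=True)

# ...but filtering on "no red dots" surfaces it with the top picks.
consensus = [c for c in blue if never_ranked_bottom(c)]
```

Note that `Counter` returns zero for missing keys, so competencies that received no red dots at all pass the filter without needing an explicit entry.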
It was interesting to me to observe how thrown everybody was by this shift. Our focus as a group was on identifying the competencies that had the most votes, but the purpose of the exercise was to identify the skills we felt were required to succeed in our positions. In executing the process we had lost our purpose.
In testing

When I test, I can find myself getting caught up in reporting things that I perceive as problems but the client sees as enhancements. I do this because one outcome of my test process is logging defects, and I want to record that these things were discussed. When I think about the wider purpose of my testing, I'm not sure whether this activity adds value. If the client accepts the behaviour, is this just noise?
When examining the outcomes of our test process, it's important that we remember to take a step back from what we have produced, or are expected to produce, and think about the purpose of what we're doing. What was the process put in place to achieve? Does the outcome meet the goal?