Context

I was the test manager of a project for five half days of four hours each. The project was a learning exercise, and the product under test was the TradeMe Sandbox, a test version of the most popular online auction site in New Zealand.
I had a team of nine testers, six of whom were recent graduates. They were split into three groups of three, and each group was assigned a specific area of functionality: the red team for selling an item, the green team for feedback on a purchase, and the purple team for search.
The test process was straightforward. Each trio developed a visual model of their piece of the system, which I reviewed. They then estimated how much testing would be required by breaking their model into 60-minute sessions, each with a specific charter. Testing was divided among the team, with each tester using a basic session-based test management report to record their activities. At the end of each test session this report was reviewed during a short debrief.
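To make the session paperwork concrete, here is a minimal sketch of what a session-based test management report might hold. The field names are illustrative assumptions on my part, not the exact template the team used:

```python
from dataclasses import dataclass, field

# A minimal sketch of a session-based test management (SBTM) report.
# Field names are illustrative; the real template may differ.
@dataclass
class SessionReport:
    charter: str                  # the mission for this 60-minute session
    tester: str
    area: str                     # functional area, e.g. "search"
    duration_minutes: int = 60    # each charter is a one-hour session
    notes: list = field(default_factory=list)   # observations made while testing
    bugs: list = field(default_factory=list)    # bug IDs raised in the session

# Example: one session sheet, ready to be reviewed at the debrief.
report = SessionReport(
    charter="Explore search filters with boundary category values",
    tester="Tester A",
    area="search",
)
report.bugs.append("BUG-12")
```

The fixed one-hour duration is what later makes the charter counts comparable across testers and teams.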
The entire project can be summarised at a high-level by this timeline:
Task Management

Due to the tight timeframe, I didn't want to support 3M by creating a physical visual board for task management. Instead I set up a virtual board in Trello for the team to work from. The board showed which pieces of functionality were being tested, and the number of charters awaiting execution, in progress, and completed. All cards on the board were colour-coded to match the trio of testers responsible for that area.
This tool made it easy for me, as a test manager, to see what my team were doing. At the end of each day of the project I archived the Tasks - Done list as Charters Complete - YYYY/MM/DD. This retained a separate list of the cards completed on any given day, and kept the board usable for the team.
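The end-of-day archiving step can be sketched with a simple in-memory model of the board. The list names mirror the ones described above; the card contents and the structure itself are invented for illustration:

```python
from datetime import date

# Minimal in-memory model of the Trello board described above.
# Card text is invented; list names follow the text.
board = {
    "Charters Awaiting Execution": ["green: feedback charter 3"],
    "Tasks - In Progress": ["purple: search charter 2"],
    "Tasks - Done": ["red: selling charter 1", "red: selling charter 2"],
}

def archive_done(board, when):
    """Move completed cards into a dated archive list and empty Tasks - Done."""
    archive_name = f"Charters Complete - {when:%Y/%m/%d}"
    board[archive_name] = board["Tasks - Done"]
    board["Tasks - Done"] = []
    return archive_name

name = archive_done(board, date(2014, 2, 3))
```

Doing this once per day gives a per-day record of completed charters without the Done column growing unusably long.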
Dashboard

The first high-level report that I wanted to create was a daily dashboard. I started with the Low Tech Testing Dashboard by James Bach and the Test Progress Report by Leah Stockley. Each day I shared the dashboard with a group of people who were not involved with the project. Their feedback, from a senior management perspective, altered the content and format of the report from its origins to what you see below.
The first column presents the functional area under test, with a star coloured to match the associated card in Trello. Each piece of the system has a priority. The quality assessment is exactly as described by James Bach in his Low Tech Testing Dashboard; the key at the top of the column expands to explain each value that may appear. The progress indicator is a simple pie graph that reflects how close testing is to completion in each area. Bugs are presented in a small mind map, where the root node includes a count and each branch has an indication of severity and the bug number. A brief comment is provided where necessary.
The dashboard is designed to present information in a succinct and visual fashion. It was created in xMind, the same tool that was used by the testers for their visual modeling. This allowed the dashboard to integrate directly with the work of the testers, making a very useful one-stop-shop from a test management perspective.
Tracking

The last level of reporting I wanted was a way to anticipate the future: the crystal ball of session-based test management. I found a great resource on Michael Kelly's blog, from which I pulled the following two spreadsheets.
The first tracked the percentage of charters completed by each team:
This gave me some idea of the effort remaining, which would be familiar to any test manager who tracks progress by test case count. The nice thing here is that the unit of measure is consistent: each charter is one hour, whereas test cases can vary in duration. The only annoyance I found with this spreadsheet was that the end goal, the total number of charters, changed each day as new scope was discovered.
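The moving target is easy to see with a small worked example. The figures below are invented for illustration; they show how the percentage can understate a team's progress when the denominator grows:

```python
# Sketch of the first tracking spreadsheet: percentage of charters
# complete per team. Figures are invented for illustration.
def percent_complete(completed, total):
    return round(100 * completed / total, 1)

# Day 1: the red team has 4 of 10 known charters done.
day1 = percent_complete(4, 10)   # 40.0

# Day 2: 6 charters are done, but 2 new charters were discovered,
# so the percentage rises less than the raw count suggests.
day2 = percent_complete(6, 12)   # 50.0
```

Two charters were executed on day 2, yet the team appears to have advanced only ten percentage points, because the total grew from 10 to 12.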
This brings me to charter velocity:
The blue line on the graph showed me that our scope was settling, as the number of charters created dropped each day. As the team settled into a rhythm of execution, the green line levelled out. The orange line showed the work remaining; by extending its trend to the point where it crosses the X-axis, we could guess when testing would be complete.
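That extrapolation amounts to fitting a straight line to the work-remaining series and finding where it reaches zero. A minimal sketch, with invented numbers standing in for the real spreadsheet data:

```python
# Sketch of the charter velocity projection: fit a straight line to the
# "work remaining" series and extend it to the X-axis to estimate the
# completion day. Numbers are invented for illustration.
def fit_line(xs, ys):
    """Ordinary least-squares fit, returning (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

days = [1, 2, 3, 4]
remaining = [30, 24, 19, 13]          # charters still to execute each day
slope, intercept = fit_line(days, remaining)
completion_day = -intercept / slope   # where the trend crosses the X-axis
```

With these figures the trend crosses zero early on day 7, which is the kind of "crystal ball" estimate the spreadsheet provided. Like any trend projection, it assumes the recent velocity holds and that no significant new scope appears.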
I found both of these spreadsheets necessary, as a test manager, to feel comfortable that the project team were heading in the right direction in a timely fashion. That said, I did not share these measures with my senior management team, for fear they would be misinterpreted. They received the dashboard alone.
Is this approach similar to your own? Do you have any thoughts?