Saturday, 22 March 2014

Finding a path

I've had a number of conversations recently with people who are new to testing and feel that learning to code is the only possible path for career progression. It prompted this tweet:


I'm not saying that coding is bad and we shouldn't do it. I'm saying that it isn't the only way to diversify and upskill as a tester. If you don't enjoy coding, then why take that career path? There are so many opportunities to choose from, and learning is much easier when you're truly interested in the topic you study. Try to find something that you're excited about.

Critical Thinking

There is nothing more frustrating than missing a bug: watching a colleague discover something that you feel you should have found first. I think the first path for every tester is learning about how they and others think.

Test Approach

Why do you test the way that you do? Are there other test approaches that you could learn and attempt to implement in your organisation? What are the benefits and pitfalls of each?

Mobile

Software for mobile platforms is increasingly popular and demands a particular set of testing skills. Thinking about the environmental factors that can influence mobile application users is an interesting problem.

Usability

If your application is frustrating or difficult to use, then it doesn't matter how brilliant the code is underneath. Yet few people dedicate study to the aspects of an application that make it appealing and engaging for users.

Performance

Do you like to push at the limits of an application? Do you enjoy statistical analysis? Performance testing examines the behaviour of software under specific use scenarios.

Security

Is your application safe from malicious or accidental misuse? Have you thought about vulnerabilities and how they might be exploited? Security testers look for the loose threads that they can pull to create a hole.

Integration

We may have a perfect application in an isolated environment, but when we deploy it to interact with other systems, any number of issues can arise. Testers who understand how applications speak to each other, and how to test these connections, are very valuable. The knowledge required for integration testing will vary with the architecture involved.

Leadership or Management

Are you interested in motivating a team? Do you enjoy communicating and being a conduit for information? Are you happy to run interference so that your testers can concentrate on testing?

Writer or Speaker

Share your experiences so that others can learn from them. The act of preparing an article or presentation will solidify your own learning, and offering your ideas to others is a great way to broaden your education through feedback from the wider testing community.

Consulting

Consulting is about learning to eloquently communicate your thinking. It's about understanding a problem or situation, then suggesting ideas that may help address it. Good consultants think inventively and laterally.

Teaching or Coaching

Once you have established skill in some area, you might enjoy teaching it to others. Planning, preparing and delivering education material can be both challenging and incredibly rewarding. Learn how to create interactive classrooms and present information in an accessible way.

Community Leader

Some people enjoy creating community by organising MeetUp groups, or conferences, or special interest groups, or weekend testing events, or crowd-sourced testing, or any number of other activities. If no one in your area has established a community event, why not create one?


What other paths do you see? Do you have other resources I could add here?

Tuesday, 11 March 2014

Benefits of session-based test management

When selling session-based test management to those who are unfamiliar with the concept, you may be asked for evidence of the benefits of the approach. It makes sense that those responsible for implementing change would first want to know what they stand to gain from it.

I agree with James Bach when he says:
There are no reasonable numbers; no valid studies of testing productivity that compares ET with scripted testing. All we have are anecdotes.
Bach, J. (2003). Exploratory Testing Explained.

However, there are many anecdotes available in our industry that, when assembled in one place, form a chorus of common opinion on the benefits an organisation may observe by adopting session-based test management. These are people who speak not to the what, but to the why. Once you have read the theory, why might you be compelled to try it yourself?

Here is a series of referenced quotes detailing the benefits observed from session-based test management.

Better communication and increased collaboration

The methods above gave us the tools to communicate within and outside the team. By improving communication, we felt that we reduced the number of misunderstandings. Communication also helped to increase trust, which both improved personal relations, but also helped facilitate solutions
Lyndsay, J. & van Eeden, N. (2003). Adventures in Session-Based Testing.

In this case, the testing team changed how they documented their testing - cutting most of the documentation - increased collaboration with those outside of the testing team, and continued to deliver high-quality defects and metrics
Kelly, M. (2011). Session-Based Test Management. In M. Heusser & G. Kulkarni (Eds.), How to Reduce the Cost of Software Testing.

The act of removing this documentation task from my team allowed them to become more involved in the design and requirements clarifying meetings with the developers and the Product Manager and Software Architect.  This enhanced their understanding of what was required and what could be validated.  This led to a reduction in the number of bugs that we entered and had to disqualify as “performs as designed” from nearly seventy on the previous release to just three on this last release.  That is a metric that clearly demonstrates how much better the design and behavior of the application was understood by the developers and the testers as a shared vision.  This allowed us to recoup the time that would have been spent discussing and documenting these bugs and apply that to more testing.
Petty, K. (2005). Reflections on the Use of Session-Based Exploratory Testing As the Primary Test Methodology for Software in an Agile Environment. Presented at the Indianapolis Workshops on Software Testing (IWST).

SBTM allows us to evaluate how the software actually behaves.  It allows us to then compare the documented requirements with the behavior.  Sometimes we find bugs in the software, sometimes we find gaps in the requirement that were presumed or, usually, not recognized as requirements - and that forces the conversation where the testers can help facilitate.  So, the developers and designers can talk about what their vision was and the business reps/experts can talk about theirs.
Walen, P. (2014).

Higher proportion of testing spent in execution rather than documentation

Test sessions aren’t a new way of testing, but a different way to plan and track testing. One that provides a stronger connection between testing and test objectives. They increase the time available for testing, by improving flexibility and responsiveness, whilst reducing the time spent on test planning and documentation. That makes them valuable for both traditional and Agile testing.
Prentice, A. (2011). Testing’s In Session.

The amount of time spent on testing vs. how much time you spend on planning is a lot different from the regular test scripts approach. This is the core of Exploratory testing and this is where SBTM shines.
Jansson, M. (2010). An analysis of Session-based test management.

Test execution can start immediately 

The test team had a lot of knowledge of the system to be migrated so a mind map on functionality and test areas could be easily determined. We used the mind maps to generate test ideas, so we could almost directly start with the test execution.
Schrijver, S. (2014). Where to Use The Session Based Test Approach - A Single Project (different in size).

Information for decision making is available earlier

Rather than wait until the end of testing to find out how good the system was, the decision could be assessed earlier, allowing warning of problems and re-prioritisation of effort.
Lyndsay, J. & van Eeden, N. (2003). Adventures in Session-Based Testing.

More tests are executed

The indication that I see is that you will do a whole lot more tests using session-based testing than when you are focused on running specific test cases with predefined steps you should take.
Jansson, M. (2010). An analysis of Session-based test management.

When we started to work with SBTM we noticed how much more we could do. We could dig a lot deeper into the system
Forsberg, J. (2014).

Higher number of bugs found

I helped one large company start an exploratory test team in a department surrounded by scripted testers. They found that the ET people, unencumbered by the maintenance associated with test artifacts, were able to pursue more leads and rumors about risk. They found more bugs, and more important bugs. Three years later, the team was still in place.
Bach, J. (2003). Exploratory Testing Explained.

… the rate at which significant bugs were found stayed the same on the introduction of these methods, and increased for the next five months …
Lyndsay, J. & van Eeden, N. (2003). Adventures in Session-Based Testing.

Further, session-based testing provides a cost-effective means of identifying defects. The method is simple to understand, easy to implement, and has demonstrated effectiveness in ensuring the viability of medical device software.
Wood, B. & James, D. (2003). Applying Session-Based Testing to Medical Software.

Faster response when retesting bug fixes

Test sessions allowed a faster response to the arrival of a fix, and served as effective proof that the fix had been well implemented and tested. The team found that looking at the session for the test when the problem had been found helped plan the retest.
Lyndsay, J. & van Eeden, N. (2003). Adventures in Session-Based Testing.

Testers empowered to make decisions in response to changing risk

with each test executed the tester learns more about the product and uses that information to design the next best test to be executed. It's a process that allows the tester to respond to each test in a way that should maximize focus on the most relevant risks.
Kelly, M. (2009). The benefits of exploratory testing in agile environments.

The test team felt in control of their work. They could see the size of it, see how much they had done, and what was left. They could decide what to do next, and back up those decisions.
Lyndsay, J. & van Eeden, N. (2003). Adventures in Session-Based Testing.

With a test script you are following a predetermined path on your way to an expected outcome. You don't write down any thing unless you see something unexpected. ... With Session Based Exploratory Testing, when you are busy with a test session, you observing the behavior of the application under test. You write down what your experiences are, which steps you have executed, the test data you have used. When something unexpected occurs, you can investigate further.
Schrijver, S. (2013). My "elevator pitch" for Session Based Exploratory Testing.


Exploratory testing has multiple benefits – one of the greatest being that it doesn't lock down your testing into the forms of tests you can imagine before you've even had “first contact” with a prototype version of software.
Talks, M. (2014). Learning to use exploratory testing in your organisation.

Improved tester morale and job satisfaction

[Testers] were encouraged to measure their own progress and their estimates were trusted. Morale improved, and the test team was seen as an interesting and valuable place to work.
Lyndsay, J. & van Eeden, N. (2003). Adventures in Session-Based Testing.

I believe that you become more satisfied as a tester when you use session-based test management instead of running hardcore test scripts.
Jansson, M. (2010). An analysis of Session-based test management.

Experience Reports

What's really great about it?
  • It gets everyone involved!
  • It focuses everyone involved!
  • It allows testers variation in their work!
  • It gets the job done quickly! QA is not the bottleneck!
  • It's fun, and creates team spirit! (Important for embedded QA!)
  • It allows multiple perspectives simultaneously on the task at hand!
  • It shows everyone what QA does!


Womack, M. (2011). Meeting Agile Deadlines with Session Based Testing. Presented at Swiss Testing Night Zurich.


What Happened
  • Morale improved significantly
  • Defect rates for NEW development were kept on par with maintenance projects.
  • More “Critical” bugs found faster.
  • Major bugs found after code freeze reduced.
  • Almost steady defect ratios of 50% Critical, 28% Moderate, and 22% Minor across all releases
  • 90% reduction in “Non-Bug” Bugs.
  • 300% increase in number of test cases performed
  • Fewer late nights for SQA.


Petty, K. (2009). Transitioning from Standard V&V to Rapid Testing Practices in a Chaotic Project Environment. Presented at the Conference of the Association for Software Testing 2009.


If you would like to add to this list, please leave a comment below.

Sunday, 2 March 2014

Reporting session-based testing

I was recently asked about the reporting available for session-based testing at a test management and senior management level. By discarding test case count, the metric that many managers use to track their teams, I had created uncertainty about what they would use instead. Though I could demonstrate the rich information available from visual modelling and session-based test reports, I was caught short by a lack of example management reporting artifacts. I set out to fix this by creating a set of high-level reports for a sample project.

Context

I was the test manager of a project that ran for five half-days of four hours each. The project was a learning exercise and the product under test was the TradeMe Sandbox, a test version of the most popular online auction site in New Zealand.

I had a team of nine testers, six of whom were recent graduates. They were split into three groups of three, and each group was assigned a specific area of functionality: the red team for selling an item, the green team for feedback on a purchase, and the purple team for search.

The test process was straightforward. Each trio developed a visual model of their piece of the system, which I reviewed. They then estimated how much testing would be required by breaking their model into 60-minute sessions, each with a specific charter. Testing was divided among the team, with each tester using a basic session-based test management report to record their activities. At the end of each test session this report was reviewed during a short debrief.
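To make the shape of those session reports concrete, here is a minimal sketch of the fields we captured, expressed as a Python record. The field names are my own illustration rather than a formal SBTM schema.

# A minimal sketch of a session report record; field names are
# illustrative, not a formal SBTM schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SessionReport:
    charter: str                  # the mission for this 60-minute session
    tester: str
    area: str                     # "red" (selling), "green" (feedback), "purple" (search)
    duration_minutes: int = 60
    bugs: List[str] = field(default_factory=list)     # problems raised
    issues: List[str] = field(default_factory=list)   # questions and blockers
    notes: str = ""               # what was actually done and observed

# An example record, as it might look heading into a debrief.
report = SessionReport(
    charter="Explore listing an item with boundary-value prices",
    tester="Tester A",
    area="red",
)
print(f"{report.tester} | {report.charter} | {len(report.bugs)} bugs")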

The entire project can be summarised at a high-level by this timeline:


Task Management

Due to the tight timeframe, I didn't want to support 3M by creating a physical visual board for task management. Instead, I set up a virtual board in Trello for the team to work from. The board showed which pieces of functionality were being tested, and the number of charters awaiting execution, in progress, and completed. All cards on the board were colour-coded to match the trio of testers responsible for that area.


This tool made it easy for me, as the test manager, to see what my team were doing. At the end of each day of the project I archived the Tasks - Done list as Charters Complete - YYYY/MM/DD. By doing this I retained a separate list of the cards that were completed on any given day, and kept the board usable for the team.
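I did this archiving by hand, but it could be scripted. The sketch below uses the Trello REST API via Python's requests library; the key, token, and board id are placeholders, and the endpoints should be checked against the current Trello documentation before relying on them.

# A sketch of the end-of-day archive step against the Trello REST API.
from datetime import date
import requests

AUTH = {"key": "your-api-key", "token": "your-api-token"}   # placeholders
BOARD_ID = "your-board-id"                                  # placeholder

# Find the "Tasks - Done" list on the board.
lists = requests.get(
    f"https://api.trello.com/1/boards/{BOARD_ID}/lists", params=AUTH
).json()
done = next(l for l in lists if l["name"] == "Tasks - Done")

# Rename the list with today's date, then archive (close) it.
stamp = date.today().strftime("%Y/%m/%d")
requests.put(
    f"https://api.trello.com/1/lists/{done['id']}/name",
    params={**AUTH, "value": f"Charters Complete - {stamp}"},
)
requests.put(
    f"https://api.trello.com/1/lists/{done['id']}/closed",
    params={**AUTH, "value": "true"},
)

# Recreate an empty "Tasks - Done" list ready for the next day.
requests.post(
    "https://api.trello.com/1/lists",
    params={**AUTH, "name": "Tasks - Done", "idBoard": BOARD_ID},
)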

Dashboard

The first high-level report that I wanted to create was a daily dashboard. I started with the Low Tech Testing Dashboard by James Bach and the Test Progress Report by Leah Stockley. Each day I shared the dashboard with a group of people who were not involved with the project. Their feedback from a senior management perspective altered the content and format of the report from its origins to what you see below.


The first column presents the functional area under test, with a star coloured to match the associated card in Trello. Each piece of the system has a priority. The quality assessment is exactly as described by James Bach in his Low Tech Testing Dashboard; the key at the top of the column expands to explain each value that may appear. The progress indicator is a simple pie graph that reflects how close to complete testing is in each area. Bugs are presented in a small mind map, where the root node includes a count and each branch indicates the severity and the bug number. A brief comment is provided where necessary.
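For readers without the image, the information on each dashboard row can be approximated in plain text. In this sketch the area names, assessments, and figures are invented purely for illustration, not taken from the real project report.

# A rough plain-text rendering of the dashboard rows; all values invented.
rows = [
    # (area, priority, quality assessment, percent complete, bug count, comment)
    ("Sell an item", 1, "troubled",  60, 4, "blocked on photo upload"),
    ("Feedback",     2, "OK so far", 80, 1, ""),
    ("Search",       2, "OK so far", 45, 2, ""),
]

print(f"{'Area':<14}{'Pri':<5}{'Quality':<12}{'Done':<7}{'Bugs':<6}Comment")
for area, pri, quality, pct, bugs, comment in rows:
    print(f"{area:<14}{pri:<5}{quality:<12}{pct:>3}%   {bugs:<6}{comment}")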

The dashboard is designed to present information in a succinct and visual fashion. It was created in XMind, the same tool that the testers used for their visual modelling. This allowed the dashboard to integrate directly with the work of the testers, making it a very useful one-stop shop from a test management perspective.

Tracking

The last level of reporting I wanted was a way to anticipate the future: the crystal ball of session-based test management. I found a great resource on Michael Kelly's blog, from which I pulled the following two spreadsheets.

The first tracked the percentage of charters completed by each team:


This gave me some idea of the effort remaining, which would be familiar to a test manager who tracks progress by test case count. The nice thing here is that the unit of measure is consistent: each charter is one hour, as opposed to test cases, which can vary in duration. The only annoyance I found with this spreadsheet was that the end goal, the total number of charters, changed each day as scope was discovered.
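The calculation behind that spreadsheet is simple enough to sketch in a few lines of Python. The counts below are invented for illustration; note how the denominator can grow from one day to the next as scope is discovered.

# Percentage of charters complete per team, per day. The counts are
# invented; the total known charters can grow as scope is discovered.
daily_counts = {
    # day: {team: (charters completed, total charters known that day)}
    1: {"red": (2, 10), "green": (1, 8), "purple": (2, 9)},
    2: {"red": (5, 12), "green": (4, 8), "purple": (5, 11)},
    3: {"red": (9, 12), "green": (6, 9), "purple": (8, 11)},
}

for day, teams in daily_counts.items():
    summary = ", ".join(
        f"{team}: {done / total:.0%}" for team, (done, total) in teams.items()
    )
    print(f"Day {day}: {summary}")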

This brings me to charter velocity:


The blue line on the graph showed me that our scope was settling, as the number of charters created dropped each day. As the team settled into a rhythm of execution, the green line levelled out. The orange line shows the work remaining; by extending its trend to where it crosses the X-axis, we can guess when testing will be complete.
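That extrapolation is just a straight-line fit. As a rough sketch, with invented data points, the projected finish is the day on which the fitted line for remaining work reaches zero:

# Fit a straight line to the remaining-work figures and find where it
# crosses zero. The data points are invented for illustration.
days = [1, 2, 3, 4]
remaining = [28, 24, 17, 11]   # charters still to execute at the end of each day

n = len(days)
mean_x = sum(days) / n
mean_y = sum(remaining) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(days, remaining)) / sum(
    (x - mean_x) ** 2 for x in days
)
intercept = mean_y - slope * mean_x

# Remaining work hits zero where slope * day + intercept = 0.
print(f"Projected to finish around day {-intercept / slope:.1f}")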

I found both of these spreadsheets necessary, as a test manager, to feel comfortable that the project team were heading in the right direction in a timely fashion. That said, I did not share these measures with my senior management team for fear they would be misinterpreted; they received the dashboard alone.

Is this approach similar to your own? Do you have any thoughts?