Wednesday, 26 August 2015

Accessibility & Usability Testing Pathway

This pathway is a tool to help guide your self-development in accessibility and usability testing. It includes a variety of steps that you may approach linearly or by hopping about to those that interest you most.

Each step includes:
  • links to a few resources as a starting point, but you are likely to need to do your own additional research as you explore each topic.
  • a suggested exercise or two, which focus on reflection, practical application and discussion, as a tool to connect the resources with your reality.

Take your time. Dig deep into areas that interest you. Apply what you learn as you go.


STEP - Understand the impact of accessibility

Learn what accessibility testing is and discover why it is needed:
EXERCISE
[1 hour] Developing an accessible product requires commitment from the entire development team, not just the tester. To share your newfound appreciation of the impact of accessibility, challenge your team to spend 30 minutes without a mouse. As well as their day-to-day work, ask them to check out the applications you develop. Afterwards, reflect as a team on the difficulties you encountered and the number of changes required to make your applications more accessible.


STEP - Accessibility standards

These formal documents are relatively difficult to read exhaustively, but it's worth browsing through the standards to get an understanding of what they contain so they can be used as a reference:
EXERCISE
[30 mins] Can you locate information in the standards to answer the following questions? (For the first, a sketch of the underlying contrast calculation follows this list.)
  1. What is the minimum contrast ratio for large-scale text?
  2. What are the restrictions on time-limited content, e.g. a form submission timeout?
  3. If an error is automatically detected, what are the accessibility considerations for displaying error messages?
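
To make the first question concrete, here is a minimal sketch (not part of the original exercise) of the contrast calculation that WCAG 2.0 defines; the colour pair used is an invented example.

```python
# Sketch of the WCAG 2.0 contrast ratio calculation. The standard expresses
# its minimums (e.g. for large-scale text) in terms of this ratio.

def relative_luminance(rgb):
    """Relative luminance of an sRGB colour, per the WCAG 2.0 definition."""
    def linearise(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(colour_a, colour_b):
    """(L1 + 0.05) / (L2 + 0.05), where L1 is the lighter colour's luminance."""
    l1, l2 = sorted(
        (relative_luminance(colour_a), relative_luminance(colour_b)),
        reverse=True,
    )
    return (l1 + 0.05) / (l2 + 0.05)

# Invented example: mid-grey text (#777777) on a white background.
print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))  # ~4.48
```

Comparing a result like this against the thresholds you find in the standard shows how close a colour pair sits to the pass/fail boundary.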


STEP - Accessibility testing heuristics and strategy

Investigate how we test whether applications are compliant. These practical resources include heuristics, mnemonics, test strategies, demonstrations and more:
EXERCISE
[2 hours] Select a set of heuristics or test ideas that appeal to you. Talk to your business analyst or business lead about which aspects of accessibility they feel are most important. Conduct a 60 minute tool-free accessibility tour of the application that you work on, prioritising the testing that your business representative indicated. Note any problems that you encounter. Share the results of your tour with the same person and discuss how improvements to accessibility might be incorporated into your development process.


STEP - Accessibility testing tools

Learn about the tools that are available to help test accessibility in applications:
EXERCISES
[2 hours] There are a lot of tools to help assess accessibility properties of a site, many of which integrate within the browser. Evaluate the tools on offer and discover which you prefer. Compare the results of the automated assessment against the results of your own accessibility tour. What type of problems did the tool discover that you missed? What type of problems did you discover that the tool missed?
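
To see what one of these scanners reports programmatically, one option (my suggestion, not from the original resource list) is the axe engine driven through Selenium. The sketch below assumes the axe-selenium-python wrapper and a working ChromeDriver; treat the details as a starting point to verify rather than a definitive recipe.

```python
# Hypothetical sketch: an automated axe accessibility scan via Selenium.
# Assumes `pip install selenium axe-selenium-python` and ChromeDriver on PATH.
from selenium import webdriver
from axe_selenium_python import Axe

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder: a page you toured manually

axe = Axe(driver)
axe.inject()         # add the axe-core script to the page
results = axe.run()  # audit the current page

# Each violation names the rule, its impact and the offending nodes --
# useful raw material for comparison with your manual tour notes.
for violation in results["violations"]:
    print(violation["id"], "-", violation["impact"])

driver.quit()
```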

[2 hours] Download and trial the JAWS screen reader. See which areas of your applications perform well, and which are incomprehensible. Discover the main features of JAWS and learn how to navigate your sites as a non-sighted person would.


STEP - Developing accessible software

Tips for developing an accessible application, so that problems are prevented rather than caught by testing:
EXERCISE
[1 hour] Talk to the developers in your team about the techniques that they use to write accessible applications. Share what you've learned and investigate together whether there are any changes that could be made to development to help improve accessibility.


STEP - What is usability?

An introduction to usability and some short experiences from testers:
EXERCISE
[1 hour] To cement your understanding of the basic principles of usability testing, explain the concept to another tester.


STEP - Usability testing with real users

Discover different methods for tackling usability testing with real users of your software, from structured user sessions to guerrilla usability testing:
EXERCISE
[4 hours] Talk to a user experience specialist about how customer sessions are run in your organisation. Attend some customer sessions and observe. Debrief with the user experience specialist about the process for seeking customer input and the feedback that was provided. Reflect on these experiences and consider how well your organisation's approach aligns with the theory and experiences you've read about.


STEP - Usability testing for testers

Read about some techniques for performing preliminary usability testing within your development process:
EXERCISE
[2 hours] Select a resource that you'd like to try. Conduct a 60 minute usability test session of the application that you work on. Note any problems that you encounter. Discuss your approach and the validity of your results with your user experience specialist. Reflect on where opportunities exist for you to improve your development process to create a more usable application.


STEP - Agile accessibility & usability

How can we effectively embed accessibility and usability in our agile development process?
EXERCISE
[1 hour] Discuss with your team how your Definition of Done could be altered to reflect accessibility and usability requirements. Determine the importance of these attributes as a group and make a shared commitment to considering them, to whatever degree you are comfortable, in future work.


STEP - Mobile accessibility & usability

There are different tools and user expectations for accessibility and usability on mobile:
EXERCISE
[1 hour] Switch on some of the accessibility features of your mobile handset, e.g. inverted colours, a screen reader such as VoiceOver or TalkBack, larger font sizes, etc. Complete a simple tour of the features in one of your mobile applications. Note any problems you encounter in using the application with these accessibility options enabled. If possible, compare your experience on an Apple handset vs. an Android handset.


STEP - Introduction to user experience

Testing supports a positive user experience with your application. Read through the basics of user experience and learn how it is distinguished from related terms:
EXERCISE
[30 mins] Chat to your designers about their views on user experience. Discover what they want your customers to say when they use your products. Discuss how testers can support the UX vision.

Friday, 21 August 2015

Mobile Testing Pathway

This pathway is a tool to help guide your self-development in mobile testing. It includes a variety of steps that you may approach linearly or by hopping about to those that interest you most.

Each step includes:
  • links to a few resources as a starting point, but you are likely to need to do your own additional research as you explore each topic.
  • a suggested exercise or two, which focus on reflection, practical application and discussion, as a tool to connect the resources with your reality.

Take your time. Dig deep into areas that interest you. Apply what you learn as you go.


STEP - Mobile testing ideas using checklists & mnemonics

When testing mobile applications, we need to switch our thinking. When planning our testing, instead of generating functional ideas about how the software works, we should think of mobile-specific test ideas. This change in thinking will exercise the unique aspects of mobile while also covering the functionality of the application through a different lens.

There are a number of practical resources to help generate test ideas for mobile applications:

EXERCISE
[2 hours] Select a mobile application. Spend 30 minutes using these resources to come up with a set of test ideas for the application you have chosen. If you're unfamiliar with the application you may need to explore its features in parallel to generating test ideas, but try to tour rather than test. Once you have a set of ideas, spend 30 minutes testing the application. Prioritise execution of the mobile test techniques that you have never tried before. Note any questions you have or problems that you discover. After you complete your testing, spend 30 minutes debriefing with a mobile tester. Discuss your plan, your testing and what you discovered; ask questions and develop your understanding.

This is a good exercise to repeat against different mobile applications as you progress through this pathway. Practice makes perfect.


STEP - Mobile test approach

At a level above test ideas, the mobile tester has to develop an approach to the problem of testing an application. Learn more about how other people test, and get a broader understanding of the challenges and considerations of testing on mobile:

EXERCISE
[1 hour] Reflect on the articles you have read and research any areas that you're interested in learning more about. Consider whether these new resources reveal any gaps in the checklists you used to generate test ideas. Look back at your previous mobile testing: how would you re-prioritise your activities now that you have a slightly broader understanding of the approach to mobile?


STEP - Device fragmentation

Device fragmentation is a common challenge in mobile testing. The number of physical handsets and the diversity of mobile operating systems mean that a large number of different devices could be running your mobile application.

Here are some starter posts that illustrate device fragmentation and explain the problems it creates:

EXERCISE
[1 hour] Find a copy of your latest organisation analytics pack to understand device fragmentation in your customer base. Look at the differences in device use between different mobile applications offered by your organisation, and between your responsive public website and mobile applications. Which devices do you think you should test against in each area? Once you have done some research, take your lists to someone in your mobile team. Discuss how analytics drive decisions about device fragmentation.
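
If you can export the raw analytics data, a few lines of scripting make the fragmentation picture concrete. This sketch assumes an invented devices.csv export with one row per session and a device column; your analytics pack will differ.

```python
# Hypothetical sketch: rank devices by share of sessions from an analytics
# export. Assumes devices.csv with one row per session and a 'device' column.
import csv
from collections import Counter

with open("devices.csv", newline="") as f:
    counts = Counter(row["device"] for row in csv.DictReader(f))

total = sum(counts.values())
for device, sessions in counts.most_common(10):
    print(f"{device:<20} {sessions / total:6.1%}")
```

A ranked list like this is a useful prop for the discussion about which devices justify a place in your test coverage.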


STEP - Emulators

One of the ways we can tackle device fragmentation is through the use of emulators. There are pros and cons to using emulators instead of real devices. Get some insight into the arguments and use a common emulator:

EXERCISE
[1 hour] Access a responsive public website using the Chrome dev tools device mode. Investigate the features of the emulator, see which aspects of mobile it represents effectively, and identify cases where the emulation fails to reflect reality. As you investigate the site across different devices and resolutions, note any problems that you discover. In addition to test coverage of the site, try to explore all the options of the emulator toolbar.
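
Device mode itself is driven through the DevTools UI, but the same emulation engine can be scripted. As a sketch, and assuming Selenium with ChromeDriver is available, Chrome's mobileEmulation option starts the browser pre-configured as a named device:

```python
# Sketch: launch Chrome with built-in device emulation via Selenium.
# mobileEmulation uses the same engine as the DevTools device mode.
from selenium import webdriver

options = webdriver.ChromeOptions()
options.add_experimental_option("mobileEmulation", {"deviceName": "Nexus 5"})

driver = webdriver.Chrome(options=options)
driver.get("https://example.com")  # placeholder: your responsive public site
print(driver.execute_script("return window.innerWidth;"))  # emulated viewport width
driver.quit()
```

Like device mode, this emulates the viewport, pixel ratio and user agent only; touch hardware, rendering performance and real network conditions are among the areas where emulation fails to reflect reality.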


STEP - Mobile automation strategy

Another response to device fragmentation and the rapid pace of mobile development is the use of automation. Here are some varying opinions on what types of automation are useful and why you might use them:

EXERCISES
[2 hours] Spend some time investigating the mobile automation in place for the iOS and Android versions of an existing mobile application in your organisation. Read any associated documentation for these suites, or overviews of how they work. To get closer to the code and see it in action, ask someone to give you a demonstration.
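
If no one is free for a demonstration, a minimal Appium session shows the shape of most mobile automation. This is a sketch against the Appium Python client 2.x; the server URL, device id, app path and element id are all placeholders, and newer client versions replace desired capabilities with an options object.

```python
# Hypothetical sketch: a minimal Appium session against an Android build.
# Assumes `pip install Appium-Python-Client` (2.x) and an Appium server on :4723.
from appium import webdriver
from appium.webdriver.common.appiumby import AppiumBy

caps = {
    "platformName": "Android",
    "automationName": "UiAutomator2",
    "deviceName": "emulator-5554",   # placeholder device id
    "app": "/path/to/your-app.apk",  # placeholder build under test
}

driver = webdriver.Remote("http://localhost:4723/wd/hub", desired_capabilities=caps)

# Accessibility ids double as stable automation hooks; this one is invented.
driver.find_element(AppiumBy.ACCESSIBILITY_ID, "login_button").click()
driver.quit()
```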

[1 hour] The "device wall of awesome" is an ambitious goal for mobile continuous integration. Investigate which continuous integration practices are currently in place for your mobile applications. Research some other options for continuous integration in mobile. Talk to your mobile team and share your ideas.


STEP - Testing mobile design

There are significant differences between web and mobile user interfaces, with design for mobile devices needing to consider smaller screen sizes, the environments in which devices are used, etc.

Here are some starter posts for mobile design and usability:

EXERCISE
[2 hours] For each of the devices that you test against, apply a thumb zone heat map to one screen of one of your mobile applications. Look at the placement of your user interface elements within the map. How has the design considered the placement of widgets to accommodate use of the application on different devices? Take your maps to a mobile designer, talk about what you've discovered and learn about the other design considerations for mobile.


STEP - Mobile First

We talk about becoming "mobile first", but what does that actually mean and how will we implement it?

EXERCISE
[1 hour] Reflect on what you've read and consider how "mobile first" might alter your existing development approach. Talk to someone who sets strategy in your organisation, share your thoughts and discover their opinions about what "mobile first" will mean for you.


STEP - Wearables

The release of the Apple Watch has accelerated the growth of wearable technology. In this emerging field there are interesting opinions about the influence of wearables:

EXERCISE
[2 hours] Find out whether your organisation has an application for Apple Watch, what features are included, how many people use it, who developed it, and how it has been tested. Try and find somebody with your application installed on their very own Apple Watch!


STEP - Screen capture, mirroring & recording

Tools that capture the screen can be useful, especially in situations where a bug is observed but cannot be reproduced. There are a number of tools available:

EXERCISE
[2 hours] Try installing some of these tools. Explore their functionality. Determine whether there are any that you prefer. Talk to someone in your mobile team about which tools they prefer and why.
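
For Android, one option that needs nothing beyond the platform tools is adb's built-in screenrecord command. A small wrapper like this sketch (file paths are placeholders) records a repro attempt and pulls the video off the device:

```python
# Sketch: record the device screen during a repro attempt using adb.
# Assumes Android platform tools on PATH and a single connected device.
import subprocess

remote_path = "/sdcard/repro.mp4"  # placeholder path on the device

# screenrecord captures until its time limit is reached (30 seconds here).
subprocess.run(
    ["adb", "shell", "screenrecord", "--time-limit", "30", remote_path],
    check=True,
)

# Copy the recording locally, ready to attach to a bug report.
subprocess.run(["adb", "pull", remote_path, "repro.mp4"], check=True)
```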


STEP - A/B testing on mobile

A/B testing is a method of testing in production where we present alternate builds to our users, then use data about how people react to each build to make informed decisions about which changes to make or keep:

EXERCISE
[2 hours] Talk to someone in your organisation to find out more about how you've used A/B testing within your applications. Discover application flags and how you interpret user analytics. Talk to a designer to learn about the situations they might recommend as appropriate for A/B testing in mobile. Talk to someone in your mobile team about how they've used A/B testing within your mobile applications in the past.
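
Under the hood, an A/B flag is just a stable assignment of each user to a variant. As a minimal sketch (the experiment name and user id are invented), hash-based bucketing gives each user the same variant on every launch without storing any state:

```python
# Sketch: deterministic A/B bucketing. Hashing the user id together with the
# experiment name gives a stable, roughly uniform variant assignment.
import hashlib

def variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash into [0, 1]
    return "A" if bucket < split else "B"

# The same user always lands in the same bucket for a given experiment.
print(variant("user-1234", "new-checkout-flow"))  # invented ids
```

Keying the hash on the experiment name means the same user can fall into different buckets for different experiments, which keeps experiments independent of one another.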


STEP - Performance & Stress

As with your desktop applications, performance on mobile is important.

EXERCISE
[1 hour] Talk to someone in your mobile team about how they deal with performance and stress testing now. Based on what you've read, share any ideas you have on how the process might be improved.
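
Much of perceived mobile performance comes down to the latency of the backend calls an app makes. As a rough starting point (the endpoint and sample size are placeholders, and this says nothing about on-device rendering), a sketch like this measures response-time percentiles for an API your app depends on:

```python
# Sketch: rough latency percentiles for a backend endpoint a mobile app calls.
# Assumes `pip install requests`; the URL and sample size are placeholders.
import time
import statistics
import requests

URL = "https://example.com/api/health"  # placeholder endpoint
samples = []

for _ in range(50):
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    samples.append((time.perf_counter() - start) * 1000)  # milliseconds

samples.sort()
print(f"median: {statistics.median(samples):.0f} ms")
print(f"p95:    {samples[int(len(samples) * 0.95)]:.0f} ms")
```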


STEP - Following the future

Mobile testing is a vibrant field with new articles and trends emerging regularly. Here are some suggestions of people to follow on Twitter who are active in mobile testing:


Tuesday, 18 August 2015

Elastic Role Boundaries

How do you explain roles in an agile team?

In this short presentation, Chris Priest and Katrina Clokie explain a model of Elastic Role Boundaries to highlight the difference between flexible ownership of small activities and the enduring commitment of a role.

This presentation stemmed from collaborative discussion at the fifth annual Kiwi Workshop for Software Testing (KWST5) with James Bach, Oliver Erlewein, Richard Robinson, Aaron Hodder, Sarah Burgess, Andy Harwood, Adam Howard, Mark Boyt, Mike Talks, Joshua Raine, Scott Griffiths, John Lockhart, Sean Cresswell, Rachel Carson, Till Neunast, James Hailstone, and David Robinson.



Friday, 7 August 2015

How do you become a great tester?

At the fifth annual Kiwi Workshop for Software Testing (KWST5) earlier this week, James Bach asked a seemingly simple question during one of the open season discussions, and I have been thinking about it ever since.

"How do you know you're a good tester?"

Since the conference I've had a number of conversations about this, with testers and non-testers, in person and on Twitter. During these conversations I've found it much easier to think of ways to challenge the responses provided by others than to think of an answer to the original question for myself.

Today I asked a colleague in management how they knew that the testers within the team they managed were good testers. We spent several minutes discussing the question in person then, later in the morning, they sent me an instant message that said "... basically a good tester knows they are not a great tester." 

This comment shunted my thinking in a different direction. I agree that most of the people who I view as good testers have a degree of professional uncertainty about their ability. But I don't think that this in isolation makes them a good tester; rather, it's the actions that are driven by this belief. And this led me to my answer.

"How do you know you're a good tester?"

I know I'm a good tester because I want to become a great tester. In order to do this, I actively seek feedback on my contribution from my team members, stakeholders and testing peers. I question my testing and look for opportunities to improve my approach. I imagine how I could achieve better outcomes by improving my soft skills. I constantly look to learn and broaden my testing horizons.

What would you add?

Wednesday, 5 August 2015

Formality in open season at a peer conference

I attended the fifth annual Kiwi Workshop for Software Testing (KWST5) this week. Overall, I really enjoyed spending two days discussing the role of testing with a group of passionate people.

I took a lot from the content. But it's not what we discussed that I want to examine here; instead, it's how we discussed it. As I was sharing the events of the final day with my husband, he made a comment that troubled me. I took to Twitter to gauge how other people felt about his statement:


Since this tweet created a fair amount of discussion, I thought I would take the time to gather my thoughts and those from others into a coherent narrative, and share some of the ways in which I would approach the same situation differently next time.

Who was there?

I found the dynamic at this year's conference different to previous years. It felt like I was going to spend two days with my friends. Among the attendees, there were only two people who I had never met before. Most of the people in the room were frequent attendees at KWST, or frequent attendees at WeTest, or people who help create or contribute to Testing Trapeze, or current colleagues, or former colleagues, or simply friends who I regularly chat to informally outside of work.

This meant that it was the first year that I wasn't nervous about the environment. It was also the first year that I didn't feel nervous about delivering a talk. Though I was a little anxious about the content of my experience report, overall I would say that I felt relatively relaxed.

So, who exactly was in the room? James Bach, Oliver Erlewein, Richard Robinson, Aaron Hodder, Sarah Burgess, Andy Harwood, Adam Howard, Mark Boyt, Chris Priest, Mike Talks, Joshua Raine, Scott Griffiths, John Lockhart, Sean Cresswell, Rachel Carson, Till Neunast, James Hailstone, David Robinson and Katrina Clokie.

What happened?

I was the first speaker on the second day of the conference. My experience report was the first set in an agile context. The topic of the role of testing in agile had been touched on through the first day, but not explored.

I knew that there was a lot of enthusiasm for diving in to a real discussion, and was expecting a robust open season. In fact, the passion for the topic far exceeded my expectations. The exchanges that my husband questioned all happened in one particular period of the open season of my experience report.

Oliver proposed a model to represent roles in agile teams that kicked off a period of intense debate. During this time the only cards in use by participants were red, the colour that indicates the person has something urgent to say that cannot wait. I believe this spell of red cards exceeded 30 minutes, based on a comment from Mike who, when called as the subsequent yellow card, said "hooray, I've been waiting almost 40 minutes".

During this period of red cards, there were several occasions where multiple people who were waiting to speak were actively waving red cards. There were people interrupting one another. There were people speaking out of turn, without waiting to be called upon.

There were specific exchanges within this particular period that my husband questioned. I'm going to share four examples that relate specifically to my own behaviour.

The first happened relatively early in the red card period. Aaron made a statement that I started to respond to. When he attempted to interrupt my response, and he was not the first to interrupt me, I replied by raising my voice and telling him to shut up, so that I could finish what I was saying.

Perhaps halfway through the red card period, I had stopped responding to the people who were raising the red cards and the conversation was flowing among the participants themselves. Rich asked, in his role as facilitator, whether I agreed with what people were saying. I replied that no, I thought they were full of sh*t.

Near the end of the exchange I was asked whether I believed, on reflection, that I had behaved as a moron during the first experience I shared in my experience report. As a caveat, my interpretation of this comment has been refuted in subsequent Twitter discussions.

Finally, there was a case where three people were speaking at once and none had used a card. I interjected with a comment that "we have cards for a reason" to shut down their conversation.

Was it a problem?

At the time, I didn't think there was a problem. I shared James' view that "it was an intense exchange done in a good and healthy spirit". I found that red card period of open season incredibly challenging, but I never felt unsafe.

On reflection though, I do think there was a problem.

Why?

My behaviour during open season contributed to an atmosphere where people were talking over one another and behaving informally. The lack of discipline in the heat of these exchanges meant that several people in the room withdrew from the discussion.

This goes directly against the spirit of a peer conference, which is designed for everyone to be included equally. I now feel that I was part of an exchange that excluded those who were unable or unwilling to voice an opinion in this atmosphere.

What would I do differently?

In future, I think that I need to remember to respect the formality of a peer conference. I felt that I was among friends and, because of this, I brought an informal attitude to my exchanges.

I believe this reflection is shared by some others who were present. On Twitter, Aaron said "I shouldn't interact with people I know during formal exchanges differently, and open season is a formal exchange". Sean said "Maybe we need to be more conscious of those relationship biases we bring to peer conferences? I'm guilty of it".

In future, if I felt overwhelmed by interruptions, I would stop and ask for support from the facilitator. On reflection, the very first time I felt compelled to raise my voice and start participating in the culture of talking across people would have been a good opportunity to pause and reset expectations for the discussion.

What do other people think?

What do you think? How formal are your peer conferences? How formal should they be?