Tuesday, 8 September 2015

API, Web Services & Microservices Testing Pathway

This pathway is a tool to help guide your self-development in API, web services and microservices testing. It includes a variety of steps that you may approach linearly or by hopping about to those that interest you most.

Each step includes:
  • links to a few resources as a starting point, but you are likely to need to do your own additional research as you explore each topic.
  • a suggested exercise or two, which focus on reflection, practical application and discussion, as a tool to connect the resources with your reality.

Take your time. Dig deep into areas that interest you. Apply what you learn as you go.


STEP - Distinguishing APIs and web services

An API (Application Programming Interface) is the means by which third parties can write code that interfaces with other code. A Web Service is a type of API that:
  • is used to exchange data between applications,
  • uses a standard defined by W3C, 
  • has an interface that is described in a machine-processable format, usually specified as a WSDL (Web Services Description Language), and 
  • almost always operates over HTTP.
Example web service technologies include SOAP, XML-RPC, and REST (strictly an architectural style rather than a protocol). An example of an API that is not a web service is the Linux Kernel API, which is written in C for use on a local machine.
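
To make the distinction concrete, here is a minimal Python sketch: the first call consumes a web service over HTTP (the URL is a placeholder, not a real service), while the second consumes a local API in-process, with no network involved:

    import os
    import requests

    # Web service: data is exchanged between applications over HTTP.
    # The endpoint below is hypothetical, for illustration only.
    response = requests.get("https://api.example.com/customers/42")
    print(response.status_code, response.json())

    # Local API: code calling other code on the same machine, no HTTP.
    # Python's os module wraps operating system APIs.
    print(os.getpid())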

References: API vs Web Service, Difference between web API and web service, Difference between API and web service

EXERCISE
[1 hour] Once you feel that you understand the difference between APIs and web services, talk to a developer. Ask what APIs and web services exist within the application that you're working on. Work with your developer to draw a simple architecture diagram that shows whereabouts in your application these interfaces are located. Be sure you can distinguish which are APIs and which are web services, and that you know which protocols each interface uses.


STEP - Understanding SOAP and REST

Learn more about two common implementations of web services and the differences between them:
EXERCISE
[1 hour] Find out whether you have any services with both a SOAP and a REST implementation. This means that the same business operation can be served in two different formats through two different APIs. Talk to a developer or technical lead and ask them to demonstrate a request in each implementation. Discuss the differences between these two interfaces and some of the reasons that both exist.
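
To picture the difference before that demonstration, here is a rough Python sketch of the same business operation served both ways; the service URLs, namespaces and fields are invented for illustration:

    import requests

    # REST: the resource is addressed by its URL and the HTTP verb carries the intent.
    rest = requests.get("https://api.example.com/customers/42")

    # SOAP: every call is an HTTP POST of an XML envelope to a single endpoint;
    # the operation name lives inside the message body (SOAP 1.1 shown).
    envelope = """<?xml version="1.0"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
      <soap:Body>
        <GetCustomer xmlns="http://example.com/customers">
          <CustomerId>42</CustomerId>
        </GetCustomer>
      </soap:Body>
    </soap:Envelope>"""
    soap = requests.post(
        "https://api.example.com/CustomerService",
        data=envelope,
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "http://example.com/customers/GetCustomer"},
    )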


STEP - API and web service testing

Discover the tools available and some common mnemonics to approach web service testing:
EXERCISES
[3 hours] Repeat the 53rd Weekend Testing Europe session by running some comparative tests on the SongKick API and associated website. SongKick is a service that matches users to live music events taking place near them. Use your web browser to make API requests just as you would request a website URL. Alongside the links from Amy Phillips and Alan Richardson above, you can refer to the SongKick API and the full transcript of the Weekend Testing Europe session for guidance. Experiment with locating different test data and using different API requests until you understand how the API functions. Please abide by all terms of use and do not experiment with load or security testing on this API.
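
If you would rather script these requests than type them into the browser's address bar, a minimal sketch with Python's requests library might look like the following; check the endpoint and parameter names against the current SongKick API documentation before relying on them:

    import requests

    API_KEY = "your_songkick_api_key"  # issued when you register as a developer

    # Search for a location, as documented in the SongKick API; verify the
    # path and parameters against the current documentation.
    response = requests.get(
        "https://api.songkick.com/api/3.0/search/locations.json",
        params={"query": "London", "apikey": API_KEY},
    )
    print(response.status_code)
    print(response.json())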

[3 hours] Install Postman and use it to test the TradeMe Sandbox API. TradeMe is the leading online marketplace and classified advertising platform in New Zealand. Public, unregistered access to their developer API is restricted to catalogue methods. Experiment with retrieving information and compare your results against the TradeMe Sandbox site. Please abide by all terms of use and do not experiment with load or security testing on this API.

[3 hours] Explore the Predic8 online REST web services demo using the Advanced REST Client Chrome extension or Paw, the ultimate REST client for Mac. You will need to install your chosen software and read the supporting documentation for the demonstration REST service. Explore the different functions provided. In addition to retrieving information, you should be able to modify data using POST, PUT and DELETE requests. Please abide by all terms of use and do not experiment with load or security testing on this API.
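
Whichever client you choose, the underlying requests are the same. As a rough guide to what each verb does, here is a Python sketch against a placeholder base URL; take the real resource paths from the demo service's documentation:

    import requests

    BASE = "https://api.example.com"  # substitute the demo service's base URL

    # POST creates a resource; a well-behaved API returns 201 and its location.
    created = requests.post(BASE + "/products", json={"name": "Tea", "price": 2.50})
    print(created.status_code, created.headers.get("Location"))

    # PUT replaces an existing resource, DELETE removes it.
    print(requests.put(BASE + "/products/1", json={"name": "Tea", "price": 2.75}).status_code)
    print(requests.delete(BASE + "/products/1").status_code)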

[3 hours] Select an API or web service within your application. Seek out the reference material to discover what requests are allowed. Apply what you've learned through testing the third party APIs to compare the behaviour of your internal interfaces and application. Use the tools you've tried before, or select a different tool to explore. Afterwards, discuss your testing with a developer or another tester within your team, share what you found and ask how this interface is tested now.


STEP - Technical implementation of REST API

Get a deeper understanding of REST APIs by learning how they are designed and implemented:
EXERCISE
[3 hours] Create a set of requests using a REST API within your organisation. Investigate how resources are modelled, e.g. the resource URL and HTTP verbs (GET, PUT, DELETE, POST). Talk to a developer or technical lead to check your understanding and ask questions about your REST implementation.
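
As a reference point for that investigation, here is how a common REST convention maps HTTP verbs onto resource URLs, sketched in Python with an invented /orders resource:

    #   GET    /orders      list all orders        POST   /orders      create an order
    #   GET    /orders/7    retrieve order 7       PUT    /orders/7    replace order 7
    #                                              DELETE /orders/7    delete order 7
    import requests

    base = "https://api.example.com"
    created = requests.post(base + "/orders", json={"item": "book"})
    order = created.json()  # assumes the API echoes back the new order with an id
    print(requests.get("%s/orders/%s" % (base, order["id"])).json())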


STEP - Security testing APIs

Explore the basics of security testing APIs:

EXERCISE
[8 hours] Repeat the 56th Weekend Testing Europe session by investigating the deliberately insecure API for the Supercar Showdown website, which forms the basis of Troy Hunt’s Pluralsight course Hack Your API First. Alongside the write-up from Dan Billing above, you can refer to the Hack Your API First course materials and the full transcript of the Weekend Testing Europe session for guidance. Beyond the course materials, conduct your own experiments with the different facets of API security.

[3 hours] Apply what you've learned to assess the security of one of your APIs in a development or test environment, not production. Document any vulnerabilities that you discover to discuss with your development team. Talk to an architect about additional protection that is in place in your production environments to prevent attacks.
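
Two of the simplest security checks can be scripted by hand: does the API reject unauthenticated requests, and does it stop one user reading another user's data? A minimal sketch, with an invented test environment URL and token:

    import requests

    BASE = "https://test.example.com/api"  # a test environment, never production

    # Authentication: does the API reject requests that carry no credentials?
    anonymous = requests.get(BASE + "/users/42/orders")
    assert anonymous.status_code in (401, 403), "anonymous access was not rejected"

    # Authorisation: with user A's token, can we read user B's data? If so,
    # that's an insecure direct object reference.
    headers = {"Authorization": "Bearer user_a_token"}  # placeholder token
    other = requests.get(BASE + "/users/43/orders", headers=headers)
    assert other.status_code in (401, 403, 404), "possible insecure direct object reference"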


STEP - Service virtualization

Discover service virtualization and how it can be used in testing:
EXERCISE
[2 hours] Determine whether any of your test suites use service virtualization. Draw a detailed architecture diagram that reflects your understanding of where services are virtualized and how this has been implemented. Check your understanding with a developer or another tester and make sure you understand the reasons that the tests use service virtualization.
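
If you have never seen a virtualized service, the simplest form is an HTTP stub that returns canned responses, so that tests don't depend on the real dependency being available. Dedicated tools such as WireMock or mountebank add record and replay, fault injection and management interfaces, but a minimal Python 3 sketch of the idea looks like this:

    import json
    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class StubHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Every GET returns the same canned payload, standing in for a
            # real downstream service during a test run.
            body = json.dumps({"id": 42, "status": "ACTIVE"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    server = HTTPServer(("localhost", 8099), StubHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    # Point the application under test at http://localhost:8099 instead of the
    # real service. The port number is arbitrary.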


STEP - Introduction to microservices

These articles give an introduction to microservices and share practical experiences from organisations who use them:
EXERCISE
[1 hour] Talk to a developer or technical lead to check your understanding of microservices, then discuss the benefits and drawbacks of switching to a microservices architecture.


STEP - Microservices testing

Discover how to test in a microservices world:
EXERCISE
[1 hour] Demonstrate your understanding of microservices testing by describing to another tester or test lead, in your own words, the types of testing that are possible in a microservices architecture.
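
One of those types, checking the agreement between a consumer and a provider, can be sketched in a few lines. Tools such as Pact formalise this as consumer-driven contract testing; the bare-bones version below, with an invented profile service and contract, shows the shape of the idea:

    import requests

    # The consumer declares the fields it actually relies on, and the test
    # fails fast if the provider's response no longer supplies them.
    EXPECTED_FIELDS = {"id", "name", "email"}  # invented contract

    response = requests.get("https://profile-service.example.com/profiles/42")
    assert response.status_code == 200
    missing = EXPECTED_FIELDS - set(response.json())
    assert not missing, "provider broke the contract, missing: %s" % sorted(missing)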


STEP - A broader look at APIs

A brief introduction to API management and APIs within IoT, hypermedia, machine learning, and more:
EXERCISE
[1 hour] Talk to a developer or technical lead about the future direction for your API implementation. Discuss how your organisation might be impacted by these ideas, or other innovations.

Wednesday, 26 August 2015

Accessibility & Usability Testing Pathway

This pathway is a tool to help guide your self-development in accessibility and usability testing. It includes a variety of steps that you may approach linearly or by hopping about to those that interest you most.

Each step includes:
  • links to a few resources as a starting point, but you are likely to need to do your own additional research as you explore each topic.
  • a suggested exercise or two, which focus on reflection, practical application and discussion, as a tool to connect the resources with your reality.

Take your time. Dig deep into areas that interest you. Apply what you learn as you go.


STEP - Understand the impact of accessibility

Learn what accessibility testing is and discover why it is needed:
EXERCISE
[1 hour] Developing an accessible product requires commitment from the entire development team, not just the tester. To share your newfound appreciation of the impact of accessibility, challenge your team to spend 30 minutes without a mouse. As well as their day-to-day work, ask them to check out the applications you develop. Afterwards, reflect as a team on the difficulties you encountered and the number of changes required to make your applications more accessible.


STEP - Accessibility standards

These formal documents are relatively difficult to read exhaustively, but it's worth browsing through the standards to get an understanding of what they contain so they can be used as a reference:
EXERCISE
[30 mins] Can you locate information in the standards to answer the following questions? (A worked sketch of the calculation behind question 1 follows the list.)
  1. What is the minimum contrast for large-scale text?
  2. What are the restrictions for time-limited content, e.g. a form submission timeout?
  3. If an error is automatically detected, what are the accessibility considerations for displaying error messages?
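
As a pointer for question 1: WCAG 2.0 defines contrast ratio as (L1 + 0.05) / (L2 + 0.05), where L1 and L2 are the relative luminances of the lighter and darker colours. Here is a small Python sketch of that calculation, which you can use to check colour pairs against the thresholds you find in the standards:

    def _channel(c8):
        # Linearise an 8-bit sRGB channel, per the WCAG 2.0 definition.
        c = c8 / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def luminance(rgb):
        r, g, b = (_channel(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast(colour1, colour2):
        lighter, darker = sorted((luminance(colour1), luminance(colour2)), reverse=True)
        return (lighter + 0.05) / (darker + 0.05)

    print(contrast((0, 0, 0), (255, 255, 255)))        # 21.0, the maximum possible
    print(contrast((119, 119, 119), (255, 255, 255)))  # ~4.5, grey text on white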


STEP - Accessibility testing heuristics and strategy

Investigate how we test whether applications are compliant. These practical resources include heuristics, mnemonics, test strategy, demonstrations, etc.
EXERCISE
[2 hours] Select a set of heuristics or test ideas that appeal to you. Talk to your business analyst or business lead about which aspects of accessibility they feel are most important. Conduct a 60 minute tool-free accessibility tour of the application that you work on, prioritising the testing that your business representative indicated. Note any problems that you encounter. Share the results of your tour with the same person and discuss how improvements to accessibility might be incorporated into your development process.


STEP - Accessibility testing tools

Learn about the tools that are available to help test accessibility in applications:
EXERCISES
[2 hours] There are a lot of tools to help assess the accessibility properties of a site, many of which integrate with the browser. Evaluate the tools on offer and discover which you prefer. Compare the results of the automated assessment against the results of your own accessibility tour. What type of problems did the tool discover that you missed? What type of problems did you discover that the tool missed?

[2 hours] Download and trial the JAWS screen reader. See which areas of your applications perform well, and which are incomprehensible. Discover the main features of JAWS and learn how to navigate your sites as a non-sighted person would.


STEP - Developing accessible software

Tips to develop an accessible application, to prevent problems before they are caught by testing:
EXERCISE
[1 hour] Talk to the developers in your team about the techniques that they use to write accessible applications. Share what you've learned and investigate together whether there are any changes that could be made to development to help improve accessibility.


STEP - What is usability?

An introduction to usability and some short experiences from testers:
EXERCISE
[1 hour] To cement your understanding of the basic principles of usability testing, explain the concept to another tester.


STEP - Usability testing with real users

Discover different methods for tackling usability testing with real users of your software, from structured user sessions to guerrilla usability testing:
EXERCISE
[4 hours] Talk to a user experience specialist about how customer sessions are run in your organisation. Attend some customer sessions and observe. Debrief with the user experience specialist about the process for seeking customer input and the feedback that was provided. Reflect on these experiences and try to align your approach to the theory and experiences you've read.


STEP - Usability testing for testers

Read about some techniques for performing preliminary usability testing within your development process:
EXERCISE
[2 hours] Select a resource that you'd like to try. Conduct a 60 minute usability test session of the application that you work on. Note any problems that you encounter. Discuss your approach and the validity of your results with your user experience specialist. Reflect on where opportunities exist for you to improve your development process to create a more usable application.


STEP - Agile accessibility & usability

How can we effectively embed accessibility and usability in our agile development process?
EXERCISE
[1 hour] Discuss with your team how your Definition of Done could be altered to reflect accessibility and usability requirements. Determine the importance of these attributes as a group and make a shared commitment to considering them, to whatever degree you are comfortable, in future work.


STEP - Mobile accessibility & usability

There are different tools and user expectations for accessibility and usability on mobile:
EXERCISE
[1 hour] Switch on some of the accessibility features of your mobile handset, e.g. inverted colours, voice over, larger font sizes, etc. Complete a simple tour of the features in one of your mobile applications. Note any problems you encounter in using the application with these accessibility options enabled. If possible, compare your experience on an Apple handset vs. an Android handset.


STEP - Introduction to user experience

Testing supports a positive user experience with your application. Read through the basics of user experience and learn about how user experience is distinguished from other terms:
EXERCISE
[30 mins] Chat to your designers about their views on user experience. Discover what they want your customers to say when they use your products. Discuss how testers can support the UX vision.

Friday, 21 August 2015

Mobile Testing Pathway

This pathway is a tool to help guide your self-development in mobile testing. It includes a variety of steps that you may approach linearly or by hopping about to those that interest you most.

Each step includes:
  • links to a few resources as a starting point, but you are likely to need to do your own additional research as you explore each topic.
  • a suggested exercise or two, which focus on reflection, practical application and discussion, as a tool to connect the resources with your reality.

Take your time. Dig deep into areas that interest you. Apply what you learn as you go.


STEP - Mobile testing ideas using checklists & mnemonics

When testing mobile applications, we need to switch our thinking. When planning our testing, instead of generating functional ideas about how the software works, we should think of mobile-specific test ideas. This change in thinking helps us test the unique aspects of mobile while also covering the functionality of the application through a different lens.

There are a number of practical resources to help generate test ideas for mobile applications:

EXERCISE
[2 hours] Select a mobile application. Spend 30 minutes using these resources to come up with a set of test ideas for the application you have chosen. If you're unfamiliar with the application you may need to explore its features in parallel to generating test ideas, but try to tour rather than test. Once you have a set of ideas, spend 30 minutes testing the application. Prioritise execution of the mobile test techniques that you have never tried before. Note any questions you have or problems that you discover. After you complete your testing, spend 30 minutes debriefing with a mobile tester. Discuss your plan, your testing and what you discovered; ask questions and develop your understanding.

This is a good exercise to repeat against different mobile applications as you progress through this pathway. Practice makes perfect.


STEP - Mobile test approach

At a level above test ideas, the mobile tester has to develop an approach to the problem of testing an application. Learn more about how other people test, and get a broader understanding of the challenges and considerations of testing on mobile:

EXERCISE
[1 hour] Reflect on the articles you have read and research any areas that you're interested in learning more about. Consider whether these new resources expose any gaps in the checklists and mnemonics you used to generate test ideas. Look back at your previous mobile testing: how would you re-prioritise your activities now that you have a slightly broader understanding of the approach to mobile?


STEP - Device fragmentation

Device fragmentation is a common challenge in mobile testing. The number of physical handsets and the diversity of mobile operating systems mean that there are a large number of different devices that could be running your mobile application.

Here are some starter posts that illustrate device fragmentation and explain the problems it creates:

EXERCISE
[1 hour] Find a copy of your organisation's latest analytics pack to understand device fragmentation in your customer base. Look at the differences in device use between different mobile applications offered by your organisation, and between your responsive public website and mobile applications. Which devices do you think you should test against in each area? Once you have done some research, take your lists to someone in your mobile team. Discuss how analytics drive decisions about device fragmentation.


STEP - Emulators

One of the ways we can tackle device fragmentation is through the use of emulators. There are pros and cons to using emulators instead of real devices. Get some insight into the arguments and use a common emulator:

EXERCISE
[1 hour] Access a responsive public website using the Chrome dev tools device mode. Investigate the features of the emulator, see which aspects of mobile it represents effectively, and identify cases where the emulation fails to reflect reality. As you investigate the site across different devices and resolutions, note any problems that you discover. In addition to test coverage of the site, try to explore all the options of the emulator tool bar.


STEP - Mobile automation strategy

Another response to device fragmentation and the rapid pace of mobile development is the use of automation. Here are some varying opinions on what types of automation are useful and why you might use them:

EXERCISES
[2 hours] Spend some time investigating the mobile automation in place for the iOS and Android versions of an existing mobile application in your organisation. Read any associated documentation for these suites, or overviews of how they work. To get closer to the code and see it in action, ask someone to give you a demonstration.
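
If you have never seen mobile automation code, a minimal sketch using Appium's Python client gives a sense of its shape; the capability values and element identifier below are placeholders, not taken from any real suite:

    from appium import webdriver

    # Desired capabilities tell the Appium server which device and application
    # to drive; these values are placeholders for illustration.
    capabilities = {
        "platformName": "Android",
        "deviceName": "Android Emulator",
        "app": "/path/to/your-application.apk",
    }
    driver = webdriver.Remote("http://localhost:4723/wd/hub", capabilities)
    driver.find_element_by_accessibility_id("Login").click()  # placeholder element
    driver.quit()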

[1 hour] The "device wall of awesome" is an ambitious goal for mobile continuous integration. Investigate which continuous integration practices are currently in place for your mobile applications. Research some other options for continuous integration in mobile. Talk to your mobile team and share your ideas.


STEP - Testing mobile design

There are significant differences between web and mobile user interfaces; design for mobile devices must consider smaller screen sizes, the environments in which devices are used, etc.

Here are some starter posts for mobile design and usability:

EXERCISE
[2 hours] Considering the devices that you test against, apply a thumb zone heat map to one screen of one of your mobile applications for each device. Look at the placement of your user interface elements within the map. How has the design considered the placement of widgets to accommodate use of the application on different devices? Take your maps to a mobile designer, talk about what you've discovered and learn about the other design considerations for mobile.


STEP - Mobile First

We talk about becoming "mobile first", but what does that actually mean and how will we implement it?

EXERCISE
[1 hour] Reflect on what you've read and consider how "mobile first" might alter your existing development approach. Talk to someone who sets strategy in your organisation, share your thoughts and discover their opinions about what "mobile first" will mean for you.


STEP - Wearables

The release of the Apple Watch has accelerated the growth of wearable technology. In this emerging field there are interesting opinions about the influence of wearables:

EXERCISE
[2 hours] Find out whether your organisation has an application for Apple Watch, what features are included, how many people use it, who developed it, and how it has been tested. Try and find somebody with your application installed on their very own Apple Watch!


STEP - Screen capture, mirroring & recording

Tools that capture the screen may be useful, especially in situations where a bug is observed but cannot be reproduced. There are a number of tools available:

EXERCISE
[2 hours] Try installing some of these tools. Explore their functionality. Determine whether there are any that you prefer. Talk to someone in your mobile team about which tools they prefer and why.


STEP - A/B testing on mobile

A/B testing is a method for testing in production where we present alternate builds to our users, then use data about how people react to each build to make informed decisions about which changes to make or keep:

EXERCISE
[2 hours] Talk to someone in your organisation to find out more about how you've used A/B testing within your applications. Discover application flags and how you interpret user analytics. Talk to a designer to learn about the situations they might recommend as appropriate for A/B testing in mobile. Talk to someone in your mobile team about how they've used A/B testing within your mobile applications in the past.
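
To make the mechanics concrete, here is a minimal Python sketch of deterministic variant assignment, the piece that decides which build a user sees; the experiment name and split below are invented:

    import hashlib

    def variant(user_id, experiment="new-checkout", split=0.5):
        # Hash the experiment name and user id together so the same user can
        # land in different groups across different experiments.
        digest = hashlib.md5(("%s:%s" % (experiment, user_id)).encode()).hexdigest()
        bucket = int(digest[:8], 16) / float(0xFFFFFFFF)
        return "A" if bucket < split else "B"

    print(variant("user-42"))  # stable: the same user always sees the same build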


STEP - Performance & Stress

As with your desktop applications, performance on mobile is important.

EXERCISE
[1 hour] Talk to someone in your mobile team about how they deal with performance and stress testing now. Based on what you've read, share any ideas you have on how the process might be improved.


STEP - Following the future

Mobile testing is a vibrant field with new articles and trends emerging regularly. Here are some suggestions of people to follow on Twitter who are active in mobile testing:


Tuesday, 18 August 2015

Elastic Role Boundaries

How do you explain roles in an agile team?

In this short presentation Chris Priest and Katrina Clokie explain a model of Elastic Role Boundaries to highlight the difference between flexible ownership of small activities and the enduring commitment of a role.

This presentation stemmed from collaborative discussion at the fifth annual Kiwi Workshop for Software Testing (KWST5) with James Bach, Oliver Erlewein, Richard Robinson, Aaron Hodder, Sarah Burgess, Andy Harwood, Adam Howard, Mark Boyt, Mike Talks, Joshua Raine, Scott Griffiths, John Lockhart, Sean Cresswell, Rachel Carson, Till Neunast, James Hailstone, and David Robinson.



Friday, 7 August 2015

How do you become a great tester?

At the fifth annual Kiwi Workshop for Software Testing (KWST5) earlier this week, James Bach asked a seemingly simple question during one of the open season discussions, and I have been thinking about it ever since.

"How do you know you're a good tester?"

Since the conference I've had a number of conversations about this, with testers and non-testers, in person and on Twitter. During these conversations I've found it much easier to think of ways to challenge the responses provided by others than to think of an answer to the original question for myself.

Today I asked a colleague in management how they knew that the testers within the team they managed were good testers. We spent several minutes discussing the question in person then, later in the morning, they sent me an instant message that said "... basically a good tester knows they are not a great tester." 

This comment shunted my thinking in a different direction. I agree that most of the people who I view as good testers have a degree of professional uncertainty about their ability. But I don't think that it is this in isolation that makes them a good tester; rather, it's the actions that are driven from this belief. And this led me to my answer.

"How do you know you're a good tester?"

I know I'm a good tester because I want to become a great tester. In order to do this, I actively seek feedback on my contribution from my team members, stakeholders and testing peers. I question my testing and look for opportunities to improve my approach. I imagine how I could achieve better outcomes by improving my soft skills. I constantly look to learn and broaden my testing horizons.





What would you add?

Wednesday, 5 August 2015

Formality in open season at a peer conference

I attended the fifth annual Kiwi Workshop for Software Testing (KWST5) this week. Overall, I really enjoyed spending two days discussing the role of testing with a group of passionate people.

I took a lot from the content. But it's not what we discussed that I want to examine here; instead, it's how we discussed it. As I was sharing the events of the final day with my husband, he made a comment that troubled me. I took to Twitter to gauge how other people felt about his statement:


Since this tweet created a fair amount of discussion, I thought I would take the time to gather my thoughts and those from others into a coherent narrative, and share some of the ways in which I would approach the same situation differently next time.

Who was there?

I found the dynamic at this year's conference different to previous years. It felt like I was going to spend two days with my friends. Among the attendees, there were only two people who I had never met before. Most of the people in the room were frequent attendees at KWST, or frequent attendees at WeTest, or people who help create or contribute to Testing Trapeze, or current colleagues, or former colleagues, or simply friends who I regularly chat to informally outside of work.

This meant that it was the first year that I wasn't nervous about the environment. It was also the first year that I didn't feel nervous about delivering a talk. Though I was a little anxious about the content of my experience report, overall I would say that I felt relatively relaxed.

So, who exactly was in the room? James Bach, Oliver Erlewein, Richard Robinson, Aaron Hodder, Sarah Burgess, Andy Harwood, Adam Howard, Mark Boyt, Chris Priest, Mike Talks, Joshua Raine, Scott Griffiths, John Lockhart, Sean Cresswell, Rachel Carson, Till Neunast, James Hailstone, David Robinson and Katrina Clokie.

What happened?

I was the first speaker on the second day of the conference. My experience report was the first set in an agile context. The topic of the role of testing in agile had been touched on through the first day, but not explored.

I knew that there was a lot of enthusiasm for diving in to a real discussion, and was expecting a robust open season. In fact, the passion for the topic far exceeded my expectations. The particular exchanges that my husband questioned were in one particular period of the open season of my experience report.

Oliver proposed a model to represent roles in agile teams that kicked off a period of intense debate. During this time the only cards in use by participants were red, the colour that indicates the person has something urgent to say that cannot wait. I believe this spell of red cards exceeded 30 minutes, based on a comment from Mike who, when called as the subsequent yellow card, said "hooray, I've been waiting almost 40 minutes".

During this period of red cards, there were several occasions where multiple people who were waiting to speak were actively waving red cards. There were people interrupting one another. There were people speaking out of turn, without waiting to be called upon.

There were specific exchanges within this particular period that my husband questioned. I'm going to share four examples that relate specifically to my own behaviour.

The first happened relatively early in the red card period. Aaron made a statement that I started to respond to. When he attempted to interrupt my response, and he was not the first to interrupt me, I replied by raising my voice and telling him to shut up, so that I could finish what I was saying.

Perhaps halfway through the red card period, I had stopped responding to the people who were raising the red cards and the conversation was flowing among the participants themselves. Rich asked, in his role as facilitator, whether I agreed with what people were saying. I replied that no, I thought they were full of sh*t.

Near the end of the exchange I was asked whether I believed, on reflection, that I had behaved as a moron during the first experience I shared in my experience report. As a caveat, my interpretation of this comment has been refuted in subsequent Twitter discussions.

Finally, there was a case where three people were speaking at once and none had used a card. I interjected with a comment that "we have cards for a reason" to shut down their conversation.

Was it a problem?

At the time, I didn't think there was a problem. I shared James' view that "it was an intense exchange done in a good and healthy spirit". I found that red card period of open season incredibly challenging, but I never felt unsafe.

On reflection though, I do think there was a problem.

Why?

My behaviour during open season contributed to an atmosphere where people were talking over one another and behaving informally. The lack of discipline in the heat of these exchanges meant that several people in the room withdrew from the discussion.

This goes directly against the spirit of a peer conference, which is designed for everyone to be included equally. I now feel that I was part of an exchange that excluded those who were unable or unwilling to voice an opinion in this atmosphere.

What would I do differently?

In future, I think that I need to remember to respect the formality of a peer conference. I felt that I was among friends and, because of this, I brought an informal attitude to my exchanges.

I believe this reflection is shared by some others who were present. On Twitter, Aaron said "I shouldn't interact with people I know during formal exchanges differently, and open season is a formal exchange". Sean said "Maybe we need to be more conscious of those relationship biases we bring to peer conferences? I'm guilty of it".

In future, if I felt overwhelmed by interruptions, I would stop and ask for support from the facilitator. On reflection, the very first time I felt compelled to raise my voice and start participating in the culture of talking across people would have been a good opportunity to pause and reset expectations for the discussion.

What do other people think?


What do you think? How formal are your peer conferences? How formal should they be?

Thursday, 16 July 2015

Mobile Testing Taster

I recently ran a one-hour hands-on workshop to give a group of 20 testers a taste of mobile application testing. This mobile testing taster included brainstorming mobile-specific test ideas, sharing some mobile application testing mnemonics, hands-on device testing, and a brief demonstration of device emulation.

Preparation

In advance of the session, I asked the participants to bring along a smartphone or tablet, either Apple or Android, with the chosen test application installed. I selected a test product with an iOS app, an Android app, and a website optimised for mobile. I also asked those who were able to bring a laptop, so that we could compare mobile and web functionality.

I set up the room so that participants were seated in small groups of 3–4 people. Each table had one large piece of flipchart paper and three different coloured markers on it. The chairs were arranged along two adjacent sides of the table so that participants within each small group could collaborate closely together.

Brainstorming

After a brief outline of what the session would cover, I asked participants to start brainstorming their test ideas for the chosen test application that they had available on the devices in front of them. They were allowed to use the device as a reference, and I asked them to choose one coloured marker to note down their ideas as a group.

Five of the six groups of participants started a feature tour of the application. Their brainstorming started with the login screen, then moved through the main functionality of the application. The other group took a mobile-focused approach from the very beginning of the session.

After five minutes, I paused the activity. I wanted to switch the thinking of everyone in the room from functionality to mobile-specific test ideas. I encouraged every team to stop thinking about features and instead to start thinking about what was unique about the application on mobile devices.

To aid this shift, I handed out resources for popular mobile testing mnemonics: the full article text for I SLICED UP FUN from Jonathan Kohl and the mind map image of COP FLUNG GUN from Dhanasekar Subramanian. These resources are full of great prompts to help testers think of tests that may apply for their mobile application. I also encouraged the groups to make use of their laptops to spot differences between the web and mobile versions of the software.

The participants had a further 15 minutes to brainstorm from this fresh perspective using a different coloured marker. For a majority of groups this change in colour emphasised a noticeable change in approach.

At the end of the brainstorming session there was quite a variety in the nature and number of test ideas generated in each small group. I asked the participants to stand up, walk around the room, look at the work of other groups, and read the ideas generated by their peers.

Testing

Armed with ideas, the next phase of the workshop was to complete ten minutes of hands-on device testing. I asked each tester to pick a single test idea for this period of time, so that they focused on exploring a particular aspect of the application. 

Each group was asked to use the final coloured marker to note any problems they found in their testing. There were relatively few problems, but they were all quite interesting quirks of the application.

Though ten minutes was a very short period of time, it was sufficient to illustrate that testing a mobile application feels very different to testing on a computer. The participants were vocal about enjoying the experience. As a facilitator I noticed that this enjoyment made people more susceptible to distraction.

It was also interesting to see how much functionality was covered despite the testing being focused on the mobile-specific behaviours of the application. For example, one tester navigated through the product looking at responsive design when switching between portrait and landscape view, which meant that she completed a quick visual inspection of the entire application.

Emulation

While discussing ideas for this session, Neil Studd introduced me to the Device Mode function available in Chrome Developer Tools. During the last part of the workshop I played a five minute video about device mode, then showed a quick live demonstration of how our test application rendered in various devices through this tool.

Device mode is well documented. I presented it as an option for getting an early sense of how new features will behave without having to track down one of our limited number of test devices. Emulators are not a substitute for physical devices, but they may help us consider responsive design earlier in our development process.

As facilitator I did feel like this was a lot to cover in an hour. However, the session fulfilled its purpose of giving the attendees a relatively rounded introduction to mobile testing. Perhaps you'll find a similar mobile testing taster is useful in your organisation.