Thursday 11 May 2017

Three styles of automation

At Let's Test next week I have the privilege of presenting a hands-on workshop titled "Three styles of automation". The abstract for the session reads:

A lot of people use Selenium WebDriver to write their UI automation. But the specific implementation language and coding patterns will differ between organisations. Even within the same organisation, a set of front-end tests can look different between different products.

Katrina will share three different approaches to Java-based UI automation using Selenium WebDriver from her organisation. She will explain the implementation patterns, the reasons that differences exist between repositories, and the benefits and drawbacks of each approach.

Participants will download three different suites that each implement a simple test against the same web application. Once they have a high-level understanding of each code base, they will have the opportunity to execute and extend the test suite with targeted hands-on exercises.

In this post I share the code and resources that I'll be using in this workshop. Though you won't get the same level of support or depth of understanding as a workshop participant, I hope you will find something of interest.

Background

These three automated suites are written against a tool provided by the New Zealand tax department: the IRD PAYE and KiwiSaver deductions calculator. Each suite contains a single test that enters the details for a test employee and confirms that PAYE is calculated correctly.

Each suite is reflective of a framework that we use for test automation in my organisation: Pirate, AAS, or WWW. These public versions do not contain any proprietary code; they've been developed as a training tool to provide a safe place for testers to experiment.

Each training suite was created almost a year ago, which means they're showing their age. They still run with Selenium 2 against Firefox 45. We're in the process of upgrading our real automation to Selenium 3, and switching to Chrome, but these training suites haven't been touched yet.

The three suites illustrate the fundamental differences in how we automate for three of our products. Some of these differences are based on conscious design decisions. Some are historic differences that would take a lot of work to change. The high-level implementation information about each suite is listed below, with a small illustrative sketch after each list:

Pirate
  • Uses Selenium Page Factory to initialise web elements in page objects 
  • Methods that exit a page will return the next page object 
  • Has an Assertion Generator to automatically write assertions 
  • Uses rules to trigger @Before and @After methods for tests 
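
As a rough sketch, a Pirate-style page object might look something like the following. The class names and locators are hypothetical illustrations, not the actual repository code:

    // UserAndTaxYearPage.java
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.FindBy;
    import org.openqa.selenium.support.PageFactory;

    public class UserAndTaxYearPage {

        @FindBy(id = "employeeName")  // hypothetical locator
        private WebElement employeeNameField;

        @FindBy(id = "next")  // hypothetical locator
        private WebElement nextButton;

        private final WebDriver driver;

        public UserAndTaxYearPage(WebDriver driver) {
            this.driver = driver;
            PageFactory.initElements(driver, this);  // initialise the @FindBy fields
        }

        // A method that exits the page returns the next page object
        public SalaryOrWagesPage enterUserAndTaxDetails(String name) {
            employeeNameField.sendKeys(name);
            nextButton.click();
            return new SalaryOrWagesPage(driver);
        }
    }

    // SalaryOrWagesPage.java -- a minimal stub for the hypothetical next page
    // (same Selenium imports as above)
    public class SalaryOrWagesPage {
        public SalaryOrWagesPage(WebDriver driver) {
            PageFactory.initElements(driver, this);
        }
    }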

AAS
  • Uses a fetch pattern to retrieve page objects 
  • Provides WebDriverUtils to safely retrieve elements from the browser 
  • Tests are driven by an HTML Concordion specification
  • Uses inheritance to trigger @Before and @After methods for tests 
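
In the same hedged spirit, an AAS-style page object might be retrieved through a fetch method, with elements looked up via a utility that waits for them before returning. Again, all names and locators here are hypothetical:

    // WebDriverUtils.java -- a sketch of "safe" element retrieval
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.WebElement;
    import org.openqa.selenium.support.ui.ExpectedConditions;
    import org.openqa.selenium.support.ui.WebDriverWait;

    public class WebDriverUtils {
        public static WebElement findElement(WebDriver driver, By locator) {
            // Wait (up to ten seconds) for visibility rather than failing immediately
            return new WebDriverWait(driver, 10)
                    .until(ExpectedConditions.visibilityOfElementLocated(locator));
        }
    }

    // UserAndTaxYearPage.java (same Selenium imports as above)
    public class UserAndTaxYearPage {
        private final WebDriver driver;

        private UserAndTaxYearPage(WebDriver driver) {
            this.driver = driver;
        }

        // The fetch pattern: a static method hands back the page object
        public static UserAndTaxYearPage fetch(WebDriver driver) {
            return new UserAndTaxYearPage(driver);
        }

        public void enterUserAndTaxDetails(String name) {
            WebDriverUtils.findElement(driver, By.id("employeeName")).sendKeys(name);  // hypothetical locators
            WebDriverUtils.findElement(driver, By.id("next")).click();
        }
    }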

WWW
  • Uses Selenide as a wrapper for Selenium to simplify code in page objects 
  • Uses a Selenide Rule to configure WebDriver
  • Uses @Before and @After methods in the tests directly
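
And a hedged WWW-style sketch, assuming hypothetical locators and a placeholder URL. Selenide's $ keeps the page object terse, and the test class uses @Before and @After directly:

    // UserAndTaxYearPage.java
    import static com.codeborne.selenide.Selenide.$;

    public class UserAndTaxYearPage {
        public void enterUserAndTaxDetails(String name) {
            $("#employeeName").val(name);  // hypothetical locators
            $("#next").click();
        }
    }

    // PayeCalculatorTest.java
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;

    import static com.codeborne.selenide.Selenide.closeWebDriver;
    import static com.codeborne.selenide.Selenide.open;

    public class PayeCalculatorTest {

        @Before
        public void openCalculator() {
            open("https://example.org/paye-calculator");  // placeholder URL, not the real calculator
        }

        @Test
        public void calculatesPaye() {
            new UserAndTaxYearPage().enterUserAndTaxDetails("Test Employee");
            // assertions on the calculated PAYE would follow here
        }

        @After
        public void closeBrowser() {
            closeWebDriver();
        }
    }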

Installation

There are prerequisite installation instructions to help you get the code running on your own machine. To get the tests executing within each framework, you may need to download and install:

  • git
  • Java
  • Firefox 45
  • An IDE, e.g. IntelliJ

You can download the three suites from GitHub. If you haven't used GitHub before, you may need to create an account in order to clone the three repositories.

Comparison

The beauty of these training frameworks is that it is easy to compare the three implementations. If you are familiar with the way that one works, you can easily map your understanding to learn the others. 

In each suite you will see a different page object implementation. The enterUserAndTaxDetails method in the UserAndTaxYearPage is a good example of the different approaches to finding and using web elements:

[Image: the same functionality implemented in three different ways]

There are different types of assertions in the tests. Pirate assertions are created by an automated assertion generator, in AAS the English-language Concordion specification holds the assertions, and WWW makes use of simple JUnit asserts.
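
To make the AAS approach a little more concrete, here is a minimal Concordion fixture, with hypothetical names and the browser-driving code elided. The assertion lives in the HTML specification, which calls the fixture method and compares the returned value against its expected text:

    import org.concordion.integration.junit4.ConcordionRunner;
    import org.junit.runner.RunWith;

    // The specification, not this class, holds the assertion, e.g.
    //   <span concordion:assertEquals="calculatePaye(#salary)">12.34</span>
    @RunWith(ConcordionRunner.class)
    public class PayeCalculatorFixture {

        public String calculatePaye(String salary) {
            // In the real suite this would drive the calculator in a browser
            // and return the PAYE amount; the implementation is elided here
            return "";
        }
    }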

The navigation through the application varies too. Pirate passes page objects, AAS implements fetch methods, and WWW simply uses the Selenide open method.

These differences are apparent when reading through the code, but to really understand them it's best to get hands-on. As a starting point, try extending the existing test so that a 3% employee KiwiSaver deduction is selected, then make sure that deduction is reported correctly in the summary.

Conclusion

I don't claim that any of these frameworks are a best practice for UI automation. However, they represent a real approach to tests that are regularly executed in continuous integration pipelines in a large organisation. I wish that more people were able to share this level of detail.

I find the similarities and differences, and the rationale for each, to be fascinating. Given the variety within our team, it makes me wonder about the variety worldwide. There are so many different ways to tackle the same problem.

This is a whistle-stop tour of a three-hour workshop. I hope to see some of you at Let's Test, where I'll have the opportunity to explain in person! If you cannot attend and have questions, or suggestions for improvements in our approach, please leave a comment below or ask via Twitter.

11 comments:

  1. Very informative. Personally I prefer the Pirate way of writing automation cases.

    Replies
    1. I'm glad you think so. Pirate is our newest suite and the one I've had most influence over.

  2. I have to agree that the Pirate way seems cleaner and less confusing simply because there is less of a need to grab the ID each time. The benefit here is clear - if the UI changes or the selector changes then the test under Pirate will still work. The same can be said for WWW but the functions are less clear - what is n12 for instance? Could someone read that code and determine specifically what n12 is?

    Thanks for sharing - shame I can't be there!

    Replies
    1. Agreed. The code that I picked to develop these training suites against is not very readable, but it suited the purpose of being a non-authenticated workflow that does not create any meaningful action. If it were a product that I had influence over I would make some suggestions around testability. n12 is a locator.

  3. Why are you using legacy Firefox?
    Why are you using "if else" in scenarios? Make two methods for setting Employer name and Employee. It's bad practice.

    Replies
    1. These suites are approximately a year old, as I mentioned, which is why you are seeing a legacy Firefox.
      The conditional example you've identified is something that we look at in the refactoring module, which is part of the training that people can work through when they pick up these simple starting points.
      Thanks for your comment.

  4. Hi Katrina, great to see some different ways of using WebDriver. The AAS suite doesn't run, since the source path for the test code isn't set. I've fixed this up and made some improvements to the Concordion test at https://github.com/nigelcharman/j4n-ird-aas. Feel free to copy these :)

    There are a few branches:
    1. get-it-working - fixes build.gradle, also makes gradlew executable
    2. update-dependencies - needed since I have latest Firefox, you'll also need geckodriver on your path
    3. markdown-spec - refactored spec to use Markdown, which is more concise, easier to write and read and more fun (especially when used with IntelliJ Concordion Support plugin)
    4. refactored-spec - extracted the "acceptance criteria" to the top, and made the example more specific. Removed assertTrue, which should be used sparingly (http://concordion.org/instrumenting/java/markdown/#assert-true-and-assert-false-commands)

    I'll blog about this later :)

    Replies
    1. Thanks Nigel. We do have a traditional approach to Concordion. Nice to get style tips from an expert.

    2. Blogged at http://tutansblog.blogspot.co.nz/2017/05/improving-concordion-specification.html. Keen for any feedback!
      If I get time, I might have a go at documenting an alternate WebDriver approach too :)

    3. Looking at the WebDriver code, I'd write something similar to the Pirate version, though I quite like the Selenide one too. Would be great to hear the outcome from Let's Test. Wish I was there, the conference looks awesome!!

  5. Which one is the best of the three models that you follow?
