JavaScript Landscape for Testers

Introduction – JavaScript

There’s a lot to know in order to understand the JavaScript landscape for Quality Engineering testers, especially for those involved in writing browser automation. One of the most common complaints is simply not knowing where to start: what the dozens of acronyms and terms mean, what to study, and in what logical order to study it.

There is also the matter of getting your head around the current use of Object Oriented JavaScript. When I first used JavaScript it was all about making simple web page effects. Now it is a very popular web language with Object Oriented functionality, used for building full-fledged internet applications and used server side with technologies like Node.

Understanding a brief history of JavaScript is helpful for understanding how to use JavaScript today and what it means to write JavaScript tests:

What Object Oriented means in today’s JavaScript is explained well in the following book. I’ve read many JavaScript books, and as someone with existing knowledge of both Scripting and Object Oriented languages, this is definitely the book that has been most helpful to me:

Another interesting aspect of JavaScript testing is that there are elements for all four of the Agile Testing Quadrants – Unit, Integrated, Performance and Exploratory.

Part I – User acceptance / Integrated testing with Selenium

For BDD there is:

  • Cucumber – https://cucumber.io/
    Using the Given, When, Then Gherkin format
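
To give a flavour of how Given, When, Then steps map to JavaScript, here is a minimal cucumber-js sketch (the @cucumber/cucumber package is assumed, and the scenario wording, step text and addition logic are made up for illustration):

    // features/step_definitions/addition.steps.js
    // Gherkin scenario (illustrative):
    //   Given I have entered 2 and 3
    //   When I add them
    //   Then the result should be 5
    const { Given, When, Then } = require('@cucumber/cucumber');
    const assert = require('assert');

    let numbers = [];
    let result;

    Given('I have entered {int} and {int}', function (a, b) {
      numbers = [a, b];
    });

    When('I add them', function () {
      result = numbers[0] + numbers[1];
    });

    Then('the result should be {int}', function (expected) {
      assert.strictEqual(result, expected);
    });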

For user acceptance testing there is web page testing with Selenium using JavaScript, e.g.
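
a minimal sketch with the selenium-webdriver npm package (the target URL and the element being checked are placeholders):

    // check-page.js – requires the selenium-webdriver package and a browser driver
    const { Builder, By, until } = require('selenium-webdriver');

    (async function checkHeading() {
      const driver = await new Builder().forBrowser('firefox').build();
      try {
        await driver.get('https://www.example.com');
        const heading = await driver.wait(until.elementLocated(By.css('h1')), 5000);
        console.log('Page heading:', await heading.getText());
      } finally {
        await driver.quit();
      }
    })();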

For TDD/BDD in Ruby there is:

  • RSpec with Capybara
  • Watir (pronounced “water”)

There is also JavaScript-based Selenium testing for JavaScript frameworks such as AngularJS:
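
One widely used option for this was Protractor; a spec in that style might look like the following sketch (Protractor, the app URL, the model name and the expected greeting are all my illustrative assumptions):

    // greeting.spec.js – Protractor-style spec
    describe('greeting form', function () {
      it('greets the named user', function () {
        browser.get('https://example.com/angular-app');    // illustrative URL
        element(by.model('yourName')).sendKeys('Julie');    // illustrative model
        const greeting = element(by.binding('yourName'));
        expect(greeting.getText()).toEqual('Hello Julie!');
      });
    });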

There is also ‘headless’ browser testing, which attempts to address the speed issue of any browser-based testing by not actually bringing up a browser window. Its success will depend on the specifics of a given web site / application.
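
As one sketch, the earlier selenium-webdriver example can be run headlessly with Chrome by passing a browser flag (assumes the selenium-webdriver package and a local chromedriver; the URL is a placeholder):

    // headless-check.js – same idea as before, but with no visible browser window
    const { Builder } = require('selenium-webdriver');
    const chrome = require('selenium-webdriver/chrome');

    (async function headlessCheck() {
      const options = new chrome.Options().addArguments('--headless');
      const driver = await new Builder()
        .forBrowser('chrome')
        .setChromeOptions(options)
        .build();
      try {
        await driver.get('https://www.example.com');
        console.log('Title:', await driver.getTitle());
      } finally {
        await driver.quit();
      }
    })();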

Part II – JavaScript Unit Testing

Then there is the testing of actual JavaScript itself. This is primarily about unit testing JavaScript code. Here we have two broad areas: one is Node based and one is not. Node is about server-side JavaScript – you have a ‘node’ server running that responds to requests. You can build and use frameworks such as Angular and React, and you can also write JavaScript that is compiled server side into regular JavaScript, which is then sent to browser clients.
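
To give a flavour of a JavaScript unit test, here is a minimal sketch (Mocha as the test runner is my assumption; the add function stands in for real application code):

    // test/add.test.js – run with a test runner such as Mocha (npx mocha)
    const assert = require('assert');

    // the function under test; in a real project this would be require()'d from a module
    function add(a, b) {
      return a + b;
    }

    describe('add', function () {
      it('adds two numbers', function () {
        assert.strictEqual(add(2, 3), 5);
      });

      it('returns NaN when the second argument is missing', function () {
        assert.ok(Number.isNaN(add(2)));
      });
    });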

As you advance in JavaScript testing you’ll also want to get a good understanding of spies, mocks and stubs:
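
Sinon is one popular library for this (my assumption here; the mailer object and notifyUser function below are made up for illustration):

    // spies and stubs with the sinon package
    const sinon = require('sinon');
    const assert = require('assert');

    const mailer = {
      send(address, message) { /* would really send an email */ }
    };

    function notifyUser(user, mailerService) {
      mailerService.send(user.email, 'Welcome!');
    }

    // a spy records calls without changing behaviour
    const sendSpy = sinon.spy(mailer, 'send');
    notifyUser({ email: 'kid@example.com' }, mailer);
    assert.ok(sendSpy.calledOnce);
    assert.ok(sendSpy.calledWith('kid@example.com', 'Welcome!'));
    sendSpy.restore();

    // a stub replaces behaviour, e.g. to force an error path
    const sendStub = sinon.stub(mailer, 'send').throws(new Error('SMTP down'));
    assert.throws(() => notifyUser({ email: 'kid@example.com' }, mailer));
    sendStub.restore();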

As you advance further you may find it useful to understand and use Chai – an assertion library for all those ‘should’, ‘expect’ and ‘assert’ statements!
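
A small sketch of Chai’s three assertion styles (the family object is illustrative):

    const chai = require('chai');
    const { expect, assert } = chai;
    chai.should();   // enables the should style

    const family = { name: 'Smith', members: ['Anna', 'Ben'] };

    // expect style
    expect(family.members).to.include('Anna');

    // should style
    family.name.should.equal('Smith');

    // assert style
    assert.lengthOf(family.members, 2, 'family has two members');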


Part III – Performance and Load Testing


Apache Bench (ab) is one option here.

https://httpd.apache.org/docs/2.4/programs/ab.html

“ab is a tool for benchmarking your Apache Hypertext Transfer Protocol (HTTP) server. It is designed to give you an impression of how your current Apache installation performs. This especially shows you how many requests per second your Apache installation is capable of serving.”
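
A typical invocation might look like this (the request count, concurrency and URL are illustrative):

    # 100 requests in total, 10 at a time, against an example URL
    ab -n 100 -c 10 https://www.example.com/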


Part IV – Exploratory Testing

You can explore, debug and test JavaScript by using the browser’s Console tab (in the developer tools) to interact with the page and issue commands.

[Screenshot: JavaScript commands entered in the browser Console]
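
For example, you might type commands like these straight into the Console (the selector and data are illustrative):

    document.title                               // inspect the current page title
    document.querySelectorAll('a').length        // count the links on the page
    localStorage.getItem('token')                // peek at client-side storage
    console.table([{ name: 'Anna' }, { name: 'Ben' }])   // pretty-print data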

You can also use the Node REPL at the terminal command line by simply typing node or nodejs.

[Screenshot: a Node REPL session in the terminal]
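
An illustrative REPL session:

    $ node
    > const nums = [1, 2, 3];
    undefined
    > nums.map(n => n * 2)
    [ 2, 4, 6 ]
    > .exit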


A new way to think about tests

Tests that pass are good, right?

That is such a given that it may seem strange to even question it! So here goes:

Tests only add value when they fail

There it is. Failing tests. A good thing. OK, I’ve said it.
I’m already removing the arrows – ow! – so let me explain:

The idea here is that tests that never ever fail indicate a problem. Tests are supposed to break – when you break the application. After all, that’s why they are there – as safety rails so you can develop in comfort, changing the application as needed and letting you know when your changes break existing functionality.
So what happens when they do break? Well, here’s where the quality of the test itself also comes into play. It’s not too hard to write tests that, when they do break (yeah!), give meaningful information.

Unfortunately it’s also fairly easy to come up with tests that break and tell you something like this:

“Expected true to be true but was false instead”

Sound good? Can you fix that now? Of course not.
So write a test whose failure looks like this:

“Expected the child to be a member of the customer’s family but they are not listed in the family plan” – now that’s something you can work with!
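
In practice this is often just a matter of giving the assertion a custom message. A sketch using Chai’s expect (the customer and child data are hypothetical):

    const { expect } = require('chai');

    const customer = { name: 'Pat', familyPlan: ['Sam', 'Alex'] };
    const child = 'Robin';

    // bare boolean check – fails with something like “expected false to be true”
    // expect(customer.familyPlan.includes(child)).to.be.true;

    // same check, but the failure message now explains itself
    expect(
      customer.familyPlan,
      `expected ${child} to be a member of ${customer.name}'s family plan`
    ).to.include(child);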

This also supports the concept of making sure your test fails first, i.e. Red, Green, Refactor. Making sure it fails first provides an opportunity to hone and refine the failure message.

The main caveat to this idea is that tests, if well written, act as documentation. So even if they never fail they can still fulfill that function. Indeed, I have learned about more than one system just from reading the test suite!

A downside to such ‘wordy’ tests is that we can have test descriptions that don’t match, or get out of sync with, the code being called. This is why we avoid program comments whenever possible, in favor of meaningfully named code objects and functions. However, I believe the upside of good and meaningful tests outweighs this downside.