Discover or Predict

There has been considerable talk recently about software development having distinct phases.

First there is the discovery phase. Here the solution and timeline are unknown and must be discovered through multiple iterations and experiments, most of which should fail.

Second is the predictable phase. The architecture, technology, approach and implementation have been determined, and development can proceed at a known, predictable pace.

This is wrong

This was the belief during the 1990s, when I worked in the industry. The proposition was that, with enough Gantt charts and Microsoft Project timelines, we would have high quality software developed according to a fixed, predictable schedule. The actual result of this planning, coupled with the technology of the time (terminals, COBOL, mainframes, printed manuals), was that while timelines were (sometimes) kept, it was usually at a high cost to quality, leading to buggy products that were hard to change, extend or enhance due to their rigid design and implementation.

Also, the belief in predictability relied on experienced employees never leaving the firm, recessions not happening, technology not changing, and so on. In the real world all these things happen, leading to unpredictable events that affected dates.

Then came the web. It replaced the fixed technology landscape that was in place with the fluid, fast-changing one we see today, where constant innovation and change are essential.

And Agile was born (or codified under the new term at least). Agile recognized that software development is never predictable. It is always discovery. All the time. This changes the paradigm. In order to exist in this environment, businesses need to radically change how they develop products. This is where factors such as getting automated user feedback and empowering developers to innovate come to the fore, instead of more traditional targets, goals and KPIs. It is an environment where development practices such as feature flags, A/B testing, Blue/Green releases and Canary deployments are used to manage change and delivery in a form that is usable for the business and its need for predictability (which has not gone away just because development is ‘Agile’).

We can’t do full Agile because…

This is the other truism that I have heard for the past 15 years. Regardless of company, industry, location, product, consumers or size, the following reasons for not being ‘able’ to truly embrace Agile are always presented:

  • Regulation with fixed requirements
  • Externally fixed dates for required functionality
  • Financial targets to meet for the quarter and year
  • Integrations with external partners that have to be coordinated

The answer to this is that Agile can in fact be done with all of the above factors in play. Much has been written by Silicon Valley founders about the lean development environments and people empowerment models that support this, built on humble and servant leadership.

The Management Challenge

This new environment presents an interesting challenge. If we can no longer depend on dates to drive development and give direction, how do we manage people to produce what we need for the business? This is where the other transformation of recent years comes into play – the employee empowerment approach with decentralized and flat organizations. While the comparison is often made between traditional ‘waterfall’ and ‘agile’ environments, I think this misses the most important point. All environments actually have some elements of waterfall and some elements of agile development. The crucial difference for management, however, is the change from the previous paradigm of what is often called ‘command and control’. Those are ‘fightin’ words (who admits to them?) so it is easier to use the softer words we hear today – ‘clear direction’, ‘strong alignment’, ‘good focus’.

Quality – The Excuses

Everyone wants quality!

Some of the barriers to achieving it are:

  • We don’t allow time for writing good tests
  • The definition of done is missing the testing requirements
  • There aren’t existing test examples to copy to write new tests
  • We don’t use modern test frameworks and tools for our automation
  • PRs are big enough already without risky changes to improve quality
  • We don’t pay automation engineers as much as application engineers
  • We need to meet the application deadline regardless of test coverage
  • Testing and Automation is not a key business goal with OKRs and KPIs
  • We don’t include quality engineers as key stakeholders in work determination
  • We don’t invest enough money or people in the infrastructure needed for CI

Stop hiding the quality work

In software we use project management tools such as Jira, Trello, Pivotal Tracker, etc. to help us manage our work.

In recent engagements I’ve noticed a curious trend – hiding work.

Three common ways to hide work are labels, subtasks and swimlanes.

The most common type of work that is hidden is the infrastructure work needed in order for a company to grow and thrive and realize its 100X dreams.

Infrastructure work may sound like the work to set up servers, databases, cloud providers, etc. While those elements are certainly part of it, much of the infrastructure work happens in teams doing product development itself: the folks who are writing application code, installing tools to make development easier, writing tests (or not) to make growth possible, and setting rules for how code is developed, reviewed and merged.

Infrastructure work is critical to company growth. Give it the recognition and respect it deserves by making it visible to everyone in the company. Teach the company owners about it and how it will enable the company to fulfill its dreams if the hard work is done now.

Quality Engineer – A vision

  • Work closely with product management to understand customer needs in depth and how implementations address those needs by providing a high quality product that adds value for customers
  • Work on continuous integration approaches that support a development “pipeline” that gives application engineers feedback in seconds and minutes for both local and remote CI
  • Work to promote an Agile Testing Pyramid
  • Work closely with application development to:
    • Create a plan during backlog refinement for how to test each feature or change at unit and integration levels, for example using Given, When, Then
    • Promote tools and approaches for higher quality code such as quality linting, static code analysis and code coverage
    • Eliminate unit test dependencies such as database or network
    • Promote application and test code that prefers full english names and avoids premature optimizations
    • Address failed, pending or flaky tests at any level of testing
    • Create a robust system for test data for all levels of testing
    • Remove dependencies for UI unit testing such as database or authentication
    • Test every feature change from the start of the development prototype, not the end
    • Review and provide input on PRs for application code and related unit and integrated testing at multiple levels
    • Create a documented standard for method parameters, testing for valid, invalid, blank, null, undefined and defaults
    • Work on Selenium-based tests that test key functionality in the UI, are part of the standard CI for application developers and run in less than 10 minutes
  • Manually test new or changed features in multiple devices with exploratory testing based on current and planned users and devices
  • Work to achieve a UI test reliability rate of 99.999%+ so that a delivery pipeline can be trusted for immediate feedback that is reliable
  • Support release management including on-call support rotation responsibilities
  • Promote BDD and TDD approaches including teaching and training them to team members
  • Promote and provide coaching for company wide quality practices such as continuous delivery, shift-left testing, 5 why’s, argument view swapping, cost/reward of testing, etc.

Quality KPIs

If the number of bugs isn’t a good measure of quality, what is?

Number of bugs is an outcome of a low quality product.
Tests will not fix a low quality product.
Especially if they are written by someone else later in the process.

To improve quality and reduce bugs in software, measure and monitor over time:

  • Product usage by current production users
  • Features are used as imagined by the company
  • Measures are available for usage by key demographics and devices
  • Usage in relation to revenue, adoption or other measures or KPIs
  • Test suite length of run time
  • Application code unit test coverage
    • 100% should be the general rule
    • Test for parameters that are zero, missing, blank, null
  • Application code average LOC method/class sizes
  • Application code average method complexity
  • Average amount of time to fix a production bug
  • Mean time for tickets from entry to deployment
  • Mean time between production failures
  • CI UI Automation code failure rate – target five nines (0.001% failure rate)
  • Pending tests – target zero
  • Size of backlog
  • Backlog size change over time – target zero
  • Application performance

Also pay attention to softer and more subtle factors that may be harder to measure such as

  • Naming objects well
  • Usability issues related to fonts and colors and sizes for all users
  • Domain specific usability issues for systems and ecosystems
  • Maximizing accessibility to help reach the largest market share
  • Maximizing accessibility for users with different physical abilities
  • Emotions related to color schemes
  • UI consistency
  • Verbal and interactive feedback from key users

Balancing all these different aspects is why quality is hard.

Hiring competent component devs

My vision

To hire developers from around the world who work on software components for as little as $100 and within hours

How I achieved it

  • I solved the test data and database requirements issue for authentication and authorization
  • I solved the local services authentication issue for remote workers
  • I am the lead dev, reviewing code, tests and functionality and providing a lot of gentle feedback
  • My main focus in code is how good tests are, how clear code is to read and how easy code is to change
  • I use strong code linting practices to standardize formatting
  • I use one place – github – for code, PRs, tracking features and issues, CI and projects
  • I use a service that provides programmers for hire by the hour and then I screen people through actual ticket work starting with simple tickets
  • I use GCP to deploy apps to public URLs for testing on any device / browser / version
  • I pay more, e.g. $400 for a feature, for additional work over time based on quality work they have already done
  • I accepted the humble premise that I am now hiring people who are better at specific pieces than me

Hiring programmers every week

I am hiring programmers every week. They complete assignments and my application continues to grow.

Sounds simple right?

Of course anyone who has tried to do this knows the burden – interviews, recruiters, coding tests, personality reviews; the list is endless.

Then there is the pay. I want to pay someone $500 for a piece of work. Not $150,000 to hire them for a year. Not to mention all the work that comes with actually hiring full time employees.

Finally, when you hire someone you have to take time to onboard them. Equip them. Train them.

That’s a lot and there is a better way and I am using it.

I hire programmers to do units of work. There are a lot of parts to get right to make this happen, so I decided to detail them here:

  • A technical programmer lead to manage the infrastructure and work
  • Code, tickets, continuous integration all in github
  • A Slack workspace with github integration
  • The ability to run the UI locally using fake data*
  • The ability to deploy to a public URL for any device testing*

*It is the last two – running the UI locally with fake data, and deploying to a public URL that distinguish my approach. By implementing fake data and deploying to GCP I can avoid the considerable testing hurdles that challenge most organizations which include:

  • Authentication and Authorization to access a secure API for the data
  • Running a server locally to request data
  • A database for test data
  • Credentials for the test database for authentication and authorization
  • Needing to use custom emulators such as VirtualBox, Android Studio, Xcode, etc. to test apps locally

With all of these problems solved I can now hire a freelance programmer and give them a slice of work with no need for credentials, databases, emulators, etc. to do the work.

It also means that I can work on the application with no internet connection. That’s a big deal for me.

Finally, I don’t worry about revealing the source code, as I use the Google model that it’s the implementation of the business around it that counts. I do manage GitHub user access carefully.

This model is working well for me. Would it work for you?

I am a pattern programmer

Some folks are writing scripts, others are writing Object Oriented code and yet others are excelling in the delights of functional programming. I use all these approaches, but I consider that my own approach to programming really matches none of them and can best be described using a term I have decided to call pattern programming.

I look at code. I look at data. I look at use cases and tests. I look at classes and methods and I ponder.

What is the pattern here?
Is the pattern obvious from the existing code?
Are these high and/or low level patterns?
Is this a pattern I could use?
How would I need to modify this pattern to accommodate something different?
Should I abstract this pattern for reuse?
Can this pattern accommodate additional use cases?
Has usage of this pattern reached a point where we need to divide it into multiple other patterns?

These are questions I think about a lot.

Telling a data story with color

Providing higher level ‘meta’ information about detailed data can be clumsy.

One approach is to use groups with headers and totals.

Another approach is to use special characters and text to group items.

Another might be different bold background colors. Ugh

The above approaches can be rather clunky and lead to a cluttered and ugly display with links and IDs.

Here’s an approach I like that uses text color and subtle grey shading.

It takes seconds to grasp and will quickly become a key tool for power users of the application in question.

From an example application I created, see if you can quickly and easily tell

  • Which transactions are part of batches? (hint: row background!)
  • How many BTC transactions?
  • How many SELL transactions?
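The screenshot itself isn’t reproduced here, but the logic behind the coloring can be sketched as a small style-mapping function; the transaction fields and the specific colors are invented for illustration:

```javascript
// Three subtle cues instead of headers and totals:
// text color for the asset, font weight for the side, row shading for batches.
function rowStyle(tx) {
  return {
    color: tx.asset === 'BTC' ? 'darkorange' : 'steelblue', // asset by text color
    fontWeight: tx.side === 'SELL' ? 'bold' : 'normal',     // side by font weight
    background: tx.batchId ? '#f2f2f2' : 'white',           // batch rows get grey shading
  };
}

console.log(rowStyle({ asset: 'BTC', side: 'SELL', batchId: 7 }));
// { color: 'darkorange', fontWeight: 'bold', background: '#f2f2f2' }
```

Because each cue maps to exactly one dimension of the data, a power user can read all three at a glance without any grouping headers.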

Good advice from great people

Presence is a foundation for trust. The Mind of the Leader. Rasmus Hougard.

Silence is a greatly underestimated source of power. Leading With Emotional Courage. Peter Bregman.

These are the four magic words of management: “What do you think?” —WOODY MORCOTT, Former CEO, Dana Corporation.

The two biggest barriers to good decision making are your ego and your blind spots. Principles: Life and Work. Ray Dalio.

Remember, the clarity of your guidance gets measured at the other person’s ear, not at your mouth. Radical Candor. Kim Scott.

Foster a respectful, supportive work environment that emphasizes learning from failures rather than blaming. Accelerate. Nicole Forsgren, PhD.

Self- expression, experimentation, and a sense of purpose: these are the switches that light up our seeking systems. Alive At Work. Daniel Cable.

Being transparent and telling people what they need to hear is the only way to ensure they both trust you and understand you. Powerful. Patty McCord.

Speaking up is only the first step. The true test is how leaders respond when people actually do speak up. The Fearless Organization. Amy C. Edmondson.

Psychological safety is about candor, about making it possible for productive disagreement and free exchange of ideas. The Fearless Organization. Amy C. Edmondson.

“Testing by itself does not improve software quality. Test results are an indicator of quality, but in and of themselves, they don’t improve it.” Managing the Unmanageable. Steve McConnell.

Embracing radical truth and radical transparency will bring more meaningful work and more meaningful relationships. I have found that it typically takes about eighteen months. Principles. Ray Dalio.

At the heart of almost all chronic problems in our organizations, our teams, and our relationships lie crucial conversations. Crucial Conversations. Kerry Patterson.

While most managers, supervisors, and colleagues genuinely appreciate the people with whom they work, they often neglect to verbally express that appreciation. The 5 languages of Appreciation in the workplace. Gary Chapman.

A few simple, uncommon, powerful phrases that anyone can utter to make the workplace feel just a tiny bit more psychologically safe: I don’t know. I need help. I made a mistake. I’m sorry. The Fearless Organization. Amy C. Edmondson.

Activity is not the same as productivity. When we complete a task, even the smallest insignificant task like sending an email, dopamine is released in the brain. This can make the task addictive. The Mind of the Leader. Rasmus Hougard.

You manifest what you model. Your people are not only watching your every move, they are emulating you. And, unfortunately, you don’t get to pick and choose which parts they copy. How F*cked up is your management. Johnathan Nightingale.

You have to accept that anger, for example, is not something you can eradicate from your life. Don’t fight against something you can’t change. What you can change are the thoughts which sustain anger. Rewire Your Mind. Steven Schuster.

Quality Code

It takes time to learn why Quality Code practices help the business succeed

  1. It meets the business purpose. This is always priority #1
  2. 100% test code coverage is the standard and the practiced norm.
  3. Code linting is extensive and all settings are fatal with no warnings
  4. TDD and BDD are truly practiced and operate in Agile environments
  5. Continuous Integration gives quick feedback that code runs elsewhere
  6. Most coding is refactoring existing code to be easier to change in the future
  7. Developer feedback is immediate w/ editor autosave & test suites that run in seconds
  8. Tests follow a “500%” Positive, Negative, Blank, Undefined, Null testing pattern
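A sketch of what that “500%” pattern can look like for a single function; the function itself is invented for the example:

```javascript
// Function under test: trims a display name, falling back to 'anonymous'.
function displayName(name) {
  if (name === undefined || name === null) return 'anonymous';
  const trimmed = String(name).trim();
  return trimmed === '' ? 'anonymous' : trimmed;
}

// The five cases: positive, negative, blank, undefined, null.
console.log(displayName('  Ada '));  // 'Ada'        (positive)
console.log(displayName(42));        // '42'         (negative: wrong type, still handled)
console.log(displayName(''));        // 'anonymous'  (blank)
console.log(displayName(undefined)); // 'anonymous'  (undefined)
console.log(displayName(null));      // 'anonymous'  (null)
```

Covering all five cases for every parameter is what pushes a suite past the happy path and toward the 100% coverage norm above.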

Two lines to remember

Remembering what users have typed in constantly used applications is very helpful for them and it’s actually very easy to do… just use the browser local storage API to maintain the state.

OK, that sounds a bit intimidating… put more simply… just change these two lines to achieve this!

This is for a React application that uses hooks to maintain state and has a commonly used input field whose value is tracked with a state variable called text.

First, change

const [ text, setText ] = useState('');

to

const [ text, setText ] =
  useState(localStorage.getItem('your-app-your-name') || '');

Then, when the user updates text, e.g. for an input field which has an event handler and currently maintains state, update the code of that handler to also update local storage

setText(text) // existing code
localStorage.setItem('your-app-your-name', text ); // Add this line

That’s it! Now your users’ input will be remembered even if they close their browser, restart their machine, you deploy new code, etc. All with no login and no cookies!
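The persistence pattern can also be pulled out of React entirely, which makes it easy to unit test. Here is a sketch with an in-memory stand-in for window.localStorage; the helper name and key are invented for the example:

```javascript
// A minimal persistent-value helper: read the stored value on startup,
// write it back on every change. `storage` is anything with the
// localStorage API (getItem/setItem).
function persistentValue(key, storage) {
  let value = storage.getItem(key) || '';
  return {
    get: () => value,
    set: (next) => {
      value = next;
      storage.setItem(key, next); // mirror every update into storage
    },
  };
}

// In-memory stand-in for window.localStorage, for demonstration.
const fakeStorage = {
  store: {},
  getItem(k) { return this.store[k] ?? null; },
  setItem(k, v) { this.store[k] = v; },
};

const text = persistentValue('your-app-your-name', fakeStorage);
text.set('hello');
// A "restart": a fresh instance reads the persisted value back.
const restored = persistentValue('your-app-your-name', fakeStorage);
console.log(restored.get()); // 'hello'
```

In the browser you would pass window.localStorage instead of the fake, and the behavior matches the two-line change above.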

React gotcha (#1)

Here are two pieces of React code, one that works (showLinks) and one that does not (showAdminLinks).

However, the UI does not render the output of showAdminLinks, and no error is thrown either in compilation or in the console. Hmmm.

So it seems quite a mystery.

Can you spot the issue?

Answer: It is the use of the wrapping { and } instead of ( and ) on lines 25 and 29
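Stripped of JSX, the trap reduces to arrow function body syntax, which can be shown in plain JavaScript:

```javascript
// Concise body in parentheses: the expression is the return value.
const showLinks = () => ('links');

// Block body in braces: statements run, but with no explicit `return`
// the function silently returns undefined -- so nothing gets rendered.
const showAdminLinks = () => { 'adminLinks'; };

console.log(showLinks());      // 'links'
console.log(showAdminLinks()); // undefined -- no output, and no error either
```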

This has caught me a few times. There are a number of formats that can be used here, including, but not limited to {(...)}, (...), return (...), return ({...}), etc.

This variety of formats can make it difficult for you and/or your IDE to spot the issue shown here.

Not seeing any error makes it sometimes hard to realize what the issue is.

There are a few React gotchas like this. This one is listed as ‘#1’ not to indicate that it is the primary React issue of this kind, just that it is the first of several React gotchas I am documenting for others… and my future self.

Docker Basics

For quick reference

Creating your own Docker Image:
Let’s start by creating a very simple Node app. To begin, create a directory – name it dockernode – and then initialize a new NPM project in it:
npm init -y
Next, add Express to it:
npm install --save express
Finally, create a server.js file and put the following code in it:
const express = require("express");
const app = express();
app.get("/", (inRequest, inResponse) => {
  inResponse.send("I am running inside a container!");
});
app.listen("8080", "");
console.log("dockernode ready");
You can, at this point, start this little server:
node server.js
You should be able to access it at http://localhost:8080 (the port the code listens on).
Of course, what it returns, “I am running inside a container!”, is a dirty lie at this point! So, let’s go ahead and make it true!
To do so, we must add another file to the mix: Dockerfile. Yes, that’s literally the name!
A Dockerfile is a file that tells Docker how to build an image. In simplest terms, it is basically a list of commands that Docker will execute, as if it were you, the user, inside a container. Virtually any valid bash commands can be put in it, as well as a few Docker-specific ones. Docker will execute the commands in the order they appear in the file and whatever the state of the container is at the end becomes the final image. So, here’s what we need to put in this Dockerfile for this example:
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
COPY server.js ./
RUN npm install
EXPOSE 8080
CMD [ "node", "server.js" ]
The first command, FROM, is a Docker-specific command (the only one required, in fact) that tells Docker what the base image is. All images must be based on some existing image. If you want to start “from scratch,” the closest you can generally get is to choose an image that is nothing but an operating system. In this case, however, since we’re using Node, we can start from an image that, yes, has an operating system, but then also has Node already installed on top of it. Alternatively, we could start with an image like ubuntu, and then put the commands into the Dockerfile that would install Node (apt-get install nodejs), and we would wind up with an image that is basically the same as this. But let’s be lazy and use what’s already there!
The next command, WORKDIR, really does two things, potentially. First, it creates the named directory if it doesn’t already exist. Then, it does the equivalent of a cd to that directory, making it the current working directory for subsequent commands.
Next, two COPY commands are used. This is another Docker command that copies content from a source directory on the host to a destination directory in the image’s file system. The command is in the form COPY &lt;source&gt; &lt;destination&gt;, so here we’re saying to copy from the current working directory on the host (which should be the project directory) to the current working directory in the image (which is now the one created by the WORKDIR command) any file named package*.json (which means package.json and package-lock.json) and our server.js file.
After that, we must think as if we’re executing these commands ourselves. If someone gave us this Node project, we would next need to install the dependencies listed in package.json. So the Docker RUN command is used, which tells Docker to execute whatever command follows as if we were doing it ourselves at a command prompt (because remember that basically is what a Dockerfile is!).
You know all about the npm install at this point, so after this is done, all the necessary code for the application to run is present in the image.
Now, in this case, we need to expose a network port; otherwise, our host system, let alone any other remote systems, won’t be able to reach our Node app inside the container. The EXPOSE command handles this; it’s a simple matter of telling it which port to expose, which needs to match the one specified in the code, obviously.
Finally, we want to specify a command to execute when the container starts up. There can be only one of these in the file, but we can do virtually anything we want. Here, we need to execute the equivalent of node server.js as we did manually to test the app. The CMD command allows us to do this. The format this command takes is an array of strings where the first element is an executable, and all the remaining elements are arguments to pass to it.
Once that file is created, it’s time to build the image! That just takes a simple command invocation: docker build -t dockernode .
Do that, and you should see an execution something like Figure 12-5 (“Building the dockernode example image”).
Now, if you do a docker images, you should see the dockernode image there. If it is, you can spin up a container based on it:
docker run --name dockernode -p 8080:8080 -d dockernode
At this point, the container should be running (confirm with docker ps), and the app should be reachable from a web browser. Also, if you do docker logs dockernode you should now see the “dockernode ready” string. You could attach to the container if you wanted to now and play around.

Basic Typescript tsconfig.json

{
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "sourceMap": true,
    "outDir": "./dist",
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true,
    "strictFunctionTypes": true,
    "strictBindCallApply": true,
    "strictPropertyInitialization": true,
    "noImplicitThis": true,
    "alwaysStrict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  },
  "include": [ "src/**/*" ]
}

Your own instant JSON server…

Here’s my zero install JSON server:
(Note that I am replacing my github username with ‘me’ and the repo is called ‘shoppingList’.)


with end points:

What’s really going on here: this is a handy service. It is effectively a running JSON server that uses YOUR GITHUB REPO for the data!

So basically:

  • You set up a github repo and in the root directory you place a db.json file
  • The db.json file has some simple JSON such as
    { "items": [
      { "id": 1, "listId": 1, "title": "Peas", "quantity": 1, "price": 6.99 },
      { "id": 2, "listId": 1, "title": "Rice", "quantity": 1, "price": 0.99 }
    ] }
  • You can now immediately use the endpoint for items, e.g.
    (again replace ‘me’ with your username and ‘shoppingList’ with your github repo name)

If you’re developing an app using data from a JSON API (kinda common these days…), now you have a perfect way to test and develop against a real server, where you control the data!

React fetch Hook

Might as well have this for one of the most common operations we do!

import { useState, useEffect } from 'react';

export function useFetch(uri) {
  const [data, setData] = useState();
  const [error, setError] = useState();
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    if (!uri) return;
    fetch(uri)
      .then(response => response.json())
      .then(data => setData(data))
      .then(() => setLoading(false))
      .catch(error => setError(error));
  }, [uri]);

  return { data, error, loading };
}


We’re “Agile”

“We’re Agile”
or so every company in 2020 seems to say.  Followed by “we do standups, retros and backlog refinement!”.
However, in the next breath, most companies give the following reasons why they can’t ‘quite’ be Agile ‘the way it was intended’.
This is a most insidious falsehood because the problems that they list are EXACTLY THE PROBLEMS THAT AGILE ADDRESSES.
Let’s face it – if these factors didn’t exist at most companies we wouldn’t need Agile in the first place!
Having experienced the “we’re so agile” workplaces firsthand I have no desire to repeat the experience.

The main reasons that companies give for not being “truly agile”:

– We have deadlines (that we frequently miss)
– We have acquisitions (that we struggle to integrate)
– We are regulated (but we don’t understand controls well)
– We are constrained by HIPAA deadlines (and we don’t prepare in time)
– We are constrained by Sarbanes-Oxley (and we announce quarterly goals that we are then held to)
– Our compliance department needs controls (and we don’t recognize the ones IT uses anyway)
– We have a fixed IT budget (we don’t take into account the value that each individual IT worker brings)
– We are cost cutting (we are shortsightedly focusing on near-term results at the expense of long term profits).
– We have an IT hiring freeze (we are failing to recognize the power of our IT staff to generate revenue).
– We can’t hire talented people (we don’t create a welcoming culture for A talent).

To restate the point: these are the very reasons you use Agile – to solve THESE problems. It can be done. It has been done and is being done.

People ≠ Resources

People are not Resources!

This concept has been going around the Agile community for a while and deserves more attention.

Referring to people as resources is dehumanizing, insulting and unprofessional.

  1. Q) Why is this a big deal anyway?
  2. Q) Isn’t this just political correctness?
  3. Q) Projects need people, so why aren’t they considered resources to the project?

I believe words count. Every. Single. One.


Let’s step back for a second and look at the power and consequences of this ‘resource’ word.

Imagine a  project where you need two programmers.

You say “we need 2 resources on this”.

Nice.  Easy.  Doesn’t really address any human factors mind you.


Trying again:

We need two people (at least one of which must be senior) on this project.

Nicer.  But still missing the picture.


Trying again:

We need two additional people on this project, so perhaps we could put Stacey and Aveal to work on it…

Although…. Stacey is maxed out on projects X, G and K right now so she’s only available about 10 hours a week.

Also, Aveal is on Paternity Leave until next month and will then be half-time for 3 months when he returns.

Hmmm, maybe we need to rethink this from a people perspective…

Using the term “resources” for people is easy.

But so very wrong. And misleading as to the reality of actual plans that can be accomplished with actual people.  Referring to Stacey and Aveal is reality.

Test Metric Development (TMD)

TMD – Test Metric Development

In Test Metric Development unit test coverage is the guideline.

This is unlike TDD, where the failing test is written first and then drives the design of the application code until it passes – which makes close to 100% test coverage hard to avoid.

In contrast, in TMD, the application code is written first, before any tests and without consideration of its testability.

Once the code appears to work, enough tests are added to keep the code coverage level at some artificially chosen metric.

This is TMD. It is a stage of software development maturity between BDUF (Big Design Up Front) and TDD (Test Driven Development).

js anagrams

Although I got this heap routine from a blog post, I couldn’t help but make a few tweaks, as is usually the case. Nothing that affected or improved performance (based on some timing runs that I did); these were about readability, such as:

– use array deconstructor for the swap
– use Array.fill(0)


function swap(chars, i, j) {
  var tmp = chars[i];
  chars[i] = chars[j];
  chars[j] = tmp;
}

function getAnagrams(input) {
  var counter = [],
      anagrams = [input],
      chars = input.split(''),
      length = chars.length,
      i;
  for (i = 0; i < length; i++) {
    counter[i] = 0;
  }
  i = 0;
  while (i < length) {
    if (counter[i] < i) {
      swap(chars, i % 2 === 1 ? counter[i] : 0, i);
      counter[i]++;
      i = 0;
      anagrams.push(chars.join(''));
    } else {
      counter[i] = 0;
      i++;
    }
  }
  return anagrams;
}

'use strict';

exports.getAnagrams = (input) => {
  const chars = input.split('');
  const anagrams = [chars.join('')];
  const counter = new Array(chars.length).fill(0);
  let j = 0;
  while (j < chars.length) {
    if (counter[j] < j) {
      const k = j % 2 === 1 ? counter[j] : 0;
      [chars[j], chars[k]] = [chars[k], chars[j]];
      anagrams.push(chars.join(''));
      counter[j] += 1;
      j = 0;
    } else {
      counter[j] = 0;
      j += 1;
    }
  }
  return anagrams;
};

Key Javascript DOM methods

DOM Mode

The DOM is the Document Object Model of a page: the in-memory representation of a webpage’s structure. JavaScript comes with a lot of different ways to create and manipulate HTML elements (called nodes).

The following is a subset of some of the most useful properties and methods.

Key Node Properties

  • attributes — Returns a live collection of all attributes registered to an element
  • childNodes — Gives a collection of an element’s child nodes
  • firstChild — Returns the first child node of an element
  • lastChild — The last child node of an element
  • nodeName — Returns the name of a node
  • nodeType — Returns the type of a node
  • nodeValue — Sets or returns the value of a node
  • parentNode — Returns the parent node of an element
  • textContent — Sets or returns the textual content of a node and its descendants

Key Node Methods

  • appendChild() — Adds a new child node to an element as the last child node
  • cloneNode() — Clones an HTML element
  • insertBefore() — Inserts a new child node before a specified, existing child node
  • removeChild() — Removes a child node from an element
  • replaceChild() — Replaces a child node in an element

Key Element Methods

  • getAttribute() — Returns the specified attribute value of an element node
  • getAttributeNode() — Gets the specified attribute node
  • querySelector() — Provides first matching element
  • querySelectorAll() — Provides a collection of all matching elements
  • getElementsByTagName() — Provides a collection of all child elements by tag
  • getElementById() — Provides the element whose id matches
  • getElementsByClassName() — Provides a collection of child elements by class
  • hasAttribute() — Returns true if an element has the specified attribute, else false
  • removeAttribute() — Removes a specified attribute from an element
  • removeAttributeNode() — Takes away a specified attribute node and returns it
  • setAttribute() — Sets or changes the specified attribute to a specified value
  • setAttributeNode() — Sets or changes the specified attribute node

Full list at JS Cheat Sheet
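As a small illustration of how these properties and methods combine, here is a sketch that assumes a browser page containing a hypothetical <ul id="list"> element (browser-only, so it won’t run outside a page):

```javascript
// Assumes a browser environment with a <ul id="list"> element on the page.
const list = document.getElementById('list');

// Create a node, set an attribute, give it text, and attach it.
const item = document.createElement('li');
item.setAttribute('class', 'todo');
item.textContent = 'Write the report';
list.appendChild(item);

// Query it back and inspect node properties.
const found = list.querySelector('li.todo');
console.log(found.nodeName);              // "LI"
console.log(found.getAttribute('class')); // "todo"
console.log(list.lastChild === item);     // true
```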

Javascript Today 03/04/2020


  • classes and methods
  • destructuring for array swap
  • getter for method returning a result
module.exports = class BubbleSort {
  constructor(ary) {
    this.contents = ary;
  }

  get bubbled() {
    const contents = this.contents;
    const size = contents.length;
    for (let outer = size; outer > 0; outer--) {
      for (let inner = 0; inner < outer - 1; inner++) {
        if (contents[inner] > contents[inner + 1]) {
          this.swap(inner);
        }
      }
    }
    return contents;
  }

  swap(index) {
    const contents = this.contents;
    [contents[index], contents[index + 1]] = [contents[index + 1], contents[index]];
  }
};


It’s time to stop testing

Testing, as it has traditionally been done, may no longer be a good idea.

I am talking here about end to end, usually UI based, testing. Even the automated kind.  The kind that I myself have specialized in for several years!

Two simple facts cannot be ignored:

  • Quality for customers is determined by application code quality
  • Tests, in and of themselves, do not improve the quality of application code

These simple facts have bothered me greatly in my role as an automation specialist over the past few years.

One clarification – I’m not talking about unit tests (including unit tests in the UI layer, i.e. JavaScript).  Those are the tests written using TDD, and thus written before the application code, initially failing, to drive the application code design – resulting in testable code that always has tests.  There is never a ‘no time for tests’ situation when you always write the test first.  The practice is harder to do than it is to write about here, but it can be adopted.  Those unit tests are still essential and should never be skipped.

When working on application code itself, I have recently seen the considerable difference in quality that comes from different approaches to writing ES6+ functional-style JavaScript.  It is quite remarkable how many bugs can be avoided by using the modern constructs, along with the huge increase in readability and maintainability that contributes to higher quality code and fewer bugs for customers.

For much of our industry, end-to-end testing has moved from manual to automated processes, and yet at company after company the approaches I encounter still reflect waterfall, command-and-control thinking: the UI tests are written by someone other than the developer, and the feedback to the developers comes days to weeks later, from someone in a defensive, checking, testing role, not a quality improvement role.

‘Our QA is now called QE and is embedded in the team’ is a common refrain.  Unfortunately the next comments are often about how hard it is for them to keep up with developers when they write tests.  Not to mention the fact that their best QE just got “promoted” to (application) developer and now earns $45,000 more per year.  Actions always speak louder than words, and second-class-citizen syndrome becomes rampant and accepted by all.  “The QA person” carries quite a remarkable set of assumptions and triggers implicit biases (many based on real evidence) in our industry.

The other issue is that the testing code base itself takes more maintenance over time, quickly becoming an additional code quality problem of its own – needing tests to test the tests, and even tests to test those.

There are several key approaches that need to be adopted to address this change.  These approaches are well known by many organizations, however they still struggle to realize the changes that are needed in existing processes including architectural approaches.

The key new approaches are:

  • CI – Continuous integration to run all tests in the cloud for all branches during development
  • TDD measurements as KPIs, reporting and compliance measures
  • Immediate feedback from production customers by automated means
  • Canary Releases
  • Blue Green Releases
  • Feature Flags
  • Speed – avoiding test suite run times that continually grow
  • Continuous Deployment reducing MTTR (Mean Time To Recover)
  • Teams that promote contributions from all members and pay equitably

It’s exceedingly hard to do the above because most organizations default to continuing their previously developed testing characteristics of

  • Manually run automation and manual testing
  • TDD compliance not monitored as a KPI
  • Measuring bugs and focusing on speed of response to bugs
  • Production customer real-time automated feedback KPIs not shown in-house on primary displays to development teams
  • Test Suites that grow in length every week
  • QAs being treated as failed or junior developers and paid less

and doing those activities quickly leaves no time for the previously mentioned ‘new approach’ activities that would actually do more to improve quality for customers.  Change is hard, especially when it appears to mean less testing and more risk.  Done correctly, it can actually mean more testing (more unit, less UI) and less risk, if many supporting parts are done correctly – but moving to this model is very hard, and the more established the company and its software development shop, the harder it is.  This is one of the key reasons that small companies and startups continue to disrupt: it is generally easier to adopt new practices than to change existing practices that were successful in the past.
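Feature flags appear in the ‘new approaches’ list above, and they are perhaps the simplest of those practices to sketch.  The flag names and in-memory store below are invented for illustration; real systems typically back this with a flag service or config store:

```javascript
'use strict';

// Hypothetical flag store -- in production this would come from a
// config service, environment, or a vendor SDK, not a literal object.
const flags = { newCheckout: true, betaSearch: false };

// Central check so call sites stay one-liners.
const isEnabled = (name) => Boolean(flags[name]);

// A call site chooses a code path based on the flag, letting the new
// path ship dark and be turned on (or off) without a redeploy.
const checkoutHandler = () =>
  isEnabled('newCheckout') ? 'new checkout flow' : 'old checkout flow';

module.exports = { isEnabled, checkoutHandler };
```

Flipping `newCheckout` to `false` reverts customers to the old flow in seconds, which is exactly the fast-recovery property the business predictability concern asks for.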

To summarize, improve quality for customers with

Less End to End testing…


and more quality activities such as…

Code Linting, Code Grading, Code Reviews, Code Education and Training, Immutable Objects, Internal Iterators, TDD Measurement, Well named objects and methods, avoiding premature optimization, Short methods, Short Classes, Single Responsibility, English readable, well stubbed and mocked, SOLID principle based code that is deployed in a modern CI/CD environment that provides immediate automated feedback based on customer activity and provides the facility to revert changes with minimal effort within seconds or minutes.

Works on ALL my machines !

It’s a familiar situation at work: code ‘works on my machine’… but when another developer pulls it, or a staging deploy or a production deploy happens, it doesn’t work on that machine.  There are many practices to address this – virtual machines, Docker, automated tests, etc.

There is a similar situation in learning new technology skills: the same “I did it once on my machine and it worked, but when I tried later to do it, it didn’t work.  I don’t remember exactly how I did it before, and this time I encounter unexpected problems I didn’t experience before.”

This leads to a number of issues:

  • I typed something once.  I’m unlikely to remember that in a week
  • At some point I’ll try and use a different machine
  • Dependency hell – stuff seems to be ok but then on machine X it isn’t
  • I didn’t encounter any problems so didn’t learn how to get around them

To address this, I use a practice of switching machines 2-3 times a day.

This approach developed naturally over time to match my daily schedule, i.e. working from home on a desktop, working from a cafe on a laptop, and then working from home on the desktop again.  As with other coding activities, I am addressing the pain point by doing the activity more often, not less.  This invokes the lazy programmer in me, who will then fix ephemeral issues such as local setup and gain the experience needed to walk into other situations and make progress, having encountered and conquered many different setup issues while learning.  It also increases the amount of source code management I do through git, which is always good practice.

I recently stopped coding within a Dropbox directory because I need to exclude node_modules/ and Dropbox doesn’t allow that (you’d have to selective-sync every node project’s node_modules directory, which is way too much config management for me).


git setups

It’s a small thing, but… when I get

There is no tracking information for the current branch.
Please specify which branch you want to merge with.
See git-pull(1) for details.

    git pull <remote> <branch>

If you wish to set tracking information for this branch you can do so with:

    git branch --set-upstream-to=origin/<branch> master

it’s easy to fix because in the [alias] section of my ~/.gitconfig file I’ve got

[alias]
  setups = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`

which  lets me simply type

git setups

to get

Branch 'master' set up to track remote branch 'master' from 'origin' by rebasing.

Vim for JS

I’m having the time to really fix and set up my tooling and that is such a good thing.

Today a couple of seemingly basic tasks I’d not had time for recently.

  1. Get JavaScript tabs-as-spaces working as expected
  2. Get Vim paste working within JavaScript files

These are a pretty big deal now that I’m immersed in the world of good-looking ES6 JavaScript.
The last thing I want is my carefully styled code looking like blahhhh in other formats, editors, etc. due to mixed tabs and spaces.

As is often the case, the fixes turned out to be much simpler than feared.  Let’s face it, when you are changing your main tool’s configuration there is good reason (and, for me, experience) to fear what might happen if you mess it up.

For 1, the changes were to add a line to my ~/.vimrc file for javascript the same way I had previously done for Ruby.  For ruby I have:

autocmd FileType ruby setlocal ts=2 sts=2 sw=2 expandtab

so for Javascript I just added

autocmd FileType javascript setlocal ts=2 sts=2 sw=2 expandtab

For issue 2 – the fact that pastes kept getting more and more indented – I found that the fix was to do [esc]:set paste before doing the paste.  Remember to then [esc]:set nopaste after pasting, because I remember something else breaking later if you don’t reset it.
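A related convenience (my addition, not part of the original fix) is Vim’s built-in paste toggle.  Assuming F2 is free on your keyboard, one line in ~/.vimrc lets you flip paste mode with a single key instead of typing the two commands:

```vim
" Toggle :set paste / :set nopaste with F2
set pastetoggle=<F2>
```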

Now I know I’ll be much more prepared to share js code with my IDE fan friends !

Test Code Samples

Examples of tests I have written in various frameworks and languages

  • Languages
    • Javascript
    • Python
    • Ruby
    • Java
    • C#
  • Frameworks
    • Chai
    • Rails
    • Rspec
    • Mocha
    • Jasmine
    • Selenium
    • Capybara
    • Protractor
  • Features
    • Tags
    • DSLs
    • Expect
    • Retry Flakies
    • Page Objects
    • ES6 Javascript
    • Happy, Sad, Smoke
    • Suites and Examples
    • Multi Browser Testing
    • Before and Before Each for DRY code
  • Test Types
    • Unit
    • Integrated
    • Browser UI


The Javascript and Ruby examples are more complete and reflect languages I have used more extensively.
Ruby is the best example of Page Objects, Retries and Tags.

You can see a number of youtube videos of me coding TDD/BDD exercises in Ruby and Javascript at:

They include some simple examples of refactoring, which is another favourite activity of mine.

Python, C# and Java are intended as basic templates for languages I have used less recently.

Modern Javascript Kata

Practicing es6-7-8-9-10 approaches such as

  • CLOVA – const over let over var for scoping
  • ASYNC – async, await, resolve, reject, Promise
  • SADR – Spread and Deconstruct Rest
  • CAMP – Classes and Methods using Prototype
  • ARF – Arrow Functions for readability, preserving this, not hoisted
  • DEVA – Default parameter values
  • AFIN – Array.find for first array value
  • NAN – isNaN for not a number
  • TEMPLAR – Template Literals are readable
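A minimal kata sketch putting several of these together (the function and variable names are mine, chosen just for the drill):

```javascript
'use strict';

// CLOVA: const over let over var, throughout.
// DEVA + ARF + TEMPLAR: default parameter, arrow function, template literal.
const greet = (name = 'world') => `hello, ${name}`;

// SADR: spread copies the array so sort() doesn't mutate the original.
const nums = [3, 1, 2];
const sorted = [...nums].sort((a, b) => a - b);

// SADR again: array deconstruction plus rest.
const [smallest, ...rest] = sorted;

module.exports = { greet, nums, sorted, smallest, rest };
```

Each line exercises one of the acronyms above, which is the point of a kata: small, repeatable, and easy to verify.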


Security in mind

Don’t commit your credentials with your source code !

This is important advice.

The question then comes up though – where can I store them and still work efficiently and effectively on a day-to-day basis?

The first choice might seem to be your .bashrc (or .bash_profile) config file.

for example

export AWS_ACCESS_KEY_ID='abc123'
export AWS_SECRET_ACCESS_KEY='abc23456789'

However, if you are lazy like me and don’t want to manually add this to your current .bashrc every time you switch machines, you will likely store your config files online.  Although not in the source code of the application (which is good), this is still additional exposure for your secrets.
I also like to make my ‘setup’ files available publically to others as public github repos and I definitely don’t want to be publishing my secrets that way !

The answer to this was to put the setting of the AWS credentials in a separate file and then include that file if it exists. For this I created the file

with the above two export lines.

and of course

$ chmod +x

to make it executable

Then I check for this file and use it if it exists.

The additional file is the part that does not ever get committed to any source code repository, and creating it is the one part of the process that you do manually each time you set up a new project, or set up a project for the first time on a different machine.  This provides the security of not having credentials in any code base.

This is done with this code added as the last line in my .bashrc file:

test -f ~/ && . $_
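Putting the pattern together, with a hypothetical filename ~/.aws_secrets standing in for the name omitted above:

```shell
# ~/.aws_secrets  (hypothetical name) -- created by hand on each machine,
# never committed to any repository
export AWS_ACCESS_KEY_ID='abc123'
export AWS_SECRET_ACCESS_KEY='abc23456789'

# last line of ~/.bashrc: source the secrets file only if it exists
test -f ~/.aws_secrets && . ~/.aws_secrets
```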

Longer term, a key management service such as AWS KMS provides better ways to automate and protect secrets in most orgs.


A testing State of mind

“In order to have quality UI automation* you need to control state”

I wrote this a year ago and it’s as true today as it was then.

To control state you need:

  1. APIs to create test data state
  2. Session controls to set user state
  3. DB controls to set application state
  4. Environment state control (VMs, lambdas)


*Quality UI Automation is defined as:

  • Fast
  • Decoupled
  • User focused
  • Fault Tolerant
  • Easy to change
  • Highly available
  • Easy to maintain
  • Tagged for test type
  • Test pyramid based
  • Documentation as Code
  • Providing actionable feedback


Study What Stays The Same

One thing I’ve noticed over the course of a career in programming is a broad distinction between skills that continue to be useful over long periods of time and skills that become outdated, fall out of use, and are replaced by new skills and knowledge.  I’d like to name a JavaScript framework as the latest shiny-toy example, but by the time I publish this article there will probably already be a successor to it.

You will always need to have some of the more current and in-demand skills and expertise that are (only) needed for your work today, but be sure to blend in longer term skills which will improve your overall productivity over a longer time frame.

For skills that change over time I am talking about:

  • Editors
  • Languages
  • Database Flavors

For skills that just keep on providing more value the better you get at them I mean:

  • Linux
  • Testing
  • Networks
  • Decoupling
  • Readable code
  • Small methods
  • Naming things
  • Command Line
  • YAML and JSON
  • Using REST API’s
  • Source Control (Git)
  • Pairing and Mobbing

One of the difficult things about this list is that much, if not most, of it will not be taught to you in school or provided by your employer, so self-study in most of these areas is essential.  The great thing is that:

The above list will be important in virtually every language you use

Statistical Methods for Quality Assurance


I’m getting a little refresh on techniques for measuring in the quality field.
Some apply more than others in modern software development.
It’s always good to refresh the fundamentals on measurement.

Great value comes from determining exactly what to measure in an industry where change is constant and indeed the norm.
Great caution must be present.
Be very careful what you measure and why you measure it.

Comments are back !

I avoid comments in my code these days.
Long gone are previous practices of carefully crafted blocks of comments.
Happily replaced with well named methods, class and variables.

So how are comments back in style for me?

One Phrase: Infrastructure As Code

A key part of infrastructure as code is the use of configuration files.

They usually come in two flavors – JSON and YAML.

JSON is ugly to me:

{
    "a": "1",
    "b": {
        "x": "1"
    }
}

YAML is much cleaner to me:

a: "1"
b:
  x: "1"
Apart from that however (and the point of this article) is that there is another difference and that is that YAML allows comments
This is useful because, unlike programming languages, you can’t just replace a comment with a well named class and method that describes what the comment would have said.  All you have is the YAML identifiers and changing them will likely affect any existing application that relies on their current format, i.e. if they are being used they are a dependency that can break.
YAML files can therefore be somewhat cryptic, hard to understand and hard to change.

Comments to the rescue!

If used carefully, I have found that comments have a clear role in YAML files to help out the future me.


a: 1  # Key knowledge here
x: 1  # Key knowledge here

Conclusion:  Use comments wisely when appropriate in YAML files


How to test and what to test for an API

At a high level

Test the API Endpoints, Status Codes and Data with Smoke, Happy and Sad Tests

At a detailed level one needs to ask the following questions.

The answers will guide what and how to test.

  • What documentation exists ?
  • What functionality does it provide ?
  • Does it support concurrency ?
  • What are the API endpoints ?
  • Is the API internal or external ?
  • Which endpoints are idempotent ?
  • Are endpoints stateless or stateful ?
  • Do any workflows*1 vary by client ?
  • Are there performance requirements ?
  • Do API endpoints make up a workflow ?
  • What validations are expected for data ?
  • What system or library is behind the API ?
  • Do we need to mock dependent services ?
  • Does it constrain traffic aka Rate Limiting ?
  • What (if any) versioning approach is used ?
  • Does the API support Multiple Languages ?
  • If already using SoapUI, how is it integrated ?
  • Is the API restricted to a country or region ?
  • Does it provide client stubs in specific languages ?
  • What status codes are expected for given endpoints ?
  • What domain format and structure exists for the data ?
  • Does the API use HATEOAS*2 for self-documentation ?
  • What kind of data validation/ testing can be performed ?
  • What API is supported by the test framework I’m using ?
  • What actions are performed, e.g. GET, PUT, POST etc ?
  • Do we need to prepare dependent test data or services ?
  • What non-API approaches will be needed to verify data ?
  • Are there existing API definitions, e.g. WADL, WSDL, Thrift ?
  • What non-API approaches will be needed to prepare data ?
  • What (if any) Authorization (‘what’) mechanism will be used ?
  • What (if any) Authentication (‘who’) mechanism will be used ?
  • Who will use it, external programmers or another internal module ?
  • What format(s): SOAP, REST, GraphQL, Thrift, Protocol Buffers, Other ?

*1 Workflows often require multiple API calls and may have dependencies between them
*2 HATEOAS – Hypermedia As The Engine Of Application State, which allows self-discovery of an API
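As a hedged sketch of the “Smoke, Happy and Sad” idea above: getUser and its responses below are invented stand-ins, stubbed in place so the example is self-contained rather than calling a real service.

```javascript
'use strict';

// Hypothetical API client, stubbed: a real version would issue an HTTP GET
// and return { status, body } from the response.
const getUser = (id) => {
  if (typeof id !== 'number') return { status: 400, body: null };
  if (id === 42) return { status: 200, body: { id: 42, name: 'Stacey' } };
  return { status: 404, body: null };
};

// Smoke: the endpoint responds without a server error at all.
const smoke = () => getUser(42).status < 500;

// Happy: valid input returns 200 with well-formed data.
const happy = () => {
  const res = getUser(42);
  return res.status === 200 && typeof res.body.name === 'string';
};

// Sad: invalid input returns the expected 4xx status codes.
const sad = () =>
  getUser('not-a-number').status === 400 && getUser(7).status === 404;

module.exports = { getUser, smoke, happy, sad };
```

The three checks map directly onto endpoints, status codes and data, which is the high-level guidance this post opens with.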

Credit to whose focus was performance testing.

Beautiful Questions

15 faves

  • What Else ?
  • How can I help ?
  • Can I begin now ?
  • Do you know why ?
  • What would you do ?
  • What could I change ?
  • What is your opinion ?
  • What can I stop doing ?
  • What do you like least ?
  • What am I so afraid of ?
  • Do I reflect and ask why ?
  • Do I ask Why before How ?
  • Can I take connect breaks ?
  • What would an outsider do ?
  • Do I admit being wrong frequently ?