React fetch Hook

Might as well have this for one of the most common operations we do!

import React, { useState, useEffect } from 'react';

export function useFetch(uri) {
  const [data, setData] = useState()
  const [error, setError] = useState()
  const [loading, setLoading] = useState(true)

  useEffect(() => {
    if (!uri) return;
    fetch(uri)
      .then(response => response.json())
      .then(setData)
      .then(() => setLoading(false))
      .catch(setError);
  }, [uri]);

  return {
    loading,
    data,
    error
  };
}
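
Here’s a minimal usage sketch (the component name, endpoint and field names are all made up for illustration):

import React from 'react';
import { useFetch } from './useFetch';

export function UserList() {
  const { loading, data, error } = useFetch('https://api.example.com/users');

  if (loading) return <p>Loading...</p>;
  if (error) return <p>Something went wrong</p>;
  return (
    <ul>
      {data.map(user => <li key={user.id}>{user.name}</li>)}
    </ul>
  );
}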

We’re “Agile”

“We’re Agile”
or so every company in 2020 seems to say, followed by “we do standups, retros and backlog refinement!”.
However, in the next breath, most companies give the following reasons why they can’t ‘quite’ be Agile ‘the way it was intended’.
This is a most insidious falsehood, because the problems they list are EXACTLY THE PROBLEMS THAT AGILE ADDRESSES.
Let’s face it – if these factors didn’t exist at most companies we wouldn’t need Agile in the first place!
Having experienced the 1980s and 1990s (not Agile) workplaces firsthand, I have no desire to be in that culture again.

The main reasons that companies give for not being “truly agile”:

– We have deadlines (that we frequently miss)
– We have acquisitions (that we struggle to integrate)
– We are regulated (but we don’t understand controls well)
– We are constrained by HIPAA deadlines (and we don’t prepare in time)
– We are constrained by Sarbanes-Oxley (and we announce quarterly goals that we are then held to)
– Our compliance department needs controls (and we don’t recognize the ones IT uses anyway)
– We have a fixed IT budget (we don’t take into account the value that each individual IT worker brings)
– We are cost cutting (we are short-sightedly focusing on near-term results at the expense of long-term profits)
– We have an IT hiring freeze (we are failing to recognize the power of our IT staff to generate revenue)
– We can’t hire talented people (we don’t create a welcoming culture for A talent)

To restate the point: these are the very reasons you use Agile, to solve THESE problems. It can be done. It has been done and is being done.

People ≠ Resources

People are not Resources!

This concept has been going around the Agile community for a while and deserves more attention.

Referring to people as resources is dehumanizing, insulting and unprofessional.

  1. Q) Why is this a big deal anyway?
  2. Q) Isn’t this just political correctness?
  3. Q) Projects need people, so why aren’t they considered resources to the project?

I believe words count. Every. Single. One.

Let’s step back for a second and look at the power and consequences of this ‘resource’ word.

Imagine a project where you need two programmers.

You say “we need 2 resources on this”.

Nice.  Easy.  Doesn’t really address any human factors, mind you.


Trying again:

We need two people (at least one of which must be senior) on this project.

Nicer.  But still missing the picture.


Trying again:

We need two additional people on this project, so perhaps we could put Stacey and Aveal to work on it…

Although…. Stacey is maxed out on projects X, G and K right now, so she’s only available about 10 hours a week.

Also, Aveal is on Paternity Leave until next month and will then be half-time for 3 months when he returns.

Hmmm, maybe we need to rethink this from a people perspective…

Using the term “resources” for people is easy.

But so very wrong. And misleading as to the reality of actual plans that can be accomplished with actual people. Referring to Stacey and Aveal is reality.

Test Metric Development (TMD)

TMD – Test Metric Development

In Test Metric Development unit test coverage is the guideline.

This is unlike TDD, where the failing test is written first and then drives the design of the application code until the test passes, an approach with which close to 100% test coverage is hard to avoid.

In contrast, in TMD, the application code is written first, before any tests and without consideration of its testability.

Once the code appears to work, enough tests are added to keep the code coverage level at some artificially chosen metric.

This is TMD. It is a stage of software development maturity between BDUF (Big Design Up Front) and TDD (Test Driven Development).
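
For contrast, here’s what the TDD half of that spectrum looks like as a minimal Mocha-style sketch (the calculator module and its add function are made-up names). The test exists, and fails, before the application code it describes:

const assert = require('assert');
// This module doesn't exist yet, so the test below fails first.
// That failure then drives the design of the application code.
const { add } = require('./calculator');

describe('add', () => {
  it('sums two numbers', () => {
    assert.strictEqual(add(2, 3), 5);
  });
});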

js anagrams

Although I got this heap routine from a blog post, I couldn’t help but make a few tweaks, as is usually the case. None of them affected performance (based on some timing runs that I did); these were about readability, such as:

– use array destructuring for the swap
– use Array.fill(0)

Before:

function swap(chars, i, j) {
  var tmp = chars[i];
  chars[i] = chars[j];
  chars[j] = tmp;
}

function getAnagrams(input) {
  var counter = [],
      anagrams = [],
      chars = input.split(''),
      length = chars.length,
      i;
  for (i = 0; i < length; i++) {
    counter[i] = 0;
  }
  anagrams.push(input);
  i = 0;
  while (i < length) {
    if (counter[i] < i) {
      swap(chars, i % 2 === 1 ? counter[i] : 0, i);
      counter[i]++;
      i = 0;
      anagrams.push(chars.join(''));
    } else {
      counter[i] = 0;
      i++;
    }
  }
  return anagrams;
}

After:
'use strict';

exports.getAnagrams = (input) => {
  const anagrams = new Array(input);
  const chars = input.split('');
  const counter = new Array(chars.length).fill(0);
  let j = 0;
  while (j < chars.length) {
    if (counter[j] < j) {
      const k = j % 2 === 1 ? counter[j] : 0;
      [chars[j], chars[k]] = [chars[k], chars[j]];
      counter[j]++;
      j = 0;
      anagrams.push(chars.join(''));
    } else {
      counter[j] = 0;
      j += 1;
    }
  }
  return anagrams;
};
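
For example, running the new version on a three-letter input yields all six permutations (assuming the module is saved as anagrams.js):

const { getAnagrams } = require('./anagrams');

console.log(getAnagrams('abc'));
// [ 'abc', 'bac', 'cab', 'acb', 'bca', 'cba' ]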

Key Javascript DOM methods

DOM Mode

The DOM is the Document Object Model of a page: the coded representation of a webpage’s structure. JavaScript comes with a lot of different ways to create and manipulate HTML elements (called nodes).

The following is a subset of some of the most useful properties and methods.

Key Node Properties

  • attributes — Returns a live collection of all attributes registered to an element
  • childNodes — Gives a collection of an element’s child nodes
  • firstChild — Returns the first child node of an element
  • lastChild — The last child node of an element
  • nodeName — Returns the name of a node
  • nodeType — Returns the type of a node
  • nodeValue — Sets or returns the value of a node
  • parentNode — Returns the parent node of an element
  • textContent — Sets or returns the textual content of a node and its descendants

Key Node Methods

  • appendChild() — Adds a new child node to an element as the last child node
  • cloneNode() — Clones an HTML element
  • insertBefore() — Inserts a new child node before a specified, existing child node
  • removeChild() — Removes a child node from an element
  • replaceChild() — Replaces a child node in an element

Key Element Methods

  • getAttribute() — Returns the specified attribute value of an element node
  • getAttributeNode() — Gets the specified attribute node
  • querySelector() — Provides first matching element
  • querySelectorAll() — Provides a collection of all matching elements
  • getElementsByTagName() — Provides a collection of all child elements by tag
  • getElementById() — Provides an Element whose id matches
  • getElementsByClassName() — Provides a collection of child elements by class
  • hasAttribute() — Returns true if an element has any attributes, else false
  • removeAttribute() — Removes a specified attribute from an element
  • removeAttributeNode() — Takes away a specified attribute node and returns it
  • setAttribute() — Sets or changes the specified attribute to a specified value
  • setAttributeNode() — Sets or changes the specified attribute node

Full list at JS Cheat Sheet
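
Putting a few of these together, here’s a quick sketch that adds, inspects and removes a node (it assumes a page containing a <ul id="list"> element):

const list = document.getElementById('list');   // find the element by id
const item = document.createElement('li');      // create a new node
item.textContent = 'New entry';                 // set its text
item.setAttribute('class', 'highlight');        // set an attribute
list.appendChild(item);                         // attach as the last child

console.log(list.lastChild.textContent);        // "New entry"

list.removeChild(list.lastChild);               // and remove it again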

Javascript Today 03/04/2020

Featuring:

  • classes and methods
  • destructuring for array swap
  • getter for method returning a result

module.exports = class BubbleSort {
  constructor(ary) {
    this.contents = ary;
  }

  get bubbled() {
    const contents = this.contents;
    const size = contents.length;
    for (let outer = size - 1; outer > 0; outer--) {
      for (let inner = 0; inner < outer; inner++) {
        if (contents[inner] > contents[inner + 1]) {
          this.swap(inner);
        }
      }
    }
    return contents;
  }

  swap(index) {
    const contents = this.contents;
    [contents[index], contents[index + 1]] = [contents[index + 1], contents[index]];
  }
};
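
A quick usage sketch (assuming the class is saved as bubble_sort.js):

const BubbleSort = require('./bubble_sort');

console.log(new BubbleSort([5, 1, 4, 2, 3]).bubbled); // [ 1, 2, 3, 4, 5 ]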


It’s time to stop testing

Testing, as it has traditionally been done, may no longer be a good idea.

I am talking here about end to end, usually UI based, testing. Even the automated kind.  The kind that I myself have specialized in for several years!

Two simple facts cannot be ignored:

  • Quality for customers is determined by application code quality
  • Tests, in and of themselves, do not improve the quality of application code

These simple facts have bothered me greatly in my role as an automation specialist over the past few years.

One clarification: I’m not talking about unit tests (including unit tests in the UI layer, i.e. JavaScript).  Those are the tests that are written using TDD and thus must be written before the application code, initially failing, to drive the application code design, resulting in testable code that always has tests.  There is never a ‘no time for tests’ situation when you always write the test first.  The practice is harder to do than it is to write about here, but it can be adopted.  Those unit tests are still essential and should never be skipped.

When working on application code itself, I have recently seen the considerable difference in quality between different approaches to writing ES6+ functional-style JavaScript code.  The number of bugs that can be avoided by using the modern constructs is quite remarkable, along with the huge increase in readability and maintainability, all of which contributes to higher quality code and fewer bugs for customers.

For much of our industry, end-to-end testing has moved from manual to automated processes, and yet at company after company the approaches I encounter still reflect waterfall and command-and-control thinking: the UI tests are written by someone other than the developer, and the feedback to the developers comes days to weeks later from someone else who is in a defensive, checking, testing role, not a quality improvement role.

‘Our QA is now called QE and is embedded in the team’ is a common refrain.  Unfortunately the next comments are often about how hard it is for them to keep up with developers when they write tests.  Not to mention the fact that their best QE just got “promoted” to (application) developer and now earns $45,000 more per year.  Actions always speak louder than words, and second-class citizen syndrome becomes rampant and accepted by all.  “The QA person” carries a remarkable set of assumptions and triggers implicit biases (many based on real evidence) in our industry.

The other issue is that the testing code base itself takes more maintenance over time, quickly becoming an additional code quality issue of its own, needing tests to test the tests, and even tests to test those.

There are several key approaches that need to be adopted to address this change.  These approaches are well known by many organizations; however, they still struggle to realize the changes that are needed in existing processes, including architectural approaches.

The key new approaches are:

  • CI – Continuous integration to run all tests in the cloud for all branches during development
  • TDD measurements as KPIs, reporting and compliance measures
  • Immediate feedback from production customers by automated means
  • Canary Releases
  • Blue Green Releases
  • Feature Flags
  • Speed – Avoiding testing suite time lengths that continually grow
  • Continuous Deployment reducing MTTR (Mean Time To Recover)
  • Teams that promote contributions from all members and pay equitably

It’s exceedingly hard to do the above because most organizations default to continuing their previously developed testing characteristics of

  • Manually run automation and manual testing
  • TDD compliance not monitored as a KPI
  • Measuring bugs and focusing on speed of response to bugs
  • Production customer real-time automated feedback KPIs not shown in-house on primary displays to development teams
  • Test Suites that grow in length every week
  • QAs treated as failed or junior developers and paid less

and doing those activities quickly leads to no time for the previously mentioned ‘new approach’ activities that would actually be of more benefit in improving quality for customers.  Change is hard, especially when it appears to mean less testing and more risk.  When done correctly it can actually mean more testing (more unit, less UI) and less risk, if the many supporting parts are done correctly, but moving to this model is very hard, and the more established the company and its software development shop, the harder it is.  This is one of the key reasons that small companies and startups continue to disrupt: it is generally easier to adopt new practices than to change existing practices that were successful in the past.

To summarize, improve quality for customers with

Less End to End testing…

please!

and more quality activities such as…

Code linting, code grading, code reviews, code education and training, immutable objects, internal iterators, TDD measurement, well-named objects and methods, avoiding premature optimization, short methods, short classes, single responsibility, English-readable, well-stubbed-and-mocked, SOLID-principle-based code that is deployed in a modern CI/CD environment, one that provides immediate automated feedback based on customer activity and the facility to revert changes with minimal effort within seconds or minutes.

Works on ALL my machines !

It’s a familiar situation at work where code ‘works on my machine’… but when another developer tries it… or a staging deploy… or a production deploy happens, it doesn’t work on that machine.  There are many practices to address this – virtual machines, docker, automated tests, etc.

There is a similar situation in learning new technology skills: the same “I did it once, on my machine, and it worked, but when I tried later to do it, it didn’t work.  I don’t remember exactly how I did it before and this time I encounter unexpected problems I didn’t experience before.”

This leads to a number of issues:

  • I typed something once.  I’m unlikely to remember that in a week
  • At some point I’ll try and use a different machine
  • Dependency hell – stuff seems to be ok but then on machine X it isn’t
  • I didn’t encounter any problems so didn’t learn how to get around them

To address this, I use a practice of switching machines 2-3 times a day.

This approach developed naturally over time to match my daily schedule, i.e. by working from home on a desktop, working from a cafe on a laptop and then working from home on the desktop again.  As with other coding activities, I am addressing the pain point by doing the activity more often, not less.  This invokes the lazy programmer in me, who will then address ephemeral issues such as local setup and gain the experience I need to be able to walk into other situations and make progress, having encountered and conquered many different setup issues while learning.  It also ups the amount of source code management that I do through git, which is always good practice.  I recently stopped coding within a Dropbox directory ‘cos I need to exclude node_modules/ and Dropbox doesn’t allow that (you’d have to selectively sync every node project’s node_modules directory, which is waaaay too much config management for me).

 

git setups

It’s a small thing, but… when I get

There is no tracking information for the current branch. 
Please specify which branch you want to merge with. 
See git-pull(1) for details. 
git pull <remote> <branch> 
If you wish to set tracking information for this branch you can with
git branch --set-upstream-to=origin/<branch> master

it’s easy to fix ‘cos in my ~/.gitconfig file I’ve got

[alias]
setups = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`

which lets me simply type

git setups

to get

Branch 'master' set up to track remote branch 'master' from 'origin' by rebasing.
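
Under the hood, git symbolic-ref --short HEAD prints the current branch name, so on a branch named feature-x (for example) the alias expands to:

git branch --set-upstream-to=origin/feature-x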

Vim for JS

I’m having the time to really fix and set up my tooling and that is such a good thing.

Today I tackled a couple of seemingly basic tasks I’d not had time for recently.

  1. Get javascript tabs as spaces working as expected
  2. Get vim paste working within javascript files

These are a pretty big deal now that I’m immersed in the world of good-looking es6 javascript.
The last thing I want is my carefully styled code looking like blahhhh in other formats, editors, etc. due to mixed tabs and spaces.

As is often the case, the fixes turned out to be much simpler than feared.  Let’s face it, when you are changing your main tool’s configuration there is good reason (and, for me, experience) to be wary of what might happen if you mess it up.

For 1, the change was to add a line to my ~/.vimrc file for javascript the same way I had previously done for Ruby.  For ruby I have:

autocmd FileType ruby setlocal ts=2 sts=2 sw=2 expandtab

so for Javascript I just added

autocmd FileType javascript setlocal ts=2 sts=2 sw=2 expandtab

for issue 2, the fact that pastes
kept getting
more and more indented

I found that the fix for that was to do [esc]:set paste before doing the paste.  Remember to then do [esc]:set nopaste after pasting, because I remember something else breaking later if you don’t reset it.

Now I know I’ll be much more prepared to share js code with my IDE fan friends!

Test Code Samples

https://github.com/durrantm/code_samples

Examples of tests I have written in various frameworks and languages

  • Languages
    • Javascript
    • Python
    • Ruby
    • Java
    • C#
  • Frameworks
    • Chai
    • Rails
    • Rspec
    • Mocha
    • Jasmine
    • Selenium
    • Capybara
    • Protractor
  • Features
    • Tags
    • DSLs
    • Expect
    • Retry Flakies
    • Page Objects
    • ES6 Javascript
    • Happy, Sad, Smoke
    • Suites and Examples
    • Multi Browser Testing
    • Before and Before Each for DRY code
  • Test Types
    • Unit
    • Integrated
    • Browser UI


The Javascript and Ruby examples are more complete and reflect languages I have used more extensively.
Ruby is the best example of Page Objects, Retries and Tags.

You can see a number of YouTube videos of me coding TDD/BDD exercises in Ruby and Javascript at:

They include some simple examples of refactoring, which is another favourite activity of mine.

Python, C# and Java are intended as basic templates for languages I have used less recently.

Modern Javascript Kata

Practicing es6-7-8-9-10 approaches such as

  • CLOVA – const over let over var for scoping
  • ASYNC – async, await, resolve, reject, Promise
  • SADR – Spread and Deconstruct Rest
  • CAMP – Classes and Methods using Prototype
  • ARF – Arrow Functions for readability, preserving this, not hoisted
  • DEVA – Default parameter values
  • AFIN – Array.find for first array value
  • NAN – isNaN for not a number
  • TEMPLAR – Template Literals are readable

C A S C A D A N T
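
A tiny practice sketch touching most of these at once (all names and values are made up for the kata):

'use strict';

// CLOVA: const over let over var
const temperatures = [12, 7, NaN, 22, 3];

// ARF + DEVA + TEMPLAR: arrow function, default parameter, template literal
const describeTemp = (temp, unit = 'C') => `${temp}°${unit}`;

// SADR: spread/deconstruct/rest
const [first, ...rest] = temperatures;

// AFIN: Array.find for the first matching value
const firstWarm = temperatures.find(t => t > 20);

// NAN: Number.isNaN avoids the coercion surprises of plain isNaN
const valid = temperatures.filter(t => !Number.isNaN(t));

console.log(describeTemp(firstWarm), first, rest.length, valid.length);

// ASYNC: async/await wrapping a Promise that resolves
const fetchTemp = () => new Promise(resolve => resolve(18));
(async () => console.log(await fetchTemp()))();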

Security in mind

Don’t commit your credentials with your source code!

This is important advice.

The question then comes up though – where can I store them and still work efficiently and effectively on a day-to-day basis?

The first choice might seem to be in your .bashrc (or .bash_profile) config file,

for example

export AWS_ACCESS_KEY_ID='abc123'
export AWS_SECRET_ACCESS_KEY='abc23456789'

However, if you are lazy like me and don’t want to have to manually add this to your current .bashrc every time you switch machines, you will likely store your config files online.  Although not in the source code of the application (which is good), this is still additional exposure for your secrets.
I also like to make my ‘setup’ files available publicly to others as public github repos and I definitely don’t want to be publishing my secrets that way!

The answer to this was to put the setting of the AWS credentials in a separate file and then include that file if it exists. For this I created the file

set_aws_credentials.sh

with the above two export lines.

and of course

$ chmod +x set_aws_credentials.sh

to make it executable.

Then I check for this file and use it if it exists.

The additional file (set_aws_credentials.sh) is the part that never gets committed to any source code repository, and is the one part of the process that you do manually each time you set up a new project or set up a project for the first time on a different machine.  This provides the security of not having credentials in any code base.

This is done with this code added as the last line in my .bashrc file:

test -f ~/set_aws_credentials.sh && . $_

Longer term, KMS provides better ways to automate and protect secrets in most orgs.


A testing State of mind

“In order to have quality UI automation* you need to control state”

I wrote this a year ago and it’s as true today as it was then.

To control state you need:

  1. APIs to create test data state
  2. Session controls to set user state
  3. DB controls to set application state
  4. Environment state control (VMs, lambdas)
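
For example, item 1 (APIs to create test data state) might look like this in practice (the endpoint, payload and field names are all hypothetical):

const seedUser = async () => {
  // Create a known user via a test-data API before the UI test runs
  const res = await fetch('https://api.example.com/test-data/users', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name: 'Test User', role: 'admin' })
  });
  return res.json(); // the UI test can now log in as this user directly
};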


*Quality UI Automation is defined as:

  • Fast
  • Decoupled
  • User focused
  • Fault Tolerant
  • Easy to change
  • Highly available
  • Easy to maintain
  • Tagged for test type
  • Test pyramid based
  • Documentation as Code
  • Providing actionable feedback


Study What Stays The Same

One thing I’ve noticed over the course of a career in programming is a broad distinction between skills that continue to be useful over long periods of time and skills that become outdated, are no longer used, and are replaced by other new skills and knowledge.  I’d like to name a javascript framework as the latest shiny toy example, but by the time I publish this article there will probably be a successor to it already.

You will always need to have some of the more current and in-demand skills and expertise that are (only) needed for your work today, but be sure to blend in longer term skills which will improve your overall productivity over a longer time frame.

For skills that change over time I am talking about:

  • Editors
  • Languages
  • Database Flavors

For skills that just keep on providing more value the better you get at them I mean:

  • Linux
  • Testing
  • Networks
  • Decoupling
  • Readable code
  • Small methods
  • Naming things
  • Command Line
  • YAML and JSON
  • Using REST APIs
  • Source Control (Git)
  • Pairing and Mobbing

One of the difficult things about this list is that much, if not most, of it will not be taught to you in school or provided to you by your employer, so self-study for most of these areas is essential.  The great thing is that:

The above list will be important in virtually every language you use

Statistical Methods for Quality Assurance


I’m getting a little refresh on techniques for measuring in the quality field.
Some apply more than others in modern software development.
It’s always good to refresh the fundamentals on measurement.

Great value comes from determining exactly what to measure in an industry where change is constant and indeed the norm.
Great caution must be present.
Be very careful what you measure and why you measure it.

Comments are back !

I avoid comments in my code these days.
Long gone are previous practices of carefully crafted blocks of comments.
Happily replaced with well-named methods, classes and variables.

So how are comments back in style for me?

One Phrase: Infrastructure As Code

A key part of infrastructure as code is the use of configuration files.

They usually come in two flavors – JSON and YAML
JSON is ugly to me:

{
    "a": "1",
    "b": {
        "x": "1"
    }
}

YAML is much cleaner to me:

a: 1
b:
  x: 1

Apart from that, however (and this is the point of this article), there is another difference: YAML allows comments.
This is useful because, unlike in programming languages, you can’t just replace a comment with a well-named class and method that describes what the comment would have said.  All you have is the YAML identifiers, and changing them will likely affect any existing application that relies on their current format, i.e. if they are being used they are a dependency that can break.
YAML files can therefore be somewhat cryptic, hard to understand and hard to change.

Comments to the rescue!

If used carefully, I have found that comments have a clear role in YAML files to help out the future me.


a: 1  # Key knowledge here
x: 1  # Key knowledge here

Conclusion: use comments wisely when appropriate in YAML files.


How to test and what to test for an API

At a high level

Test the API Endpoints, Status Codes and Data with Smoke, Happy and Sad Tests

At a detailed level one needs to ask the following questions.

The answers will guide what and how to test.

  • What documentation exists?
  • What functionality does it provide?
  • Does it support concurrency?
  • What are the API endpoints?
  • Is the API internal or external?
  • Which endpoints are idempotent?
  • Are endpoints stateless or stateful?
  • Do any workflows*1 vary by client?
  • Are there performance requirements?
  • Do API endpoints make up a workflow?
  • What validations are expected for data?
  • What system or library is behind the API?
  • Do we need to mock dependent services?
  • Does it constrain traffic, aka rate limiting?
  • What (if any) versioning approach is used?
  • Does the API support multiple languages?
  • If already using SoapUI, how is it integrated?
  • Is the API restricted to a country or region?
  • Does it provide client stubs in specific languages?
  • What status codes are expected for given endpoints?
  • What domain format and structure exists for the data?
  • Does the API use HATEOAS*2 for self-documentation?
  • What kind of data validation/testing can be performed?
  • What API is supported by the test framework I’m using?
  • What actions are performed, e.g. GET, PUT, POST etc.?
  • Do we need to prepare dependent test data or services?
  • What non-API approaches will be needed to verify data?
  • Are there existing API definitions, e.g. WADL, WSDL, Thrift?
  • What non-API approaches will be needed to prepare data?
  • What (if any) Authorization (‘what’) mechanism will be used?
  • What (if any) Authentication (‘who’) mechanism will be used?
  • Who will use it, external programmers or another internal module?
  • What format(s): SOAP, REST, GraphQL, Thrift, ProtoBuffer, other?

*1 Workflows often require multiple API calls and may have dependencies between them
*2 HATEOAS – Hypermedia As The Engine Of Application State, which allows self-discovery of an API
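
As a starting point, here’s what the high-level advice can look like as a minimal smoke/happy/sad sketch in Mocha/Chai style (the base URL, the /users endpoint and the field names are all hypothetical; global fetch assumes Node 18+):

const { expect } = require('chai');

const BASE = 'https://api.example.com';

describe('GET /users', () => {
  it('smoke: the endpoint responds', async () => {
    const res = await fetch(`${BASE}/users`);
    expect(res.status).to.equal(200);
  });

  it('happy: returns a list of users with expected fields', async () => {
    const res = await fetch(`${BASE}/users`);
    const users = await res.json();
    expect(users).to.be.an('array');
    expect(users[0]).to.have.property('id');
  });

  it('sad: an unknown resource returns 404', async () => {
    const res = await fetch(`${BASE}/users/does-not-exist`);
    expect(res.status).to.equal(404);
  });
});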

Credit to https://sqa.stackexchange.com/a/23693/8992 whose focus was performance testing.

Beautiful Questions

15 faves

  • What Else?
  • How can I help?
  • Can I begin now?
  • Do you know why?
  • What would you do?
  • What could I change?
  • What is your opinion?
  • What can I stop doing?
  • What do you like least?
  • What am I so afraid of?
  • Do I reflect and ask why?
  • Do I ask Why before How?
  • Can I take connect breaks?
  • What would an outsider do?
  • Do I admit being wrong frequently?

Page Objects – Duplicates and Orphans

Having Page Objects is great but a couple of issues often show up:

  • Duplicates – where there is more than one definition
  • Orphans – where no spec refers to them anymore

To prevent this I wrote the following rspec tests.

They are based on having locators in a locators.yml file, e.g.

`locator: 'some_identifier'`

and on test files being in the format *_spec.rb

It supports tests in subdirectories.

require 'rspec'

describe 'Page Objects locator yml file' do

  before :each do
    load_locators
  end

  it 'does not have duplicates' do
    if @keys.uniq.count != @keys.count
      dupe_keys = @keys.select{|n| @keys.count(n) > 1}.uniq
      dupe_keys.each do |key|
        @pairs.each do |pair|
          p "#{key.to_sym} : #{pair[key]}" if pair[key]
        end 
      end
    end
    
    expect(@keys.uniq.count).to (eq @keys.count),
      lambda {"Duplicate page object keys found! #{dupe_keys}"}
      
  end 

  it "uses all its locator keys" do
    files = Dir.glob("**/*_spec.rb")
    unused_keys = []
    @keys.each do |key|
      @key_used = false
      files.each {|file| search_file_for_key(file, key) }
      unused_keys << key unless @key_used
    end
    unused_keys_exist = unused_keys.size > 0

    expect(unused_keys_exist).not_to be,
      lambda {"Failure - orphan page objects  exist #{unused_keys}"}
      
  end 

  def search_file_for_key(file, key)
    spec_file = File.open(file) 
    file_contents = spec_file.read
    spec_file.close 
    @key_used = true if file_contents.match(/#{key}/)
  end
  
  def load_locators
    locators_file = File.open('locators.yml')
    @pairs = []
    @keys = []
    
    locators_file.each_line do |line|
      words = line.split(': ') 
      @pairs << {words[0] => words[1]}
      @keys << words[0]
    end
    locators_file.close
  end
  
end

What does debugging ruby look like?

It’s hard to capture the look of “work in progress” when it is code.

It’s usually a fleeting moment.  One is ‘in the moment’, and once one has printed out and checked enough stuff to understand what’s going on, all the ‘aiding’ code is removed.

Here’s a somewhat intimate view of actual wip, with lots of prints and assertions of various kinds to see whether what I expect to be going on actually is going on, because initially it certainly wasn’t.

There’s some ugly stuff here, soon removed after capturing it here, but here it is before I make it look all pretty again.

[Screenshot: the work-in-progress Ruby code, full of temporary prints and assertions]

Writing good code, step by step

It’s hard to capture the process of writing code. The way you mold and improve it.

I think of it as often being the ‘anything works and ugly code is ok then immediately refactor’ approach.

Here are 4 versions of ‘Rock, Paper, Scissors’ showing such code molding

p 'v1-------'
# Version 1
my_guess = rand(3)
your_guess = rand(3)
my_guess_text = ['rock', 'paper', 'scissors'][my_guess]
your_guess_text = ['rock', 'paper', 'scissors'][your_guess]
p "You guessed #{your_guess_text} and I guessed #{my_guess_text}"
winner = ''
case my_guess_text
when 'rock'
  if your_guess_text == 'paper'
    winner = 'you'
  elsif your_guess_text == 'scissors'
    winner = 'me'
  end
when 'paper'
  if your_guess_text == 'scissors'
    winner = 'you'
  elsif your_guess_text == 'rock'
    winner = 'me'
  end
when 'scissors'
  if your_guess_text == 'rock'
    winner = 'you'
  elsif your_guess_text == 'paper'
    winner = 'me'
  end
end
winner == '' && winner = 'tie'
p "Winner was #{winner}"

p 'v2-------'
# Version 2
my_guess = rand(3)
your_guess = rand(3)
my_guess_text = ['rock', 'paper', 'scissors'][my_guess]
your_guess_text = ['rock', 'paper', 'scissors'][your_guess]
p "You guessed #{your_guess_text} and I guessed #{my_guess_text}"
who_wins = {paper: 'rock', rock: 'scissors', scissors: 'paper'}
result = who_wins[my_guess_text.to_sym]
winner =
  if my_guess_text == your_guess_text
    "tie"
  elsif your_guess_text == result
    "me"
  else
    "you"
  end
p "Winner was #{winner}"

p 'v3-------'
# Version 3
my_guess = rand(3)
your_guess = rand(3)
my_guess_text = ['rock', 'paper', 'scissors'][my_guess]
your_guess_text = ['rock', 'paper', 'scissors'][your_guess]
p "You guessed #{your_guess_text} and I guessed #{my_guess_text}"
winners = {paper: 'rock', rock: 'scissors', scissors: 'paper'}
winner =
  if your_guess_text == my_guess_text
    "tie"
  elsif your_guess_text == winners[my_guess_text.to_sym]
    "me"
  else
    "you"
  end
p "Winner was #{winner}"

p "v4------"
# Version 4
winners = {paper: 'rock', rock: 'scissors', scissors: 'paper'}
my_guess = winners.keys[rand(3)]
your_guess = winners.keys[rand(3)]
p "You guessed #{your_guess} and I guessed #{my_guess}"
winner =
  if your_guess == my_guess
    "tie"
  elsif your_guess == winners[my_guess]
    "me"
  else
    "you"
  end
p "Winner was #{winner}"

Puzzle fix #2

Trying to get Docker to use nokogiri.

I posted the steps I took at https://stackoverflow.com/q/55678669/631619

In the end I figured out I needed to add

RUN apt install -y build-essential patch ruby-dev zlib1g-dev liblzma-dev

to the Docker build.

Puzzle fix #1

My attempt to capture the minutiae of a programmer’s life.

Working with ruby, I switched laptops and out of the blue:

$ bundle
Traceback (most recent call last):
2: from /home/michael/.rbenv/versions/2.5.1/bin/bundle:23:in `<main>'
1: from /home/michael/.rbenv/versions/2.5.1/lib/ruby/2.5.0/rubygems.rb:308:in `activate_bin_path'
/home/michael/.rbenv/versions/2.5.1/lib/ruby/2.5.0/rubygems.rb:289:in `find_spec_for_exe': can't find gem bundler (>= 0.a) with executabl
e bundle (Gem::GemNotFoundException)

hmmm. and here we go:

What version of ruby do I have (and make sure I have _some_ version!)

$ ruby -v
ruby 2.5.1p57 (2018-03-29 revision 63029) [x86_64-linux]

hmmm, can I install gems?

$ gem install pry
Successfully installed pry-0.12.2
Parsing documentation for pry-0.12.2
Installing ri documentation for pry-0.12.2
Done installing documentation for pry after 1 seconds
1 gem installed

hmmm, yes. So let me try bundle again in case it was a one-time issue, network, etc.

$ bundle
Traceback (most recent call last):
2: from /home/michael/.rbenv/versions/2.5.1/bin/bundle:23:in `<main>'
1: from /home/michael/.rbenv/versions/2.5.1/lib/ruby/2.5.0/rubygems.rb:308:in `activate_bin_path'
/home/michael/.rbenv/versions/2.5.1/lib/ruby/2.5.0/rubygems.rb:289:in `find_spec_for_exe': can't find gem bundler (>= 0.a) with executabl
e bundle (Gem::GemNotFoundException)
08:33:20 michael durrantm-2018 /home/michael/Dropbox/_/ultimate-weather-org/ultimate-weather-ruby master

hmmm, maybe reinstall bundler?

$ gem install bundler
Fetching: bundler-2.0.1.gem (100%)
Successfully installed bundler-2.0.1
Parsing documentation for bundler-2.0.1
Installing ri documentation for bundler-2.0.1
Done installing documentation for bundler after 2 seconds
1 gem installed

ok, let me try again…

$ bundle
Fetching gem metadata from http://rubygems.org/..........
Fetching public_suffix 3.0.3
Installing public_suffix 3.0.3
Fetching addressable 2.6.0
...

ok, that worked. Fairly simple one this time…


Tests as documentation

I have found that rspec’s formatting just lacks options.

The output is critical given that I put it in front of business users.

Here’s my way to do that.  I will also add other domain-specific lines for the codebase I am actually working on, but the ones below are intended to apply to all uses of rspec:

find . -regex \
 '\(.*_spec\.rb\|spec_.*\.rb\)' ! -name 'spec_helper.rb' \
 -exec cat {} \; | # cat out each file \
grep -E \
'(^ +?describe |RSpec\.describe|^ +?it |^ +?expect|\.should)'|\
sed -E 's/ do +?$//' | # No need for the do \
sed 's/, order\: \:defined//' | # No need for order: :defined \
sed "s/\.value)/)/" | # No need for .value \
sed "s/ eq / equal /" | # Change 'eq' to 'equal' \
sed "s/ eq(/ equal (/" |
sed "s/).to_s/)/" | # No need for .to_s\
sed "s/.text)/)/" | # No need for .text \
sed "s/RSpec\.//" | # Remove Rspec. \
sed $'s/^describe /\\\ndescribe /' | # linefeed between specs \
sed "s/(page\./(/" | # No need for page. \
sed -E "s/find_by_id ?//" | # No need for find_by_id \
sed -E "s/\(\(/(/;s/\)\)/)/" | # Remove double parens ((and))  \
sed -E "s/\(find /(/" | # No need for find statements \
sed -E "s/\(p\./(/" | # No need for p. before page objects \
sed -E "s/have_css/have/" | # make have_css more readable \
sed -E "s/\.disabled\?\)\.to/.disabled\? to/" | # Fix parens \
sed -E $'s/enddescribe/end\\\n\\\ndescribe/' | # Fix issue \
sed '/^end$/d' | # Remove lines with just end on them \
sed '1d' # Remove first (blank) line

Building an Iron pipeline – in the cloud

Indy Agilists build a devops pipeline… in a day

Indianapolis, Indiana, Saturday 01/05/2018

As the small crowd started showing up on a crisp Saturday morning in January for the ‘hackathon at the Ironworks’, there was a lot of uncertainty about just what they would – or even could – actually “build”.
Everyone knew each other because they worked at, or were contractors at, Sallie Mae and had seen some of the pieces of this at work. This was an opportunity for them to build a similar code testing and deployment system using modern, secure, open source tools and some of the most modern hosting and application building services that are pre-made to be plug and play with each other. Using personal (non-Sallie Mae) PCs and a non-Sallie Mae network were obvious prerequisites to the endeavor.

[Photo from the hackathon]

So how’d they do?

Well, clearly fun was on the agenda from the start. The Admin Assistant gave out high-quality (metal) spinners to all, and a stuffed polar bear took centerpiece on the table (see photo). Then folks divvied up the various roles. Basically nearly everyone was a Director! There was a Director of Application Development, Director of Devops, Director of Quality Integrations, Director of Product Development and of course… an actual engineer… with the title ‘Actual Engineer’. The person behind the hackathon itself took on the highest title of course – Admin Assistant.

Did they actually build the Pipeline?

Yes! Within a few minutes they had a slack workspace connected to a Jira ticketing system. From that they were soon able to hook in code reviews and secure Continuous Integration with CircleCI to run tests in the cloud. As the day progressed they were able to scale the system up to use 4 parallel servers to run the tests, plus a code grading system to make code quality an easy and visible process. By the time they were finished they were able to do the end-to-end process of making a change, performing code review and grading, running all the tests in the cloud and, if they passed, promoting the code to production – a true CI/CD system.

What did they learn?

Many of the integrations between the different vendors and services were remarkably easy to configure, often in literally seconds or minutes. It was a stark contrast to many enterprise systems that have traditionally required months of painstaking work to set this up. The exercise was also a good example and reminder of the basics of running tests in the cloud as the default way of working.  Perhaps most of all, the power of working together in the same location, with the ability to simply ask each ‘director’ to do the work (often relatively small with modern tools) right on the spot, was wonderful to experience.

Jira Juggling

Re-entering the world of Jira I once again encountered mapping and transition issues.

The important lessons were to remember to consider system-level workflows, project-level workflows and board-level mappings.  The three work together in a non-obvious way.  Add to that the need to understand active and inactive workflows, workflows vs. workflow schemes and the Jira approach to publishing changes, and you quickly get lost in a series of dead-end error messages such as ‘can’t edit an active workflow’ and the inability to move cards to a different status on the board (a common result of not getting changes right).

Also, one key thing is to make sure to refresh screens and boards while making changes.  Some changes update in real time; others require a page refresh.  It’s easy to get caught out thinking a change hasn’t worked when it has.

Crucial screenshots:

  1. These are the system-level controls for things like workflows.  They are often not obvious or easy to know about when you are at a project or board level.

  [Screenshot: system-level workflow controls]

  2. Workflows and workflow schemes, and which are active in projects.

  [Screenshot: workflows and workflow schemes]

  3. When you change a workflow you’ll frequently need to readjust a board to deal with the different statuses and how they should show.

  [Screenshot: board column and status mapping]

  4. At the project level there are settings.

  [Screenshot: project-level settings]

  5. Including workflows.

  [Screenshot: project-level workflows]

  6. Different boards may need different status mappings to get their workflow right.

  [Screenshot: board status mappings]


Terms of Endearment

Actually, the term I am thinking of is far from ‘endearment’, but let me avoid naming it for a minute ‘cos it is a pretty loaded term at this point.

Here are the behaviors I observe in my current work:

  • When help is needed from another team, I ask my manager, they ask the other person’s manager, and the other person’s manager then directs the other employee to do the work
  • When I disagree with someone on another team, I let them know what my position is and who I work for, and tell them that they will have to contact my manager if they disagree with my decisions
  • When I can’t make a deadline, I identify the team or person that is blocking me so that my boss knows about it and can take action
  • Periodically, management will need to send a ‘clear message’ to the team about what their priorities should be, often done in an urgent fashion, generating fear

The one thing you don’t see here is any mention that management is using command and control.  There is also no mention of whether the manager is deeply involved in the conversations or just watching from the sidelines.

What can be said, however, is that command and control is clearly taking place, in large part due to the employee enabling their manager.  By asking the boss to ‘fix things’ instead of asking to (and being able to) work directly with the other employee, a huge amount of non-essential work emerges, often including backlogs that are weeks long (even if the current request is simple to do), and all the trappings of command and control and the politics that naturally come with it.

This is why part of a new culture needs to be people asking other people directly for help, without groups hiding behind the shields of SAFe, ticket systems like ServiceNow, CommSec or whatever process is used to create lags.  The lags can easily end up taking 99% of the elapsed time to address the need.  This defeats the ability to go faster, and without that the business is in jeopardy.

GoodAt BadAt

A novel and open look at what I am good at and bad at.

Super simplistic for sure, but I am exploring this as an interview tool that allows a candidate (such as myself) to self-assess, and reduces the need for companies to test them for skills they openly admit being bad at.  This is a new approach.  It might not work for some.  Focusing on pushing candidates on their personally rated GoodAt list may be a helpful approach.
For me, the ‘good at’ list also reflects the skills I’ve actually needed in my automation and leadership positions in several recent jobs, and the ‘bad at’ list reflects skills I’ve not needed to use in those positions.

Good At

[Screenshot: self-assessed ‘Good At’ skills list]

Bad At

[Screenshot: self-assessed ‘Bad At’ skills list]

The Journey

My journey for knowledge and transformation started 18 months ago as I joined a new and large company.  As I started to read books about new ways of working I found that being at an organization going through dramatic change was the best place to be in order to fully explore many of these issues in person.

So I started reading.
and reading.
and reading.
and reading some more.

After reading a few influential books and getting an ever greater hunger for more, I started to notice something: my thinking had changed.  My approach to problems had changed.  omg my marriage had changed.  My health changed dramatically.  At first I didn’t see the changes, but then a good friend pointed out “have you noticed how there seems to be a cumulative effect just from reading so many of these books?”.  Boom.  She was spot on (thanks Stacey!).  Once I became aware of this – and upped my reading volume even more – I became more and more aware of it.

It is one thing to read a book.  Or two.  Or three.  However good they are, they tend to have a fairly limited effect on you.  You read some great things and then ‘life continues’.  Some of them you will retain and they will pop up at the perfect time in the future, but for the most part, much of the advice fades into the background of life.

Unless.
You read a lot.  You start spending a significant portion of your week just reading, highlighting and thinking.  Then more dramatic transformation can happen.  We have learned that the brain is plastic, and this feels to me like taking advantage of that to change and ‘reprogram’ yourself.

Having gone through this I want to pass on what I’ve learned. I want to show the path I’ve taken and how it can work for you in far fewer steps.

OK, so I have a list of 80 books on transformation and leadership.  All you’ve got to do is go through all those books, highlight and reflect on the key parts, and… JUST KIDDING.
I have actually read 80 books about change on my journey, but if I had to do the journey again I would pick out the super critical ones.  So this is what I have done for you.  At first I picked the top 40.  So yeah, right, just read 40 books.  Again, who am I kidding?  So OK, I chopped that down to 20.  Still too many, huh?  OK, I chopped that down to 15.  Still too many?  Tough.  That is really the minimum I can do (I tried 12 but it meant critical books on the cutting room floor).  I think every one of these books is a critical part of the journey and not to be skipped.

The other interesting thing for me was the order I read these books in (preserved here).  One book led to another – primarily through Amazon recommendations from the previous book(s) – but also through recommendations from current colleagues – thanks Joe, Carla and Stacey – and through the challenges I faced in my daily work at a company going through, and struggling with, an Agile transformation.

So here is the list in the order that I read them:

  1. The Psychology of Computer Programming
    Gerald Weinberg
    The book that ‘got me started’.  I was looking for some unusual reading and this caught my eye.  It seemed such a different take from all the programming books I’d read over the decades, but I thought I’d give it a go.  It also seemed somewhat dated perhaps.  Once I started reading it, however, I quickly realized that the advice was priceless and just as relevant today as when it was written.  This was the start of my journey…


  2. Crucial Conversations Tools for Talking When Stakes Are High
    Kerry Patterson
    This made me consciously aware of so many things about the conversations I have had at every place I have worked at, but had never paid attention to before in detail.  It was eye opening to me


  3. Managing the Unmanageable
    Mickey Mantle
    This opened my eyes to the characteristics I had observed my whole working career about how ‘techies’ work and how it can be so different from other professions.


    The next three books are all about experiences from specific companies and helped me to start having some answers to the many questions that I was coming up with in my current workplace.


  4. Radical Candor: Be a Kick-Ass Boss Without Losing Your Humanity
    Kim Scott
    The story told here resonated strongly with me.  Being open and honest is explored in a wonderful way in this book


  5. Powerful: Building a Culture of Freedom and Responsibility
    Patty McCord
    This is the story of Netflix and new ways of working that I strongly related to


  6. Work Rules!: Insights from Inside Google That Will Transform How You Live and Lead
    Laszlo Bock
    Who wouldn’t want to know how they do it at Google huh?  The interesting thing I found in this book was the emphasis over and over again, that it doesn’t matter what the business or industry is, the things that google does to reward and retain talent can be done anywhere


  7. Accelerate: The science of Lean Software and DevOps
    Nicole Forsgren, PhD
    A great read on what going faster means in practice


  8. The Culture Code: The Secrets of Highly Successful Groups
    Daniel Coyle
    A great book that examines what a change in culture really means


    At this point in my journey I started to get a lot more ‘into’ how people and groups do (and don’t) work well to achieve goals, and also some of the science behind it.
    I started to read a lot of books on just how people work together and what has proven to work well and what hasn’t.


  9. Alive at Work: The Neuroscience of Helping Your People Love What They Do – Daniel M. Cable
    This was a fascinating book on the science behind how the brain works


  10. Humble Leadership: The Power of Relationships, Openness, and Trust
    Edgar H. Schein
    An eye-opening book that goes beyond ‘servant’ leadership to a radically different way of working.  I want to work with people who have read and agree with this book.


  11. Not Nice: Stop People Pleasing, Staying Silent, & Feeling Guilty… And Start Speaking Up, Saying No, Asking Boldly, And Unapologetically Being Yourself – Aziz Gazipura
    This book’s title may mislead you
    It is about being nice by being honest and it is a great way to think differently.
    This is the book that affected (improved) my health the most by explaining the stress that I hadn’t been able to see or understand well.


  12. Psychological Triggers: Human Nature, Irrationality, and Why We Do What We Do. The Hidden Influences Behind Our Actions, Thoughts, and Behaviors
    Peter Hollins
    This gives you a much greater understanding of the why behind people’s motivations and actions


  13. Dare to Lead: Brave Work. Tough Conversations. Whole Hearts
    Brené Brown
    A moving book that brought me to tears.  Amazing advice for tough talks


  14. Perfect Software and other illusions about testing
    Gerald Weinberg
    Another fantastic book by Gerald that is a must-read for anyone doing testing and automation or leading that effort.


  15. Bring Your Human to Work: 10 Surefire Ways to Design a Workplace That Is Good for People, Great for Business, and Just Might Change the World
    Erica Keswin
    Continuing the ‘make it real’ movement at work with tips and tools to succeed


    Finally, here are all 15 books should you want to just go on a spending spree.
    I got them all on Kindle so the total cost was probably around $200.  A great investment.


Useful Dockerfile Setups

https://github.com/durrantm/docker_11_2018_bashful_bash

e.g. alpine

FROM alpine:latest
LABEL maintainer="Michael Durrant<junk@snap2web.com>"
RUN apk add bash git vim
COPY alpine_bashrc /root/.bashrc
COPY .bash_functions.sh /root
COPY .bash_aliases /root
COPY .git-completion.bash /root
RUN "/bin/bash"
RUN git config --global user.name 'Michael'
RUN git config --global user.email 'no-reply@google.com'

Some common docker commands:

docker version
docker images
docker pull alpine
docker ps -a
docker start container_name
docker attach container_name
docker ps

Without a Dockerfile…
docker run -it alpine
docker attach container_name

With a Dockerfile…
docker build -t alpine_plus_some .
docker run -it alpine_plus_some
docker attach container_name

Top five technologies to master by 7/1/2019

“Choose the five tools and technologies that you want to be highly proficient in by July 1st 2019”
I picked:

– AWS development
– Docker
– JS – Javascript-es6-npm-mocha
– Chef
– Repl development

I’m already doing docker today!


Update: 12/1/2019

AWS: Done (triple certified)
Docker:  Done
JS: In-progress, node heavy
Chef: Not done, not sure.
Repl development: Not doing

Additional:

Making Ruby Coding Videos
Making Refactoring Ruby/Python Videos