Hiring quality code programmers

I recently read a blog post that made the following statement:

“The candidates we are interviewing all have senior titles and 8+ years of experience, yet a number of them have struggled to find the max value in an array, and one couldn’t even start because of the inability to use a for-loop. After seeing this, I requested we move the programming exercises to the start, because all else becomes irrelevant if they fail there. Moving them has helped us reject candidates earlier when it becomes clear they haven’t passed through our filter. All of our candidates had great looking resumes, fancy titles, and can easily recite their job history with a smile, but put them in front of an IDE and all pretense quickly falls away.”

I know and have a lot of respect for the author, but in this case I could not disagree more with the conclusions they have reached.

Let me explain:

One well-known take on programming is that there are two big problems in writing code: naming and premature optimization. This post is about the latter: premature optimization.
Unfortunately this (very common) approach to coding in interviews misses the value that senior developers add. Let me start with this: despite 40+ years of programming experience, I still can’t do these exercises in what is essentially performance art in an interview. I am an introvert with imposter syndrome, so putting me on the spot to prove my knowledge is a disaster. I can’t think clearly at that point.
Ironically, I managed to get fairly good at the abstract programming exercises about 20 years ago, when I temporarily accepted the (false) premise that performance and efficiency are everything in hiring a programmer. In reality I learned over time that these are not the problems I face in the companies I am hired to work in. When I want to sort items, 99.99% of the time .sort() is what I need, because it was optimized in the languages I use about 20-30 years ago.
I assumed these out-of-date exercises would soon be replaced. I was very wrong. “Data structures and algorithms testing” has become more common since then as a (very poor) substitute for evaluating a programmer.
Today I detest these exercises because what I crave is a conversation about the problems that the company is experiencing in its codebase (there are always problems!). Unfortunately, similar to dating, most interviewers don’t want to admit to or talk about the problems. Only very senior devs get this.
So instead of a mature conversation about problems we are left with abstract problems that have already been solved. Fail.
I have also noticed that the codebases in EVERY company I have worked in (other than my own recent ones) were a huge mess. Low unit test coverage. Complex code that only one person could maintain. Massive classes and methods. No documentation in the form of tests. No infrastructure in the form of IaC. What do the programmers who work in these companies know how to do, based on the ‘big-O hiring process’? Produce memory- and time-efficient (big-O) code? What problem does that solve in their actual job? Most developers are using a framework, for example React, and the problems they face and need to solve are almost universally NOT about big-O.
Instead of testing for abstract big-O implementations of sorting numbers, how about talking about unit testing first? How about talking about the use of English in naming things – another MASSIVE problem that isn’t covered in interviews, even though on Day 1 of the job the naming horrors in the codebase are revealed.
What about modularity? What about the philosophy of breaking things down? What is the state of this in the company’s ACTUAL codebase?
What about linting? This is one of the biggest quality items for me. I favor a massive list of lint rules for consistency and feedback and could spend several hours in interviews just talking about linting. Do you talk about linting in interviews?
Finally, I am neuro-diverse and find the ‘I challenge you to write code in front of me and then respond to my challenges about performance’ style of interview too stressful to think clearly in, as I am in fight-or-flight mode and my higher thinking powers have shut down. This is different from the pair-programming collaboration that I favor and love doing with fellow programming enthusiasts.
I beg you and all who read this:
Stop with the big-O, abstract, performance-art way of interviewing people. Your codebase will thank you.

Tests as Documentation

This is a good concept that is often stated clearly but lacks actual implementations.

I solve this problem in JavaScript / Jest by using a script to extract my test descriptions from the ‘describe’ and ‘it’ statements in my tests.

Now I have documentation that I can share with non-technical team members, such as product owners, to increase their understanding and ownership of what the system does and how it is tested. They can then become fully informed champions of automation efforts.

Here’s the code:

#!/bin/bash
# documentation.sh
# Extract the 'describe' and 'it' descriptions from Jest test files.
find . -regex '.*\.test\.tsx' \
  -exec cat {} \; | \
grep -E "describe\(|it\(" | \
sed -E 's/\(\)//' | \
sed -E 's/=> \{//' | \
sed -E "s/it\('//" | \
sed -E "s/describe\('//" | \
sed -E "s/',//"

This leads to output such as the following:

App
Should render without errors
Error Page
Should show the error text
Question Admin
Should show the Question Admin page
WizardLayout Component
renders without crashing
HomePage
Should render HomePage component

Codeless UI Automation

Codeless UI automation systems have been around for a while.
Sikuli, GhostInspector, SeleniumIDE and Rainforest are some of the ones that I’ve used. GhostInspector is my personal favorite!
I have worked with several of these tools. Unfortunately, in my experience, they are often abandoned by companies after 2-3 years of use, as they don’t scale and start creating more problems (mostly reduced development speed and flaky tests) than they solve. The lack of people who can update them can lead to permanent slowdowns of development speed for the business. Over time this can be fatal to the business, as competitors will move faster.

Codeless UI automation systems typically suffer from the following common set of problems:

  • As time passes, tests are added and runtime gets longer and longer, slowing down the entire development process
  • Managing, organizing, grouping and naming test cases quickly reveals classic problems that only programming can effectively address
  • Naming problems grow. Without the tools programmers rely on, such as lint rules and code formatters, syntax and style will be inconsistent
  • Copy-and-paste issues. Without the ability to DRY up repeated use of the same code, inconsistencies will arise, leading to bugs
  • Object locators are often fragile and brittle, leading to tests that break in the future due to unrelated changes such as moving a component around on the page
  • Technical debt grows, but there are no tools and approaches to address it
  • As patterns and approaches emerge there are limited options to capture and formalize those approaches in code
  • Automation exceptions that require specific programming approaches cannot be handled
  • Interaction with other services and APIs becomes challenging or not possible
  • Stubbing or Mocking data and/or network can be hard or impossible
  • Although programming is not required, the specialized knowledge to create complex tests is still needed and will require programmatic approaches
  • Flaky tests cannot be easily addressed in code leading to crude approaches such as long waits or manual interventions
  • Programmatic approaches to authentication and authorization can be hard to implement
  • Specialized knowledge is needed to modify the tests; this divides ‘development’ and ‘testing’ into two tribes that now depend on each other
  • From Visual Basic to FrontPage, many attempts have been made to make codeless development systems, and they have all failed
  • Continuous Integration can be challenging and non-standard, hard to implement and hard to change, slowing down development and testing
  • Application developers will not use the tool as part of their own workflow leading to slower feedback and bugs
  • Accessibility and the use of aria-* locators are not a focus of many no-code UI automation tools, leading to accessibility issues not being detected

There is a solution:

  • Have an experienced programmer train folks in programming for UI automation using standard programming languages, tools and frameworks such as Selenium and Cypress.
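
As a rough illustration of what that code-based alternative looks like, here is a minimal Cypress test; the URL, selectors and strings are hypothetical placeholders, not from a real project:

// login.cy.js - a minimal Cypress UI test (URL and selectors are hypothetical)
describe('Login page', () => {
  it('logs a user in and shows the dashboard', () => {
    cy.visit('https://example.com/login');                    // open the page under test
    cy.get('[data-testid="email"]').type('user@example.com'); // fill in the form
    cy.get('[data-testid="password"]').type('a-test-password');
    cy.get('[data-testid="submit"]').click();                 // submit it
    cy.contains('Dashboard').should('be.visible');            // assert the post-login view rendered
  });
});

Unlike codeless tools, a test like this lives in version control, goes through PR review, and can be linted, refactored and DRYed up like any other code.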

Contracting Small Piece Programmers

Hiring contract programmers to develop code in small pieces can be a great way to scale application development.

It requires a system where it is easy and obvious for them to do the right thing, without the period of interviewing and then on-boarding that would usually take place in longer engagements, such as those typically used by IT consulting firms.

By using the right tools and a high degree of automated testing, this can be done very effectively. I have found that it does require many pieces to be in place for it to work well.
These pieces are already good ideas for most development teams, but there tends to be a lot of leeway in most organizations as to which are used, and to what extent, for long-term project contributors. Under the small-piece consulting model they become more important, because they provide immediate “non-learned” guidance and reduce ambiguity, which makes them both easier to follow and less dependent on supervision.

 Here is my list of the things you want to have in place for small piece contractors:

  • Code Consistency. Contractors follow existing code first for standards
  • Small components. Large components are hard to change and hard to test
  • A lot of lint rules. Remove preferences and ambiguity about how to format code (see the sketch after this list)
  • High unit test coverage – close to 100% with a test file for each application file
  • Fast unit test suites – this means mocking database, screen and network
  • A well thought out and implemented unit test data strategy
  • Ability to run application locally in various test and offline modes
  • Ability to hide private keys
  • High end-to-end app. coverage to ensure production functionality isn’t broken
  • Editor feedback, for example using typescript over javascript to make coding easier
  • Editor configuration to make sure new devs have the same setup for feedback
  • Issue ticket system such as jira or github where you write specs for contractors
  • Code review system such as github PR’s to manage branch work and access
  • CI system that runs linting and unit and end-to-end tests for PRs
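
For the lint-rules item above, here is a minimal sketch of an ESLint configuration with every rule at the error level; the specific rules and limits are illustrative choices, not a prescription:

// .eslintrc.js - a sketch; the rule selection here is illustrative only
module.exports = {
  extends: ['eslint:recommended'],
  rules: {
    // everything at 'error': no warnings, no ambiguity for contractors
    'no-unused-vars': 'error',
    'eqeqeq': 'error',                                // require === over ==
    'max-params': ['error', 3],                       // keep signatures small
    'max-lines-per-function': ['error', { max: 25 }], // keep functions small
  },
};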

AI is a flashing red danger

AI is the hot subject lately. Not just in geeky tech circles but throughout society people are discovering, using and experiencing huge benefits from this exciting new tool.

Some recent examples:

  • I was able to use GPT to generate an end-user browser application for me by just describing it at a high level. I was able to iterate and ask for specific approaches to coding, scoring and writing tests. I was able to generate content in 1 hour that would previously have taken me several weeks.
  • I was able to use GPT to generate a document detailing various options for a struggling startup with founders that had different viewpoints. I used GPT to generate a list of 10 different routes that the startup could consider and I was able to iterate on the proposal to have it focus on specific topics.
  • I was able to give GPT existing application code and have it write automated tests in a few minutes. This used to take a human many days.
  • I was able to get detailed information including the pros and cons of various treatment options on a recent medical issue in seconds.

So what’s the downside? What’s the danger?
The popular press comments are typically about how dangerous this is. How the advice given may be bad, illegal or simply ineffective, despite ‘sounding reasonable’, partly because it is written in well-phrased English.

This is true but it is missing the larger story!

Here’s the actual danger: humans are being replaced. This is unlike previous technological iterations, such as manufacturing, where machines replaced the most repetitive and simple tasks previously done by humans. That has long been the promise of technology: “it will free up humans to do more creative and innovative work”.

What we are actually seeing now is that with AI tools like ChatGPT, the most advanced jobs are being replaced: jobs that have typically required humans years to learn. Advanced programming, analytics and analysis are all areas where this is happening, as is any industry that leans heavily on communication with consumers – think sales, marketing, advertising. Medical situations that previously relied on very experienced humans for diagnosis can now replace many of them. The list of industries and applications is endless.

In primary school I learned about the ‘first’ rounds of technology that replaced humans – the cotton gin, the windmill, the automobile. Many people feared (and experienced) being replaced by machines during those times. It turned out that these innovations enabled people to have better jobs, because the machines that do the work still need to be designed, built and maintained by humans. It is often assumed that the same is true today. However, I am not sure it is. I am seeing this model break all around me. Folks laid off in the last 3 years are not getting jobs (and after 6 months they no longer ‘count’ in unemployment statistics). So they are effectively “hidden”.

Without the high incomes of the jobs that humans used to do we will have a growing economic problem. This is also “hidden” because companies will still do well. Profits will be higher. Domestic GDP will be rising. All the indicators that we have historically relied on to show economic health will be green. But people won’t have jobs. Or salaries. This doesn’t work.

Experience *Not* required

An interesting side effect of modern technology is that it is making human work experience far less valuable with each advancement.

The advantages that experienced workers have historically brought are many and include:

  • Knowledge from many different areas
  • A network of fellow professionals at a senior level
  • Millions of answers to millions of questions if they are asked
  • Obscure, hard to learn approaches for working effectively
  • The ability to present concepts and ideas to junior folks to educate them
  • The collective wisdom of those that went before, only known to a few seniors

However, this has largely been replaced by the wonders of the modern age: you can Google for most answers (even before AI), you can get your programming answers from Stack Overflow (and they are usually better than those from any senior local person), and you can study any complex topic on Udemy, with amazingly high-quality productions, for just $10.

These are amazing advancements. They are also devastating for senior folks with experience who now find themselves considered obsolete and unable to contribute at a senior level.

We see the outcomes of this in the world around us:

  • Tech meetups have died. Why bother, when you can use Udemy to learn more tech and LinkedIn to hear about a job?
  • Senior folks stay unemployed. They don’t add unique value anymore, and they typically cost a lot of money
  • Folks with families struggle. The diminishing number of senior jobs available in tech and the nature of the job tends to favor younger workers without families and financial obligations. This increases inequity in society.
  • Ageism is growing as many older folks struggle to add value in this new world order where their experience and accumulated wisdom is not valued and is increasingly seen as outdated. More inequity.
  • Competition is brutal due to the ability to work remotely. Many tech workers enjoy working remotely and those in other industries saw the advantages during the pandemic. The downside is that now hiring someone who appears only through a screen means you have thousands of choices, including much lower cost ones from much further away.

Here are some skills that are still useful:

  • Communication
  • Listening
  • Asking good questions
  • Not offering many answers without first getting more info

The challenge many senior folks have now is that the above skills are not the focus of interview processes, which are often highly scripted and still ask questions from previous employment models that were already out of date many years ago.

Boston Fake Tech Jobs

Over the past few years I’ve noticed a pattern of certain New England tech employers constantly having the same software development and QA positions open month after month, sometimes year after year. I’ve applied for many of these positions at these companies and typically have not been able to obtain initial interviews, other than a couple with junior HR recruiters. This was true even before both the pandemic and the last year of tech layoffs, but it now seems more prevalent than ever.

I have more programming and QA experience, in a wider variety of industries and domains, than all other local candidates, so my failure to get an initial interview for these positions at these companies is most revealing (I share low salary expectations, by the way). I also know several other senior QA developers in New England who have shared that they have had the same experience at these companies.

Some of the positions listed have thousands of applicants, revealing that these companies have little regard for the amount of time and energy wasted by those applicants on these fake jobs. Many of the job descriptions reveal organizations with fundamental development quality issues that can’t be fixed by adding more QA processes and people at the final quality verification step. The list of responsibilities for fixing upstream issues usually shows organizational quality issues that will not be fixed by QA hires. Others may be using the postings as tools for immigration visas, which require proving that no qualified local candidates exist – not hard to do when that is the actual goal. It is also how I obtained my own immigrant visa, so I know it happens.

Here’s my personal “New England Hall Of Shame Employers With Fake Tech Jobs They Never Actually Fill in 2023”

  • Fidelity
  • Citizens
  • State Street
  • Harvard University
  • Foundation Medicine
  • TechTarget
  • Moderna

Telecommuting… from the Office

Working remotely has been a constant topic over the past few years and grew massively due to the Covid-19 pandemic. It has been discussed in tech companies for over 30 years and is now a mainstream issue for companies throughout the economy.

Many companies have struggled to find the right balance and many have organically grown to a place where a combination of policies, practices and guidelines exist that work against each other and place too much emphasis on where work is done and not enough on it actually getting done.

Probably the biggest mistake I have observed in this area is the combination of setting a “minimum number of days per week in the office” with “no set schedule for which days”, the latter being seen as more flexible and considerate to employees.

This leads to frequent situations where a person will commute into the office, with all the associated time and cost involved, only to then have video call meetings during the day with co-workers who have chosen to be remote that day. It is also not inclusive of introverts who, for specific activities (typically highly technical ones, such as programming), often crave the solitude and lack of distractions that power their focus and productivity.

These situations make the reasons for coming into the office – the human contact and work productivity – contrast starkly with the ‘need to see butts in chairs for control’ model.

If the benefit of coming into the office is better collaboration and in-person shared experiences (both of which I am a huge fan of!) then setting a schedule for everyone for those days seems an obvious step that both employees and employers will benefit from.

This certainly presents challenges for traditional office buildings, as it can lead to crammed offices on some days, followed by mostly empty offices on others. This requires thought and planning to manage. For example, it can help to have specific groups, or combinations of groups, all come in on certain days, with other groups set to other days. This requires cross-company communication and collaboration, and it is one of the ‘new workplace’ skills that directors and managers should grow their competency in. A benefit to employees is a known and predictable schedule, which many find easier to manage their private lives and families around.

APIs and Integrations make the difference

Things that SAAS startups (and most successful big companies) do that are very hard but give a competitive advantage precisely because of that:

  • Integrations and APIs with systems from other providers and companies
  • Sucking in big data and being able to map and subsequently update it
  • Providing an admin system that non-technical users can use to maintain metadata
  • Mocking and stubbing entire services for development and testing

The first bullet point – Integrations and APIs – is the hardest because, for the other bullet points, you are in control and can do the work, but integrations and APIs depend on other organizations and on information that may not be available or easily obtained.

My experience with integrations and APIs:

Everquote – Insurance company’s and state DMV integrations were critical
Paperless Parts – ERP integrations were critical
Zipcar – Insurance companies and state Motor Vehicle Departments were critical
Teladoc – Large number of medical integrations with many systems were critical
Children’s Hospital – Billing, labs, radiology, imaging. Integrations were critical
Crypto Trading – Provider APIs were critical to the system built on top of it
District Management Council – School ERP system integrations were critical
Sallie Mae – National School Applicant and federal loan integrations were critical
MyWeatherApp – Uses 3 different APIs to get weather data so APIs are critical.

This leads me to conclude:

Anyone can create a SaaS product with local data they’ve collected.
The winners are those who master integrations with other systems.

A huge challenge for integrations is that you are not in control of the foreign API or (any) documentation that supports it, so you are at the mercy of other organizations that often move very slowly (think months instead of days for a simple request for information). They might also change the API at any time without informing you. Their documentation might be poor or not exist. They may require a partner agreement that takes months to achieve. The challenges are numerous. Thus the companies that can handle them are the ones that add value and grow.

Do end-to-end tests replace unit tests?

In many organizations developers use end-to-end tests to act as a proxy for having unit tests.

This is an attractive approach because you can cover hundreds of individual units with a few end-to-end tests.

However, there is a drawback: you don’t get the immediate feedback that unit tests give you in a few seconds. Without that immediate feedback, writing application code and writing automation code remain separate activities. First you get the app code working. Then you write the tests. Then you run the tests and wait for several minutes. That is, if you can figure out how to do the testing and if you have plenty of time. The problem is that you don’t know how to test initially, and there is never plenty of time. So this ‘drawback’ turns out to be critical. I’ve seen many organizations go down this path, and the result is always low test coverage, poorly written app code and lots of bugs.

The greatest benefits of testing come from when it is actually part of the developer workflow.

My preferred solution is to have actual unit tests for unit (application) code. The feedback from running the test happens in under 5 seconds, so the developer gets it in real time and carries on writing code immediately. When app code writing and automation code writing happen this close together – essentially in parallel – all the speed and reliability benefits you are looking for from testing can happen. Your application code will look simple because it has to be testable from the outset, not as an afterthought. You also no longer need to ‘sell’ testing as something that needs to be remembered, because it is simply an integral part of the development process. However, when you separate the act of creation (writing app code) from the act of testing (writing tests), your quality efforts will always struggle.
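
As a minimal sketch of the kind of fast, focused unit test this workflow depends on (the module and function here are hypothetical):

// sum.test.ts - runs in milliseconds, so feedback arrives while you are still in the code
import { sum } from './sum'; // hypothetical unit under test

describe('sum', () => {
  it('adds two numbers', () => {
    expect(sum(2, 3)).toBe(5);
  });
});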

Note that writing good unit tests at the time that you write the application code does not require adherence to a particular methodology such as TDD (Test Driven Development) or BDD (Behavior Driven Development). All it requires is that by the time that you submit a PR for review, it includes tests to fulfill the maxim that Implementations should have Specifications (in the form of unit tests).

It’s a wrap!

Actually, a “wrapper” is the topic here. The subject is testing, of course. In this case, React unit tests.

The path to good React unit test coverage requires a number of activities and approaches as well as a diverse assortment of knowledge. Here’s a summary in one place.

1. Just render it!

This is where I start for a new component or a component with no test coverage. Just try to render it! This follows the concept that 80% of the value can be obtained with 20% of the effort, and I have found that the basic ability to render a component with default attributes is a great place to start. The other benefit of writing a basic render test is that it becomes the scaffold you will need for the complexity you will have to test as the component grows and/or is changed. It helps set the example that every implementation has a specification, and it communicates to the next person working on this piece of code the expectation that it has a test and that both need to be maintained.

import { screen, render } from '@testing-library/react';
import React from 'react';
import SomeComponent from './SomeComponent';

describe('A component', () => {
  it('should render the component', () => {
    render(<SomeComponent />);
    expect(screen.getByText('something')).toBeInTheDocument();
  });
});

Note:

For a complex component with many lines, parameters, methods and conditional branches, adding tests for the first time may be quite challenging. This is useful feedback. Large, complex application components are hard to change to meet changing business needs and requirements, and they more frequently have bugs due to the complex interaction of parameter values and the logic in code-path branches. Writing tests (including struggling to write them!) exposes this complexity and encourages the breakout of smaller, more focused components with far fewer dependencies and few or no code branches, leading to software that is easier, and thus cheaper, to change quickly.

2. Render with router wrapper!

Other than the highest-level component, it’s pretty likely that you’re using React Router, so that’s the second technique I’ll cover. It’s quite simple and a frequent pattern. Here’s an example of what the code looks like to account for it.

import { screen, render } from '@testing-library/react';
import React from 'react';
import { BrowserRouter } from 'react-router-dom';  // <-- Note: added
import SomeComponent from './SomeComponent';

describe('A component', () => {
  it('should render the component', () => {
    render(<SomeComponent />, { wrapper: BrowserRouter }); // <-- Note use of 'wrapper'
    expect(screen.getByText('something')).toBeInTheDocument();
  });
});

3. Render with context wrapper!

The third technique concerns global state, such as the logged-in user, which is provided to the component through a context wrapper. In cases like this you can provide the wrapper yourself, with the value you want under test. The resulting code looks like this (this example tests a logout link):

it('renders a logout link for logged in users', () => {
  const page = render(
    <UserAuthContext.Provider value={{
      user: { uid: 'a' }, signIn: jest.fn(), signUp: jest.fn(), logOut: jest.fn()
    }}
    >
      <AppNavBar />
    </UserAuthContext.Provider>,
    { wrapper: BrowserRouter }
  );
  const navLink = page.getByText(/Logout/i, { selector: 'nav a' });
  expect(navLink).toBeInTheDocument();
});

The context wrapper itself returns the following, which you are stubbing out (as shown above):

...  
  AuthContextProvider = ...
  return (
    <UserAuthContext.Provider
      value={{ user, signIn, signUp, logOut }}
    >
      {children}
    </UserAuthContext.Provider>
  );

4. Render with default param wrapper!

The fourth technique solves an interesting challenge: what happens when state is only maintained internally, i.e. through a useState hook in the component under test, and is not passed in or currently ‘wrapped’, and thus not exposed to a mechanism such as external testing to change its state?

One approach here is to switch the application to using a context wrapper. However, this is likely not the right approach in many cases: you don’t need to create what are essentially global variables, accessible through a horizontal ‘pyramid of doom’ of wrappers (i.e. many of them), when your only goal is to be able to set the initial and subsequent states of the component that you want to test.

This final challenge therefore requires a change to the application component. You introduce a new parameter: the value to use for the default state. Then, in the application component, you use that as the initial value of the useState or useContext (as in this example) hook, i.e.

// import statements elided
const SomeSet = ({ defaultSomeSet }: SomeSetProps) => {
  const { someList } = defaultSomeSet || useContext(SomesContext); // <-- Use the param 'or' the context.

This then enables you to write a test and pass in the parameter(s) to control the flow paths, i.e.

const defaultSomeSet = { someList: ['initial'] }; // shape matches the destructuring above
render(<SomeSet defaultSomeSet={defaultSomeSet} />);

The above techniques should give you most of the control you need for writing React tests. Good luck!

Test Strategy

A comprehensive test strategy is essential to achieving a high quality product.

In other posts I’ve talked about the practical details of creating valuable test suites. In this post I will talk more about the high level strategic activities that are also needed.

Establish that quality and testing are everyone’s job

This means eliminating walls and silos and making sure that quality is the responsibility of everyone on a team: product owners, application developers, designers and QA staff must all be willing and able to fully participate in testing activities and help create quality automated tests.

Establish that application engineers are responsible for high quality unit tests

Writing application code for new features or modifying existing code must be seen as only part of the job. The other part is the unit tests that accompany the application code. Product owners and their leadership need to be aware that writing tests is a key activity for application developers, not the outdated ‘QA members do the testing’ approach from before Agile.

Establish code coverage standards

Establish the concept that implementations (application code) should have specifications (application unit tests). If code does not have unit tests, it is missing its specification and can’t be guaranteed to keep working when future changes are made. Thus code without tests is a significant risk for the company.

Ensure that Automation Engineers are first class citizens

Start by paying them as much as developers. Use the titles Application Developer and Automation Developer to emphasize that both are developers of code. Ensure that their physical and organizational placement makes it clear that they are full members of the development team and process, not simply ‘after development’ verification testers. Make sure they are present in planning meetings and that their contributions in those meetings about testing are welcomed and championed by the rest of the team.

How are we going to test that?

These are the 7 “magic” words that I continually try to get teams to use during all their backlog refinement (or similar planning) sessions. Start by discussing how we will test it: what will we have for unit, integrated and e2e tests? What existing tests can we enhance? What new tests do we need to write? How do we manage the test data? These are the types of questions that lead to critical conversations as early in the process as possible and help in adopting a ‘shift-left’ approach to quality.

Write testable code

There can be many arguments for writing quality code, and there are even more definitions of what it is: modular, Single Responsibility, Sandi Metz rules, etc. These can end up being treated as academic and seen as impractical to implement. There is a simple solution: write well-tested code, with (unit) tests that are simple and easy and only test one thing at a time. By starting with this principle, developers will find that all the other desired attributes – modular code, DRY, SRP, etc. – are a natural outcome.

What to test where?

I talk a lot about the Agile Testing Pyramid and the case to be made for unit, integrated and end-to-end automated tests.

This is a good foundation, but it still leaves many organizations with a great deal of uncertainty about what specifically to test where during actual development, as well as who writes the tests.

Here are the general guidelines that I follow:

Unit tests:

  • Start with the principle that Implementations (application code) require specifications (test code)
  • Mock and stub the unit dependencies – the database calls, the network API calls – and create mocks and stubs for the data that the component uses (see the sketch after this list)
  • Measure and display coverage clearly in the tooling being used for development
  • Code coverage should be at 100%, and only less by agreed-upon exceptions
  • Tests are the living documentation for you, for teammates, for newbies, for contractors and for the future
  • Focus on negative test cases for units of application code
  • Use PRs to manually or automatically apply code coverage standards
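
For the mocking bullet above, here is a minimal Jest sketch; the modules, names and data are hypothetical:

// users.test.ts - a unit test with the network dependency mocked out
import { fetchUsers } from './api';     // hypothetical network module
import { getUserNames } from './users'; // hypothetical unit under test

jest.mock('./api'); // replace the real network call with an auto-mock

it('returns the names of the fetched users', async () => {
  (fetchUsers as jest.Mock).mockResolvedValue([{ name: 'Ada' }, { name: 'Grace' }]);
  expect(await getUserNames()).toEqual(['Ada', 'Grace']);
});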

Integration tests:

  • Use real dependencies to make sure units work together
  • Use real dependencies to complement unit dependency mocking
  • Test endpoints to ensure that APIs and contracts haven’t changed

End to End tests:

  • Use a framework that is robust in handling dependencies such as network and database
  • Focus on a few positive test cases
  • Avoid data-driven testing, which favors volume over reliability

Who writes which tests?

Traditionally this has been divided up between application and automation (‘qa’) staff.
This frequently proves to be dysfunctional and ineffective.
As all automated tests should run for all changes, both types of specialists are most effective, productive and invested in quality code and testing when it is everyone’s job to create and maintain tests at all levels.

Contracting for Quality

In https://durrantm.wordpress.com/2023/01/20/quality-first-hiring-contractors/ I wrote about how to hire contractors and give them enough guidelines and guardrails to help guide them toward writing quality code.

In this ‘episode’ I am going to talk about what happens when there are quality issues with the code, despite this guidance.

From a GitHub repo PR conversation: “Please do x”… “No, I don’t like x, I want y”… “But I want to…”

These sorts of code conversations – where I am trying to give quality advice to someone I’ve hired for a small amount of coding – tend not to go well.

Over time I’ve developed the following techniques to handle this:

  • Provide a lot of technical guidance so programmers aren’t guessing and sweating the quality details – see https://durrantm.wordpress.com/2023/01/20/quality-first-hiring-contractors/
  • If the basic code itself doesn’t work, breaks tests, etc., I work with them to fix it or I fire them.
  • If the programmer doesn’t readily follow what I am saying after some code and effort, I pay them and move on. Done.
  • If the code works but doesn’t meet quality standards for maintenance, decoupling, testing, etc., I pay them and I use the code, but I start refactoring in short order. I may or may not use the contractor again.
  • If the code works and is high quality, with great tests, I pay them a large bonus and try to hire them for the next job.

Hats I Wear

The software development practice of using different personas (or ‘hats’) when reviewing code PRs is one that I enjoy, as I find it adds great value in creating a high-quality product.

The “Hats I Wear”

The 14 role personas I use when reviewing PRs, each viewed through one or more of four hats: User, App dev, Architect and QA dev:

  • An experienced developer familiar with the code
  • A product owner focused on user functionality
  • A newbie developer still learning to program
  • A senior developer who is new to this code
  • A QA developer looking for well written tests
  • A QA developer looking for user workflow coverage
  • An application dev focused on unit test coverage
  • A developer who focuses on code consistency
  • An application architect looking for new patterns
  • A QA architect who looks for emerging patterns
  • A consultant implementing a small feature quickly
  • A devops person automating the dev pipeline
  • A devops person securing the dev pipeline
  • A security developer looking for secure practices

Quality First – Hiring software contractors

At Quality First we use the services of contract programmers to write some of our code.
This presents a number of challenges, including:

  • How do we get them to follow our standards?
  • How do we get them to write tests?
  • How do we get them to write the highest possible quality code?
  • How do we communicate with them?
  • How do we write specifications for them?
  • How can they work with our security model?
  • How can we make sure they can be productive quickly?

The approaches I have used to address these concerns are:

How do we get them to follow our standards?

  • We have extensive linting and all the rules are at the error level, none are warnings
  • We have an existing codebase with examples for most of the patterns they should use
  • We have 100% code coverage so they can develop in safety
  • We measure code coverage so we can see if they write enough tests
  • We use a css framework (mui) that they should follow
  • We have a “how to write code for us” page that defines all our rules and guidelines in one place


How do we get them to write tests?

  • We have 100% test coverage with many examples of how to test our app and our components
  • We measure test coverage and show it
  • We ask for tests in specifications and requirements
  • We look for tests in PR reviews


How do we get them to write the highest possible quality code?

  • Our codebase is clean and lean and follows our standards and principles
  • Our classes are small, and so are our methods, and they are all tested or purposely excluded
  • Our linting is extensive and strict
  • We require them to go through our PR process
  • We accept that refactoring by us (after the PR is merged) is common
  • We maintain a set of general coding guidelines for all contract developers to read and follow


How do we communicate with contract programmers?

  • Initially through the contract app, e.g. codementor.io, upwork, etc.
  • Then through a specification for them
  • Then through github PR’s for the code


How do we write specifications for them?

  • We write up a detailed spec for each piece of work with mocks, guidelines, data as appropriate
  • We include a template of basic must dos


How can they work with our security model?

  • We will create a system where no secure data can be accessed by developers
  • No secure data access keys or IDs will be given to contractors
  • We will make sure that developers can use secure approaches and still be productive


How can we make sure they can be productive quickly?

  • We document and then point them to our guidelines and rules
  • Our codebase is simple to read and understand for a newbie
  • We limit tribal knowledge
  • We adjust based on their feedback

The Magic Development Triad

Software development typically covers 3 areas:

  • Application Development
  • Testing and QA
  • Devops

In many organizations this involves different groups and leaders and teams.

Most of the development time is spent coordinating and communicating between these groups.

There has been much advancement in recent years in bringing application development and test automation development together in modern agile development practices. For the most part, devops still means a different team.

I find this model slow and inefficient in comparison to a different model that I now use, which I term the Magic Development Triad. Under this model, an application developer spends significant time and effort on each of three critical areas:

  • Writing application code
  • Writing automation code
  • Writing ci infrastructure as code

They don’t have to be exactly equal amounts of time. However, if you, as a developer, regularly spend 1-2 hours a week on CI infrastructure, 2-3 hours a week on automation code and 35 hours a week on application code, then a healthy balance is probably not present.

This touches on several of the key principles that now guide my development, such as:

  • Infrastructure Is the Work
  • Refactoring Is The Way (if you have good tests)
  • CI is the speed

I have found that operating under this model leads to development speed that is an order of magnitude faster.

Ubuntu Setup 2022

  1  Install Slack per slack website # Slack
  2  sudo apt install gnome-software # Gnome
  3  wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb # Chrome
  4  sudo apt install git # git
  5  get .ssh/id_rsa # git key secrets from another machine
  6  chmod 600 .ssh/id_rsa # set permissions - required to clone
  7  git clone git@github.com:durrantm/setups.git # my stuff
  8  cd setups/
  9  ./copy_all_dot_setup_files.sh
 10  sudo apt install curl # curl
 11  curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash # nvm
 12  nvm install 16 # nvm version 16
 13  sudo apt-get install vim
 14  sudo apt-get install tmux # + append tmux to end of .bashrc
 15  mkdir ~/Dropnot
 16  cd ~/Dropnot/
 17  git clone git@github.com:cryptoTradings/crypto_trading.git
 18  cd crypto_trading/paxos_trader_be
 19  cat .env # secrets from other machine
 20  Install Dropbox and sync most recent folders to start

Want change? Provide data!

In every organization I have worked in, changes in practice are usually the hardest to achieve.

In engineering I have observed two primary approaches to championing change.

The first is to provide the raw data and let engineers draw their own conclusions and then take actions based on them.

The second is to continually use a variety of emotional appeals: for the good of the team, the department, the company; loyalty to the owner; looking ‘better’ than others on a leaderboard in a competitive model. If tasks are not done, the emotional part of the message is increased to pressure action. This approach is frequently referred to as ‘alignment’ – essentially making sure the subordinate has got your message about what is important and is following your ‘clear’ direction.

In my experience, in a number of different areas and settings, I have observed that the first, data-driven approach works better to achieve the desired results when working with engineers. It requires a mature level of leadership that views interactions as partnerships rather than direction-giving exercises.

Discover or Predict

There has been considerable talk recently about software development having distinct phases.

First there is the discovery phase. Here the solution and timeline are unknown and must be discovered through multiple iterations and experiments, most of which should fail.

Second is the predictable phase. The architecture of the technology, the approach and the implementation have been determined, and development can proceed at a known, predictable pace.

This is wrong

This was the belief during the 1990s (in which I worked): the proposition that, with enough Gantt charts and Microsoft Project timelines, we would have high-quality software developed according to a fixed, predictable schedule. The actual result of this planning, coupled with the technology of the time (terminals, COBOL, mainframes, printed manuals), was that while timelines were (sometimes) kept, it was usually at a high cost to quality, leading to low-quality products that were buggy and hard to change, extend or enhance due to their rigid design and implementation.

The belief in predictability also relied on experienced employees not leaving the firm for any reason, recessions not happening, technology not changing, etc. In the real world all these things happen, which always led to unpredictable events that affected dates.

Then came the web. This upended the fixed technology landscape that had been in place, leading to the fluid and fast-changing landscape we see today, where constant innovation and change are essential.

And Agile was born (or at least codified under the new term). Agile recognized that software development is never predictable. It is always discovery. All the time. This changes the paradigm. To exist in this environment, businesses need to radically change how they develop products. This is where factors such as getting automated user feedback and empowering developers to innovate come to the fore, instead of more traditional targets, goals and KPIs. It is an environment where development practices such as feature flags, A/B testing, blue-green releases and canary deployments are used to manage change and delivery in a form that is usable for the business and its need for predictability (which has not gone away just because development is ‘Agile’).

We can’t do full Agile because…

This is the other truism that I have heard for the past 15 years. Regardless of company, industry, location, product, consumers or size, the following reasons for not being ‘able’ to truly embrace Agile are always presented:

  • Regulation with fixed requirements
  • Externally fixed dates for required functionality
  • Financial targets to meet for the quarter and year
  • Integrations with external partners that have to be coordinated

The answer is that Agile can, in fact, be done with all of the above factors in play. Much has been written by Silicon Valley founders about the lean development environments and people-empowerment models that support this, with humble and servant leadership.

The Management Challenge

This new environment presents an interesting challenge. If we can no longer depend on dates to drive development and give direction, how do we manage people to produce what we need for the business? This is where the other transformation of recent years comes into play – the employee empowerment approach, with decentralized and flat organizations. While the comparison is often made between traditional ‘waterfall’ and ‘agile’ environments, I think this misses the most important point: all environments actually have some elements of waterfall and some elements of agile development. The crucial difference for management, however, is the change from the previous paradigm of what is often called ‘command and control’. Those are ‘fightin’ words’ (who admits to them?), so it is easier to use the softer words we hear today – ‘clear direction’, ‘strong alignment’, ‘good focus’.

Quality – The Excuses

Everyone wants quality!

Some of the barriers to achieving it are:

  • We don’t allow time for writing good tests
  • The definition of done is missing the testing requirements
  • There aren’t existing test examples to copy to write new tests
  • We don’t use modern test frameworks and tools for our automation
  • PRs are big enough already without risky changes to improve quality
  • We don’t pay automation engineers as much as application engineers
  • We need to meet the application deadline regardless of test coverage
  • Testing and Automation is not a key business goal with OKRs and KPIs
  • We don’t include quality engineers as key stakeholders in work determination
  • We don’t invest enough money or people in the infrastructure needed for CI

Stop hiding the quality work

In software we use project management tools such as Jira, Trello, Pivotal Tracker, etc. to help us manage our work.

In recent engagements I’ve noticed a curious trend – hiding work.

Three common ways of hiding work are labels, subtasks and swimlanes.

The most common type of work that is hidden is the infrastructure work needed in order for a company to grow and thrive and realize its 100X dreams.

Infrastructure work may sound like the work to set up servers, databases, cloud providers, etc. While those elements certainly count, much of the infrastructure work happens in the teams doing product development itself: the folks writing application code, installing tools to make development easier, writing tests (or not) to make growth possible, and setting rules for how code is developed, reviewed and merged.

Infrastructure work is critical to company growth. Give it the recognition and respect it deserves by making it visible to everyone in the company. Teach the company owners about it and how it will enable the company to fulfill its dreams if the hard work is done now.

Quality Engineer – A vision

  • Work closely with product management to understand customer needs in depth and how implementations address those needs by providing a high quality product that adds value for customers
  • Work on continuous integration approaches that support a development “pipeline” that gives application engineers feedback in seconds and minutes for both local and remote CI
  • Work to promote an Agile Testing Pyramid
  • Work closely with application development to:
    • Create a plan during backlog refinement for how to test each feature or change at unit and integration levels, for example using Given, When, Then
    • Promote tools and approaches for higher quality code such as quality linting, static code analysis and code coverage
    • Eliminate unit test dependencies such as database or network
    • Promote application and test code that prefers full English names and avoids premature optimization
    • Address failed, pending or flaky tests at any level of testing
    • Create a robust system for test data for all levels of testing
    • Remove dependencies for UI unit testing such as database or authentication
    • Test every feature change from the start of the development prototype, not the end
    • Review and provide input for PR’s for application code and related unit and integrated testing at multiple levels
    • Create a documented standard for method parameters, testing for valid, invalid, blank, null, undefined and defaults
    • Work on Selenium-based tests that test key functionality in the UI, are part of the standard CI for application developers and run in less than 10 minutes
  • Manually test new or changed features in multiple devices with exploratory testing based on current and planned users and devices
  • Work to achieve a UI test reliability rate of 99.999%+ so that a delivery pipeline can be trusted for immediate feedback that is reliable
  • Support release management including on-call support rotation responsibilities
  • Promote BDD and TDD approaches including teaching and training them to team members
  • Promote and provide coaching for company wide quality practices such as continuous delivery, shift-left testing, 5 why’s, argument view swapping, cost/reward of testing, etc.

Quality KPIs

If the number of bugs isn’t a good measure of quality, what is?

The number of bugs is an outcome of a low-quality product.
Tests will not fix a low-quality product.
Especially if they are written by someone else, later in the process.

To improve quality and reduce bugs in software, measure and monitor over time:

  • Product usage by current production users
  • Features are used as imagined by the company
  • Measures are available for usage by key demographics and devices
  • Usage in relation to revenue, adoption or other measures or KPIs
  • Test suite length of run time
  • Application code unit test coverage
    • 100% should be the general rule (see the enforcement sketch after this list)
    • Test for parameters that are zero, missing, blank, null
  • Application code average LOC method/class sizes
  • Application code average method complexity
  • Average amount of time to fix a production bug
  • Mean time for tickets from entry to deployment
  • Mean time between production failures
  • CI UI automation code failure rate – target 0.001% (i.e. 99.999% reliability)
  • Pending tests – target zero
  • Size of backlog
  • Backlog size change over time – target zero
  • Application performance
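
For the coverage bullets above, one way to enforce the 100% rule automatically is Jest’s coverageThreshold option; a minimal sketch, assuming Jest is the test runner:

// jest.config.js - fail the test run if coverage drops below 100%
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: { branches: 100, functions: 100, lines: 100, statements: 100 },
  },
};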

Also pay attention to softer and more subtle factors that may be harder to measure such as

  • Naming objects well
  • Usability issues related to fonts and colors and sizes for all users
  • Domain specific usability issues for systems and ecosystems
  • Maximizing accessibility to help reach the largest market share
  • Maximizing accessibility for users with different physical abilities
  • Emotions related to color schemes
  • UI consistency
  • Verbal and interactive feedback from key users

Balancing all these different aspects is why quality is hard.

Hiring competent component devs

My vision

To hire developers from around the world who can work on software components for as little as $100 and deliver within hours

How I achieved it

  • I solved the test data and database requirements issue for authentication and authorization
  • I solved the local services authentication issue for remote workers
  • I am the lead dev, reviewing code, tests and functionality and providing a lot of gentle feedback
  • My main focus in code is how good tests are, how clear code is to read and how easy code is to change
  • I use strong code linting practices to standardize formatting
  • I use one place – github – for code, PRs, tracking features and issues, CI and projects
  • I use a service that provides programmers for hire by the hour and then I screen people through actual ticket work starting with simple tickets
  • I use GCP to deploy apps to public URLs for testing on any device / browser / version
  • I pay more, e.g. $400 for a feature, for additional work over time based on quality work they have already done
  • I accepted the humble premise that I am now hiring people who are better at specific pieces than me

Hiring programmers every week

I am hiring programmers every week. They complete assignments and my application continues to grow.

Sounds simple right?

Of course anyone who has tried to do this knows the burden – interviews, recruiters, coding tests, personality reviews; the list is endless.

Then there is the pay. I want to pay someone $500 for a piece of work. Not $150,000 to hire them for a year. Not to mention all the work that comes with actually hiring full time employees.

Finally, when you hire someone you have to take time to on-board them, equip them, train them.

That’s a lot. There is a better way, and I am using it.

I hire programmers to do units of work. There are a lot of parts to get right to make this happen, so I decided to detail them here:

  • A technical programmer lead to manage the infrastructure and work
  • Code, tickets, continuous integration all in github
  • A Slack workspace with github integration
  • The ability to run the UI locally using fake data*
  • The ability to deploy to a public URL for any device testing*

*It is the last two – running the UI locally with fake data, and deploying to a public URL – that distinguish my approach. By implementing fake data and deploying to GCP, I can avoid the considerable testing hurdles that challenge most organizations, which include:

  • Authentication and Authorization to access a secure API for the data
  • Running a server locally to request data
  • A database for test data
  • Credentials for the test database for authentication and authorization
  • Needing to use custom emulators such as Virtual Box, Android Studio, Xcode, etc to test apps locally

With all of these problems solved I can now hire a freelance programmer (I use codementor.io) and give them a slice of work with no need for credentials, databases, emulators etc. to do the work.

It also means that I can work on the application with no internet connection. That’s a big deal for me.

Finally, I don’t worry about revealing the source code, as I use the Google model: it’s the implementation of the business around the code that counts. I do still manage GitHub user access carefully.

This model is working well for me. Would it work for you?

I am a pattern programmer

Some folks are writing scripts, others are writing Object Oriented code, and yet others are excelling in the delights of functional programming. I use all these approaches, but my own approach to programming really matches none of them; it can best be described using a term I have decided to call pattern programming.

I look at code. I look at data. I look at use cases and tests. I look at classes and methods and I ponder.

What is the pattern here?
Is the pattern obvious from the existing code?
Are these high and/or low level patterns?
Is this a pattern I could use?
How would I need to modify this pattern to accommodate something different?
Should I abstract this pattern for reuse?
Can this pattern accommodate additional use cases?
Has usage of this pattern reached a point where we need to divide it into multiple other patterns?

These are questions I think about a lot.

Telling a data story with color

Providing higher level ‘meta’ information about detailed data can be clumsy.

One approach is to use groups with headers and totals.

Another approach is to use special characters and text to group items.

Another might be different bold background colors. Ugh

The above approaches can be rather clunky and lead to a cluttered, ugly display that is already busy with links and IDs.

Here’s an approach I like that uses text color and subtle grey shading.

It takes seconds to grasp and will quickly become a key tool for power users of the application in question.

From an example application I created, see if you can quickly and easily tell

  • Which transactions are part of batches? (hint: row background!)
  • How many BTC transactions?
  • How many SELL transactions?
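
Since the screenshot of the example application isn’t reproduced here, here is a minimal sketch of the idea in React-style JSX; the colors, class shape and fields are all illustrative:

// batch membership is signalled by a subtle grey row background;
// currency and side are encoded purely with text color
const rowStyle = (tx) => ({ backgroundColor: tx.batchId ? '#f2f2f2' : '#ffffff' });
const currencyStyle = (tx) => ({ color: tx.currency === 'BTC' ? '#f7931a' : '#333333' });
const sideStyle = (tx) => ({ color: tx.side === 'SELL' ? '#c0392b' : '#27ae60' });

const TransactionRow = ({ tx }) => (
  <tr style={rowStyle(tx)}>
    <td>{tx.id}</td>
    <td style={currencyStyle(tx)}>{tx.currency}</td>
    <td style={sideStyle(tx)}>{tx.side}</td>
    <td>{tx.amount}</td>
  </tr>
);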

Good advice from great people

Presence is a foundation for trust. The Mind of the Leader. Rasmus Hougaard.

Silence is a greatly underestimated source of power. Leading With Emotional Courage. Peter Bregman.

These are the four magic words of management: “What do you think?” Woody Morcott, former CEO, Dana Corporation.

The two biggest barriers to good decision making are your ego and your blind spots. Principles: Life and Work. Ray Dalio.

Remember, the clarity of your guidance gets measured at the other person’s ear, not at your mouth. Radical Candor. Kim Scott.

Foster a respectful, supportive work environment that emphasizes learning from failures rather than blaming. Accelerate. Nicole Forsgren, PhD.

Self-expression, experimentation, and a sense of purpose: these are the switches that light up our seeking systems. Alive At Work. Daniel Cable.

Being transparent and telling people what they need to hear is the only way to ensure they both trust you and understand you. Powerful. Patty McCord.

Speaking up is only the first step. The true test is how leaders respond when people actually do speak up. The Fearless Organization. Amy C. Edmondson.

Psychological safety is about candor, about making it possible for productive disagreement and free exchange of ideas. The Fearless Organization. Amy C. Edmondson.

Testing by itself does not improve software quality. Test results are an indicator of quality, but in and of themselves, they don’t improve it. Managing the Unmanageable. Steve McConnell.

Embracing radical truth and radical transparency will bring more meaningful work and more meaningful relationships. I have found that it typically takes about eighteen months. Principles. Ray Dalio.

At the heart of almost all chronic problems in our organizations, our teams, and our relationships lie crucial conversations. Crucial Conversations. Kerry Patterson.

While most managers, supervisors, and colleagues genuinely appreciate the people with whom they work, they often neglect to verbally express that appreciation. The 5 Languages of Appreciation in the Workplace. Gary Chapman.

A few simple, uncommon, powerful phrases that anyone can utter to make the workplace feel just a tiny bit more psychologically safe: I don’t know. I need help. I made a mistake. I’m sorry. The Fearless Organization. Amy C. Edmondson.

Activity is not the same as productivity. When we complete a task, even the smallest insignificant task like sending an email, dopamine is released in the brain. This can make the task addictive. The Mind of the Leader. Rasmus Hougaard.

You manifest what you model. Your people are not only watching your every move, they are emulating you. And, unfortunately, you don’t get to pick and choose which parts they copy. How F*cked Up Is Your Management. Johnathan Nightingale.

You have to accept that anger, for example, is not something you can eradicate from your life. Don’t fight against something you can’t change. What you can change are the thoughts which sustain anger. Rewire Your Mind. Steven Schuster.

Quality Code

It takes time to learn why Quality Code practices help the business succeed

  1. It meets the business purpose. This is always priority #1
  2. 100% test code coverage is the standard and the practiced norm.
  3. Code linting is extensive and all settings are fatal with no warnings
  4. TDD and BDD are truly practiced and operate in Agile environments
  5. Continuous Integration gives quick feedback that code runs elsewhere
  6. Most coding is refactoring existing code to be easier to change in the future
  7. Developer feedback is immediate w/editor autosave & test suites in seconds
  8. Tests follow a “500%” Positive, Negative, Blank, Undefined, Null testing pattern
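
As a sketch of point 8, here is what the “500%” pattern might look like in a Jest-style suite; the greet function and its expectations are hypothetical, invented purely for illustration:

// hypothetical function under test
const greet = (name) => {
  if (name === undefined || name === null || name === '') return 'Hello, stranger';
  if (typeof name !== 'string') throw new TypeError('name must be a string');
  return `Hello, ${name}`;
};

describe('greet', () => {
  it('greets a valid name (positive)', () => expect(greet('Ada')).toBe('Hello, Ada'));
  it('rejects a non-string (negative)', () => expect(() => greet(42)).toThrow(TypeError));
  it('handles blank input', () => expect(greet('')).toBe('Hello, stranger'));
  it('handles undefined input', () => expect(greet(undefined)).toBe('Hello, stranger'));
  it('handles null input', () => expect(greet(null)).toBe('Hello, stranger'));
});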

Two lines to remember

Remembering what users have typed in constantly used applications is very helpful for them and it’s actually very easy to do… just use the browser local storage API to maintain the state.

OK, that sounds a bit intimidating… put more simply… just change these two lines to achieve this!

This is for a React application that uses hooks to maintain state and has a commonly used input field whose value is tracked in a state variable called text.

First, change

const [ text, setText ] = useState('');

to

const [ text, setText ] =
  useState(localStorage.getItem('your-app-your-name') || '')

Then, when the user updates text, e.g. for an input field which has an event handler and currently maintains state, update the code of that handler to also update local storage

setText(text) // existing code
localStorage.setItem('your-app-your-name', text); // Add this line

That’s it! Now your users’ input will be remembered even if they close their browser, restart their machine, you deploy new code, etc. All with no login and no cookies!
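
Pulling the two lines together, here is a minimal sketch of a complete component; the component name, storage key and field are illustrative:

import { useState } from 'react';

function NoteField() {
  // initialize state from local storage, falling back to an empty string
  const [text, setText] = useState(localStorage.getItem('your-app-your-name') || '');

  const handleChange = (event) => {
    const value = event.target.value;
    setText(value);                                     // existing state update
    localStorage.setItem('your-app-your-name', value);  // persist across sessions
  };

  return <input value={text} onChange={handleChange} />;
}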

React gotcha (#1)

Here are two pieces of React code: one that works (showLinks) and one that does not (showAdminLinks).

However, the UI does not show the expected output, and no error is thrown, either at compilation or in the console. Hmmm.

So it seems quite a mystery.

Can you spot the issue?

Answer: It is the use of wrapping { and } instead of ( and ) around the JSX to be returned.
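
Since the original snippet was shared as an image, here is an illustrative reconstruction of the pattern; the names and links are hypothetical:

// works: ( and ) make the arrow body an expression, so the JSX is returned
const showLinks = () => (
  <a href="/home">Home</a>
);

// renders nothing: { and } create a function body, and with no explicit
// return statement the function silently returns undefined
const showAdminLinks = () => {
  <a href="/admin">Admin</a>
};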

This has caught me a few times. There are a number of formats that can be used here, including, but not limited to {(...)}, (...), return (...), return ({...}), etc.

This variety of formats can make it difficult for you and/or your IDE to spot the issue shown here.

Not seeing any error makes it sometimes hard to realize what the issue is.

There are a few React gotchas like this. This is listed as ‘number #1’ not to indicate that it is the “primary” React issue of this kind, just that it is the first of several React gotchas I am documenting for others… and my future self.

Docker Basics

For quick reference

Creating your own Docker Image:
Let’s start by creating a very simple Node app. To begin, create a directory – name it dockernode – and then initialize a new NPM project in it:
npm init -y
Next, add Express to it:
npm install --save express
Finally, create a server.js file and put the following code in it:
const express = require("express");
const app = express();

app.get("/", (inRequest, inResponse) => {
  inResponse.send("I am running inside a container!");
});

app.listen("8080", "0.0.0.0");
console.log("dockernode ready");
You can, at this point, start this little server:
node server.js
You should be able to access it at http://localhost:8080.
Of course, what it returns, “I am running inside a container!”, is a dirty lie at this point! So, let’s go ahead and make it true!
To do so, we must add another file to the mix: Dockerfile. Yes, that’s literally the name!
A Dockerfile is a file that tells Docker how to build an image. In simplest terms, it is basically a list of commands that Docker will execute, as if it were you, the user, inside a container. Virtually any valid bash commands can be put in it, as well as a few Docker-specific ones. Docker will execute the commands in the order they appear in the file, and whatever the state of the container is at the end becomes the final image. So, here’s what we need to put in this Dockerfile for this example:
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
COPY server.js ./
RUN npm install
EXPOSE 8080
CMD [ "node", "server.js" ]
The first command, FROM, is a Docker-specific command (the only one required, in fact) that tells Docker what the base image is. All images must be based on some existing image. If you want to start “from scratch,” the closest you can generally get is to choose an image that is nothing but an operating system. In this case, however, since we’re using Node, we can start from an image that, yes, has an operating system, but then also has Node already installed on top of it. Alternatively, we could start with an image like ubuntu, and then put the commands into the Dockerfile that would install Node (apt-get install nodejs), and we would wind up with an image that is basically the same as this. But let’s be lazy and use what’s already there!
The next command, WORKDIR, really does two things, potentially. First, it creates the named directory if it doesn’t already exist. Then, it does the equivalent of a cd to that directory, making it the current working directory for subsequent commands.
Next, two COPY commands are used. This is another Docker command that copies content from a source directory on the host to a destination directory in the image’s file system. The command is in the form COPY <src> <dest>, so here we’re saying to copy from the current working directory on the host (which should be the project directory) to the current working directory in the image (which is now the one created by the WORKDIR command) any file named package*.json (which means package.json and package-lock.json) and our server.js file.
After that, we must think as if we’re executing these commands ourselves. If someone gave us this Node project, we would next need to install the dependencies listed in package.json. So the Docker RUN command is used, which tells Docker to execute whatever command follows as if we were doing it ourselves at a command prompt (because remember that basically is what a Dockerfile is!).
You know all about the npm install at this point, so after this is done, all the necessary code for the application to run is present in the image.
Now, in this case, we need to expose a network port; otherwise, our host system, let alone any other remote systems, won’t be able to reach our Node app inside the container. The EXPOSE command is a simple matter of telling Docker which port to expose, which needs to match the one specified in the code, obviously.
Finally, we want to specify a command to execute when the container starts up. There can be only one of these in the file, but we can do virtually anything we want. Here, we need to execute the equivalent of node server.js as we did manually to test the app. The CMD command allows us to do this. The format this command takes is an array of strings where the first element is an executable, and all the remaining elements are arguments to pass to it.
Once that file is created, it’s time to build the image! That just takes a simple command invocation:
docker build -t dockernode .
Do that, and you should see Docker step through each command in the Dockerfile as it builds the dockernode example image.
Now, if you do a docker images, you should see the dockernode image there. If so, you can spin up a container based on it:
docker run --name dockernode -p 8080:8080 -d dockernode
At this point, the container should be running (confirm with docker ps), and the app should be reachable from a web browser. Also, if you do docker logs dockernode you should now see the “dockernode ready” string. You could attach to the container if you wanted to now and play around.
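
When you are done playing, you can stop and remove the container (and the image, if you no longer need it):

docker stop dockernode
docker rm dockernode
docker rmi dockernode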

Basic Typescript tsconfig.json

{
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "sourceMap": true,
    "outDir": "./dist",
    "strict": true,
    "noImplicitAny": true,
    "strictNullChecks": true,
    "strictFunctionTypes": true,
    "strictBindCallApply": true,
    "strictPropertyInitialization": true,
    "noImplicitThis": true,
    "alwaysStrict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  },
  "include": [ "src/**/*" ]
}
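
With this file at the project root, running the compiler should pick up everything under src/ and emit the JavaScript into dist/:

npx tsc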

Your own instant JSON server…

Here’s my zero install json server:
(Note that I am replacing my github username with ‘me’ and the repo is called ‘shoppingList’.)

http://my-json-server.typicode.com/me/shoppingList

with end points:

http://my-json-server.typicode.com/me/shoppingList/items

http://my-json-server.typicode.com/me/shoppingList/lists

What’s really going on here:

my-json-server.typicode.com is a handy service. It is effectively a running JSON server that uses YOUR GITHUB REPO for the data!

So basically:

  • You set up a github repo and in the root directory you place a db.json file
  • The db.json file has some simple JSON such as
    {
      "items": [
        { "id": 1, "listId": 1, "title": "Peas", "quantity": 1, "price": 6.99 },
        { "id": 2, "listId": 1, "title": "Rice", "quantity": 1, "price": 0.99 }
      ]
    }
  • You can now immediately use the endpoint for items, e.g. http://my-json-server.typicode.com/me/shoppingList/items
    (again, replace ‘me’ with your username and ‘shoppingList’ with your github repo name)

If you’re developing an app using data from a JSON API (kinda common these days…), now you have a perfect way to test and develop against a real server, where you control the data!
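
For example, pulling the items into an app is a one-line fetch (again, ‘me’ and ‘shoppingList’ stand in for your own username and repo):

fetch('https://my-json-server.typicode.com/me/shoppingList/items')
  .then(response => response.json())
  .then(items => console.log(items)); // → the two items from db.json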

React fetch Hook

Might as well have this for one of the most common operations we do!

import { useState, useEffect } from 'react';

export function useFetch(uri) {
  const [data, setData] = useState();
  const [error, setError] = useState();
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    if (!uri) return;
    fetch(uri)
      .then(response => response.json())
      .then(setData)
      .then(() => setLoading(false))
      .catch(setError);
  }, [uri]);

  return {
    loading,
    data,
    error
  };
}
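
A quick usage sketch; the component and endpoint are illustrative:

function ItemList() {
  const { loading, data, error } = useFetch('https://example.com/api/items');

  if (loading) return <p>Loading…</p>;
  if (error) return <p>Something went wrong.</p>;

  return (
    <ul>
      {data.map(item => <li key={item.id}>{item.title}</li>)}
    </ul>
  );
}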

We’re “Agile”

“We’re Agile”
or so every company in 2020 seems to say.  Followed by “we do standups, retros and backlog refinement!”.
However, in the next breath, most companies give the following reasons why they can’t ‘quite’ be Agile ‘the way it was intended’.
This is a most insidious falsehood, because the problems they list are EXACTLY THE PROBLEMS THAT AGILE ADDRESSES.
Let’s face it – if these factors didn’t exist at most companies we wouldn’t need Agile in the first place!
Having experienced the “we’re so agile” workplaces firsthand I have no desire to repeat the experience.

The main reasons that companies give for not being “truly agile”:

– We have deadlines (that we frequently miss)
– We have acquisitions (that we struggle to integrate)
– We are regulated (but we don’t understand controls well)
– We are constrained by HIPAA deadlines (and we don’t prepare in time)
– We are constrained by Sarbanes-Oxley (and we announce quarterly goals that we are then held to)
– Our compliance department needs controls (and we don’t recognize the ones IT uses anyway)
– We have a fixed IT budget (we don’t take into account the value that each individual IT worker brings)
– We are cost cutting (we are shortsightedly focusing on near-term results at the expense of long term profits).
– We have an IT hiring freeze (we are failing to recognize the power of our IT staff to generate revenue).
– We can’t hire talented people (we don’t create a welcoming culture for A talent).

To restate the point: these are the very reasons you use Agile – to solve THESE problems. It can be done. It has been, and is being, done.

People ≠ Resources

People are not Resources!

This concept has been going around the Agile community for a while and deserves more attention.

Referring to people as resources is dehumanizing, insulting and unprofessional.

  1. Why is this a big deal anyway?
  2. Isn’t this just political correctness?
  3. Projects need people, so why aren’t they considered resources to the project?

I believe words count. Every. Single. One.

Let’s step back for a second and look at the power and consequences of this ‘resource’ word.

Imagine a project where you need two programmers.

You say “we need 2 resources on this”.

Nice.  Easy.  Doesn’t really address any human factors mind you.

Trying again:

We need two people (at least one of which must be senior) on this project.

Nicer.  But still missing the picture.

Trying again:

We need two additional people on this project, so perhaps we could put Stacey and Aveal to work on it…

Although… Stacey is maxed out on projects X, G and K right now, so she’s only available about 10 hours a week.

Also, Aveal is on Paternity Leave until next month and will then be half-time for 3 months when he returns.

Hmmm, maybe we need to rethink this from a people perspective…

Using the term “resources” for people is easy.

But so very wrong. And misleading as to the actual plans that can be accomplished with actual people. Referring to Stacey and Aveal is reality.

Test Metric Development (TMD)

In Test Metric Development unit test coverage is the guideline.

This is unlike TDD, where the failing test is written first and then drives the design of the application code until it passes, which means close to 100% test coverage is hard to avoid.

In contrast, in TMD, the application code is written first, before any tests and without consideration of its testability.

Once the code appears to work, enough tests are added to keep the code coverage level at some artificially chosen metric.

This is TMD. It is a stage of software development maturity between BDUF (Big Design Up Front) and TDD (Test Driven Development).

js anagrams

Although I got this Heap’s algorithm routine from a blog post, I couldn’t help but make a few tweaks, as is usually the case. Nothing that affected or improved performance (based on some timing runs that I did); these were about readability, such as

– use array destructuring for the swap
– use Array.fill(0)

Before:

function swap(chars, i, j) {
  var tmp = chars[i];
  chars[i] = chars[j];
  chars[j] = tmp;
}

function getAnagrams(input) {
  var counter = [],
    anagrams = [],
    chars = input.split(''),
    length = chars.length,
    i;
  for (i = 0; i < length; i++) {
    counter[i] = 0;
  }
  anagrams.push(input);
  i = 0;
  while (i < length) {
    if (counter[i] < i) {
      swap(chars, i % 2 === 1 ? counter[i] : 0, i);
      counter[i]++;
      i = 0;
      anagrams.push(chars.join(''));
    } else {
      counter[i] = 0;
      i++;
    }
  }
  return anagrams;
}

After:
'use strict';

exports.getAnagrams = (input) => {
  const anagrams = new Array(input);
  const chars = input.split('');
  const counter = new Array(chars.length).fill(0);
  let j = 0;
  while (j < chars.length) {
    if (counter[j] < j) {
      const k = j % 2 === 1 ? counter[j] : 0;
      [chars[j], chars[k]] = [chars[k], chars[j]];
      counter[j]++;
      j = 0;
      anagrams.push(chars.join(''));
    } else {
      counter[j] = 0;
      j += 1;
    }
  }
  return anagrams;
};
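
A quick check of the routine (the require path is illustrative):

const { getAnagrams } = require('./anagrams');

console.log(getAnagrams('abc'));
// → [ 'abc', 'bac', 'cab', 'acb', 'bca', 'cba' ]  (3! = 6 permutations)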

Key Javascript DOM methods

The DOM is the Document Object Model of a page: the programmatic representation of the structure of a webpage. JavaScript comes with a lot of different ways to create and manipulate HTML elements (called nodes).

The following is a subset of some of the most useful properties and methods.

Key Node Properties

  • attributes — Returns a live collection of all attributes registered to an element
  • childNodes — Gives a collection of an element’s child nodes
  • firstChild — Returns the first child node of an element
  • lastChild — The last child node of an element
  • nodeName — Returns the name of a node
  • nodeType — Returns the type of a node
  • nodeValue — Sets or returns the value of a node
  • parentNode — Returns the parent node of an element
  • textContent — Sets or returns the textual content of a node and its descendants

Key Node Methods

  • appendChild() — Adds a new child node to an element as the last child node
  • cloneNode() — Clones an HTML element
  • insertBefore() — Inserts a new child node before a specified, existing child node
  • removeChild() — Removes a child node from an element
  • replaceChild() — Replaces a child node in an element

Key Element Methods

  • getAttribute() — Returns the specified attribute value of an element node
  • getAttributeNode() — Gets the specified attribute node
  • querySelector() — Provides first matching element
  • querySelectorAll() — Provides a collection of all matching elements
  • getElementsByTagName() — Provides a collection of all child elements by tag
  • getElementById() — Provides an Element whose id matches
  • getElementsByClassName() — Provides a collection of child elements by class
  • hasAttribute() — Returns true if an element has any attributes, else false
  • removeAttribute() — Removes a specified attribute from an element
  • removeAttributeNode() — Takes away a specified attribute node and returns it
  • setAttribute() — Sets or changes the specified attribute to a specified value
  • setAttributeNode() — Sets or changes the specified attribute node
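
A few of these in action; the element id is illustrative (createElement, used here to make the new node, is another standard document method):

// grab an existing list and append a new item to it
const list = document.getElementById('shopping-list');
const item = document.createElement('li');
item.textContent = 'Peas';
item.setAttribute('class', 'item');
list.appendChild(item);

// query it back out again
const firstItem = document.querySelector('#shopping-list li');
console.log(firstItem.textContent); // → 'Peas'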

Full list at JS Cheat Sheet

Javascript Today 03/04/2020

Featuring:

  • classes and methods
  • destructuring for array swap
  • getter for method returning a result

module.exports = class BubbleSort {
  constructor(ary) {
    this.contents = ary;
  }

  get bubbled() {
    const contents = this.contents;
    const size = contents.length;
    for (let outer = size; outer > 0; outer--) {
      // stop the inner loop at outer - 1 so contents[inner + 1] is always in bounds
      for (let inner = 0; inner < outer - 1; inner++) {
        if (contents[inner] > contents[inner + 1]) {
          this.swap(inner);
        }
      }
    }
    return contents;
  }

  swap(index) {
    const contents = this.contents;
    [contents[index], contents[index + 1]] = [contents[index + 1], contents[index]];
  }
};

It’s time to stop testing

Testing, as it has traditionally been done, may no longer be a good idea.

I am talking here about end to end, usually UI based, testing. Even the automated kind.  The kind that I myself have specialized in for several years!

Two simple facts cannot be ignored:

  • Quality for customers is determined by application code quality
  • Tests, in and of themselves, do not improve the quality of application code

These simple facts have bothered me greatly in my role as an automation specialist over the past few years.

One clarification – I’m not talking about Unit tests (including not talking about Unit Tests in the UI layer, i.e. Javascript).  Those are the tests that are written using TDD and thus must be written before the application code, initially failing, to drive the application code design and result in testable code that always has tests.  There is never a ‘no time for tests’ situation when you always write the test first.  The practice is harder to do than it is to write here but it can be adopted.  Those Unit tests are still essential and should never be skipped.

When working on application code itself, I have recently seen the considerable difference in quality between different approaches to writing ES6+ functional-style JavaScript. It is quite remarkable how many bugs can be avoided by using the modern constructs, along with the huge increase in readability and maintainability, all of which contributes to higher quality code and fewer bugs for customers.

For much of our industry, End To End testing has moved from manual to automated processes, and yet at company after company the approaches I encounter still reflect waterfall, command-and-control thinking: the UI tests are written by someone other than the developer, and the feedback to the developers comes days to weeks later from someone else who is in a defensive, checking, testing role, not a quality improvement role.

‘Our QA is now called QE and is embedded in the team’ is a common refrain. Unfortunately the next comments are often about how hard it is for them to keep up with developers when they write tests. Not to mention the fact that their best QE just got “promoted” to (application) developer and now earns $45,000 more per year. Actions always speak louder than words, and second-class citizen syndrome becomes rampant and accepted by all. “The QA person” carries a remarkable set of assumptions and triggers implicit biases (many based on real evidence) in our industry.

The other issue is that the testing code base itself takes more maintenance over time, quickly becoming an additional code quality issue of its own, needing tests to test the tests, and even tests to test those.

There are several key approaches that need to be adopted to address this change.  These approaches are well known by many organizations, however they still struggle to realize the changes that are needed in existing processes including architectural approaches.

The key new approaches are:

  • CI – Continuous integration to run all tests in the cloud for all branches during development
  • TDD measurements as KPIs, reporting and compliance measures
  • Immediate feedback from production customers by automated means
  • Canary Releases
  • Blue Green Releases
  • Feature Flags
  • Speed – Avoiding testing suite time lengths that continually grow
  • Continuous Deployment reducing MTTR (Mean Time To Recover)
  • Teams that promote contributions from all members and pay equitably

It’s exceedingly hard to do the above because most organizations default to continuing the previously developed testing characteristics of

  • Manually run automation and manual testing
  • TDD compliance not monitored as a KPI
  • Measuring bugs and focusing on speed of response to bugs
  • Production customer real-time automated feedback KPIs not shown in-house on primary displays to development teams
  • Test Suites that grow in length every week
  • QAs being treated as failed or junior developers who are paid less

and doing those activities quickly leaves no time for the previously mentioned ‘new approach’ activities that would actually be of more benefit in improving quality for customers. Change is hard, especially when it appears to mean less testing and more risk. Done correctly, it can actually mean more testing (more unit, less UI) and less risk, but moving to this model is very hard, and the more established the company and its software development shop, the harder it is. This is one of the key reasons that small companies and startups continue to disrupt: it is generally easier to adopt new practices than to change existing practices that were successful in the past.

To summarize, improve quality for customers with

Less End to End testing…

please!

and more quality activities such as…

Code Linting, Code Grading, Code Reviews, Code Education and Training, Immutable Objects, Internal Iterators, TDD Measurement, Well named objects and methods, avoiding premature optimization, Short methods, Short Classes, Single Responsibility, English readable, well stubbed and mocked, SOLID principle based code that is deployed in a modern CI/CD environment that provides immediate automated feedback based on customer activity and provides the facility to revert changes with minimal effort within seconds or minutes.

Works on ALL my machines!

It’s a familiar situation at work where code ‘works on my machine’… but when another developer runs it… or a staging deploy… or a production deploy happens, it doesn’t work on that machine. There are many practices to address this – virtual machines, Docker, automated tests, etc.

There is a similar situation in learning new technology skills: the same “I did it once, on my machine, and it worked, but when I tried later to do it, it didn’t work. I don’t remember exactly how I did it before and this time I encounter unexpected problems I didn’t experience before.”

This leads to a number of issues:

  • I typed something once.  I’m unlikely to remember that in a week
  • At some point I’ll try and use a different machine
  • Dependency hell – stuff seems to be ok but then on machine X it isn’t
  • I didn’t encounter any problems so didn’t learn how to get around them

To address this, I use a practice of switching machines 2-3 times a day.

This approach developed naturally over time to match my daily schedule, i.e. working from home on a desktop, working from a cafe on a laptop, and then working from home on the desktop again. As with other coding activities, I am addressing the pain point by doing the activity more often, not less. This invokes the lazy programmer in me, who will then address ephemeral issues such as local setup and gain the experience I need to walk into other situations and make progress, having encountered and conquered many different setup issues while learning. It also ups the amount of source code management that I do through git, which is always good practice. I recently stopped coding within a Dropbox directory ’cos I need to exclude node_modules/ and Dropbox doesn’t allow that (you’d have to selectively sync every Node project’s node_modules directory, which is waaaay too much config management for me).

git setups

It’s a small thing, but… when I get

There is no tracking information for the current branch. 
Please specify which branch you want to merge with. 
See git-pull(1) for details. 
git pull <remote> <branch> 
If you wish to set tracking information for this branch you can with
git branch --set-upstream-to=origin/<branch> master

it’s easy to fix ’cos in my ~/.gitconfig file I’ve got

[alias]
setups = !git branch --set-upstream-to=origin/`git symbolic-ref --short HEAD`

which  lets me simply type

git setups

to get

Branch 'master' set up to track remote branch 'master' from 'origin' by rebasing.
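
The alias works from any branch because git symbolic-ref --short HEAD resolves to the name of the branch that is currently checked out, which is then substituted into the --set-upstream-to argument.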