Make your project accessible to newcomers

Git is an indispensable tool for recording the history of our source code. This history increases in value as the project gets older; it is a unique archive of collaboration and hard work that describes how the project became what it is today.

On the frontend we talk about accessibility a lot, but I’d like to talk about a different kind of accessibility: when a new person joins the team and starts working on the project, they should eventually be able to figure most things out on their own, regardless of their skill level.

In my role as a DX developer I had time to fully focus on making development easier because I wasn’t delivering features anymore. To be clear, this kind of work is not for everyone; many people don’t enjoy configuring the development environment.

The projects most vulnerable to being inaccessible are the oldest ones where the team has changed little or not at all. Everyone either figured things out eventually or created the project themselves, so they never had to figure anything out, and either way they no longer see the project objectively.

Employees rotate teams, they come and go, so it’s essential that they can start developing fairly quickly. What were newcomers struggling with the most? Make it more obvious. If you’re the newcomer, once you figure something out, write it down and fix the problem for the next person. If it’s a quick fix, do it immediately, because you’ll forget. Once you figure out a small blocker, it instantly stops seeming like a big deal, but you still had to ask a human, and that should happen as rarely as possible.

People shouldn’t have to ask questions too often. Just like people should be able to use apps without reading a manual, a project should be self-explanatory, and information should come when people need it.

Examples in this blog post are taken from JavaScript projects, but the point extends to all types of projects.

Progressive disclosure

In interaction design there is something called “progressive disclosure”:

Progressive disclosure is an interaction design pattern that sequences information and actions across several screens (e.g., a step-by-step signup flow). The purpose is to lower the chances that users will feel overwhelmed by what they encounter.

I see no reason not to follow this design pattern when designing a project’s build environment! Avoid having dozens of npm scripts of which only a few are meant to be run by humans. Narrate their logic progressively.


Don’t cram stuff in just because it fits; think about readability. Just because the entire logic of an npm script can be written inline on the command line doesn’t mean that it should be. I copied the following from a project I’m working on:

  "scripts": {
    "build:module": "cross-env BABEL_TARGET=\"js-esm\" babel --out-dir es --ignore **/*.stories.js,**/*.test.js,**/*.storyshot.js,**/__examples__/*.js,**/examples.js src && yarn copy:module",
    "copy:module": "copyfiles -u 1 'src/**/*.md' es && copyfiles -u 1 'src/**/*.js.flow' es && copyfiles -u 1 'src/**/*.json' es"
  }
One could argue that spelling out the logic of every script makes it easier to understand how to use the project, but I can barely read the scripts above, and once I figure them out I don’t need to see them over and over again. Another thing is that I will never have to run copy:module myself; it’s only there so build:module can run it, and yet it crowds up the place. I personally vote for keeping only the npm scripts which are meant to be run by humans. One way to achieve this is to move the underlying logic to gulpfile.js, where you have much more space to express yourself, write nicer code, and export only the tasks that you want people to run. It’s especially useful for task composition, i.e. running tasks in series or parallel, instead of using npm-run-all. Read Managing complex tasks with gulp v4 if you’re interested in this approach.
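As a rough sketch of that idea (the task names and globs here are assumptions, loosely modeled on the scripts above):

```javascript
// gulpfile.js
const { src, dest, series } = require('gulp');

// Internal step: not exported, so it doesn't show up in `gulp --tasks`
// and nobody is tempted to run it directly.
function copyModuleFiles() {
  return src(['src/**/*.md', 'src/**/*.js.flow', 'src/**/*.json']).pipe(
    dest('es')
  );
}

// Internal step: the babel compilation would go here.
function compileModule(done) {
  done();
}

// Only the composed, human-facing task is exported.
exports.build = series(compileModule, copyModuleFiles);
```

The npm script then shrinks to `"build:module": "gulp build"`, and the list of visible tasks matches the list of tasks people are actually meant to run.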


Documentation always helps, especially when it’s as close to the code as possible, ideally in the form of comments. Otherwise it’s prone to getting outdated, and people tend not to trust it; at least I don’t.

However, a good naming strategy and making the codebase easier to navigate are always better than descriptive documentation, and sometimes you can’t really add documentation where you want to. One example is package.json, which cannot contain comments, and if you document that stuff elsewhere, it’s almost bound to get outdated, because as we change npm scripts we’ll forget to update the documentation.

There are situations when you can update parts of the documentation automatically; take those opportunities to decrease maintenance. One example is documenting which browsers your project supports: you can generate this entirely from your browserslist configuration.

Enforcing requirements and limitations

Run checks automatically; don’t give people the opportunity to forget to run some lint script. Today we have tools like husky and lint-staged, so there is no need to leave this up to chance. Just like it’s best not to put off until tomorrow what you can do today, don’t put off until CI what you can do in a git hook like pre-commit or pre-push.
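For example, a package.json setup along these lines (husky v4 style configuration; the glob and command are assumptions) lints exactly the files being committed:

```json
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged"
    }
  },
  "lint-staged": {
    "*.js": "eslint --fix"
  }
```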

Are there some essential checks that need to be run before publishing a package? Add a prepublishOnly npm script and stop people from publishing a broken package! That way the script also doubles as a way to educate them about those checks. See the other npm lifecycle scripts to find exactly the ones you want; for example, people often mistakenly use prepare instead of prepublishOnly, which runs a bunch of build and test scripts every time dependencies are installed.
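As a sketch (the script names here are assumptions):

```json
  "scripts": {
    "prepublishOnly": "yarn lint && yarn test && yarn build"
  }
```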

Try enforcing all technical requirements, and use documentation only for high-level guidelines.

If you noticed that your development environment fails on Yarn versions lower than 1.22, you can enforce that requirement. One way to do this is by installing the desired version in the project itself:

# alias for "yarn policies set-version" introduced in yarn 1.22
yarn set version 1.22

Another way of enforcing the limitation is to use the engines field in package.json instead:

  "engines": {
    "yarn": ">=1.22"
  }

This will cause yarn install to fail for people who have a lower version of Yarn. But there is a side effect here: if you’re publishing your project, i.e. you’re working on a library, this limitation will transfer to the consumer! That is useful when the published code depends on a certain Node version range, but not for package manager requirements like npm and Yarn versions, so use engines for those only in apps or in the root package.json of a monorepo.


If something is rightfully failing but people don’t understand why, add a better error message! Good error messages are not only for users; they are also for developers, especially newcomers.


Add tests for complex parts of your build environment. It sounds a little meta and possibly like overengineering, but it will prevent build problems in the future, which can be even harder to detect than problems in runtime functionality.

Beware of the NIH syndrome

If an existing Node package is popular, that usually means it’s well-documented and well-tested; it’s unnecessary to find out the hard way why those tests exist. There is no shame in deleting code and replacing it with an existing solution.

Don’t shadow underlying executables with npm scripts

Don’t give an npm script the same name as an underlying executable, like eslint: I expect yarn eslint to be the same as running node_modules/.bin/eslint directly, without additional options. While this problem could be avoided with npx eslint, npx also doubles as a command which downloads ESLint if it isn’t present, which is unwanted in this case; I want the command to fail when the ESLint executable isn’t present.
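To illustrate (the options and path here are made up), instead of shadowing the executable like this:

```json
  "scripts": {
    "eslint": "eslint --fix src"
  }
```

give the script a distinct name, so yarn eslint keeps its default meaning:

```json
  "scripts": {
    "lint": "eslint --fix src"
  }
```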


By improving your project’s accessibility you’re improving the developer experience (DX) and the likelihood that people will stay, but also the project’s resilience to inevitable changes in the team structure. If your project speaks for itself, you have to spend very little time onboarding new team members, which is a win-win.


