Computed Object Keys and Function Names In Javascript

For years, I wanted the ability to use variables as object keys in Javascript. Thanks to ES2015, we got the ability to have computed object keys from within the object definition itself.

This isn’t a new or cutting-edge addition; we’ve had it in Javascript for a while now and it is well-supported. The reason for talking about computed keys is that a lot of developers either don’t know about them or simply forget they exist.

In ES5, this wasn’t impossible, but you had to do something messier-looking, like this:

var variableValue = 'A VALUE FROM A VARIABLE.';

var myObject = {};

myObject['This is just: ' + variableValue] = 'But do not worry, it is just a test';

When ES2015 hit the scene, the above could be written like this:

const variableValue = 'A VALUE FROM A VARIABLE.';

let myObject = {
    ['This is just: ' + variableValue]: 'But do not worry, it is just a test'
};

But, we can make it a bit cleaner. Using template literal backticks, we can remove the string concatenation and do the following instead:

const variableValue = 'A VALUE FROM A VARIABLE.';

let myObject = {
    [`This is just: ${variableValue}`]: 'But do not worry, it is just a test'
};

I particularly find dynamic object keys useful when working with Aurelia or Vue.js class binding (for dynamic template classes on elements), or when I am working with Firebase and dynamic values.

And one of my favourite features of all is the ability to use this syntax with function shorthand, allowing you to have named functions:

const ADD_USER_FUNCTION = 'addUser';
const REMOVE_USER_FUNCTION = 'removeUser';

let methods = {
    [ADD_USER_FUNCTION](user) {
        // logic for adding a user goes here
    },

    [REMOVE_USER_FUNCTION](user) {
        // logic for removing a user goes here
    }
};
This is the kind of syntax that I use when working with state management libraries, as it allows me to name my getters, mutations and actions using constants for consistency.

How To Convert FormData To JSON Object

Recently, whilst working on a project, I needed to take HTML FormData and convert it to JSON to be sent off to an API. By default, the FormData object does not serialise to JSON the way you would want it to.

Using the for..of syntax introduced in ECMAScript 2015, we can access the form data entries and iterate over them to get key and value pairs.

const formData = new FormData(SomeFormElement);
let jsonObject = {};

for (const [key, value] of formData.entries()) {
    jsonObject[key] = value;
}

Calling entries on our form data object returns an iterable of key and value pairs we can then destructure. In the above example we store each pair in a plain object.

Fortunately, this isn’t a complicated problem to solve. If we had to do this without a for..of loop, then it wouldn’t be nearly as clean as the above solution is (which can still be improved).

Module ES2015 and TypeScript 2.4 Dynamic Imports

Introduced in TypeScript 2.4 is support for the ECMAScript dynamic imports feature. If you didn’t see the announcement or read it properly, you’re probably here because you’re getting the following error.

In my case I use Webpack and I was trying to add in some dynamic import goodness and getting this error: Dynamic import cannot be used when targeting ECMAScript 2015 modules.

TypeScript 2.4 dynamic imports error

You’re probably thinking this is crazy, considering dynamic imports are an ECMAScript feature, not a TypeScript one. The tell is in the error: if your module is set to es2015 you’re targeting ES2015 modules, and dynamic imports are a newer proposal that is not part of that module specification.

Funnily enough, the TypeScript team did reveal this in their official announcement but if you’re like me, you missed it the first time and hit this issue.

The fix is simply setting the module value in your tsconfig.json file to esnext, like this: "module": "esnext". If you’re using Visual Studio Code, you might get a squiggly in your tsconfig.json file telling you it’s not a valid value, but ignore it because it is.

Efficiently Looping A Javascript Array Backwards

Most of the time when you’re looping over an array of elements, you’ll do it sequentially. However, I recently needed to iterate through an array of objects in reverse just using a plain old for loop.

This will not be a highly informative or groundbreaking post, but I thought I’d share it in case someone needs to solve the same problem and is confused by the many different ways you can loop over an array in reverse.

I had an idea in mind; I knew Array.reverse would be the ideal candidate, but I still Googled to see if developers smarter than me had figured out something better.

Turns out, there are a lot of scary alternatives to looping an array in reverse (mostly on StackOverflow). Some people are proponents of decrementing and using while loops, others had different ideas. Why not just use a function that’s existed since the dawn of Javascript?

var myItems = [
    {name: 'Dwayne'},
    {name: 'Rob'},
    {name: 'Marie'},
    {name: 'Sarah'},
    {name: 'Emma'},
    {name: 'James'}
];

var itemsToIterate = myItems.slice(0).reverse();

for (var i = 0, len = itemsToIterate.length; i < len; i++) {
    var item = itemsToIterate[i];
    console.log(item.name); // do something with each item
}

In our example, we take an array of items and then we use slice to make a copy of our array starting at its first index (zero). Then we call reverse on the cloned array.

I said efficient in the title, but I haven’t benchmarked anything. However, we are using a barebones for loop and you don’t really get much faster than that. Sometimes common sense and readability beat micro-optimisation.

The reason we copy the array is so we don’t modify the original. Using slice effectively clones the array and gives us a new instance; there are other ways of doing the same thing, but I find this way the cleanest.

Without slice, you’d be modifying the array passed into our function and might produce unintended results. I tend to keep my functions pure for this kind of thing; nothing that comes in as input should be modified.

Thanks to reverse flipping our array upside down, we iterate like we would normally. Breaking out the reverse functionality into a function might also be a great idea. This would allow us to easily test our functionality from within a unit test.

function reverseArray(array) {
    return array.slice(0).reverse();
}

One thing to keep in mind: for my use-case, slice worked. If you’re dealing with arrays containing object references, nested arrays or complex objects, slice does not do a deep copy.

Lodash has some great methods for doing recursive and deep cloning of arrays and collections if you need that kind of power. Post your thoughts and suggestions in the comments below.

Configuring Git Pre Commit Hook And TSLint (automatically)

If you’re a TypeScript user and you’re reading this, then you’re using TSLint (most likely). Recently, a situation at work arose where even though TSLint warnings were being thrown in the editor as well as terminal output, some developers (tsk tsk) were still committing these warnings.

Naturally, a pre-commit Git hook is the right candidate for this: running TSLint before a commit is even created ensures that only code conforming to the tslint.json rules can be committed, let alone pushed.

This poses another problem. You can’t automatically add pre-commit hooks into the repository and have everyone automatically pull them down. This is for security reasons, could you imagine if someone committed a hook that deleted a bunch of files/folders?

If you’re using a task runner like Gulp or Grunt, then you can create a clever task that copies a file to the .git/hooks directory for you.

Firstly, let’s create a pre-commit hook. In the root of your application create a new folder called hooks and a new file called pre-commit (with no file extension):


#!/bin/sh

TSLINT="$(git rev-parse --show-toplevel)/node_modules/.bin/tslint"

for file in $(git diff --cached --name-only | grep -E '\.ts$')
do
        git show ":$file" | "$TSLINT" "$file"
        if [ $? -ne 0 ]; then
                exit 1
        fi
done

Git hooks are actually shell scripts and can be quite powerful. We build a path to TSLint inside our local application (some Git clients like GitHub for Windows require this) and use that to call TSLint on our staged files.

Secondly, let’s create our task. I personally use Gulp, but you can easily adapt the following to any task runner:

var gulp = require('gulp');

gulp.task('install-pre-commit-hook', function() {
    return gulp.src('hooks/pre-commit')
        .pipe(gulp.dest('.git/hooks/'));
});

gulp.task('default', ['install-pre-commit-hook']);

Running gulp or gulp install-pre-commit-hook will copy our pre-commit hook into the .git/hooks directory. On Unix-based operating systems you may also need to adjust the file permissions using chmod, which the fs module offers a method for, but this is possibly not needed.

Now, throw that task into your Npm build script and anyone else who has the latest changes will get the pre-commit hook every time the task runs. No more warnings from code written by others clogging up your terminal or editor.

Dealing With Tslint Errors/Warnings In Third Party Files

This might be a bit of an edge case for some, but recently I needed to use a third-party script in my Aurelia TypeScript application that wasn’t installable through Npm.

I could have carefully changed it to conform to my TSLint guidelines, but that would have been more effort than I wanted to spend. I just wanted to include the file in a vendor folder and then import it without worrying how it’s written, whether it uses single or double quotes or what indentation setting it uses.

Enter TSLint rule flags.

All you need to do is put /* tslint:disable */ at the top of your file and TSLint will ignore everything after it. You can also selectively ignore certain parts of a file by using /* tslint:disable */ and /* tslint:enable */ to turn checking off and then back on.

You can use TSLint rule flags to ignore certain rules as well, for cases where the rules don’t apply.

Sorting By Vote Count & Recently Added Date In Javascript

Recently whilst working on my web app Built With Aurelia I encountered a situation where I needed to sort by the highest upvoted items, but also sort secondly by the recently added items.

The collection of items would look like this:

  • Item 1 (5 votes)
  • Item 2 (3 votes)
  • Item 3 (2 votes)
  • Item 4 (2 votes)
  • Item 5 (1 vote)
  • Item 6 (1 vote)
  • Item 7 (0 votes)
  • Item 8 (0 votes)
  • Item 9 (0 votes)
  • Item 10 (0 votes)

The code that I wrote to achieve this uses the sort method in Javascript:

this.projects.sort((a, b) => {
    return parseInt(b.votes, 10) - parseInt(a.votes, 10) || a.added - b.added;
});

Essentially, if the vote counts for items a and b differ, the sort is based on the vote count. If the vote values are the same, the first expression evaluates to 0 (falsy) and the second expression is used instead.

Technically you could have multiple variables for sorting, but in my case I just needed to sort by vote count and then sort by recently added date (Unix timestamp).

Const or Let: Let’s Talk Javascript

Two of my favourite additions to ECMAScript 2015 were the const and let keywords. Sadly, I see them being misused quite often on many public repositories and even in some projects I have worked on.

Const (constants)

If Javascript is not the first language you have worked with, then you will be familiar with constants from other languages like Java or PHP. A constant is a read-only variable binding: once assigned, it can never be reassigned (hence why they’re called constants). Also worth noting: like let variables, constants are block scoped.

But developers are using constants for the wrong reasons. Just because a constant is a read-only reference to a value does not mean the value itself is immutable. Constants only protect you from reassigning the binding, not from mutating the value within, such as a Map, Array or Object.

If you come across const myConfig = {myProp: 'somevalue'}, one might assume it would never be allowed to change, but that is not how Javascript works. You can’t assign a new object to the constant, but you can change the properties of the object it holds, because the value inside of the constant is not immutable.

This will work:

const myConfig = {
    someProp: 'myvalue'
};

myConfig.someProp = 'fkjsdklfjslkd';
myConfig.aNewProp = 'I did not exist when initially defined';

This will not work:

const myConfig = {
    someProp: 'myvalue'
};

myConfig = {}; // Cannot reassign a constant value

You can, however, make an object immutable using Object.freeze, which makes mutating the object inside our constant impossible. Note that Object.freeze is shallow: if your object contains objects of its own, those will not be frozen, so you will need to call Object.freeze on them as well.

If we take that first example and use Object.freeze we get our desired outcome of making the object immutable as well.

const myConfig = {
    someProp: 'myvalue'
};

Object.freeze(myConfig);
// Original value remains intact
myConfig.someProp = 'fkjsdklfjslkd';

// New property will not be added
myConfig.aNewProp = 'I did not exist when initially defined';

console.log(Object.isFrozen(myConfig)); // true

My personal use of constants is for avoiding the use of magic literals in my code. See below for an example where I am wanting to see if a user has pressed the escape key on their keyboard.

I know that the escape key value is always going to be “Escape” in supporting browsers, and in browsers still using keyCode it will always be 27. These values never change, so I define them as constants and use those when comparing. It makes my code far more readable.

const ESC_KEY = 'Escape';
const ESC_KEY_CODE = 27;

document.addEventListener('keydown', evt => {
    let isEscape = false;

    if ('key' in evt) {
        isEscape = evt.key === ESC_KEY;
    } else {
        isEscape = evt.keyCode === ESC_KEY_CODE;
    }

    if (isEscape) {
        window.alert('Escape key pressed');
    }
});

The bottom line with constants is to be careful that you don’t fall into a false sense of security. They don’t necessarily protect your value from being mutated, especially if used with objects, arrays and so on.

If you need to use an object with const for whatever reason, make sure you freeze it (and any child objects) to make it immutable.

Let (block scoped variables)

By default, standard var variables in Javascript are function scoped, whereas in other languages like Java, variables are block scoped.

Function scoped variables have been causing headaches for us front-end developers since the dawn of time because they essentially allow you to clobber any existing values or spam your functions with additional variables.

My favourite use of let is in loops. A standard for loop might look something like this:

var i = 100; // This is a bit contrived
var array = [1, 2, 3];
for (var i = 0, len = array.length; i < len; i++) {
    console.log('Loop: ', array[i]);
}

console.log(i); // This will print 3

Let’s clear a couple of things up first. I don’t define an index variable like that, but imagine you were using a variable with the name increment instead and it was defined at the top of your function and used by other parts of your code.

Then you come along and write a for loop, don’t realise you already have an increment variable, and you end up overwriting the existing one. All of a sudden your original value is gone.

Enter block scoped variables

var i = 100;

var array = [1, 2, 3];
for (let i = 0, len = array.length; i < len; i++) {
    console.log('Loop: ', array[i]);
}

console.log(i); // This will print 100

You already have a variable called i defined and hoisted to the top of your function. Then you define a loop, use let with the same name, and the original value remains intact. Why? Because block scoped variables constrain themselves to the nearest opening/closing curly braces {}, so this i variable is only accessible inside the loop.

Another example is declaring variables inside of if statements:

let myNumber = 500;

if (myNumber === 500) {
    let myNumber = 1000;
    console.log(myNumber); // Now 1000
}

console.log(myNumber); // Keepin' it 500

I generally try to use block scoped variables as much as I can; it saves me a lot of problems. Scoping variables to blocks also makes the intent of your code clearer and easier to read.


Both const and let are block scoped. Use const for values that should never be reassigned: text labels, numbers and anything that never needs to change. Use let to limit the scope of variables to where they are actually used, avoiding global and function scoped variables almost entirely.

Getting Visual Studio Code To Work With Next Versions of TypeScript

Recently whilst helping out a client with an Aurelia TypeScript project, I encountered a situation where the latest development version of TypeScript 2.0 was being used, but some of the newer features like filesGlob support were not being picked up. Module resolution and other things were also an issue.

Turns out you can configure Visual Studio Code to use a local version of TypeScript through a setting directive inside of a project settings file.

Firstly, make sure you have TypeScript installed in your project, I installed via npm install typescript@next -D which saves it as a development dependency.

The best way to make sure this works in your project is to create a folder (if it doesn’t already exist) called .vscode and inside there create a file (if it doesn’t already exist) called settings.json and put the following inside of the curly braces:

"typescript.tsdk": "./node_modules/typescript/lib"

This will then ensure that your Visual Studio Code editor uses your locally installed version of TypeScript, as opposed to its bundled version, which could be out of date.

Using Async/Await In Aurelia

One of the biggest upcoming additions to Javascript (in my opinion) is support for async/await. If you have worked with a language like C# before, then you will immediately know why async/await is fantastic.

Currently, async/await is at stage 3, which means it is almost a completed specification. At the time of writing this post, Microsoft’s Edge browser is surprisingly the only browser that has added support for async/await.

Support alert:
While async/await will hopefully be standardised through the TC39 process by the end of 2016, you still need to use a transpiler like Babel to add support. At the time of writing, TypeScript lacks support for async/await when targeting older ECMAScript versions, so you need to use Babel with it.

Basic Use With A Promise

Remember, async/await is built on top of promises; it doesn’t replace them, but rather enhances them. Instead of chaining a whole bunch of then callbacks, we can write code that looks and reads synchronously but, as a whole, is actually asynchronous (like a promise).

export class MyViewModel {
    getData() {
        return new Promise((resolve, reject) => {
            // http is either the aurelia-fetch-client or aurelia-http-client
            this.http.get('/mydata').then(data => {
                resolve(data);
            }, error => {
                reject(error);
            });
        });
    }

    async decorateData() {
        let data = await this.getData();
        // The await above halts execution until getData
        // resolves, then the function continues

        return {version: 1, data: data};
    }
}

Async/await with Aurelia lifecycle methods

Aurelia allows you to use promises inside of router lifecycle methods like canActivate and activate. This means we can also specify them as async functions and use await to halt activation until data is loaded.

export class MyViewModel {
    myData = [];

    async activate() {
        // http is either the aurelia-fetch-client or aurelia-http-client
        let response = await this.http.get('/mydata');

        this.myData = response.content;
    }
}


Error Handling Async/Await

You might notice in the above examples we have no concept of error handling. When working with just promises we generally use .then to capture a successful promise resolution or we use .catch to catch any rejected promises.

The way errors are captured in async functions is similar to how you might catch errors with other bits of code you are running in your application (not promise specific).

export class MyViewModel {
    getData() {
        return new Promise((resolve, reject) => {
            // http is either the aurelia-fetch-client or aurelia-http-client
            this.http.get('/mydata').then(data => {
                resolve(data);
            }, error => {
                reject(error);
            });
        });
    }

    async decorateData() {
        try {
            let data = await this.getData();
            return {version: 1, data: data};
        } catch(error) {
            return null;
        }
    }
}

Multiple Awaits

It isn’t uncommon to have a chain of promises all feeding off one another. In an application I am working on, I have one such scenario where pieces are loaded based on previous data; this requires chaining a lot of thenables on my promises and it looks horrendous.

I don’t know what you prefer, but I think it is pretty obvious which one wins out of the two equivalent examples (one using async/await and one using just promises).

Using Async/Await

The following example is a little contrived (it could actually be simpler), but I wanted to make sure it was obvious what is going on.

export class MyViewModel {
    async doStuff() {
        try {
            let data1 = await this.http.get('/myData/1');
            let data2 = await this.http.get('/myData/2');
            let data3 = await this.http.get('/myData/3');

            this.data1 = data1.content || '';
            this.data2 = data2.content || '';
            this.data3 = data3.content || '';
        } catch(error) {
            // handle the failed request here
        }
    }
}

Using Promises

export class MyViewModel {
    doStuff() {
        return Promise.all([
            this.http.get('/myData/1'),
            this.http.get('/myData/2'),
            this.http.get('/myData/3')
        ]).then(responses => {
            this.data1 = responses[0].content;
            this.data2 = responses[1].content;
            this.data3 = responses[2].content;
        }).catch(error => {
            // handle the failed request here
        });
    }
}


With the exception of being able to make Aurelia lifecycle methods async, there isn’t anything you specifically need to do to use async/await in your Aurelia application (other than needing a transpiler).