When To Use State Management In Front-end Applications?

As ubiquitous as state management has become in front-end development, it is still a confusing magical black box to most developers. Data goes in, data goes out, and nobody thinks about what happens in-between.

Some developers believe the answer to the question in my title is: always. Others don’t believe in using state management at all. And if you’re like me, the answer is: it depends.

When state management gets added to an application that meets the criteria for using it, a weight gets lifted off your shoulders, and things make sense. Prematurely introduce state management or use it in places where you shouldn’t, and your life becomes a tangled mess.

The complexity of state management starts to get even more confusing when questions arise around best practices for working with API endpoints or dealing with forms. The answers are primarily opinion-based once again.

Avoid state management for forms

I cannot stress this enough. I have seen developers implement hacky solutions to working with form inputs and state management, and it’s a clear case of the right tool for the right job. While Redux and other state management solutions have plugins for dealing with forms, why inflict pain on yourself unnecessarily?

You might not agree with me on this one, and that is okay. However, every single time I can recall seeing state management coupled with forms, it was unnecessary. You only have to Google the tonne of people asking for help getting state management to work with forms to see why you shouldn’t.

Forms are almost always ephemeral state, meaning the data only exists temporarily. An example might be a login form with a username and password, or a form for adding a new product to your store. You enter the data and dispatch an action, the form gets cleared, and that’s it.

Instead of replicating and nesting properties in a massive state tree for one specific part of your application that some users might not even use, use local state instead. If you’re working with React, this would be local state within a component (using something like the useState hook) and similar with Aurelia or Vue, local state within your view-model or component.
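
To illustrate, here is a minimal React sketch (the onLogin callback is a stand-in for whatever your app does with the credentials). The values never leave the component, and nothing gets written to a global state tree:

import { useState } from 'react';

// The username and password only exist for the lifetime of this
// component, so local state via the useState hook is all we need.
export function LoginForm({ onLogin }) {
    const [username, setUsername] = useState('');
    const [password, setPassword] = useState('');

    const submit = (event) => {
        event.preventDefault();
        onLogin({ username, password });

        // Ephemeral state: clear the form and move on.
        setUsername('');
        setPassword('');
    };

    return (
        <form onSubmit={submit}>
            <input value={username} onChange={e => setUsername(e.target.value)} />
            <input type="password" value={password} onChange={e => setPassword(e.target.value)} />
            <button type="submit">Log in</button>
        </form>
    );
}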

Just because you can doesn’t mean that you should.

Working with APIs

Depending on your state management solution of choice, the approach for working with APIs can vary based on plugins and workflow. However, the principle is the same. Your action(s) make an API request and update the state, or you make the request and dispatch an action with the response.

I know in Vue’s Vuex state management plugin, many in the community advocate for making your API requests inside of your actions. There isn’t anything wrong with that; however, in Aurelia’s state management library Aurelia Store, I advocate for making the request and then notifying the store.

It doesn’t matter so much how your data gets into the state; what kind of data you are putting into the state is what truly matters.

Do I need this data again, will I use it more than once?

State management is about recycling your data. Will you need that value again in other parts of your application? Use state management. Do you only need to store the value temporarily and reference it in a specific component, only for it to be discarded shortly after? Don’t use state management.

The following question should be the litmus test you apply to your development workflow: will you need this value again, and will you need it in other parts of your application? Type it up, print it out and hang it on your wall.

The purpose of state management is not to play the role of “random kitchen drawer full of miscellaneous items”. It exists to make cross-component and cross-application data access easier, as well as ensuring the integrity and shape of the data remains intact (in part because Javascript passes objects around by reference).

Using GraphQL?

You might not need state management at all. GraphQL offerings like Apollo offer an all-in-one package for working with data, including state management like functionality that makes syncing and working with your GraphQL server a breeze.

While there is nothing stopping you from using GraphQL with state management libraries, and some GraphQL clients might require them to meet your needs, in many cases you only need one or the other.

State management can introduce unnecessary complexity

If you have ever seen a React + Redux application, you know what I am talking about: a mess of folders and files scattered throughout your application. You have to open up seven files to change something, and it’s a tonne of cognitive overload.

Something I want to make very clear here: the complexity of using something should never be the deciding factor in whether to use it. The next time you start on a new application, don’t be so quick to add in state management, but don’t leave it too late either.

If you’re validating an idea or prototyping, having to write all of the boilerplate most state management libraries require can slow you down. Sometimes you need to be “agile” and flexible, and state management can be quite rigid, the opposite of that.

When it comes to state management, do what works for you. Trust your intuition, and if something feels complicated and unnecessary, your gut instinct is probably right. Posts like these are great as a guide, but ultimately you should never take everything as gospel.

Default Exports = Bad

Hello humans. In JavaScript, the world’s most loved and the internet’s favourite client-side language, we have default and named exports, thanks to modern ECMAScript standards.

It’s simple: you have a file that exports something to be imported somewhere else. A named export is explicit and is only importable by its defined name. A default export is implicit, and you can import it and call it whatever you like.

Now, default exports came about in the CommonJS world of Node.js, where you would import a module using const MyModule = require('my-module') to account for modules whose entire export is the default: module.exports = MyClass. Although, it is worth pointing out that CommonJS does support named exports as well.
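
To make the distinction concrete, here is a quick sketch (the file names are invented for illustration):

// my-module.js
export const namedThing = () => 'named'; // a named export

export default class MyClass {} // the default export

// consumer.js
import { namedThing } from './my-module'; // must match the exported name
import AnythingILike from './my-module'; // default: any name you like works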

The most persuasive case for named exports

All modern code editors and IDEs provide autocompletion functionality. If you are using Visual Studio Code (chances are, you are), then you get some nifty auto-complete functionality out-of-the-box, even if you are not using a superset like TypeScript.

A default export receives no such auto-completion hints because, being a default export, it could be anything: a class, a function, a constant. A named export explicitly tells your code editor what you’re exporting and importing.

Furthermore, default exports make it difficult, if not impossible, for bundlers to tree-shake your code. With a default export, instead of keeping just the code you’re using, the entire file or in some cases an entire npm package gets bundled into your code, adding bloat.
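
A rough sketch of why named exports help, assuming a tree-shaking bundler such as Rollup or Webpack in production mode:

// utils.js
export const used = () => 'kept';
export const unused = () => 'dropped'; // a bundler can safely remove this

// app.js
import { used } from './utils'; // only `used` ends up in the bundle

Had utils.js instead exported a single default object holding both functions, the bundler would have a much harder time proving that unused is dead code.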

There are a plethora of other interesting issues that have arisen for people, further highlighting the reasons for avoiding default exports. Rich Harris succinctly worded it in his response to an issue on the Rollup repository on GitHub in 2016.

We absolutely would have been. Default exports have caused no end of problems. People get desperately confused by all the different forms of import/export declaration – imagine if we could teach people that you either import { names } or * as namespace, and that you can export either names or declarations. As it stands, it feels like there’s a ton of different variations you have to understand.

Plus the confusion that arises over whether default exports are live or not. I’ve spent more time learning about ES modules than anyone should reasonably be expected to, and I had no idea that the situation is as you’ve described. (Marked this issue as a bug, btw.)

And then there’s the interop headaches. Ostensibly, privileged default exports were meant to make adoption easier for a community that’s familiar with Node modules, which is ironic as nonsense like module.exports.default has probably caused more friction than any other aspect of ES modules. I’m sure we could have come up with a better way of importing single-export CommonJS modules. (Though we shouldn’t really call them CommonJS modules – CommonJS modules can only have named exports!)

Unfortunately, we’re stuck with it.

Default exports are lazy

There is no reason to use a default export unless you’re lazy and cannot be bothered taking the extra 5 seconds to add curly braces around your import and make sure your export is named.

There are exceptions when you’re dealing with a third-party package and have no control over how the exports are defined. However, even in that situation, a pull request on the repo of the library you’re using might be worth considering.
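
In the meantime, a small wrapper module can contain the damage by re-exporting the dependency under an explicit name (the some-library package here is hypothetical):

// some-library-wrapper.js
// The dependency only offers a default export, so give it a name once,
// here, and import it by name everywhere else in the codebase.
import SomeLibrary from 'some-library';

export { SomeLibrary };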

There is no legitimate reasoning for default exports, but there is plenty of legitimate reasoning against them. Make your life easier and avoid them altogether.

TAD (Test After Development)

Testing is a crucial part of any modern development process. If you’re not testing your code, you might as well be writing it blindfolded, hoping it works when you push to production, if the company doesn’t go bankrupt from all of the bugs first.

But I am a realist. Being honest, we all start out wanting to build things right. We want to test everything, writing both unit and integration tests to make sure everything works as intended. The company you work for might even allow time for writing tests, until reality hits you on the back of the head.

Priorities change. The higher-ups want the product you’re working on to ship in two months, and there are easily four months of work in Jira; it’s going to be a mission to get it all completed in time.

In my experience, tests are usually the first thing to be removed in the face of more pressing priorities (like actually building the product and making money).

A word on Test Driven Development (TDD)

I have always been a fan of test-driven development. And I have been fortunate to be in a position where I have been able to explore TDD, and when it works, boy does it work. By works, I mean when a company agrees to invest the time into a test-driven development process.

Every project I work on, my first thought is “TDD would be great for this”, but once again, priorities shift, and it becomes hard to justify the short-term investment for the long-term gain that TDD provides. Even if your entire development team wants something, management gets the final word, and it all comes down to money in the end (we’ll talk about that later).

You need tests

In an ideal world, we would all write our tests first and then make our code pass them: write a failing test, make it pass, refactor, repeat. A good test not only helps you design clean code, it also has the added benefit of documenting your code.
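
As a minimal sketch of that rhythm (a Jest-style test; the cartTotal function and file names are invented for the example):

// cart.test.js, written first; it fails until cart.js exists
import { cartTotal } from './cart';

test('totals the line item amounts', () => {
    expect(cartTotal([2, 3, 5])).toBe(10);
});

// cart.js, written second: just enough code to make the test pass
export const cartTotal = (items) => items.reduce((sum, item) => sum + item, 0);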

If TDD is more time consuming and harder to justify to your company, does this mean you give up on writing tests completely? Heck no.

In any medium to large application, tests are as crucial as decent server infrastructure; they should be the oxygen to your app brain. No oxygen and the brain will slowly die.

Inevitably, many developers end up resorting to TAD (Test After Development) because it’s easier and faster (at least initially): write the code first, then go back and write the tests after the fact. It is not ideal, but any tests are better than no tests.

Many would argue that if you get into the habit of TDD, over time, you will get faster at writing tests and code, that the long-term benefits outweigh the short-term caveats. The buy-in from stakeholders is the hard part. If it were easy, everyone would be doing TDD.

The longer you work on a project, the more crucial tests become. As the scope widens and feature set increases, things are more likely to break, and some of your architectural decisions early on are bound to come back and bite you (something that TDD would help you most likely avoid).

The whole point of TDD is that work initially takes longer to complete than it would without TDD, but your code will be more stable and less error-prone.

An experienced surgeon doesn’t rush to cut you open and perform heart surgery right away; they take their time and methodically get the job done. Programming is not heart surgery, but if you’re working on critical systems where functioning, clean code is essential (like a space shuttle or a nuclear power plant), it’s equally important that you get it right.

However, it all comes down to cost

The deciding factor in any decision you make within your company always comes down to money. The long-term benefits of TDD are hard to quantify: if you do your job correctly, the number of bugs and the amount of refactoring work you need to do will be substantially lower (almost zero), but how do you prove that to non-technical higher-ups?

You can’t. It’s all well and good to say we have fewer bugs since adhering to TDD principles, but it is challenging to prove that TDD is the reason for that and not just increased familiarity and skill-level being the main factor.

  • Does it take more time to finish a task? Yes.
  • Will it cost the company more time and money? Yes.
  • Will there be a learning curve for inexperienced developers (especially juniors)? Yes.

Once again, you could argue that writing the code, then writing the tests, then going back to refactor your code and having to fix your tests takes a lot more time with TAD: you’re right. Looking at TDD through neutral-coloured glasses, there are more benefits than downsides if time is not a constraint.

But, the industry is weird. Many managers still measure the value and productivity of developers by the number of lines of code they write and commits they’re pushing up. And what produces the most lines of code and commits? TAD. It’s inefficient, but even non-technical people can see you look busy.

Put Developer A and Developer B side-by-side, with Developer A writing and shipping code faster than the other. Developer A’s code might be laden with errors and horrendous compared to Developer B’s well-architected, clean code, but it works, and if there is one thing managers love more than saving money, it’s shipping code on time.

Conclusion

If you can’t convince the company you work for to give TDD a chance, TAD is still an acceptable, albeit less than ideal, alternative; it beats having no tests at all. As long as you have tests, there is always room for improvement.

What I Love About Aurelia

There is no shortage of Javascript frameworks and libraries to choose from. Notably, you have Angular, React, and Vue, which are the most discussed and used.

If you are a regular reader of my blog, you know how much I love Aurelia and have blogged about it since early 2015. If you are new, let me quickly introduce myself.

I have been a developer for 11 years now, working in the front-end space. My experience stems back to the likes of ExtJS, YUI, Backbone, Knockout, Durandal and Angular v1. Believe it or not, I also used to work with React back in 2014/2015.

I still remember seeing the post on Hacker News announcing the Aurelia alpha release. The date was January 26th, 2015. After reading through the website and goals of the framework as well as being familiar with Rob’s previous work, I gave it a go and never looked back.

Since 2015, I have been working with Aurelia daily in my day job (current job as well as my previous job) as well as side-projects and random ideas; there isn’t a project that I haven’t used it on.

Aurelia promotes strong separation of concerns

When I first started in web development, the one thing ingrained into me from the beginning was: separation of logic and templating, or separation of styles and markup also known as Separation of Concerns (SoC).

Web pages by their natural order promote separation by concern. HTML is used to structure and define the page, CSS is used to style the HTML, and Javascript is responsible for page interaction and logic.

In Aurelia, your business logic gets handled within a view-model, markup is handled inside of a view, and styling is handled inside of a stylesheet. When you think about it, Aurelia doesn’t try replacing what browsers and specs already give you; it enhances them.
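
As a minimal sketch (the component name is illustrative), a greeter component splits cleanly across a view-model and a view:

// greeter.js, the view-model: logic only
export class Greeter {
    name = 'World';
}

<!-- greeter.html, the view: markup only -->
<template>
    <input type="text" value.bind="name">
    <p>Hello, ${name}!</p>
</template>

The stylesheet lives in its own file, just as it would on a plain web page.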

Separation of concerns is a repeated theme throughout the framework. Components, custom attributes and routed views all utilise the same conventions and approach to writing a web application.

Think about how you would develop a web application or page without a framework these days: an HTML file for the markup, a stylesheet for the styling and a Javascript file containing any logic for interacting with the page or dealing with data. No special conventions, just basic concepts that never change. This is what building in Aurelia is like.

I do not want to turn this post into a framework/library bashing contest, but this is one of the things I dislike about React. It reminds me of working with PHP applications (not using a framework) where business logic, styling and markup are all intertwined with one another.

There might be other ways these days with later versions of React, but the basic premise is that you stuff a bunch of XML-like HTML (JSX) inside of a render function which is called by React. You break up your UI into smaller components and use props to pass data through to them. While JSX is not required, it is the most commonly used syntax I have seen for authoring React components.

Let me acknowledge that A) I haven’t worked with React in a few years and B) I am biased because of my use of Aurelia. Please correct me if any of my assumptions and claims about React are inaccurate, and I can amend them.

Batteries included, no glue required

One of my absolute favourite things about Aurelia is the fact it gives you almost everything you need out-of-the-box: powerful templating, APIs, dependency injection and routing.

Then you have the plethora of plugins on offer for other functionality. Want to use Fetch with more beautiful syntax and error handling? Aurelia Fetch Client. Want to localise your application into other languages? Aurelia i18n. Need a configurable modal in your application? Aurelia Dialog.

If you want to add state management to your Aurelia application and don’t want a frontal lobotomy from using something over-engineered and complicated like Redux, there is Aurelia Store, an RxJS-powered state management library compatible with the Redux Dev Tools browser extension.

In my experience working with other frameworks and libraries, some require glueing together a lot of independently maintained dependencies to build what is essentially a framework.

Why waste your time creating a framework piece-by-piece every single time you want to build something? Save yourself the node_modules weight and time; use something that gives you what you need straight away.

Aurelia can be used without needing a bundler or build process

If your experience goes as far back as jQuery, where you drop in a script and start writing code, then the ability to use Aurelia in script form might appeal to you. Thanks to Aurelia Script, you can add in a script tag and start authoring powerful web applications.

Not only is this approach useful for quick prototyping on Codepen or Codesandbox, it means you spend less time battling with complicated build processes in Webpack and fixing complex issues and extension incompatibilities.

If I am honest, not many of us enjoy configuring Webpack, opting to pull out our teeth with a pair of pliers instead.

It has no Virtual DOM

Thanks to other popular frameworks and libraries, the words Virtual DOM have become synonymous in many developers’ minds with fast applications. While the Virtual DOM is impressive from a technical perspective, it’s not the only way to achieve fast-performing web applications.

In Aurelia, the binding system is reactive; this means that when something changes, it gets updated in your view. Because there is no Virtual DOM, you’re working with just the real DOM, and you can use third-party libraries which need to touch the DOM without any problems.

The more you read into what a Virtual DOM is and how it works, the more you realise it’s not necessarily always the most performant choice.

It’s easy for newcomers and all skill level developers

I have seen developers upskill in Aurelia from a multitude of levels. I have seen back-end developers grasp Aurelia in a few days, I’ve seen front-end developers used to working with jQuery or React also get it quickly, and I have seen newcomers grasp it equally as fast.

I think this is a unique selling point for Aurelia; you don’t need a PhD in Javascript to know what is going on. Its conventions-based approach means you spend less time configuring Aurelia and more time writing code.

The Aurelia CLI is fantastic

All useful frameworks and libraries these days have a CLI which allows you to get a starting application up and running with the features and coding styles you need: Aurelia is no exception.

There isn’t anything unique about the Aurelia CLI, but it makes your life a lot easier when you’re building a new project.

Familiar syntax, aligned with standards

In an era where framework and library authors are reinventing the wheel or introducing foreign concepts not found in any specification, Aurelia goes against the grain by leveraging standards-based concepts and APIs.

The templating syntax for displaying a value uses the same interpolation syntax ${myVariable} that you would use inside template literals in your Javascript/TypeScript code. Binding to properties is equally intuitive. Want to bind a value on a form input? <input type="text" value.bind="bindToThis">

If you want to use Web Components with Aurelia, there is a plugin for that, and in the next version, Aurelia will align with the HTML Modules specification to give you single-file components like you might find in Vue, only standards-based.

This is one of the most unique features and selling points of Aurelia. It allows developers to work in a way that is familiar, a way that requires very little buy-in with the intent to eventually allow official specifications to replace features found in Aurelia.

Working With An API In Aurelia Store

Unless you’re working on a completely static web application, chances are you’ll need to get data from an API or send data to one. In Aurelia, you would usually do this using either the HTTP Client or the Fetch Client (you should be using Fetch). However, what about Aurelia Store, the versatile state management library for Aurelia?

You have two approaches you can take when using Aurelia Store with an API, and we’ll discuss the pros and cons of each. This is going to be a small and relatively non-technical article.

Make API requests from within your actions

Aurelia Store supports asynchronous actions, meaning an action can return a promise and the dispatch is not considered complete until that promise resolves. A simple Aurelia Store action might look like this if using async.

async function myAction(state) {
    const newState = { ...state };

    // Do something in here

    return newState;
}

When you return from within an async function, it’s akin to a Promise.resolve: the entire function becomes wrapped in a promise. But we’re not going to go into the specifics of how async/await works in this post.
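
A quick illustration of that wrapping behaviour:

// An async function always returns a promise, even for plain values.
async function giveMeState() {
    return { count: 1 };
}

giveMeState().then(state => console.log(state.count)); // logs: 1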

Making a function async means you can call an API from within your action and then store the result in your state when it loads. This simplified code assumes that you’re accounting for errors inside loadUsers, hence there being no try/catch block below.

async function myAction(state) {
    const newState = { ...state };

    newState.users = await api.loadUsers();

    return newState;
}

This approach works well because it means you can load data through actions, they are easier to debug, and you can perform transformations on the data returned. It also makes refactoring your code so much easier as you only need to change one action.

However, the downside appears if the request above takes longer than expected. Maybe you’re using an Azure or Google Cloud Function (a lambda-style function) that takes a few seconds to start because it’s not warmed up; this request might take a few seconds to complete, and meanwhile your application slows to a crawl. If you’re calling multiple actions, it’ll delay the other actions until the current one returns a state.

If you keep an eye on your actions and ensure the requests are tiny, then this should never be a problem. Adding a visual loading indicator into the application can also alleviate any loading issues. However, assumption-led decision making can sometimes backfire.

API Call First, Dispatch Action Second

In the above approach, the request and the action are one function. In this approach, you request the data first and then dispatch an action with the result.

There is an obvious upside to this approach. Your actions execute fast; they don’t have to wait for API calls to resolve before continuing. Another upside is that it ensures your state layer and API logic remains separate. The maybe not-so-obvious downside is you now have to write and maintain two different pieces of code.

import { dispatchify } from 'aurelia-store';

// The action is pure: it receives already-loaded users and returns new state.
// It assumes myAction has been registered with the store (store.registerAction).
async function myAction(state, users) {
    const newState = { ...state };

    newState.users = users;

    return newState;
}

// The API logic lives in its own function, separate from the state layer.
async function loadUsers() {
    const request = await fetch('/users/all');
    const response = await request.json();

    return response;
}

export class ViewModel {
    async activate() {
        const users = await loadUsers();
        await dispatchify(myAction)(users);
    }
}

But, which one do I use?

The point of this post is not to tell you that you should use one over the other; it is to point out that the two approaches have upsides and downsides. Either one is fine; you only need to be aware of the caveat of async actions: once you introduce asynchronous actions, every dispatch ends up waiting on them.

I use the second approach in my Aurelia applications. I like decoupling my state management from my API logic completely. It means if I want to remove state management at a later date, or use something else, there is no tight coupling in my code: I can remove my actions and call the API functions myself from my view-models (or wherever).

My Thoughts On GitHub Sponsors

It’s hard to believe that it has almost been a year since Microsoft completed its acquisition of GitHub. While a vocal number of people in the community decried the decision and some moved to GitLab, since the acquisition Microsoft has made a series of positive moves.

It all started with GitHub making private repositories free in January 2019, for up to three collaborators. This move directly competed with the Atlassian-owned source control platform Bitbucket, which offered free private repos as well.

And then recently, GitHub announced the GitHub Package Registry, an in-preview rival to npmjs for Node packages, also extending to other languages and tying in with the repository itself.

And most recently, GitHub announced GitHub Sponsors, allowing individual repository owners to be paid for their open source contributions.

While this latest feature has been applauded by many, some see it as an aggressive attempt by Microsoft to use its size and power to push out similar open source sponsorship platforms.

One popular offering for open source is Open Collective; projects like Webpack and Aurelia use it so the community and companies can sponsor and fund development. GitHub Sponsors is initially only available to individuals, but presumably, once it goes public, organisations will be able to opt in to the feature as well.

Another popular option, used by the likes of Vue.js (raking in six figures yearly in donations), is Patreon. These platforms, however, are disconnected from the projects themselves, requiring users to sign up for a separate service.

I see GitHub Sponsors as a brilliant and much-needed move by Microsoft and GitHub. Funding in open source is a constant hot topic, and while other services do exist, they are not integrated with GitHub itself. An in-built way of soliciting monetary contributions, without requiring users to sign up for another service, is a huge win for open source.

Best of all, GitHub is forgoing fees for the first year, only charging them after that. It’s a nice incentive to join and try it out without losing anything. If it works, projects will presumably keep using GitHub Sponsors, and if not, use a different offering.

Will GitHub Sponsors make people all of a sudden start paying for the free software and code they’re benefitting from? Who knows. All I know is this is a void that needed to be filled, one that GitLab and Bitbucket have failed to address (and presumably will copy).

Masked Inputs In Aurelia: The Easy (and reliable) Way

When it comes to adding masked inputs to a modern Javascript web application, it is easier said than done. The task at hand is simple, yet under the surface it is paved with complexity in any framework that controls the flow of data to and from the DOM.

The problem I am going to describe is also a problem you’ll encounter in Angular, Ember, Vue and any other framework or library which offers two-way binding on input elements or otherwise modifies the input itself.

The Problem

You have an input element. You want users to be able to enter a value and automatically format it as they type. This input could be anything, but in my situation, the input was for entering numeric values.

I wanted the value to automatically add a comma for the thousands, append two decimal places and prefix the value with a dollar sign ($).

By default in Aurelia, binding on input elements is two-way. This means the value is updated in both the view and the view-model, which in many cases is great.

As you can imagine, when you want to format an input element automatically, you are instantly fighting with the framework. The problem is that the plugin doing the formatting is modifying the input, and Aurelia itself is also modifying the input.

Why not a value converter?

You can create a value converter (my first attempt) and leverage the toView and fromView methods for mutating the values going to and from the view.

A value converter gets you quite a lot of the way, but one problem you will encounter is the caret position jumping to the end. When the value is modified in the input, the entire input is effectively refreshed and the cursor jumps to the end of the value, which is jarring and annoying.
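
Roughly what that first attempt looked like (a reconstruction for illustration, not my original code):

// currency-format.js: format on the way into the view,
// strip the formatting again on the way back out.
export class CurrencyFormatValueConverter {
    toView(value) {
        const parsed = parseFloat(value);
        return isNaN(parsed) ? value : '$' + parsed.toFixed(2);
    }

    fromView(value) {
        return parseFloat(value.replace(/[$,]/g, ''));
    }
}

Every keystroke runs the value through these methods, and rewriting the input’s value is precisely what throws the caret to the end.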

How about a custom attribute?

My second attempt involved a custom attribute that listened to the input and blur events. I added some checks on the value and attempted to work around the caret position by reading it whenever the input was modified and setting it back afterwards.

Ultimately, I ran into some of the same problems the value converter presented. Getting and setting the caret position is tricky business and something I ideally do not want to maintain long-term; having to work around issues in different browsers is another problem there.

I knew the only solution had to be leveraging an existing input mask library: one which supports a plethora of formatting options, masks, currencies and other types of data and, most importantly, solves the caret jumping problem.

The Solution

I tried a lot of different approaches, referencing implementations not only for Aurelia but for Angular and Vue as well. The common theme in many of these solutions is that they were very complicated. One such plugin I found was over 600 lines of code, many of those lines specifically for getting and setting the caret.

The final solution ended up being laughably simple, in hindsight. I will show you the code and then run through it below. I am using the inputmask plugin which is a battle-tested and highly configurable input masking plugin.

Whatever library you choose to use, the code will end up looking the same if you follow the approach I have taken.

import { inject, customAttribute, DOM } from 'aurelia-framework';

import Inputmask from 'inputmask';

@customAttribute('input-mask')
@inject(DOM.Element)
export class InputMask {
  private element: HTMLInputElement;

  constructor(element: HTMLInputElement) {
    this.element = element;
  }

  attached() {
    const im = new Inputmask({
      greedy: false,
      alias: 'currency',
      radixPoint: '.',
      groupSeparator: ',',
      digits: 2,
      autoGroup: true,
      rightAlign: false,
      prefix: '$'
    });
    
    im.mask(this.element);
  }
}

export class CleanInputMaskValueConverter {
  fromView(val) {
    if (typeof val === 'string' && val.includes('$')) {
      // Strip out $, _ and , as well as whitespace
      // Then parse it as a floating number to account for decimals
      const parsedValue = parseFloat(val.replace('$', '').replace(/_/g, '').replace(/,/g, '').trim());

      // The number is valid return it
      if (!isNaN(parsedValue)) {
        return parsedValue;
      }
    }

    // Our parsed value was not a valid number, just return the passed in value
    return val;
  }
}

export class PrecisionValueConverter {
  toView(val, prefix = null) {
    const parsedValue = parseFloat(val);

    if (!isNaN(parsedValue)) {
      if (prefix) {
        return `${prefix}${parsedValue.toFixed(2)}`;
      } else {
        return parsedValue.toFixed(2);
      }
    }

    return val;
  }
}

The solution, in its simplest terms, is three parts:

  • A custom attribute applied to input elements called input-mask which instantiates the plugin and applies the masking options
  • A value converter which strips away any fragments of the mask; $, _ and ,, trims whitespace and then parses the value using parseFloat
  • A value converter which formats a value passed from a view-model and adds a prefix (if specified) and converts the value to a precision of 2 decimal places

During this exploration phase, I stumbled upon a powerful and awesome feature in Aurelia that I did not even know was possible: multiple value bindings. You will notice below that I have a value.one-time as well as a value.from-view binding on the input.

The reason for this is that I wanted to set the value initially and then not worry about syncing it again, which allows data loaded from the server (initial values, etc.) to be passed in. The value.from-view binding updates our view-model every time the value changes.

<template>
    <input 
        type="text"
        value.one-time="value | precision" 
        value.from-view="value | cleanInputMask"
        input-mask>
</template>

It makes a lot of sense that you can do this, but it’s not a documented feature, and initially I wasn’t 100% confident it would work. I am an experimenter, so the worst-case scenario was that it wouldn’t work. This is what I love about Aurelia: it can handle anything you throw at it, even things you think might not work but end up working.

Basically, this approach creates an inline binding behaviour where you control the direction of the updating process, which is quite powerful and cool.

A great addition to Aurelia itself could be a binding attribute which offers this functionality (a once-to-view and always-from-view mode).

Conclusion & Demo

The end result can be seen here; as you can see, we have a nicely formatted as-you-type input element. This was built for a specific use-case, so it is missing configuration options; however, I have created a repository here which will have a plugin-friendly version of this soon.

Saved By The Aurelia Portal Attribute

Recently at my day job, I encountered a very specific scenario that I wrestled with for quite a while. I had a routed set of views using a layout view template, because the design needed very specific markup for positioning with CSS Grid.

The issue I had was that although the route layout had a <slot></slot> element inside of it for projecting the routes, I wanted a custom navigation element to be projected inside of the routed view. Previously, there was a bit of duplication to add the custom element into the right area.

I initially tried to use router View Ports to achieve the result I needed, but they’re more designed for rendering sub-views within a view; I needed to render a component into a specific area of the page and have it react to route change events.

Then I remembered the Portal attribute by core team member Binh Vo; it has actually been around for a while now, but I hadn’t needed it until now. In my situation, I needed to leverage the target property of the attribute to tell it where to inject my custom element, and everything worked as expected.

My layout view, simplified, looks like this:

<section>
    <main-nav></main-nav>
    <inner-nav portal="target: #nav-slot;"></inner-nav>
    <slot></slot>
</section>

For each routed view, the simplified markup looks like this:

<template>
  <main>
    <div class="dashboard-wrapper">
      <div class="inner-content">
        <section>
          <div id="nav-slot">
            <div class="content">
                <h1>Some content</h1>
            </div>
          </div>
        </section>
      </div>
    </div>
  </main>
</template>

When this routed view is loaded, the portal attribute injects the element from the router layout into the DIV with the ID nav-slot. Super simple stuff, and it does exactly what I needed it to do. The portal plugin can be found on GitHub here, and it’s great to see how it all functions behind-the-scenes. A demo of the plugin in action can also be found here.

Creating Your Own Javascript Decorators in Aurelia

Decorators are currently a stage 2 proposal in Javascript, and they allow you to decorate classes, class properties and methods. If you have worked with Aurelia for longer than 5 minutes, or with other frameworks such as Angular, you will already be familiar with them.

At its core, a decorator is a function that returns a function, wrapping the context of wherever it is applied. Decorators allow you to add new properties/methods to a class or change existing behaviours.

Aurelia already utilises decorators quite extensively, but it does not rely on them for everyday use. The existence of decorators allows you to mark your applications up using convenient shorthand.

It is important to remember that decorators are a Javascript language construct and not specific to any framework like Aurelia. Given that Aurelia leverages ES2015 classes, we can create and use our own decorators without changing anything.

Decorator Types

When we talk about decorators, there are three categories: a class decorator (decorates a class), a class field decorator (inline properties on classes) and a method decorator (a function, either standalone or within a class). You can even create a decorator that combines all three types into one. For the purposes of this article, we will only be focusing on class decorators.

Creating a class decorator

If you have used Aurelia a little bit, decorators such as @inject, @singleton and @customElement might be familiar to you already. These are examples of class decorators.

The inject decorator is used in the following manner:

import {inject} from 'aurelia-framework';
import {Router} from 'aurelia-router';

@inject(Router)
export class MyClass {
    ...
}

If you look at the actual implementation of the inject decorator in the Aurelia codebase here, you will notice that a class property called inject is being assigned on the class.

Because decorators in Aurelia are optional and nothing more than convenient shortcuts, you can write the above like this:

import {Router} from 'aurelia-router';

export class MyClass {
    static inject = [Router];
}

Let’s create a simple decorator that adds a boolean property called isAwesome to our class, which gets set on the class itself.

@isAwesome(true)
export class MyClass {

}

function isAwesome(val) {
    return function(target) {
        target.isAwesome = val;
    }
}

In the context of this decorator, target is the class we are using the decorator on and gives us access to the class itself, including the prototype object.

Creating an Aurelia decorator

The above decorator is quite simple. It adds a property to our class and does not do anything exciting. Now, we are going to be creating a decorator which shorthands the Aurelia router lifecycle method canActivate.

function canActivate(val, redirectTo = '') {
    return function(target) {
        target.prototype.canActivate = () => {

            if (!val) {
                if (redirectTo === '') {
                    window.location.href = '/404';
                } else {
                    window.location.href = redirectTo;
                }
            }

            return val;
        };
    };
}

While this is a very rough decorator that would need some improvement before serving a genuine purpose, it showcases how you can modify a class, even adding new methods to the prototype with ease.

Now, let’s clean it up and make it more useful and delightful to look at using some nice new Javascript syntax.

const canActivate = (resolve) => {
    return (target) => {
        target.prototype.canActivate = () => {
            return resolve();
        };
    };
};

Now we pass through a function, which gives us more flexibility in what we can pass through. Still, it could be a lot more useful. What if we wanted to access the current route or the passed-in parameters, like we can with a method defined on the class?

const canActivate = (resolve) => {
    return (target) => {
        target.prototype.canActivate = (params, routeConfig, navigationInstruction) => {
            return resolve(params, routeConfig, navigationInstruction);
        };
    };
};

Okay, now we have a decorator which accepts a callback function and has access to the route parameters, the routeConfig and current navigation instruction.

To use our new fancy decorator, we pass through a function which needs to return a value (boolean, redirect):

@canActivate((params, routeConfig, navigationInstruction) => {
    return !(routeConfig.auth);
})
export class MyClass {

}

Applying this decorator will prevent the class from being activated if the route has an auth property of true. We invert the boolean check when we return it.

Just when you thought we couldn’t improve our decorator any more, you might have noticed there is a flaw in what we have created. It always assumes that our callback returns something. If it doesn’t, we’ll stop the view-model from executing.

Let’s tweak our decorator slightly:

const canActivate = (resolve) => {
    return (target) => {
        target.prototype.canActivate = (params, routeConfig, navigationInstruction) => {
            let resolveCall = resolve(params, routeConfig, navigationInstruction);

            return typeof resolveCall !== 'undefined' ? resolveCall : true;
        };
    };
};

We still require a callback function, but if the developer doesn’t return anything from it, we’ll return true to ensure the view-model executes fine.

If you wanted to redirect all visitors to a different route if it’s an authenticated route, you could do something like this:

import {Redirect} from 'aurelia-router';

@canActivate((params, routeConfig, navigationInstruction) => {
    return (routeConfig.auth) ? new Redirect('/robots') : true;
})
export class MyViewModel {

}

You would want to add some additional logic to check if a user is logged in and then redirect accordingly; our example does not accommodate this. We have just showcased how easy it is to write a decorator which can leverage existing methods and properties, as well as define new ones.

Creating a decorator that sets the title based on the current route

When it comes to setting the title of the page based on the current route, it can be confusing for newcomers. So, using what we have learned, we are going to create a decorator which allows us to set the title from within a view-model in an Aurelia application.

We want to create a class decorator which hooks into the activate method and accesses the routeConfig, which contains the current navigation model and a method for setting the title.

Our decorator should support passing in a function which returns a string value or the ability to pass in a string directly.

const setTitle = (callbackOrString) => {
    return (target) => {
        ((originalMethod) => {
            target.prototype.activate = (params, routeConfig, navigationInstruction) => {
                // Accept either a callback (given the same arguments as activate) or a plain string
                const newTitle = typeof callbackOrString === 'function'
                    ? callbackOrString(params, routeConfig, navigationInstruction)
                    : callbackOrString;

                if (newTitle) {
                    routeConfig.navModel.setTitle(newTitle);
                }

                // Call any activate method the view-model originally defined
                if (originalMethod !== null) {
                    originalMethod(params, routeConfig, navigationInstruction);
                }
            };
        })(target.prototype.activate || null);
    };
};

One important thing to note is our callback function (if one is passed in) gets the same arguments that the activate method does. This means we get parameters, the current route configuration and navigation instruction.

Using it

Assuming you have imported the decorator above or it exists in the same file as your view-model, let’s show how this decorator is used.

Callback function

We pass in a function which returns a string. This scenario might be helpful if you want to determine if the current user is logged in and display their name in the title.

@setTitle((params, routeConfig, navigationInstruction) => { return 'My Title'; }) 
export class MyViewModel {

}

String value

Passing in a string value will just set the page title to this string, without the use of callback functions. Useful for pages which don’t need dynamic titles.

@setTitle('My Title') 
export class MyViewModel {

}

Conclusion

In this article, we didn’t even cover the full scope of what you can do with decorators; we have only just scratched the surface. In future articles, we might explore other types of Javascript decorators. All you need to remember is that decorators are functions which “decorate” whatever they’re applied to.

As a further exercise, maybe think of other things you can write decorators for, to save you some time.

GitKraken “Could not find a compatible repository” Error Fix

I recently encountered an error in GitKraken after a bad merge, when trying to merge in some changes from the main development branch while I had quite a few local changes that GitKraken usually automatically stashes for me.

My problem was that I was using Bash on Ubuntu on Windows, which has a nasty habit of locking files. The merge and stashing seemed to fail because some files were deleted in the changes I was attempting to merge in.

I tried closing and reopening GitKraken, but it was clear that GitKraken wasn’t going to let me open up that repo again.

The fix

I realise this is a bit of a nuclear fix, but you’ll need to open up PowerShell. For me, it was simply a matter of navigating to the project directory and running git reset --hard, which discards local changes. If you need to keep your changes, the repo is still perfectly interactable from the command line, so you can rescue them there first.

As far as I could see with everything I tried, GitKraken won’t ever fix itself; the command line is the only solution. Once I ran the above and opened up GitKraken, it worked just fine again, as if nothing had happened.