Announcing The Aurelia 2 Book

Buy the Aurelia 2 book here.

With the Aurelia 2 alpha coming very shortly, I have had plans for a while to write another Aurelia book, this time around on Aurelia 2. I learned a lot writing my first book and admittedly, made a few mistakes. The learning experience was invaluable.

My first book came at a time when the Aurelia documentation was subpar, and it served as a stand-in for the lack of detailed and concise documentation. With Aurelia 2, extensive documentation work has been undertaken, to the point where a book explaining every little thing makes no sense.

The Aurelia 2 documentation, which is updated regularly, can be found here.

This time around, I wanted to write a book that is different from the first, something fun. So, I set out one night to start writing the Aurelia 2 book, not really having a clear goal in mind and after a couple of days, I had one.

The book will touch upon the fundamentals without parroting the documentation too much; it will support TypeScript, and you will build an application using Aurelia 2 to get acquainted. The app you will be building is an online cat pictures store, where you can buy pictures of cats.

What You’ll Learn

  • How to generate a new Aurelia 2 app from scratch
  • How to configure Aurelia 2, globalising resources and including plugins
  • How to work with the new direct router
  • How to structure your Aurelia apps
  • How to adopt a component-driven mindset to developing complex UIs
  • How to leverage new Aurelia 2 features and APIs
  • How to write unit tests using Jest as well as mocks
  • How to test Aurelia 2 components, staging them, querying HTML
  • How to write end-to-end tests with Cypress

The book aims to go beyond being solely about Aurelia 2, giving you practical skills.

What You’ll Build

An online store for selling cat pictures. It’ll have a homepage, a category page, a detail page, a checkout screen and logged in/logged out functionality as well. An accompanying local server running SQLite will allow your app to feel real, as things are persisted.

The application will touch upon how you can add authentication, create routes, secure sections of your app and other fundamentals that will translate across to actual real-world applications.

By the end of the book you’ll have a semi-real online store driven by Aurelia on the front-end, complete with tests and all. The idea is you’ll learn Aurelia 2 by building an application piece-by-piece.

A Work In Progress

Please be aware that the book structure is still being finalised, with chapters being added and moved around. There will be typos, mistakes in the code and possibly even things that change in Aurelia 2 itself which get removed from or added to the book.

As Aurelia 2 evolves in development, so too, will the book. For the entire life of Aurelia 2, the book will be updated in step to remain an up-to-date resource. You only purchase once and all future updates to the book are free.

Buy the Aurelia 2 book here.

Protected User Uploadable Files With Firebase Storage

Recently, while building an application on Firebase and implementing Firebase Storage, I realised that I always have to consult old Firebase projects I have built or scour the web for blogs and documentation that explain not only how to upload to Firebase Storage but also how to write rules to protect those uploads.

My use-case is as follows:

  • Users can create listings
  • A listing can have one or more images
  • An image is uploaded to Firebase Storage in the form of listing-images/userId/filename.extension
  • I only want to allow users to write to this folder if they are the currently logged-in user. This folder becomes a bucket for all of the user’s listing images.
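The path convention described above can be captured in a small helper. This is a sketch of my own; the function name is not part of any SDK:

```javascript
// Hypothetical helper that builds the storage path described above:
// listing-images/userId/filename.extension
function listingImagePath(userId, filename) {
    return `listing-images/${userId}/${filename}`;
}

console.log(listingImagePath('abc123', 'cat.jpg')); // listing-images/abc123/cat.jpg
```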

Uploading Files

const storage = firebase.storage().ref(`listing-images/${this.auth.currentUser.uid}/${image.name}`); // using the file's own name here
const upload = storage.put(image);

upload.on('state_changed',
    // Uploading...
    (snapshot) => {
        this.frontImageUploading = true;
    },
    // Error
    () => {
        this.frontImageUploading = false;
    },
    // Complete
    () => {
        this.frontImageUploading = false;
    }
);

Uploading files with Firebase’s JavaScript SDK could not be any easier. You create a reference to where you want to store a file and then you call the put method, providing a file to upload.

We then listen to the state_changed event, which can take three callback functions. The first is called as the file uploads, the second is the error callback and the third is called when the file has finished uploading.
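The snapshot passed to the first callback exposes bytesTransferred and totalBytes, which you can use to display a progress percentage. A minimal sketch (the helper name is my own):

```javascript
// Compute a whole-number upload percentage from a snapshot-like object
// exposing bytesTransferred and totalBytes.
function uploadProgress(snapshot) {
    return Math.round((snapshot.bytesTransferred / snapshot.totalBytes) * 100);
}

console.log(uploadProgress({ bytesTransferred: 512, totalBytes: 2048 })); // 25
```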

Firebase Storage Rules

My storage.rules file ended up looking like this. Your use-case might differ, but the premise is the same.

I want to allow all reads as these images are public, so I write an allow read rule with if true as the condition.

For image writes, I first check that the currently logged-in user matches the userId provided in the image path. I then check that the size of the image is less than 5 MB and that its content type is an image.

rules_version = '2';

service firebase.storage {
  match /b/{bucket}/o {
    match /listing-images/{userId}/{allPaths=**} {
      allow read: if true;
      allow write: if request.auth.uid == userId
                   && request.resource.size < 5 * 1024 * 1024
                   && request.resource.contentType.matches('image/.*');
    }
  }
}

This allows users to only be able to upload to their specific folder inside of listing-images. Anyone can read it, but only the logged-in user can upload here. It’s simple and it works.
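You can also mirror these checks on the client so users get immediate feedback before an upload is rejected. This is purely a convenience; the server-side rules remain the real gatekeeper. A sketch, assuming a browser File object with size and type properties:

```javascript
// Pre-validate a file against the same limits as the storage rules:
// under 5 MB and an image content type.
function isValidListingImage(file) {
    const MAX_BYTES = 5 * 1024 * 1024; // 5 MB, matching the rule
    return file.size < MAX_BYTES && /^image\//.test(file.type);
}

console.log(isValidListingImage({ size: 1024, type: 'image/png' })); // true
console.log(isValidListingImage({ size: 6 * 1024 * 1024, type: 'image/png' })); // false
```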

Accessing The Uploaded File

Once the file has successfully uploaded, you most likely want to access some information about it like where it uploaded and so on. We can use the reference we created to get that information.

const storage = firebase.storage().ref(`listing-images/${this.auth.currentUser.uid}/${image.name}`); // using the file's own name here
const upload = storage.put(image);

upload.on('state_changed',
    // Uploading...
    (snapshot) => {
        this.frontImageUploading = true;
    },
    // Error
    () => {
        this.frontImageUploading = false;
    },
    // Complete
    async () => {
        this.frontImageUploading = false;

        const meta = await storage.getMetadata();
    }
);

We made the success callback async and then awaited the getMetadata method, which is called on the ref itself. This gives us information like which bucket the file is in, the fullPath, its md5Hash, size and other useful values.

If you want to generate a URL string for the file which can be used in the browser, you can call getDownloadURL like so.

const url = await storage.getDownloadURL();

How To Convert An Object To An Array In Vanilla Javascript

I do quite a lot of work with Firebase and when you are working with authentication claims, they will be returned as an object containing your simple values (usually booleans).

Thankfully, since ES2015 landed, each Javascript release has introduced a new and easier way to work with objects and convert them into arrays. You don’t need any libraries like Lodash, this is all native and well-supported vanilla Javascript.

To convert an object of properties and values, you can use Object.entries. However, it will return an array of arrays, which isn’t terrible but might confuse Javascript novices as to how you would even work with it.

const claims = {
    admin: true,
    superAdmin: false
};

const claimsArray = Object.entries(claims);

Now, if you were to console.log claimsArray from above, this is what you would get.

[ [ 'admin', true ], [ 'superAdmin', false ] ]
To work with this nested array, you can simply use a for..of loop like this:

for (const [key, value] of claimsArray) {
    console.log(key, value);
}
The key is the object property name and the value is the value.
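Those key/value pairs also combine nicely with array methods. For example, here is one way you might extract just the claim names that are set to true:

```javascript
const claims = {
    admin: true,
    superAdmin: false
};

// Keep only entries whose value is true, then map back to the key names.
const activeClaims = Object.entries(claims)
    .filter(([, value]) => value === true)
    .map(([key]) => key);

console.log(activeClaims); // [ 'admin' ]
```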

It’s amazing how easy modern Javascript makes doing things like these. To think only a few short years ago we were still using jQuery, supporting old versions of Internet Explorer and crying out for a better web.

How To Store Users In Firestore Using Firebase Authentication

As much as I love Firebase, especially its easy-to-implement authentication, some things can be a bit confusing when you go to implement them.

For Firebase Authentication, sadly, you cannot store additional information against authenticated users and easily query it. You can leverage custom claims to add small pieces of metadata to a user (like roles), but for things such as profile data, you can’t.

Fortunately, there is a solution you can easily implement using Firebase Functions and triggers.

The Workflow

A user signs up for your application using Firebase Authentication. At the same time, you want to create a new document inside of Firestore for that user and store some of their user information, like their uid, as well as any custom claims, display name and so on.

This will then allow you to query Firestore for any additional data for this user. It might be fields like; date of birth, country, first and last name, their likes, a bio, anything.

Firebase Triggers

Inside of Firebase Functions, you can add a trigger for onCreate which will get called when a user creates an account in your application. This fires whether the account is created via social OAuth (Google, GitHub, etc.), email/password or another sign-up method.

export const createUser = functions.auth.user().onCreate((user) => {
    const { uid, displayName, email } = user;

    return admin.firestore()
        .collection('users')
        .doc(uid)
        .set({ uid, displayName, email });
});

The code is fairly easy to understand. When a new user is created, we pull out their uid as well as their displayName (if they signed up via social OAuth) and email. The technique here is creating a new user document, using their uid as its unique identifier (document name).

Something to keep in mind is this trigger will only fire once. You will have to completely delete the user and force them to sign up again if you want it to trigger again.

Cleaning Up

If a user deletes their account, you also want to make sure that you use the onDelete trigger to remove the user from your database.

export const deleteUser = functions.auth.user().onDelete((user) => {
    return admin.firestore()
        .collection('users')
        .doc(user.uid)
        .delete();
});

Defining Rules

When you are working with Firestore, it is important to properly create rules to prevent potential security issues in your application. You don’t want just anyone being able to read your users database and leaking sensitive information.

Because Firestore rules allow you to define helper functions, we are going to create two: isAuth and isUser. The first will check if a user is logged in and the second will allow us to pass in a uid and compare it with the currently logged-in user.

rules_version = '2';

function isAuth() {
    return request.auth != null && request.auth.uid != null;
}

function isUser(uid) {
    return request.auth.uid == uid;
}

service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if false;
    }

    match /users/{userId} {
        allow read: if isUser(userId);
        allow create: if isAuth();
        allow update: if isUser(userId);
        allow delete: if isUser(userId);
    }
  }
}

We only want the logged in user to be able to read their own document. For creating new users, a user has to be logged in. For updates and deletes, we want to verify the logged in user matches the document.

With that, you now have a functional auto-workflow that will create new user documents in Firestore whenever someone signs up to your application and the data will be protected.

Expanding this out, you would add in additional checks to also allow admins to view and manage users as well, but that goes beyond the scope of this simple article.

Lo-fi Music Is The Perfect Genre To Work & Study To

Whenever I am working, I listen to a wide variety of music genres. My dominant genre is metal and other derivatives of heavy music. I also listen to blues, as well as rap/hip-hop and instrumental music.

One of my more recent genre additions is Lo-fi. Admittedly, I am late to the party on the Lo-fi music genre, but it has been a game-changer for me and how I work. It’s not a genre of music I would have listened to a couple of years ago.

To me, Lo-fi is like a modernised version of elevator music (only more innovative and not shitty), combined with other elements which make it the perfect background music that sits somewhere in the back of your mind and doesn’t distract you like other forms of music do.

Interest in Lo-fi music has exploded in the last five years. As more people find themselves working from home and looking for ways to relax and concentrate on work, 2020 was a big year for Lo-fi music.

While platforms like Soundcloud have increasingly seen more Lo-fi type tunes added, my go-to platform for Lo-fi is, surprisingly, YouTube. People were uploading Lo-fi to YouTube five years ago, before it was even really that popular.

Perhaps one of the most popular mixes on YouTube is the Rainy Days In Tokyo mix.

It was uploaded in 2017, but the majority of comments are far more recent than that, which is pretty indicative of recent interest. I would be interested to know how many of those 11 million views came in the first year or two after upload (probably not many).

Another popular Lo-fi mix on YouTube is this one titled, 1 A.M Study Session.

And perhaps one of my favourites is this live-stream YouTube mix; it just perpetually keeps playing inoffensive Lo-fi tunes. Whenever I do code live streaming on Twitch (follow me here), this is my go-to video for background music, as playing other forms of music usually results in my videos being muted for copyright matches.

The one thing I find impressive about the video above is that it is relatively new in comparison to other videos, yet it has 2.1 million thumbs up and only 43k thumbs down. That is an insanely good ratio of likes to downvotes. Word of advice: avoid the comments section, it’s a cesspool of the worst the internet has to offer.

To me, the appeal of Lo-fi is how bland it is. When you’re listening to it, you’re not being impressed by any stand out elements. Lo-fi is largely vanilla, it’s the John Smith and Plain Jane of the music world. But, like white noise, it perfectly taps into that part of our brains that craves repetition and calm.

Do you listen to Lo-fi?

How To Make Face Masks

In these uncertain pandemic times, seeing mask shortages and other shortages has our family thinking about self-sustainability. What can we rely on when supply chains fail? Not just food, but things like clothing and the hot topic right now: masks.

A decent mask requires three layers of protection and material. You have the outer layer, the middle layer and then the inner layer. The ear loops are the final step, but they don’t offer protection.

As mask shortages became an urgent topic, the likes of the World Health Organisation as well as numerous governments released guides on how you can make your own face masks during the pandemic.

One of the guides that we found helpful, was one on the Australian Department of Health website here. If that link doesn’t work for you, let me know and I’ll send you the PDF.

While it is possible to make a mask by hand sewing it, I highly recommend getting yourself a sewing machine to not only ensure a neater sew line, but to also ensure the pieces of fabric are correctly joined. The effectiveness of your mask will depend on how well those three layers are joined to one another.

It is surprisingly easy to make your own mask, so if you can’t find one at the shops, making one of your own will be more effective than not wearing a mask whatsoever.

Why Remote Work Is Better

I think many would agree that 2020 has been a terrible year on multiple fronts. One of the biggest dampers on 2020 was COVID-19, which has changed how we live, interact, work and go about our daily lives.

Perhaps the biggest positive to come out of COVID-19 is the remote work revolution that was forced upon everyone. Some of us were ready for it, either already working remotely or wanting to. Companies had a choice: allow their employees to work remotely or not work at all.

You’ll find articles arguing for both sides. However, there are more upsides than downsides for remote work, which makes it the clear choice for those who want it.

Saving Money

The biggest benefit of remote working is how much money you will save. It’s not until you’re no longer in a conventional office environment that you realise you spend more than you know on incidentals like coffee, lunches and other little pocket picking expenses. Then you have transport costs (public transport, fuel) as well as things like parking, it all adds up.

And it’s not only employees saving money, it’s the businesses themselves. If you require less office space, you are saving on leasing fees, utilities like electricity and internet, in-office server space and redundancy solutions. Reducing overheads, especially in uncertain times is a win-win for everyone.

Reduced Distractions

Offices are distracting. It doesn’t matter if you’re working in an open-plan office or you have your own cubicle, distractions in the office are rampant and they will always find a way to hunt you down.

Either someone is constantly coming up to your desk (even with noise cancelling headphones on) and asking you for something, or loud phone calls and conversations are embedding themselves into the deepest parts of your ears.

Impromptu meetings are also another distraction that will derail productivity faster than an air raid siren in a library.

Remote Teams Get More Shit Done

More flexibility leads to better productivity. Combined with reduced distractions, remote teams are less distracted and more productive: remote teams get more shit done.

A traditional office is a 9-5 thing, some places offer “flexible” working hours, but what they really mean is, a majority of your office working hours still need to align with everyone else. Instead of starting at 9, some places might let you start at 10 instead.

Everyone, especially in the development/programming industries works on different time cycles. I do some of my best work in the afternoons, so I tend to start later. Some do their best work early in the morning.

We Were Already “Sort of” Working Remotely

Step into any tech office of developers and designers prior to the pandemic and the first thing you would have noticed is everyone is wearing noise cancelling headphones (a large majority anyway). When you realise that we were already working remotely, your perspective changes.

If you are in an office and you’re talking to John who is a few meters away from you over Slack instead of an in-person conversation, that’s basically working remotely in the office.

Many of my interactions before working remotely were done over Slack. In-person conversations in open offices are kind of frowned upon because of their destructive impact on those around them.

A Wider Talent Pool

Here is the often understated benefit of remote working, you’re not limited to your local city to find new talent. Hiring can be hard at the best of times, finding someone suitable for the position is tricky business. With remote work, you can hire someone from the other side of the world.

This works both ways, once again. Companies have a wider hiring pool, but job seekers also are afforded the same benefit. This means you no longer have to pin your hopes on somewhere local, spin the globe and throw a dart and see where it lands.

Employees Are Just Happier

Whenever I would tell people I spend the majority of my working week working from home (this was prior to the pandemic), people were in awe and surprised. While many companies have been slowly adapting and offering some form of remote work, many do not.

Even as some parts of the world return to normal, companies have been itching to get people back into the office as quickly as they can. This partly comes down to trust issues, with management and bosses too scared to trust their employees to work remotely.

It’s a good feeling knowing that the company you work for trusts you to do work even when you’re not in an office, and that they afford you the flexibility to choose your start and finish times and get your work done. Trust makes everyone happy.

When you work remotely, you get the sense you’ve got the best job in the world.

Everyone Should Get A Choice

Some people don’t want to work remotely, or don’t want to work remotely all of the time, unless they’re knowingly applying to a 100% remote company. Everyone should get a choice. Companies that are not already permanently remote should offer employees the option of an office or remote work.

How To Make Slime

COVID-19 has changed how we live and how we work, and it has also changed how we parent. Parents have been thrown into the unknown as schools close or as they pull their kids out over fears of bringing the virus home.

My wife and I have two children; a five-year-old boy and a one-and-a-half-year-old girl. Keeping our active son entertained during moments of quarantine has been very challenging, to put it mildly.

One thing our son loves to do is make slime. It’s one of those simple things that kids love and the slime itself once made will keep them entertained for days.

The basics of slime are simple: glue and borax, with some water and food colouring mixed in.

In terms of glue, you want a PVA-based glue, and one of the best and most easily accessible is Elmer’s School Glue. Officeworks has a 3.8l bottle of it for $30 here.

  1. Pour about 120 millilitres of the glue into a bowl.
  2. Pour in 120 millilitres of warm water. If you want to add some drops of food dye, add them in here.
  3. Mix 1 teaspoon of Borax with 1/2 cup of warm water and dissolve. Pour it into the glue.

Mix it all together thoroughly and you should see the slime start to form. That’s it. It’s simple and effective, the kids love it and it’s not overly messy either.

GitHub Was Never About Fun

A few days ago I came across an article by Jared Palmer titled GitHub isn’t fun anymore. Besides the somewhat clickbait-y title, he talks about the changes that GitHub has made to the trending section and how GitHub doesn’t feel fun any more.

Sure, the trending page is a cool little gimmick where you can see popular repositories (or used to be able to), but GitHub was never about fun or non-code features. GitHub is a tool.

Since Microsoft acquired GitHub they have introduced a lot of great new features, one of which I find extremely useful is GitHub Actions. The code review workflow is awesome, protected branches, free private repositories and more.

Why does everything have to be gamified? I am 32 and part of a generation that has a short attention span and an inability to do mundane tasks. Like children, my generation seemingly needs instant gratification, karma, scoreboards, points and other features to keep us engaged.

The way that trending used to work was too easily gamed and did not necessarily reflect the quality of the repos. I am glad they changed it; the current approach is probably more indicative of popularity than the previous one.

If you think GitHub isn’t fun, you should try Bitbucket; it is terrible and literally the worst source management platform around. You’ll know what funless really feels like using Bitbucket, where projects go to die.

How To Deploy Aurelia 2 Apps To GitHub Pages (gh-pages)

You have yourself an Aurelia app (or you will soon), and you want to host it on GitHub Pages because GitHub provides a generous free hosting solution that gets powered from the Git repository itself.

Fortunately, the process couldn’t be more straightforward. A lot of this post will apply to other frameworks and libraries besides Aurelia 2. However, we will be focusing on Aurelia 2 only.

This article assumes the following:

  • You already have a Git repository for your Aurelia project
  • You are using GitHub to host your Git repository

Create a new branch

We need to create a new branch called gh-pages which GitHub will load our site from. If you use the command line, first run git branch gh-pages followed by git checkout gh-pages to switch to the gh-pages branch. If you’re using a GUI, follow the appropriate steps to create a new branch and switch to it.

Modify .gitignore

If you used the recommended way to initialise a new Aurelia 2 application, your .gitignore file by default will ignore the dist folder where your project files are built.

Inside of this file remove the entry ignoring dist and save it. Now commit and push your changes to the repository.

Build & Deploy

Building your Aurelia 2 application is a simple matter of running npm run build (or appropriate build command) which will then build the files into the dist directory.

Once the build has completed, all you need to do is push the changes to the dist folder up to the gh-pages branch and GitHub will serve them at https://github-username.github.io/my-repo/, where github-username is your GitHub username and my-repo is the name of your repository on GitHub.

To push the contents of the dist folder run the following: git subtree push --prefix dist origin gh-pages and visit your site to see it in its deployed glory.

Fixing The Certbot Issue “The client lacks sufficient authorization/404 Not Found…”

I am a huge fan of Let’s Encrypt and their free SSL certificate service using Certbot. However, recently whilst setting up a new domain name and attempting to get a certificate, I encountered an error I had never experienced before.

The client lacks sufficient authorization :: The key authorization file from the server did not match this challenge

Certbot couldn’t access the folder where it stored the challenge files, resulting in a 404 error. I manually created the folder and could access it myself, so why Certbot couldn’t was a mystery.

After some investigation and dead-end Googling, I found the problem and fixed it. I use Linode for my hosting and use the default DNS entries option when adding a new domain.

Well, it turns out by default Linode will add IPv6 AAAA entries to the server and if you do not have Nginx configured to handle IPv6, it will not resolve properly.

The culprit was a second AAAA entry for the domain with the value 2400:8902::f03c:91ff:fe59:f74c. This is an IPv6 address and, unless you have your server configured to support it, it’ll result in an error when trying to create an SSL certificate.

The fix ends up being rather simple: either update your server to support IPv6 addresses or remove the AAAA entries from your DNS settings, and make sure you wait a good 10-20 minutes for the DNS changes to propagate before trying again.
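If you go the IPv6 route, the key is adding an IPv6 listen directive to your Nginx server block alongside the IPv4 one. A minimal sketch, with example.com standing in for your domain:

```nginx
# Listen on both IPv4 and IPv6 so the ACME challenge resolves either way.
server {
    listen 80;
    listen [::]:80;

    server_name example.com;

    root /var/www/example;
}
```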