As Alfred Pennyworth once profoundly said in The Dark Knight:
Some front-end developers just want to watch the world burn.
Alfred Pennyworth, The Dark Knight
As developers we are constantly learning, always growing and, whether we realise it at the time or not, always making mistakes. Sometimes we make small mistakes that pile on top of one another, which can result in some interesting consequences for our application's performance.
Here are a few tips, most of which you might already have read elsewhere, on how to write performant JavaScript, along with some prompts to think about what you might be doing in your applications.
Getting off the jQuery pony
I’ll admit it. If jQuery is installed on a site I am working on, I have been known to use it for a few things purely for the sake of convenience. It isn’t that I suck at writing JavaScript (debatable); for a lot of things jQuery is a time-saving library (especially for AJAX requests), but using it for everything can actually hamper your application’s performance quite a bit.
Stay home and use jQuery for everything, Kip
Napoleon Dynamite
Are you guilty of doing things like this for the sake of convenience?
$('#somediv').hide();
instead of element.style.display = 'none';
$('#somediv').show();
instead of element.style.display = '';
$('#myinput').val();
instead of element.value
Don’t feel bad; we have all been there. Surprisingly, IE9+ has a lot of native implementations for working with the DOM in a similar fashion to jQuery. Cleanse yourself of jQuery for simple tasks and you will notice performance gains (in a large application especially).
I am not saying you should abandon jQuery entirely, but you shouldn’t use it for everything. Simple showing and hiding tasks, even binding to events and changing classes can be done easily using native API methods without the added overhead of jQuery.
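To make that concrete, here is a sketch of native stand-ins for a few common jQuery one-liners. The helper names are made up for illustration, and they take the element as a parameter so they work on anything element-shaped; note that classList needs IE10+, so check support if old browsers matter to you.

```javascript
// Native stand-ins for common jQuery one-liners. `el` is any DOM element
// (or anything with the same properties, which makes these easy to test).
function hide(el) { el.style.display = 'none'; }
function show(el) { el.style.display = ''; }
function toggleClass(el, name) { el.classList.toggle(name); } // IE10+
function getValue(input) { return input.value; }
```

In the browser you would pair these with a lookup such as document.getElementById('somediv') or document.querySelector('#somediv').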
I might sound hypocritical, but I am conscious of using native methods (where benchmarks can prove they are more performant) over solutions in jQuery. The team working on jQuery is incredibly talented, so some of the methods in jQuery can actually be faster than native implementations, but not always.
The benefit of jQuery, besides allowing you to perform tasks that might take multiple lines of code in one line (like an AJAX request), is the fact that it standardises a whole bunch of API inconsistencies between browsers. However, unless you need to support IE8 (which some of us still need to do from time to time), the gap between browsers is quite narrow in 2015.
We have to remember when jQuery first made its debut, we were still supporting IE6, things were an absolute mess and jQuery was absolutely needed, it was a breath of fresh air. It served its purpose well, but these days the need to use jQuery for plugging browser inconsistencies is lessened and it has become more of a time-saving library than a cross-browser patch for specification standards.
If you are interested in knowing how you can achieve jQuery like tasks in native Javascript, this handy site has you covered.
We’ll get into it a bit later, but try to avoid modifying an element’s style property directly like in the above examples. We touch upon why this is bad for performance shortly.
Cache DOM Lookups
An already well-established fact amongst most developers, but still worthy of a quick mention.
This is something you should be doing regardless of whether or not you are using jQuery or native Javascript operations. Instead of looking up the same DOM node(s) potentially multiple times, you can perform your DOM query and then store it in a variable or array and reference it later on instead of performing another lookup.
I’ll be back, so cache this lookup
The Terminator on caching DOM lookups
Calling document.getElementById('someid') multiple times when you can call it once and reference a variable obviously makes much more sense. The same trick can also be applied to anything that requires a lookup in JavaScript, like functions and values in outer scopes.
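As a sketch (the 'menu' ID and the openMenu name are made up for illustration), the lookup happens once and the cached node is reused for every subsequent operation. The document is passed in as a parameter purely so the example is self-contained:

```javascript
// One lookup, cached in a local variable, reused for every operation.
function openMenu(doc) {
  var menu = doc.getElementById('menu'); // single DOM query
  menu.classList.add('open');
  menu.setAttribute('aria-expanded', 'true');
  return menu;
}

// The uncached version would call doc.getElementById('menu') once per line.
```

In the browser you would simply call openMenu(document).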
Keep your scopes close and your variables even closer
Scoping in JavaScript can be very important. When you reference a variable or function within your JavaScript, the engine performs a lookup through something called the scope chain. The idea is that the closer your variables are to the code referencing them, the faster the engine can resolve their location within the chain.
I want his performance DEAD! I want his CPU DEAD! I want his browser burned to the GROUND! I wanna go there in the middle of the night and I wanna PISS ON HIS ASHES!
Al Capone, The Untouchables
While browsers these days are generally pretty good at optimising this kind of thing, locally scoping your variables provides the added advantage of reducing code ambiguity. If another developer looks at your code, they can clearly see, when a variable is referenced, exactly which variable it is.
If you were to define a variable at the top of your script called name and then, within a function or closure, define another variable called name, the locally scoped variable shadows the global one. You can’t actually access the variable defined at the top (unless you saved a reference to it), because the nearest declaration in the scope chain wins, and it can be frustrating and confusing if you ever run into this unexpectedly.
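A minimal sketch of that shadowing behaviour:

```javascript
var name = 'global';

function greet() {
  var name = 'local'; // shadows the outer `name` for this whole function
  return 'Hello, ' + name;
}

greet(); // returns 'Hello, local': the nearest declaration wins
// Outside the function, `name` is still 'global'; it was never touched.
```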
When working with jQuery especially, I always encapsulate my code within a self-executing anonymous function and pass in references to the globals jQuery, window and document to reduce the amount of scope-chain lookup required to resolve them.
See below for an example of how I start out most of my ES5 Javascript projects.
[code]
;(function($, window, document, undefined) {
})(jQuery, window, document);
[/code]
Going back to the previous point for a moment: caching DOM lookups is a good idea, but creating local references to functions can give you a little bit of added performance too.
See if you can spot any improvements that could be made to the following example:
[code]
function consumeFoods(foods) {
    for (var i = 0, len = foods.length; i < len; i++) {
        eat(foods[i]);
    }
}

function eat(food) {
    console.log('You just ate: ' + food);
}
[/code]
While the above example might look fine and dandy, as well as make you slightly hungry, there is a big improvement you can make here.
[code]
function consumeFoods(foods) {
    var eatFunc = eat;

    for (var i = 0, len = foods.length; i < len; i++) {
        eatFunc(foods[i]);
    }
}

function eat(food) {
    console.log('You just ate: ' + food);
}
[/code]
What we did in the second example is create a local reference to the eat function. Now whenever we want to call it, the browser’s engine doesn’t have to search as far to find the function, because we’ve locally scoped it.
The chain in this instance is really short which results in better performance. Don’t believe me? Check out this JSPerf test I created.
It is important to note that localised scoping will not give you massive performance gains. Any advantage you get from using them is small and in some cases you might not even notice a difference.
However, I am a stickler for performance, so any increase however small is a win for me. As you can see from the linked test and other tests lying about on JSPerf, sometimes the difference is a few points.
While it might not be a dramatic performance increase, if you factor in all of the other parts of your application, this saving will stack upon others and result in a more performant application in the long run. Every little bit helps and matters.
Optimise your loops
Use the right loop for the right job. If you are using jQuery.each, in most cases it is going to benchmark pretty poorly in comparison to a for loop, while loop, for..in loop or even a native forEach.
If you are iterating an array, in most cases you will achieve the best performance using a while or standard for loop. If you are iterating an object, ensure that you use hasOwnProperty() to check that the property being iterated directly belongs to the object and is not inherited.
Avoid nesting a loop within another loop wherever you can. Nesting multiplies the work done on every iteration, and unnecessarily nested loops are one of the biggest mistakes you can make; they will result in very poor performance.
Special mention: for..in loop abuse
The deepest circle of hell is reserved for betrayers and developers who use for..in loops to iterate arrays.
Captain Jack Sparrow, The Pirates of The Caribbean
In Javascript one of the biggest mistakes I often see is the for..in loop being abused and used for the completely wrong purposes. While it definitely has its place, this appears to be a really misunderstood loop.
If you need to iterate an object, a for..in loop fills the need. However, under no circumstance should you iterate an array using for..in. This is because a for..in loop enumerates every enumerable property, including ones inherited up the prototype chain, and gives no guarantee about order, which makes it both slower and less predictable than an indexed loop.
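To make that concrete, here is a sketch (the data is made up): for..in with a hasOwnProperty guard for objects, and a plain indexed for loop for arrays.

```javascript
var prices = { apple: 1.2, pear: 0.8 };
var ownKeys = [];

// Objects: for..in, guarded so inherited properties are skipped.
for (var key in prices) {
  if (Object.prototype.hasOwnProperty.call(prices, key)) {
    ownKeys.push(key);
  }
}

// Arrays: a plain indexed for loop, never for..in.
var fruits = ['apple', 'pear', 'plum'];
var eaten = [];
for (var i = 0, len = fruits.length; i < len; i++) {
  eaten.push(fruits[i]);
}
```

Calling hasOwnProperty via Object.prototype also guards against objects that define their own property named hasOwnProperty.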
Use a DOM DocumentFragment
This one time at band camp, I inserted lots of HTML directly into my DOM
That orange head girl from American Pie
If you’re not using DocumentFragments when inserting multiple HTML elements into your DOM, then rinse your mouth out with soap and come back when you’re done.
Essentially a DocumentFragment is a DOM node that lives outside of your actual DOM and exists solely in memory. It is kind of like a ghost: a representation of a real DOM without all of that UI overhead, something that can be treated like the actual DOM and, once you’re done, inserted into the real DOM or discarded.
This means you can modify the contents of a DocumentFragment by inserting multiple pieces of HTML into them without causing page reflows which will result in better performance than directly modifying the DOM multiple times in a short period of time.
Consider the following unrealistic example
[code]
// Assuming you have an element with the ID of 'content'
var content = document.getElementById('content');
var myFragment = document.createDocumentFragment();

for (var i = 0; i < 50; i++) {
    var div = document.createElement('div');
    div.textContent = 'This is my inner HTML being added to my DIV (' + i + ')';
    myFragment.appendChild(div);
}

content.appendChild(myFragment);
[/code]
You can see we are looping 50 times and creating a DIV, modifying its text node and then appending it to the fragment. Once the loop is done we insert the fragment into the page and we just avoided a potential reflow nightmare by batching our inserts into one operation. High five!
The opposite of this example is less pretty and makes me cringe just thinking about it: we would loop 50 times, and on each iteration insert a DIV into the page, causing a reflow every single time. BAD!
Don’t touch my DOM, bro
Please don’t touch this unless you absolutely need to.
MC Hammer on modifying the style attribute of a DOM element in Javascript.
As promised earlier, I was going to explain why you shouldn’t modify the style property of an element unless you absolutely need to or you have a death wish. The reason is simple: depending on what you are trying to do, you are going to cause reflows and repaints left, right and centre.
What is a repaint/refresh?
When the appearance of an element is changed (the background colour, for example), it causes that element to be redrawn, aka repainted or refreshed. Your browser basically says, “Oh, cool, you’ve changed your appearance, let me check it and update your mugshot” – changes that affect the visibility of an element cause a repaint.
But the thing that makes a repaint the second most expensive operation is that in most browsers the repaint isn’t isolated to just the one element: the browser must validate the visibility of all the other nodes in the DOM tree. If you have a lot of elements, that’s one expensive operation. Check please.
What is a reflow?
The most expensive operation of ’em all, the infamous reflow. Essentially, when the dimensions of an element change, or its position, or anything else that affects its size or placement within the page, it causes a waterfall of shitty performance to rain down on your page. All child nodes, ancestors and elements following the element have their layout recalculated. Effectively the entire page is re-rendered.
What causes a reflow?
Far too many things can cause a reflow.
- Resizing the window
- Changing a font
- CSS pseudo classes like :hover
- Changing the class attribute
- Calculating offsetWidth and offsetHeight
- And the biggest one: modifying the style attribute of an element
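A related trap is interleaving reads and writes, sometimes called layout thrashing: reading offsetWidth forces the browser to flush any pending layout, so a read-write loop triggers a reflow per iteration. Here is a sketch (the function names and the target width are made up for illustration); batching all the reads before all the writes keeps it to one reflow at most.

```javascript
// Thrashing: each iteration reads layout (offsetWidth) right after the
// previous iteration wrote to style, forcing a reflow every time.
function equaliseWidthsSlowly(boxes, target) {
  for (var i = 0; i < boxes.length; i++) {
    if (boxes[i].offsetWidth < target) {     // read: flushes layout
      boxes[i].style.width = target + 'px';  // write: invalidates layout
    }
  }
}

// Batched: all reads first, then all writes. One reflow at most.
function equaliseWidths(boxes, target) {
  var widths = [];
  for (var i = 0; i < boxes.length; i++) {
    widths.push(boxes[i].offsetWidth);       // reads only
  }
  for (var j = 0; j < boxes.length; j++) {
    if (widths[j] < target) {
      boxes[j].style.width = target + 'px';  // writes only
    }
  }
}
```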
I suggest reading up on Google’s great document entitled Minimizing browser reflow; it has some great tips if you’re wondering how you can mitigate reflow and make your application faster.
If you need to modify the style of an element, try to use a CSS class instead. This means you can make multiple modifications to an element in one go, similar to the DocumentFragment approach, which lets you batch changes and apply them all at once.
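As a sketch (the .is-hidden class name and the helper names are made up), every declaration behind the one class lands in a single style recalculation, whereas the style-property version mutates one property per line:

```javascript
/* Assumed CSS:
   .is-hidden { opacity: 0; visibility: hidden; }
*/

// One class change applies every declaration in .is-hidden at once.
function hidePanel(panel) {
  panel.classList.add('is-hidden');
}

// The style-property version makes one mutation per line, each a
// potential repaint/reflow on its own.
function hidePanelIncrementally(panel) {
  panel.style.opacity = '0';
  panel.style.visibility = 'hidden';
}
```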
Use event delegation
Instead of attaching an event listener to multiple nodes in your DOM, event delegation works on the premise of listening on a parent node and catching events as they bubble up from one or more child nodes.
My Mama always said, Javascript performance was like a box of chocolates; you never know what you’re gonna get.
Forrest Gump
The jQuery way
In jQuery event delegation is dead simple to do by writing something like the following:
[code]
$('#links').on('click', 'a', function(e) {
    // This code listens for events triggered on links within our DIV
});
[/code]
This is opposed to listening for an event on each link or a particular class. If you use jQuery, event delegation is one of the best habits to get into as a developer. In most cases there should be no reason not to use it.
The native way
As always with most jQuery methods, there is a relatively easy native way of doing the same thing (only the performance is better). Event delegation in conventional Javascript is actually a bit more convoluted, but results in better performance.
[code]
document.getElementById('links').addEventListener('click', function(event) {
    if (event.target && event.target.nodeName === 'A') {
        // Event triggered
    }
});
[/code]
There is a rather interesting JSPerf test here (that I didn’t create) which compares jQuery event delegation to native event delegation and the results definitely speak for themselves.
requestAnimationFrame is your friend
If you have the urge to perform animations in JavaScript, then you have probably used a setTimeout or setInterval to achieve this. This is bad for a whole number of reasons, the biggest one being that they will suck performance from your visitor’s browser and flatten their little phone batteries.
Don’t do it this way
[code]
setInterval(function() {
    // Maybe an animation in here?
}, 1000 / 60);
[/code]
If you are still using timers in this way, then stop and take a breather. Then read the documentation on requestAnimationFrame.
The benefits of requestAnimationFrame
- It is browser-optimised, so in most cases callbacks can run at 60fps (aka the sweet spot for animation)
- Animations taking place in inactive tabs are paused, meaning if a user navigates away from your tab, the animation releases the stronghold it has on the CPU.
- Because of the aforementioned optimisations, battery life and page responsiveness are better on low-end devices and mobile browsers.
If you must animate using Javascript, do it this way
[code]
function myFunc() {
    // Do whatever
    requestAnimationFrame(myFunc);
}

requestAnimationFrame(myFunc);
[/code]
The idea is that you call requestAnimationFrame initially to kickstart the process and then call requestAnimationFrame again from within the function being called, thus creating a loop.
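Building on that, the callback receives a high-resolution timestamp, which lets you drive the animation by elapsed time rather than frame count, so it runs at the same speed regardless of frame rate. A sketch (makeFader is made up for illustration; the optional `raf` parameter exists only so the scheduler can be substituted, and defaults to the browser's requestAnimationFrame):

```javascript
// Fades an element out over durationMs, frame-rate independently.
function makeFader(el, durationMs, raf) {
  var start = null;
  return function step(timestamp) {
    if (start === null) start = timestamp;
    var progress = Math.min((timestamp - start) / durationMs, 1);
    el.style.opacity = String(1 - progress);
    if (progress < 1) {
      (raf || requestAnimationFrame)(step); // schedule the next frame
    }
  };
}

// In the browser:
// requestAnimationFrame(makeFader(document.getElementById('box'), 500));
```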
Browser support is pretty good, the only caveat being support is nonexistent in IE9 and below, but a polyfill can take care of that for you.
Switch it up
That’s one small step for an if, one giant leap for switch statements
Neil Armstrong
Did you know that long chains of if statements can hamper JavaScript performance, and that switch/case statements, in most cases I have tested, result in better performance?
The reason switch/case statements are more performant is that the browser’s engine is able to optimise a switch/case statement better, and this is actually the case in most languages (PHP especially).
I created a JSPerf test here showing the differences between if statements and switch/case statements. You can see that the if statements are definitely slower.
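For reference, here is the same dispatch written both ways (a toy example; the status codes are arbitrary):

```javascript
function statusTextIf(code) {
  if (code === 200) return 'OK';
  else if (code === 301) return 'Moved Permanently';
  else if (code === 404) return 'Not Found';
  return 'Unknown';
}

// The switch form gives the engine one value to test against a flat set
// of constant cases, which is the shape it can optimise most easily.
function statusTextSwitch(code) {
  switch (code) {
    case 200: return 'OK';
    case 301: return 'Moved Permanently';
    case 404: return 'Not Found';
    default: return 'Unknown';
  }
}
```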
Too many elements
Did you know that the more elements you have in your DOM, the heavier your application will be? As a general rule of thumb, the more divs, spans and other tags you have in your page, the larger the DOM tree grows.
This section of Yahoo’s developer documentation on performance gives you a little insight into what happens, and how big Yahoo’s own homepage is in terms of element count.
Conclusion
There is plenty more I could go on about, but the point was just to make you, as a developer, more aware of certain pitfalls and performance traps you might not realise you are falling into. Not to mention advocating the use of native JavaScript over libraries like jQuery, which can on one hand make things easier for you, but also cost you performance points.
I would like to point out that prematurely optimising an application for the sake of a little bit of performance is not what I am advocating here. You should always use the appropriate tools, benchmark and analyse your code for actual performance problems, and optimise from there. Don’t solve problems before you have them, but at the same time, don’t knowingly write code you know is bad and will end up causing you problems for the sake of readability or time.
I am just trying to get developers thinking about all of the different ways you can skin a cat in JavaScript and to think about the trade-offs. This isn’t a religious guide advocating that you should do something my way or no way. I am just legitimately concerned that we are in an era of developers who lack a proper understanding of the underlying languages they are using, in this case JavaScript.
Some modifications have been made to this article since it was originally published. There were a couple of inaccuracies in the section on loops which have been rectified, one such embarrassing mistake was calling .length on an Object. I fixed them instead of leaving them to prevent confusion. Thank you to all who pointed this out.
That locally scoped func jsfiddle doesn’t show any performance improvement on most browsers.
Also, it seems like you should be optimizing the hot spots, not every line of code when it comes to about half of these examples. Clearly, request animation frame should be standard operating procedure.
@EJ,
My example doesn’t really go into excruciating detail. You are right there is not a massive performance difference in my JSPerf test specifically, but for example this test which uses nested functions for dramatic effect actually shows the benefits of locally scoping a whole lot better than my example: http://jsperf.com/scope-chain-test-local
There are plenty of scenarios where the effect of localised variables (like the above linked one) can truly be shown. The point of my example wasn’t to complicate things, but rather just explain the purpose of localised scoping and how it compares to referencing variables globally or further up the scope chain.
These examples are NOT meant to be set-in-stone do it this way or no way, they are just meant to serve as a guide and provoke some thought around ways developers can increase performance in their Javascript applications.
This was more half educational post and half thought piece, getting developers to start thinking about how they write their loops, structure their code and build their applications. I could have easily rambled on for a few thousand more words talking about all of the different ways and means of optimising a JavaScript application.
Thank you for stopping by and voicing your opinion, I appreciate you taking the time to do so.
About for(++) vs for(in): the two statements don’t do the same thing.
for(++) is used to iterate through an array; the indices have to be contiguous, which implies using `splice` to remove elements. for(in) doesn’t respect the order of the array (see MDN), but it enumerates only the existing indices. This allows you to use an array as a map with auto-generated keys. With ES6, this use case is replaced by `Symbol`.
Performance: http://jsperf.com/iterating-arrays-for-in-vs-for (other code, same result).
I too find the part about “locally scoped” variables bad. In my case, the JSPerf test in the post ran (slightly) faster (!) with the supposedly slower global case and the exaggerated example you provided in your comment still showed no difference (tested on latest FF).
In the example with the function, you’re really making another reference to the same function, so it’s hard to notice any difference. In the example with the variable, it’s a matter of correct use. You should simply define a variable in the scope that needs to access it. If you override it locally, you cannot change it globally anymore; if you don’t need to change it globally, why not define it locally in the first place.
I’d say that what’s going on here is you found a browser optimisation (not even present in every browser, as your own tests prove) and are trying to base your coding style on it. This is bad, correct scoping isn’t something new and mystical, it exists in all languages and best practice is to simply define variables globally enough to be accessible from everywhere they are needed and locally enough to not pollute the scope any more than needed. Leave the rest to the browser, otherwise you will be stuck with messier code and no guarantees for better performance.
Nice and useful article – have to admit I often find native JavaScript easier to write, understand and debug than jQuery. This article at least helps me in the defence of going native! But maybe, as you say, all modern browsers are becoming equivalent in terms of HTML5 and JS, so perhaps the time is coming for simpler libraries that we choose for a particular job.
The scoping example with the variable is 0.000000006 seconds faster on my mobile.
Furthermore it does something different, as it does not change the global variable.
What is the performance gain between $(element).hide() and element.style.display = '' ?
How many millions of elements do I have to hide in one run for a noticeable difference?
However I agree with the reflow aspect as it can give you huge performance gains.
About scoping: scoping in the sense of using .call()/.apply() has big performance issues. See this JSPerf: http://jsperf.com/call-scoping
You could use .bind() from ES5 but its performance is even worse – at least in Firefox.
Hey there, very nice collection of good practices, thanks for preparing this.
Me being a firm non-believer, I played around with your last test (switch vs if). It shows a variety of different results based on browser. For Chrome it sometimes shows that if statements are slower (less than 10%). For Firefox they are mostly the same (which leads me to believe they result in the same code after being interpreted). For Edge, surprisingly, switch statements are often >50% slower.
Wanted to bring this up because I think this is something of a moot point. As a long-term Java developer I know not to write my code based on how the virtual machine optimizes it. This is generally bad practice, since virtual machines do tend to change, especially with the variety of platforms and operating systems.
I’d also like to mention that 50% performance gain for something like a switch is really a premature optimization, rather than an improvement. Especially since the test had to test more than a billion operations per second in order to catch the difference …
Loved the DocumentFragment example. Would also be very nice if you can elaborate on the CSS vs style change approach. Haven’t researched that, but I am kind of wondering – why is there a difference?
@Tihomir
There is absolutely a bit of premature optimisation in this article, especially with the whole switch/if statement thing. Javascript engines are constantly evolving and getting better at catching inefficient code to the point where a few of these things might be a bit redundant.
The end goal is just to get developers thinking about the code they’re writing, but at the same time being reasonable as well. I definitely don’t always follow everything written here, the dynamic nature of development means you’re always writing bad code whether you’re diligent about it or not. I mean, realistically, not many of us are working with billions of operations.
I am thinking this article might need a bit of an update. It was only posted in 2015, but a lot has changed since then.
DocumentFragment is fantastic. It definitely can help you out when you’re writing to the DOM quite a bit, like creating a chat message application or something to that effect.
In regards to the styling, when you make changes that cause an element to repaint/reflow thus causing parts or the whole page to be re-parsed/rendered by the browser, it can impact performance (especially on mobile).
The approach to styling is to batch all of your changes together and apply them at once, as opposed to incrementally. This is where using a class can be convenient: instead of mutating style properties directly, a class encapsulates them and they get applied in the same update cycle. If you applied changes incrementally, you would be triggering reflows/repaints multiple times.
On a powerful gaming PC running Chrome, you probably wouldn’t notice any performance issues, but on a handheld/mobile device, you definitely would.
Really love your quotes and highly readable font!
1.1. `$('#somediv').hide();` instead of `element.style.display = 'none';`
Well, these are not equivalent snippets. If you don’t have an `element` reference on hand, you need to get it. Therefore we should compare jQuery’s variant with `document.getElementById('somediv').style.display = 'none';`
1.2. It would be good to keep a handle on the element after retrieving it the first time instead of traversing the DOM once per field to (re)set. This I can agree on.
2. `$('#somediv').hide();` is much, much more readable than `document.getElementById('somediv').style.display = 'none';`, which IMO violates the Law of Demeter. If you don’t like how jQuery performs, still wrap DOM-accessing logic into a meaningfully named function.
Nice points, yet I grow really tired of the widespread false advertisement of DocumentFragment. The point is that the element one is appending to should not be attached to the DOM; whether it is actually a fragment or an element does not matter at all.
Imagine creating a list (ul) with 100 entries (li). Whether I add all entries to the list while it is not attached to the DOM, or add them all to a fragment first, makes no difference in performance – actually, in the fragment version I need extra effort to create it unnecessarily.
Thanks for the nice article. I hope it is okay that I reference it sometimes in code review. Are you able to add some anchors for each heading/section?
These are mini optimisations that can be very helpful