Introduction
When we develop AJAX applications, we often carry over bad practices whose effects are barely visible while the site is small. But they can cause severe performance problems on sites that make heavy use of AJAX technologies, such as Pageflakes and Netvibes.
With so many AJAX widgets on one page, minor memory leaks can combine and even crash the site with a nasty "Operation aborted" error. When there are many web service calls and many iterations over collections, inefficient coding can make a site very heavy: the browser eats up a lot of memory, burns costly CPU cycles, and ultimately delivers an unsatisfactory user experience. This article demonstrates many of these issues in the context of ASP.NET AJAX.
Use more "var"
Less use of var can result in wrong calculations as well as errors in logic. It also makes it hard for the JavaScript interpreter to determine the scope of a variable. Consider the following simple JavaScript code:
function pageLoad()
{
i = 10;
loop();
alert(i);
}
function loop()
{
for(i=0; i<100; ++i)
{
}
}
Here, the loop uses the same variable i that was used before in pageLoad, so it produces a wrong result: the alert shows 100, not 10. Unlike .NET code, in JavaScript a variable declared without var leaks into the enclosing (global) scope and travels along with method calls. So, don't confuse the interpreter; use var consistently in your code:
function pageLoad()
{
var i = 10;
loop();
alert(i);
}
function loop()
{
for(var i=0; i<100; ++i)
{
}
}
Reduce scopes
It's not very common, but if you ever encounter such code, be sure it's a very bad practice. Introducing more scopes is a performance issue for the JavaScript interpreter, because each one adds a new link to the scope chain. See the following sample:
function pageLoad()
{
scope1();
function scope1()
{
alert('scope1');
scope2();
function scope2()
{
alert('scope2');
}
}
}
Introducing more scopes forces the interpreter to walk through more sections of the scope chain it maintains for code execution. So, unnecessary scopes reduce performance, and they are bad design too.
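The nested functions above can simply be declared at the top level instead. A minimal sketch of the flattened version (using an array of messages in place of alert so the effect is visible):

```javascript
// Flattened version: scope1 and scope2 live at the top level, so the
// interpreter resolves each call without walking a nested scope chain.
var messages = [];

function scope2()
{
    messages.push('scope2');
}

function scope1()
{
    messages.push('scope1');
    scope2();
}

function pageLoad()
{
    scope1();
}
```

The behavior is identical, but no extra scopes are created on each call to pageLoad.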
Careful with DOM element concatenation
This is a very common bad practice. We often iterate through arrays, build HTML content, and keep concatenating it into a DOM element. Every time the block of code inside the loop executes, it creates the HTML markup, locates the div, accesses its innerHTML, and then, because of the += operator, locates the same div again, reads its innerHTML, and concatenates before assigning.
function pageLoad()
{
var links = ["microsoft.com", "tanzimsaqib.com", "asp.net"];
$get('divContent').innerHTML = 'My favorite sites:<br />';
for(var i=0; i<links.length; ++i)
$get('divContent').innerHTML += '<a href="http://www.'
+ links[i] + '">http://www.' + links[i] + '</a><br />';
}
However, as you know, accessing a DOM element is one of the costliest operations in JavaScript. So, it's wise to concatenate all the HTML content in a string and assign it to the DOM element once, at the end. That saves a lot of hard work for the browser.
function pageLoad()
{
var links = ["microsoft.com", "tanzimsaqib.com", "asp.net"];
var content = 'My favorite sites:<br />';
for(var i=0; i<links.length; ++i)
content += '<a href="http://www.' + links[i]
+ '">http://www.' + links[i] + '</a><br />';
$get('divContent').innerHTML = content;
}
Avoid using your own methods when there is a built-in one
Avoid implementing your own getElementById method, which causes script-to-DOM marshalling overhead. Each time you traverse the DOM looking for certain HTML elements, the JavaScript interpreter has to marshal the call from script to the DOM. It's always better to use the getElementById of the document object. So, before you write a function, check whether similar functionality can be achieved with a built-in one.
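To see the kind of work a hand-rolled lookup repeats on every call, here is a sketch of a naive recursive search; plain objects stand in for DOM nodes so the example is self-contained, and the function and tree names are illustrative:

```javascript
// A naive recursive lookup: each call walks the whole tree from the root.
// document.getElementById does this lookup natively, and far faster.
function findById(node, id)
{
    if (node.id === id)
        return node;
    var children = node.childNodes || [];
    for (var i = 0; i < children.length; ++i)
    {
        var found = findById(children[i], id);
        if (found)
            return found;
    }
    return null;
}

// A tiny stand-in tree; in a browser this would be the real DOM.
var tree = {
    id: 'root',
    childNodes: [
        { id: 'header', childNodes: [] },
        { id: 'divContent', childNodes: [] }
    ]
};

var result = findById(tree, 'divContent');
// In page code, prefer: document.getElementById('divContent')
```

Every call pays the full traversal cost in script, while the built-in does the same lookup inside the browser's native code.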
Avoid using Array.length in a loop
It's a very common cause of performance issues in AJAX applications. We often write code like the following:
var items = [];
for(var i=0; i<items.length; ++i)
;
This can be a severe performance issue if the array is large. JavaScript is an interpreted language, so every time the interpreter checks the loop condition, it accesses the length property again. If the contents of the array do not change during the loop's execution, there is no need to read length on every iteration. Store the length in a variable and use that in the condition:
var items = [];
var count = items.length;
for(var i=0; i<count; ++i)
;
Avoid string concatenations, use array instead
Don't you think the following block of code has been written with every possible good practice in mind? Is there any room left for performance improvement?
function pageLoad()
{
var stringArray = new Array();
stringArray.push('<div>');
stringArray.push('some content');
stringArray.push('</div>');
var veryLongHtml = $get('divContent').innerHTML;
var count = stringArray.length;
for(var i=0; i<count; ++i)
veryLongHtml += stringArray[i];
}
Well, as you see, the innerHTML of the div has been cached so that the browser does not have to access the DOM on every iteration through stringArray; the costlier DOM operations are avoided. But inside the body of the loop, the JavaScript interpreter has to perform the following operation:
veryLongHtml = veryLongHtml + stringArray[i];
And veryLongHtml contains quite a large string, which means that in this operation, the interpreter has to retrieve the large string and concatenate it with a stringArray element on every iteration. One very short yet efficient solution to this problem is to use the array's join method instead of looping through the array:
veryLongHtml = stringArray.join('');
This is far more efficient than what we were doing, since join concatenates the array's smaller strings in a single operation, which requires less memory.
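Putting the pieces together, the earlier snippet can be reduced to a small helper that builds the markup and joins it once; this is a sketch, and the buildContent name is illustrative (the final innerHTML assignment is shown as a comment since it needs a browser DOM):

```javascript
// Collect markup fragments in an array and join once at the end,
// so no large intermediate strings are built inside a loop.
function buildContent()
{
    var stringArray = [];
    stringArray.push('<div>');
    stringArray.push('some content');
    stringArray.push('</div>');
    // One join, one string allocation:
    return stringArray.join('');
}

// In page code: $get('divContent').innerHTML += buildContent();
```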
Introduce function delegates
Take a look at the following loop. This loop calls a function in each iteration and the function does some stuff. Can you think of any performance improvement here?
for(var i=0; i<count; ++i)
processElement(elements[i]);
Well, for sufficiently large arrays, function delegates may result in significant performance improvement to the loop.
var delegate = processElement;
for(var i=0; i<count; ++i)
delegate(elements[i]);
The reason for the performance improvement is that the JavaScript interpreter uses the delegate as a local variable and does not look up the function in its scope chain on each iteration.
Introduce DOM elements and function caching
We have seen DOM caching before, and function delegation is also a kind of function caching. Take a look at the following snippet:
for(var i=0; i<count; ++i)
$get('divContent').appendChild(elements[i]);
As you can figure out, the code is going to be something like:
var divContent = $get('divContent');
for(var i=0; i<count; ++i)
divContent.appendChild(elements[i]);
That is fine, but you can also cache a browser function like appendChild. Note that a DOM method cached in a plain variable loses its this binding, so it must be invoked with call against the element. So, the ultimate optimization will be like the following:
var divContent = $get('divContent');
var appendChild = divContent.appendChild;
for(var i=0; i<count; ++i)
appendChild.call(divContent, elements[i]);
Problem with switch
Unlike .NET languages and other compiled languages, the JavaScript interpreter cannot optimize a switch block, especially when the switch statement is used with different types of data. It becomes a heavy operation for the browser because type-conversion operations occur in sequence; it is an elegant way of decision branching, though.
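One common alternative, sketched below with illustrative names, is to replace a switch over values with an object used as a lookup table, turning the sequential case comparisons into a single property access:

```javascript
// switch version: the interpreter compares the case values one by one.
function statusTextSwitch(code)
{
    switch (code)
    {
        case 200: return 'OK';
        case 404: return 'Not Found';
        case 500: return 'Server Error';
        default:  return 'Unknown';
    }
}

// Lookup-table version: one property access instead of sequential
// comparisons, and no type conversions along the way.
var statusTexts = { 200: 'OK', 404: 'Not Found', 500: 'Server Error' };

function statusTextLookup(code)
{
    return statusTexts[code] || 'Unknown';
}
```

Both functions return the same results; the lookup table simply avoids walking the case list for every call.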
Conclusion
In this article, we have seen many techniques for performance optimization in AJAX applications. Of course, these are not new or unique ideas, so you might find similar advice elsewhere as well.