Background
A few days ago, one of my friends asked me how Gmail changes its URL while the user operates inside it, without refreshing the page. I had no idea at the time, so he shared a link, Ajax Pattern - Unique URLs, which dives deep into this topic. As the article mentions, a unique URL makes your website's links "Bookmarkable, Linkable, Type-In-Able", plus, IMHO, Sharable: easy to share on social networks, which is extremely important nowadays.
Implementation
The key to achieving the "Unique URL" goal can be summarized in two points:
- Whenever Ajax updates the page content significantly, update the URL (location.hash) as well:
location.hash = 'Blogs&Page5';
- Every time the page loads, JavaScript should parse the URL and render the corresponding content:
<body onload="restoreAjaxContent()">
    <script type="text/javascript">
        function restoreAjaxContent() {
            // location.hash includes the leading '#', e.g. "#Blogs&Page5"
            var urlHash = location.hash;
            var curPageNo = urlHash.replace('#Blogs&Page', '');
            // ...then fetch and render the content for curPageNo via Ajax
        }
    </script>
</body>
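The two points above can be sketched as a pair of small helper functions for the "Blogs&PageN" hash scheme. The function names are mine, not from the original article; writing them as pure functions keeps the parsing logic testable outside the browser:

```javascript
// Build the hash fragment for a given page number, e.g. buildHash(5) -> "Blogs&Page5"
function buildHash(pageNo) {
    return 'Blogs&Page' + pageNo;
}

// Parse the page number back out of location.hash.
// location.hash includes the leading '#', so the regex makes it optional.
function parsePageNo(urlHash) {
    var match = /^#?Blogs&Page(\d+)$/.exec(urlHash);
    return match ? parseInt(match[1], 10) : null;
}

// In the browser these would be wired up roughly like:
//   location.hash = buildHash(5);            // user paged to page 5
//   var page = parsePageNo(location.hash);   // restore state on page load
```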
What I want to emphasize is the hash value, i.e., the content after the #. It was originally intended to be an HTML element's name attribute, used for in-page navigation. It is entirely a contract between the browser, the HTML content, and JavaScript; the server side cannot get the information directly unless we explicitly pass the value to it (hidden form field, URL query string, Ajax, etc.). Therefore, if a user accesses the unique URL, your website's client-side JS should parse the hash and retrieve the relevant data from the server.
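To illustrate passing the hash value back to the server, here is a minimal sketch: the endpoint name /GetBlogPage and the element id "main" are made up for illustration, and the URL-building part is split into its own function:

```javascript
// Turn a hash like "#Blogs&Page5" into a server request URL (endpoint is hypothetical).
function buildPageUrl(urlHash) {
    var pageNo = urlHash.replace(/^#?Blogs&Page/, '');
    return '/GetBlogPage?page=' + encodeURIComponent(pageNo);
}

// In the browser, an Ajax call would then fetch the real content:
//   var xhr = new XMLHttpRequest();
//   xhr.open('GET', buildPageUrl(location.hash), true);
//   xhr.onload = function () {
//       document.getElementById('main').innerHTML = xhr.responseText;
//   };
//   xhr.send();
```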
Advantages
- Better user experience: Every time a user accesses the unique-URL Ajax page, the fixed part of the page loads first, and the main content loads asynchronously. If the main content is large, for example it contains images or rich media, this "async loading" is much better than having the whole page blocked while those images/media download. For instance, suppose loading a page originally takes 2 seconds in total; after applying async loading, the fixed part takes 0.4 seconds and the main content takes another 1.8 seconds. From the user's point of view the latter usually feels better, because the page appears partially loaded within a short period (0.4 seconds). Add a graceful loading/splash screen, and the UX improves significantly!
- Better SEO support (performance aspect): Page rendering speed is an important factor for a search engine's crawler; with async loading, the crawler will deem the page it is crawling to have a good loading speed, 0.4 seconds in the example above.
- Easier to pass W3C validation: This is half a joke. :) Since your main content is Ajax-loaded, the W3C validator (and likewise a search engine) won't validate the main content, which very likely does not strictly adhere to all the standard rules.
Disadvantages
- Main content cannot be indexed by search engines: Since all main content is loaded by JavaScript, search engines won't crawl it, which is a serious problem! However, it is easy to work around: build a traditional page without Ajax, list it in sitemap.xml, and submit it to the search engine's Webmaster tools.
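The workaround could look like the fragment below: serve a plain-HTML equivalent of each Ajax page and list it in sitemap.xml (the URL here is illustrative, assuming a non-Ajax version of the "#Blogs&Page5" page exists at its own path):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Plain-HTML version of the Ajax page "#Blogs&Page5" (illustrative URL) -->
  <url>
    <loc>http://wayneye.com/Blogs/Page5</loc>
  </url>
</urlset>
```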
- Harder to develop and maintain: Client-side JavaScript/Ajax development is more complex and less convenient compared with server-side technologies like ASP.NET, Java EE, or PHP. Although there are jQuery ("write less, do more"), Prototype.js (which makes JS development more object-oriented), Dojo, and so on, a developer may still not be very happy while struggling with mixed HTML/CSS/JavaScript. :)
P.S. I spent two days updating my blog (http://WayneYe.com), revising the paging from the traditional style to the Ajax unique-URL pattern above. It now does Ajax paging and updates the URL like "http://wayneye.com/#Blogs&Page5", so it is definitely "Bookmarkable, Linkable, Type-In-Able" and Sharable; with a loading panel and a content fade-in effect, I believe its UX is much better than before.