When it comes to high performance, there is sheer elegance in simplicity. In this article, I would like to tackle high-performance forms in vanilla HTML. In this field, there are many ways to solve a problem, but in my opinion it is the simple solution that prevails.
For starters, let’s define high performance:
- HTTP responses in less than a second
- Minimal amount of HTTP requests
- Minimal payload per request
I think the three criteria above define high performance in a succinct way. There may be those who disagree with sub-second HTTP responses. But in a world where network latency can vary from high-speed fiber optics to 3G, you cannot afford anything more. It is unkind to present the gentle user with the spinning wheel of death, since there is more to life than staring at a screen.
So, what does it take to deliver neck-bending high performance in vanilla HTML? Let’s find out.
The Basic Workflow
I made up the example I present here, but it is applicable across the board. It is your basic form of a Person with a Friend and some Addresses. Here is the object graph in C#:
new Person
{
    Id = 1,
    Name = "Jane Doe",
    Friend = new Person
    {
        Id = 2,
        Name = "Jon Doe"
    },
    Addresses = new Address[]
    {
        new Address
        {
            Id = 1,
            City = "Athens",
            State = "Texas"
        },
        new Address
        {
            Id = 2,
            City = "Paris",
            State = "Texas"
        }
    }
};
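For reference, here is a minimal sketch of what the POCOs behind this graph might look like. The property names come from the object initializer above; the PersonId foreign key on Address is an assumption, inferred from the repository calls later in the article:

```csharp
public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public Person Friend { get; set; }
    public Address[] Addresses { get; set; }
}

public class Address
{
    public int Id { get; set; }
    public int PersonId { get; set; } // assumed foreign key back to the parent Person
    public string City { get; set; }
    public string State { get; set; }
}
```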
My strategy is to break each individual piece of the data graph into separate components. What I’m doing here is taking a large problem and breaking it down into small problems. It turns out this is great for performance.
So, for example, one can see that each data model should go in a separate HTML form. What we get from this is a system of HTTP requests and responses that flows well through the entire data model.
In ASP.NET MVC, let’s turn to the person model from the perspective of the MVC controller:
public ActionResult PersonEdit()
{
    var p = personRepo.Find(1);
    var vm = new PersonViewModel
    {
        Id = p.Id,
        Name = p.Name
    };

    return View(vm);
}

[HttpPost]
public ActionResult PersonEdit(Person p)
{
    personRepo.Update(p);
    return RedirectToAction("Index");
}
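The matching Razor view is not shown here, but a minimal sketch might look like the following. The field names follow the view model; the Home controller name and the default layout are assumptions:

```cshtml
@model PersonViewModel

@using (Html.BeginForm("PersonEdit", "Home", FormMethod.Post))
{
    @Html.HiddenFor(m => m.Id)
    @Html.LabelFor(m => m.Name)
    @Html.TextBoxFor(m => m.Name)
    <input type="submit" value="Save" />
}
```

The hidden Id field lets the default model binder reconstruct the Person on post.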
As you can see, this part of the controller has a single responsibility in serving and updating the person model. Let’s take a look at the friend:
public ActionResult FriendEdit(int id)
{
    var p = personRepo.FindByPersonId(id);
    var vm = new FriendViewModel
    {
        Id = p.Id,
        PersonId = id,
        Name = p.Name
    };

    return View(vm);
}

[HttpPost]
public ActionResult FriendEdit(Person p)
{
    personRepo.Update(p);
    return RedirectToAction("PersonEdit");
}
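As a sketch, the friend’s form can carry the relational link back to the parent in a hidden field. The view markup here is my assumption, built from the FriendViewModel above:

```cshtml
@model FriendViewModel

@using (Html.BeginForm("FriendEdit", "Home", FormMethod.Post))
{
    @Html.HiddenFor(m => m.Id)
    @Html.HiddenFor(m => m.PersonId)
    @Html.TextBoxFor(m => m.Name)
    <input type="submit" value="Save" />
}
```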
When I post the friend’s data, the system routes the response back to the person, which is the parent model, with a redirect. This creates a natural flow that is intuitive for users. The friend’s GET request expects the relational data to link itself back to the parent. Finally, let’s turn to the addresses:
public ActionResult Add(int id)
{
    var list = repo.Find(id);
    var vm = new AddressViewModel
    {
        PersonId = id,
        Addresses = list.ToArray()
    };

    return View(vm);
}

[HttpPost]
public ActionResult Add(Address a)
{
    repo.Add(a);

    var list = repo.Find(a.PersonId);
    var vm = new AddressViewModel
    {
        PersonId = a.PersonId,
        Addresses = list.ToArray()
    };

    return View(vm);
}
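The address view both lists the existing addresses and offers a form to add a new one. A minimal sketch, with the form field names assumed from the view model:

```cshtml
@model AddressViewModel

<table>
    @foreach (var a in Model.Addresses)
    {
        <tr><td>@a.City</td><td>@a.State</td></tr>
    }
</table>

@using (Html.BeginForm("Add", "Home", FormMethod.Post))
{
    @Html.Hidden("PersonId", Model.PersonId)
    @Html.TextBox("City")
    @Html.TextBox("State")
    <input type="submit" value="Add" />
}
```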
I hope you can see the pattern. We are only concerned with each specific piece of the entire workflow. We are leveraging HTTP requests to tell us where we are in the workflow while keeping a clear separation of concerns. As a result, we cut both HTTP requests and the payload per request.
I'll spare you the implementation details of each individual view, since they are not hard to imagine. Below is a high-level overview of all the views. You are more than welcome to explore the code at the end of this article.
An Optimization Trick
The perceived latency in the browser depends on a few factors. The most important being web resources like JavaScript and CSS files the browser needs to render the page. One way to virtually drop this hurdle is to bundle and cache resources inside the browser. I used the bundling engine in ASP.NET to do this heavy lifting for me.
@Styles.Render("~/Content/css")
I ended up not needing JavaScript, but the same can be done for your scripts as well. Once this is complete, you should see this header sent to the browser:
Expires: Thu, 21 Jul 2016 02:18:26 GMT
With this, the browser will get the resource and keep it cached for about a year from the time of this writing. ASP.NET bundling does automatic cache busting, so if you update the resource, the engine will bust the browser cache and send the update.
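For completeness, the bundle behind Styles.Render gets registered once at application startup. Here is a typical sketch, where the bundle name matches the Render call and the included file paths are assumptions:

```csharp
public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // The virtual path "~/Content/css" must match the @Styles.Render call in the view
        bundles.Add(new StyleBundle("~/Content/css")
            .Include("~/Content/site.css"));
    }
}
```

When optimizations are enabled (the default outside debug mode), the engine appends a content-hash query string to the bundle URL, which is how it busts the browser cache when a file changes.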
All Together Now
With the technical explanation out of the way, let’s see this baby in action:
And what do the HTTP requests look like? Let’s see them throughout the entire workflow.
With an unprimed browser cache, the CSS resources get downloaded once. The beauty here is that once the browser knows which resources it needs, it never asks for them again. The HTTP traffic gets reduced to just the HTML forms, which is what we care about. From the user’s perspective, they’ll notice screens jump into action based on form submission. The 302 responses are redirects to different parts of the workflow. The green arrows over to the left are POST HTTP requests with a 200 or 302 response. What I find so radical is how much we leverage from the standard protocol. This is evidence of good design principles. It feels like HTTP is working for me, instead of a sloppy architecture that bends over backwards to fit in.
To geek out a bit, we can measure the performance based on the entire workflow:
Request Count: 10
Bytes Sent: 3,437 (headers:3,370; body:67)
Bytes Received: 11,545 (headers:4,031; body:7,514)
Sequence (clock) duration: 00:01:03.801
These numbers are flattering since the demo is not talking to a database server. But every individual HTTP request comes in well under a second. If you sum up the entire workflow, we are still coming in at just over a second. So on average, we are looking at about 100ms per request. Of course, we could squeeze it further by turning on gzip compression. The point I’m making here is that we now have a highly scalable solution that doesn’t infringe on users or standard protocols. All great news!
But, But, What About Ajax?
Trust me, I love Ajax. And knowing how to love a great tool is also knowing when to let it go. The technique tends to get used and abused. The sort of abominations I find with web forms are precisely due to too many Ajax requests. It’s like Ajax gets put on a pedestal and becomes the one solution to all. I understand the payload reduction you get from partial page updates. But, having an HTTP request fire per field update makes for bad performance. The point is to reduce HTTP requests, including Ajax requests. Ajax has its proper place, but high performance is not always one of them.
Having said that, I do like how much capability we get from vanilla HTML forms, without JavaScript. These forms are accessible, fast, and intuitive. There is yet a way to introduce innovative Ajax through progressive enhancement. I’ll leave that to your imagination.
Final Thoughts
I hope you get the gist of how to build high-concurrency, low-latency systems. Vanilla HTML over HTTP is a good candidate because you get to leverage the standard protocol. Each individual request gets a fast response because of the stateless nature of the protocol. As a result, the load on the server is minimal, so this scales to a large number of concurrent users.
If interested, you may find the entire demo up on GitHub.
The post High Performance HTML Forms appeared first on BeautifulCoder.NET.