This article presents a case study on optimizing mass queries in Blazor applications by improving the interoperability between JavaScript and C#. It explores practical techniques for handling large datasets efficiently and, through step-by-step explanations and real-world examples, shows how to streamline data processing in Blazor for faster, more responsive applications.
The truth is that having NuGet packages in production is much more stressful than it initially seemed. When others start using your packages, they begin reporting issues or requesting new features. On top of that, I frequently use these packages in my own projects, both personal and for the company I work for.
In my NuGet package for handling IndexedDB from Razor, intended primarily for Blazor WebAssembly applications, I ran into a challenge in a personal project: a website to showcase my movie collection, https://movies.sergiortizgomez.com/. I wanted to store all the relationships between movies, actors, and directors in the local database. This resulted in a table with 27,237 records just for the movie-actor relationships, and another 489 records for the movie-director relationships.
For the movies and directors table, data retrieval was fairly quick, taking just over a second. However, for the movies and actors table, the website would freeze for 29 seconds before becoming responsive again. This was an unacceptable delay!
Identifying the Problem
The first thing I did was measure how long the code took to execute using the Stopwatch class. This helped me identify that the problem was in the data retrieval process of my DrUalcman-BlazorIndexedDb NuGet package, which took over 26 seconds to return the list of data.
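As a rough sketch, the measurement looked something like this (the helper and the query delegate are illustrative, not the package's actual API):

using System;
using System.Diagnostics;
using System.Threading.Tasks;

// Small helper to time any async call; the query delegate is whatever
// operation is under suspicion (here, the package's select query).
static async Task<T> MeasureAsync<T>(string label, Func<Task<T>> query)
{
    Stopwatch stopwatch = Stopwatch.StartNew();
    T result = await query();
    stopwatch.Stop();
    Console.WriteLine($"{label}: {stopwatch.ElapsedMilliseconds} ms");
    return result;
}

// Usage, with a hypothetical stand-in for the package's select call:
// List<MovieActor> actors = await MeasureAsync("Select", () => db.SelectAsync<MovieActor>());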
Determining the Source of the Error
My initial suspicion was that the issue could be on the JavaScript side—a language I love (with a hint of sarcasm). So, I attempted to optimize the JavaScript code while continuing to measure results from C#. Although I managed to reduce the times slightly, it was still too slow.
I then decided to directly measure how long JavaScript took to read the data from IndexedDB. To my surprise, I found that reading the 27,237 records took less than a second, sometimes even under half a second. This ruled out JavaScript as the source of the problem.
Starting to Seek Solutions
Knowing that JavaScript wasn't the culprit, I focused on how the data was sent to C#.
First, I changed my approach in C#. Instead of trying to receive the object directly:
List<TModel> data = await jsRuntime.GetJsonResult<List<TModel>>(
    "MyDb.Select",
    Setup.Tables.GetTable<TModel>(),
    Setup.DBName,
    Setup.Version,
    Setup.ModelsAsJson);
I started requesting the data as a JsonElement. This reduced the time from 26 to 14 seconds, indicating that the issue was more with how C# handled the data than with JavaScript. However, converting from the JsonElement to the desired object brought the time back up to 25 seconds, which was still unacceptable.
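A minimal sketch of that intermediate attempt, using the standard InvokeAsync instead of the package's own extension method, and assuming .NET 6+ where the JsonElement.Deserialize<T>() extension is available:

using System.Text.Json;

JsonElement json = await jsRuntime.InvokeAsync<JsonElement>(
    "MyDb.Select",
    Setup.Tables.GetTable<TModel>(),
    Setup.DBName,
    Setup.Version,
    Setup.ModelsAsJson);

// The interop call itself became faster, but this conversion step
// is what pushed the total back up to around 25 seconds.
List<TModel> data = json.Deserialize<List<TModel>>();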
Then I thought: instead of sending a JSON object from JavaScript, why not just send text and then deserialize it in C#? This change reduced the time to 20 seconds, but it was still not enough.
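That variant looked roughly like this, assuming the JavaScript side returns the output of JSON.stringify:

string jsonText = await jsRuntime.InvokeAsync<string>(
    "MyDb.Select",
    Setup.Tables.GetTable<TModel>(),
    Setup.DBName,
    Setup.Version,
    Setup.ModelsAsJson);

// Deserializing from the raw string brought the total down to about
// 20 seconds, still far from acceptable.
List<TModel> data = JsonSerializer.Deserialize<List<TModel>>(jsonText);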
I considered compressing the data in JavaScript to reduce the amount of information being sent, but this solution required implementing a lot of additional code in both JavaScript and C#, which wasn't ideal.
Finding the Solution
Finally, I came up with an idea: why not send bytes instead of text from JavaScript?
async function JsonToBytes(data) {
    const jsonString = JSON.stringify(data);
    const encoder = new TextEncoder();
    return encoder.encode(jsonString);
}
And when reading the data from IndexedDB:
request.onsuccess = async () => {
    db.close();
    const data = await JsonToBytes(request.result);
    resolve(data);
};
Since .NET 6, Blazor's JS interop can transfer a Uint8Array from JavaScript as a byte[] on the .NET side without a Base64 round-trip, and System.Text.Json can deserialize directly from UTF-8 bytes. This allowed me to do the following in C#:
byte[] dataBytes = await jsRuntime.InvokeAsync<byte[]>(
    "MyDb.Select",
    Setup.Tables.GetTable<TModel>(),
    Setup.DBName,
    Setup.Version,
    Setup.ModelsAsJson);

// System.Text.Json reads the UTF-8 bytes directly,
// with no intermediate string allocation.
List<TModel> data = JsonSerializer.Deserialize<List<TModel>>(
    dataBytes,
    new JsonSerializerOptions
    {
        PropertyNameCaseInsensitive = true,
        AllowTrailingCommas = true,
        ReadCommentHandling = JsonCommentHandling.Skip
    });
The additional options in JsonSerializerOptions also helped improve the deserialization performance. As a result, the times improved significantly:
- Data retrieval from JavaScript took no more than 1 second, even for the 27,237 records.
- Converting the data to the desired list took less than 9 seconds. Although this time could be improved by Microsoft, I'm satisfied with the result.
- Other queries that previously took about a second or more now execute in less than half a second. It was truly a significant improvement!
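One related micro-optimization, shown here only as a sketch and not part of the original change: System.Text.Json caches serialization metadata per JsonSerializerOptions instance, so reusing a single static instance instead of allocating a new one on every call avoids rebuilding that cache for each query:

private static readonly JsonSerializerOptions CachedOptions = new()
{
    PropertyNameCaseInsensitive = true,
    AllowTrailingCommas = true,
    ReadCommentHandling = JsonCommentHandling.Skip
};

// Then reuse it on every call:
// List<TModel> data = JsonSerializer.Deserialize<List<TModel>>(dataBytes, CachedOptions);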
Conclusions
This is how I conducted my investigation and optimized the code, which is now available in the latest version of my NuGet package.
The lessons learned are:
- It is better to send bytes from JavaScript to C# instead of a JSON object.
- Deserializing from byte[] in C# is much faster than from a string.
These points are especially relevant if you need to improve response speed or minimize user interface blocking while working between JavaScript and C#. I haven't shown the full package code here because this is just a personal experience. If you want to see the improvements in detail, you can review the code before and after in the corresponding commit of the repository.