
Leveraging MemoryCache and AOP for Expensive Calls

How to use AOP and System.Runtime.Caching.MemoryCache to improve application performance
If you have ever found yourself writing repetitive code to squeeze extra performance out of your application, this article might open up a whole new set of possibilities.

Background

Since I found out about SharpCrafters' PostSharp, I have been a customer and a big fan of the tool and of the Aspect Oriented Programming (AOP) model in general. I will not bother you with the details of how much boilerplate code it eliminates; volumes have been written about that already. This article is also not about what AOP is or how PostSharp weaves the additional code into your compiled application. If you are interested in those details, you will have to visit their support forums and blogs. I highly recommend it anyway.

Using the Code

Recently, I encountered an interesting performance problem. My application required a heavy rules-processing engine from a 3rd-party provider, and as a considerate developer, I opted to properly dispose of all IDisposable objects after use. However, that doubled, and in some cases quadrupled, processing times. You may have encountered a similar scenario with expensive SOA or database calls, so the technique I am about to introduce will help in scenarios where, given the same set of parameters and time frame, the response will always be the same.

First, let's start by creating a caching attribute using PostSharp's AOP:

C#
using System;
using System.Linq;
using System.Reflection;
using System.Runtime.Caching;
using PostSharp.Aspects;
using PostSharp.Extensibility;

namespace InRuleLocalStressLoad.Lib.Attibutes
{
    [Serializable]
    public class CacheAttribute : MethodInterceptionAspect
    {
        [NonSerialized] private static readonly MemoryCache Cache;

        [NonSerialized] private CacheItemPolicy cachePolicy;
        private string _methodName;
        private string _declaringType;
        private string _prefix;

        static CacheAttribute()
        {
            // Only obtain the cache when the application is actually running;
            // there is no need for one while PostSharp is processing the aspect at build time.
            if (!PostSharpEnvironment.IsPostSharpRunning)
            {
                Cache = MemoryCache.Default;
            }
        }

        public override void CompileTimeInitialize(MethodBase method, AspectInfo aspectInfo)
        {
            _methodName = method.Name;
            _declaringType = method.DeclaringType != null ? 
                                 method.DeclaringType.FullName : String.Empty;
            _prefix = String.Concat(_declaringType, ".", _methodName, ":");
        }

        public override void RuntimeInitialize(MethodBase method)
        {
            cachePolicy = new CacheItemPolicy {SlidingExpiration = TimeSpan.FromMinutes(15)};
        }

        public override void OnInvoke(MethodInterceptionArgs args)
        {
            var key = BuildCacheKey(args.Arguments);

            // Try the cache first. On a miss, invoke the intercepted method and try to add
            // its result; AddOrGetExisting returns null when the item was actually added,
            // so a final Get picks up the value that was just stored.
            args.ReturnValue = Cache.Get(key, null) ?? Cache.AddOrGetExisting(key, 
              args.Invoke(args.Arguments), cachePolicy, null) ?? Cache.Get(key, null);
        }
        
        private string BuildCacheKey(Arguments arguments)
        { 
            return String.Concat(_prefix, 
                String.Join("_", 
                  arguments.Select(a => a != null ? a.ToString() : String.Empty)));
        } 
    } 
} 

A few points about this sample:

  • As you can see, PostSharp allows for two types of initialization: compile time and run time. I decided to put the expensive Reflection calls in CompileTimeInitialize to avoid performance degradation during execution. I left the CacheItemPolicy creation in RuntimeInitialize because eventually I want it to be configured through the application configuration file.
  • The MemoryCache class is thread safe and can be used in a multithreaded application. The call to AddOrGetExisting will execute the call to args.Invoke() and will return null if the value was added, hence the double use of the null-coalescing operator (a small demonstration follows this list). This was a surprise, as I was expecting MemoryCache to behave like ConcurrentDictionary, where the call to Invoke would not be executed if the value already existed.
  • BuildCacheKey will "always" be unique, given a class, method, and set of parameter values. If you need to ensure that it is truly 100% unique, you will have to develop your own hashing function; a sketch of one possible approach also follows this list.
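
For the second point above, here is a minimal standalone demonstration (my own, not part of the aspect) of the AddOrGetExisting behaviour described: it returns null when the item is actually added, and the existing value otherwise. The key and values are made up for illustration.

C#
using System;
using System.Runtime.Caching;

class AddOrGetExistingDemo
{
    static void Main()
    {
        var cache = MemoryCache.Default;
        var policy = new CacheItemPolicy { SlidingExpiration = TimeSpan.FromMinutes(15) };

        // No entry exists yet: the value is added and AddOrGetExisting returns null,
        // which is why the aspect falls back to a second Get.
        var first = cache.AddOrGetExisting("demo-key", "expensive result", policy);
        Console.WriteLine(first == null);   // True

        // The entry now exists: the cached value is returned and the new value is ignored.
        var second = cache.AddOrGetExisting("demo-key", "another value", policy);
        Console.WriteLine(second);          // expensive result
    }
}

And for the last point, a minimal sketch of what a custom key-building function could look like; the CacheKeyHelper class and the choice of SHA-256 are my own assumptions, not part of the original aspect:

C#
using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

internal static class CacheKeyHelper
{
    // Hypothetical helper: hashes the joined argument strings so the key has a fixed,
    // cache-friendly length no matter how long the argument values are.
    public static string BuildHashedKey(string prefix, params object[] arguments)
    {
        var raw = String.Join("_",
            arguments.Select(a => a != null ? a.ToString() : String.Empty));

        using (var sha = SHA256.Create())
        {
            var hash = sha.ComputeHash(Encoding.UTF8.GetBytes(raw));
            return String.Concat(prefix, Convert.ToBase64String(hash));
        }
    }
}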

Once you create the Cache attribute, the rest is very simple. Say you have a class whose constructor has a heavy dependency on a file:

C#
var myInstance = new HeavyObject(someFileName);

All you need to do to leverage in-memory caching is create a helper method and decorate it with the [Cache] attribute:

C#
[Cache]
private HeavyObject GetMyHeavyObject(string fileName){
    return new HeavyObject(fileName);
}  

// later in code
var myInstance = GetMyHeavyObject(someFileName);

That's it. PostSharp takes care of weaving all the necessary code into your compiled assembly. You write less code, and the code you do write is more readable and easier to maintain.

Points of Interest

The fact that, starting with .NET 4, we have access to an ASP.NET-like cache outside of ASP.NET opened a whole new area of possible improvements in my distributed system. The next step will be to integrate it with file and database monitors, so cache content does not have to be reloaded on a predefined schedule (as it is in this article).
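
As a starting point for that, here is a minimal sketch (my own assumption, not from the code above) of attaching a HostFileChangeMonitor to a CacheItemPolicy so that a cached entry is evicted as soon as its backing file changes, instead of waiting for a sliding expiration:

C#
using System;
using System.Runtime.Caching;

internal static class FileBackedCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Hypothetical helper: caches a value built from a file and lets the runtime evict it
    // automatically when the file changes. HostFileChangeMonitor requires absolute paths.
    public static object GetOrAdd(string filePath, Func<string, object> factory)
    {
        var existing = Cache.Get(filePath);
        if (existing != null)
        {
            return existing;
        }

        var policy = new CacheItemPolicy();
        policy.ChangeMonitors.Add(new HostFileChangeMonitor(new[] { filePath }));

        var value = factory(filePath);
        return Cache.AddOrGetExisting(filePath, value, policy) ?? value;
    }
}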

History

  • 15th November, 2012: Version 1.0 posted

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)