
Ever Had to Tackle An OutOfMemory Exception?

29 Jan 2010
Finding and fixing troublesome code can be a time-consuming and frustrating task. Florian Standhartinger recently encountered such an experience and recounts how he managed to use a memory profiling tool to get the problem fixed.

This article is in the Product Showcase section for our sponsors at CodeProject. These articles are intended to provide you with information on products and services that we consider useful and of value to developers.

Preface for Quick Readers

I know people don't often have time to read complete articles, so you can get a very quick overview of what I'm saying here if you just scan down and read only the headers and the red comments put directly into the images. Together with the Conclusion at the very bottom, this should give you a rough-but-intelligible outline of what I cover in the complete article. However, I naturally extend a warm welcome for you to read the whole article – I promise it isn't too long.

Introduction

I think everyone who has ever had an OutOfMemory problem in .NET will know that finding and fixing troublesome code can be a time-consuming and frustrating task. I myself have had this not-so-fun experience quite recently, and I had to teach myself how to use memory profiling tools to get the problem fixed. What I'd like to do here is share what I've learned about solving memory-related problems in .NET applications by describing a real-life case I solved with the help of ANTS Memory Profiler.

image001.jpg
Fig.1 – The ANTS Memory Profiler UI

The problem occurred in the development version of our product, a successful Advertisement Management System that's fairly widespread in Germany's advertising industry. During some internal tests a few weeks ago, we noticed that the import function in our latest build led to OutOfMemory Exceptions when used with larger data sources. Originally slated as a half-day bug fix, the solution to this problem eventually turned out to take about one week (but at least it taught us a lot about memory usage in .NET). Before we continue, as a very brief introduction to .NET memory management, just bear in mind two simple rules:

  1. Every object that is referenced by any other object will stay in memory, and occupy some space in your RAM.
  2. To get rid of the object and win back the occupied RAM, you need to make sure that no one else references your object. The most common way to achieve this is to set the field or property holding a reference to your object to "null" (see the sketch below).
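
To make rule 2 concrete, here is a minimal sketch; the Person and PersonHolder classes are made up purely for illustration:

public class Person
{
    public string Name { get; set; }
}

public class PersonHolder
{
    //as long as this field points at the Person, the GC cannot reclaim it
    private Person m_lastPerson;

    public void LoadPerson()
    {
        m_lastPerson = new Person { Name = "Florian" };
    }

    public void ReleasePerson()
    {
        //dropping the last reference makes the Person eligible for collection
        m_lastPerson = null;
    }
}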

About the Technologies We Use in our Project

To understand this example, it may be necessary to know a little bit about the technologies we use, so I'll try to keep this as short and simple as possible:

1 – The "CAB" Architecture Guideline

The whole project is architecturally built around the Composite Application Block, which is an architecture framework recommended by the Microsoft Patterns and Practices Team. That basically means that there is one WinForms application (called the "Shell") which hosts all the parts of our application as plug-ins. Because of the modular structure of the Composite Application Block, it's quite easy for us to separate logic from GUI code, and reuse components.

2 – The "XPO" Object Relational Mapper

To save us from having to write all the tiny SQL statements needed to store and retrieve data to and from the database, we use the XPO O/R-Mapper from DevExpress. For a quick example of database mapping with XPO, say we need to retrieve all the customers that are stored in our DB; we simply call XPO and ask it to create a list of Customers for us.

var uow = new UnitOfWork();
//pass the UnitOfWork so the collection loads its objects through that session
var customers = new XPCollection<Customer>(uow);

XPO then goes and creates a SQL statement to query the Customers table from the DB, and fills its internal cache (which is stored in the UnitOfWork object) with Customer objects. It then hands those objects over to us to do whatever it is we wanted them for. Rather nicely, the same thing is true the other way around, too:

var uow = new UnitOfWork();
var newCustomer = new Customer(uow) { Name = "Florian" };
uow.CommitChanges();

New rows are created in the database by calling the constructor of the Customer database mapping class, and after telling the UnitOfWork to commit the changes, the newly created customer will be sent to the database. What is worth noting is that the objects which XPO retrieves from the DB, or stores in memory for a later commit to the DB, are all kept in the UnitOfWork. As a result, the UnitOfWork will be quite "heavy" in terms of memory usage, because it holds references to quite a lot of database objects!

3 – The Import Engine

The import function that was causing memory problems consists of a shared Import Engine and several different Import Plugins. The division of labour is pretty unsurprising: the Import Engine handles things like matching objects with the existing database to avoid duplicates, and managing updates or inserts to our database, while the Import Plugins provide the logic to read data from various sources and transform it into objects that XPO can map into our database.
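
To make the division of labour concrete, here is a minimal sketch of what the plugin contract might look like. The interface name IImportPlugin is my own invention, but GetRecords() matches the call you'll see in the engine's main loop later:

using System.Collections.Generic;

public interface IImportPlugin
{
    //hands the engine the source data one record at a time, already
    //transformed into objects that can be mapped into the database
    IEnumerable<object> GetRecords();
}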

image002.jpg
Fig.2 – A model of the flow of data from various sources, into our Application Database

The Problem

As you can imagine, the Import function that was causing OutOfMemory exceptions created a lot of objects, and told XPO to store them in the database. This is why we decided to clean up the UnitOfWork data container from time to time during the import process. By that I mean: let's say that every time 100 objects have been imported, we throw away the UnitOfWork, with all the database objects it holds, and replace it with a new (empty) one.

image003.jpg
Fig.3 – The Persistent Problem

Unfortunately, we had to accept that even after running an import with just a couple of hundred objects, the available RAM was still all filled up, and we still got an OutOfMemory Exception.

Narrowing Down the Problem Space

Here are the steps that were necessary to solve the problem:

To get rid of everything that could distract me from the real problem, I tried to simplify the application that was to be profiled, and run only the code that was directly involved with the immediate problem. So, I extracted the import function from our main application and called the same code from a console application. To further lower the number of lines of code that could harbour the problem, I replaced the real-life import plugin with a simple dummy plugin that instantiated randomly generated objects instead.
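
A dummy plugin of that kind could look roughly like this; it is a sketch built on the hypothetical IImportPlugin interface from above, and ImportRecord is just a made-up stand-in for our real record types:

using System;
using System.Collections.Generic;

public class ImportRecord
{
    public string Name { get; set; }
}

public class DummyImportPlugin : IImportPlugin
{
    public IEnumerable<object> GetRecords()
    {
        var random = new Random();
        //generate a large stream of random records so the import engine
        //can be exercised without any real data source attached
        for (int i = 0; i < 100000; i++)
        {
            yield return new ImportRecord { Name = "Record" + random.Next() };
        }
    }
}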

image004.jpg
Fig.4 – Focusing down on the problem space

Running this simple program and seeing that it still caused OutOfMemory exceptions confirmed that the problem really had to be somewhere in the import engine. By the way, since profiling will always slow down an application by an appreciable amount, having the profiled code reduced to a minimum also made the wait for results a lot more bearable.

Where Can We Have Clear Expectations about Memory Usage, and Take Memory Snapshots?

In this case, we have a central loop in the import engine that iterates over all the objects from the data source while handling the import. In this loop, from time to time, we do our UnitOfWork replacements to keep the memory weight down, and I found the line right after that replacement to be a good position for taking memory snapshots with the profiler. This was a spot where I could have a clear expectation of what the memory occupation should look like, and any differences in memory occupation between the different iterations of the loop could be a trace that led me to the memory leak.

So, I inserted a Message Box to give me a comfortable reminder to do my profiling snapshot, and the code looked kind of like this (simplified):

int i = 0;
foreach (var objectToImport in m_importPlugin.GetRecords())
{
   //... do stuff like matching with existing records to avoid duplicate records and
   //copy records into the database
   DoMatching();
   CopyToDatabase(objectToImport);

   //replace the UnitOfWork every 100 objects to get rid of the objects we don't
   //want to be referenced any more
   if (++i % 100 == 0)
   {
     ReplaceUnitOfWorks();
     //*********** EVERY TIME WE HIT THE FOLLOWING LINE WE SHOULD HAVE THE SAME
     //MEMORY OCCUPATION
     MessageBox.Show("Now take memory profiler snapshot");
   }
}

Running the Profiler and Taking Snapshots

I used the ANTS Memory Profiler for finding the memory leak, mainly because at the moment it seems to be the only memory profiler that can display the references keeping an object alive as a visual tree. Compared to most of the other well-known profilers, I also found it to be the only one that enabled me to profile a fairly big program in a tolerable amount of time and with acceptable additional memory consumption. ANTS Memory Profiler comes with a handy Visual Studio integration so you can start it right from your IDE.

image005.jpg
Fig.5 – ANTS Memory Profiler's handy Visual Studio menu

While running this stripped-down code, I waited until my alert popped up, took a memory snapshot, cleared the alert, and waited for the next one so I had two memory snapshots to compare.

Compare Snapshots by Inspecting the Data-Type Whose Memory Usage Grew Most

As the header suggests, I started my search for the problem by looking at the data-type with the biggest growth in memory consumption.

image006.jpg
Fig.6 – The likely problem

A mysterious class called RBTree<K>+Node<int>[] seemed to have grown most in memory usage since my last snapshot, and as I wanted to know more about the instances of this type that occupied my precious memory, I clicked on the Instance List button.

image007.jpg
Fig.7 – The mysterious RBTree<K>+Node<int>[]

Given the position in the code where I was triggering the first and the second snapshots, I was quite sure that I didn't want anything growing much in memory consumption between those snapshots.

I assumed that even if it was OK that there were lots of RBTree<K>+Node<int>[] objects temporarily living in my RAM, they should at least eventually go away as my main loop iterated towards the end of the "objects to import" enumerable. If they weren't going away until the end of the loop, and were just growing more and more, it would clearly mean that an OutOfMemory exception would occur in a really long loop sooner or later.

As a result, I was particularly interested in the objects that already existed when I took my first snapshot, and were still in memory for the second snapshot. To see who was still keeping these tricky objects alive, I inspected one of them in the Object Retention Graph.

image008.jpg
Fig.8 – Finding out what’s keeping my objects alive.

The Object Retention Graph (Fig. 9) shows who keeps an object referenced, and thus "alive": your object sits at the very bottom, and stacked on top of it you can see everything that references it.

In our case we can see from the graph that there is a private static field of type Dictionary<Session, ProviderType> (Note: Session is the base class of UnitOfWork). That dictionary was keeping a UnitOfWork object alive (remember: we were trying to get rid of old UnitOfWork objects from time to time by replacing them with fresh ones), which then kept RBTree<K>+Node<int>[] alive via a couple of references.
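
In code terms, the culprit boils down to a pattern like the following sketch; the class name ExtKey and the field match the fix shown further below:

public class ExtKey
{
    //a static field lives for the whole lifetime of the AppDomain, so every
    //Session (and thus every UnitOfWork, which derives from Session) used as
    //a key here is kept alive until the dictionary is cleared or replaced
    private static Dictionary<Session, ProviderType> m_frozenProviderContexts =
        new Dictionary<Session, ProviderType>();
}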

Find Unwanted References that Keep Objects In Memory

image009.jpg
Fig.9 – Found it!

I actually introduced that Dictionary field quite a while ago for caching purposes during some performance optimizations. Obviously, I forgot to clear the Dictionary when replacing the UnitOfWorks, and thus created a memory leak that kept all my UnitOfWork objects alive while I expected them to be replaced. To test my theory, I switched to the Class List view in ANTS Memory Profiler and filtered by the UnitOfWork type.

image010.jpg
Fig.10 – My suspicions are confirmed

As expected, there were many more UnitOfWork objects alive than there should have been.

Fix the Code

All I needed to do was to introduce some code into the ReplaceUnitOfWorks() method to reset the Dictionary that was keeping my UnitOfWork objects alive.

public void ReplaceUnitOfWorks()
{
    //HERE IS THE FIX:
    ExtKey.ClearFrozenContextDictionary();

    //HERE IS THE CODE THAT HAS ALREADY BEEN THERE:
    this.UnitOfWork = new UnitOfWork();
}

...

//Somewhere else:
public class ExtKey
{
    public static void ClearFrozenContextDictionary()
    {
        //replacing the static dictionary (see the sketch above) drops the
        //references to all the Session/UnitOfWork objects it was keeping alive
        m_frozenProviderContexts = new Dictionary<Session, ProviderType>();
    }
}

Run Profiler Again to Ensure the Problem is Solved

After building the fixed code and repeating the profiling procedure, I found the number of surviving UnitOfWork objects in the second memory snapshot to be reduced to just one instance. Also, the memory consumption over time (which can be seen in the graph in the top area of the ANTS Memory Profiler) looked much more stable compared with the graph from my earlier profiling sessions.

image011.jpg
Fig.11 – Victory!

Eureka! The memory problem was solved!

Conclusion

If your .NET application has more than a few thousand lines of code, finding a memory leak can be a really hard task if you try it without the help of a proper tool. In my opinion, the chances are pretty high that you'll end up fixing methods that aren't really the problem, and introducing workarounds that nobody would ever have needed.

Tips and Good Practice

Here are a few tips and best practices that I picked up while working on this memory problem. All of them are just my own personal opinion, so feel free to write comments if you disagree with any of them:

  • Let "Write solid, readable and beautiful code" be an important directive while coding; it is certainly more important than speculative micro-optimizations for performance or memory. Optimized code tends to be less readable, and in most cases unreadable code is a much worse issue than any performance or memory problems you might have. If your beautiful code behaves well enough within your customers' requirements (in terms of speed and memory consumption), there is no need to change it. If it is not behaving, then it's time to start your profiler.
  • When profiling, always let the profiler guide you to the biggest problem and solve that one first. If there are only minor problems left, and the customer can live with them, there may not be a need to invest any more work.
  • Try to narrow down the problem space by only profiling the exact module which causes trouble. If possible, turn off any unrelated stuff such as user interfaces, logging mechanisms and so on.
  • Don't guess what's wrong and change your code based on these assumptions; let the profiler tell you what's wrong. Your guesses (at least mine) often turn out to lead you down the wrong track, and consume a lot of time for no benefit.
  • Try to identify good places in your code for taking memory snapshots. If your OutOfMemory exception is thrown within a long running loop, the bottom lines within this loop are often good places for taking snapshots (because the memory consumption needs to stay more or less consistent during the loop's iterations).
  • Use the WeakReference class whenever you identify a place in your code where a reference to an object is only optional (or rather, unimportant). For example, if you want to keep a cache entry only for as long as the object is alive elsewhere, but don't want the cache itself to keep the object alive (see the first sketch after this list).
  • Be careful when using static fields and properties. All objects referenced from static fields stay alive for the whole lifetime of your application (more precisely, the lifetime of your AppDomain), or until the reference is removed (for example, set to null).
  • Be careful with .NET events. An event is actually only syntactic sugar for a delegate, and a delegate is, in turn, a kind of syntactic sugar for the combination of a reference to an instance and a callback method on that instance.
    As a result, every object that subscribes to an event will be kept alive by the object that publishes that event, until either the publishing object itself becomes unreferenced or the handler is unsubscribed (see the second sketch after this list).
  • Before profiling an application, try to turn off as much of your multithreading stuff as possible, and make the calls sequential instead. Multithreading may behave in an apparently nondeterministic way, and create different results each time you profile the application.
  • If you are using CAB or any other dependency injection (IoC) container framework, make sure you unregister your objects from the container once they're no longer needed. In CAB this means calling WorkItem.Items.RemoveObject(obj), WorkItem.WorkItems.Remove(...) and so on.
  • Be aware that calling the Dispose() method on an object does not necessarily clean the object out of your memory. If there is still a reference to your disposed object, it will still consume memory.
  • Whenever possible, stay away from doing too much memory management yourself in a .NET environment. Calling the garbage collector yourself with GC.Collect() is almost always a bad idea, and tends to keep your memory occupied for longer than the automatic GC mechanisms would. Likewise, the use of finalizers/destructors will, in most cases, just make your objects stay in memory longer. Finalizers should only be employed if you use unmanaged resources that need to be freed explicitly.
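
To illustrate the WeakReference tip, here is a minimal sketch of a cache that does not keep its entries alive; the CustomerCache class is made up for this example, and Customer stands in for any reference type:

using System;
using System.Collections.Generic;

public class CustomerCache
{
    private readonly Dictionary<int, WeakReference> m_cache =
        new Dictionary<int, WeakReference>();

    public void Add(int id, Customer customer)
    {
        //a WeakReference does not count as a "real" reference, so the GC
        //is still free to collect the Customer at any time
        m_cache[id] = new WeakReference(customer);
    }

    public Customer TryGet(int id)
    {
        WeakReference reference;
        if (m_cache.TryGetValue(id, out reference))
        {
            //Target returns null once the GC has collected the object
            return reference.Target as Customer;
        }
        return null;
    }
}

And to illustrate the event tip, the Subscriber below is kept alive by the Publisher's delegate list until the handler is removed; again, both class names are invented for the example:

using System;

public class Publisher
{
    public event EventHandler SomethingHappened;

    public void RaiseSomethingHappened()
    {
        var handler = SomethingHappened;
        if (handler != null) handler(this, EventArgs.Empty);
    }
}

public class Subscriber
{
    public void Attach(Publisher publisher)
    {
        //from now on, the Publisher holds a reference to this Subscriber
        publisher.SomethingHappened += OnSomethingHappened;
    }

    public void Detach(Publisher publisher)
    {
        //without this, the Subscriber stays alive for as long as the Publisher does
        publisher.SomethingHappened -= OnSomethingHappened;
    }

    private void OnSomethingHappened(object sender, EventArgs e) { }
}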

I hope I've helped someone out there by describing this special case of memory profiling a .NET application. Please feel free to post comments, criticisms and ideas on the topic.
