Introduction
Before I get blasted, let me mention Stoyan Damov's article on object pooling. Although my method was developed independently, it does use several of the same techniques.
This code uses the WeakReference class to perform object pooling. The working assumption is that if the .NET runtime wants to reclaim the memory before the object is needed again, it is cheaper to let it do so and build a new object later than to hold the memory hostage. That assumption only holds for the scenario this class was designed for: applications that need large numbers of expensive objects, fast, where the only costly resource the objects hold is a large, contiguous block of memory that is expensive to allocate. I cannot really see an application for this in business settings, primarily because so much of ASP.NET is already object-creation happy, but I can foresee many applications in the scientific realm, especially graphics processing and AI. Although I have not been able to use this code in a professional setting, my timings indicate a significant speed improvement over simply creating objects over and over.
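For readers who have not used WeakReference before, the short fragment below (an illustration only, not part of the pool source) shows the behaviour the pool relies on: a weak reference does not keep its target alive, so the garbage collector is free to reclaim a pooled object that nobody is currently using.

using System;

class WeakReferenceDemo {
    // Illustration only: a WeakReference does not keep its target alive.
    static void Demonstrate() {
        WeakReference weak = new WeakReference(new byte[1024 * 1024]);
        Console.WriteLine(weak.IsAlive);   // True - the buffer has not been collected yet

        GC.Collect();
        GC.WaitForPendingFinalizers();

        // After a collection the target may have been reclaimed; Target then returns
        // null (IsAlive is false) and the pool simply creates a fresh object instead.
        Console.WriteLine(weak.IsAlive);   // usually False
    }
}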
I am not going to spend a lot of time itemizing the code in this article; instead, the full source is included as an attachment. Please read the basic algorithm first (below) and then examine the source. I think this particular project is more fun to probe and pick apart without direct guidance.
Using the Code
The basic algorithm of the ObjectPool class is as follows (a short usage sketch follows the full listing below):
- Request a new object via the generic method ObjectPool<T>.GetInstance().GetObjectFromPool(null)
- Loop through the linked list to find an unused object
- Pass the parameters to the object's SetupObject method
- Return the reference
- On dispose, add the object back to the linked list
using System;
using System.Collections.Generic;
using System.Text;

namespace ObjectPool {
    // Contract every pooled type must implement: a re-initialization hook and
    // an event that tells the pool the object is available for reuse.
    public interface IObjectPoolMethods : IDisposable {
        void SetupObject(params object[] setupParameters);
        event EventHandler Disposing;
    }

    public class ObjectPool<T> where T : IObjectPoolMethods, new() {
        private const int MAX_POOL = 2000;

        private static ObjectPool<T> me = new ObjectPool<T>();

        private int mMaxPool = ObjectPool<T>.MAX_POOL;
        private LinkedList<WeakReference> objectPool =
            new LinkedList<WeakReference>();

        // One pool per pooled type (singleton).
        public static ObjectPool<T> GetInstance() {
            return me;
        }

        // Raised by a pooled object when it is disposed: hand it back to the pool.
        private void Object_Disposing(object sender, EventArgs e) {
            lock (this) {
                Add(new WeakReference(sender));
            }
        }

        // Most recently returned objects go to the front; trim the oldest entry
        // once the cap is reached.
        private void Add(WeakReference weak) {
            objectPool.AddFirst(weak);
            if (objectPool.Count >= mMaxPool) {
                objectPool.RemoveLast();
            }
        }

        private void Remove(WeakReference weak) {
            objectPool.Remove(weak);
        }

        private void TypeCheck(WeakReference weak) {
            if (weak.Target != null &&
                weak.Target.GetType() != typeof(T)) {
                throw new ArgumentException(
                    "Target type does not match pool type", "weak");
            }
        }

        public T GetObjectFromPool(params object[] setupParameters) {
            T result = default(T);
#if DISABLE_POOL
            // Pooling switched off: always build a fresh object.
            result = new T();
            result.SetupObject(setupParameters);
            return result;
#else
            lock (this) {
                WeakReference remove = null;
                foreach (WeakReference weak in objectPool) {
                    object o = weak.Target;
                    if (o != null) {
                        // Live target found: take it out of the pool and make sure
                        // its finalizer will run again after this new lease.
                        remove = weak;
                        result = (T)o;
                        GC.ReRegisterForFinalize(result);
                        break;
                    }
                }
                if (remove != null) {
                    objectPool.Remove(remove);
                }
            }
            if (result == null) {
                // Nothing reusable in the pool: create a new object and listen for
                // its Disposing event so it can be recycled later.
                result = new T();
                result.Disposing += new EventHandler(Object_Disposing);
            }
            result.SetupObject(setupParameters);
            return result;
#endif
        }
    }
}
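To give a sense of how the pool is consumed, here is a sketch of a pooled type and a caller. ExpensiveBuffer is a hypothetical class of my own invention, not part of the attached source; it simply illustrates the IObjectPoolMethods contract described in the list above.

using System;

namespace ObjectPool {
    // Hypothetical pooled type: holds a large buffer that is expensive to allocate.
    public class ExpensiveBuffer : IObjectPoolMethods {
        private byte[] buffer;

        public event EventHandler Disposing;

        public ExpensiveBuffer() {
            buffer = new byte[16 * 1024 * 1024];
        }

        // Re-initialize the (possibly recycled) instance for its next use.
        public void SetupObject(params object[] setupParameters) {
            Array.Clear(buffer, 0, buffer.Length);
        }

        // Instead of tearing the object down, hand it back to the pool.
        public void Dispose() {
            if (Disposing != null) {
                Disposing(this, EventArgs.Empty);
            }
        }
    }

    class UsageExample {
        static void Main() {
            // First call creates a new instance; later calls may recycle it.
            ExpensiveBuffer buffer =
                ObjectPool<ExpensiveBuffer>.GetInstance().GetObjectFromPool(null);

            // ... work with the buffer ...

            // Dispose raises Disposing, which adds the instance back to the pool
            // as a WeakReference.
            buffer.Dispose();
        }
    }
}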
The linked list has a fixed maximum size in this code; however, with the immediate Dispose calls used by the testing application provided, I rarely see the total number of objects created exceed the number of threads used. If the runtime has collected an item in the list, the rest of the list is truncated, since the list is ordered by last time used and anything older than a collected entry should have been collected as well. While this isn't strictly correct because of the way the .NET garbage collector works, it is effectively correct.
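That truncation does not appear in the listing above as posted. The fragment below is my sketch of how it could be wired into the search loop inside GetObjectFromPool (replacing the foreach, with result already declared and the lock already held); it is an interpretation of the description, not the attached source.

// Sketch only - not in the listing above.  Because the list is ordered by
// most recent use, a dead reference suggests everything after it is at
// least as stale, so the tail can be dropped in one pass.
LinkedListNode<WeakReference> node = objectPool.First;
while (node != null) {
    object o = node.Value.Target;
    if (o != null) {
        // Live target found: reuse it and take it out of the pool.
        result = (T)o;
        GC.ReRegisterForFinalize(result);
        objectPool.Remove(node);
        break;
    }
    // Dead reference: remove this node and everything after it.
    LinkedListNode<WeakReference> dead = node;
    while (dead != null) {
        LinkedListNode<WeakReference> next = dead.Next;  // capture before Remove resets the links
        objectPool.Remove(dead);
        dead = next;
    }
    break;
}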
Something I have noticed with this algorithm/code is that as the amount of memory each object allocates decreases, the difference between the pooled and non-pooled versions goes away (OK, duh), but as the number of objects required in a given unit of time increases, the non-pooled version runs slower and slower as the OS begins to thrash. The Dispose method does not seem to be called.
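For anyone who wants to reproduce the comparison, a minimal timing harness along the following lines is enough. It reuses the hypothetical ExpensiveBuffer type from the sketch above, and the iteration count is arbitrary; this is not the testing application that ships with the attachment.

using System;
using System.Diagnostics;

namespace ObjectPool {
    class TimingHarness {
        static void CompareTimings() {
            const int iterations = 1000;

            Stopwatch watch = Stopwatch.StartNew();
            // Pooled: most iterations should recycle the same instance.
            for (int i = 0; i < iterations; i++) {
                ExpensiveBuffer b =
                    ObjectPool<ExpensiveBuffer>.GetInstance().GetObjectFromPool(null);
                b.Dispose();
            }
            Console.WriteLine("Pooled:   {0} ms", watch.ElapsedMilliseconds);

            watch = Stopwatch.StartNew();
            // Unpooled: every iteration pays the full allocation cost.
            for (int i = 0; i < iterations; i++) {
                ExpensiveBuffer b = new ExpensiveBuffer();
                b.SetupObject(null);
            }
            Console.WriteLine("Unpooled: {0} ms", watch.ElapsedMilliseconds);
        }
    }
}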
Conclusion
As I said in my brief introduction above, I haven't found a good application for this yet. I might try plugging it into my memory graph function solver algorithm (see my Bridges article for a simple implementation of the solver), but until then it remains just a fun proof of concept. I would be interested in comments describing how similar algorithms have been used in high object count scenarios (as opposed to connection pools, which have low object counts but are resource-heavy).