Introduction
Typically we start off coding a project without thinking about making it thread safe, and in many cases that isn't really necessary. But with the advent of PLINQ
and the Task Parallel Library, there are clear performance advantages to thinking 'thread-safe' from the beginning. Using classes like ConcurrentDictionary
can be a huge time-saver, but we may be handed a collection that needs synchronization and isn't inherently thread safe.
Considerations
When dealing with repeated, possibly time-consuming calculations, it may be better to run the iterations in parallel. For more straightforward arithmetic, it is likely better to stay in a single thread.
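A rough sketch of this trade-off with PLINQ (the `Expensive` work item here is hypothetical; parallelism only pays off when each iteration costs more than the scheduling overhead):

```csharp
using System;
using System.Linq;

class ParallelExample
{
    // Simulated time-consuming calculation (hypothetical work item).
    static long Expensive(int n)
    {
        long sum = 0;
        for (int i = 1; i <= 100000; i++)
            sum += (long)n * i;
        return sum;
    }

    static void Main()
    {
        // Parallel: worthwhile when each iteration is costly.
        long parallelTotal = Enumerable.Range(1, 100)
            .AsParallel()
            .Select(Expensive)
            .Sum();

        // Sequential: better for cheap, straightforward arithmetic,
        // where the overhead of partitioning would dominate.
        long sequentialTotal = Enumerable.Range(1, 100)
            .Select(Expensive)
            .Sum();

        Console.WriteLine(parallelTotal == sequentialTotal); // True
    }
}
```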
LazyInitializer
Happily, there are a few existing classes that help with thread safety. When I can, I initialize my accessor properties using LazyInitializer.EnsureInitialized(ref item, () => { ... }). But this has a few serious limitations:
- It does not allow returning null.
- It does not inherently allow for synchronizing other properties.
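A minimal sketch of the pattern, with a hypothetical lazily-built string (the first limitation is visible here: a factory that returned null would not count as initialized):

```csharp
using System;
using System.Threading;

class LazyInitExample
{
    static string _connectionString; // hypothetical lazily-built value

    // Threads race to initialize, but only one produced value is
    // published; every caller then sees that same cached value.
    // Note: the factory must not return null.
    static string ConnectionString =>
        LazyInitializer.EnsureInitialized(ref _connectionString, () => "Server=local");

    static void Main()
    {
        Console.WriteLine(ConnectionString); // Server=local
    }
}
```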
Thread Safe Patterns
After some deeper research, I refined my best practices for using locks and Monitor.TryEnter.
I have long known the double-check locking (DCL) pattern: check for a condition;
if the condition exists, acquire (or attempt) a lock, then check the condition again before executing. But until lambda expressions arrived, I never settled on a way to build
utility functions to handle it.
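With lambdas, such a utility can be sketched in a few lines. This is an illustrative stand-in, not the article's actual ThreadSafety implementation:

```csharp
using System;

static class LockHelper
{
    // Lambda-based double-check locking: test the condition once
    // without the lock, then again under it, before running the action.
    public static void LockConditional(object syncRoot, Func<bool> condition, Action action)
    {
        if (!condition()) return;   // first (unsynchronized) check
        lock (syncRoot)
        {
            if (condition())        // second check, now synchronized
                action();
        }
    }
}

class Program
{
    static readonly object _lock = new object();
    static string _value;

    static void Main()
    {
        LockHelper.LockConditional(_lock, () => _value == null, () => _value = "initialized");
        Console.WriteLine(_value); // initialized
    }
}
```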
Locking Every Property in a Class
If you have a class that can be accessed by multiple threads, you will need to apply thread safety to all the public properties and likely some of the private ones. I got really
tired of writing:
readonly object _propA_lock = new object();
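One way to avoid a dedicated lock field per property is a keyed lock registry. The sketch below is illustrative (the LockRegistry and Widget names are mine, not the article's):

```csharp
using System;
using System.Collections.Concurrent;

// Hands out one lock object per key, creating it on first use,
// so a class no longer needs a "_propA_lock" field per property.
class LockRegistry
{
    readonly ConcurrentDictionary<string, object> _locks =
        new ConcurrentDictionary<string, object>();

    public object this[string key] => _locks.GetOrAdd(key, _ => new object());
}

class Widget
{
    readonly LockRegistry _sync = new LockRegistry();
    string _propA;

    public string PropA
    {
        get { lock (_sync["PropA"]) return _propA; }
        set { lock (_sync["PropA"]) _propA = value; }
    }
}

class Program
{
    static void Main()
    {
        var w = new Widget();
        w.PropA = "hello";
        Console.WriteLine(w.PropA); // hello
    }
}
```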
Optimizing Thread Safe Disk I/O (ReaderWriterLockSlim)
I then wrestled with maximizing file read/write performance. This was much more difficult than I expected, and the result is not an absolute solution, but it has the potential
to seriously improve throughput. You still need to catch and retry I/O exceptions when a file is accessed from outside your application. I also had a difficult
time avoiding I/O exceptions when I cleaned up the ReaderWriterLockSlim objects too aggressively...
I've provided a ReadWriteHelper class which acts as a key-based locking mechanism using ReaderWriterLockSlim objects. This was my solution for optimizing re-use of each lock object: by allowing each lock object to persist for a period of time, it enables re-use and disposes them only when it makes sense.
ReadWriteHelper inherits from DeferredCleanupBase, which uses a Timer to delay cleanup until needed.
And lastly, I've provided extensions to ReaderWriterLockSlim to help foolproof the read/write code.
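Such extensions might look like the following sketch (an assumption about the general shape, not the article's exact API); the try/finally guarantees the lock is released even if the delegate throws:

```csharp
using System;
using System.Threading;

// Illustrative extension methods that keep Enter/Exit calls paired.
static class ReaderWriterLockSlimExtensions
{
    public static T Read<T>(this ReaderWriterLockSlim target, Func<T> query)
    {
        target.EnterReadLock();
        try { return query(); }
        finally { target.ExitReadLock(); }
    }

    public static void Write(this ReaderWriterLockSlim target, Action action)
    {
        target.EnterWriteLock();
        try { action(); }
        finally { target.ExitWriteLock(); }
    }
}

class Program
{
    static void Main()
    {
        var sync = new ReaderWriterLockSlim();
        int value = 0;
        sync.Write(() => value = 42);
        Console.WriteLine(sync.Read(() => value)); // 42
    }
}
```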
Using the Code
ThreadSafety exposes a set of static methods that act like the lock keyword, but with some extra functionality, including conditional timeout locking.
It also allows for more complex conditions than the simple null check that LazyInitializer provides. (See "Important Synchronization Notes" below on proper
conditional implementation.)
object value;
if (!dictionary.TryGetValue(key, out value))
{
    lock (dictionary)
    {
        if (!dictionary.TryGetValue(key, out value))
        {
            dictionary.Add(key, value = newValue);
        }
    }
}
Reduces down to:
object value;
ThreadSafety.LockConditional(dictionary,
    () => !dictionary.TryGetValue(key, out value),
    () => dictionary.Add(key, value = newValue));
Or the more performance optimized read/write version:
object value;
ThreadSafety.SynchronizeReadWrite(dictionary, key,
    () => !dictionary.TryGetValue(key, out value),
    () => dictionary.Add(key, value = newValue),
    5000, false);
Note: The optimized version above (which requires a key) has been tested to perform as well with Dictionary&lt;TKey,TValue&gt; as ConcurrentDictionary&lt;TKey,TValue&gt;'s built-in GetOrAdd method.
...
ThreadSafety.Helper can also be instantiated, in which case it automatically and safely creates and reuses locks for you. Here is how I use it as an instance:
readonly ThreadSafety.Helper SyncHelper = new ThreadSafety.Helper();

void Example()
{
    SyncHelper.Lock("[keyName]", () => { /* synchronized code */ });

    lock (SyncHelper["[keyName]"]) { /* synchronized code */ }

    SyncHelper.LockConditional("[keyName]",
        () => property == null && !foo,
        () => { /* synchronized code */ });
}
As long as you are not trying to simultaneously read from or write to a stream object more than once, optimizing file access is easy:
ThreadSafety.File.Read(filePath, () => {
    // open a stream and read here
});
ThreadSafety.File.Write(filePath, () => {
    // open a stream and write here
});
I typically use these methods to wait for file access and then initialize a stream within the Action.
Keep in mind you will need to apply a while/try/catch/sleep retry strategy for file access within your delegate in case something else accesses the file outside your application. I could have built the exception handling in, but the diversity of possible implementations
is too vast to make it robust within the utility... In some cases, an IOException may occur while writing a file and you need to handle that error
and do complex cleanup before continuing; you may not want to retry at all. I've included a ThreadSafety.File.GetFileStreamForRead
method which can assist in typical usage.
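A minimal retry sketch for that while/try/catch/sleep strategy (the attempt count and sleep interval are illustrative; tune them for your workload):

```csharp
using System;
using System.IO;
using System.Threading;

class FileRetryExample
{
    // Keeps retrying when the file is held by another process,
    // rethrowing the IOException after the final attempt.
    public static FileStream OpenForReadWithRetry(string path, int maxAttempts = 5)
    {
        int attempt = 0;
        while (true)
        {
            try
            {
                return new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read);
            }
            catch (IOException)
            {
                if (++attempt >= maxAttempts) throw;
                Thread.Sleep(100); // back off before retrying
            }
        }
    }

    static void Main()
    {
        string path = Path.GetTempFileName();
        File.WriteAllText(path, "hello");
        using (var stream = OpenForReadWithRetry(path))
        using (var reader = new StreamReader(stream))
            Console.WriteLine(reader.ReadToEnd()); // hello
        File.Delete(path);
    }
}
```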
Points of Interest
I left the LockCleanupDelay open for you to experiment with and tune. Delaying cleanup seemed to be the happy medium between avoiding file access collisions
and worrying about excessive locking. You can set it to zero to execute cleanup after every run, but in my tests this is prone to IOExceptions.
Important Synchronization Notes
With or without the ThreadSafeHelper utility, when implementing a conditional lock (which uses a double-check locking pattern), be certain not to change
the state the condition tests until the end of your code block; doing so prematurely negates the condition before the synchronized code is finished.
object result;
ThreadSafety.LockConditional(dictionary,
    () => !dictionary.TryGetValue(key, out result),
    () =>
    {
        object temp = newValue; // prepare the value first...
        dictionary.Add(key, result = temp); // ...then add it at the very end
    });

ThreadSafety.LockConditional("[keyName]",
    () => _value == null,
    () =>
    {
        object result = newValue; // prepare the value first...
        _value = result; // ...then assign it at the very end
    });
In the above examples, if for any reason you stored the value anywhere but at the end of the delegate, another thread could observe that value before it is ready.
Thread.MemoryBarrier() has been suggested to avoid processor reordering issues. From MSDN: "It synchronizes memory. In effect, flushes the contents of cache memory to main memory, for the processor executing the current thread." However, the lock keyword (Monitor.Enter/Exit) implicitly creates a full memory fence, so Thread.MemoryBarrier() is not needed here.
Any questions, comments, criticisms, suggestions, and improvements are very welcome!