"Knowledge is limited. Imagination encircles the world." A.E.
Introduction
This is a library. I designed a new class of mutex, mutable_recursive_mutex. It is an extension of boost::upgrade_mutex and its associated locks. The code is fully functional and tested on Windows 7.

While I was developing it, I noticed a big performance increase in my software: it became about 10 times faster than with a standard mutex. This is easily explained: I have 12 threads using the same shared resource, and they spend almost 90% of their time just reading data. With standard locking, every access would have been serialized. The key point is that all these threads are only occasional writers.
Let me present my solution. By "mutable" I mean that the access itself is mutable: it can be promoted and demoted. For instance, a thread can begin with shared access, change it to exclusive access, go back to shared, and finally release the mutex.
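For example, here is a minimal sketch of such a promote/demote cycle, using the mutable_shared_lock and mutable_unique_lock classes presented later in this article in nested scopes (the function name is only illustrative):

#include "mutable_recursive_mutex.hpp"

mutable_recursive_mutex my_mutex;

void reader_that_sometimes_writes()
{
    mutable_shared_lock read_lock(my_mutex);        // start with shared (read) access
    // ... read the shared data ...
    {
        mutable_unique_lock write_lock(my_mutex);   // promote to exclusive (write) access
        // ... modify the shared data ...
    }                                               // back to shared access
    // ... keep reading ...
}                                                   // the mutex is fully released for this thread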
Features:
- 100% based on the Boost library: it is an extension of boost::thread.
- No system-specific functions are called.
- Fully recursive.
- Possible to suspend/resume locking.
- Fair locking.
- Optional: define a main thread; from a functional point of view this makes it uninterruptible.
- All transitions are allowed when using the associated locks: mutable_xxxxx_lock.
- Should work on all platforms supported by the Boost library.

Note: I did not implement all the functions, only the ones I needed. But if you like this, feel free to extend it.
Locks: mutable_shared_lock, mutable_unique_lock, mutable_suspend_lock (presented below).
Background
Readers must be familiar with the Boost Thread library and with C++. This article is aimed at advanced users: I won't explain threading, nor why locking a shared resource is mandatory. There are very good tutorials that cover that.
Using the code
It's a standard recursive mutex with some extra refinements, and it works as follows:
#include "mutable_recursive_mutex.hpp"

mutable_recursive_mutex my_mutex;

int _tmain(int argc, _TCHAR* argv[])
{
    my_mutex.set_main_thread_id();              // optional: declare this thread as the "main thread"
    mutable_shared_lock read_lock(my_mutex);    // take shared (read) access
    mutable_unique_lock write_lock(my_mutex);   // promote to exclusive (write) access
    return 0;
}
Fair locking
This just means that the thread that has waited the longest gets the ownership.
Main Thread
If used, this thread bypasses the fair locking and always takes the ownership. It has been designed to let you process the most important data in priority.

Moreover, the system guarantees the integrity of the data when moving from shared to unique (and back), except when suspend/resume is called.
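As a rough sketch (the function name is only illustrative), the thread that handles the most important data can claim and later give up that role with the two calls used in the samples of this article:

void priority_thread()
{
    my_mutex.set_main_thread_id();     // this thread now bypasses the fair locking
    // ... process the high-priority data, taking mutable_xxxxx_lock locks as usual ...
    my_mutex.reset_main_thread_id();   // give the role up when it is no longer needed
}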
Mutable_suspend_lock
The purpose of this lock is to release the mutex regardless of its state and locking recursion level. It is mainly designed to make it possible to stop the application without risking a dead-lock.

Just like the other states, the "suspended" state is recursive. When entering the "suspended" state, all the other locking/unlocking functions are deactivated. The only calls that still have an effect are:
{
    mutable_suspend_lock lock(my_mutex);   // RAII form: the mutex is resumed at the end of the scope
}
my_mutex.suspend();                        // explicit form: resumed by calling my_mutex.resume()
All subsequent calls to:

my_mutex.lock()
my_mutex.lock_shared()
my_mutex.unlock()
my_mutex.unlock_shared()

are ignored. The recursion stack is not modified until the end of the scope and/or a call to my_mutex.resume().
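Here is a small sketch of what this means in practice, assuming my_mutex is the global mutex from the earlier sample (the function name is only illustrative):

void stop_application()
{
    mutable_unique_lock write_lock(my_mutex);    // exclusive access
    {
        mutable_suspend_lock suspend(my_mutex);  // the mutex is released, whatever the state and recursion level
        my_mutex.lock();                         // ignored while suspended
        my_mutex.unlock_shared();                // ignored while suspended
        // ... stop the worker threads without risking a dead-lock ...
    }                                            // end of scope: same effect as my_mutex.resume()
}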
How it works
I define three thread states plus one meta-state, lst_suspended:
typedef enum
{
    lst_none,       // the thread has no access to the mutex
    lst_shared,     // shared (read) access
    lst_unique,     // exclusive (write) access
    lst_suspended   // meta-state: the mutex is temporarily released
} lock_state_t;
These states are stored in a std::deque:
typedef std::deque<lock_state_t> thread_recursion_stack_t;
Each thread has its own stack:
boost::thread_specific_ptr<thread_recursion_stack_t> m_thread_states;
When a stack is created for a thread, the first element pushed onto it is always lst_none, which means the thread has no access to the mutex. In that respect it is similar to lst_suspended, but lst_none is used once and only once, unlike lst_suspended.
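As an illustration, here is how I read the evolution of a thread's recursion stack; the exact contents are an implementation detail, so take the comments below as an assumption rather than a specification:

{
    // stack: [lst_none]
    mutable_shared_lock read_lock(my_mutex);
    // stack: [lst_none, lst_shared]
    {
        mutable_unique_lock write_lock(my_mutex);
        // stack: [lst_none, lst_shared, lst_unique]
    }
    // stack: [lst_none, lst_shared]
}
// stack: [lst_none]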
This is the heart of this work. When a thread wants to gain exclusive access, the promote function is called internally.

It first tries to take the ownership. If that is refused (by the underlying upgrade_mutex), it waits on a condition, m_promote_condition. The condition is signaled each time a thread unlocks; all the waiting threads are then woken up and try again to take the ownership.

Let's look at it now:
void mutable_recursive_mutex::promote(thread_recursion_stack_t * stack)
{
    boost::thread::id thread_id = boost::this_thread::get_id();
    boost::mutex helper_mutex;
    boost::unique_lock<boost::mutex> helper_lock(helper_mutex);
    bool bnotify = (stack->front() == lst_shared);           // do we currently hold a shared access?
    boost::unique_lock<boost::mutex> promote_lock(m_promote_mutex);
    if (m_is_main_thread_id_assigned && (thread_id == m_main_thread_id))
        m_waiting_promote_threads.push_front(thread_id);     // the main thread jumps the queue
    else
        m_waiting_promote_threads.push_back(thread_id);      // other threads wait their turn (fair locking)
    if (bnotify)
        m_mutex.unlock_shared();                             // release the shared access before promoting
    while (true)
    {
        if (m_waiting_promote_threads.front() == thread_id)  // is it our turn?
        {
            bnotify = false;
            if (m_mutex.try_lock())
            {
                m_waiting_promote_threads.pop_front();
                break;                                       // exclusive access acquired
            }
        }
        if (bnotify)
            m_promote_condition.notify_all();                // we released a shared access: signal the others
        bnotify = false;
        promote_lock.unlock();
        m_promote_condition.wait(helper_lock);               // wait until another thread unlocks
        promote_lock.lock();
    }
}
The only purpose of helper_lock and helper_mutex is to make the call m_promote_condition.wait(helper_lock); work. They do not protect anything at all.
if (m_is_main_thread_id_assigned && (thread_id == m_main_thread_id))
    m_waiting_promote_threads.push_front(thread_id);
else
    m_waiting_promote_threads.push_back(thread_id);
This is the fair-locking subsystem, including the main-thread handling.

If you look at the whole code, you will see that the promote function is called from the following states: lst_none and lst_shared.
When the current state is lst_shared, there is no guarantee of getting the lock immediately, so the shared access is released. And the rule is: whenever an access is released, the condition must be signaled.

=> For speed reasons, the condition is signaled as late as possible.
=> If I had left the work like this, there would have been a deadlock risk. Let me explain where the problem is and how I solved it. The issue is only about the two following lines:

promote_lock.unlock();
m_promote_condition.wait(helper_lock);

These two lines may lead to a deadlock because they are not executed atomically. Moreover, most of the Boost library runs in user space and calls OS-specific functions to do the atomic work. So, unless you want to develop at kernel level, the problem has no solution without external help.
I simply solved it by using a thread that signals the condition periodically: this is the external help. Look at unlock_thread:
void mutable_recursive_mutex::unlock_thread()
{
    boost::chrono::milliseconds ms(100);
    try
    {
        while (true)
        {
            boost::this_thread::sleep_for(ms);
            boost::unique_lock<boost::mutex> promote_lock(m_promote_mutex);
            if (!m_waiting_promote_threads.empty())
                m_promote_condition.notify_all();   // periodically wake any thread stuck between unlock() and wait()
        }
    }
    catch (boost::thread_interrupted)
    {
    }
}
How to
If you plan to use fair locking and/or a "main thread", please define FAIR_LOCKING.
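For instance, assuming FAIR_LOCKING is a plain preprocessor symbol checked by the header (it can just as well be defined in the project settings):

#define FAIR_LOCKING
#include "mutable_recursive_mutex.hpp"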
1) Anyone who wants to use these classes must be able to answer the following questions:
- Do I have one thread on which everything is based?
- Or are all my threads equivalent?
The answer will tell you whether you need to define a "main thread". I don't recommend changing the "main thread", even though it is safe to do so. I strongly recommend defining the "main thread" once and for all!
2) Policy
From a general point of view, looping threads should look like the code below. Thanks to Espen Harlinn, who asked me to clarify this point:
void std_thread()
{
    boost::chrono::milliseconds ms(20);
    my_mutex.set_main_thread_id();                 // optional: make this the "main thread"
    while (true)                                   // in real code, loop until the application asks the thread to stop
    {
        boost::this_thread::sleep_for(ms);
        mutable_shared_lock read_lock(my_mutex);   // take shared access and work under it as long as possible
        // ... read / process the shared data ...
    }
    my_mutex.reset_main_thread_id();               // reached once the loop exits: give up the "main thread" role
}
This way, the work can run in parallel for as long as possible, which may increase performance. When needed, the thread asks to promote its access to an exclusive one. This is done using:

mutable_unique_lock write_lock(my_mutex);
3) Tips and tricks
One should pay attention to this: asking for a promotion is expensive. So the question is: how late can I ask for the promotion? And, as far as possible, avoid asking for it multiple times.

However, once a thread has acquired a unique lock, any further unique lock requests it makes while staying in the scope of that lock are very fast (for this thread): the mutex already knows that the thread has exclusive access, so it only updates the recursion stack.
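A short sketch of both points, assuming the same my_mutex as in the samples above (the function name is only illustrative):

void worker()
{
    mutable_shared_lock read_lock(my_mutex);            // cheap shared access
    // ... do as much work as possible under shared access ...
    {
        mutable_unique_lock write_lock(my_mutex);       // expensive: goes through promote()
        // ... write the results ...
        mutable_unique_lock same_thread_lock(my_mutex); // fast: this thread already holds the exclusive
                                                        // access, only the recursion stack is updated
    }
}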
Please note: this solution provides more features than a standard recursive mutex, but there is a price to pay: it takes more time.
Sample project
If asked, I can put together a sample project.