
Robust C++: Object Pools

Recovering from memory leaks
When a system runs for a long time, using a shared heap can eventually result in a crash because of gradual fragmentation and memory leaks. By allocating objects from pools of fixed-size blocks, a system can limit fragmentation and use a background garbage collector to recover leaked blocks. This article presents an ObjectPool class that provides these capabilities.

Introduction

A system that needs to be continuously available must recover from memory leaks so that it doesn't need to be periodically shut down for "routine maintenance". This article describes how object pools help to meet this requirement. Here, object pool does not refer to a pool of shared objects that are never destroyed. Rather, it refers to objects whose memory is allocated from a pool of fixed-size blocks instead of the heap.

Background

In C++, an object allocated from the heap, by the default implementation of operator new, must be explicitly deleted to return its memory to the heap. A memory leak results when deletion doesn't occur, usually because the pointer to the object gets lost. The addition of smart pointers in C++11 has reduced this risk, but it still exists (if a unique_ptr gets trampled, for example).

Another risk is memory fragmentation. Holes develop as memory blocks of differing sizes are allocated from, and returned to, the heap. Over a long period of time, this reduces the amount of effectively available memory unless the heap manager spends time merging adjacent free areas and implementing a best-fit policy.

The use of object pools allows a system to recover from memory leaks and avoid escalating fragmentation. And as we shall see, it also enables some other useful capabilities.

Using the Code

As in Robust C++: Safety Net, the code that we will look at is taken from the Robust Services Core (RSC). But this time, the code is more amenable to being copied into an existing project and modified to meet its needs. If this is the first time that you're reading an article about an aspect of RSC, please take a few minutes to read this preface.

Overview of the Classes

Unlike many techniques, object pools can often be introduced to a large legacy system without significant reengineering. The reason is that allocating and freeing an object's memory is encapsulated by operators new and delete. A small tweak to the class hierarchy replaces the usual versions of these operators, which use the default heap, with versions that use an object pool.
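
For example, a class that currently allocates from the heap can be moved to a pool by deriving it from Pooled (described below) and giving it an operator new that takes blocks from its pool. The following is only a sketch, using the hypothetical names LegacyMsg and LegacyMsgPool:

C++
class LegacyMsg : public Pooled
{
public:
   //  Allocate the object's memory from the pool instead of the heap.
   //  Pooled provides the matching operator delete, which returns the
   //  block to the pool.
   //
   static void* operator new(size_t size)
   {
      return Singleton<LegacyMsgPool>::Instance()->DeqBlock(size);
   }

   //  ...the rest of the class is unchanged.
};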

ObjectPool

ObjectPool is the base class for object pools, and each of its subclasses is a singleton that implements one pool. These subclasses do little other than invoke ObjectPool's constructor with the appropriate arguments. Each subclass, when created, gets added to an ObjectPoolRegistry that tracks all of the system's object pools. The blocks in each pool are allocated during system initialization so that the system can focus on its work once it is up and running.
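
A subclass therefore amounts to little more than a constructor call. A sketch might look like this, where MsgBufferPoolId and the extra room added to the block size are hypothetical:

C++
class MsgBufferPool : public ObjectPool
{
   friend class Singleton<MsgBufferPool>;
private:
   //  The pool's identifier, block size, and initial number of segments
   //  are fixed when the singleton is created during system initialization.
   //
   MsgBufferPool() :
      ObjectPool(MsgBufferPoolId, sizeof(MsgBuffer) + 32, 2) { }
   ~MsgBufferPool() = default;
};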

Pooled

Pooled is the base class for objects whose memory comes from a pool. It overrides operator delete to return a block to its pool when an object is deleted. It also defines some data members that a pool uses to manage each block.

ObjectPoolAudit

ObjectPoolAudit is a thread that periodically wakes up to invoke a function that finds orphaned blocks and returns them to their pools. Applications are still expected to delete objects, so the audit exists to fix memory leaks that could gradually cause the system to run out of memory. The audit uses a typical mark-and-sweep strategy but could be termed a background, rather than a foreground, garbage collector. It can run less frequently than a regular garbage collector, without freezing the system for an unbecoming length of time while performing its work.

Walkthroughs

The code in this article has been edited to remove things that would distract from the central concepts. These things are important in some applications, less so in others. If you look at the full version of the code, you will run across them, so a summary of what was removed is provided in Deleted Code.

Creating an Object Pool

Each singleton subclass of ObjectPool invokes its base class constructor to create its pool:

C++
ObjectPool::ObjectPool(ObjectPoolId pid, size_t size, size_t segs) :
   blockSize_(0),
   segIncr_(0),
   segSize_(0),
   currSegments_(0),
   targSegments_(segs),
   corruptQHead_(false)
{
   //  The block size must account for the header above each Pooled object.
   //
   blockSize_ = BlockHeaderSize + Memory::Align(size);
   segIncr_ = blockSize_ >> BYTES_PER_WORD_LOG2;
   segSize_ = segIncr_ * ObjectsPerSegment;
   for(auto i = 0; i < MaxSegments; ++i) blocks_[i] = nullptr;

   //  Initialize the pool's free queue of blocks.
   //  
   freeq_.Init(Pooled::LinkDiff());

   //  Set the pool's identifier and add it to the registry of object pools.
   //
   pid_.SetId(pid);
   Singleton<ObjectPoolRegistry>::Instance()->BindPool(*this);
}

Memory::Align aligns each block to the underlying platform's word size (32 or 64 bits). Like a heap, an object pool needs some data to manage each of its blocks, so it puts a BlockHeader at the start of each block:

C++
//  The header for a Pooled (a block in the pool).  Data in the header
//  survives when an object is deleted.
//
struct BlockHeader
{
   ObjectPoolId pid : 8;       // the pool to which the block belongs
   PooledObjectSeqNo seq : 8;  // the block's incarnation number

   static const size_t Size;   // the header's size, aligned to the word size
};

const size_t BlockHeader::Size = Memory::Align(sizeof(BlockHeader));

//  This struct describes the top of an object block for a class that
//  derives from Pooled.
//
struct ObjectBlock
{
   BlockHeader header;  // block management information
   Pooled obj;          // the actual location of the object
};

constexpr size_t BlockHeaderSize = sizeof(ObjectBlock) - sizeof(Pooled);
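
Memory::Align itself is not shown in this article. A minimal sketch, assuming that uword is the platform's word type, could look like this:

C++
size_t Memory::Align(size_t size)
{
   //  Round SIZE up to the next multiple of the word size.
   //
   constexpr size_t WordSize = sizeof(uword);
   return ((size + WordSize - 1) / WordSize) * WordSize;
}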

For reference, here are the members of Pooled that are relevant to this article:

C++
//  A pooled object is allocated from an ObjectPool created during system
//  initialization rather than from the heap.
//
class Pooled : public Object
{
   friend class ObjectPool;
public:
   //  Virtual to allow subclassing.
   //
   virtual ~Pooled() = default;

   //  Returns the offset to link_.
   //
   static ptrdiff_t LinkDiff();

   //  Overridden to claim blocks that this object owns.
   //
   void ClaimBlocks() override;

   //  Clears the object's orphaned_ field so that the object pool audit
   //  will not reclaim it.  May be overridden, but the base class version
   //  must be invoked.
   //
   void Claim() override;

   //  Overridden to return a block to its object pool.
   //
   static void operator delete(void* addr);

   //  Deleted to prohibit array allocation.
   //
   static void* operator new[](size_t size) = delete;
protected:
   //  Protected because this class is virtual.
   //
   Pooled();
private:
   //  Link for queueing the object.
   //
   Q1Link link_;

   //  True if allocated for an object; false if on free queue.
   //
   bool assigned_;

   //  Zero for a block on the free queue or that has just been claimed by
   //  its owner.  Incremented each time through the audit; if it reaches a
   //  threshold, the block deemed to be orphaned and is recovered.
   //
   uint8_t orphaned_;

   //  Used by audits to avoid invoking functions on a corrupt block.  The
   //  audit sets this flag before it invokes any function on the object.
   //  If the object's function traps, the flag is still set when the audit
   //  resumes execution, so it knows that the block is corrupt and simply
   //  recovers it instead of invoking its function again.  If the function
   //  returns successfully, the audit immediately clears the flag.
   //
   bool corrupt_;

   //  Used to avoid double logging.
   //
   bool logged_;
};

The constructor initializes a queue (freeq_) of the blocks that are available for allocation. RSC provides two queue templates, Q1Way and Q2Way. Unlike STL containers, they never allocate memory after the system initializes. Instead, the class whose objects will be queued provides a ptrdiff_t offset to a data member that serves as the link to the next item in the queue. That offset was the argument to freeq_.Init().
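
Pooled::LinkDiff returns the distance from the start of a Pooled object to its link_ member. A sketch of one way to implement it (the RSC version uses a small helper for the pointer arithmetic):

C++
ptrdiff_t Pooled::LinkDiff()
{
   //  Compute the offset of link_ without needing an actual object.
   //
   uintptr_t local;
   auto fake = reinterpret_cast<const Pooled*>(&local);
   return reinterpret_cast<const char*>(&fake->link_) -
      reinterpret_cast<const char*>(fake);
}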

A key design decision is the number of pools. The recommended approach is to use a common pool for all classes that derive from the same major framework class. RSC's NodeBase, for example, defines an object pool for the buffers used for inter-thread messaging. Note that the size of a pool's blocks must be somewhat larger than, for example, sizeof(MsgBuffer) so that subclasses will have room to add data of their own.

Using a common pool for all classes that derive from the same framework class significantly simplifies the engineering of pool sizes. Each pool must have enough blocks to handle times of peak load, when the system is maxed out. If every subclass had its own pool, each pool would need enough blocks to handle times when that subclass just happened to be especially popular. Having the subclasses share a pool smooths out such fluctuations, reducing the total number of blocks required.

Increasing the Size of an Object Pool

Whether a pool is creating its initial set of blocks during system initialization or allocating more blocks while in service, the code is the same:

C++
bool ObjectPool::AllocBlocks()
{
   auto pid = Pid();

   while(currSegments_ < targSegments_)
   {
      //  Allocate memory for the next group of blocks.
      //
      auto size = sizeof(uword) * segSize_;
      blocks_[currSegments_] = (uword*) Memory::Alloc(size, false);
      if(blocks_[currSegments_] == nullptr) return false;
      ++currSegments_;
      totalCount_ = currSegments_ * ObjectsPerSegment;

      //  Initialize each block and add it to the free queue.
      //
      auto seg = blocks_[currSegments_ - 1];

      for(size_t j = 0; j < segSize_; j += segIncr_)
      {
         auto b = (ObjectBlock*) &seg[j];
         b->header.pid = pid;
         b->header.seq = 0;
         b->obj.link_.next = nullptr;
         b->obj.assigned_ = false;
         b->obj.orphaned_ = OrphanThreshold;
         EnqBlock(&b->obj);
      }
   }

   return true;
}

Note that blocks are allocated in segments of 1K blocks each, such that an individual block is addressed by blocks_[i][j]. This makes it easy to allocate more blocks when the system is running: just add another segment. The fact that blocks in a segment are contiguous is useful for other reasons that will appear later.

Creating a Pooled Object

Let's turn our attention to creating an object that resides in an ObjectPool's block. This is done in the usual way, but the call to new ends up invoking an implementation in the framework class that uses the pool:

C++
void* MsgBuffer::operator new(size_t size)
{
   return Singleton<MsgBufferPool>::Instance()->DeqBlock(size);
}

Maybe you noticed this in Pooled, the base class for all pooled objects:

C++
static void* operator new[](size_t size) = delete;

This prohibits the allocation of an array of pooled objects, which would have to be implemented by finding a set of adjacent free blocks and removing each one from wherever it happened to be on the free queue. Pass.

Each framework class's operator new, like the one just shown, invokes the pool's DeqBlock function:

C++
Pooled* ObjectPool::DeqBlock(size_t size)
{
   auto maxsize = blockSize_ - BlockHeaderSize;

   if(size > maxsize)
   {
      throw AllocationException(mem_, size);
   }

   //  If the free queue is empty, invoke UpdateAlarm, which will also
   //  allocate another segment.
   //
   if(freeq_.Empty())
   {
      UpdateAlarm();
   }

   auto item = freeq_.Deq();

   if(item == nullptr)
   {
      throw AllocationException(mem_, size);
   }

   --availCount_;
   return item;
}

When an object doesn't fit into its pool's block, the easiest solution is to increase the size of the blocks. A small increase should be tolerable, but if one class's objects are much larger than the other objects that use the pool, too much memory could be wasted. In that case, the offending class must use the PIMPL idiom to move some of its data into a private object. By default, this object will be allocated from the heap. However, it is also possible to provide auxiliary data blocks for this purpose. These also use object pools and come in various sizes, such as small, medium, and large. They are not difficult to implement, so RSC will eventually include them.
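
As a sketch of the PIMPL approach, assuming a hypothetical class whose BulkyData portion would otherwise force the pool to use oversized blocks:

C++
class LargeMsg : public Pooled
{
public:
   LargeMsg() : bulk_(new BulkyData) { }
   ~LargeMsg() = default;
private:
   //  The bulky data lives outside the pool's block.  By default it is
   //  allocated from the heap; an auxiliary data block could be used
   //  instead.
   //
   std::unique_ptr<BulkyData> bulk_;
};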

Deleting a Pooled Object

When delete is invoked on a pooled object, it eventually finds its way to this:

C++
void Pooled::operator delete(void* addr)
{
   auto obj = (Pooled*) addr;
   auto pid = ObjectPool::ObjPid(obj);
   auto pool = Singleton<ObjectPoolRegistry>::Instance()->Pool(pid);
   if(pool != nullptr) pool->EnqBlock(obj);
}

Here, ObjectPool::ObjPid obtains the object pool's identifier from BlockHeader.pid, which appeared earlier and is located immediately above obj. This allows the operator to return the block to the correct pool:

C++
void ObjectPool::EnqBlock(Pooled* obj)
{
   if(obj == nullptr) return;

   //  If a block is already on the free queue or another queue, putting it
   //  on the free queue creates a mess.
   //
   if(!obj->assigned_)
   {
      if(obj->orphaned_ == 0) return;  // already on free queue
   }
   else if(obj->link_.next != nullptr) return;  // on some other queue

   //  Trample over some or all of the object.
   //
   auto nullify = ObjectPoolRegistry::NullifyObjectData();
   obj->Nullify(nullify ? blockSize_ - BlockHeader::Size : 0);

   //  Return the block to the pool.
   // 
   auto block = ObjToBlock(obj);

   if(block->header.seq == MaxSeqNo)
      block->header.seq = 1;
   else
      ++block->header.seq;

   obj->link_.next = nullptr;
   obj->assigned_ = false;
   obj->orphaned_ = 0;
   obj->corrupt_ = false;
   obj->logged_ = false;

   if(!freeq_.Enq(*obj)) return;
   ++availCount_;
}

Note that we trampled the object before returning it to the pool. The amount of trampling is determined by a configuration parameter that is read into nullify. This allows the entire block to be filled with something like 0xfd bytes during testing. In released software, time is saved by only trampling the top of the object. This is similar to the "debug" version of many heaps. The idea is that, by trampling the object, stale accesses (to deleted data) are more likely to be detected. Even if we only trample the top of the object, we corrupt its vptr, which will cause an exception if someone later tries to invoke one of its virtual functions.
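
Before moving on, here is a sketch of ObjToBlock and ObjPid, which the code above used to map between an object and its block. They only need to step back from the object to the header that precedes it, assuming the ObjectBlock layout shown earlier:

C++
ObjectBlock* ObjectPool::ObjToBlock(const Pooled* obj)
{
   //  The block's header sits immediately above the object.
   //
   auto addr = (const_ptr_t) obj - BlockHeaderSize;
   return (ObjectBlock*) addr;
}

ObjectPoolId ObjectPool::ObjPid(const Pooled* obj)
{
   return ObjToBlock(obj)->header.pid;
}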

Auditing the Pools

Earlier, we noted that ObjectPoolAudit uses a mark-and-sweep strategy to find orphaned blocks and return them to their pools. Its code is simple because ObjectPoolRegistry actually owns all of the object pools. The thread therefore just wakes up regularly and tells the registry to audit the pools:

C++
void ObjectPoolAudit::Enter()
{
   while(true)
   {
      Pause(interval_);
      Singleton<ObjectPoolRegistry>::Instance()->AuditPools();
   }
}

The audit has three distinct phases. The current phase_ and the identifier of the pool being audited (pid_) are members of ObjectPoolAudit itself but are used by this code:

C++
void ObjectPoolRegistry::AuditPools() const
{
   auto thread = Singleton<ObjectPoolAudit>::Instance();

   //  This code is stateful.  When it is reentered after an exception, it
   //  resumes execution at the phase and pool where the exception occurred.
   //
   while(true)
   {
      switch(thread->phase_)
      {
      case ObjectPoolAudit::CheckingFreeq:
         //
         //  Audit each pool's free queue.
         //
         while(thread->pid_ <= ObjectPool::MaxId)
         {
            auto pool = pools_.At(thread->pid_);

            if(pool != nullptr)
            {
               pool->AuditFreeq();
               ThisThread::Pause();
            }

            ++thread->pid_;
         }

         thread->phase_ = ObjectPoolAudit::ClaimingBlocks;
         thread->pid_ = NIL_ID;
         //  [[fallthrough]]

      case ObjectPoolAudit::ClaimingBlocks:
         //
         //  Claim in-use blocks in each pool.  Each ClaimBlocks function
         //  finds its blocks in an application-specific way.  The blocks
         //  must be claimed after *all* blocks, in *all* pools, have been
         //  marked, because some ClaimBlocks functions claim blocks from
         //  multiple pools.
         //
         while(thread->pid_ <= ObjectPool::MaxId)
         {
            auto pool = pools_.At(thread->pid_);

            if(pool != nullptr)
            {
               pool->ClaimBlocks();
               ThisThread::Pause();
            }

            ++thread->pid_;
         }

         thread->phase_ = ObjectPoolAudit::RecoveringBlocks;
         thread->pid_ = NIL_ID;
         //  [[fallthrough]]

      case ObjectPoolAudit::RecoveringBlocks:
         //
         //  For each object pool, recover any block that is still marked.
         //  Such a block is an orphan that is neither on the free queue
         //  nor in use by an application.
         //
         while(thread->pid_ <= ObjectPool::MaxId)
         {
            auto pool = pools_.At(thread->pid_);

            if(pool != nullptr)
            {
               pool->RecoverBlocks();
               ThisThread::Pause();
            }

            ++thread->pid_;
         }

         thread->phase_ = ObjectPoolAudit::CheckingFreeq;
         thread->pid_ = NIL_ID;
         return;
      }
   }
}

Auditing the Queue of Available Blocks

In its first phase, AuditPools invoked each pool's AuditFreeq function. This function begins by marking all of the pool's blocks as orphaned, after which it claims the blocks that are already on the free queue.

A corrupt queue is likely to cause continuous exceptions. It is therefore up to AuditFreeq to detect if the free queue is corrupt and, if so, repair it:

C++
void ObjectPool::AuditFreeq()
{
   size_t count = 0;

   //  Increment all orphan counts.
   //
   for(size_t i = 0; i < currSegments_; ++i)
   {
      auto seg = blocks_[i];

      for(size_t j = 0; j < segSize_; j += segIncr_)
      {
         auto b = (ObjectBlock*) &seg[j];
         ++b->obj.orphaned_;
      }
   }

   //  Audit the free queue unless it is empty.  The audit checks that
   //  the links are sane and that the count of free blocks is correct.
   //
   auto diff = Pooled::LinkDiff();
   auto item = freeq_.tail_.next;

   if(item != nullptr)
   {
      //  Audit the queue header (when PREV == nullptr), then the queue.
      //  The queue header references the tail element, so the tail is
      //  the first block whose link is audited (when CURR == freeq_).
      //  The next block to be audited (when PREV == freeq_) is the head
      //  element, which follows the tail.  The entire queue has been
      //  traversed when CURR == freeq_ (the tail) for the second time.
      //
      //  Before a link (CURR) is followed, the item (queue header or
      //  block) that provided the link is marked as corrupt.  If the
      //  link is bad, a trap should occur at curr->orphaned_ = 0.
      //  Thus, if we get past that point in the code, the link should
      //  be sane, and so its owner's "corrupt" flag is cleared before
      //  continuing down the queue.
      //
      //  If a trap occurs, this code is reentered.  It starts traversing
      //  the queue again.  Eventually it reaches an item whose corrupt_
      //  flag *is already set*, at which point the queue gets truncated.
      //
      Pooled* prev = nullptr;
      auto badLink = false;

      while(count <= totalCount_)
      {
         auto curr = (Pooled*) getptr1(item, diff);

         if(prev == nullptr)
         {
            if(corruptQHead_)
               badLink = true;
            else
               corruptQHead_ = true;
         }
         else
         {
            if(prev->corrupt_)
               badLink = true;
            else
               prev->corrupt_ = true;
         }

         //  CURR has not been claimed, so it should still be marked as
         //  orphaned (a value in the range 1 to OrphanThreshold).  If it
         //  isn't, PREV's link must be corrupt.  PREV might be pointing
         //  back into the middle of the queue, or it might be a random
         //  but legal address.
         //
         badLink = badLink ||
            ((curr->orphaned_ == 0) || (curr->orphaned_ > OrphanThreshold));

         //  If a bad link was detected, truncate the queue.
         //
         if(badLink)
         {
            if(prev == nullptr)
            {
               corruptQHead_ = false;
               freeq_.Init(Pooled::LinkDiff());
               availCount_ = 0;
            }
            else
            {
               prev->corrupt_ = false;
               prev->link_.next = freeq_.tail_.next;  // tail now after PREV
               availCount_ = count;
            }

            return;
         }

         curr->orphaned_ = 0;
         ++count;

         if(prev == nullptr)
            corruptQHead_ = false;
         else
            prev->corrupt_ = false;

         prev = curr;
         item = item->next;

         if(freeq_.tail_.next == item) break;  // reached tail again
      }
   }

   availCount_ = count;
}

Claiming In-Use Blocks

In its second phase, AuditPools invoked each pool's ClaimBlocks function, which must claim in-use blocks so that the audit will not recover them. A pool must therefore be able to reach every owner of its blocks, so that each owner can claim the objects that it is using.

The process of claiming objects is a cascade through the system's object model. Earlier, we noted that NodeBase had a pool for the buffers used for inter-thread messaging. The following code fragments illustrate how in-use buffers are claimed.

Threads own the buffers used for inter-thread messaging, so the buffer's pool implements ClaimBlocks by telling each thread to claim its objects. To do this, it goes through ThreadRegistry, which tracks all of the system's threads. This initiates the cascade:

C++
void MsgBufferPool::ClaimBlocks()
{
   Singleton<ThreadRegistry>::Instance()->ClaimBlocks();
}

void ThreadRegistry::ClaimBlocks()
{
   //  Have all threads mark themselves and their objects as being in use.
   //
   for(auto t = threads_.cbegin(); t != threads_.cend(); ++t)
   {
      auto thr = t->second.thread_;
      if(thr != nullptr) thr->ClaimBlocks();
   }
}

Invoking ClaimBlocks on a thread soon reaches the following, in which the thread claims any message buffers queued against it: 

C++
void Thread::Claim()
{
   for(auto m = msgq_.First(); m != nullptr; msgq_.Next(m))
   {
      m->Claim();
   }
}

void Pooled::Claim()
{
   orphaned_ = 0;  // finally!
}

Recovering Orphaned Blocks

In its third and final phase, AuditPools invokes each pool's RecoverBlocks function, which recovers orphans. To guard against unforeseen race conditions, a block must remain orphaned for more than one audit cycle before it is reclaimed. This is the purpose of the constant OrphanThreshold, whose value is 2.

C++
void ObjectPool::RecoverBlocks()
{
   auto pid = Pid();
   size_t count = 0;  // the number of blocks recovered

   //  Run through all of the blocks, recovering orphans.
   //
   for(size_t i = 0; i < currSegments_; ++i)
   {
      auto seg = blocks_[i];

      for(size_t j = 0; j < segSize_; j += segIncr_)
      {
         auto b = (ObjectBlock*) &seg[j];
         auto p = &b->obj;

         if(p->orphaned_ >= OrphanThreshold)
         {
            //  Generate a log if the block is in use (don't bother with
            //  free queue orphans) and it hasn't been logged yet (which
            //  can happen if we reenter this code after a trap).
            //
            ++count;

            if(p->assigned_ && !p->logged_)
            {
               auto log = Log::Create(ObjPoolLogGroup, ObjPoolBlockRecovered);

               if(log != nullptr)
               {
                  *log << Log::Tab << "pool=" << int(pid) << CRLF;
                  p->logged_ = true;
                  p->Display(*log, Log::Tab, VerboseOpt);
                  Log::Submit(log);
               }
            }

            //  When an in-use orphan is found, we mark it corrupt and clean
            //  it up.  If it is so corrupt that it causes an exception during
            //  cleanup, this code is reentered and encounters the block again.
            //  It will then already be marked as corrupt, in which case it
            //  will simply be returned to the free queue.
            //
            if(p->assigned_ && !p->corrupt_)
            {
               p->corrupt_ = true;
               p->Cleanup();
            }

            b->header.pid = pid;
            p->link_.next = nullptr;
            EnqBlock(p);
         }
      }
   }
}

Points of Interest

Now that we have seen how to implement an object pool that can repair a corrupt free queue and recover leaked objects, a few other capabilities of object pools should be mentioned.

Iterating Over Pooled Objects

ObjectPool provides iteration functions for this purpose. In the following code fragment, FrameworkClass is the class whose subclasses reside in the pool's blocks:

C++
PooledObjectId id;

for(auto obj = pool->FirstUsed(id); obj != nullptr; obj = pool->NextUsed(id))
{
   auto item = static_cast<FrameworkClass*>(obj);
   //  ...
}

Validating a Pointer to a Pooled Object

ObjectPool provides a function that takes a pointer to a pooled object and returns the object's identifier within the pool. This identifier is an integer between 1 and the number of blocks in the pool. If the pointer is invalid, the function returns NIL_ID:

C++
PooledObjectId ObjectPool::ObjBid(const Pooled* obj, bool inUseOnly) const
{
   if(obj == nullptr) return NIL_ID;
   if(inUseOnly && !obj->assigned_) return NIL_ID;

   //  Find BLOCK, which houses OBJ and is the address that we'll look for.
   //  Search through each segment of blocks.  If BLOCK is within MAXDIFF
   //  distance of the first block in a segment, it should belong to that
   //  segment, as long as it actually references a block boundary.
   //
   auto block = (const_ptr_t) ObjToBlock(obj);
   auto maxdiff = (ptrdiff_t) (blockSize_ * (ObjectsPerSegment - 1));

   for(size_t i = 0; i < currSegments_; ++i)
   {
      auto b0 = (const_ptr_t) &blocks_[i][0];

      if(block >= b0)
      {
         ptrdiff_t diff = block - b0;

         if(diff <= maxdiff)
         {
            if(diff % blockSize_ == 0)
            {
               auto j = diff / blockSize_;
               return (i << ObjectsPerSegmentLog2) + j + 1;
            }
         }
      }
   }

   return NIL_ID;
}

Distinguishing a Block's Incarnations

BlockHeader was introduced in Creating an Object Pool. EnqBlock incremented its seq member, which acts as an incarnation number. This is useful in distributed systems. Say that a pooled object receives messages from another processor and that it gives that processor its this pointer. That processor includes the pointer in each message that is to be delivered to the object. The following can then occur:

  1. The object is deleted and its block is quickly assigned to a new object.
  2. Before the other processor learns of the deletion, it sends the object a message.
  3. The message arrives and gets delivered to the new object.

Instead of providing this as its message address, the object should provide its PooledObjectId (as returned by the above function) and its incarnation number. That way, it is easy to detect that a message is stale and discard it. Note that if the object is not pooled, some other mechanism is needed to detect stale messages.

An object obtains its PooledObjectId from ObjectPool::ObjBid, above. The reverse mapping is provided by ObjectPool::BidToObj, and ObjectPool::ObjSeq allows the object to obtain its incarnation number.
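
A receiver might then validate an incoming message along these lines. This is only a sketch: the message fields and the FrameworkClass/FrameworkClassPool names are placeholders, and the exact signatures of BidToObj and ObjSeq may differ.

C++
FrameworkClass* FindReceiver(const MsgHeader& msg)
{
   auto pool = Singleton<FrameworkClassPool>::Instance();

   //  Map the identifier in the message back to a block and check that
   //  it still holds the same incarnation of the object.
   //
   auto obj = static_cast<FrameworkClass*>(pool->BidToObj(msg.rxId));
   if(obj == nullptr) return nullptr;
   if(pool->ObjSeq(obj) != msg.rxSeq) return nullptr;  // stale: discard
   return obj;
}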

Changing the Size of an Object Pool

Each pool's size is set by a configuration parameter whose value is read from a file when the system initializes. This allows the size of each pool to be customized, which is important in a server.

When the system is running, a pool's size can be increased by simply using the CLI command >cfgparms set <parm> <size> to change the value of its configuration parameter.

Because a pool's in-use blocks could have been allocated from any of its segments of 1K blocks, decreasing the size of a pool while the system is running is not supported. If the configuration parameter is set to a lower value, it does not take effect until the next restart that reallocates the pool's blocks. (A restart is a reinitialization that is less severe than a reboot: see Robust C++: Initialization and Restarts.)

A pool raises an alarm when its number of available blocks drops below a threshold: minor when fewer than 1/16th of its blocks remain available, major below 1/32nd, and critical below 1/64th. The alarm acts as a warning that the pool's size was probably under-engineered.

Although the system can survive a moderate number of allocation failures, it is unlikely to survive a flood of them. A pool therefore automatically increases its size by one segment (1K blocks) when DeqBlock is invoked and no free blocks are available.
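
The following sketch shows the kind of logic that UpdateAlarm might apply, given the thresholds and the automatic growth described above. The alarm-status names are placeholders; the actual alarm handling was removed from the code in this article (see Deleted Code).

C++
void ObjectPool::UpdateAlarm()
{
   //  If no blocks are free, add another segment of 1K blocks.
   //
   if(freeq_.Empty() && (currSegments_ < MaxSegments))
   {
      ++targSegments_;
      AllocBlocks();
   }

   //  Choose an alarm severity based on how few blocks remain available.
   //
   auto status = NoAlarm;
   if(availCount_ <= totalCount_ / 64) status = CriticalAlarm;
   else if(availCount_ <= totalCount_ / 32) status = MajorAlarm;
   else if(availCount_ <= totalCount_ / 16) status = MinorAlarm;

   //  ...raise or clear the pool's alarm accordingly.
}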

Deleted Code

The full version of the software includes aspects1 that were removed for the purposes of this article:

  • Configuration parameters, mentioned in Changing the Size of a Pool.
  • Logs. The code generated a log when recovering an orphaned block, but there are many other situations that also result in logs.
  • Statistics. Each pool provides statistics such as the number of times a block was allocated from the pool and the low watermark for the number of available blocks remaining in the pool.
  • Alarms, mentioned in Changing the Size of a Pool.
  • Memory types. RSC defines memory types based on whether the memory will be write-protected and what level of restart it will survive. When it invokes ObjectPool's constructor, a subclass specifies the type of memory to use for the pool's blocks.
  • Trace tools. Most functions invoke Debug::ft in their first statement. It supports a function trace tool and other things mentioned in Robust C++: Safety Net. There is also a trace tool that records object blocks being allocated, claimed, and freed.

Notes

1 Most of these would indeed be aspects in aspect-oriented programming.

History

  • 7th September, 2020: Add section on modifying a pool's size
  • 3rd September, 2019: Initial version

License

This article, along with any associated source code and files, is licensed under The GNU General Public License (GPLv3)