
MetaAgent, a Steering Behavior Template Library

21 May 2003
Library for creating autonomous agents that have (fun) life-like behaviors.

Introduction

This article presents MetaAgent, a C++ library for creating steering behaviors.

MetaAgent logo

Some history about behaviors

In 1986, Craig Reynolds created boids, a computer model for animated animal motion such as bird flocks and fish schools. He published a technical paper about it in [Craig Reynolds, 87]. His method was quite astonishing in its simplicity, since the model was based on 3 simple rules:

  • Separation: a boid should avoid its neighbors. To do so, you just need to steer the boid away from the center of its nearest neighbors.
  • Alignment: a boid tends to align its velocity with its neighbors' velocities.
  • Cohesion: a boid tends to move towards the center of its neighbors.

The forces resulting from these 3 rules were merged by a weighted sum and applied to the boid. Craig Reynolds' boids have been, and still are, flying on his personal web page[^].
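The combination step can be sketched as a weighted vector sum. The vector type and the weight values below are illustrative, not taken from the paper:

```cpp
#include <cassert>

// Minimal 2D vector (a stand-in; not the library's actual type).
struct vec {
    double x = 0, y = 0;
};
vec operator+(vec a, vec b) { return {a.x + b.x, a.y + b.y}; }
vec operator*(double k, vec v) { return {k * v.x, k * v.y}; }

// Weighted blend of the three boids rules: the final steering force
// is the weighted sum of separation, alignment, and cohesion.
vec combine(vec separation, vec alignment, vec cohesion,
            double w_sep, double w_ali, double w_coh)
{
    return w_sep * separation + w_ali * alignment + w_coh * cohesion;
}
```

Tuning the three weights is what shifts a flock from tight schooling to loose wandering.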

Since then, Craig Reynolds has released another great paper [Craig Reynolds, 99] describing a number of behaviors that "give life" to autonomous characters: target seeking, obstacle avoidance, wandering, and so on.

MetaAgent and OpenSteer

MetaAgent is not the only steering-behavior project around. In fact, it is the little sister of another project, OpenSteer, initiated by Craig Reynolds.

OpenSteer Logo

Why another library?

First of all, playing with autonomous characters is fun and makes a great project if you plan to learn C++. That is basically how MetaAgent started: as a playground for testing generic programming and meta-programming.

The real reason for building another library was that OpenSteer was mainly a collection of C functions wrapped into some C++ classes (OK, I'm exaggerating...). MetaAgent plans (and hopefully will succeed) to use the full power of C++ and Generic Programming to create behaviors.

MetaAgent guidelines

Here are some of the guidelines that the project tries to follow:

  1. Break down all classes into orthogonal policies (more on this below),
  2. Use signals and slots for rendering,
  3. Use the STL and Boost as much as possible.

In this article, I will focus on the Policy concept. Signals and slots are for the next article :).

Policies Class Design

I first ran into policy-based class design in Andrei Alexandrescu's famous book "Modern C++ Design", see [Alexandrescu, 2001]. The basic idea is to assemble a class with complex behavior by combining little classes (called policies), each of which takes care of only one behavioral or structural aspect. The crucial point of the process is the choice of the policy decomposition.

Andrei Alexandrescu spends an entire chapter on policy design; I will try to illustrate it below on agent-behavior creation.

How does an autonomous agent work?

The agent is basically a body (dynamics) that moves according to its brain (behavior). It can be broken into several parts:

  • the body, which implements the dynamics,
  • the brain, which is composed of a behavior.

Building an agent using policies

Policies decomposition

"It is as if [some_host_class] acts as a little code generation engine and you configure the ways in which it generates code". Andrei Alexandrescu.

Let's start by building the dynamic model of our agent. This body must be able to move and react to a steering force (which will be provided by the behavior).

As mentioned previously, we want to use policies, so we need to decompose the model into orthogonal policies. Let's note two facts:

  • Regardless of the type of dynamical model, you can always retrieve the state of the center of mass.
  • The dynamic model does not need to know what happens in the "brain" of the agent; it just needs the resulting steering force.
Splitting the agent into policies

The dynamic model and the behavior can be seen as policies:

template< 
   typename ModelPolicy,
   typename BehaviorPolicy
>
class agent : public ModelPolicy, public BehaviorPolicy
{
    // ...
};

As you can see, agent inherits from ModelPolicy and BehaviorPolicy, and thus it inherits all their methods! agent is called a host class, since it is built from policies.

Determining the interface

Unlike classic interfaces (collections of pure virtual methods), policy interfaces are loosely defined. Just use the policies' methods in the host class without any prior declaration; if they are not defined in the policy classes, the compiler will emit an error. Hence, we simply write a method that makes the agent think and act:

template< 
   typename ModelPolicy,
   typename BehaviorPolicy
>
class agent : public ModelPolicy, public BehaviorPolicy
{
public:
    void think_and_act()
    {

First step, think and compute the steering. This will be the job of the BehaviorPolicy.

        // vec is some 2D vector. Note that this-> is required so that
        // the compiler looks up think() in the dependent base classes.

        vec steering_force = this->think( this->get_acceleration(), this->get_velocity(), this->get_position() );

Second step, apply the computed steering force to the model and integrate the equations:

        this->act( steering_force );  // move according to the steering force -> ModelPolicy

    }
};

Great, we have just defined the interface for ModelPolicy and BehaviorPolicy.

Implementing the ModelPolicy

A class that implements a policy is called a policy class. The simplest dynamic model is the point-mass model, integrated here with an explicit Euler scheme:

class point_mass_model
{
public:
    void act( vec steering )
    {
        m_acceleration = steering / m_mass;
        m_velocity += m_acceleration;
        m_position += m_velocity;
    }
protected:
    vec m_acceleration;
    vec m_velocity;
    vec m_position;
    double m_mass; // mass of the point
};

Remark: one might argue that the integrator should be separated from the model, and this is totally true. But for the sake of clarity, I merged them together in this example.
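The remark above can be sketched by lifting the time integration into its own policy. The names here (state, explicit_euler, IntegratorPolicy) are my own illustration, not MetaAgent's actual API:

```cpp
#include <cassert>

// Minimal 2D vector (illustrative stand-in for the library's type).
struct vec { double x = 0, y = 0; };
vec operator+(vec a, vec b) { return {a.x + b.x, a.y + b.y}; }
vec operator*(double k, vec v) { return {k * v.x, k * v.y}; }

// State of the point mass, shared between model and integrator.
struct state {
    vec acceleration, velocity, position;
};

// Explicit Euler step as a standalone policy class.
struct explicit_euler
{
    static void integrate(state& s, double dt)
    {
        s.velocity = s.velocity + dt * s.acceleration;
        s.position = s.position + dt * s.velocity;
    }
};

// The model now only converts force to acceleration and
// delegates the time stepping to its IntegratorPolicy.
template<typename IntegratorPolicy>
class point_mass_body
{
public:
    explicit point_mass_body(double mass) : m_mass(mass) {}
    void act(vec steering, double dt = 1.0)
    {
        m_state.acceleration = (1.0 / m_mass) * steering;
        IntegratorPolicy::integrate(m_state, dt);
    }
    state const& get_state() const { return m_state; }
private:
    state m_state;
    double m_mass;
};
```

Swapping in a Runge-Kutta policy would then change the integration scheme without touching the model.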

The class point_mass_model is now almost ready to be used. We just need to add some getters for the state (get_acceleration, etc.), since they are needed by the BehaviorPolicy:

class point_mass_model
{
...
   vec const & get_acceleration() const { return m_acceleration; }
...
};

Implementing the BehaviorPolicy

The behavior policy classes only need to implement the think method. The following behaviors make an agent:

  • go in circle (by taking the perpendicular to the velocity):
    // this class makes the agent go round
    
    struct circle_move_behavior
    {
        // this is the interface to implement
    
        vec think( 
            vec const& acceleration,
            vec const& velocity,
            vec const& position ) const    
        {
             return -perpendicular( velocity );
        }
    };
    
  • seek towards a target (by pointing the velocity towards the target)
    struct seek_behavior
    {
        // the target
    
        vec m_target;
    
        // this is the interface to implement
    
        vec think( 
            vec const& acceleration,
            vec const& velocity,
            vec const& position ) const    
        {
             return m_target - position;
        }
    
    };
    

Remark: of course, there is a problem with the norm of the steering force but, again, for the sake of clarity, it is not addressed here.
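One common way to handle the steering-norm problem is to clamp the force to a maximum magnitude before handing it to the model. This is a sketch of my own, not MetaAgent code:

```cpp
#include <cassert>
#include <cmath>

// Minimal 2D vector (illustrative stand-in).
struct vec { double x = 0, y = 0; };

// Clamp a steering force to a maximum magnitude, preserving
// its direction. Forces already below the limit pass through.
vec truncate(vec v, double max_force)
{
    double len = std::sqrt(v.x * v.x + v.y * v.y);
    if (len > max_force && len > 0.0) {
        double s = max_force / len;
        return {s * v.x, s * v.y};
    }
    return v;
}
```

The behavior then returns `truncate(raw_steering, max_force)` instead of the raw vector, so an agent far from its target does not receive an arbitrarily large force.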

Merging together policies in the host class

Here comes the magic of policies. By combining different policies, we create totally different agents:

// agent will go round

agent< point_mass_model, circle_move_behavior > circle_mover;
// this agent will track a target

agent< point_mass_model, seek_behavior > seeker;

Better still, you can change the target simply by doing:

seeker.m_target = new_target;

This works because seeker inherits from seek_behavior, and m_target is a public attribute.

Small conclusion

Using policies, creating agents with different dynamics and behaviors is as simple as changing some template parameters. This is a major idea behind MetaAgent.
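To make the idea concrete, here is a self-contained, compilable sketch of the whole assembly. It is simplified (unit mass, unit time step) and the details are my own; note that inside the template host class, calls into the policy bases need `this->` so name lookup finds the inherited members:

```cpp
#include <cassert>

// Minimal 2D vector (illustrative stand-in).
struct vec { double x = 0, y = 0; };
vec operator+(vec a, vec b) { return {a.x + b.x, a.y + b.y}; }
vec operator-(vec a, vec b) { return {a.x - b.x, a.y - b.y}; }

// Simplified model policy: unit mass, explicit Euler, dt = 1.
class point_mass_model
{
public:
    void act(vec steering)
    {
        m_acceleration = steering;  // mass = 1
        m_velocity = m_velocity + m_acceleration;
        m_position = m_position + m_velocity;
    }
    vec const& get_acceleration() const { return m_acceleration; }
    vec const& get_velocity() const { return m_velocity; }
    vec const& get_position() const { return m_position; }
protected:
    vec m_acceleration, m_velocity, m_position;
};

// Behavior policy: steer straight at a target.
struct seek_behavior
{
    vec m_target;
    vec think(vec const&, vec const&, vec const& position) const
    {
        return m_target - position;
    }
};

// Host class assembled from the two policies.
template<typename ModelPolicy, typename BehaviorPolicy>
class agent : public ModelPolicy, public BehaviorPolicy
{
public:
    void think_and_act()
    {
        // this-> is required: the bases depend on template parameters.
        vec steering = this->think(this->get_acceleration(),
                                   this->get_velocity(),
                                   this->get_position());
        this->act(steering);
    }
};
```

Declaring `agent<point_mass_model, seek_behavior> seeker;` and calling `seeker.think_and_act()` in a loop is all it takes to drive the agent toward `seeker.m_target`.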

Merging a dragon model with a sheep model

Want to participate ?

The above was a very rough description of the possibilities for constructing behaviors using policies. For example, seeking a target can be decomposed into:

  1. predict the target collision point: PredictoryPolicy
  2. track the predicted collision point: TrackerPolicy.

You can then combine all kinds of predictors and trackers with great flexibility.
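A hedged sketch of that decomposition follows; the policy names and signatures here are my own guesses at the shape of the idea, not the library's actual interfaces:

```cpp
#include <cassert>

// Minimal 2D vector (illustrative stand-in).
struct vec { double x = 0, y = 0; };
vec operator+(vec a, vec b) { return {a.x + b.x, a.y + b.y}; }
vec operator-(vec a, vec b) { return {a.x - b.x, a.y - b.y}; }
vec operator*(double k, vec v) { return {k * v.x, k * v.y}; }

// Predictor policy: extrapolate where the target will be.
struct linear_predictor
{
    vec predict(vec target_pos, vec target_vel, double horizon) const
    {
        return target_pos + horizon * target_vel;
    }
};

// Tracker policy: steer toward a given point.
struct direct_tracker
{
    vec track(vec point, vec own_pos) const
    {
        return point - own_pos;
    }
};

// A pursue behavior assembled from the two policies; any predictor
// can be combined with any tracker.
template<typename PredictorPolicy, typename TrackerPolicy>
struct pursue_behavior : PredictorPolicy, TrackerPolicy
{
    vec m_target_pos, m_target_vel;
    double m_horizon = 1.0;

    vec think(vec const&, vec const&, vec const& position) const
    {
        vec aim = this->predict(m_target_pos, m_target_vel, m_horizon);
        return this->track(aim, position);
    }
};
```

Replacing linear_predictor with, say, a circular-motion predictor would change the interception strategy without touching the tracking code.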

If you are interested, you can go to the MetaAgent WikiWikiWeb[^] and learn about or contribute to the project.

Here are some snapshots of MetaAgent demonstration applications:

Wanderer moving randomly on the screen
Wander behavior.

Seeking: still target, arrival acceleration modification, moving target
Seeking behavior variants.

History

05-20-2003: Fixed image links to the new site
05-12-2003: Initial publication

References

[MetaAgent] http://metaagent.sourceforge.net[^]
[Craig Reynolds, 87] http://www.red3d.com/cwr/papers/1987/boids.html[^]
[Craig Reynolds, 99] http://www.red3d.com/cwr/papers/1999/gdc99steer.html[^]
[OpenSteer] http://opensteer.sourceforge.net[^]

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
