
Robotics Motion Planning and Navigation: Bug Algorithm


Introduction  

In this article we will look at an implementation of the Bug 2 algorithm for motion planning.

Bug Algorithms

The aim of a path planning algorithm is to compute a collision-free path from an initial position to a goal position. Bug algorithms are the simplest type of path planning algorithms.

In Bug algorithms no global model of the world is assumed; the locations and shapes of the obstacles are unknown, and only information acquired through sensing is available. The Bug algorithms assume local knowledge of the environment and a global goal.

The environment is a two-dimensional scene filled with unknown obstacles. The perimeter of each obstacle is finite, the number of obstacles is finite, and the obstacles can be of arbitrary shape.

We consider a point robot that moves among arbitrary obstacles in a planar environment. We assume that the workspace is bounded.

The robot is equipped with an ideal localization module, so it can always keep track of its own position. Thus we can always measure the distance between the robot and the goal.
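
The distance comparisons used by the Bug 2 state machine later in the article rely on a dist() helper. A minimal sketch of such a helper, assuming the article's FPoint2f point type with x and y members, could be:

C++
#include <cmath>

// hypothetical Euclidean distance helper matching the dist(a, b)
// calls used in the Bug 2 listing; FPoint2f is the article's point type
float dist(const FPoint2f &a, const FPoint2f &b)
{
    float dx = a.x - b.x;
    float dy = a.y - b.y;
    return sqrt(dx * dx + dy * dy);
}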

The robot is equipped with a tactile (touch) sensor or a range sensor. The touch sensor can detect contact with an object.

Sample workspace configurations are shown in the figure below. The red dot is the robot center and the green dot is the goal.


Bug Algorithm

The robot is equipped with a range sensor. A range sensor with a small radius can be considered equivalent to a tactile sensor.

At any given point of time the robot is assumed to be in one of the following low-level states: Motion, Boundary Detection, Navigate Boundary, Done.
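
One possible way to represent these states, so that the listings below can compare m.value against m.Motion, m.Boundary and m.Nav, is a small struct with a nested enum. This is only a sketch; the original source may define the mode object differently.

C++
// hypothetical definition of the mode object referenced as m.value,
// m.Motion, m.Boundary, m.Nav in the listings below
struct Mode
{
    enum State { Motion, Boundary, Nav, Done };
    State value = Motion;   // the robot starts by moving towards the goal
};

Mode m;   // current low-level state of the robot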

Motion  

In this mode the robot moves along a specified heading direction, given as an angle with respect to the global coordinate system.

At initialization we have information only about the source and the goal positions. The shortest distance between two points is a straight line, so the heading direction is along this line from the source towards the goal.
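
A sketch of how this initial heading can be computed is shown below. The listings later call a computeHeading() function; the version here is an assumed implementation based on atan2 and the globals position, goal and heading used elsewhere in the article.

C++
#include <cmath>

// assumed implementation of computeHeading(): heading angle in degrees,
// with respect to the global frame, of the line from the robot to the goal
void computeHeading()
{
    float dx = goal.position.x - position.x;
    float dy = goal.position.y - position.y;
    heading = atan2(dy, dx) * 180.0f / PI;
}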

The robot begins to move along the straight line towards the goal.

To specify the displacement of the robot along the x and y directions, the components of the unit vector along the heading direction are computed. Let (dx, dy) denote these components.

The robot then takes a small displacement in the heading direction by taking small steps proportional to the components of the unit vector along the x and y directions. Each time the position is incremented the displacement is recorded; let us call this displacement prevdelta.

C++
// function that performs motion towards the goal
void Motion()
{
    if(m.value==m.Motion)
    {
        // compute components of unit vector along the heading direction
        float dx=cos(heading*PI/180);
        float dy=sin(heading*PI/180);
        //update the current position
        position.x=position.x+1*dx;
        position.y=position.y+1*dy;
        //store the change in position
        prevdelta=FPoint2f(dx,dy);
    }
}

Boundary Detection 

The robot is equipped with a range sensor with a finite radius R. If an obstacle is encountered the range sensor gives the direction and distance to the object. Let us assume that the direction resolution of the sensor is 1 degree.

Thus if the range sensor detects some object lying within the radius R, the robot enters the boundary detection state.

Since the range is limited, the robot cannot estimate whether a single obstacle or multiple obstacles have been encountered.

It now enters the Navigate Boundary state and starts moving along the new heading direction.

C++
void Boundary()
{
  if(m.value==m.Boundary)
  {
    //change the heading direction by 90 deg
    //relative to the best scan line
    for(int i=0;i<scan.size();i++)
    {
      if(scan[i].index==best_index)
      {
        if(scan[i].position.x!=-1 && scan[i].position.y!=-1)
        {
          heading=best_index+90;
          // change state to boundary navigation
          m.value=m.Nav;
          return;
        }
        else
        {
          break;
        }
      }
    }
  }
}

Navigate Boundary 

Now the robot has to navigate along the boundary of the obstacle until some predefined criterion is satisfied.

Thus the robot must change its heading direction. The direction along which the range sensor records the shortest distance to the boundary is determined.

The robot changes its heading to the line perpendicular to the direction of the shortest distance to the boundary point.

In this state the robot can still sense the obstacle within its range, and at each instant it sets the heading direction perpendicular to the direction of the shortest distance to the obstacle. It continues to move along this heading direction.

C++
if(scan.size()==0)
{
    //if the robot moves away from the boundary, try to go back to the previous position
    //and move along the edge
    position.x-=prevdelta.x;
    position.y-=prevdelta.y;
    heading=best_index;
}
else
{
    //move in the direction perpendicular to the best scan line
    heading=best_index+90;
}
//compute components of unit vector along the heading
float dx=cos(heading*PI/180);
float dy=sin(heading*PI/180);
//compute the new position of the robot
prevdelta=FPoint2f(dx,dy);
position.x=position.x+5*dx;
position.y=position.y+5*dy;

While moving along the heading direction a point may be reached where no obstacles are detected. This can happen if the robot has reached a sharp edge, or if it has moved away from the boundary while navigating it due to inaccuracies in determining the shortest distance to the boundary.

The new position of the robot is determined along the same lines as described in the motion towards the goal section. At each instant the current heading direction and the previous displacement are stored.

We take a step back using prevdelta to return to the last position, and then begin to move in a direction perpendicular to the current heading direction. This is done by remaining in the same state and changing the heading direction.

This will cause the robot either to move back to the boundary or to move along the edge.

In this state the robot will circumnavigate the boundary of the obstacle.

At some point, based on some predefined criterion, the robot will leave the boundary navigation state and enter the motion towards the goal state.

Different bug algorithms differ in the way they define the transition from boundary following to motion towards the goal.

Simulation Environment 

A basic simulation environment was developed to test the algorithm. The input to the simulation environment is an image file containing the obstacles, so obstacles of arbitrary complexity can be included for testing.
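
One way to turn such an image into obstacle contours is sketched below using standard OpenCV calls; the image format and the function name are assumptions rather than the article's exact code.

C++
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

// sketch: read the obstacle image, binarize it and extract one contour
// per obstacle so that pointPolygonTest can be used against it later
std::vector<std::vector<cv::Point>> loadObstacles(const std::string &file)
{
    cv::Mat img = cv::imread(file, cv::IMREAD_GRAYSCALE);
    cv::Mat mask;
    cv::threshold(img, mask, 127, 255, cv::THRESH_BINARY);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    return contours;
}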

The sensor also needs to be simulated. Given a location, the environment must determine whether a point lying on the circle centered at the robot's current position falls inside an obstacle. How far inside the obstacle the point lies determines the distance of the robot from the obstacle.

The simulation environment plots the obstacles, the robot and goal positions, the trajectory, the heading direction, and the directions along which obstacles are detected. Debug information is also displayed along with the plots.

OpenCV libraries are used for plotting. To determine if a point lies inside a polygon, the pointPolygonTest function provided by OpenCV is used.
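
A minimal illustration of how pointPolygonTest reports containment, assuming contour is one of the obstacle contours: with the last argument set to true the return value is the signed distance to the contour, positive inside, negative outside and zero on the edge.

C++
// d > 0  : point is inside the obstacle, |d| is the distance to its boundary
// d < 0  : point is outside the obstacle
// d == 0 : point lies exactly on the contour
double d = cv::pointPolygonTest(contour, cv::Point2f(x, y), true);
bool insideObstacle = (d >= 0);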

There are two types of obstacles defined: boundary obstacles and normal obstacles. A boundary obstacle is simply the boundary of the simulation environment. Thus for boundary obstacles the simulation environment determines whether the point on the circle lies outside the boundary, while for other obstacles it checks whether the point lies inside the obstacle.

Sensor 

The sensor simulation is performed by finding the points that lie on the circle of the sensor range. For each point a test is performed to check whether it lies inside (or, for the boundary obstacle, outside) a polygon obstacle. All directions along the circle are evaluated, and the current heading direction is indexed 0.
 
C++
for( int i = -180; i <= 179; i=i+delta)
{
    //point on the sensor circle along the direction (heading+i)
    float intersectionPointWithSensorCircleX = cos(toRadians((heading+i)))*sensorRange;
    float intersectionPointWithSensorCircleY = sin(toRadians((heading+i)))*sensorRange;

    //translate the point to the robot position
    intersectionPointWithSensorCircleX+=robot.position.x;
    intersectionPointWithSensorCircleY+=robot.position.y;
    Point2f ff=Point2f(intersectionPointWithSensorCircleX,-intersectionPointWithSensorCircleY);

    //cycle over all the obstacles
    for(int k=0;k<o.size();k++)
    {
       Obstacle o1=o[k];

       double d=pointPolygonTest( o1.contours[0], ff,true);
       bool flag,flag1;
       //check if the point is inside the normal obstacle
       if(o1.type==0)
       {
          flag=d>=0;
          best=-999999;
       }
       //check if the point is outside the boundary obstacle
       else
       {
          flag=d<0;
          best=999999;
       }
       //if an obstacle is detected
       if(flag)
       {
          //add the point to the scan list
          points.push_back(ScanPoint(FPoint2f(ff.x,-ff.y),heading+i));
          if(o1.type==0)
             flag1=d>=best;
          else
             flag1=d<best;
          //find the closest distance to the obstacle
          if(flag1)
          {
             best=d;
             besti=heading+i;
          }
       }
    }
}

Bug 2 Algorithm 

Bug algorithms are greedy algorithms and hence not always predictable.

At initialization the MLine is defined as the line joining the starting location and the goal.
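
The Bug 2 listing below calls mline.isMLine(position). A possible sketch of that membership test, with the tolerance value and the free-function form being assumptions, is:

C++
#include <cmath>

// assumed sketch of the MLine membership test: the point is on the MLine
// if its perpendicular distance to the segment start->goal is below a small
// tolerance and its projection lies between the two endpoints
bool isMLine(const FPoint2f &p, const FPoint2f &start, const FPoint2f &goal)
{
    float vx = goal.x - start.x, vy = goal.y - start.y;
    float wx = p.x - start.x,    wy = p.y - start.y;
    float len = sqrt(vx * vx + vy * vy);
    // perpendicular distance via the 2D cross product
    float distToLine = fabs(vx * wy - vy * wx) / len;
    // projection of p onto the segment, normalized to [0,1]
    float t = (vx * wx + vy * wy) / (len * len);
    return distToLine < 2.0f && t >= 0.0f && t <= 1.0f;
}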

The leave point is the point at which the robot transitions from the boundary following state to the motion towards the goal state.

In the Bug 2 algorithm the leave point is a point on the MLine encountered during boundary following, after the hit point.

The leave point is the point at which the robot is able to leave the surface of the obstacle and head towards the goal: the point on the boundary, on the other side of the obstacle, that lies along the line joining the hit point and the goal.

C++
//check if the point is on the MLine
if(mline.isMLine(position)==true)
{
    //detected hit point
    if(hl.size()==0)
    {
        hl.push_back(position);
    }
    //possible leave point
    else
    {
        //distance from hit point to goal
        float d1=dist(mline.hit,goal.position);
        //distance from candidate leave point to goal
        float d2=dist(position,goal.position);
        //if the candidate is farther, it is not a valid leave point
        if(d2>d1)
            return;
        //enter the motion towards goal state
        m.value=m.Motion;
        //initialize the heading
        computeHeading();
        //compute unit vectors along the heading
        float dx=cos(heading*PI/180);
        float dy=sin(heading*PI/180);
        //update the position
        position.x=position.x+3*dx;
        position.y=position.y+3*dy;
        prevdelta=FPoint2f(dx,dy);
        //clear the stack
        hl.pop_back();
    }
}

In some situations the Bug 2 algorithm may take the robot farther away from the target, or the robot may get stuck in a loop and never reach the target. To avoid such situations a constraint is placed that the leave point must be closer to the goal than the hit point.

Let the distance between the source and the goal be denoted by D, and let the perimeter of the i-th obstacle be denoted by p_i. The shortest distance the robot can travel using the Bug 2 algorithm is D, while in the worst case the path length is bounded by D + sum_i (n_i * p_i / 2), where n_i is the number of times the MLine intersects the i-th obstacle.

Another variation is to check whether there are obstacles along the direction to the goal. If no obstacles are detected, the state is changed to motion towards the goal. The freeheading flag is set if there are no obstacles in the direction of the goal. However, since the range/tactile sensor has a limited range, a free heading does not necessarily indicate a free path all the way to the goal.

Additional constraints, such as requiring the potential leave point to be closer to the goal than the previous hit point and constraints on the heading angle, are imposed to ensure that boundary following is exited at a proper position.

C++
freeheading=false;
ScanPoint pp;
for(int k=0;k<scan.size();k++)
{
    ScanPoint p=scan[k];
    //absolute direction of the scan point
    float h1=p.index+heading;
    //normalize the angle to the range (-180,180]
    if(h1<=-180)
        h1=h1+360;
    if(h1>180)
        h1=h1-360;
    h1=(int)h1;
    //the original listing is truncated here; presumably h1 is compared
    //with the direction towards the goal to decide whether the freeheading
    //flag can be set, as described in the text above
}

Analysis 

When the robot encounters the obstacle boundary it will move in a clockwise or anticlockwise direction until it encounters the MLine again.

It may happen that the robot intersects the same obstacle multiple times.

Code

The code for the testing and training utility can be found at CODE

PDF

The PDF version of the document can be found at DOCUMENT

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)