Genetic and Ant Colony Optimization Algorithms

Introduction

I have recently become very interested in the area of genetic algorithms and Ant Colony Optimization techniques. I was determined to write a complete program demonstrating these two techniques. In particular I wanted to compare the efficiency of these two approaches in the area of finding solutions to the Traveling Salesman Problem (TSP).

Traveling Salesman Problem

Buckland (2002, pp. 118) succinctly summarizes the problem as, "Given a collection of cities, the traveling salesman must determine the shortest route that will enable him to visit each city precisely once and then return back to his starting point."

He goes on to explain that this is an example of what mathematicians call NP-complete problems. As more cities are added, the computational effort required to solve the problem grows explosively: an algorithm that solves the TSP for fifty cities would need roughly a thousand-fold increase in computing power just to handle an additional ten cities. The number of distinct round trips grows factorially with the number of cities, since n cities allow (n-1)!/2 different tours.

Clearly, a "brute force" search becomes infeasible for anything but a small number of cities, and alternative algorithms need to be employed if a solution is to be found in reasonable time.
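
To make the growth concrete, here is a small stand-alone sketch (not part of the article's download) that simply prints the number of distinct tours, (n-1)!/2, as the city count n increases:

```cpp
// Illustrative only: shows how quickly the number of distinct round trips,
// (n - 1)! / 2 for n cities, explodes as cities are added.
#include <cstdio>

int main()
{
    double tours = 1.0;                     // running value of (n - 1)!
    for (int n = 3; n <= 30; ++n)
    {
        tours *= (n - 1);                   // build (n - 1)! incrementally
        std::printf("%2d cities -> %.3e distinct tours\n", n, tours / 2.0);
    }
    return 0;
}
```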

Background

The Algorithms Explained

Genetic Algorithm

The algorithm consists of the following fundamental steps:

  1. Initialization: chromosomes are randomly created. At this point, it is very important that the population is diverse. Otherwise, the algorithm may not produce good solutions.
  2. Evaluation: each chromosome is rated on how well the chromosome solves the problem at hand. A fitness value is assigned to each chromosome.
  3. Selection: chromosomes are selected for propagation into the next generation, with fitter chromosomes more likely to be chosen.
  4. Recombination: individual chromosomes and pairs of chromosomes are recombined, modified and then put back into the population.
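
To make these four steps concrete, the following is a deliberately simplified, stand-alone sketch (none of the names correspond to classes in the article's download). It evolves tours for cities placed on a circle, as in the article's demo; for clarity, recombination is reduced to copying plus a swap mutation, whereas the real project uses a permutation-safe crossover (PMX, described below).

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <vector>

typedef std::vector<int> Tour;
const int    kCities = 12;
const double kPi     = 3.14159265358979;

// Length of a closed tour over cities spaced evenly on a unit circle.
double TourLength(const Tour& t)
{
    double len = 0.0;
    for (int i = 0; i < kCities; ++i)
    {
        double a1 = 2.0 * kPi * t[i] / kCities;
        double a2 = 2.0 * kPi * t[(i + 1) % kCities] / kCities;
        len += std::sqrt(std::pow(std::cos(a1) - std::cos(a2), 2) +
                         std::pow(std::sin(a1) - std::sin(a2), 2));
    }
    return len;
}

// Roulette wheel selection (see the dedicated sketch further below).
int Select(const std::vector<double>& fitness, double total)
{
    double slice = total * (std::rand() / (RAND_MAX + 1.0)), soFar = 0.0;
    for (size_t i = 0; i < fitness.size(); ++i)
        if ((soFar += fitness[i]) > slice) return (int)i;
    return (int)fitness.size() - 1;
}

int main()
{
    // 1. Initialization: a population of random permutations of the cities.
    std::vector<Tour> pop(30);
    for (size_t i = 0; i < pop.size(); ++i)
    {
        for (int c = 0; c < kCities; ++c) pop[i].push_back(c);
        for (int c = kCities - 1; c > 0; --c)
            std::swap(pop[i][c], pop[i][std::rand() % (c + 1)]);
    }

    for (int epoch = 0; epoch < 200; ++epoch)
    {
        // 2. Evaluation: fitness is the inverse of the tour length.
        std::vector<double> fitness(pop.size());
        double total = 0.0, best = 1e9;
        for (size_t i = 0; i < pop.size(); ++i)
        {
            double len = TourLength(pop[i]);
            if (len < best) best = len;
            total += (fitness[i] = 1.0 / len);
        }
        if (epoch % 50 == 0) std::printf("epoch %3d  best %.3f\n", epoch, best);

        // 3. Selection + 4. Recombination (here: copy and swap-mutate).
        std::vector<Tour> newPop;
        while (newPop.size() < pop.size())
        {
            Tour child = pop[Select(fitness, total)];
            std::swap(child[std::rand() % kCities], child[std::rand() % kCities]);
            newPop.push_back(child);
        }
        pop.swap(newPop);
    }
    return 0;
}
```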

I am not going to discuss the details of genetic algorithms, as these are better explained elsewhere, beyond the two mechanisms that matter most here: a crossover operator that produces valid tours, and Roulette Wheel Selection. Each chromosome is encoded as a possible tour; for example, a tour of five cities might be encoded as 3,4,0,1,2. A difficulty with the TSP is that a simple crossover will not work. Consider the following situation, where crossover occurs at position 3.

Parent 1: 1 2 3 4 5
Parent 2: 3 5 2 1 4
Child 1:  1 2 3 1 4
Child 2:  3 5 2 4 5

The problem here is that Child 1 visits city 1 twice, which is not allowed, and Child 2 does not visit city 1 at all. A crossover operator that is guaranteed to produce only valid tours must therefore be used. A well-known, and perhaps the simplest to understand, operator of this kind is the Partially Matched Crossover (PMX). Buckland (2002, pp. 130-132) explains the technique as follows:

A crossing region is picked (the genes between the two x marks):

Parent 1: 1 2 x 3 4 x 5

Parent 2: 3 5 x 2 1 x 4

The following mapping is established.

3 -> 2

4 -> 1

Starting from children that are copies of the parents, we now take each mapping in turn and, in each child, swap the positions of the two mapped genes.

First iteration (mapping 3 with 2):

Child 1: 1 3 2 4 5

Child 2: 2 5 3 1 4

Second iteration (mapping 4 with 1):

Child 1: 4 3 2 1 5

Child 2: 2 5 3 4 1

The final crossed-over genes are valid permutations with no duplicates.
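
For reference, here is a minimal stand-alone sketch of this procedure (function names are illustrative, not taken from the article's source). Run on the example above, it reproduces the two children 4 3 2 1 5 and 2 5 3 4 1.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

typedef std::vector<int> Tour;

// Swap the positions of city values a and b within a tour.
static void SwapCities(Tour& tour, int a, int b)
{
    std::iter_swap(std::find(tour.begin(), tour.end(), a),
                   std::find(tour.begin(), tour.end(), b));
}

// Partially Matched Crossover; 'begin' and 'end' are the inclusive,
// zero-based bounds of the crossing region.
void CrossoverPMX(const Tour& mum, const Tour& dad,
                  Tour& baby1, Tour& baby2, int begin, int end)
{
    baby1 = mum;
    baby2 = dad;
    for (int pos = begin; pos <= end; ++pos)
    {
        int gene1 = mum[pos];   // the two genes mapped at this position
        int gene2 = dad[pos];
        if (gene1 == gene2)
            continue;
        SwapCities(baby1, gene1, gene2);
        SwapCities(baby2, gene1, gene2);
    }
}

int main()
{
    const int p1[] = { 1, 2, 3, 4, 5 };
    const int p2[] = { 3, 5, 2, 1, 4 };
    Tour mum(p1, p1 + 5), dad(p2, p2 + 5), baby1, baby2;

    CrossoverPMX(mum, dad, baby1, baby2, 2, 3);   // crossing region: positions 2..3

    for (size_t i = 0; i < baby1.size(); ++i) std::printf("%d ", baby1[i]);
    std::printf("\n");
    for (size_t i = 0; i < baby2.size(); ++i) std::printf("%d ", baby2[i]);
    std::printf("\n");
    return 0;
}
```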

Roulette Wheel Selection is a technique of choosing members from the population of chromosomes in a way that is proportional to their fitness. Buckland illustrates, "Imagine that the population's total fitness score is represented by a pie chart or roulette wheel. Now you assign a slice of the wheel to each member of the population. The size of the slice is proportional to that chromosome's fitness score: the fitter a member is, the bigger the slice of pie it gets. Now, to choose a chromosome, all you have to do is spin the wheel, toss in the ball and grab the chromosome that the ball stops on." (Buckland 2002, pp. 100)
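
A minimal stand-alone sketch of this idea follows (names are illustrative and do not come from the article's source). Spinning the wheel many times shows that each chromosome is picked roughly in proportion to its share of the total fitness.

```cpp
#include <cstdio>
#include <cstdlib>
#include <vector>

// Returns the index of the selected chromosome. 'fitness' must contain
// non-negative scores with a positive total.
int RouletteSelect(const std::vector<double>& fitness)
{
    double total = 0.0;
    for (size_t i = 0; i < fitness.size(); ++i)
        total += fitness[i];

    // "Spin the wheel": pick a random point on [0, total).
    double slice = total * (std::rand() / (RAND_MAX + 1.0));

    double soFar = 0.0;
    for (size_t i = 0; i < fitness.size(); ++i)
    {
        soFar += fitness[i];
        if (slice < soFar)
            return (int)i;            // the "ball" stopped on this slice
    }
    return (int)fitness.size() - 1;   // guard against rounding error
}

int main()
{
    const double f[] = { 1.0, 2.0, 3.0, 4.0 };       // fitness scores
    std::vector<double> fitness(f, f + 4);
    std::vector<int> hits(fitness.size(), 0);

    for (int spin = 0; spin < 100000; ++spin)
        ++hits[RouletteSelect(fitness)];

    // Expect roughly 10%, 20%, 30% and 40% of the spins respectively.
    for (size_t i = 0; i < hits.size(); ++i)
        std::printf("chromosome %u: %d hits\n", (unsigned)i, hits[i]);
    return 0;
}
```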

Ant Algorithm

This algorithm is inspired by the observation of real ants. Individually, each ant is blind, frail and almost insignificant, yet by cooperating with one another the colony demonstrates complex behavior. One example is the ability to find the shortest route to a food source or some other interesting landmark. This is done by laying down special chemicals called "pheromones." As more ants use a particular trail, the pheromone concentration on it increases, attracting still more ants. In our example, an artificial ant is placed randomly in each city and, during each iteration, chooses the next city to go to. This choice is governed by the following formula, as described by Dorigo (1997):

Each ant located at city $i$ hops to a city $j$, selected among the cities that have not yet been visited, according to the probability:

$$p_{ij}^{k} = \frac{[\tau_{ij}]^{\alpha}\,[\eta_{ij}]^{\beta}}{\sum_{l \in J_{i}^{k}} [\tau_{il}]^{\alpha}\,[\eta_{il}]^{\beta}}, \qquad j \in J_{i}^{k}$$

where:

  • $p_{ij}^{k}$ is the probability that ant $k$ in city $i$ will go to city $j$.
  • $J_{i}^{k}$ is the set of cities that have not yet been visited by ant $k$ in city $i$.
  • $\tau_{ij}$ is the amount of pheromone on the edge between cities $i$ and $j$, and $\eta_{ij} = 1/d_{ij}$ is the "visibility" of city $j$, i.e. the inverse of the distance between the two cities.
  • $\alpha$ is the relative importance of the pheromone trail.
  • $\beta$ is the relative importance of the distance between cities.
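
As a concrete illustration of this transition rule, here is a minimal sketch (the data layout and names are illustrative, not those of the article's CAntSystem class). The next city is drawn with probability proportional to $\tau^{\alpha} \cdot (1/d)^{\beta}$ over the unvisited cities:

```cpp
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Pick the next city for an ant, given the pheromone level and distance from
// the current city to every city, and the list of cities not yet visited.
int ChooseNextCity(const std::vector<int>&    unvisited,
                   const std::vector<double>& tau,     // pheromone to each city
                   const std::vector<double>& dist,    // distance to each city
                   double alpha, double beta)
{
    std::vector<double> weight(unvisited.size());
    double total = 0.0;
    for (size_t n = 0; n < unvisited.size(); ++n)
    {
        int j = unvisited[n];
        weight[n] = std::pow(tau[j], alpha) * std::pow(1.0 / dist[j], beta);
        total += weight[n];
    }

    // Roulette-wheel draw over the weights, i.e. the probabilities p_ij^k.
    double slice = total * (std::rand() / (RAND_MAX + 1.0));
    double soFar = 0.0;
    for (size_t n = 0; n < unvisited.size(); ++n)
    {
        soFar += weight[n];
        if (slice < soFar)
            return unvisited[n];
    }
    return unvisited.back();   // guard against rounding error
}

int main()
{
    // Toy example: three unvisited cities with different pheromone/distance.
    const int    u[] = { 0, 1, 2 };
    const double t[] = { 1.0, 1.0, 2.0 };
    const double d[] = { 1.0, 2.0, 1.0 };
    std::vector<int>    unvisited(u, u + 3);
    std::vector<double> tau(t, t + 3), dist(d, d + 3);

    int counts[3] = { 0, 0, 0 };
    for (int n = 0; n < 100000; ++n)
        ++counts[ChooseNextCity(unvisited, tau, dist, 1.0, 1.0)];
    std::printf("%d %d %d\n", counts[0], counts[1], counts[2]);   // ~2:1:4 ratio
    return 0;
}
```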

Therefore, the probability that a city is chosen is a function of how close the city is and of how much pheromone already exists on that trail. It is further possible to determine which of the two carries more weight by tuning the $\alpha$ and $\beta$ parameters. Once a tour has been completed (i.e. each city has been visited exactly once by the ant), pheromone evaporation on the edges is calculated. Each ant then deposits pheromone along its complete tour in a quantity calculated from the following formula (Dorigo 1991):

$$\tau_{ij} \leftarrow \rho\,\tau_{ij} + \sum_{k} \Delta\tau_{ij}^{k}, \qquad \Delta\tau_{ij}^{k} = \frac{Q}{L_{k}} \;\text{if ant } k \text{ used edge } (i,j) \text{ in its tour, and } 0 \text{ otherwise}$$

where:

  • $\rho\,\tau_{ij}$ multiplies the pheromone concentration on the edge between cities $i$ and $j$ by $\rho$ (rho), which is called the "evaporation constant." This value lies between 0 and 1; the pheromone evaporates more rapidly for lower values.
  • $\Delta\tau_{ij}^{k}$ is the amount of pheromone ant $k$ deposits on an edge, defined as $Q/L_{k}$, where $Q$ is a constant and $L_{k}$ is the length of the tour created by this ant. Intuitively, short tours result in higher levels of pheromone being deposited on their edges.
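
A minimal sketch of how this update could be applied follows (again with illustrative names rather than the project's own; the symmetric distance matrix and the value of the constant Q are assumptions made purely for the example):

```cpp
#include <cstdio>
#include <vector>

typedef std::vector<std::vector<double> > PheromoneMatrix;   // tau[i][j]

// Evaporate all trails by rho, then let each ant k deposit Q / L_k on every
// edge of its completed tour.
void UpdatePheromone(PheromoneMatrix& tau,
                     const std::vector<std::vector<int> >& tours,       // one tour per ant
                     const std::vector<double>&            tourLength,  // L_k per ant
                     double rho, double Q)
{
    // Evaporation: tau_ij <- rho * tau_ij on every edge.
    for (size_t i = 0; i < tau.size(); ++i)
        for (size_t j = 0; j < tau[i].size(); ++j)
            tau[i][j] *= rho;

    // Deposit: shorter tours leave more pheromone on their edges.
    for (size_t k = 0; k < tours.size(); ++k)
    {
        const std::vector<int>& tour = tours[k];
        double delta = Q / tourLength[k];
        for (size_t n = 0; n < tour.size(); ++n)
        {
            int from = tour[n];
            int to   = tour[(n + 1) % tour.size()];   // close the loop
            tau[from][to] += delta;
            tau[to][from] += delta;                   // symmetric TSP
        }
    }
}

int main()
{
    // Toy example: 3 cities, one ant that toured 0 -> 1 -> 2 -> 0 with length 10.
    PheromoneMatrix tau(3, std::vector<double>(3, 1.0));
    std::vector<std::vector<int> > tours(1);
    const int t[] = { 0, 1, 2 };
    tours[0] = std::vector<int>(t, t + 3);
    std::vector<double> lengths(1, 10.0);

    UpdatePheromone(tau, tours, lengths, 0.6, 100.0);   // rho = 0.6, Q = 100

    for (int i = 0; i < 3; ++i)
        std::printf("%.2f %.2f %.2f\n", tau[i][0], tau[i][1], tau[i][2]);
    return 0;
}
```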

Quick User Manual

At the heart of the application lie the two algorithms used to solve the Traveling Salesman Problem. Each algorithm is brought into a view by selecting it from the main frame's toolbar. A view is the window that is presented to the user. The genetic algorithm view is shown below:

Each view is divided into two horizontal sections.

  1. The top half shows a graphical view of the current performance of each algorithm.
  2. The bottom half shows a graph of the current solution charted against the best known solution. In most demonstrations of the TSP, cities are placed at random; I chose instead to distribute the cities in a circular fashion. This makes it easy to calculate the shortest possible tour, and to visually assess how far the algorithm is from the optimal solution.

A Simulation is Controlled by the Control Buttons

Each button is self-explanatory: Start begins the simulation, Stop halts it, Reset resets it, and Step advances it one cycle at a time. The text information shows the following:

  • Epoch is the number of cycles the simulation has completed so far.
  • Best So Far is the shortest tour found so far.
  • Best Possible is the length of the optimal tour for the current set of cities.
  • Elapsed Time is the time spent inside the algorithm. This excludes the time spent "drawing" the solution to the screen. See below for further information.

Graph of the above information:

The blue line is a plot of Epoch vs. the shortest distance found by the algorithm to date. The green line represents the shortest possible tour (optimal solution). When the two lines converge, a solution has been found.

Design and Implementation

I chose C++ as the language in which to program this application. A compiled program is generally faster than an interpreted one, and both algorithms, as well as the visualization, are computationally expensive. C would potentially be faster still, but the benefits of object orientation would be lost. Other possible languages were Delphi, Visual Basic and C#, but I don't have enough experience in any of these languages to attempt a project of this size. I chose to use the Microsoft Foundation Classes (MFC) application framework. The benefits of using it are:

  • Application framework applications are small and fast
  • The Visual C++ tools reduce coding drudgery
  • The framework is rich in features

One of the main disadvantages is that it takes a long time to understand the complexity of the underlying model. Also, once the developer moves away from wizard-generated code, it is very easy to get lost, even with the debugger. The project was developed and compiled using Visual Studio .NET Academic Version 7.1.3088. The code used to implement the genetic algorithm is based on the work of Mat Buckland (2002) in his book "AI Techniques for Game Programming." The code for the ACO algorithm is partially based on the work of M. Jones (2003) in his book "AI Application Programming." Several other code sources were also used:

  • CMemDC is a class used for double buffering, a technique used in computer animation to avoid screen flicker, which was a major problem here because the screen is updated (repainted) several times each second. The image is drawn to an off-screen buffer and the finished image is then copied to the screen in one step (a generic sketch of the idea follows this list).
  • CAutoFont is a class used for font manipulation.
  • CChart is a class used for charting.
  • CPerfTimer is used for timing.
  • CSideBannerWnd is used to display the panel on the left-hand side of the window.
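
For illustration, here is a generic sketch of the double-buffering idea using plain MFC calls; the project itself relies on the CMemDC helper class, and CSomeView below is a placeholder view class, not one from the download.

```cpp
// Draw everything into an off-screen bitmap, then copy it to the screen once.
void CSomeView::OnDraw(CDC* pDC)
{
    CRect rect;
    GetClientRect(&rect);

    // Off-screen device context and bitmap compatible with the target DC.
    CDC memDC;
    memDC.CreateCompatibleDC(pDC);
    CBitmap bitmap;
    bitmap.CreateCompatibleBitmap(pDC, rect.Width(), rect.Height());
    CBitmap* pOldBitmap = memDC.SelectObject(&bitmap);

    // All the flicker-prone drawing happens on the off-screen buffer.
    memDC.FillSolidRect(&rect, RGB(255, 255, 255));
    // ... draw the cities, tours and charts into memDC here ...

    // Copy the finished image to the screen in a single operation.
    pDC->BitBlt(0, 0, rect.Width(), rect.Height(), &memDC, 0, 0, SRCCOPY);

    memDC.SelectObject(pOldBitmap);
}
```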

Initially I thought that the document would hold information common to both algorithms, including the number of cities, city position and shortest possible tour. However, as the project progressed, I realized that each algorithm is sufficiently different to warrant its own data structure. What was needed was a way to switch between different views (derived from a common ancestor).

"Sometimes the way a document is being visualized needs to be significantly changed. For example, in PowerPoint, you can see the slides in a WYSIWYG form or view only the text in a text edit window. In the MFC world, one can find an amazingly large quantity of programs that implement this behavior, defining one CView descendant and making it responsible for all visualization changes. This path has several disadvantages:

  1. Big, difficult to manage class definition file.
  2. Diminished reusability: you could reuse one big CView descendant or nothing.
  3. Hard to maintain (what if you want to modify or add a new "look" to the document?).
  4. Wasted memory: some variables (objects) will exist in memory and won't be used." (Lodos, 1999)

The author then goes on to explain how to define several different views and how to switch between them at run time, and gives example code for doing so.
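
The general shape of that technique is roughly as follows. This is only a sketch of the common MFC pattern, not the exact code from the referenced article or from this project, and CMainFrame::SwitchToView is a hypothetical helper.

```cpp
#include <afxpriv.h>   // WM_INITIALUPDATE

// Replace the frame's active view with a new view of class pNewViewClass,
// keeping the same document.
void CMainFrame::SwitchToView(CRuntimeClass* pNewViewClass)
{
    CView* pOldView = GetActiveView();
    if (pOldView == NULL || pOldView->IsKindOf(pNewViewClass))
        return;                                   // nothing to switch

    // Free up the standard pane ID so the new view can take it.
    pOldView->SetDlgCtrlID(AFX_IDW_PANE_FIRST + 1);

    CCreateContext context;
    context.m_pNewViewClass = pNewViewClass;
    context.m_pCurrentDoc   = pOldView->GetDocument();

    CView* pNewView = STATIC_DOWNCAST(CView, CreateView(&context));
    if (pNewView != NULL)
    {
        pNewView->SendMessage(WM_INITIALUPDATE);  // OnInitialUpdate is protected
        SetActiveView(pNewView);
        pNewView->ShowWindow(SW_SHOW);
        pOldView->DestroyWindow();
        RecalcLayout();
    }
}
```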

The following sequence diagram shows the basic interaction between the CACOView and CAntSystem classes.

The following sequence diagram shows the basic interaction between the CGAView and CGASystem classes.

Performance

I next turned my attention to the ACO and looked at how certain parameters in the algorithm affect performance.

Parameters used here are City No: 40, alpha: 1, beta: 1 and rho: 0.6. These findings suggest that it might be possible to write a genetic algorithm that would optimize the parameters. This might make an interesting project somewhere down the track.

Conclusion

This has been both a rewarding and a difficult project. I previously had little knowledge of MFC programming and thought that it would be relatively easy to master. It turns out that I was wrong, and it took me a very long time to get the program up and running. Despite the steep learning curve, I was thrilled to actually produce a working program and learned a lot along the way about genetic algorithms and ant colony optimization. I am very grateful to all the people who have contributed to this fantastic site; I have found many of its articles very useful in this project. Keep up all the good work, guys.

References

  • Buckland, M., 2002, AI Techniques for Game Programming, Premier Press, United States of America.
  • Dorigo, M. & Gambardella, L. M., 1997, "Ant colonies for the traveling salesman problem", BioSystems, 43, 73-81.
  • Jearakul, C., 1999, 2D and 3D Waterfall Chart Control, [Online], Available: http://www.codeguru.com/controls/Waterfall.shtml [Accessed 3/9/2003]
  • Jones, M., 2003, AI Application Programming, Charles River Media.
  • Lodos, J., 1999, Replacing a view in a doc-view application, [Online], Available: http://www.codeguru.com/doc_view/ReplacingView.shtml [Accessed 2/9/2003]
  • Nordmeyer, J., Automatic Font Handling Class, [Online], Available: http://www.codeproject.com/gdi/autofont.asp [Accessed 28/10/2003]
  • Prosise, J., 1999, Programming Windows with MFC, 2nd Edition, Microsoft Press, Redmond, Washington.
  • Rule, K., 1999, Flicker Free Drawing in MFC, [Online], Available: http://www.codeproject.com/gdi/flickerfree.asp [Accessed 24/9/2003]
  • Wyant, D., 2002, CPerfTimer timer class, [Online], Available: http://www.codeproject.com/datetime/perftimer.asp [Accessed 24/9/2003]

License

This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below.
