A Comparative Study Of A Class Of Nature Inspired Algorithms Computer Science Essay


Evolutionary Computation (EC) is a vibrant area of research that has been enormously successful, producing hundreds of publications within a decade. Among the most widely known approaches are Ant Colony Optimization, Particle Swarm Optimization, the Bees Algorithm and the Genetic Algorithm, all of which can be applied to a wide variety of problems. In fact, there are so many papers that it is sometimes difficult to grasp the pros, cons and applicability of each. This paper attempts to compare the basic idea, applicability, limitations and effectiveness of these four approaches.

More recently, computer scientists have found in the study of social insects' behavior a source of inspiration for the design and implementation of solutions to problems that cannot be solved using conventional algorithms. Natural systems have long been rich topics for scientific researchers: individuals following simple behaviors can cooperate to create a system able to solve complex real-world problems and perform very sophisticated tasks. In reality there are many examples of such systems, including ant colonies, bird flocking, fish shoaling, animal herding, bacterial growth, bee colonies, the human nervous system, gene regulation, and many more.

Evolutionary Computing is the umbrella name for a range of problem-solving techniques from different domains, including optimization, signal processing, bioinformatics and so on. These techniques offer a framework in which it is comparatively easy to incorporate prior knowledge about the problem, yielding a more efficient exploration of the space of possible solutions.

The techniques discussed in this paper are Ant Colony Optimization (ACO), Bee Colony Optimization (BCO), Genetic Algorithms (GA) and Particle Swarm Optimization (PSO), covered in the succeeding sections.


Ant Colony Optimization is a family of artificial-intelligence search algorithms for finding optimal solutions; its principal member is the Ant System [1, 2, 3]. Ants, despite being blind and small in size, are able to find the shortest route to their food source. They use their antennae and deposited pheromone to communicate with each other. ACO is derived from the behavior of real ants: their coordination while searching for solutions to a local problem, and the maintenance of a list recording the information gathered by each ant.

ACO can be used to solve combinatorial optimization problems [4]. A combinatorial optimization problem can be defined over a set of components [5] C = {c1, ..., cn}. A subset S of components represents a solution of the problem; F is the set of feasible solutions, thus a solution S is feasible if and only if S ∈ F. A cost function z is defined over the solution domain, z: F → R, the objective being to find a minimum-cost feasible solution S*, i.e., to find S* such that S* ∈ F and z(S*) ≤ z(S) for all S ∈ F. In ant colony optimization [6], ants build a solution to the combinatorial optimization problem by traversing a construction graph G = (V, E). The graph is fully connected and contains a set of vertices V and edges E. The component set C can be associated with either V or E. Ants build partial solutions by moving from vertex to vertex along the edges, meanwhile depositing pheromone on the components (on either V or E). These pheromone trails serve as important information guiding later ants toward good food destinations.

Moreover, [7] ACO deals with an important process, i.e. trail pheromone evaporation, that is decreasing in amount of pheromone deposited on every path by the time. Updating the trail is performed when ants either complete their search or get the shortest path to reach the food source. Each combinatorial problem defines its own updating criteria depending on its own local search and global search respectively.

It is clear from the above discussion that ants are able to find the shortest path on the basis of the pheromone information laid on the ground by other ants of the same colony. An ant searching for food tries all possible trails, but eventually chooses, with some probability, the trail with the largest pheromone deposit [8], meanwhile increasing the pheromone level on that path. On returning to its starting point, the ant uses the same path, increasing the pheromone level again. If the whole colony follows the shortest path, the probability of choosing that path in the future becomes exceptionally high, while the probability of choosing other paths falls. The pheromone evaporation mechanism makes longer paths nearly extinct, since pheromone on them evaporates faster than it is replenished, unlike on the shorter path.

Therefore, the probability that ant k moves from node i to node j at time t is given by [8]:

Pijk(t) = [τij(t)]^α [ηij]^β / Σ(l ∈ allowedk) [τil(t)]^α [ηil]^β,  if j ∈ allowedk

Pijk(t) = 0, otherwise

where τij(t) is the pheromone level on path (i, j) of the graph, ηij is the heuristic function, and allowedk is the array (tabu) list of nodes ant k has not yet visited.

The heuristic function guides the search for an optimal path using problem-specific information, and the array list stores the previously visited paths in memory.

The rule for updating the pheromone after the ants complete their tours is given by [8]:

τij ← (1 − ρ) τij + Σk Δτijk

where ρ ∈ [0, 1] is the pheromone trail evaporation rate and Δτijk is the level of pheromone deposited on path (i, j) by each of the g ants after completing the construction graph.

Fig 1: The nature of an ant is to discover every possible path to reach its destination. The solid arrows show the shortest path, which all ants will follow in future to reach the food source.
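The transition probability and pheromone-update rule above can be combined into a minimal Ant System for the TSP. The sketch below is illustrative only: the parameter values, function name and distance-matrix interface are assumptions, not a canonical implementation.

```python
import random

def ant_system_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0,
                   rho=0.5, q=1.0, seed=0):
    """Minimal Ant System sketch for the TSP: probabilistic transitions
    weighted by pheromone (tau) and heuristic (eta = 1/distance),
    followed by evaporation and deposit on the traversed edges."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]          # pheromone on each edge
    eta = [[0.0 if i == j else 1.0 / dist[i][j]  # heuristic information
            for j in range(n)] for i in range(n)]
    best_tour, best_len = None, float("inf")

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, visited = [start], {start}
            while len(tour) < n:
                i = tour[-1]
                allowed = [j for j in range(n) if j not in visited]
                # transition rule: weight ~ tau^alpha * eta^beta
                weights = [(tau[i][j] ** alpha) * (eta[i][j] ** beta)
                           for j in allowed]
                j = rng.choices(allowed, weights=weights)[0]
                tour.append(j)
                visited.add(j)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporation, then deposit: tau <- (1 - rho) * tau + sum(q / L_k)
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for tour, length in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += q / length
                tau[b][a] += q / length
    return best_tour, best_len
```

On a small symmetric instance (for example, four cities on a unit square) the sketch reliably recovers the perimeter tour as the shortest cycle.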

Advantages of ACO

It has a powerful capacity to find solutions to combinatorial optimization problems.

It has the advantage of distributed computing.

It is also easy to combine with other algorithms.

Thus ACO displays powerful robustness.

ACO algorithms have an advantage over simulated annealing and Genetic Algorithm approaches to similar problems (such as the TSP) when the graph may change dynamically: the ant colony algorithm can be run continuously and adapts to changes in real time.

Applications of ACO

ACO is an optimization algorithm inspired by the collective foraging behavior of ants seeking the food source nearest to the nest. The first of its applications was the Traveling Salesman Problem (TSP) [9], which can be seen as a direct translation of the food-search problem. Soon, the metaphor was extrapolated to different types of routing problems, such as the vehicle routing problem [10]. Various application areas of ACO include:


TSP (Traveling Salesman Problem)[9]

Vehicle Routing[23]

Sequential Ordering


QAP (Quadratic Assignment Problem)[5]

Graph Coloring

Generalized Assignment

Frequency Assignment


Job Shop

Project Scheduling

Group Shop



Set Covering

Network Routing[12]

Connection oriented network routing

Connection network routing

Optical network routing


Shortest Common Sequence

Nearest Neighbor Node Choosing Rules[13]

2D-HP protein folding

Fuzzy systems

Limitations of ACO

Though ant colony algorithms can solve some optimization problems successfully, their convergence cannot be proven in general [13].

They are prone to falling into local optima [11]. This is because ant colony algorithms reinforce the pheromone along the current better path; after a certain number of iterations, the pheromone on the best path becomes very strong while the pheromone on worse paths becomes very weak. All the ants then converge on this best path and it is very difficult to jump away from it, which increases the probability that the returned solution is only a local optimum.


A bee colony can be thought of as a swarm whose individual agents are bees. Each bee works at the low level as a component of the swarm, which operates at the global level to form a system. Thus, the system's global behavior is determined by its individuals' local behavior, where the interactions and coordination among individuals lead to an organized team-work system.

The exchange of information among bees leads to the formation of a tuned collective knowledge. A colony of honey bees consists of:


The Queen (One)

The queen's job is to lay eggs, which helps in starting new colonies.

Male Drones (Many)

The drones' job is to mate with the queen.

Non-Reproductive Female Workers (Thousands)

The worker bees build the honeycomb; the young ones clean the colony, feed the queen and drones, guard the colony, and collect food. As nectar is the bees' energy source, two kinds of worker bees are responsible for food: scout bees and forager bees. A bee does many things over its life history, and does not become a scout or forager until late in its life [14].


Scout Bees

These are responsible for carrying out the exploration process of the search space.

Forager Bees

These control the exploitation process.

However, the exploration and exploitation processes must be carried out together by the colony's explorers and exploiters. The swarm of bees behaves in such a way as to find and capture the food containing the most energy while expending the least possible amount of time [15]. There are two roles a bee can take in the foraging process: scout or forager. Both roles make use of the waggle dance phenomenon.

Waggle Dance

When a bee finds a new food source, it performs a waggle dance upon returning to its hive. If the scout finds a rich food source patch a long way from the hive, it sips some nectar from the source before flying back. Other bees can smell the rich food source on the scout's body, and sometimes the scout gives them a taste of the nectar too. Then the scout does the waggle dance, and the other bees crowd around. The waggle dance tells them how far away the food is and the direction they should fly to find it. When the scout does the waggle dance, it walks in a straight line, then turns and loops back to where it started. It does this again, walking along the line but looping back in the other direction. It dances like this, in a figure eight, over and over again. When it walks along the straight line, the bee waggles its tail back and forth and makes a buzzing sound. The straight line tells the other bees the way to the food, and the number of waggles tells them how far away it is. The workers fly off to find the food. Some of these bees will fly back and do the waggle dance, too. That way, many bees will bring food back to the hive.

FIGURE 2. Waggle dance of bee

The following subsections present these two scenarios:

The Behavior of Scout Scenario

Scout bees fly around randomly, searching for food. When they find a rich food source, they fly back to the colony and start dancing in a particular region of the comb to communicate with other bees. The behavior of the scout scenario is summarized in the following activities:

The scout bees start flying from their colony in search of food sources, in a random way.

Once they finish a full trip, they return to their colony.

When a scout bee arrives at the colony, it goes inside and announces its presence by wing vibrations, communicating its message.

If it has found a nearby rich food source, it performs a circular dance. The nearby bees observe this dance, smell the scout for the identity of the food, and listen to the intensity of the wing vibrations for an indication of the value of the food source.

No directions are given if the source is nearby; directions are communicated only for distant sources.

The abstract convention the scout uses is that the upward direction on the comb represents the position of the sun. Because bees can see polarized light, they can tell the sun's position without actually seeing it. The scout dances at a precise angle from the vertical, equal to the horizontal angle between the sun and the food source as seen from the colony exit.

Next, the scout bee must tell the other bees how far away the rich food source is. This is done by waggling the abdomen from side to side: the slower the waggling, the farther away the food source.

Thus the dance of the scouts indicates the direction, distance, and quality of the food source [16].

The Behavior of the Forager Scenario

The behavior of the forager bees can be summarized in the following steps:

The bees in the colony closely follow the scout bees to learn these directions, and also learn the smell of the food on the scout's body, so they can identify and confirm the food source when they arrive at its location.

As the sun moves across the sky, the bees use an accurate clock sense to adjust for the changing sun position relative to the food source and the colony exit.

In the BCO algorithm, not all bees that complete a tour will dance. Only bees that produce a shorter tour length than their own previous best are allowed to dance.

FIGURE 3: Fig representing basic flow of BCO
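The scout/forager division of labor above can be sketched as a minimal Bees Algorithm for function minimization. This is an illustrative sketch under assumed parameter names (m_sites, e_elite, neighborhood size ngh) and a hypothetical 2-D test function, not a definitive implementation.

```python
import random

def bees_algorithm(f, bounds, n_scouts=20, m_sites=5, e_elite=2,
                   nep=7, nsp=3, ngh=0.1, n_iters=60, seed=1):
    """Minimal Bees Algorithm sketch: scout bees explore the search
    space at random, while forager bees exploit the best-ranked sites
    in small neighbourhoods (elite sites recruit more foragers)."""
    rng = random.Random(seed)
    lo, hi = bounds
    clamp = lambda x: min(hi, max(lo, x))
    rand_point = lambda: [rng.uniform(lo, hi) for _ in range(2)]
    scouts = [rand_point() for _ in range(n_scouts)]
    best = min(scouts, key=f)
    for _ in range(n_iters):
        scouts.sort(key=f)                        # rank sites by quality
        new_pop = []
        for rank, site in enumerate(scouts[:m_sites]):
            # elite sites recruit more foragers (exploitation)
            n_forage = nep if rank < e_elite else nsp
            neigh = [[clamp(x + rng.uniform(-ngh, ngh) * (hi - lo))
                      for x in site] for _ in range(n_forage)]
            new_pop.append(min(neigh + [site], key=f))  # best bee per site
        # remaining bees keep scouting at random (exploration)
        new_pop += [rand_point() for _ in range(n_scouts - m_sites)]
        scouts = new_pop
        best = min(scouts + [best], key=f)
    return best, f(best)
```

Run on a simple sphere function, the exploiting foragers steadily pull the best site toward the optimum while the random scouts keep exploring.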

Advantages of BCO

Following are some of the prominent factors which make BCO an effective technique for solving certain kind of problems.

Team work: Scout and forager bees work together to reach a rich food source.

Learning: It takes a lot of learning of the waggle dance phenomenon for a bee to become a scout.

Memory: Searching for and returning to food sources requires memory-based search.

Exploration and Exploitation: These processes give a two-way approach to solving any proposed problem.

Areas of Applicability of BCO

In recent years, BCO has received increased attention from the scientific community, as can be seen from the growing number of publications and the different fields of application. The Bees Algorithm, inspired by the food-foraging behavior of honey bees, has found many applications in fields such as:

Training neural networks for pattern recognition[13]

Forming manufacturing cells

Scheduling jobs for a production machine

Finding multiple feasible solutions to preliminary design problems[16]

Data clustering[14]

Optimizing the design of mechanical components

Multi-Objective Optimization[14]

Tuning a fuzzy logic controller for a robot gymnast

Computer Vision and Image Analysis

Optimization Algorithms[15]

Numerical function optimization

Travelling salesman problem

Generation of pair-wise test set

Sudoku Puzzles

Optimal reservoir operation

Limitations of BCO

The mapping of the waggle dance to the solution of a given problem is a complicated task.

It requires thorough knowledge of the mathematical operators that can be used to distinguish the various dance intensities.

Prior knowledge of various factors (such as the distance-energy relationship, the waggle dance, and food source quality) is required.


A Genetic Algorithm (GA) is a form of evolutionary algorithm often used as a search heuristic [18]. It was invented by John Holland in the 1960s at the University of Michigan, with the goal of studying the phenomena of evolution and adaptation occurring in nature [18]. He also wanted to develop ways in which the mechanisms of natural adaptation could be imported into computer systems [19]. A GA moves from one population of chromosomes to a new population using natural selection and genetic operators.

Genetic algorithms are mostly used in search and optimization problems. They maintain a population of structures that evolves on the basis of recombination, selection, crossover and mutation, which are the basic steps of a GA.


Selection and Recombination

Recombination selects elements with high fitness values from the previous population; elements with better fitness are then created through crossover and mutation. The process is repeated until a maximum level of fitness is achieved.


FIGURE 4. Basic genetic algorithm operations


Crossover

Crossover recombines genetic individuals by selecting bits from the parent elements to create new offspring. A particular crossover point is selected; everything before this point is copied from the first parent and everything after it from the second. The sub-sequences before and after the crossover point are exchanged to produce two new offspring, as represented in Figure 5. It is unlikely that the new population after recombination will be identical to either parent population.

FIGURE 5. Representation of Crossover operation

If crossover would place two identical genetic elements in the same offspring, a random genetic element not present in the current population is selected instead of including the duplicate. After this, mutation takes place.


Mutation

Mutation locally modifies the offspring generated by crossover, with a small user-specified mutation probability. It randomly alters the new offspring; in a binary encoding, for example, a few randomly chosen bits may be flipped from 1 to 0 or vice versa. Mutation provides a way to add variation to a new population: the new offspring is identical to its parent except for the one or more changes made.

FIGURE 6. Representation of Mutation operation

Fitness Function

The most difficult and most important part of designing a GA is the fitness function, which determines how well a candidate solution solves the problem. It varies greatly from one type of problem to another. The outcome is the association of a fitness value with each individual [18]. Fitness (evaluation) functions are user supplied.
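The operators described above (selection, crossover, mutation) and a user-supplied fitness function combine into the basic GA loop. The following is a minimal sketch: the bit-string encoding, tournament selection, and the parameter values are illustrative assumptions, not the only way to build a GA.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, n_gens=60,
                      p_cross=0.9, p_mut=0.02, seed=2):
    """Minimal GA sketch: tournament selection, one-point crossover and
    bit-flip mutation. `fitness` maps a bit list to a value to maximize."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(n_gens):
        def select():
            # tournament selection: fitter of two random individuals wins
            a, b = rng.choice(pop), rng.choice(pop)
            return list(a if fitness(a) >= fitness(b) else b)
        children = []
        while len(children) < pop_size:
            p1, p2 = select(), select()
            if rng.random() < p_cross:
                # one-point crossover: swap tails at a random point
                pt = rng.randrange(1, n_bits)
                p1, p2 = p1[:pt] + p2[pt:], p2[:pt] + p1[pt:]
            for child in (p1, p2):
                for i in range(n_bits):
                    # bit-flip mutation with small probability
                    if rng.random() < p_mut:
                        child[i] = 1 - child[i]
                children.append(child)
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)
    return best, fitness(best)
```

With a simple "count the ones" fitness (`sum`), the loop quickly evolves bit strings toward all ones, illustrating how selection pressure plus crossover and mutation improve the population over generations.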

Advantages of GAs

A GA only requires a search range, which need only be constrained by prior knowledge of the physical properties of the system. GAs effectively search the whole solution space without calculating the fitness function at every point, which helps avoid a common danger in optimization problems: being trapped in local maxima or minima.

The concept of the genetic algorithm is easy to understand; it is modular and separate from application-oriented concerns.

It supports multi-objective optimization.

Genetic algorithms are good for "noisy" environments.

A GA always has an answer, and the answer gets better with time.

There are many ways to speed up and improve a GA-based application.

Applications of GAs

There are many application domains in which genetic algorithms are used. The various domains are as:

1. Genetic Programming [18]

Using genetic programming, expression discovery becomes a much easier process. Suppose, for example, we have numerical data and 'answers', but no expression conjoining the data with the answers. A genetic algorithm can be used to 'evolve' an expression tree that fits the data very closely. By 'splicing' and 'grafting' the trees, evaluating the resulting expressions on the data and testing them against the answers, the fitness function can report how close each expression is.

2. Evolving Neural Networks [19]

GAs are well suited to evolving the weights of a neural network: there is an immense number of possibilities, and standard learning techniques such as back-propagation may take thousands upon thousands of iterations to converge. A GA can evolve working weights within a hundred or so iterations.

3. Other Areas

GAs can be applied to virtually any problem that has a large search space.

GAs are used to filter out 'good' and 'bad' riffs for jazz improvisation.

The military uses GA to evolve equations to differentiate between different radar returns.

Many stock companies use GA-powered programs to predict the stock market.

Limitations of GAs

We need to identify which variables are suitable to be treated as the input population and which variables the GA will produce as offspring.

The determination of convenient parameters (population size, mutation rate) may be time consuming.

The search space may be huge and involve complex equations; therefore, before running a GA to search for an equation, the user must tell the program which operators and numerical ranges to search under.


Particle Swarm Optimization (PSO) is a technique used to explore the search space of a given problem to find the parameter settings that maximize the problem's objective. This technique combines two separate concepts:

The idea of swarm intelligence, based on observing the swarming habits of certain kinds of animals (such as birds and fish); and

Evolutionary computation.

The process starts by initializing a group of particles (solutions) and then searches for optima by updating generations. In every iteration, the "best" values for each particle are updated. The first is the best solution (fitness) achieved so far by an individual particle, called pbest. Another "best" value is the best obtained so far by any particle in the population, called gbest. When a particle takes only its topological neighbors as its population, the best value is a local best, called lbest.

FIGURE 7: Figure representing position and velocity of particle with respect to local and global best.

The particle's position modification can be mathematically described as:

vik+1 = w · vik + c1 · rand1 · (pbesti − sik) + c2 · rand2 · (gbest − sik) ….. (1)

where, vik : velocity of agent i at iteration k,

w : weighting (inertia) function, c1, c2 : weighting factors, rand1, rand2 : uniformly distributed random numbers between 0 and 1, sik : current position of agent i at iteration k, pbesti : pbest of agent i, gbest : gbest of the group.

The weighting function usually used in (1) is:

w = wMax − [(wMax − wMin) × iter] / maxIter ….. (2)

where, wMax = initial weight,

wMin = final weight,

maxIter = maximum iteration number,

iter = current iteration number.

The position is then updated as:

sik+1 = sik + vik+1 ….. (3)

FIGURE 8: Fig representing basic flow of PSO.
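Equations (1)-(3) can be implemented directly. The following minimal sketch applies them to a hypothetical 2-D sphere function; the coefficient values and function name are illustrative assumptions.

```python
import random

def pso(f, bounds, n_particles=30, max_iter=100,
        w_max=0.9, w_min=0.4, c1=1.5, c2=1.5, seed=3):
    """Minimal PSO sketch implementing equations (1)-(3):
    velocity update with a linearly decreasing inertia weight,
    then a position update, tracking pbest and gbest."""
    rng = random.Random(seed)
    lo, hi = bounds
    dim = 2
    s = [[rng.uniform(lo, hi) for _ in range(dim)]
         for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in s]
    gbest = min(pbest, key=f)
    for it in range(max_iter):
        w = w_max - (w_max - w_min) * it / max_iter     # equation (2)
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - s[i][d])  # cognitive
                           + c2 * r2 * (gbest[d] - s[i][d]))    # social, (1)
                s[i][d] += v[i][d]                               # equation (3)
            if f(s[i]) < f(pbest[i]):
                pbest[i] = list(s[i])
                if f(pbest[i]) < f(gbest):
                    gbest = list(pbest[i])
    return gbest, f(gbest)
```

The decreasing inertia weight shifts the swarm from exploration early on toward exploitation near the end of the run.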

Hybrid PSO with GA

Premature convergence is a major drawback of PSO. The underlying cause is that, while searching for the global best, the particles converge toward a single point lying between the global best and local best positions, and this point is not guaranteed to be even a local optimum [20].

The fast rate of information flow between particles is another problem: it produces similar particles, with a danger of loss of diversity that increases the possibility of being trapped in local optima.

To overcome these limitations of PSO, hybrid algorithms with GA have been proposed. One advantage of PSO-GA is its algorithmic simplicity. The ability to control convergence is another clear difference between PSO and GA: the GA operators' rates can affect GA convergence, but they cannot provide the level of control achieved through manipulating the inertia weight in PSO. In fact, decreasing the inertia weight increases the swarm's convergence. To prevent premature convergence, the position update of the global best particle is changed: it is updated through a hybrid mechanism borrowing GA operators.
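As a hedged illustration of such a hybrid mechanism (the function name and the Gaussian form of the mutation are assumptions, since PSO-GA hybrids differ in where they apply the genetic operators), a GA-style mutation can be applied to the global best position to restore diversity:

```python
import random

def mutate_gbest(gbest, bounds, sigma=0.05, seed=4):
    """Apply a GA-style Gaussian mutation to the global best particle's
    position, clamped to the search bounds. Re-injecting the mutated
    point into the swarm restores diversity and can help the swarm
    escape premature convergence. (Illustrative sketch only.)"""
    rng = random.Random(seed)
    lo, hi = bounds
    return [min(hi, max(lo, x + rng.gauss(0.0, sigma * (hi - lo))))
            for x in gbest]
```

In a full hybrid, the mutated point would replace the current gbest only if it improves the fitness, so the swarm never loses its best-known solution.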

Advantages of PSO

PSO is an efficient global optimizer for structural applications, with few parameters.

It is easy to implement, with very few parameters to fine-tune.

Its underlying concept is very simple.

PSO generates high-quality solutions with short calculation times and stable convergence.

Applications of Swarm Intelligence [21]

Following are areas where PSO has been applied to solve problems:

Routing optimization[20]

Collective Robots

Autonomous nanotechnology swarms

Swarm Bots[20]


Telecommunications control

Data mining and design

Combinatorial optimization[21]

Power systems

Signal processing

Far-field radiation pattern prediction from planar near-field samples

FIR and IIR filter design, and network synthesis

Sensor networks[22]

Security and Military

Fuzzy and Neurofuzzy[22]

Limitations of PSO

Premature convergence is a major limitation of PSO.

Dependency on initial conditions and parameter values, and the difficulty of identifying design parameters, are open problems in PSO.


This paper presents a comparative view of the evolutionary computation techniques Ant Colony Optimization, Bee Colony Optimization, the Genetic Algorithm and Particle Swarm Optimization. ACO models the process ants use to forage for food: they map their way using pheromone trail deposition and evaporation. Similarly, in BCO, the bees use the waggle dance phenomenon to communicate the distance, location, direction and quality of a food source. Genetic Algorithms produce the fittest population from the current population, using genetic operators to generate a new and improved population. PSO, in turn, is an optimization technique in which the global solution is constructed through the analysis of local optimal solutions. These techniques have immense potential and a scope of application ranging from engineering to software engineering, optimization to non-optimization problems, fuzzy to neuro-fuzzy systems, robotics, electronics and many more.

These techniques need to be further explored to find their suitability to certain applications. Also, there is a need to combine two or more techniques so that they complement each other and nullify their respective limitations.


We would like to acknowledge our respected mentors Dr. Arvinder Kaur (Associate Professor, USIT, GGSIPU) and Mrs. Bharti Suri (Assistant Professor, USIT, GGSIPU) for their valuable time and suggestions, which made the work presented here possible.