Swarm intelligence (SI) is the intelligent collective behavior that emerges from populations of non-intelligent individuals, such as ants searching for food or birds in flight. An SI system consists of a population of simple agents interacting with their environment and with each other; there is no central control dictating the behavior of the agents.
SI-like techniques are used in artificial intelligence to explore discrete problem solving without a central control structure. Real-life swarm intelligence can be observed in ant colonies, bacterial growth, bird flocks, animal herds and fish schools.
One popular model of swarm behavior is the ant colony model. Through swarm intelligence, ants can determine the shortest path to a food source. Ant colony optimization can be used to solve the traveling salesman problem, scheduling problems, vehicle routing problems and many others.
Particle swarm optimization, on the other hand, is a type of swarm intelligence inspired by bird flocking and fish schooling. It is used in practical applications such as training artificial neural networks and grammatical evolution models.
Introduction to Particle Swarm Optimization (PSO)
PSO is a population-based optimization method proposed by Kennedy and Eberhart. The algorithm simulates the behavior of a bird flock flying together in multi-dimensional space in search of some optimum place, adjusting movements and distances for a better search. PSO is very similar to evolutionary computation techniques such as the genetic algorithm (GA): the swarm is initialized with random solutions and searches for an optimum by updating generations.
PSO combines two approaches: a cognitive model based on a particle's own experience, and a social model that incorporates the experience of its neighbors. The algorithm mimics particles flying through the search space and moving towards the global optimum. All particles are initialized with random positions and random velocities; each particle then moves to a new position based on its own experience and that of its neighborhood.
Each particle in PSO maintains two important positions, called pbest and gbest, where pbest is the particle's own best position and gbest is the global best position among all particles.
The equations used to update a particle's velocity and position are the following:
vi(t+1) = vi(t) + c1*r1*(pbest_i - xi(t)) + c2*r2*(gbest - xi(t))    (2.1)
xi(t+1) = xi(t) + vi(t+1)    (2.2)
where xi is the position, vi is the velocity, pbest is the personal best position and gbest is the global best position; r1 and r2 are two random numbers drawn from the range (0, 1), and c1 and c2 are learning factors weighting the cognitive and social components respectively.
Initialize the population randomly
Do
    For each particle
        Calculate the fitness value
        If the fitness value is better than the best fitness value (pbest) in history then
            Update pbest with the new position
    Select the particle with the best fitness value among all particles as gbest
    For each particle
        Calculate the particle velocity by equation (2.1)
        Update the particle position according to equation (2.2)
While maximum iterations or minimum error criteria is not attained
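The steps above can be sketched in Python. This is a minimal illustration rather than the exact implementation from any cited paper: the constant inertia weight w = 0.7 (which anticipates the inertia-weight discussion that follows), the learning factors c1 = c2 = 1.5, the bounds and the sphere test function are all illustrative assumptions.

```python
import random

def pso(fitness, dim, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimize `fitness` using the velocity and position updates (2.1)-(2.2)."""
    lo, hi = bounds
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in x]                    # personal best positions
    pbest_f = [fitness(p) for p in pbest]        # personal best fitness values
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]     # global best position / fitness

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Equation (2.1): inertia + cognitive + social terms
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                # Equation (2.2): move the particle
                x[i][d] += v[i][d]
            f = fitness(x[i])
            if f < pbest_f[i]:                   # better than pbest: update it
                pbest[i], pbest_f[i] = x[i][:], f
                if f < gbest_f:                  # better than gbest: update it
                    gbest, gbest_f = x[i][:], f
    return gbest, gbest_f

# Usage: minimize the 5-dimensional sphere function (optimum 0 at the origin).
best, best_f = pso(lambda p: sum(xi * xi for xi in p), dim=5)
```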
The inertia weight plays a vital role in updating the velocity of the particles. It was later introduced into PSO by Shi to control the exploration and exploitation abilities of the swarm. A large inertia weight supports exploration, while a small value promotes local exploitation. Some researchers [26, 27, 34] used a fixed inertia weight and some [3, 4, 19] used a decreasing inertia weight.
In the linearly decreasing scheme, a large inertia weight (0.9) is linearly decreased to a small inertia weight (0.4). The formula used for the linearly decreasing inertia weight is
w(t) = w_max - (w_max - w_min) * t / t_max
where w_max is 0.9, w_min is 0.4, t_max is the maximum number of iterations, and t is the iteration at which the inertia weight is calculated.
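A linearly decreasing inertia weight of this kind can be written as a one-line helper; the function name is ours, and the 0.9 and 0.4 endpoints follow the text.

```python
def linear_inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decrease the inertia weight from w_max at t = 0
    to w_min at t = t_max."""
    return w_max - (w_max - w_min) * t / t_max

w_start = linear_inertia(0, 1000)     # 0.9 at the first iteration
w_end = linear_inertia(1000, 1000)    # 0.4 at the last iteration
```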
The exponent decreasing inertia weight was proposed by Hui,
where t_max denotes the maximum iteration, t denotes the t-th iteration, w_start denotes the original inertia weight, w_end denotes the inertia weight value when the algorithm has run the maximum iterations, and d is a factor to control w between w_start and w_end.
Non-linear decreasing inertia weight
With a non-linear decreasing inertia weight, a large value decreases to a small one, but non-linearly rather than linearly. By decreasing the inertia weight non-linearly, the search space can be explored in a shorter time, but it takes longer to exploit it. Naka et al. proposed one such scheme, and Venter et al. proposed another in which the decrease is measured from the time step at which the inertia last changed; the inertia is changed only when there is no significant difference in the swarm's fitness.
It has been observed that a linearly decreasing inertia weight performs better than a fixed inertia weight. Exploration and exploitation can be controlled well by linearly decreasing the inertia weight, which narrows the search from an explorative to an exploitative mode.
Fuzzy adaptive inertia weight
The inertia weight can also be dynamically adjusted using fuzzy sets and rules. The fuzzy system for inertia adaptation by Shi and Eberhart consists of the following components:
Two input variables, one to represent the fitness of the global best position, and the other the current value of the inertia weight.
One output variable to represent the change in inertia weight.
Three fuzzy sets, namely LOW, MEDIUM and HIGH, respectively implemented as a left-triangle, triangle and right-triangle membership function.
Nine fuzzy rules from which the change in inertia weight is calculated.
A very similar approach to the inertia weight was developed by Clerc to balance the exploration-exploitation trade-off: the velocities are constricted by a constriction coefficient.
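Clerc's constriction coefficient is commonly computed as χ = 2 / |2 - φ - sqrt(φ² - 4φ)| with φ = c1 + c2 > 4, which gives χ ≈ 0.7298 for the usual choice c1 = c2 = 2.05; the velocity update is then multiplied through by χ. A small helper sketching that standard formula:

```python
import math

def constriction_factor(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction coefficient chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|."""
    phi = c1 + c2
    assert phi > 4, "phi = c1 + c2 must exceed 4 for a real-valued factor"
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

chi = constriction_factor()   # about 0.7298 for c1 = c2 = 2.05
```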
Lovbjerg et al. proposed a hybrid PSO variant which combines PSO with breeding operators; they also introduced the use of subpopulations for inter- and intra-population breeding. In each iteration some members of a population are marked for breeding using a breeding probability, and a weighted crossover is performed between these marked particles. In the case of subpopulations, inter-population breeding is performed using the probability of same-subpopulation breeding, and each subpopulation is evolved using its own global best particle. The performance of this variant was compared with traditional PSO and the genetic algorithm, and the results were found to be outstanding.
Silva et al. used a predator-prey optimization technique for function optimization. New particles known as predators are introduced to avoid premature convergence. The particles in the swarm are repelled by the predator particles and attracted towards the best positions of the swarm. This repulsive mechanism ensures diversity in the swarm and eliminates the phenomenon of premature convergence.
Brits et al. proposed another variant of PSO intended to locate multiple good solutions in multimodal problems, using sub-swarms and the convergent sub-swarms algorithm. Niching algorithms find and track various solutions via a fitness-based principle to discover and mark candidate solutions. However, some issues still need to be solved.
PSO has been applied to constrained non-linear optimization problems. A feasibility check is used to deal with the constraints: a feasibility function checks whether all constraints are satisfied. The initial population is a group of feasible solutions that satisfy all the constraints, and all particles keep only feasible solutions in their memory. The proposed modified algorithm successfully solved problems with non-linear inequality constraints.
The inertia weight of PSO is discussed in detail by Zhang et al. They finally set the inertia weight as a uniformly random number between 0 and 1, and claim that this makes the algorithm more capable of escaping from local minima. According to the authors, the proposed inertia weight can overcome two problems of the linearly decreasing inertia weight:
It removes the dependency of the linearly decreasing inertia weight on the maximum number of iterations.
It avoids the lack of local search ability early in the run and of global search ability at the end of the run.
They tested their method on three benchmark functions with different dimensions using different numbers of generations, and the newly proposed inertia weight produced the best results.
Yang et al. proposed a new variant of PSO in which particles use not only their previous best positions but also their mistakes. The authors argue that an individual can learn not just from its previous best but can also improve its learning by using its mistakes. They used 4 benchmark functions to compare their technique with the original PSO.
Wei et al. proposed a dynamical PSO with dimension mutation, named DPSO. They first introduce a dynamic inertia weight that changes dynamically based on a speed factor and an accumulation factor, and then present a dimension mutation operator to overcome premature convergence. The speed factor is denoted H, and the accumulation factor is computed from the swarm state, where N is the population size, n is the number of variables, L is the length of the maximum diagonal of the search space and Pid is the d-th coordinate of the i-th particle; the inertia weight is then calculated from these two factors. They compared the results of DPSO with CEP, FEP and LDW on 5 benchmark functions and obtained better results than the others.
Nguyen et al. investigate several randomized low-discrepancy sequences for initializing the swarm in order to increase the performance of PSO. They used three low-discrepancy sequences: Halton, Faure and Sobol. The Halton sequence is an extension of the van der Corput sequence: since the van der Corput sequence is one-dimensional, the Halton sequence extends it to cover an n-dimensional search space. Six benchmark functions were used to evaluate the performance of the three new versions of PSO, which were all compared with global-best PSO initialized with pseudo-random numbers. The results show that the Sobol-initialized PSO dominated the four versions of PSO considered. For small search spaces, PSO initialized with the Faure sequence can perform well, while in high dimensions the Halton sequence's performance may be fine.
To prevent PSO from being trapped in local optima, Li et al. introduced Cauchy mutation into PSO, naming their algorithm FPSO, and combined it with the natural selection strategy of evolutionary algorithms. The particle positions are updated by Cauchy mutation as
X′ = X + V′(δ)    (2.13)
where δ is a Cauchy random number. The velocity and position of each particle are updated not only with eqs. (2.1) and (2.2) but also with eqs. (2.12) and (2.13); the result with the best fitness is chosen, and the next generation is then produced according to the evolutionary selection strategy. They compared their algorithm with AMPSO on several benchmark functions and found better results.
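Eq. (2.13) is terse; the sketch below shows one plausible reading, in which each coordinate of the position is perturbed by the corresponding velocity scaled by a standard Cauchy variate δ drawn by inverse-CDF sampling. The per-coordinate interpretation is an assumption, not taken from the paper.

```python
import math
import random

def cauchy():
    """Standard Cauchy variate via the inverse-CDF method:
    tan(pi * (U - 1/2)) is Cauchy-distributed for U ~ Uniform(0, 1)."""
    return math.tan(math.pi * (random.random() - 0.5))

def cauchy_mutate(x, v):
    """One reading of eq. (2.13), X' = X + V'(delta): each coordinate of the
    position is shifted by the velocity times a fresh Cauchy variate."""
    return [xi + vi * cauchy() for xi, vi in zip(x, v)]

mutant = cauchy_mutate([0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
```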
Pant et al. proposed a version of PSO with three variants using a Gaussian inertia weight. The factors responsible for the uniqueness of the modified algorithm were:
Development of a new inertia weight using the Gaussian distribution.
Use of a distribution other than the uniform distribution for the generation of the initial swarm.
The probability density function of the Gaussian distribution is used with mean zero and standard deviation one, i.e. N(0, 1).
Initialization of the population plays an important role in evolutionary and swarm-based algorithms; with a bad initialization, the algorithm may search unwanted areas and may be unable to find the optimal solution.
As the inertia weight performs a vital role in PSO, Shu-Kai proposed a PSO with an adaptive dynamic inertia weight scheme: a novel non-linear function for inertia weight adaptation, with an active method for improving the performance of PSO algorithms. The aim was to determine the inertia weight through a non-linear function at each time step, where d is the decrease rate from 1.0 to 0.1 and r is a dynamic adaptation term governed by rules (2.16) and (2.17) for the minimization case, which compare the global best position at the current and previous time steps. The authors claim this method makes particles take off quickly towards the optimal solution and then perform local refinement around the neighborhood of the best solution by decreasing the inertia weight. They tested the technique on different benchmark functions and obtained good results.
To share information among particles, Zhi-Feng proposed a PSO with a crossover operator. The crossover takes place in only one dimension, which is randomly selected; the fitness of the two offspring produced by the crossover is then compared and the better one is selected. Crossover is performed in dimension t, where i = 1, 2, 3, …, N, t is a random integer in the range (1, D), the offspring equals the parent in every dimension except dimension t, R is a random number in the range (0, 1), and K is a particle best position of better fitness selected for the i-th particle by fitness-proportionate selection. This technique was proposed to prevent the algorithm from being trapped in local minima by sharing information among particles. Five benchmark functions, two unimodal and three multimodal, were used to test the algorithm.
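Since the exact crossover formula is not reproduced in the text, the sketch below shows one plausible arithmetic blend in the single selected dimension. The blend rule x'_t = R*x_t + (1 - R)*k_t and the helper name are assumptions; only the structure (copy everywhere except one random dimension t) follows the description above.

```python
import random

def one_dim_crossover(x, k):
    """Single-dimension crossover sketch: the offspring copies parent x in
    every dimension except one randomly chosen dimension t, where it blends
    x with a fitness-proportionately selected best position k.
    The arithmetic blend used here is an assumed form, not the paper's exact rule."""
    t = random.randrange(len(x))   # random dimension index in [0, D)
    r = random.random()            # R in [0, 1)
    child = x[:]
    child[t] = r * x[t] + (1 - r) * k[t]
    return child

child = one_dim_crossover([1.0, 2.0], [3.0, 4.0])
```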
Wang proposed a new Cauchy mutation operator for PSO, applied to perform a local search around the global best particle. The motivation for using such a mutation operator is to increase the probability of escaping from a local optimum. Several benchmark functions were used to test the performance of this new operator, and better results were achieved. The inertia weight used was 0.72984 with c1 = c2 = 1.49618. They named their algorithm HPSO and compared it with standard PSO, FDR-PSO, CEP and FEP using 6 unimodal and 4 multimodal functions. HPSO performed well on all functions, but in some cases it fell into local minima.
Wang proposed opposition-based initialization in PSO coupled with a Cauchy mutation operator. The Cauchy mutation operator is applied to the global best particle; if the newly created global best is better after mutation, the global best is replaced. The proposed modification did not perform particularly well on multimodal functions.
Pant et al. proposed two variants of PSO, AMPSO1 and AMPSO2, for global optimization problems; both use adaptive mutation. The authors' main goal was to improve the diversity of PSO without compromising solution quality. In AMPSO1 they mutate the personal best positions of the swarm, while in AMPSO2 they mutate the global best particle of the swarm. They compared the performance of AMPSO with PSO using Cauchy mutation, PSO using Gaussian mutation, FEP (EP with self-adaptive Cauchy mutation) and CEP (classical EP with self-adaptive Gaussian mutation). From the numerical results it is observed that AMPSO2 performed well on 7 out of 12 problems and FEP performed better on 4 of the 12; one function converged to zero under all algorithms except CEP. The particle is mutated at the end of each iteration by the following rule:
Xj(t+1) = Xj(t) + σ′j * Betarand_j()    (2.21)
where σ′j = σj * exp(τ N(0,1) + τ′ Nj(0,1)); N(0,1) denotes a normally distributed random number with mean zero and standard deviation one, and Nj(0,1) indicates that a fresh random number is generated for each value of j. τ and τ′ are set to fixed constants, and Betarand_j() is a random number generated from a beta distribution with parameters less than 1.
Pant et al. explore the effect of initializing the swarm with the van der Corput sequence, a low-discrepancy sequence, to solve global optimization problems in large-dimensional search spaces, naming the proposed algorithm VC_PSO. The authors claim that PSO performs very well on low-dimensional problems, but as the dimensionality grows the performance deteriorates, and the problem becomes more severe for multimodal functions. They suggest that one reason for this poor performance may be the random initialization of the swarm, and therefore propose a PSO that initializes the swarm with low-discrepancy numbers to overcome it. They compare their algorithm with PSO using the Sobol sequence, which is dominated by the Halton and Faure sequences. The van der Corput sequence is a low-discrepancy sequence over the unit interval, proposed by a Dutch mathematician in 1935 and defined by the radical inverse function. They used a linearly decreasing inertia weight from 0.9 to 0.4 with c1 = c2 = 2.0.
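The radical inverse underlying the van der Corput sequence can be stated concretely: mirror the base-b digits of n about the radix point. A short sketch (the function name is ours):

```python
def van_der_corput(n, base=2):
    """n-th element of the van der Corput sequence in the given base:
    the radical inverse function, i.e. the base-b digits of n reflected
    about the radix point."""
    value, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)   # peel off the least significant digit
        denom *= base
        value += digit / denom       # place it just after the radix point
    return value

# First base-2 elements: 0, 1/2, 1/4, 3/4, 1/8, ...
seq = [van_der_corput(i) for i in range(5)]
```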
Pant et al. explore a Sobol mutation operator in PSO to enhance the performance of the basic algorithm. It uses the quasi-random Sobol sequence, since they claim a random probability distribution cannot cover the search domain as well as a quasi-random one can. They named their operator the systematic mutation (SM) operator and proposed two variants of PSO, SM-PSO1 and SM-PSO2. In SM-PSO1 the mutation is applied to the global best particle, while SM-PSO2 mutates the worst particle of the swarm. They defined the SM operator as
SM = R1 + (R2 / ln R1)
where R1 and R2 are random numbers generated by the Sobol sequence. The aim of mutating the worst particle is to move it forward systematically. Only three multimodal functions were used to test the proposed variants, with different population sizes and dimensions.
Different mutation operators perform well for different types of problems. Li et al. proposed adaptive mutation with three mutation operators to help particles escape from local optima. They apply Cauchy, Gaussian and Lévy mutation to the position and velocity. The mutation operator is chosen on the basis of a selection ratio, and the fitness of its offspring is evaluated. Initially the probability of each operator is set to 1/3; later the probability of each mutation operator is updated: it is increased when the operator produces high-fitness offspring and decreased when it produces low-fitness offspring. In the end a single appropriate mutation operator controls the whole search. Seven benchmark functions were used to compare the algorithm with FEP, and the adaptive algorithm performs better than FEP.
Omran et al. used opposition-based learning to improve the performance of PSO. In each iteration, the particle with the lowest fitness is replaced by its opposite, and the velocity and personal experience of the anti-particle are reset; the global best solution is then updated. They did not introduce any new parameter to PSO: the only modification is the use of opposition-based learning.
Xuedan Liu et al. proposed a local PSO with a dynamic inertia weight and mutation, reinitializing the swarm when it stagnates. They used a linearly decreasing inertia weight, where t_max is the maximum number of iterations, together with a wheel neighborhood structure, so that a particle exchanges information only with its neighborhood's best position. The neighborhood's best position Pli is calculated as follows (L is the neighborhood length and s is the swarm size):
If the i-th particle's subscript is less than or equal to L/2, choose the best particle position from the neighborhood [1, i+L/2]; if it is better than Pli, update Pli with it.
If the i-th particle's subscript is greater than or equal to (s - L/2), choose the best particle position from the neighborhood [i-L/2, s]; if it is better than Pli, update Pli with it.
Otherwise, choose the best particle position from the neighborhood [i-L/2, i+L/2]; if it is better than Pli, update Pli with it.
The authors tested their technique using four benchmark functions.
Yuelin presented a new adaptive mutation operator based on the fitness variance and a space position aggregation degree, yielding a new PSO with adaptive mutation. They claim the algorithm can become stuck in local convergence when the fitness variance of the particles is near zero; to improve this situation they built an adaptive mutation. The mutation probability pm is computed from the fitness-value aggregation degree, which involves a returning factor f; f can take any value, but two things must be observed: it changes with the algorithm's evolution, and in equation (2.23) it is relative to the objective function. h is the space position aggregation degree at the current iteration. A random number r in (0, 1) is generated; if r < pm, the mutation operator is applied: the best position found so far by the particle is perturbed by an n-dimensional random variable following the standard normal distribution N(0, 1).
In order to overcome premature convergence, Hui proposed a variant of PSO with an exponent decreasing inertia weight and stochastic mutation. The exponent decreasing inertia weight is described in eq. (4). The mutation probability pm is defined in terms of the fitness of the current global best particle, the theoretical optimum value Fm of the optimization problem, and a returning factor f with a defined limit value. gbest is then mutated by adding a random variable obeying the standard normal distribution.
Jabeen et al. proposed an opposition-based initialization which calculates the opposite of a randomly initialized population and selects the better particles from the random and opposite populations as the initial population. This population is then provided as input to the traditional PSO algorithm. The proposed modification has been applied to several benchmark functions and found successful.
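The initialization described here can be sketched using the standard opposite point x̆ = a + b - x for x in [a, b]. Keeping the n fittest of the combined 2n candidates is one common selection scheme and is an assumption about the paper's exact rule, as are the helper names.

```python
import random

def opposition_init(fitness, n, dim, lo, hi):
    """Opposition-based initialization sketch: draw a random population,
    form the opposite of every particle (lo + hi - x per coordinate),
    and keep the n fittest of the combined 2n candidates (minimization)."""
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    opp = [[lo + hi - xi for xi in p] for p in pop]
    return sorted(pop + opp, key=fitness)[:n]

# Usage: 10 particles in 3 dimensions, minimizing the L1 norm.
swarm = opposition_init(lambda p: sum(abs(c) for c in p), 10, 3, -5.0, 5.0)
```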
Shahzad et al. proposed another variant of OPSO with velocity clamping (OVCPSO). The authors control the velocity through clamping to speed up convergence and to avoid premature convergence; velocity clamping changes the search direction of the particles. A linearly decreasing inertia weight between 0.4 and 0.9 was used. The proposed algorithm was tested on various benchmark functions, and the results revealed its success.
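Velocity clamping itself is simple to state: each velocity component is limited to [-v_max, v_max] so particles cannot overshoot the search region. A minimal sketch (the helper name and a single scalar v_max are assumptions):

```python
def clamp_velocity(v, v_max):
    """Limit each velocity component to the range [-v_max, v_max]."""
    return [max(-v_max, min(v_max, vi)) for vi in v]

# Components beyond the limit are cut back; small ones pass through unchanged.
clamped = clamp_velocity([10.0, -7.5, 0.3], 2.0)   # [2.0, -2.0, 0.3]
```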
Tang et al. proposed an enhanced opposition-based PSO, called EOPSO. According to the authors, an opposite point is often closer to the global optimum than the current point, which provides more chances to approach the global optimum. The enhanced opposition of a population is calculated based on an opposition probability, and the best among the original and enhanced populations are selected for further exploration of the search space using traditional PSO. Prominent results have been achieved using this modification to traditional PSO.
Chang et al. proposed an enhanced version of opposition-based PSO called quasi-oppositional comprehensive learning PSO (QCLPSO). Instead of calculating the traditional opposite of a point, the proposed modification calculates a quasi-opposite particle, generated from the interval between the median and the opposite position of the particle. According to the authors, quasi-opposite particles have a higher chance of being closer to the global optimum than opposite particles calculated without a priori information.
Wu et al. proposed a new variant of PSO called power-mutation-based PSO (PMPSO), which employs a power mutation operator. The core idea of PMPSO is to apply power mutation to the fittest particle of the current swarm; the purpose of the power mutation is to help particles jump out of local optima. The algorithm was compared with several other PSO variants, and better results were achieved on most of the benchmark functions.
Pant et al. introduced a new mutation operator for improving the quantum particle swarm optimization (QPSO) algorithm. The operator uses the quasi-random Sobol sequence and is called the Sobol mutation (SOM) operator. The authors proposed two versions using SOM: in one they mutate the best particle, and in the other the worst particle. The proposed technique is compared with BPSO, QPSO and two further variants of QPSO, and the two versions are also compared with each other.
Tang proposed an adaptive mutation in PSO that mutates the global best particle, where g is the vector of the global best particle, x_min and x_max are the minimum and maximum values of the dimensions in the current search space respectively, rand() is a random number within [0, 1] and t = 1, 2, 3, 4 indicates the generation.