Swarm intelligence algorithms have gained considerable prominence in the field of function optimization. In this research we combine the Particle Swarm Optimization (PSO) algorithm with the food foraging behavior of honey bees. The resulting algorithm (called HBF-PSO) and its variants are suitable for solving multimodal and dynamic optimization problems. We focus on the niching and speciation capabilities of these algorithms, which allow them to locate and track multiple peaks in multimodal and dynamic environments. The HBF-PSO algorithm performs a collective foraging for fitness in promising neighborhoods in combination with individual scouting searches in other areas. The strength of the algorithm lies in its continuous monitoring of the whole scouting and foraging process, with dynamic relocation of the bees (solutions/particles) if more promising regions are found. We also propose variants of the algorithm in which each bee has a different position update equation, and we utilize genetic programming (GP) for the continuous evolution of these position update equations. This process ensures adaptability and diversity in the swarm, which leads to faster convergence and helps to avoid premature convergence. We also explore the use of opposite numbers in our algorithm and incorporate opposition-based initialization, opposition-based generation jumping and opposition-based velocity calculation. The proposed algorithm and its variants are tested on a suite of benchmark optimization problems. In the final portion of our work we report our experiments on the training of feedforward neural networks utilizing our proposed algorithms.
2 Honey Bee Foraging Behavior Inspired PSO Algorithm
In this chapter we describe our Honey Bee Foraging Particle Swarm Optimization (HBF-PSO) algorithm. This algorithm is modeled after the food foraging behavior of the honey bees and performs a collective foraging for fitness in promising neighborhoods in combination with individual scouting searches in other areas. The strength of the algorithm lies in its continuous monitoring of the whole scouting and foraging process with dynamic relocation of the bees if more promising regions are found. The algorithm has the potential to be useful for optimization problems of multimodal and dynamic nature.
Swarm-based optimization algorithms have received considerable attention lately. Optimization algorithms based on the behavior of birds (PSO) and ants (ACO) are well known and widely used [Engelbrecht, 2006]. PSO is known to be suitable for function optimization problems, where the goal is to search for the optimum value of a function. Multimodal function optimization and dynamic function optimization are two variants of the general optimization problem. Researchers have addressed these two classes of problems using swarm-based approaches and have shown that variants of the canonical PSO algorithm are able to solve them. Some of the main swarm-based efforts for multimodal problems include works by Brits et al. [Brits, 2002b] [Brits, 2002c] [Brits, 2003], Kennedy [Kennedy, 1997], Parrott and Li [Parrot, 2006], and Parsopoulos and Vrahatis [Parsopoulos, 2001b] [Parsopoulos, 2004], and for dynamic problems include works by
Blackwell and Branke [Blackwell, 2006], Carlisle and Dozier [Carlisle, 2000], Eberhart and Shi [Eberhart, 2001], Hu and Eberhart [Hu, 2002], and Jin and Branke [Jin, 2005]. Over the past years PSO has undergone many improvements and enhancements. The majority of these modifications have been made to increase the diversity of the swarm in order to improve the convergence of the PSO. The modifications made to PSO include the introduction of an inertia weight, velocity clamping, velocity constriction, different ways of determining the personal best and global best positions, and different velocity models. In addition to the modifications made to the basic PSO algorithm, a variety of PSO variations have also been developed. These include the sub-swarm based PSO algorithms and PSO with niching capabilities. Sub-swarm based PSO algorithms include all such PSO algorithms that incorporate some form of sub-swarm strategy. These sub-swarms can be used either in cooperation with each other or in an independent manner.
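The canonical gbest PSO update that these modifications build on can be sketched as follows (a minimal illustration; the parameter values w, c1, c2 and vmax are typical choices from the PSO literature, not those of any specific paper cited here):

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5, vmax=1.0):
    """One iteration of the canonical gbest PSO update, showing the inertia
    weight and velocity clamping mentioned above (illustrative sketch)."""
    for i in range(len(positions)):
        for d in range(len(positions[i])):
            r1, r2 = random.random(), random.random()
            v = (w * velocities[i][d]                          # inertia term
                 + c1 * r1 * (pbest[i][d] - positions[i][d])   # cognitive component
                 + c2 * r2 * (gbest[d] - positions[i][d]))     # social component
            velocities[i][d] = max(-vmax, min(vmax, v))        # velocity clamping
            positions[i][d] += velocities[i][d]
    return positions, velocities
```

The inertia weight w scales the memory of the previous velocity, while clamping to vmax prevents velocity explosion; both are among the diversity-preserving modifications discussed above.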
2.2 Sub-swarms and Niching
2.2.1 Sub-swarm based PSO
Those PSO variations in which grouping of particles into sub-swarms has been incorporated are called sub-swarm based PSO. These sub-swarms can exist either in cooperative mode or in competitive mode. In cooperative mode there exists a type of cooperation between the sub-swarms, generally in terms of information exchange. In competitive mode the sub-swarms are in competition with each other [Engelbrecht, 2006]. We first look at some of the cooperative sub-swarm based PSO variants. In [Lovberg, 2001] [Lovberg, 2002], Lovberg et al. used a sub-swarm based PSO. For each sub-swarm, a gbest PSO algorithm was employed. They used an arithmetic crossover operator as the breeding mechanism and allowed parents to be selected randomly not only from within a sub-swarm but also from different sub-swarms. They found that this breeding mechanism worked well for a large number of swarms. In this approach the cooperation between different sub-swarms is achieved by the selection of parents from different sub-swarms, thus allowing the exchange of genetic material between them.
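The arithmetic crossover used as the breeding mechanism can be sketched as follows (an illustrative assumption: each offspring is a random convex combination of the two parents' position vectors, which may come from different sub-swarms; the function name is ours):

```python
import random

def breed(parent1, parent2):
    """Arithmetic crossover sketch: offspring are complementary convex
    combinations of the parents' positions."""
    r = random.random()                                      # shared mixing weight
    child1 = [r * a + (1 - r) * b for a, b in zip(parent1, parent2)]
    child2 = [(1 - r) * a + r * b for a, b in zip(parent1, parent2)]
    return child1, child2
```

Because r is shared across dimensions, each child lies on the line segment between the two parents, so offspring never leave the region spanned by the breeding pair.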
In multi-phase PSO (MPPSO), Al-Kazemi and Mohan [Al-Kazemi, 2002a] [Al-Kazemi, 2002b] [Al-Kazemi, 2002c] also employed sub-swarms by randomly assigning the particles to one of two equally sized sub-swarms. These sub-swarms were either in an attraction phase, in which particles of a sub-swarm moved towards the global best, or in a repulsion phase, in which the particles of the sub-swarm moved away from the global best. In addition, the sub-swarms were allowed to change their phase after a certain number of iterations had been exceeded, after a user-defined threshold had been crossed, or if the particles of the sub-swarm failed to show improved fitness within a certain number of consecutive iterations. The cooperation here was achieved through the selection of the global best particle from the whole population. The life-cycle PSO (LCPSO) presented by Krink and Lovberg [Krink, 2002] [Lovberg, 2002] is yet another example of a sub-swarm based PSO. In LCPSO a particle can be in one of three phases: it can be a regular PSO particle, a GA individual, or a stochastic hill-climber. An individual particle is allowed to change from one phase to another if its fitness does not improve over a number of consecutive iterations. The LCPSO can thus be considered to consist of three sub-swarms, each consisting of individuals in one of the three phases. The cooperation between the sub-swarms results when an individual changes its phase and moves to another sub-swarm. Kennedy presented a clustering-based PSO by applying the concept of stereotyping to PSO [Kennedy, 2000]. In this approach the particles are first clustered into a predefined number of clusters using a simple clustering algorithm. The particles then use the average of the personal bests of their cluster as their personal best, while the global best is defined in the usual manner. Here too there are multiple sub-swarms, and the cooperation between them is achieved by each sub-swarm using the same global best.
Thompson et al. also presented a clustering-based PSO algorithm [Thompson, 2003]. It differed from the approach of Kennedy in the sense that when updating the velocity of a cluster center, the cluster's best position and the global best position are used, whereas when updating the velocity of other particles, the cluster center, the particle's personal best and the cluster's best position are used. In this approach the cooperation between the sub-swarms is achieved by the use of the global best when updating the velocities of cluster centers.
Van den Bergh and Engelbrecht proposed the cooperative split PSO (CPSO-Sk) [van den Bergh, 2000] [van den Bergh, 2002] [van den Bergh, 2004]. In this approach each particle is split into K separate parts of smaller dimension. Each part is then optimized using a separate sub-swarm. The cooperation between the sub-swarms is maintained by concatenating the global best positions from the K sub-swarms and using the result as a context vector representing the complete solution. To calculate the fitness of a particle, it is swapped into the corresponding position of the context vector, keeping all other components of the vector constant. The original fitness function is then used to calculate the fitness. For an example of the competitive sub-swarm based PSO variants we turn our attention to the PSO proposed by Silva et al. They introduced the predator-prey PSO [Silva, 2002], in which a single predator particle is used and the rest of the particles are considered prey. The predator constantly moves towards the global best prey particle. The prey particles use a modified velocity update equation in which, in addition to being influenced by the personal and global best, the prey is also repelled from the predator particle. The influence of the predator particle on a prey particle grows exponentially as the prey particle comes closer to the predator.
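The context-vector evaluation at the heart of CPSO-Sk can be sketched as follows (illustrative assumptions: the K parts are equally sized and the helper names are ours):

```python
def cpso_fitness(part_index, subvector, context, fitness_fn):
    """CPSO-Sk-style evaluation sketch: swap a sub-swarm particle's
    low-dimensional part into the shared context vector (built from the
    global bests of all K sub-swarms) and score the full vector."""
    trial = list(context)                        # copy the context vector
    start = part_index * len(subvector)          # assumes equally sized parts
    trial[start:start + len(subvector)] = subvector
    return fitness_fn(trial)

# usage: sub-swarm 1 of a 4-dimensional problem split into K = 2 parts
sphere = lambda x: sum(v * v for v in x)         # simple benchmark fitness
context = [1.0, 1.0, 1.0, 1.0]                   # concatenated global bests
f = cpso_fitness(1, [0.0, 0.0], context, sphere) # evaluates [1, 1, 0, 0]
```

The particle is thus always judged in the context of the best components found so far by the other sub-swarms, which is what makes the split cooperative rather than independent.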
2.2.2 PSO with Niching
Niching algorithms are algorithms which are capable of locating multiple solutions to a problem. Niches can be defined as partitions of an environment, each representing one solution to a problem. Speciation is the process of finding multiple niches or solutions. Species are the partitions of a population competing within an environment; they are the groups of particles which converge on a single niche [Engelbrecht, 2006].
Niching algorithms can work either in sequential mode, parallel mode or a combination of both. In sequential mode, the algorithm tries to find the optima sequentially. Once optima have been located, the search space is adjusted so that the found optima are not included in future searches. In parallel mode, the algorithm will try to find the niches in parallel and will keep a record of these niches for the whole duration of the search process. Algorithms combining both sequential and parallel approaches will generally try to find the niches in a sequential manner. Once a niche has been found, it is not
immediately refined but is maintained in parallel and refined together with all other found niches [Engelbrecht, 2006]. Sequential niching can be implemented by executing the PSO multiple times, starting with a different swarm each time. The objective function is changed after a solution is found, so that particles which move towards already found solutions are penalized. This is a simple method of implementing sequential niching and was employed by Kassabalidis et al. [Kassabalidis, 2002a] [Kassabalidis, 2002b] to invert a trained neural network using PSO. Another sequential niching technique is to use objective function stretching. In this approach, once an optimum is located, it is removed from the search space by using objective function stretching; the swarm is then reinitialized and allowed to search for other minima. This process continues in an iterative manner until a certain threshold has been reached. This technique was used by Parsopoulos et al. in a niching-based approach for locating multiple optima of a continuous function [Parsopoulos, 2001a] [Parsopoulos, 2001b] [Parsopoulos, 2004]. Brits et al. proposed the nbest PSO [Brits, 2002a] [Brits, 2002c], which is a parallel niching PSO algorithm. Brits suggested the use of a redefined objective function which rewards a particle when it is closer to any of the possible solutions. In addition to this, spatial neighborhoods are used to create species. The particles move towards the best solution of their nearest topological neighborhood, which is defined as the center of mass of the positions of the n closest particles at any given time. Although nbest PSO can efficiently locate multiple optima, it has difficulty in maintaining niches.
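The idea of removing a found optimum from the search space can be sketched with a simple distance-based penalty (a deliberately simplified stand-in, not the exact objective-function-stretching transform of Parsopoulos et al.; radius and penalty values are illustrative assumptions):

```python
import math

def penalized(fitness_fn, found_optima, radius=0.5, penalty=1e6):
    """Simplified sequential-niching sketch for minimization: positions
    near an already-found optimum receive a large penalty, pushing a
    reinitialized swarm towards unexplored regions."""
    def wrapped(x):
        value = fitness_fn(x)
        for opt in found_optima:
            dist = math.dist(x, opt)
            if dist < radius:
                value += penalty * (radius - dist)  # grows as x nears the optimum
        return value
    return wrapped

# usage: penalize the region around the already-located optimum (0, 0)
f = penalized(lambda x: sum(v * v for v in x), found_optima=[[0.0, 0.0]])
```

Outside the penalized radius the objective is unchanged, so new swarms still see the original landscape everywhere except around previously found solutions.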
2.2.3 Sub-swarm based PSO with Niching
There are some PSO variants which are capable of finding multiple solutions to multimodal problems by employing a sub-swarm based niching approach. These include the NichePSO and Species based PSO.
In the NichePSO presented by Brits et al. [Brits, 2002a] [Brits, 2002b] [Brits, 2002c] [Brits, 2005], the algorithm initially has a single main swarm with all the particles. The main swarm then continues to spawn smaller sub-swarms as it converges towards potential solutions. Each sub-swarm thus maintains a separate niche and the particles within the sub-swarm continue to evolve and improve the solution. When further improvements to the
solutions found by the sub-swarms are not possible, the algorithm is considered to have converged. The sub-swarms that are created are completely independent and no information is exchanged between them. During the course of execution, if a particle from the main swarm moves into an area being explored by a sub-swarm, that particle is absorbed into the sub-swarm. Whenever two sub-swarms start to explore the same optimum, they are merged together, forming a bigger sub-swarm which takes advantage of the experience gained by both. Brits et al. [Brits, 2003] have compared the scalability of NichePSO with two genetic algorithm niching techniques, sequential niching and deterministic crowding. The authors concluded that NichePSO performs better on more complex problems. The species-based particle swarm optimization (SPSO) proposed by Parrott and Li [Parrot, 2006] relies on the concept of speciation to maintain niches and find multiple optima for multimodal optimization problems. During each iteration, SPSO identifies particles which are to be used as species seeds. A species seed has the best fitness amongst all particles in its species and is considered the neighborhood best for the other particles in that species.
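The species-seed determination in SPSO can be sketched as follows (illustrative assumptions: a Euclidean species radius and our own function names; SPSO's bookkeeping of species membership is omitted):

```python
import math

def find_species_seeds(particles, fitness, radius):
    """SPSO-style seed identification sketch: scan particles in descending
    fitness order; a particle becomes a new species seed unless it lies
    within `radius` of an already-chosen seed, in which case it joins
    that seed's species."""
    order = sorted(range(len(particles)), key=lambda i: fitness[i], reverse=True)
    seeds = []
    for i in order:
        if all(math.dist(particles[i], particles[s]) > radius for s in seeds):
            seeds.append(i)
    return seeds  # indices of species seeds (neighborhood bests)
```

Each seed then serves as the neighborhood best for the particles of its species, so every species converges on its own niche.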
2.2.4 PSO for Dynamic Optimization
Now we move on to dynamic optimization. Carlisle and Dozier [Carlisle, 2000] have presented a technique which allows the PSO to be adapted for dynamic environments. This technique involves periodically resetting the particle memories to the current positions, allowing the swarm to track the optima with very little overhead. At the expense of an additional function evaluation, the 'reset' can also be triggered by detecting when the optimum has moved too far. Eberhart and Shi [Eberhart, 2001] have employed the PSO in tracking optima in a dynamic environment. Hu and Eberhart [Hu, 2002] have suggested that a fixed number of particles be re-randomized to find new optima as soon as a change is detected in the environment. Jin and Branke [Jin, 2005] have discussed some of the existing evolutionary optimization techniques for dynamic environments.
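The detect-and-reset idea behind these techniques can be sketched as follows (illustrative assumptions: particles are represented as dictionaries with fields we named ourselves, and a single sentry particle is used for change detection):

```python
def detect_and_respond(swarm, fitness_fn, sentry_index=0):
    """Dynamic-PSO housekeeping sketch in the spirit of the techniques
    above: re-evaluate a sentry particle's remembered best; if its fitness
    has changed, the environment has changed, so reset all personal-best
    memories to the current positions to discard stale information."""
    sentry = swarm[sentry_index]
    if fitness_fn(sentry["pbest_pos"]) != sentry["pbest_val"]:  # change detected
        for p in swarm:
            p["pbest_pos"] = list(p["pos"])                     # forget old memories
            p["pbest_val"] = fitness_fn(p["pos"])
        return True
    return False
```

The sentry re-evaluation is the "additional function evaluation" mentioned above; the memory reset is the low-overhead response that lets the swarm resume tracking the moved optimum.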
Blackwell and Branke [Blackwell, 2006] have proposed that the population of particles be split into a set of swarms interacting locally through an exclusion parameter and globally through an anti-convergence operator. In addition, charged or quantum particles are used to maintain diversity. The species-based particle swarm optimization (SPSO) of Parrott and Li [Parrot, 2006], mentioned above in the context of multimodal problems, can also be used for dynamic optimization problems.
2.3 Honey Bee Foraging Behavior Inspired Particle Swarm Optimization Algorithm (HBF-PSO)
We now describe a novel algorithm, which we call HBF-PSO [Baig, 2007], and show that it can be applied to unimodal, multimodal and dynamic optimization problems. The main characteristic of HBF-PSO is the incorporation of three new features in the PSO algorithm. We propose simultaneous exploration and exploitation of the fitness landscape, along with a blackboard type of information sharing mechanism. These features allow parallel searching for fitness peaks and relocation of the search towards promising areas. They are derived from a model of the food foraging behavior of honey bees. It has been observed that scout bees gather information and the bees at the hive receive asynchronous updates by means of the waggle dances of the scout bees. If a promising flower patch is discovered, bees are sent from the hive for foraging, with the number of allocated bees proportional to the quality of the patch. In HBF-PSO, several swarms of particle-bees forage (search for peak fitness) in a collective and coordinated manner. A swarm assembles in a promising region and then moves towards the top of the peak. Furthermore, there are scout particle-bees that are placed randomly and perform a local search. The foraging and scouting are allowed to continue for a limited duration, sufficient for a swarm to forage its region of allocation (converge on peak fitness). Corresponding to the waggle dance of the bees at the hive, a blackboard is maintained which may be updated asynchronously whenever some useful information is available either from the scouting or the foraging particle-bees. Particle-bees are reallocated after analyzing the information on the blackboard.
2.3.1 Related Work
Some researchers have tried in the past to mimic the behavior of honey bees in their algorithms. Abbass [Abbass, 2001a] [Abbass, 2001b] has presented the Marriage in honey Bees Optimization (MBO) algorithm, which is modeled after the mating flight of queen bees. Pham [Pham, 2006a] [Pham, 2006b] has proposed the Bees Algorithm, which is inspired by the food foraging behavior of honey bees; its main characteristic is the static sampling of the fitness landscape. The Artificial Bee Colony algorithm has been proposed by Karaboga [Karaboga, 2005], [Basturk, 2006], [Karaboga, 2007]. It is a population-based algorithm in which a possible solution to the optimization problem is represented by the position of a food source and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution. Examples of other applications based on honey bee behavior include a routing algorithm [Wedde, 2004] [Farooq, 2009]. Several entomologists and sociologists have also tried to model the behavior of honey bees [Loengarov, 2006]. We elaborate a little more on the Bees Algorithm and the Artificial Bee Colony algorithm in the following paragraphs because they are relatively closer to our proposed algorithm.

Bees Algorithm: Pham et al. [Pham, 2005], [Pham, 2006] have proposed the Bees Algorithm, which is a population-based algorithm that mimics the food foraging behavior of swarms of honey bees. In its basic version, the algorithm performs a kind of static neighborhood sampling combined with static random sampling. The Bees Algorithm starts with the scout bees being placed randomly in the search space. The fitness of the scout bees is evaluated, after which the bees that have the highest fitness are chosen as selected bees, and the sites visited by them are chosen for static neighborhood sampling. The algorithm takes samples in the neighborhood of the selected sites by placing bees in them, taking more samples near the best sites.
For each site only the bee with the highest fitness is selected to form the next bee population. The remaining bees are used for random sampling. These steps are repeated until a stopping criterion is met.
Artificial Bee Colony algorithm: The Artificial Bee Colony algorithm has been proposed by Karaboga [Karaboga, 2005], [Basturk, 2006], [Karaboga, 2007]. It is a population-based algorithm. The colony consists of three groups of bees: employed bees, onlookers and scouts. A possible solution to the optimization problem is represented as the position of a food source, and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution. The number of employed bees is equal to the number of solutions in the population. At the first step, a randomly distributed initial population (food source positions) is generated. An employed bee produces a modification of the source position in its memory and evaluates the fitness at the new position (calculates the nectar amount). If the fitness (nectar amount) of the new position is higher than that of the previous position, the bee memorizes the new source position and forgets the old one; otherwise it keeps the old position in memory. After all the employed bees have evaluated the new positions, the onlookers go to these positions, with more onlookers going towards better positions and fewer onlookers going towards less fit positions. The onlookers also produce a modification of the position and evaluate the fitness there. The scouts randomly select positions to evaluate. This cycle continues until the termination criterion is met. Also, if the fitness of a certain employed bee does not improve for some time, that employed bee is converted to a scout bee.
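The employed-bee step of the Artificial Bee Colony algorithm can be sketched as follows (a maximization sketch under our own naming; each bee perturbs one randomly chosen dimension of its food source relative to a random neighbor and keeps the new position only if it is fitter):

```python
import random

def abc_employed_step(foods, fitness, fitness_fn):
    """One employed-bee pass of ABC: candidate = x_ij + phi * (x_ij - x_kj)
    for a random neighbor k and random dimension j, with greedy selection."""
    n, dim = len(foods), len(foods[0])
    for i in range(n):
        k = random.choice([j for j in range(n) if j != i])  # random neighbor source
        j = random.randrange(dim)                           # random dimension
        phi = random.uniform(-1.0, 1.0)
        candidate = list(foods[i])
        candidate[j] = foods[i][j] + phi * (foods[i][j] - foods[k][j])
        cand_fit = fitness_fn(candidate)
        if cand_fit > fitness[i]:                           # greedy replacement
            foods[i], fitness[i] = candidate, cand_fit
    return foods, fitness
```

The greedy selection guarantees that a food source's remembered fitness never decreases, mirroring the "memorize the new source and forget the old one" rule described above.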
2.3.2 Food Foraging Behavior of Honey Bees
In this section, we present our model of the food foraging behavior of honey bees. For the development of the HBF-PSO algorithms the actual foraging process is simplified as follows. At the start of the food foraging process scout bees are sent out from the hive to look for flower patches. These bees return with information about the quantity, quality and direction of flower patches and communicate this information to the other bees by means of a waggle dance. Additional bees are then recruited based on available information to forage the discovered flower patches, with more bees being attracted towards richer flower patches. The scouting and foraging process continues simultaneously during the whole harvesting season.
Our model of the above biological process is as follows. The colony is supposed to consist of a fixed number of bees. Initially all the bees go out in random directions to find
flower patches. After a certain time they all return to the bee hive and analyze the collected information about the richness (quality and quantity) of the patches that they have discovered. Several bees go out to forage the discovered flower patches and the other bees go out to scout the remaining region randomly. After spending a certain time in foraging/scouting, all of the bees again return to the bee hive and analyze the collected information. These alternate bouts of analysis and foraging/scouting are repeated until the whole area has been sufficiently searched. These two phases are explained in more detail below.

Book-Keeping & Analysis Phase: Some aspects of the history of scouting and foraging are recorded and analyzed at the hive (corresponding to the waggle dance). During the analysis phase, the richness of each flower patch is compared with the richness of the other patches. The bees decide which of the patches need to be foraged thoroughly in the next round. Each selected patch is assigned a separate swarm composed of a few bees. The remaining bees are assigned to perform random search. A richness record of each patch being foraged is kept and monitored during subsequent analyses. Based on this record a patch may be declared as completely foraged, after which bees are no longer assigned to it. Furthermore, the richer regions are foraged on a priority basis. A current patch may be abandoned on the discovery of a new, richer patch. After the foraging of the richer patches is complete the bees may return to this abandoned patch again.

Scouting & Foraging Phase: Each selected flower patch is foraged by a swarm of several bees in a cooperative and collective manner. On the other hand, each scout bee (starting from an arbitrary point) explores its surrounding region locally and without any help from other bees. This foraging and scouting process continues for some pre-specified time duration.
At the end of that time period the bees gather again at their hive, pool the information and start the analysis phase.
2.3.3 HBF-PSO Algorithm
In HBF-PSO one bee is one particle, i.e. one complete solution of the problem. A fixed number of bees (h) are created and placed randomly in the fitness landscape. The fitness at each of these points is calculated. The points are sorted on the basis of their fitness and bee swarms are created around each of the m best points. If any two or more of the best points are near one another, they are considered as belonging to the same region and only one swarm is created for them. In other words, overlapping of swarms is not allowed during their creation. This condition ensures that there is at most one exploitation per promising region. The creation of swarms means that n bees have been assigned to each of these swarms. The remaining bees (f = h - m * n) are used as scouts and placed randomly. These two types of bees are called foraging bees and scouting bees, respectively. The foraging and scouting bees search for a small, fixed number of iterations in the following manner. The n foraging bees search for a peak as a swarm in a coordinated and collective manner according to the classical PSO algorithm. Meanwhile, each of the f scout bees searches alone around its initial location. This is done according to the PSO algorithm without the social component. After a few iterations of simultaneous foraging and scouting, a global analysis takes place. The best fitness reported from each of the swarms and the best fitness of each of the scout bees are all sorted, and the swarms for the next iteration are determined on the basis of this information. An old swarm is allowed to survive only if its best fitness comes within the first m best points. If one of these best points has been newly discovered by a scout bee, then a new swarm is created around it. As mentioned before, no overlapping is allowed during the creation of swarms. This eliminates one of any two swarms which have come close to one another (the swarm with the lower fitness is eliminated).
This manner of creating swarms gives the algorithm a preference for exploring the higher-fitness regions on a priority basis, which usually leads to the discovery of higher peaks first in multi-peak environments.
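The swarm-creation step described above can be sketched as follows (illustrative assumptions: the overlap test is implemented as a simple Euclidean-distance check between neighborhood centers, and the function name is ours):

```python
import math

def select_swarm_centers(points, fitness, m, radius):
    """HBF-PSO swarm-creation sketch: walk the candidate points in
    descending fitness order and keep a point as a swarm center only if
    its neighborhood does not overlap an already-selected one, so each
    promising region receives at most one swarm."""
    order = sorted(range(len(points)), key=lambda i: fitness[i], reverse=True)
    centers = []
    for i in order:
        if len(centers) == m:
            break
        if all(math.dist(points[i], points[c]) > 2 * radius for c in centers):
            centers.append(i)
    return centers  # indices of the m (or fewer) swarm centers
```

Because candidates are visited in fitness order, the highest peaks claim swarms first, which is exactly the priority behavior noted above.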
If a swarm has converged on a peak its best fitness value becomes stagnant. This phenomenon is kept under observation for each swarm. If a swarm is not reporting any
improvement in fitness during a few consecutive analysis phases, then it is assumed to have reached a peak. The swarm is disbanded and the region around its best fitness is considered as completely foraged. A completely foraged region is then considered taboo and is not included in any further search. In the case of a dynamic optimization problem, upon declaring a region as taboo, a bee is placed on the best position to monitor it for change. The fitness of this bee is re-evaluated during each analysis phase. Once the fitness is found to have changed, the region is removed from the foraged region list and becomes available for search in subsequent iterations. Note that this monitoring is only employed in dynamic environments. The scouting/foraging followed by analysis continues until a stopping condition is fulfilled. This condition can be a prescribed number of cycles, the discovery of a pre-specified number of peaks, sufficient foraging of the search space (for example, if we are unable to find unexplored regions with a fitness greater than 10% of the best fitness), or a combination of these three conditions.
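The stagnation handling performed during an analysis phase can be sketched as follows (illustrative assumptions: swarms are represented as dictionaries with fields we named ourselves, and the stagnation limit of three phases is an arbitrary example value):

```python
def analysis_phase_update(swarms, taboo_regions, stagnation_limit=3):
    """HBF-PSO stagnation sketch: a swarm whose best fitness has not
    improved for `stagnation_limit` consecutive analysis phases is assumed
    to have reached its peak; it is disbanded and its best position is
    recorded as a taboo (completely foraged) region."""
    survivors = []
    for s in swarms:
        if s["best_val"] <= s["prev_best_val"]:
            s["stagnant_count"] += 1      # no improvement this phase
        else:
            s["stagnant_count"] = 0       # progress made, reset the counter
        s["prev_best_val"] = s["best_val"]
        if s["stagnant_count"] >= stagnation_limit:
            taboo_regions.append(s["best_pos"])  # region completely foraged
        else:
            survivors.append(s)
    return survivors, taboo_regions
```

In a dynamic problem, each entry of the taboo list would additionally carry a monitoring bee whose fitness is re-checked every analysis phase, as described above.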
Figure 2-1 Overview of HBF-PSO algorithm
2.3.4 Summary of HBF-PSO Algorithm
An overview of the HBF-PSO algorithm is presented in Figure 2-1 and the flowchart is given in Figure 2-3. The algorithm can be summarized in the steps outlined in Figure 2-2.
Begin
    h = population size; m = number of swarms;
    Initialize parameters;
    Generate initial population of bees by evaluating fitness at random places;
    While (termination criterion is not met)
        Compile a list of all best fitnesses;
        Sort this list in descending order;
        While (m swarms have not been created)
            Remove the current top fitness from the list;
            If (neighborhood overlaps any other neighborhood of a swarm)
                Continue;
            End if
            If (neighborhood overlaps any area declared as completely foraged)
                Continue;
            End if
            If (retrieved fitness is due to a swarm)
                Continue with the old swarm;
            Else
                Place one bee on the position where that fitness was obtained;
                Place the other n - 1 bees randomly in the neighboring region around it;
            End if
        End while
        Allocate remaining bees for random scouting;
        Run PSO for each swarm;
        Run PSO without social component for scout bees;
        If (any swarm has converged and become stagnant)
            Mark the corresponding region as completely foraged;
            If (optimization problem is dynamic)
                Place an extra bee at the global best position of the foraged region to monitor it;
            End if
        End if
        If (optimization problem is dynamic)
            Evaluate the fitness of all extra bees allocated for the monitoring of foraged regions;
            If (fitness has changed)
                Remove region from foraged regions list;
            End if
        End if
    End while
End
Figure 2-2 Summary of HBF-PSO algorithm
2.3.5 Parameters of HBF-PSO
The list of parameters which need to be set by the user is as follows: (i) the total number of bees to be used, (ii) the number of bees in each swarm, (iii) the number of scouting bees, (iv) the number of swarms, (v) the size of the neighborhood where swarms are initially formed, (vi) the number of iterations in a foraging/scouting phase, (vii) the parameters of the PSO algorithm used by the foraging bees for collective search, and (viii) the parameters of the PSO algorithm used by the scout bees for isolated searching. For the time being we leave aside the parameters of points (i), (vii) and (viii), since they can be borrowed from the literature on classical PSO. We concentrate on the parameters listed from (ii) to (vi).

The ratio of foraging bees to scouting bees: This ratio depends on the type of fitness landscape being explored. It can be fixed for the entire run of the algorithm or kept variable as a function of the fitness values of the current patches being explored. If the patches are of high fitness we can assign more bees to get to their peaks quickly and later on reassign them for exploration.

Number of elite regions (swarms) for a given foraging period: We can have a fixed number of swarms throughout the duration of the HBF-PSO algorithm. New swarms may be created and old swarms disbanded, but the total number may be kept the same. We can also choose to have a variable number of swarms by imposing a minimum fitness threshold on the fitness values competing for the creation of a new swarm. In both cases the maximum number of swarms allowed will have to be a reasonable number, keeping in view the number of bees available for foraging (at least two or three bees have to be assigned to each swarm and we also have to reserve some bees for scouting purposes).

The size of a patch (i.e. the region defining the patch): Regions may need to be defined for swarm formation at the beginning of the foraging phase.
The size of these regions can be kept constant throughout the execution of the algorithm (e.g. one tenth of the range of a dimension, centered on the best fitness point known for that patch at the time of analysis). The region can also be defined in a variable way after a detailed analysis of the recent foraging history of a patch.
Figure 2-3 Flowchart of HBF-PSO algorithm.
The number of iterations defining the foraging period: This can be fixed at a reasonable arbitrary number (e.g. 15). It can also be made self-adaptive: it can be shortened if we are reaching fitness plateaus earlier or if we observe oscillations in swarm movement (indicating multiple peaks in the same area), and it can be lengthened if we observe that swarms are being created in the same region again and again in consecutive analyses.

In this chapter, after giving an overview of sub-swarm based PSO algorithms, PSO algorithms with niching and sub-swarm based PSO algorithms with niching, we presented our proposed honey bee foraging behavior inspired PSO algorithm. In the next chapter we present the experimentation that we have carried out to test its performance.