The Automatic Adjustment Of Three Parameters Biology Essay


In this chapter we explore the automatic adjustment of three parameters of the HBF-PSO algorithm. First, we describe a method for the automatic adjustment of the swarm radius. Second, we use a species seed identification algorithm to create swarms automatically, which relieves us from setting the user-defined parameters for the number of swarms and the number of bees per swarm. Experimental results for both methods are reported.

4.1 Self Adjusting Neighborhood (HBF-PSOn)

While creating new swarms in HBF-PSO, a neighborhood size is employed which restricts the initialization of the particles of a swarm to within a specified region. The neighborhood size is a sensitive parameter, as it greatly affects the performance of the algorithm. Ideally, the neighborhood size should be half of the distance between the two closest optima; this ensures that the region in which a single swarm is initialized contains only one optimum. We propose an extension to the HBF-PSO algorithm, which we call the Honey Bee Foraging Behavior Inspired PSO Algorithm with Self-Adjusting Neighborhood (HBF-PSOn), which automatically adjusts and adapts the neighborhood size during execution [Rashid, 2009a]. In this extension, whenever a swarm converges to a solution, we determine the number of peaks found so far, recalculate the distance between the two closest peaks, and set the neighborhood size equal to half of this distance.
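The radius update rule described above can be sketched in a few lines of Python. This is a minimal illustration, not code from the thesis: the function name and the representation of peaks as coordinate tuples are our own assumptions.

```python
import itertools
import math

def updated_neighborhood_radius(peaks, current_radius):
    """Recompute the swarm-initialization radius as half the distance
    between the two closest peaks found so far (illustrative sketch;
    the peak list would come from the converged swarms)."""
    if len(peaks) < 2:
        return current_radius  # not enough peaks to estimate their spacing
    closest = min(
        math.dist(a, b) for a, b in itertools.combinations(peaks, 2)
    )
    return closest / 2.0
```

For example, with peaks at (0, 0), (4, 0) and (4, 3), the closest pair is (4, 0) and (4, 3) at distance 3, so the radius becomes 1.5.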

Figure 4-1 Flow chart of HBF-PSOn algorithm

Figure 4-2 Summary of HBF-PSOn algorithm

HBF-PSOn is similar to the HBF-PSO algorithm; only one new step is added. The flow chart of the HBF-PSOn algorithm is presented in Figure 4-1, and the algorithm is summarized in the steps outlined in Figure 4-2.

Table 4-1 Accuracy, number of function evaluations and success rate of HBF-PSOn and HBF-PSO

| Function | HBF-PSOn accuracy (mean ± std err) | HBF-PSOn evals. (mean ± std err) | HBF-PSOn SR | HBF-PSO accuracy (mean ± std err) | HBF-PSO evals. (mean ± std err) | HBF-PSO SR |
|---|---|---|---|---|---|---|
| 2D F1 | 5.54E-07 ± 1.41E-07 | 1.91E+03 ± 5.23E+02 | 100% | 2.87E-06 ± 1.19E-07 | 6.96E+02 ± 3.21E+02 | 100% |
| 2D F2 | 4.38E-05 ± 3.12E-05 | 1.11E+04 ± 2.22E+03 | 100% | 1.76E-05 ± 3.80E-05 | 1.26E+04 ± 3.83E+03 | 92% |
| 2D F3 | 4.53E-06 ± 4.31E-06 | 4.30E+03 ± 2.06E+03 | 100% | 9.47E-07 ± 6.49E-07 | 4.10E+03 ± 1.27E+03 | 100% |
| 2D F4 | 1.80E-05 ± 1.11E-05 | 7.81E+03 ± 8.34E+02 | 100% | 1.65E-05 ± 1.93E-05 | 7.79E+03 ± 8.62E+02 | 100% |
| 2D F5 | 6.34E-04 ± 2.04E-03 | 9.40E+03 ± 5.01E+03 | 84% | 1.23E-03 ± 2.75E-03 | 6.55E+03 ± 7.68E+03 | 84% |
| 10D F1 | 3.27E-05 ± 2.26E-05 | 1.06E+04 ± 1.53E+03 | 100% | 3.06E-05 ± 2.19E-05 | 1.15E+04 ± 2.99E+03 | 100% |
| 10D F2 | 3.27E-01 ± 4.60E-01 | 1.00E+05 ± 0.00E+00 | 0% | 4.60E-01 ± 6.64E-01 | 1.00E+05 ± 0.00E+00 | 0% |
| 10D F3 | 2.51E+00 ± 1.52E+00 | 9.73E+04 ± 1.34E+04 | 4% | 2.31E+00 ± 9.82E-01 | 9.71E+04 ± 1.44E+04 | 4% |
| 10D F4 | 5.33E-05 ± 2.27E-05 | 3.18E+04 ± 1.43E+04 | 100% | 5.42E-05 ± 2.30E-05 | 3.06E+04 ± 9.66E+03 | 100% |
| 10D F5 | 8.47E-02 ± 3.70E-02 | 1.00E+05 ± 0.00E+00 | 0% | 7.82E-02 ± 4.23E-02 | 1.00E+05 ± 0.00E+00 | 0% |
| 30D F1 | 5.63E-05 ± 3.13E-05 | 7.69E+04 ± 4.72E+04 | 100% | 5.53E-05 ± 3.03E-05 | 7.32E+04 ± 2.06E+04 | 100% |
| 30D F2 | 1.08E+01 ± 8.13E+00 | 3.00E+05 ± 0.00E+00 | 0% | 1.11E+01 ± 9.06E+00 | 3.00E+05 ± 0.00E+00 | 0% |
| 30D F3 | 2.32E+01 ± 6.05E+00 | 3.00E+05 ± 0.00E+00 | 0% | 2.12E+01 ± 5.97E+00 | 3.00E+05 ± 0.00E+00 | 0% |
| 30D F4 | 2.26E+00 ± 6.42E-01 | 3.00E+05 ± 0.00E+00 | 0% | 1.98E+00 ± 7.41E-01 | 2.95E+05 ± 2.63E+04 | 4% |
| 30D F5 | 6.26E-02 ± 4.99E-02 | 2.76E+05 ± 6.76E+04 | 12% | 4.98E-02 ± 4.10E-02 | 2.61E+05 ± 8.07E+04 | 20% |

Accuracy, defined as the best error value relative to the global optimum, achieved by HBF-PSOn on the 2, 10 and 30 dimensional versions of the Sphere (F1), Rosenbrock (F2), Rastrigin (F3), Ackley (F4) and Griewank (F5) functions. The error values are averages over 25 runs for each function. The total number of function evaluations allowed is 20000, 100000 and 300000 for the 2, 10 and 30 dimensional versions, respectively. These values are compared with those achieved by HBF-PSO on the same suite of test functions. Also shown for both algorithms are the number of function evaluations required to achieve an error value of 0.0001 and the success rate, defined as the fraction of runs in which the algorithm was able to find the global peak. Better performance is shown in boldface.

4.1.1 Single Optimum in Multimodal Environments

HBF-PSOn was run on the suite of five problems described in Section 3.2.1. Table 4-1 presents the performance of HBF-PSOn on the 2, 10 and 30 dimensional versions of the test functions. HBF-PSOn achieved better accuracy only on the 2 dimensional Sphere function and the 10 and 30 dimensional Rosenbrock functions; on all other test functions the accuracy of HBF-PSO was better. The number of function evaluations required by HBF-PSOn to achieve an accuracy of 0.0001 was lower than that of HBF-PSO only for the 2 dimensional Rosenbrock function and the 10 dimensional Sphere function; for all other test functions it was greater than or equal to that of HBF-PSO. HBF-PSOn achieved a better success rate only on the 2 dimensional Rosenbrock function; on all other test functions the success rate of HBF-PSO was better. A comparison of the results of HBF-PSOn and HBF-PSO shows that the self-adjusting neighborhood of HBF-PSOn is, in general, not more successful than the user-defined neighborhood used in HBF-PSO.

4.1.2 Multiple Optima in Multimodal Environments

HBF-PSOn was also run on the test functions described in Section 3.2.1. Table 4-2 presents the accuracy and success rate obtained by HBF-PSOn. Accuracy is calculated by averaging the error of the particles closest to the optima and then averaging the results over 50 runs. The results of HBF-PSO, SPSO and NichePSO are also presented in Table 4-2 to allow easy comparison. The reported accuracies of SPSO and NichePSO are taken from [Parrot, 2006] and [Brits, 2002b], respectively.

HBF-PSOn achieved a success rate of 100% on all test functions. In terms of accuracy, HBF-PSOn performed slightly better on functions F9 and F10, whereas HBF-PSO performed better on F6 and F8. Both performed identically on F7. Overall, the performance of HBF-PSO and HBF-PSOn is almost identical, which was expected because the neighborhood size used for HBF-PSO is ideal for these test functions.

Table 4-2 Accuracy of HBF-PSOn and HBF-PSO

| Function | HBF-PSOn | HBF-PSO | SPSO [Parrot, 2006] | NichePSO [Brits, 2002b] |
|---|---|---|---|---|
| F6 | 3.73E-12 ± 7.14E-12 | 3.67E-15 ± 6.75E-15 | 0.00 ± 0.00 | 7.68E-05 ± 3.11E-05 |
| F7 | 0.00 ± 0.00 | 0.00 ± 0.00 | 4.00E-17 ± 2.26E-17 | 9.12E-02 ± 9.09E-03 |
| F8 | 1.66E-15 ± 3.12E-15 | 1.02E-15 ± 1.94E-15 | 3.20E-14 ± 3.20E-14 | 5.95E-06 ± 6.87E-06 |
| F9 | 1.45E-11 ± 8.53E-18 | 1.45E-11 ± 1.25E-17 | 1.72E-07 ± 0.00 | 8.07E-02 ± 9.45E-03 |
| F10 | 4.06E-31 ± 2.32E-32 | 4.81E-31 ± 1.56E-31 | 2.19E-09 ± 2.19E-09 | 4.78E-06 ± 1.46E-06 |

Accuracy, defined as the average of the best error values relative to the global optima, achieved by HBF-PSOn on the five test functions defined in Section 3.2.1. The error values are averages over 50 runs for each function. The total number of iterations allowed is 2000. These values are compared with HBF-PSO results from Table 3-4, SPSO results quoted from [Parrot, 2006] and NichePSO results quoted from [Brits, 2002b] on the same suite of test functions. Better performance is shown in boldface.

4.1.3 Dynamic Optimization Experiments

We also tested the HBF-PSOn algorithm on dynamic optimization problems. The test functions, parameters and performance measures are the same as given in Sections 3.3.1 to 3.3.3. The offline errors for HBF-PSOn are given in Table 4-3. The results indicate that the user-defined radius of HBF-PSO is better than the self-adjusted radius of HBF-PSOn on the moving peaks benchmark. HBF-PSOn performs relatively poorly because its radius overfits the current problem; when the environment changes, it cannot adapt to the new problem fast enough.

Table 4-3 Offline error after 90 environment changes (averaged over 50 runs)

| Peaks | HBF-PSOn | HBF-PSO | Modified PSO |
|---|---|---|---|
| 1 | 0.41096 ± 0.097164 | 0.386771 ± 0.093619 | 0.410522 ± 0.093723 |
| 2 | 0.616084 ± 0.145435 | 0.588506 ± 0.116014 | 0.601038 ± 0.148958 |
| 3 | 0.739823 ± 0.134723 | 0.721849 ± 0.119004 | 0.761835 ± 0.171065 |
| 4 | 0.974574 ± 0.356381 | 0.896369 ± 0.207682 | 0.940301 ± 0.384956 |
| 5 | 1.17242 ± 0.269992 | 1.074916 ± 0.274183 | 1.231222 ± 0.255825 |
| 6 | 1.268157 ± 0.234895 | 1.197776 ± 0.238554 | 1.246377 ± 0.310335 |
| 7 | 1.43168 ± 0.319109 | 1.364812 ± 0.270311 | 1.415025 ± 0.282955 |
| 8 | 1.601541 ± 0.298421 | 1.463505 ± 0.310638 | 1.625191 ± 0.375529 |
| 9 | 1.678506 ± 0.293537 | 1.610298 ± 0.268932 | 1.686908 ± 0.32827 |
| 10 | 1.776964 ± 0.413589 | 1.674262 ± 0.304011 | 1.723074 ± 0.291545 |

4.2 Species Identification Algorithm

We now incorporate a species identification algorithm into HBF-PSO; we call this variant the HBF-SPSO algorithm. The species identification algorithm used here was introduced by Petrowski [Petrowski, 1996] and has also been used in [Cioppa, 2007], [Li, 2002] and [Parrot, 2006]. To create species (or sub-swarms), we first identify species seeds from the whole population. A species seed has the highest fitness within its species and is considered the neighborhood best for all other particles in the same species. We first sort the particles in decreasing order of fitness. The particle at the top of the list becomes a species seed. We then process the remaining particles, starting from the one with the second highest fitness and continuing until we reach the particle with the worst fitness. If the particle being processed falls within a pre-specified radius of an already identified species seed, we make it a member of the species represented by that seed. If it does not fall within the radius of any species seed found so far, the particle becomes a species seed itself. We apply the species seed algorithm at the start of every HBF iteration to define the swarms. The species seed identification algorithm is shown in Figure 4-3.

Figure 4-3 Algorithm for determining species seeds

The algorithm takes as input Lsorted, a list containing all particles sorted in decreasing order of fitness. The set of species seeds S is initially empty. All particles are checked in turn (from best to least fit) against the species seeds found so far. If a particle does not fall within the radius rs of any seed in S, it becomes a new seed and is added to S. Figure 4-4 illustrates the working of this algorithm: applying it identifies s1, s2 and s3 as the species seeds. Note also that if the radii of two seeds overlap (e.g., s2 and s3 here), the seed identified earlier from Lsorted prevails over the one identified later. For example, s2 prevails over s3, so p belongs to the species led by s2.
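The species seed procedure described above can be sketched as follows. This is an illustrative implementation of Petrowski's scheme, not the thesis's own code: the function name, the use of index lists, and the Euclidean distance helper are our assumptions.

```python
import math

def find_species_seeds(particles, fitness, rs):
    """Sketch of species seed identification (Petrowski-style).
    `particles` is a list of position vectors, `fitness` the matching
    fitness values (higher is better), `rs` the species radius."""
    # Sort particle indices in decreasing order of fitness (Lsorted).
    order = sorted(range(len(particles)), key=lambda i: fitness[i], reverse=True)
    seeds = []       # indices of the species seeds (the set S)
    membership = {}  # particle index -> index of its species seed
    for i in order:
        for s in seeds:
            if math.dist(particles[i], particles[s]) <= rs:
                membership[i] = s  # joins the first (fittest) seed in range
                break
        else:
            seeds.append(i)        # no seed within rs: becomes a new seed
            membership[i] = i
    return seeds, membership
```

Because seeds are scanned in the order they were identified, a particle that lies within the radii of two overlapping seeds is assigned to the earlier (fitter) one, matching the behavior of s2 prevailing over s3 in Figure 4-4.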

Figure 4-4 Example of how to determine the species seeds

The flowchart of the HBF-SPSO algorithm is presented in Figure 4-6, and the algorithm is summarized in the steps outlined in Figure 4-5.

Figure 4-5 Summary of HBF-SPSO algorithm

As was the case with HBF-PSO, for dynamic optimization problems a bee is placed in step 6, after a region has been marked as completely foraged, at the global best position of that region to monitor it. After step 6, an additional step is also performed: the fitness of all bees allocated to monitoring foraged regions is re-evaluated, and if the fitness of a bee has changed, its region is removed from the foraged-regions list.
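The extra monitoring step can be sketched as re-evaluating each monitoring bee and reopening its region when the fitness at that position has changed. All names here (the `monitors` mapping, the tolerance parameter) are illustrative assumptions, not identifiers from the thesis.

```python
def check_foraged_regions(monitors, objective, tol=1e-9):
    """Re-evaluate the bees placed at the best positions of foraged
    regions. If a bee's fitness has changed, the environment has moved,
    so the region is removed from the foraged list and can be searched
    again. `monitors` maps region id -> (position, last_fitness)."""
    reopened = []
    for region, (pos, last_fitness) in list(monitors.items()):
        current = objective(pos)
        if abs(current - last_fitness) > tol:
            reopened.append(region)
            del monitors[region]  # no longer considered completely foraged
        else:
            monitors[region] = (pos, current)
    return reopened
```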

4.2.1 Single Optimum in Multimodal Environments

HBF-SPSO was run on the same suite of five problems used in the previous section. Tables 4-4 to 4-6 present the performance achieved by HBF-SPSO. In addition to using 2% of the search space range as the radius for determining species seeds, we also experimented with the method for determining the species radius proposed by Deb and Goldberg [Deb, 1989], which many researchers have used.

Figure 4-6 Flowchart of HBF-SPSO algorithm

Deb and Goldberg proposed a method to determine the niche radius given the number of global optima in the search space and the upper and lower bounds of each dimension of the search space. The niche radius r is calculated as:

r = \frac{1}{2\,q^{1/d}} \sqrt{\sum_{k=1}^{d} \left(x_k^{ub} - x_k^{lb}\right)^2}    (4.1)

where q is the number of global optima in the search space and x_k^{ub} and x_k^{lb} are the upper and lower bounds on the k-th dimension of the variable vector of d dimensions. The 2% radius and Deb & Goldberg's radius for each test function are given in Table 4-4.
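Eq. (4.1) can be computed with a short sketch (the function name is ours). With q = 1 global optimum, the 2 dimensional Sphere function with bounds ±5.12 gives a radius of 10.24·√2 / 2 ≈ 7.2408, matching the Deb & Goldberg column of Table 4-4.

```python
import math

def deb_goldberg_radius(lower, upper, q=1):
    """Deb & Goldberg niche radius for a box-bounded search space:
    r = sqrt(sum_k (upper_k - lower_k)^2) / (2 * q**(1/d)),
    where q is the number of global optima and d the dimensionality."""
    d = len(lower)
    diagonal = math.sqrt(sum((u - l) ** 2 for l, u in zip(lower, upper)))
    return diagonal / (2.0 * q ** (1.0 / d))
```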

Table 4-4 Species radius calculated at 2% and by Deb & Goldberg's method

| Function | Range in each dimension | 2% radius | Deb & Goldberg's radius (2D) | Deb & Goldberg's radius (10D) | Deb & Goldberg's radius (30D) |
|---|---|---|---|---|---|
| F1 | -5.12 to +5.12 | 0.2048 | 7.240773 | 16.19086 | 28.04339 |
| F2 | -30 to +30 | 1.2 | 42.42641 | 94.86833 | 164.3168 |
| F3 | -5.12 to +5.12 | 0.2048 | 7.240773 | 16.19086 | 28.04339 |
| F4 | -32 to +32 | 1.28 | 45.25483 | 101.1929 | 175.2712 |
| F5 | -600 to +600 | 24 | 848.5281 | 1897.367 | 3286.335 |

We first compare the performance of HBF-SPSO using the 2% species radius with that of HBF-SPSO using the species radius calculated by the Deb & Goldberg method. Table 4-5 presents the results of our experiments. HBF-SPSO with the 2% radius performed better only on the 2 dimensional Sphere and Ackley functions; on all other functions, HBF-SPSO with the radius calculated by Deb & Goldberg's method was better.

We also compared the performance of HBF-SPSO using the Deb & Goldberg method for calculating the species radius with that of HBF-PSO. The results are presented in Table 4-6. HBF-PSO showed better overall performance than HBF-SPSO. HBF-SPSO had better accuracy only on the 2 dimensional Griewank function. In terms of the number of function evaluations required to achieve an accuracy of 0.0001, HBF-SPSO was better than HBF-PSO only on the 2 dimensional Rosenbrock function. In terms of success rate, HBF-SPSO was better only on the 2 dimensional Rosenbrock and Griewank functions; on all other functions HBF-PSO was better.

Table 4-5 Accuracy, number of function evaluations and success rate of HBF-SPSO with 2% radius and with Deb & Goldberg's radius

| Function | HBF-SPSO (2% radius) accuracy (mean ± std err) | HBF-SPSO (2% radius) evals. (mean ± std err) | HBF-SPSO (2% radius) SR | HBF-SPSO (D&G radius) accuracy (mean ± std err) | HBF-SPSO (D&G radius) evals. (mean ± std err) | HBF-SPSO (D&G radius) SR |
|---|---|---|---|---|---|---|
| 2D F1 | 3.82E-05 ± 2.73E-05 | 2.32E+03 ± 5.72E+02 | 100% | 4.03E-05 ± 2.80E-05 | 2.10E+03 ± 6.82E+02 | 100% |
| 2D F2 | 3.77E-03 ± 1.68E-02 | 1.63E+04 ± 4.47E+03 | 52% | 4.02E-05 ± 2.98E-05 | 8.41E+03 ± 2.60E+03 | 100% |
| 2D F3 | 4.01E-02 ± 1.99E-01 | 8.13E+03 ± 5.14E+03 | 88% | 4.63E-05 ± 3.42E-05 | 6.31E+03 ± 1.66E+03 | 100% |
| 2D F4 | 5.33E-05 ± 2.80E-05 | 9.31E+03 ± 9.26E+02 | 100% | 5.95E-05 ± 2.77E-05 | 7.90E+03 ± 6.91E+02 | 100% |
| 2D F5 | 6.88E-03 ± 7.45E-03 | 1.76E+04 ± 4.43E+03 | 36% | 9.26E-04 ± 2.44E-03 | 1.16E+04 ± 4.43E+03 | 88% |
| 10D F1 | 6.97E+00 ± 2.86E+00 | 1.00E+05 ± 0.00E+00 | 0% | 5.67E-03 ± 9.07E-03 | 8.94E+04 ± 2.47E+04 | 16% |
| 10D F2 | 8.71E+05 ± 7.41E+05 | 1.00E+05 ± 0.00E+00 | 0% | 1.54E+02 ± 1.35E+02 | 1.00E+05 ± 0.00E+00 | 0% |
| 10D F3 | 4.54E+01 ± 7.05E+00 | 1.00E+05 ± 0.00E+00 | 0% | 9.70E+00 ± 4.47E+00 | 1.00E+05 ± 0.00E+00 | 0% |
| 10D F4 | 1.37E+01 ± 1.62E+00 | 1.00E+05 ± 0.00E+00 | 0% | 2.33E+00 ± 1.13E+00 | 1.00E+05 ± 0.00E+00 | 0% |
| 10D F5 | 2.44E+01 ± 7.61E+00 | 1.00E+05 ± 0.00E+00 | 0% | 3.44E-01 ± 2.27E-01 | 1.00E+05 ± 0.00E+00 | 0% |
| 30D F1 | 5.81E+01 ± 8.23E+00 | 3.00E+05 ± 0.00E+00 | 0% | 6.83E+00 ± 2.01E+00 | 3.00E+05 ± 0.00E+00 | 0% |
| 30D F2 | 3.11E+07 ± 9.15E+06 | 3.00E+05 ± 0.00E+00 | 0% | 1.07E+06 ± 6.14E+05 | 3.00E+05 ± 0.00E+00 | 0% |
| 30D F3 | 2.53E+02 ± 1.95E+01 | 3.00E+05 ± 0.00E+00 | 0% | 1.03E+02 ± 2.06E+01 | 3.00E+05 ± 0.00E+00 | 0% |
| 30D F4 | 1.80E+01 ± 4.72E-01 | 3.00E+05 ± 0.00E+00 | 0% | 1.14E+01 ± 1.24E+00 | 3.00E+05 ± 0.00E+00 | 0% |
| 30D F5 | 2.03E+02 ± 2.83E+01 | 3.00E+05 ± 0.00E+00 | 0% | 2.64E+01 ± 1.04E+01 | 3.00E+05 ± 0.00E+00 | 0% |

Accuracy, defined as the best error value relative to the global optimum, achieved by HBF-SPSO (2% radius) and HBF-SPSO (Deb & Goldberg's radius) on the 2, 10 and 30 dimensional versions of the Sphere (F1), Rosenbrock (F2), Rastrigin (F3), Ackley (F4) and Griewank (F5) functions. The error values are averages over 25 runs for each function. The total number of function evaluations allowed is 20000, 100000 and 300000 for the 2, 10 and 30 dimensional versions, respectively. Also shown for both algorithms are the number of function evaluations required to achieve an error value of 0.0001 and the success rate, defined as the fraction of runs in which the algorithm was able to find the global peak. Better performance is shown in boldface.

Since HBF-SPSO with the species radius calculated by Deb & Goldberg's method performed better than HBF-SPSO with the 2% radius, we were also interested in whether the same would hold for HBF-PSO, so we tested HBF-PSO with the radius calculated by Deb & Goldberg's method. The results are presented in Table 4-7. The performance of HBF-PSO deteriorated when the Deb & Goldberg radius was used.

4.2.2 Multiple Optima in Multimodal Environments

HBF-SPSO was also run on the test functions described in Section 3.2.1. Table 4-8 presents the accuracy obtained by HBF-SPSO. Accuracy is calculated by averaging the error of the particles closest to the optima and then averaging the results over 50 runs. The results of HBF-PSO, HBF-PSOn, SPSO and NichePSO are also presented in Table 4-8 to allow easy comparison. The reported accuracies of SPSO and NichePSO are taken from [Parrot, 2006] and [Brits, 2002b], respectively.

Table 4-6 Accuracy, number of function evaluations and success rate of HBF-PSO and HBF-SPSO with Deb & Goldberg's radius

| Function | HBF-PSO accuracy (mean ± std err) | HBF-PSO evals. (mean ± std err) | HBF-PSO SR | HBF-SPSO (Deb's radius) accuracy (mean ± std err) | HBF-SPSO (Deb's radius) evals. (mean ± std err) | HBF-SPSO (Deb's radius) SR |
|---|---|---|---|---|---|---|
| 2D F1 | 2.87E-06 ± 1.19E-07 | 6.96E+02 ± 3.21E+02 | 100% | 4.03E-05 ± 2.80E-05 | 2.10E+03 ± 6.82E+02 | 100% |
| 2D F2 | 1.76E-05 ± 3.80E-05 | 1.26E+04 ± 3.83E+03 | 92% | 4.02E-05 ± 2.98E-05 | 8.41E+03 ± 2.60E+03 | 100% |
| 2D F3 | 9.47E-07 ± 6.49E-07 | 4.10E+03 ± 1.27E+03 | 100% | 4.63E-05 ± 3.42E-05 | 6.31E+03 ± 1.66E+03 | 100% |
| 2D F4 | 1.65E-05 ± 1.93E-05 | 7.79E+03 ± 8.62E+02 | 100% | 5.95E-05 ± 2.77E-05 | 7.90E+03 ± 6.91E+02 | 100% |
| 2D F5 | 1.23E-03 ± 2.75E-03 | 6.55E+03 ± 7.68E+03 | 84% | 9.26E-04 ± 2.44E-03 | 1.16E+04 ± 4.43E+03 | 88% |
| 10D F1 | 3.06E-05 ± 2.19E-05 | 1.15E+04 ± 2.99E+03 | 100% | 5.67E-03 ± 9.07E-03 | 8.94E+04 ± 2.47E+04 | 16% |
| 10D F2 | 4.60E-01 ± 6.64E-01 | 1.00E+05 ± 0.00E+00 | 0% | 1.54E+02 ± 1.35E+02 | 1.00E+05 ± 0.00E+00 | 0% |
| 10D F3 | 2.31E+00 ± 9.82E-01 | 9.71E+04 ± 1.44E+04 | 4% | 9.70E+00 ± 4.47E+00 | 1.00E+05 ± 0.00E+00 | 0% |
| 10D F4 | 5.42E-05 ± 2.30E-05 | 3.06E+04 ± 9.66E+03 | 100% | 2.33E+00 ± 1.13E+00 | 1.00E+05 ± 0.00E+00 | 0% |
| 10D F5 | 7.82E-02 ± 4.23E-02 | 1.00E+05 ± 0.00E+00 | 0% | 3.44E-01 ± 2.27E-01 | 1.00E+05 ± 0.00E+00 | 0% |
| 30D F1 | 5.53E-05 ± 3.03E-05 | 7.32E+04 ± 2.06E+04 | 100% | 6.83E+00 ± 2.01E+00 | 3.00E+05 ± 0.00E+00 | 0% |
| 30D F2 | 1.11E+01 ± 9.06E+00 | 3.00E+05 ± 0.00E+00 | 0% | 1.07E+06 ± 6.14E+05 | 3.00E+05 ± 0.00E+00 | 0% |
| 30D F3 | 2.12E+01 ± 5.97E+00 | 3.00E+05 ± 0.00E+00 | 0% | 1.03E+02 ± 2.06E+01 | 3.00E+05 ± 0.00E+00 | 0% |
| 30D F4 | 1.98E+00 ± 7.41E-01 | 2.95E+05 ± 2.63E+04 | 4% | 1.14E+01 ± 1.24E+00 | 3.00E+05 ± 0.00E+00 | 0% |
| 30D F5 | 4.98E-02 ± 4.10E-02 | 2.61E+05 ± 8.07E+04 | 20% | 2.64E+01 ± 1.04E+01 | 3.00E+05 ± 0.00E+00 | 0% |

Accuracy, defined as the best error value relative to the global optimum, achieved by HBF-PSO and HBF-SPSO (Deb & Goldberg's radius) on the 2, 10 and 30 dimensional versions of the Sphere (F1), Rosenbrock (F2), Rastrigin (F3), Ackley (F4) and Griewank (F5) functions. The error values are averages over 25 runs for each function. The total number of function evaluations allowed is 20000, 100000 and 300000 for the 2, 10 and 30 dimensional versions, respectively. Also shown for both algorithms are the number of function evaluations required to achieve an error value of 0.0001 and the success rate, defined as the fraction of runs in which the algorithm was able to find the global peak. Better performance is shown in boldface.

From the results it is evident that HBF-SPSO in comparison to NichePSO was only able to show better accuracy for functions F7 and F9. However HBF-SPSO, when compared with HBF-PSO, HBF-PSOn and SPSO, was not able to exhibit better accuracy on any of the test functions.

4.2.3 Dynamic Optimization Experiments

We also tested the HBF-SPSO algorithm on dynamic optimization problems. The test functions, parameters and performance measures are the same as given in Sections 3.4.1 to 3.4.3. The offline errors for HBF-SPSO are given in Table 4-9, together with the results of HBF-PSO, HBF-PSOn and the modified PSO for easy comparison. The results indicate that although HBF-SPSO achieved better performance than the modified PSO on the dynamic problems with 1, 3, 5, 7 and 8 peaks, its performance was still worse than that of HBF-PSO.

Table 4-7 Accuracy, number of function evaluations and success rate of HBF-PSO with 2% radius and with Deb & Goldberg's radius

| Function | HBF-PSO (2% radius) accuracy (mean ± std err) | HBF-PSO (2% radius) evals. (mean ± std err) | HBF-PSO (2% radius) SR | HBF-PSO (Deb's radius) accuracy (mean ± std err) | HBF-PSO (Deb's radius) evals. (mean ± std err) | HBF-PSO (Deb's radius) SR |
|---|---|---|---|---|---|---|
| 2D F1 | 2.87E-06 ± 1.19E-07 | 6.96E+02 ± 3.21E+02 | 100% | 8.23E-06 ± 6.46E-06 | 2.06E+03 ± 7.00E+02 | 100% |
| 2D F2 | 1.76E-05 ± 3.80E-05 | 1.26E+04 ± 3.83E+03 | 92% | 3.61E-05 ± 2.02E-05 | 1.79E+04 ± 1.01E+04 | 100% |
| 2D F3 | 9.47E-07 ± 6.49E-07 | 4.10E+03 ± 1.27E+03 | 100% | 8.80E-06 ± 1.07E-05 | 6.16E+03 ± 1.71E+03 | 100% |
| 2D F4 | 1.65E-05 ± 1.93E-05 | 7.79E+03 ± 8.62E+02 | 100% | 2.27E-05 ± 1.38E-05 | 1.03E+04 ± 1.58E+03 | 100% |
| 2D F5 | 1.23E-03 ± 2.75E-03 | 6.55E+03 ± 7.68E+03 | 84% | 3.87E-03 ± 4.14E-03 | 1.87E+04 ± 7.53E+03 | 52% |
| 10D F1 | 3.06E-05 ± 2.19E-05 | 1.15E+04 ± 2.99E+03 | 100% | 2.28E-05 ± 1.57E-05 | 1.49E+04 ± 1.50E+03 | 100% |
| 10D F2 | 4.60E-01 ± 6.64E-01 | 1.00E+05 ± 0.00E+00 | 0% | 5.44E+00 ± 1.77E+01 | 1.00E+05 ± 0.00E+00 | 0% |
| 10D F3 | 2.31E+00 ± 9.82E-01 | 9.71E+04 ± 1.44E+04 | 4% | 1.04E+01 ± 3.42E+00 | 1.00E+05 ± 0.00E+00 | 0% |
| 10D F4 | 5.42E-05 ± 2.30E-05 | 3.06E+04 ± 9.66E+03 | 100% | 7.74E-01 ± 1.13E+00 | 6.39E+04 ± 3.30E+04 | 56% |
| 10D F5 | 7.82E-02 ± 4.23E-02 | 1.00E+05 ± 0.00E+00 | 0% | 9.84E-02 ± 6.32E-02 | 1.00E+05 ± 0.00E+00 | 0% |
| 30D F1 | 5.53E-05 ± 3.03E-05 | 7.32E+04 ± 2.06E+04 | 100% | 5.79E-05 ± 2.61E-05 | 1.05E+05 ± 2.06E+04 | 100% |
| 30D F2 | 1.11E+01 ± 9.06E+00 | 3.00E+05 ± 0.00E+00 | 0% | 4.21E+01 ± 3.92E+01 | 3.00E+05 ± 0.00E+00 | 0% |
| 30D F3 | 2.12E+01 ± 5.97E+00 | 3.00E+05 ± 0.00E+00 | 0% | 1.05E+02 ± 2.65E+01 | 3.00E+05 ± 0.00E+00 | 0% |
| 30D F4 | 1.98E+00 ± 7.41E-01 | 2.95E+05 ± 2.63E+04 | 4% | 1.26E+01 ± 3.01E+00 | 3.00E+05 ± 0.00E+00 | 0% |
| 30D F5 | 4.98E-02 ± 4.10E-02 | 2.61E+05 ± 8.07E+04 | 20% | 2.10E-01 ± 2.20E-01 | 2.87E+05 ± 4.67E+04 | 8% |

Accuracy, defined as the best error value relative to the global optimum, achieved by HBF-PSO (2% radius) and HBF-PSO (Deb & Goldberg's radius) on the 2, 10 and 30 dimensional versions of the Sphere (F1), Rosenbrock (F2), Rastrigin (F3), Ackley (F4) and Griewank (F5) functions. The error values are averages over 25 runs for each function. The total number of function evaluations allowed is 20000, 100000 and 300000 for the 2, 10 and 30 dimensional versions, respectively. Also shown for both algorithms are the number of function evaluations required to achieve an error value of 0.0001 and the success rate, defined as the fraction of runs in which the algorithm was able to find the global peak. Better performance is shown in boldface.

Table 4-8 Accuracy of HBF-SPSO, HBF-PSO, HBF-PSOn, SPSO and NichePSO

| Function | HBF-PSO | HBF-PSOn | HBF-SPSO | SPSO [Parrot, 2006] | NichePSO [Brits, 2002b] |
|---|---|---|---|---|---|
| F6 | 3.67E-15 ± 6.75E-15 | 3.73E-12 ± 7.14E-12 | 2.72E-02 ± 7.21E-02 | 0.00 ± 0.00 | 7.68E-05 ± 3.11E-05 |
| F7 | 0.00 ± 0.00 | 0.00 ± 0.00 | 1.03E-02 ± 2.39E-02 | 4.00E-17 ± 2.26E-17 | 9.12E-02 ± 9.09E-03 |
| F8 | 1.02E-15 ± 1.94E-15 | 1.66E-15 ± 3.12E-15 | 5.17E-02 ± 8.06E-02 | 3.20E-14 ± 3.20E-14 | 5.95E-06 ± 6.87E-06 |
| F9 | 1.45E-11 ± 1.25E-17 | 1.45E-11 ± 8.53E-18 | 7.77E-03 ± 1.71E-02 | 1.72E-07 ± 0.00 | 8.07E-02 ± 9.45E-03 |
| F10 | 4.81E-31 ± 1.56E-31 | 4.06E-31 ± 2.32E-32 | 3.15E-02 ± 8.20E-02 | 2.19E-09 ± 2.19E-09 | 4.78E-06 ± 1.46E-06 |

Accuracy, defined as the average of the best error values relative to the global optima, achieved by HBF-SPSO on the five test functions defined in Section 3.2.1. The error values are averages over 50 runs for each function. The total number of iterations allowed is 2000. These values are compared with the HBF-PSO and HBF-PSOn results from Table 4-2, SPSO results quoted from [Parrot, 2006], and NichePSO results quoted from [Brits, 2002b] on the same suite of test functions. Better performance is shown in boldface.

Table 4-9 Offline error after 90 environment changes (averaged over 50 runs)

| Peaks | HBF-PSO | HBF-PSOn | HBF-SPSO | Modified PSO |
|---|---|---|---|---|
| 1 | 0.386771 ± 0.093619 | 0.41096 ± 0.097164 | 0.402616 ± 0.097629 | 0.410522 ± 0.093723 |
| 2 | 0.588506 ± 0.116014 | 0.616084 ± 0.145435 | 0.628233 ± 0.128867 | 0.601038 ± 0.148958 |
| 3 | 0.721849 ± 0.119004 | 0.739823 ± 0.134723 | 0.758156 ± 0.120164 | 0.761835 ± 0.171065 |
| 4 | 0.896369 ± 0.207682 | 0.974574 ± 0.356381 | 0.962625 ± 0.202163 | 0.940301 ± 0.384956 |
| 5 | 1.074916 ± 0.274183 | 1.17242 ± 0.269992 | 1.138752 ± 0.248472 | 1.231222 ± 0.255825 |
| 6 | 1.197776 ± 0.238554 | 1.268157 ± 0.234895 | 1.321595 ± 0.394764 | 1.246377 ± 0.310335 |
| 7 | 1.364812 ± 0.270311 | 1.43168 ± 0.319109 | 1.398601 ± 0.2976 | 1.415025 ± 0.282955 |
| 8 | 1.463505 ± 0.310638 | 1.601541 ± 0.298421 | 1.601541 ± 0.298421 | 1.625191 ± 0.375529 |
| 9 | 1.610298 ± 0.268932 | 1.678506 ± 0.293537 | 1.688235 ± 0.294861 | 1.686908 ± 0.32827 |
| 10 | 1.674262 ± 0.304011 | 1.776964 ± 0.413589 | 1.862164 ± 0.452827 | 1.723074 ± 0.291545 |

In this chapter we presented our experiments for finding a way to self-adjust the swarm radius. In the next chapter we focus on our experiments for evolving separate velocity update equations for each particle.
