# Harmony Search Algorithm Meta Heuristic Methods Computer Science Essay


Meta-heuristic techniques are non-deterministic algorithms based on strategies that guide the search process through the design domain; in other words, a meta-heuristic can be described as a general algorithmic framework. However, there is no commonly agreed definition of 'meta-heuristic', because the term covers a wide variety of techniques and concepts. In recent years the word has come to refer to all modern higher-level algorithms. Rather than moving in the descending direction of the gradient vector of the objective function, meta-heuristic methods make random moves. Meta-heuristic algorithms are now widely applied to combinatorial engineering optimization problems, and the literature indicates that they are often preferred over traditional mathematical programming techniques for engineering optimization. Well-known meta-heuristics include the genetic algorithm (GA), tabu search (TS), simulated annealing (SA), evolution strategies (ES), ant colony optimization (ACO), particle swarm optimization (PSO), the bees algorithm (BA), the firefly algorithm (FA), and of course the harmony search algorithm (HS). Some of these algorithms are discussed in the following sections.

The harmony search algorithm, presented by Geem, Kim and Loganathan [30], is one of the most recent additions to the meta-heuristic optimization methods. It translates the music improvisation of a musician into a numerical approach for general optimization problems. The harmony search algorithm is effective in engineering optimization because it requires the adjustment of only a few parameters, consists of simple steps, and has a fast convergence rate. For these reasons, the harmony search algorithm is adopted in this thesis for the solution of the optimum detailed reinforced concrete design problem.

## 2.1.1 The Genetic algorithm

The genetic algorithm (GA) was developed by Holland [31] in 1975. The GA is a search and optimization method based on natural selection and the principles of genetics. These principles are derived from biology and rely on Darwin's theory of survival of the fittest. The specific mechanics of the algorithm use the language of microbiology and mimic genetic operations. The basic idea of a genetic algorithm is to generate a new set of designs (a population) from the current set such that the average fitness of the population improves. The process continues until the number of iterations exceeds a specified limit. Three genetic operators, namely reproduction, crossover and mutation, are used to accomplish this task. The main model of the GA is given in Figure 2.1.

Figure 2.1 The main model of Genetic Algorithm

Genetic algorithms have been studied, experimented with and applied in many engineering fields, such as optimal control, job scheduling, transportation problems, pattern recognition and machine learning [32, 33]. Genetic algorithms are highly successful in solving both unconstrained and constrained optimization problems. GAs not only provide an alternative way to solve a problem; they also outperform traditional methods on many problems. Further variants of GAs tailored to very specific tasks can be expected in the future [34].
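To make the three operators concrete, a minimal real-coded GA might be sketched as follows. The population size, operator rates, and the toy fitness function are illustrative assumptions, not values taken from the works cited above:

```python
import random

def genetic_algorithm(fitness, n_vars, pop_size=40, generations=100,
                      crossover_rate=0.9, mutation_rate=0.05):
    """Minimal real-coded GA: tournament selection (reproduction),
    one-point crossover, and uniform mutation. Maximizes `fitness`
    over the unit hypercube [0, 1]^n_vars."""
    pop = [[random.random() for _ in range(n_vars)] for _ in range(pop_size)]

    def select():
        # Reproduction: binary tournament, the fitter of two random designs wins
        a, b = random.sample(pop, 2)
        return a if fitness(a) > fitness(b) else b

    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = select()[:], select()[:]
            if random.random() < crossover_rate and n_vars > 1:
                # Crossover: exchange the tails of the two parents
                cut = random.randrange(1, n_vars)
                p1[cut:], p2[cut:] = p2[cut:], p1[cut:]
            for child in (p1, p2):
                # Mutation: occasionally replace a gene with a fresh value
                for i in range(n_vars):
                    if random.random() < mutation_rate:
                        child[i] = random.random()
            new_pop.extend([p1, p2])
        pop = new_pop[:pop_size]
    return max(pop, key=fitness)

# Toy example: maximize the sum of the genes (optimum: all genes near 1)
best = genetic_algorithm(sum, n_vars=5)
```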

## 2.1.2 Evolutionary Strategies

Evolution Strategies (ES) were developed by Rechenberg [35] and Schwefel [36] at the Technical University of Berlin and have been studied extensively in Europe. The approach was created in the early 1960s and developed further through the 1970s and later. An Evolution Strategy is an optimization technique based on the ideas of adaptation and evolution. Evolution Strategies are in many ways similar to Genetic Algorithms, but they place little weight on crossover: their main genetic operators are mutation, selection, and intermediate recombination (an element-wise average of parents). The important working mechanisms of Evolution Strategies are mutation and the adaptation of the mutation rates. The representation used is a fixed-length real-valued vector; as with the bit strings of genetic algorithms, each position in the vector corresponds to a feature of the individual. However, the features are considered to be behavioural rather than structural, so arbitrary non-linear interactions between features during evaluation are expected, which forces a more holistic approach to evolving solutions [37]. Evolution Strategies have been applied to a wide range of optimization problems [38, 39].
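The two core mechanisms named above, mutation and adaptation of the mutation rate, can be illustrated with the classic (1+1)-ES and the 1/5 success rule. The parameter values below are illustrative assumptions only:

```python
import random

def one_plus_one_es(f, x0, sigma=0.5, iterations=500):
    """(1+1)-ES sketch: Gaussian mutation of every component, survivor
    selection, and step-size adaptation via the 1/5 success rule.
    Minimizes f."""
    x, fx = x0[:], f(x0)
    successes = 0
    for t in range(1, iterations + 1):
        # Mutation: perturb the parent with Gaussian noise of spread sigma
        y = [xi + random.gauss(0.0, sigma) for xi in x]
        fy = f(y)
        if fy < fx:
            x, fx = y, fy          # selection: the child replaces the parent
            successes += 1
        if t % 20 == 0:
            # 1/5 rule: widen the step if more than 1/5 of recent mutations
            # succeeded, narrow it otherwise
            sigma *= 1.22 if successes / 20 > 0.2 else 0.82
            successes = 0
    return x, fx

# Toy example: minimize the sphere function sum(xi^2)
best_x, best_f = one_plus_one_es(lambda v: sum(xi * xi for xi in v), [5.0, -3.0])
```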

## 2.1.3 Simulated Annealing

Simulated annealing (SA) is one of the modern meta-heuristic algorithms; it was first developed by Kirkpatrick et al. [40] in 1983. The concept is based on the manner in which liquids freeze or metals recrystallize during annealing: a melt, initially at high temperature and disordered, is cooled slowly so that the system is at all times approximately in thermodynamic equilibrium. As cooling proceeds, the system becomes more ordered and approaches a "frozen" ground state at T = 0; the process can thus be thought of as an adiabatic approach to the lowest-energy state. In the optimization analogy, the current state of the thermodynamic system corresponds to the current solution of the combinatorial problem, the energy equation of the thermodynamic system corresponds to the objective function, and the ground state corresponds to the global minimum.

The optimization process typically starts from an initial guess at a high "temperature". A random move is always accepted if it lowers the energy, i.e. if it improves the objective function value for minimization; a move that raises the energy may still be accepted, with a probability that decreases as the temperature is lowered. This occasional acceptance of worse solutions is what allows SA to escape local minima.
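The acceptance rule can be sketched as follows. The cooling schedule, step size, and toy objective are illustrative assumptions, not prescriptions from the cited work:

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.995, steps=2000, step_size=0.5):
    """Minimal SA sketch: Gaussian neighbourhood moves, the Metropolis
    acceptance rule, and a geometric cooling schedule. Minimizes f."""
    x, fx = x0[:], f(x0)
    best, fbest = x[:], fx
    t = t0
    for _ in range(steps):
        y = [xi + random.gauss(0.0, step_size) for xi in x]
        fy = f(y)
        delta = fy - fx
        # Accept improvements always; accept uphill moves with prob exp(-delta/T)
        if delta < 0 or random.random() < math.exp(-delta / max(t, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x[:], fx   # keep track of the best state visited
        t *= cooling                     # slowly lower the temperature
    return best, fbest

# Toy example: minimize the sphere function from a poor starting point
best_x, best_f = simulated_annealing(lambda v: sum(xi * xi for xi in v), [4.0, 4.0])
```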

## 2.1.4 Ant Colony Optimization

The ant colony optimization (ACO) algorithm, formulated by Dorigo et al. [41, 42], is another population-based meta-heuristic. It is based on the foraging behaviour of social ants. Ants find the shortest path to food by laying a pheromone (chemical) trail as they walk; other ants then follow the pheromone trail to the food. Ants that happen to pick the shorter path build up a strong pheromone trail faster than those choosing a longer path. Since a stronger trail attracts ants more effectively, more and more ants choose the shorter path, which further reinforces its pheromone level, until eventually the whole colony follows the shortest path to the food [43].

The ACO is used for discrete routing and scheduling optimization problems, because such problems closely resemble the search for the shortest path to a food source. The algorithm is exceptionally successful in combinatorial optimization, and it has been applied to many combinatorial and engineering optimization problems with successful results [44].
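The positive-feedback mechanism described above can be illustrated on the classic two-path ("double bridge") setting. The ant count, evaporation rate, and path lengths below are arbitrary illustrative values:

```python
import random

def double_bridge(ants=50, iterations=50, evaporation=0.5, lengths=(1.0, 2.0)):
    """Toy ACO on the classic two-path 'double bridge': each ant picks a
    path with probability proportional to its pheromone level, then
    deposits pheromone inversely proportional to the path length.
    Returns the final pheromone level of each path."""
    pheromone = [1.0, 1.0]
    for _ in range(iterations):
        deposits = [0.0, 0.0]
        for _ in range(ants):
            total = pheromone[0] + pheromone[1]
            path = 0 if random.random() < pheromone[0] / total else 1
            deposits[path] += 1.0 / lengths[path]   # shorter path => more pheromone
        # Evaporation plus the new deposits close the feedback loop
        pheromone = [(1 - evaporation) * p + d for p, d in zip(pheromone, deposits)]
    return pheromone

trails = double_bridge()
# The shorter path (index 0) typically ends up with far more pheromone.
```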

## 2.1.5 Particle Swarm Optimization

The particle swarm optimization (PSO) algorithm was formulated by Eberhart and Kennedy [45] in 1995, inspired by the social behaviour of animals in nature, such as bird flocking and fish schooling. Unlike genetic algorithms and simulated annealing, PSO has no evolution operators; it searches the solution space by adjusting the trajectories of multiple individual agents, called particles. PSO begins with a random population matrix whose rows are the particles; the particles hold real-valued (not binary-coded) variables. Each particle moves about the function surface with a velocity, and the particles update their positions and velocities based on the local and global best solutions:

v_i^(k+1) = v_i^k + c_1 r_1 (p_i - x_i^k) + c_2 r_2 (g - x_i^k)    (2.1)

x_i^(k+1) = x_i^k + v_i^(k+1)    (2.2)

where v_i^k and x_i^k are the velocity and position of particle i at iteration k, p_i is the best position found so far by particle i (the local best), g is the best position found by the whole swarm (the global best), r_1 and r_2 are uniform random numbers in [0, 1], and c_1 and c_2 are acceleration constants.

The PSO algorithm updates the velocity vector of each particle and then updates the particle positions using the updated velocities. Velocity updates are influenced by both the best global and the best local solutions. The particle velocity is reminiscent of local minimizers that use derivative information, because velocity is the derivative of position. The advantages of PSO are that there are few parameters to adjust and that the algorithm is easy to implement. Particle swarm optimization has been applied to a wide range of engineering optimization problems [46], and there are many variants of PSO in the literature [47].
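A minimal PSO sketch along these lines might look as follows. The inertia weight w and acceleration constants c1 and c2 are common illustrative choices and are not taken from [45]:

```python
import random

def pso(f, n_vars, n_particles=30, iterations=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO sketch with an inertia weight w: each particle is pulled
    toward its own best position (local best) and the swarm's best position
    (global best). Minimizes f over [-5, 5]^n_vars."""
    x = [[random.uniform(-5, 5) for _ in range(n_vars)] for _ in range(n_particles)]
    v = [[0.0] * n_vars for _ in range(n_particles)]
    pbest = [p[:] for p in x]
    pbest_f = [f(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iterations):
        for i in range(n_particles):
            for d in range(n_vars):
                r1, r2 = random.random(), random.random()
                # Velocity update (cf. Eq. 2.1): inertia + local and global pulls
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]       # position update (cf. Eq. 2.2)
            fx = f(x[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = x[i][:], fx
    return gbest, gbest_f

# Toy example: minimize the sphere function in three dimensions
best_x, best_f = pso(lambda z: sum(zi * zi for zi in z), n_vars=3)
```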

## 2.1.6 Bee Colony Optimization

The bees algorithm is a population-based optimization algorithm developed by Pham et al. in 2005 [48]. It is inspired by the natural foraging behaviour of honey bees. A colony of honey bees can extend itself over a large area and in multiple directions to exploit a large number of food sources at the same time [49]. A colony feeds by spreading its foragers to good fields: in practice, a large number of bees visit and collect nectar or pollen from flower patches with plentiful food, instead of visiting patches with little nectar or pollen. This behaviour allows the colony to feed with less effort.

At the beginning of the foraging process, the scout bees of a colony are sent out to search for rich flower patches, moving from one patch to another at random. The colony keeps a percentage of its bees as scouts and continues this investigation throughout the harvesting season. After scout bees return to the hive, they go to the dance floor to perform a dance known as the waggle dance. Through this dance, bees that were successful in finding a source share with their hive mates the direction of the flower patches yielding nectar and pollen, the distance of these patches from the hive, and a quality rating of the patches. The waggle dance thus helps successful foragers recruit other bees in the colony to good collecting locations; in other words, it allows the colony to compare the relative rates of different source patches, according to both the amount of nectar or pollen in a patch and the amount of energy needed to collect it.

After finishing its waggle dance, the dancer (i.e. the scout bee) returns to the flower patch together with follower bees that were waiting inside the hive. More follower bees are sent to the more promising patches, so that food is collected quickly and efficiently. The bees decide upon their next waggle dance, performed when they return to the hive, by monitoring the food level of the sources each time: if a patch is still good enough as a food source, it is advertised again in the waggle dance so that more bees are recruited to it. The pseudo-code of the Bees Algorithm in its simplest form is given in Figure 2.2.

```
1   Initialize population with random solutions
2   Evaluate fitness of the population
    While (stopping criterion not met)
3       Form new bee population (select elite bees) and select
        sites for neighbourhood search
4       Recruit bees for selected sites and evaluate their fitness
5       Select the fittest bee from each patch
6       Assign remaining bees to search randomly and evaluate
        their fitness
    End While
```

Figure 2.2 The pseudo-code for the Bees Algorithm.
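The pseudo-code above might be sketched in Python as follows. The colony sizes, patch counts, neighbourhood radius, and toy objective are illustrative assumptions:

```python
import random

def bees_algorithm(f, n_vars, scouts=30, sites=10, elite=5,
                   bees_elite=7, bees_other=3, radius=0.5, iterations=100):
    """Sketch of the Bees Algorithm: rank the scout population, recruit
    more foragers to the elite patches than to the other selected patches,
    keep the fittest bee of each patch, and let the remaining bees scout
    at random. Minimizes f over [-5, 5]^n_vars."""
    def random_solution():
        return [random.uniform(-5, 5) for _ in range(n_vars)]

    def neighbour(site):
        # Local (flower-patch) search within +/- radius of the site
        return [min(5.0, max(-5.0, xi + random.uniform(-radius, radius)))
                for xi in site]

    pop = [random_solution() for _ in range(scouts)]              # step 1
    for _ in range(iterations):
        pop.sort(key=f)                                           # step 2: rank by fitness
        new_pop = []
        for rank, site in enumerate(pop[:sites]):                 # step 3: selected sites
            n_bees = bees_elite if rank < elite else bees_other
            recruits = [neighbour(site) for _ in range(n_bees)]   # step 4
            new_pop.append(min(recruits + [site], key=f))         # step 5: fittest per patch
        while len(new_pop) < scouts:                              # step 6: random scouts
            new_pop.append(random_solution())
        pop = new_pop
    return min(pop, key=f)

# Toy example: minimize the sphere function in two dimensions
best = bees_algorithm(lambda z: sum(zi * zi for zi in z), n_vars=2)
```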

## 2.2 Harmony Search Method

The Harmony Search (HS) method is an emerging meta-heuristic optimization algorithm proposed by Geem, Kim, and Loganathan [30]. The algorithm is inspired by the underlying principles of musicians' improvisation toward a perfect state of harmony: it mimics the way music players improvise in search of a better harmony. When musicians compose a harmony, they try various possible combinations of the musical pitches stored in their memory, and from these harmony memories they can find the most pleasing harmony.

In the adaptation of this musical improvisation process to the solution of optimization problems, each musician corresponds to a decision variable and the possible notes correspond to the possible values of that variable. In music improvisation, the players sound any pitch within the possible range, and together these sounds make one harmony vector. If all the pitches make a good harmony, the vector is stored in each player's memory, and the possibility of making a better harmony next time is increased. Similarly, in optimization problems, each decision variable initially takes any value within its possible range, and together these values make one solution vector. If the values of the decision variables make a good solution, the vector is stored in the memory, and the possibility of making a good solution next time is likewise increased.

For example, if the first musician plays {La} while the second and third play {Fa} and {Do}, whether from their harmony memories or at random, a new harmony {La, Fa, Do} is created. If this new harmony is better than the existing worst harmony in the harmony memory (HM), the new harmony is included in the HM and the worst harmony is excluded from it. This procedure is repeated until a fantastic harmony is found. In engineering optimization problems, the musicians are replaced by decision variables, and the musicians' pitches by the variables' selected values. If the first design variable takes {100mm} out of {100mm, 200mm, 300mm}, the second {500mm} out of {250mm, 500mm, 750mm}, and the third {600mm} out of {200mm, 400mm, 600mm, 800mm}, the values {100mm, 500mm, 600mm} make a new solution vector. If this new vector is better than the existing worst vector in the HM, it is included in the HM and the worst vector is excluded. This iterative procedure is repeated until a termination criterion is satisfied.
The analogy between music improvisation and engineering optimization is illustrated in Figure 2.3.

The optimization procedure of the Harmony Search meta-heuristic algorithm consists of Steps 1-5, as follows:

Step 1. The optimization problem and the algorithm parameters are initialized.

Step 2. The harmony memory matrix (HM) is initialized.

Step 3. A new harmony is improvised from the HM.

Step 4. The harmony memory matrix is updated.

Step 5. Steps 3 and 4 are repeated until the termination criterion is satisfied.

These steps are described in more detail in the following:

Step 1- The discrete optimization problem is specified as follows:

Minimize f(x) subject to x_i ∈ X_i, i = 1, 2, …, N

(Figure 2.3 illustrates this correspondence: the musicians' pitch sets {Sol, La, Si}, {Re, Mi, Fa} and {Do, Re, Mi} play the role of the design variable pools X1 = {100, 200, 300}, X2 = {250, 500, 750} and X3 = {200, 400, 600, 800}, and the improvised harmony {La, Fa, Do} corresponds to evaluating F(100, 500, 600).)

where X_i is the set of possible candidate values for decision variable i. A design variable pool is constructed for each design variable. The number of solution vectors in the harmony memory matrix (the harmony memory size, HMS), the harmony memory considering rate (HMCR), the pitch adjusting rate (PAR), and the termination criterion (maximum number of searches) are also decided in this step.

Figure 2.3 The analogy between music improvisation and engineering optimization.

Step 2- The "harmony memory" (HM) matrix is filled with randomly generated solution vectors using the values in the design pools and sorted by the value of the objective function, f(x). The harmony memory is filled with as many solution vectors as the harmony memory size (HMS). The harmony memory matrix has the following form:
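Written out in its standard form, with x_i^j denoting the value of design variable i in the j-th stored solution, the HM is:

```latex
\mathbf{HM} =
\begin{bmatrix}
x_1^{1}   & x_2^{1}   & \cdots & x_N^{1}   \\
x_1^{2}   & x_2^{2}   & \cdots & x_N^{2}   \\
\vdots    & \vdots    & \ddots & \vdots    \\
x_1^{HMS} & x_2^{HMS} & \cdots & x_N^{HMS}
\end{bmatrix}
\begin{matrix}
f(\mathbf{x}^{1}) \\ f(\mathbf{x}^{2}) \\ \vdots \\ f(\mathbf{x}^{HMS})
\end{matrix}
```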

Each row of the harmony memory matrix contains the values of the design variables of the optimization problem and represents one feasible solution vector. The design variables are randomly selected from the design variable pool of that particular variable. Hence, this memory matrix has N columns (N being the total number of design variables) and HMS rows (HMS being the harmony memory size). The candidate solutions are sorted such that the objective function value corresponding to the first solution vector (first row) is the minimum (or maximum).

Step 3- A new harmony vector, x′ = (x′_1, x′_2, x′_3, …, x′_N), is improvised. Three rules are used to choose a value for each decision variable: harmony memory consideration (with rate HMCR), pitch adjustment (with rate PAR), and random selection.

In memory consideration, the value of the first decision variable, x′_1, is chosen from the values stored in the first column of the HM, {x_1^1, x_1^2, …, x_1^(HMS)}, with probability HMCR, which varies between 0 and 1. The values of the other decision variables, x′_i, are chosen in the same manner. There is still a chance, with probability (1 − HMCR), that the new value is instead chosen at random from the entire set of possible values in the design variable pool. That is,

x′_i ← x′_i ∈ {x_i^1, x_i^2, …, x_i^(HMS)}   with probability HMCR,
x′_i ← x′_i ∈ X_i                            with probability (1 − HMCR).

Any component of the new harmony vector whose value was chosen from the HM is then examined to determine whether it should be pitch-adjusted. This operation uses the pitch adjusting rate (PAR), which sets the probability of a pitch-adjustment decision as follows:

Pitch adjusting decision for x′_i ← Yes with probability PAR, No with probability (1 − PAR).

If the pitch adjustment decision for x′_i is Yes, and x′_i corresponds to the kth element x_i(k) of the candidate set X_i, the pitch-adjusted value becomes

x′_i ← x_i(k ± 1)

where the neighbouring index is obtained by adding −1 or +1, each chosen with the same probability.

Step 4- After the new values of all design variables have been selected, the objective function value of the new harmony vector is calculated. If the new harmony vector is better than the worst harmony in the HM in terms of the objective function value, the new harmony is included in the HM and the existing worst harmony is excluded from it. The HM is then re-sorted by objective function value.

Step 5- The computations are terminated when the termination criterion is satisfied. If not, Steps 3 and 4 are repeated. The termination criterion is selected large enough such that within this number of cycles no further improvement is observed in the objective function value.
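Steps 1-5 can be sketched in code. The following minimal discrete harmony search is an illustration only; the parameter values and the toy objective are arbitrary choices, not those used later in this thesis:

```python
import random

def harmony_search(f, pools, hms=20, hmcr=0.9, par=0.3, max_iter=5000):
    """Minimal discrete Harmony Search following Steps 1-5. `pools` holds
    one sorted list of candidate values per decision variable; minimizes f."""
    # Step 2: fill the harmony memory with random vectors and sort by f
    hm = [[random.choice(pool) for pool in pools] for _ in range(hms)]
    hm.sort(key=f)
    for _ in range(max_iter):
        new = []
        for i, pool in enumerate(pools):                 # Step 3: improvise
            if random.random() < hmcr:                   # memory consideration
                value = random.choice(hm)[i]
                if random.random() < par:                # pitch adjustment
                    k = pool.index(value) + random.choice((-1, 1))
                    value = pool[min(len(pool) - 1, max(0, k))]
            else:                                        # random selection
                value = random.choice(pool)
            new.append(value)
        if f(new) < f(hm[-1]):                           # Step 4: update the HM
            hm[-1] = new
            hm.sort(key=f)
    return hm[0]                                         # best harmony found

# Toy example: choose one value per variable to minimize the sum of squares
pools = [list(range(-10, 11)) for _ in range(4)]
best = harmony_search(lambda v: sum(x * x for x in v), pools)
```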

The aforementioned steps of the harmony search method can be applied to structural optimum design problems as described in the following. Firstly, the harmony search method parameters (HMS, HMCR, PAR, maximum iteration number) are selected. The HS algorithm initiates the design process by selecting random values for the design variables to fill the harmony memory matrix with feasible design vectors. After the harmony memory matrix is filled, new design vectors are created according to the HMCR and PAR values selected earlier. If a new design vector satisfies the constraints and its objective function value is better than that of the worst harmony in the HM, the HM matrix is updated with the new design vector and re-sorted in ascending order. This process is repeated until the termination criterion is satisfied. At the end of the process, the first row of the HM matrix gives the optimum design vector for the initially selected parameters of the harmony search method. The flowchart of this process is demonstrated in Figure 2.4 and can be summarized as follows:

START

Step 1. Initialize the problem (specify each design variable, the pool of its possible values, and the constraints: minimize or maximize f(x) subject to the constraints) and the HS parameters (harmony memory size HMS, harmony memory considering rate HMCR, pitch adjusting rate PAR, and the termination criterion, i.e. the maximum number of searches).

Step 2. Generate the initial harmony memory with feasible solution vectors (randomly selected, satisfying the constraints, and sorted by f(x)).

Step 3. Improvise a new harmony (solution vector) according to the three rules: HMCR (memory considering), PAR (pitch adjusting), and random selection.

Step 4. Evaluate the new harmony. If it satisfies the constraints and is better than the worst harmony in the HM, update the HM: eliminate the worst harmony, insert the new feasible harmony, and sort the harmonies in the HM by f(x). Otherwise return to Step 3.

Step 5. Check the termination criterion: if the maximum number of searches has been reached, STOP; otherwise return to Step 3.

Figure 2.4 Harmony search algorithm optimization procedure.

## 2.2.1 Case Study 1: Constrained Function Optimization

The function given in Eq. (2.3), with five design variables and six inequality constraints, has been solved by several researchers. The optimal values of the variables and of the function were obtained with the harmony search algorithm by Lee and Geem [50], and these results are compared in Table 2.1 with the earlier results of Deb [51], Shi and Eberhart [52], and Coello [53].

(2.3)

subject to

(2.4)

(2.5)

(2.6)

(2.7)

(2.8)

(2.9)

(2.10)

Coello [53] and Shi and Eberhart [52] obtained best objective function values of f(x) = -31020.859 and f(x) = -31025.561, respectively, using a GA-based method and a modified particle swarm optimization algorithm. Deb [51] solved this problem using an efficient constraint handling method for the GA and reported an objective function value of f(x) = -30665.500, which corresponds to the best known optimal solution x* = {78.0, 33.0, 29.995, 45.0, 36.776}. The harmony search based method found this best known solution vector, with the same objective function value, after 65,000 iterations; the results are given in Table 2.1.

Table 2.1. Optimal results of the constrained function problem.

| Method | x1 | x2 | x3 | x4 | x5 | Obj. function value |
| --- | --- | --- | --- | --- | --- | --- |
| Deb [51] | unavailable | | | | | -30665.5 |
| Coello [53] | 78.0495 | 33.007 | 27.081 | 45 | 44.94 | -31020.9 |
| Shi and Eberhart [52] | 78 | 33 | 27.07099 | 45 | 44.969 | -31025.6 |
| Lee and Geem [50] | 78 | 33 | 29.995 | 45 | 36.776 | -30665.5 |
| Optimal sol. | 78 | 33 | 29.995 | 45 | 36.775 | -30665.5 |

## 2.2.1.2 Case Study 2: Pressure Vessel Design

Figure 2.5 Schematic of pressure vessel.

The pressure vessel design problem (Figure 2.5) was previously proposed and analyzed by Sandgren [54]; the objective is to minimize the total cost of the material, forming and welding of a cylindrical vessel, and the problem was later also solved by Wu and Chow [55]. There are four design variables: x1 (Ts, shell thickness), x2 (Th, spherical head thickness), x3 (R, radius of cylindrical shell), and x4 (L, shell length). The design variables x1 (Ts) and x2 (Th) are integer multiples of 0.0625 in., in accordance with the available thicknesses of rolled steel plates, while x3 (R) and x4 (L) are continuous, with 40 ≤ R ≤ 80 in. and 20 ≤ L ≤ 60 in., respectively. The optimization problem is expressed mathematically as:

(2.11)

subject to

(2.12)

(2.13)

(2.14)

(2.15)

(2.16)

(2.17)

Sandgren [54] found an optimal cost of $7980.894 for this optimization problem using the branch and bound method. Wu and Chow used a GA-based approach and obtained a minimum cost of $7207.494. Lee and Geem [50] achieved a best solution vector of x = {1.125, 0.625, 58.2789, 43.7549} with a minimum cost of $7198.433 using the harmony search algorithm, without violating any constraint. The result obtained with the HS algorithm is thus better than the earlier results for this problem; the optimal results are compared in Table 2.2.

Table 2.2 Optimal results for pressure vessel design.

| Method | Ts (x1) | Th (x2) | R (x3) | L (x4) | Cost ($) |
| --- | --- | --- | --- | --- | --- |
| Sandgren [54] | 1.125 | 0.625 | 48.97 | 106.72 | 7980.894 |
| Wu and Chow [55] | 1.125 | 0.625 | 58.1978 | 44.2930 | 7207.494 |
| Lee and Geem [50] | 1.125 | 0.625 | 58.2789 | 43.7549 | 7198.433 |

## 2.2.1.3 Case Study 3: Welded Beam Design

The goal in the optimum design of the welded beam shown in Figure 2.6 is to minimize the fabricating cost of the beam, subject to constraints on the shear stress, the bending stress in the beam, the buckling load on the bar, the end deflection of the beam, and side constraints. The thickness of the weld (h = x1), the length of the welded joint (l = x2), the width of the beam (t = x3), and the thickness of the beam (b = x4) are selected as the design variables. The values of x1 and x2 are coded as integer multiples of 0.0065. There are fifteen constraints in the welded beam optimization problem, which is expressed mathematically as:

Figure 2.6 Schematic of welded beam

(2.18)

subject to

shear stress;

(2.19)

bending stress in the beam;

(2.20)

side constraints;

(2.21)

(2.22)

(2.23)

end deflection of the beam;

(2.24)

buckling load on the bar;

(2.25)

where;

(2.26)

(2.27)

(2.28)

(2.29)

(2.30)

The input data and the bounds of the design variables are given in Table 2.3. The harmony search algorithm was applied by Lee and Geem [50] to the optimum design of the welded beam with these four design variables. Lee and Geem [50] achieved a best solution vector of x = {0.2442, 6.2231, 8.2915, 0.2443} with a minimum cost of 2.38. The same problem was also solved by Ragsdell and Philips [56] using geometric programming, and Deb [57] used a simple genetic algorithm (GA). The results are given in Table 2.4.

Table 2.3 Input values and bounds for design variables.

| Variable | Lower bound | Upper bound |
| --- | --- | --- |
| x1 | 0.1 | 2.0 |
| x2 | 0.1 | 10.0 |
| x3 | 0.1 | 10.0 |
| x4 | 0.1 | 2.0 |

P = 6000 lb, L = 14 in, δmax = 0.25 in, τmax = 13,600 psi, σmax = 30,000 psi, E = 30·10^6 psi, G = 12·10^6 psi.

Table 2.4 Optimal results of welded beam design.

| Method | h | l | t | b | Cost |
| --- | --- | --- | --- | --- | --- |
| Ragsdell and Philips [56], Approx | 0.2444 | 6.2189 | 8.2915 | 0.2444 | 2.38 |
| Ragsdell and Philips [56], David | 0.2434 | 6.2552 | 8.2915 | 0.2444 | 2.38 |
| Ragsdell and Philips [56], GP | 0.2455 | 6.1960 | 8.2730 | 0.2455 | 2.39 |
| Ragsdell and Philips [56], Simplex | 0.2792 | 5.6256 | 7.7512 | 0.2796 | 2.53 |
| Ragsdell and Philips [56], Random | 0.4575 | 4.7313 | 5.0853 | 0.6600 | 4.12 |
| Deb [51] | unavailable | | | | 2.38 |
| Deb [57] | 0.2489 | 6.1730 | 8.1789 | 0.2533 | 2.43 |
| Lee and Geem [50] | 0.2442 | 6.2231 | 8.2915 | 0.2443 | 2.38 |

## 2.2.2 Applications of Harmony Search Algorithm

Since its initial presentation in the early 2000s, the harmony search algorithm has been applied to a wide range of optimization problems: from musical composition to solving Sudoku, from heat exchanger design to timetabling, and from medical imaging to structural design and environmental problems. (The examples below focus only on applications of the harmony search algorithm to various fields of civil engineering.) The harmony search algorithm (HS) now has a substantial literature.

Several studies have focused on the design of water distribution networks (Figure 2.7), in particular the selection of optimal pipe sizes [58, 59, 60]. The design variables of this optimization problem are the diameters of the pipe segments, and the aim is to select the diameter of each segment so that the total cost of the water distribution network is minimized. The problem is constrained by the principles of fluid mechanics and by practical and demand requirements. Geem [60] considered the optimal sizing of both the pipe diameters and the water pump in a water supply system, and minimized the total cost of the system including the operational cost of the pump. Several water-related studies involve parameter estimation for the nonlinear Muskingum model, which can be used to predict volumetric water flow [61].

Figure 2.7 BakRyun water distribution network.

The harmony search algorithm has also been used in slope stability analysis, to predict the location of the critical slip surface and to estimate the associated factor of safety [62, 63]. Two-dimensional slip surfaces of arbitrary shape and three-dimensional slopes have been analyzed with the harmony search algorithm (Figure 2.8). The location of the slip surface is found by minimizing the factor of safety of the slope.

Figure 2.8 Cross section of slope.

Transport-related problems have also been investigated with the harmony search algorithm. Travelling salesman problems with 20 and 51 cities were solved using harmony search [30], which found the minimum-length tour routes. A school bus routing problem was investigated by Geem [64]: the required number of school buses and the best routes are found, with the bus seating capacity and the maximum allowable journey time as the constraints of the problem.

Figure 2.9 20-city travelling salesman problem.

The harmony search algorithm is also used for structural design problems. Lee and Geem [65] present continuous-variable applications: two-dimensional (plane) trusses with 10-200 bars (Figure 2.10), three-dimensional (space) trusses with 22-72 bars, and a 120-bar dome truss are investigated in their study. Lee and Geem [66] also studied discrete-variable problems, namely a 52-bar plane truss, 25- and 72-bar space trusses, and a 47-bar power line tower (Figure 2.64).

Figure 2.10 Two-hundred-bar planar truss.

The optimum designs of geodesic domes having three to six rings (75-285 bars) are presented by Saka [67]. Erdal and Saka [68] present the optimization of grillage systems with 24 and 264 members. Değertekin studied the optimization of frame systems, including a 1-bay 8-storey plane frame, a three-bay twenty-four-storey frame, and a 4-storey, 84-bar space frame [69, 70].