Optimum Controller Setting Using Bio-Inspired Techniques

There are several optimization algorithms that can be used to search for the optimal gain parameters, a very basic one being random search: random points are selected from the feasible set and evaluated, and the best one is taken as the optimum. A more sophisticated algorithm is, for example, Simulated Annealing (SA), an algorithm inspired by annealing in metallurgy. The main idea of this method is to replace the current solution by a random "nearby" solution, accepted with a probability that depends on a gradually decreasing temperature parameter. More recently, however, the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) have proved more appealing. Genetic Algorithms can search for optimal solutions in parallel, while Particle Swarm Optimization can easily be applied in various situations and can generate a high-quality solution with high computational efficiency and stable convergence characteristics. In this chapter, the application of GA, ACO, SA and BF-PSO as tuning techniques for PID controllers is illustrated, and the advantages and disadvantages of all these methods are discussed.


With the ease of computation, numerical optimization methods have become significant in devising formulae for PI and PID controller parameter tuning. The squared-error integral criteria are the most common for such optimization. Several optimization techniques using the swarming principle have been adopted to solve a variety of engineering problems in the past decade. Ant Colony Optimization (ACO) was introduced around 1991-1992 by M. Dorigo and colleagues as a novel nature-inspired metaheuristic for the solution of hard combinatorial optimization problems. Farooq et al. developed a bee-inspired algorithm for routing in telecommunication networks, inspired by the way these insects communicate. Swarming strategies in bird flocking and fish schooling are used in Particle Swarm Optimization (PSO), introduced by Eberhart and Kennedy. A more recent evolutionary computation algorithm, the Bacterial Foraging scheme, was proposed by K. M. Passino. In this chapter, the use of both PSO and E. coli foraging-based optimization for PID parameter tuning is investigated, and a new algorithm, Bacterial Foraging oriented by Particle Swarm Optimization (BF-PSO), which combines the two, is implemented.

4.2. Basic Particle Swarm Optimization (PSO)

The Particle Swarm Optimization (PSO) model consists of a swarm of particles initialized with a population of random candidate solutions. The particles move iteratively through the d-dimensional problem space to search for new solutions. Each particle has a position represented by a position vector X_i^k (where i is the index of the particle and k the iteration) and a velocity represented by a velocity vector V_i^k. Each particle remembers its own best position, and the best position found by the swarm is stored in a separate vector. At each iteration k, the velocity is updated from the previous velocity to a new velocity as described below.

In an n-dimensional search space, the position and velocity of the i-th individual are represented as the vectors X_i = (x_i1, ..., x_in) and V_i = (v_i1, ..., v_in), respectively. In the PSO algorithm, Pbest_i = (x_i1^pbest, ..., x_in^pbest) and Gbest = (x_1^gbest, ..., x_n^gbest) are, respectively, the best position found so far by individual i and the best position found so far by its neighbours. Using this information, the velocity of the individual is updated in the PSO algorithm by:

V_i^(k+1) = W V_i^k + C_1 rand_1 (Pbest_i^k − X_i^k) + C_2 rand_2 (Gbest^k − X_i^k)    (4.1)

where,

V_i^k - velocity of individual i at iteration k

W - inertia weight parameter

C_1, C_2 - weighting factors

rand_1, rand_2 - random numbers between 0 and 1

X_i^k - position of individual i at iteration k

Pbest_i^k - best position of individual i up to iteration k

Gbest^k - best position of the group up to iteration k

Each individual moves from the current position to the next one by the modified velocity in (4.1):

X_i^(k+1) = X_i^k + V_i^(k+1)    (4.2)

The search mechanism of the PSO using the modified velocity and position of individual based on (4.1) and (4.2) is illustrated in Figure 4.1.

The process of the PSO algorithm can be summarized as follows:

Step 1: Initialize a group at random while satisfying the constraints.

Step 2: Update the velocities and positions while satisfying the constraints.

Step 3: Update Pbest and Gbest.

Step 4: Go to Step 2 until the stopping criterion is satisfied.

In subsequent sections, the detailed implementation strategies of the PSO are described.


Figure 4.1 The search mechanism of the particle swarm optimization

Step 1) Initialization: In the initialization process, a set of individuals is created at random. Here, the structure for tuning a PI controller for the CSTR process is composed of a set of elements (the controller gains). Therefore, the position of individual i at iteration 0 can be represented as the vector X_i^0 = (x_i1^0, ..., x_in^0), where n is the number of tuned parameters. The velocity of individual i, V_i^0 = (v_i1^0, ..., v_in^0), corresponds to the update quantity covering all elements. The elements of position and velocity have the same dimension. The following strategy is used in creating the initial velocity:

v_ij^0 = random value in [ −ε (x_j^max − x_j^min), ε (x_j^max − x_j^min) ]    (4.3)

where ε is a small positive real number, so the velocity of element j of individual i is generated at random within this boundary. The developed initialization scheme always guarantees individuals that satisfy the constraints while maintaining the concept of the PSO algorithm. The initial Pbest_i of individual i is set to the initial position of individual i, and the initial Gbest is determined as the position of the individual with the minimum payoff.

Step 2 a) Velocity update: To modify the position of each individual, it is necessary to calculate the velocity of each individual in the next stage, which is obtained from (4.1). In this velocity-updating process, the inertia weight is decreased linearly with the iteration count:

W = W_max − ((W_max − W_min) / Iter_max) · Iter    (4.4)

where W_max - initial (maximum) inertia weight

W_min - final (minimum) inertia weight

Iter_max - maximum iteration number

Iter - current iteration number

Step 2 b) Position Update: The position of each individual is modified by (4.2). The resulting position of an individual is not always guaranteed to satisfy the inequality constraints due to over/under velocity. If any element of an individual violates its inequality constraint due to over/under speed, then the position of the individual is fixed to its maximum/minimum operating point. Therefore, this can be formulated as follows:

X_ij^(k+1) = X_j^min,  if X_ij^(k+1) < X_j^min
X_ij^(k+1) = X_j^max,  if X_ij^(k+1) > X_j^max
X_ij^(k+1) unchanged,  otherwise    (4.5)

Step 3) Update of Pbest and Gbest: The Pbest of each individual at iteration k+1 is updated as follows:

Pbest_i^(k+1) = X_i^(k+1),  if TC(X_i^(k+1)) < TC(Pbest_i^k)
Pbest_i^(k+1) = Pbest_i^k,  otherwise

where TC_i is the objective function evaluated at the position of individual i. Additionally, Gbest at iteration k+1 is set as the best evaluated position among the Pbest_i^(k+1).

Step 4) Stopping criterion: The PSO is terminated when the iteration count reaches the predefined maximum number of iterations.

The tuning of the PSO algorithm for obtaining the optimal design of the CSTR process has been carried out using the above-mentioned five steps. The key advantages of the PSO are its computational efficiency and the small number of parameters that need to be adjusted to obtain the optimum result. The proposed approach utilizes the global exploration capabilities of the PSO to search for the optimal solution.
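As an illustration of these five steps, the sketch below searches for PI gains with PSO in Python. This is a minimal sketch, not the simulation code used in this chapter: the closed-loop CSTR simulation is stood in for by a hypothetical ise_cost function with a toy cost surface, and all parameter values are illustrative assumptions.

import random

def ise_cost(gains):
    # Hypothetical stand-in for a closed-loop simulation of the CSTR
    # returning the integral of squared error for PI gains (Kp, Ki).
    kp, ki = gains
    return (kp - 1.1) ** 2 + (ki - 1.1) ** 2   # toy quadratic surface

def pso(cost, bounds, n_particles=10, iters=50,
        w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
    dim = len(bounds)
    # Step 1: random initialization within the bounds.
    x = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in x]
    pbest_cost = [cost(p) for p in x]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for it in range(iters):
        # Linearly decreasing inertia weight, as in (4.4).
        w = w_max - (w_max - w_min) * it / iters
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Step 2a: velocity update, as in (4.1).
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                # Step 2b: position update (4.2) with clamping (4.5).
                x[i][d] = min(max(x[i][d] + v[i][d], bounds[d][0]), bounds[d][1])
            c = cost(x[i])
            if c < pbest_cost[i]:            # Step 3: update Pbest ...
                pbest[i], pbest_cost[i] = x[i][:], c
                if c < gbest_cost:           # ... and Gbest.
                    gbest, gbest_cost = x[i][:], c
    return gbest, gbest_cost                 # Step 4: stop after iters iterations.

best, err = pso(ise_cost, bounds=[(0.0, 5.0), (0.0, 5.0)])
print("Kp, Ki =", best, "ISE =", err)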

4.3 Basic Bacterial Foraging Optimization (BFO)

The selection behaviour of bacteria tends to eliminate poor foraging strategies and improve successful ones: after many generations, a foraging animal takes actions that maximize the energy obtained per unit time spent foraging. This foraging activity led researchers to use it as a model for an optimization process. The E. coli bacterium has a control system that enables it to search for food and to avoid noxious substances. The distributed motion of the bacteria can be modelled in the following four stages:

Step 1: Swimming and Tumbling via flagella (Ns)

The flagellum is a left-handed helix configured so that the base of the flagellum (i.e. where it is connected to the cell) rotates counterclockwise, as shown in Figure 4.2.

Figure 4.2: An E. coli bacterium while swimming or tumbling

Viewed from the free end of the flagellum looking toward the cell, this rotation produces a force against the bacterium, pushing the cell. This mode of motion is called swimming. The bacterium swims either for a maximum number of steps Ns or fewer, depending on the nutrient concentration and the environmental conditions. If the flagella rotate clockwise, however, each flagellum pulls on the cell, as shown in Figure 4.2, so that the net effect is that each flagellum operates relatively independently of the others and the bacterium "tumbles". The tumbling mode indicates a change in the future swim direction. The flagella of the bacterium alternate between these two modes of operation throughout its entire lifetime.

Step 2: Chemotaxis (Nc)

A chemotactic step is a sequence of swim steps followed by a tumble. The maximum number of swim steps within a chemotactic step is predefined as Ns; the actual number of swim steps is determined by the environment. If the environment shows a good nutrient concentration in the direction of the swim, the E. coli bacterium swims more steps. The chemotactic step ends either when the maximum number of steps Ns is reached or when a poor environment is encountered. When the swim stops, a tumble action takes place. To represent a tumble, a random unit-length direction vector Delta(n, i) is generated, where n is the dimension of the search space and i is the index of the bacterium in a population of at most S bacteria; this vector defines the direction of movement after the tumble. Let Nc be the length of the lifetime of the bacteria, measured by the number of chemotactic steps they take during their life, and let c(i) > 0, i = 1, 2, ..., S, denote a basic chemotactic step size that defines the lengths of steps during runs; the step size is assumed to be constant. The position of each bacterium is denoted by P(n, i, j, k, ell), where j is the index of the chemotactic step, k is the index of the reproduction step and ell is the index of the elimination-dispersal event. The new position of the bacterium after tumbling is given by


P(n, i, j+1, k, ell) = P(n, i, j, k, ell) + c(i) Delta(n, i) / sqrt(Delta(n, i)^T Delta(n, i))    (4.6)

Step 3: Reproduction (Nre)

After Nc chemotactic steps, a reproduction step is taken. Let Nre be the number of reproduction steps to be taken. For convenience, it is assumed that S is a positive even integer, and let Sr = S/2 be the number of population members that have had sufficient nutrients to reproduce (split in two) with no mutations. For reproduction, the population is sorted in order of ascending accumulated cost (a higher accumulated cost means the bacterium did not get as many nutrients during its lifetime of foraging, is therefore less "healthy", and is thus unlikely to reproduce). The Sr least healthy bacteria die and the Sr healthiest bacteria each split into two bacteria, which are placed at the same location.

Step 4: Elimination and dispersal (Ned)

An elimination event may occur, for example, when a local significant rise in heat kills a population of bacteria currently in a region with a high concentration of nutrients, or a sudden flow of water disperses bacteria from one place to another. Elimination and dispersal events can destroy chemotactic progress, but they can also assist chemotaxis, since dispersal may place bacteria near good food sources.
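The tumble-and-swim behaviour of Steps 1 and 2 can be sketched compactly. The Python snippet below is an illustrative reconstruction, assuming a generic cost function in place of the nutrient landscape; it applies the position update of equation (4.6) with a constant step size.

import math, random

def tumble_direction(n):
    # Random unit-length direction Delta(i) representing a tumble.
    d = [random.uniform(-1.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(x * x for x in d))
    return [x / norm for x in d]

def chemotactic_step(cost, pos, step_size, n_swim):
    # One chemotactic step: tumble once, then keep swimming in the same
    # direction while the cost improves, up to n_swim steps (Ns).
    delta = tumble_direction(len(pos))
    j_last = cost(pos)
    for _ in range(n_swim):
        candidate = [p + step_size * d for p, d in zip(pos, delta)]
        j_new = cost(candidate)
        if j_new < j_last:      # nutritious direction: keep swimming
            pos, j_last = candidate, j_new
        else:                   # poor environment: end the run
            break
    return pos, j_last

# Toy usage: descend toward the origin of a quadratic cost surface.
pos, j = chemotactic_step(lambda p: sum(x * x for x in p), [1.0, 1.0], 0.1, 4)
print(pos, j)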

4.4 Bacterial Foraging Optimization Oriented By Particle Swarm Optimization (BF-PSO)

The BF-PSO combines the BF and PSO algorithms. This combination aims to make use of the PSO's ability to exchange social information and the BF's ability to find new solutions through elimination and dispersal.

For initialization, the user selects S, Ns, Nc, Nre, Ned, Ped, C1, C2, R1, R2 and c(i), i = 1, 2, ..., S; the position P and the velocity are also initialized randomly. The BF-PSO model of bacterial population chemotaxis, swarming, reproduction, and elimination and dispersal, oriented by PSO, is given below (initially, j = k = ell = 0). Implicit subscripts are dropped for simplicity.

The following steps are involved in developing the BF-PSO algorithm:

(i) Initialize the parameters n, S, Nc, Ns, Nre, Ned, Ped, c(i) (i = 1, 2, ..., S), Delta, C1, C2, R1, R2, where

n : dimension of the search space,

S : number of bacteria in the population,

Sr : half the total number of bacteria,

Ns : maximum swim length,

Nc : number of chemotactic steps,

Nre : number of reproduction steps,

Ned : number of elimination and dispersal events,

Ped : probability of elimination and dispersal,

c(i) : step size taken in the random direction,

C1, C2 : PSO weighting factors,

R1, R2 : PSO random parameters.

(ii) Generate a random direction Delta(n, i) and a random initial position for each bacterium, then execute the following nested loops:

for ell = 1 to Ned                    (elimination-dispersal loop)
  for k = 1 to Nre                    (reproduction loop)
    for j = 1 to Nc                   (chemotaxis loop)
      for i = 1 to S                  (for each bacterium)
        Evaluate the cost function and store the best cost in Jlast
        The best cost of each bacterium is kept as its local best, Jlocal
        Tumble: update the position and the cost function
        m = 0
        while (m < Ns)                (swim)
          m = m + 1
          if the cost improved:
            update the position and the cost function; evaluate the
            current position and the local cost of each bacterium
          else
            m = Ns                    (end the swim)
        end while
      next i (next bacterium)
      Evaluate the local best position (Plbest) of each bacterium and the
      global best position (Pgbest)
      Evaluate the new direction of each bacterium from Plbest and Pgbest
      using the PSO velocity rule with parameters C1, C2, R1, R2
    next j (next chemotactic step)
    Reproduction: sort the bacteria by accumulated cost; the Sr least
    healthy die and the Sr healthiest each split into two
  next k (next reproduction step)
  Elimination-dispersal: with probability Ped, disperse each bacterium
  to a random position
next ell (next elimination-dispersal event)
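The step that distinguishes BF-PSO from plain bacterial foraging is the direction update at the end of each chemotactic loop: instead of tumbling completely at random, each bacterium's search direction is steered toward its local best and the global best positions, in PSO fashion. A minimal sketch of that update (the function name and signature are illustrative assumptions):

import random

def pso_oriented_direction(direction, pos, p_lbest, p_gbest, c1, c2):
    # Blend the current direction with attraction toward the bacterium's
    # own best position (Plbest) and the global best position (Pgbest).
    r1, r2 = random.random(), random.random()
    return [d + c1 * r1 * (lb - x) + c2 * r2 * (gb - x)
            for d, x, lb, gb in zip(direction, pos, p_lbest, p_gbest)]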

4.5 PID Tuning By PSO, BF and BF-PSO

The prime objective is to test the performance of the developed bacterial foraging oriented by particle swarm optimization (BF-PSO) algorithm for PID controller tuning. An attempt has been made to achieve a globally minimal squared-error integral criterion in the step response of a process cascaded with a PID controller by tuning the proportional gain Kp, integral gain Ki and derivative gain Kd. Usually, the controller coefficients are chosen by approximate methods, which do not guarantee a globally optimal solution for control applications. The values of Kp, Ki and Kd are derived through the BF, PSO and BF-PSO methods after ensuring that all the poles of the transfer function are confined to the left half of the s-plane. The performance of the developed algorithm is tested on the CSTR system, with the integral of squared error as the cost function; the closed-loop PID controller cascaded with the process is tuned for the values of Kp, Ki and Kd. Results obtained using the BF-PSO algorithm are presented in Table 4.1.

The performance of the algorithm has been analyzed on the CSTR process through computer simulation, with the following parameter settings:

p = 2;       % dimension of the search space
s = 10;      % number of bacteria
Nc = 10;     % number of chemotactic steps
Ns = 4;      % limit on the length of a swim
Nre = 4;     % number of reproduction steps
Ned = 2;     % number of elimination-dispersal events
Sr = s/2;    % number of bacterial reproductions (splits) per generation
Ped = 0.25;  % probability that each bacterium is eliminated/dispersed

Using BF-PSO tuning, the controller parameters obtained after simulation are Kp = 1.1430 and Ki = 1.1465. The switching signal shown in Figure 4.3 is applied as the set point in order to test the dynamic response of the system to various set-point variations. The response of the BF-PSO tuned PI control for the CSTR process is shown in Figure 4.4.

Figure 4.3 Set Point of BF-PSO tuned PI Control for CSTR process

Figure 4.4 Response of BF-PSO tuned PI Control for CSTR process

From Figure 4.4, it is clear that the peak overshoot is about 4.29%, though with a comparatively long settling time. The proposed approach has superior features, including easy implementation, stable convergence characteristics and good computational efficiency.

4.6 Error Criteria

The essential function of a feedback control system is to reduce the error, e(t), between the controlled variable and its demanded value to zero as quickly as possible. Therefore, any criterion used to measure the quality of a system response must take into account the variation of e over the whole range of time. Four basic criteria are in common use:

Integral of absolute error (IAE)

IAE = ∫₀^∞ |e(t)| dt    (4.7)

Integral of squared error (ISE)

ISE = ∫₀^∞ e²(t) dt    (4.8)

Integral of time multiplied by absolute error (ITAE)

ITAE = ∫₀^∞ t |e(t)| dt    (4.9)

Integral of time multiplied by squared error (ITSE)

ITSE = ∫₀^∞ t e²(t) dt    (4.10)

For any of the possible criteria, the best response corresponds to the minimum value of the chosen criterion. In all cases, either the absolute error or the squared error is involved, because straightforward integration of the raw error would produce a zero result even if the system response were a constant-amplitude oscillation. IAE is often used where digital simulation of a system is employed, but it is inapplicable for analytical work because the absolute value of an error function is not generally analytic in form; this problem is overcome by the ISE criterion. The ITAE and ITSE have an additional time multiplier on the error function, which emphasizes long-duration errors, and these criteria are therefore most often applied to systems requiring a fast settling time.
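In simulation these criteria are evaluated on sampled data rather than analytically. The following sketch shows discrete approximations of (4.7)-(4.10), assuming the error signal has been sampled every dt seconds:

def error_criteria(e, dt):
    # Discrete approximations of IAE, ISE, ITAE and ITSE from error
    # samples e[k] taken at times k*dt.
    iae = sum(abs(ek) for ek in e) * dt
    ise = sum(ek * ek for ek in e) * dt
    itae = sum(k * dt * abs(ek) for k, ek in enumerate(e)) * dt
    itse = sum(k * dt * ek * ek for k, ek in enumerate(e)) * dt
    return iae, ise, itae, itse

print(error_criteria([1.0, 0.5, 0.25, 0.0], dt=0.1))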

Figure 4.5 Responses of ISE, IAE, and ITAE in PSO

Using modern optimization techniques, it is possible to tune a PI controller based on the actual transfer function of the plant to optimize the closed-loop performance. The error measures considered are the Integral Square Error (ISE), Integral Absolute Error (IAE) and Integral Time Absolute Error (ITAE). These error responses are shown in Figure 4.5 and are used for the analysis of the controller output performance. With particle swarm optimization, values of ISE = 8.045143, IAE = 98.7911 and ITAE = 346.8 are obtained.

The obtained overshoot is about 4.2 %. To eliminate the overshoot completely, other tuning algorithms using GA and ACO are incorporated.

4.7 GA -Based PI Controller Tuning for a CSTR Process

4.7.1 Overview of Genetic Algorithm

Genetic Algorithms have been used to evolve computer programs for specific tasks, and to design other computational structures. A genetic algorithm is a search heuristic that mimics the process of natural evolution. This heuristic is routinely used to generate useful solutions to optimization and search problems. Genetic algorithms belong to the larger class of Evolutionary Algorithms (EA), which generate solutions to optimization problems using techniques inspired by natural evolution, such as inheritance, mutation, selection and crossover.

Genetic Algorithms are a stochastic global search method that mimics the process of natural evolution. The genetic algorithm starts with no knowledge of the correct solution and depends entirely on responses from its environment and on evolution operators (i.e. reproduction, crossover and mutation) to arrive at the best solution. By starting from several independent points and searching in parallel, the algorithm avoids becoming trapped in local minima and converging to suboptimal solutions. In this way, GAs have been shown to be capable of locating high-performance areas in complex domains without experiencing the difficulties associated with high dimensionality that may occur with gradient descent techniques or methods relying on derivative information.

In a genetic algorithm, a population of strings (called chromosomes or the genotype of the genome), which encode candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem, evolves toward better solutions. Traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible. The evolution usually starts from a population of randomly generated individuals and happens in generations. In each generation, the fitness of every individual in the population is evaluated, multiple individuals are stochastically selected from the current population (based on their fitness), and modified (recombined and possibly randomly mutated) to form a new population. The new population is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population. If the algorithm has terminated due to a maximum number of generations, a satisfactory solution may or may not have been reached.

4.7.1.1 Initialization

Initially, many individual solutions are randomly generated to form an initial population. The population size depends on the nature of the problem, but typically contains several hundred or several thousand possible solutions. Traditionally, the population is generated randomly, covering the entire range of possible solutions (the search space). Occasionally, the solutions may be "seeded" in areas where optimal solutions are likely to be found.

4.7.1.2 Selection

During each successive generation, a proportion of the existing population is selected to breed a new generation. Individual solutions are selected through a fitness-based process, where fitter solutions (as measured by a fitness function) are typically more likely to be selected. Certain selection methods rate the fitness of each solution and preferentially select the best solutions. Other methods rate only a random sample of the population, as this process may be very time-consuming.

Four common methods for selection are:

Roulette Wheel selection

Stochastic Universal sampling

Normalized geometric selection

Tournament selection

4.7.1.3 Reproduction

For each new solution to be produced, a pair of "parent" solutions is selected for breeding from the pool selected previously. By producing a "child" solution using crossover and mutation (described below), a new solution is created which typically shares many of the characteristics of its "parents". New parents are selected for each new child, and the process continues until a new population of solutions of appropriate size is generated. These processes ultimately result in a next-generation population of chromosomes that is different from the initial generation. Generally, the average fitness of the population will have increased by this procedure, since only the best organisms from the first generation are selected for breeding, along with a small proportion of less fit solutions.

There are a number of other selection methods available and it is up to the user to select the appropriate one for each process. All selection methods are based on the same principle i.e. giving fitter chromosomes a larger probability of selection. The next step is to generate a second generation population of solutions from those selected through genetic operators: crossover (also called recombination) and/or mutation.

4.7.1.4 Crossover

Once the selection process is complete, the crossover algorithm is initiated. The crossover operations swap certain parts of the two selected strings in a bid to capture the good parts of old chromosomes and create better new ones. Genetic operators manipulate the characters of a chromosome directly, using the assumption that certain individual's gene codes, on average, produce fitter individuals. The crossover probability indicates how often crossover is performed. A probability of 0% means that the 'offspring' will be exact replicas of their 'parents' and a probability of 100% means that each generation will be composed of entirely new offspring. The simplest crossover technique is the Single Point Crossover.

4.7.1.5 Mutation

Using selection and crossover on their own will generate a large number of different strings. However, there are two main problems with this:

Depending on the initial population chosen, there may not be enough diversity in the initial strings to ensure the GA searches in the entire problem space.

The GA may converge on sub-optimum strings due to a bad choice of initial population.

These problems may be overcome by the introduction of a mutation operator into the GA. Mutation is the occasional random alteration of the value of a string position, and it is considered a background operator in the genetic algorithm. The probability of mutation is normally low, because a high mutation rate would destroy fit strings and degenerate the genetic algorithm into a random search. Mutation probability values of around 0.1% or 0.01% are common; these values represent the probability that a certain string will be selected for mutation, i.e. for a probability of 0.1%, one string in one thousand will be selected for mutation. Once a string is selected for mutation, a randomly chosen element of the string is changed, or 'mutated'. Although crossover and mutation are known as the main genetic operators, it is possible to use other operators such as regrouping, colonization-extinction or migration in genetic algorithms. A graphical illustration of the genetic algorithm outline is shown in Figure 4.6.

The steps involved in creating and implementing a genetic algorithm are as follows (a code sketch follows the list):

Generate an initial, random population of individuals for a fixed size.

Evaluate their fitness.

Select the fittest members of the population.


Reproduce using a probabilistic method (e.g., roulette wheel).

Implement the crossover operation on the reproduced chromosomes (choosing probabilistically both the crossover site and the "mates").

Execute the mutation operation with low probability.

Repeat from step 2 until a predefined convergence criterion is met.
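The Python sketch below ties these steps together for a binary-coded GA with roulette-wheel selection, single-point crossover and bitwise mutation. It is an illustrative skeleton under stated assumptions, not the thesis implementation: the fitness function is a toy stand-in for the decoded controller-gain objective.

import random

def roulette_select(pop, fitness):
    # Roulette-wheel selection: probability proportional to (positive) fitness.
    pick, acc = random.uniform(0, sum(fitness)), 0.0
    for ind, fit in zip(pop, fitness):
        acc += fit
        if acc >= pick:
            return ind
    return pop[-1]

def single_point_crossover(a, b):
    # Swap the tails of two parents at a random cut point.
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(bits, rate=0.01):
    # Occasionally flip a bit to maintain diversity.
    return [1 - g if random.random() < rate else g for g in bits]

def ga(fitness_fn, n_bits=16, pop_size=20, generations=100, p_cross=0.8):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        fitness = [fitness_fn(ind) for ind in pop]
        new_pop = []
        while len(new_pop) < pop_size:
            p1, p2 = roulette_select(pop, fitness), roulette_select(pop, fitness)
            if random.random() < p_cross:
                c1, c2 = single_point_crossover(p1, p2)
            else:
                c1, c2 = p1[:], p2[:]
            new_pop += [mutate(c1), mutate(c2)]
        pop = new_pop[:pop_size]
    return max(pop, key=fitness_fn)

# Toy fitness (always positive): count of ones, standing in for a decoded
# Kp/Ki performance score.
print(ga(lambda ind: sum(ind) + 1))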

[Flow chart: Start → Generate the Initial Population → Measure Fitness → Select Fittest → Crossover/Reproduction → Mutation → optimum solution reached? If no, repeat from Measure Fitness; if yes, Stop.]

Figure 4.6 Graphical Illustration of the Genetic Algorithm Outline

4.7.2 Simulation using GA


Figure 4.7 Simulink block diagram of GA tuned PI Controller for CSTR process

GA-based PI controller tuning for obtaining the optimal design of the CSTR process is implemented. The Simulink block diagram of the GA-tuned PI controller for the CSTR process model is shown in Figure 4.7. In this work, the selection method implemented is roulette-wheel selection.

The performance of the algorithm has been analyzed on the CSTR process through computer simulation. The objective in this case is the minimization of the peak overshoot, delay time, rise time, settling time and peak time. Roulette-wheel selection is applied to select the members of the new population. The optimum settings of the GA are given below:

Number of Generations = 200

Number of Variables = 2

Population size = 10

Population Initial Range = 0.1-1.2

Population Type = Double Vector

Crossover probability = 0.8

Mutation probability = 0.6

Figure 4.8 Best fitness value and Stopping criteria

Figure 4.8 shows the best fitness value and the stopping criteria. It is clear that the proposed GA-tuned PI process produces better controller performance and takes less computation time to reach the optimal solution.

Figure 4.9 Response of GA tuned PI Control for CSTR process

Figure 4.10 Responses of ISE, IAE, and ITAE in GA

Figure 4.9 shows the response of the GA-tuned PI controller and Figure 4.10 shows the convergence of the GA for the different error functions ISE, IAE and ITAE for the CSTR process. The control parameters obtained after simulation are Kp = 0.8983 and Ki = 0.9595. The overshoot is completely eliminated using this technique. Thus, a GA-tuned PI control makes the CSTR process more efficient than the BF-PSO tuned process in terms of overshoot, although it takes more settling time than BF-PSO.

4.8. Overview of SA

Simulated annealing (SA) is a generic probabilistic metaheuristic for the global optimization problem of locating a good approximation to the global optimum of a given function in a large search space. It is often used when the search space is discrete. For certain problems, simulated annealing may be more efficient than exhaustive enumeration, provided that the goal is merely to find an acceptably good solution in a fixed amount of time rather than the best possible solution. The name and inspiration come from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. The heat causes the atoms to become unstuck from their initial positions (a local minimum of the internal energy) and wander randomly through states of higher energy; the slow cooling gives them more chances of finding configurations with lower internal energy than the initial one.

By analogy with this physical process, each step of the SA algorithm replaces the current solution by a random "nearby" solution, chosen with a probability that depends both on the difference between the corresponding function values and on a global parameter T (called the temperature) that is gradually decreased during the process. The dependency is such that the current solution changes almost randomly when T is large, but increasingly "downhill" (for a minimization problem) as T goes to zero. The allowance for "uphill" moves potentially saves the method from becoming stuck at local optima, which are the bane of greedier methods.

In the simulated annealing (SA) method, each point 's' of the search space is analogous to a state of some physical system, and the function E(s) to be minimized is analogous to the internal energy of the system in that state. The goal is to bring the system, from an arbitrary initial state, to a state with the minimum possible energy.

4.8.1 The basic iteration

At each step, the SA heuristic considers some neighbouring state s' of the current state s, and probabilistically decides between moving the system to state s' or staying in state s. These probabilities ultimately lead the system to move to states of lower energy. Typically, this step is repeated until the system reaches a state that is good enough for the application, or until a given computation budget has been exhausted.

4.8.2. The neighbours of a state

The neighbours of a state are new states of the problem that are produced after altering the given state in some particular way. In the travelling salesman problem, each state is typically defined as a particular permutation of the cities to be visited. The neighbours of some particular permutation are the permutations that are produced by interchanging a pair of adjacent cities. The action taken to alter the solution in order to find neighbouring solutions is called "move" and different "moves" give different neighbours. These moves usually result in minimal alterations of the solution, in order to help an algorithm to optimize the solution to the maximum extent and also to retain the already optimum parts of the solution and affect only the suboptimum parts.

Searching for neighbours of a state is fundamental to optimization because the final solution will come after a tour of successive neighbours. Simple heuristics move by finding the best neighbour and stop when they have reached a solution that has no better neighbours. The problem with this approach is that having no immediately better neighbours does not make a solution the global optimum; it would be the optimum only if it were shown that no kind of alteration of the solution, and not just one particular kind, gives a better solution. For this reason it is said that simple heuristics can reach only local optima, not the global optimum. Metaheuristics also optimize through the neighbourhood approach, but they differ from heuristics in that they can move through neighbours that are worse solutions than the current one. Simulated annealing in particular does not even try to find the best neighbour. The reason is that the search can then no longer stop at a local optimum, and in theory, if the metaheuristic can run for an infinite amount of time, the global optimum will be found.

4.8.3 Acceptance probabilities

The probability of making the transition from the current state s to a candidate new state s' is specified by an acceptance probability function P (e, e', T), that depends on the energies e = E(s) and e' = E(s') of the two states, and on a global time-varying parameter T called the temperature. One essential requirement for the probability function P is that it must be nonzero when e' > e, meaning that the system may move to the new state even when it is worse (has a higher energy) than the current one. It is this feature that prevents the method from becoming stuck in a local minimum - a state that is worse than the global minimum, yet better than any of its neighbours.

On the other hand, when T goes to zero, the probability P(e, e', T) must tend to zero if e' > e, and to a positive value if e' < e. That way, for sufficiently small values of T, the system will increasingly favour moves that go "downhill" (to lower energy values) and avoid those that go "uphill"; when T becomes 0, the procedure reduces to the greedy algorithm, which makes a move only if it goes downhill. In the original description of SA, the probability P(e, e', T) was defined as 1 when e' < e, i.e., the procedure always moved downhill when it found a way to do so, irrespective of the temperature. Many descriptions and implementations of SA still take this condition as part of the method's definition. However, this condition is not essential for the method to work, and one may argue that it is both counterproductive and contrary to the method's principle.

The P function is usually chosen so that the probability of accepting a move decreases when the difference e' − e increases; that is, small uphill moves are more likely than large ones. However, this requirement is not strictly necessary, provided that the above requirements are met. Given these properties, the temperature T plays a crucial role in controlling the evolution of the state s of the system with respect to its sensitivity to variations of the system energies: for large T, the evolution of s is sensitive only to coarser energy variations, while for small T it is sensitive to finer energy variations.
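For reference, the "standard" acceptance function discussed above can be written as a small sketch (assuming a minimization problem with energies e and e_new at temperature t):

import math

def acceptance(e, e_new, t):
    # Metropolis criterion: always accept downhill moves; accept uphill
    # moves with probability exp(-(e_new - e) / t).
    return 1.0 if e_new < e else math.exp(-(e_new - e) / t)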

4.8.4 The annealing schedule

The name and inspiration of the algorithm demand that an interesting feature related to temperature variation be embedded in the operational characteristics of the algorithm. This necessitates a gradual reduction of the temperature as the simulation proceeds. The algorithm starts initially with T set to a high value (or infinity), and then it is decreased at each step following some annealing schedule - which may be specified by the user, but must end with T = 0 towards the end of the allotted time budget. In this way, the system is expected to wander initially towards a broad region of the search space containing good solutions, ignoring small features of the energy function; then drift towards low-energy regions that become narrower and narrower; and finally move downhill according to the steepest descent heuristic.

The effect of the cooling schedule on the performance of simulated annealing is shown in Figures 4.11 and 4.12. The problem is to rearrange the pixels of an image so as to minimize a certain potential-energy function, which causes similar colours to attract at short range and repel at a slightly larger distance. The elementary moves swap two adjacent pixels. The image obtained with a fast cooling schedule is shown in Figure 4.11 and that with a slow cooling schedule in Figure 4.12, producing results similar to amorphous and crystalline solids, respectively.


Figure 4.11 Fast Cooling Schedules


Figure 4.12 Slow Cooling Schedules

For any given finite problem, the probability that the simulated annealing algorithm terminates with the global optimal solution approaches 1 as the annealing schedule is extended. However, the time required to ensure a significant probability of success will usually exceed the time required for a complete search of the solution space.

4.8.5 Pseudocode

The following pseudocode presents the simulated annealing heuristic. It starts from a state s0 and continues until either a maximum of kmax steps has been taken or a state with energy emax or less is found. In the process, the call neighbour(s) should generate a randomly chosen neighbour of a given state s; the call random() should return a random value in the range [0, 1]. The annealing schedule is defined by the call temp(r), which should yield the temperature to use given the fraction r of the time budget that has been expended so far.

s ← s0; e ← E(s)                                  // Initial state, energy.
sbest ← s; ebest ← e                              // Initial "best" solution.
k ← 0                                             // Energy evaluation count.
while k < kmax and e > emax                       // While time is left and not good enough:
    snew ← neighbour(s)                           // Pick some neighbour.
    enew ← E(snew)                                // Compute its energy.
    if P(e, enew, temp(k/kmax)) > random() then   // Should we move to it?
        s ← snew; e ← enew                        // Yes, change state.
    if enew < ebest then                          // Is this a new best?
        sbest ← snew; ebest ← enew                // Save 'new neighbour' to 'best found'.
    k ← k + 1                                     // One more evaluation done.
return sbest                                      // Return the best solution found.

Actually, the "pure" SA algorithm does not keep track of the best solution found so far: it does not use the variables sbest and ebest, it lacks the second if inside the loop, and at the end it returns the current state s instead of sbest. While remembering the best state is a standard technique in optimization that can be used in any metaheuristic, it has no analogy with physical annealing, since a physical system can "store" only a single state.

In strict mathematical terms, saving the best state is not necessarily an improvement, since one may have to specify a smaller kmax in order to compensate for the higher cost per iteration and since there is a good probability that sbest equals s in the final iteration. However, the step sbest ← snew happens only in a small fraction of the moves. Therefore, the optimization is usually worthwhile, even when state-copying is an expensive operation.

4.8.6. Selecting the parameters

In order to apply the SA method to a specific problem, the following parameters must be specified: the state space, the energy (goal) function E(), the candidate generator procedure neighbour(), the acceptance probability function P(), the annealing schedule temp() and the initial temperature. These choices can have a significant impact on the method's effectiveness.

4.8.7 Diameter of the search graph

Simulated annealing may be modelled as a random walk on a search graph, whose vertices are all possible states, and whose edges are the candidate moves. An essential requirement for the neighbour ( ) function is that it must provide a sufficiently short path on this graph from the initial state to any state which may be the global optimum.

4.8.8. Transition probabilities

For each edge (s, s') of the search graph, it is necessary to define a transition probability, which is the probability that the SA algorithm will move to state s' when its current state is s. This probability depends on the current temperature as specified by temp(), on the order in which the candidate moves are generated by the neighbour() function, and on the acceptance probability function P().

4.8.9 Acceptance probabilities

The specification of neighbour(), P() and temp() is partially redundant. In practice, it is common to use the same acceptance function P() for many problems and adjust the other two functions according to the specific problem. As a result, the transition probabilities of the simulated annealing algorithm do not correspond to the transitions of the analogous physical system, and the long-term distribution of states at a constant temperature T need not bear any resemblance to the thermodynamic equilibrium distribution over states of that physical system at any temperature. Nevertheless, most descriptions of SA assume the original acceptance function, which is probably hard-coded in many implementations of SA.

4.8.10. Efficient candidate generation

When choosing the candidate generator neighbour ( ), it must be considered that after a few iterations of the SA algorithm, the current state is expected to have much lower energy than a random state. Therefore, as a general rule, it should skew the generator towards candidate moves where the energy of the destination state s' is likely to be similar to that of the current state. This heuristic tends to exclude "very good" candidate moves as well as "very bad" ones; however, the latter are usually much more common than the former, so the heuristic is generally quite effective.

In the travelling salesman problem, swapping two consecutive cities in a low-energy tour is expected to have a modest effect on its energy (length); whereas swapping two arbitrary cities is far more likely to increase its length than to decrease it. Thus, the consecutive-swap neighbour generator is expected to perform better than the arbitrary-swap one, even though the latter could provide a somewhat shorter path to the optimum (with n − 1 swaps, instead of n (n − 1) / 2).

A more precise statement of the heuristic is that it should try first candidate states s' for which P (E(s), E(s'), T) is large. For the "standard" acceptance function P above, it means that E(s') − E(s) is on the order of T or less. Thus, use a neighbour () function that swaps two random cities, where the probability of choosing a city pair vanishes as their distance increases beyond T.

4.8.11. Barrier avoidance

When choosing the candidate generator neighbour(), one must also try to reduce the number of "deep" local minima: states (or sets of connected states) that have much lower energy than all their neighbouring states. Such "closed catchment basins" of the energy function may trap the SA algorithm with high probability and for a very long time. As a rule, it is impossible to design a candidate generator that satisfies this goal and also prioritizes candidates with similar energy; however, relatively simple changes to the generator can often vastly improve the efficiency of SA.

4.8.12. Cooling schedule

The physical analogy that is used to justify SA assumes that the cooling rate is low enough for the probability distribution of the current state to be near thermodynamic equilibrium at all times. Unfortunately, the relaxation time - the time one must wait for the equilibrium to be restored after a change in temperature - strongly depends on the "topography" of the energy function and on the current temperature. In the SA algorithm, the relaxation time also depends on the candidate generator, in a very complicated way. All these parameters are usually provided as black box functions to the SA algorithm.

Therefore, in practice the ideal cooling rate cannot be determined beforehand, and should be empirically adjusted for each problem. The variant of SA known as thermodynamic simulated annealing tries to avoid this problem by dispensing with the cooling schedule and instead automatically adjusting the temperature at each step based on the energy difference between the two states, according to the laws of thermodynamics.

4.8.13 Restarts

Sometimes it is better to move back to a solution that was significantly better rather than always moving from the current state. This process is called restarting of simulated annealing. To do this, set s and e to sbest and ebest and perhaps restart the annealing schedule. The decision to restart can be based on several criteria, including whether the current energy is too far above the best energy obtained so far, restarting randomly, etc.

4.9 SA-Based Tuning for PI Controller in CSTR Process

4.9.1 The Simulated Annealing Mechanics

Simulated annealing mechanics originated from the statistical mechanics of annealing in solids, where a solid is coerced into a low-energy state: the solid is heated to a temperature that permits many atomic rearrangements, then cooled carefully and slowly until the material freezes into a good crystal. Simulated annealing techniques use an analogous set of controlled cooling operations for non-physical optimization problems, in effect transforming a poor, unordered solution into a highly optimized, desirable solution. Several basic components are needed for the operation of the mechanics.

• Configurations: the configuration of parameters that minimizes some function E, usually referred to as the cost or objective function. It is a measure of the goodness of a particular configuration of parameters. The number of parameters may be considerably large, and the cost function may be very sophisticated.

• Move sets: a set of allowable moves that permits us to reach all feasible configurations and that is easy to compute. The Metropolis algorithm, which moves from configuration to configuration along a schedule of decreasing temperature, is employed here. An iterative improvement strategy attempts to perturb some existing, suboptimal solution in the direction of a better, lower-cost solution and then evaluates the resulting change in energy ΔE. If the energy is reduced (ΔE < 0), the new configuration has lower energy and is accepted as the starting point for the next move. However, if ΔE > 0, the move may still happen: the probability of jumping to a higher energy depends on the current temperature T. The Metropolis algorithm models this with a Boltzmann distribution; the probability of an uphill move of size ΔE at temperature T is

P(ΔE) = exp(−ΔE / T)

4.9.2 Control Design

The PID controller based on the simulated annealing algorithm is designed by the following steps.

Step 1 The design parameter: The design parameter is the vector of controller gains [Kp, Ki, Kd]. Initialize the design parameter as a one-by-three matrix containing random values, usually between 0 and 1.

Step 2 The objective function: Choose the integral of time-multiplied absolute error (ITAE) as the objective function, which emphasizes the adjusting time as well as the overshoot. The performance index is quantified as in (4.9), ITAE = ∫₀^∞ t |e(t)| dt.

Step 3 The temperature index and the Markov Chain:

The value of the initial temperature index, Ta, is very important. If the initial temperature index is too high, the searching efficiency for the design parameter becomes low; on the contrary, if it is too low, the search may be trapped in a local minimum. In order to avoid a blind choice of the initial temperature, the values of the performance index with the best and the worst performance, Jbest and Jworst respectively, are evaluated. Supposing the acceptance probability of the worst configuration relative to the best is Pr, the initial temperature index follows from the Boltzmann acceptance probability as

Ta = −(Jworst − Jbest) / ln(Pr)

The value of the acceptance probability Pr is usually between [0.8, 0.9], so that the searching mechanism avoids premature convergence to a local optimum. The length of the Markov chain, La, defines the step size in the search, and it changes with the temperature.

Step 4 The state producing function: As the search for the optimal design parameter begins, a random perturbation is generated on the design parameter and a new set of state parameters is produced. As the search approaches the optimum, the perturbation becomes subtler and subtler.

So the state producing function is designed as

x_new = x + η λ

where η is the scale parameter and λ is a random perturbation with a Gaussian distribution.

Step 5 Acceptance criteria: Using the current state parameter, we obtain the current objective function value, which is then compared with the previous best performance index. If the current performance index is lower than the previous best one, the current state parameter replaces the previous best design parameter. Otherwise it is not discarded immediately, but accepted with a probability whose evaluation is subject to the Boltzmann equation: the move is accepted when

exp(−ΔJ / Ta) > n

where n is a threshold value with a uniform distribution between 0 and 1, and ΔJ is the difference between the new performance index and the previous best one.

Step 6 The cooling schedule: Repeat Step 4 and Step 5 within the current step size of the search, then decrease the current temperature index with an exponential decay,

Ta ← λT Ta    (4.15)

where the decay factor λT is between 0 and 1. The length of the Markov chain La gets longer, and the scale parameter η in Step 4 gets smaller, as the temperature index becomes lower.

Step 7 The termination criteria: Iterate Step 4 to Step 6 until either the best performance index decreases to an acceptable minimum level or it no longer changes over several temperature-decreasing periods, i.e. |ΔJbest| < δ, where ΔJbest is the difference between two adjacent best performance indices, δ is a very small positive value, and the acceptable minimum level is a suitably chosen performance index. This criterion avoids redundant search.
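Steps 1 to 7 can be condensed into a short sketch. The code below is illustrative only, assuming a toy objective in place of the ITAE of the simulated CSTR loop; the cooling factor, chain length and perturbation scale are example values.

import math, random

def sa_tune(cost, x0, t0, t_min=1e-3, decay=0.9, moves_per_temp=20, eta0=0.5):
    # Steps 4-6: perturb, accept by the Boltzmann criterion, cool exponentially.
    x, j = list(x0), cost(x0)
    x_best, j_best = x[:], j
    t, eta = t0, eta0
    while t > t_min:
        for _ in range(moves_per_temp):
            # Step 4: Gaussian perturbation scaled by eta.
            x_new = [xi + eta * random.gauss(0.0, 1.0) for xi in x]
            j_new = cost(x_new)
            dj = j_new - j
            # Step 5: accept downhill moves, and uphill moves with
            # probability exp(-dJ/Ta).
            if dj < 0 or math.exp(-dj / t) > random.random():
                x, j = x_new, j_new
                if j < j_best:
                    x_best, j_best = x[:], j
        t *= decay      # Step 6: exponential cooling of the temperature index.
        eta *= decay    # The perturbation scale shrinks with the temperature.
    return x_best, j_best

# Toy objective standing in for the ITAE of the closed-loop response.
gains, cost_val = sa_tune(lambda g: (g[0] - 1.15) ** 2 + (g[1] - 2.46) ** 2,
                          x0=[random.random(), random.random()], t0=10.0)
print(gains, cost_val)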

4.10 Simulation Results Using Simulated Annealing Algorithm

Here, SA-based PI parameter tuning for obtaining the optimal design of the CSTR process is simulated. The performance of the algorithm has been analyzed on the CSTR process through computer simulation, and optimal PI settings are computed by means of optimization based on the algorithm. The response of the simulated annealing tuned PI control for the CSTR process is shown in Figure 4.13. It is seen that the SA-tuned PI controller is well tuned in various respects such as delay time, peak time, settling time and rise time, but it has an overshoot of about 7.9%. The number of iterations used in this algorithm is 1000, and the best fitness value obtained is 751.15.

Moreover, using SA tuning, the controller parameters obtained are Kp = 1.1542 and Ki = 2.4627.

Figure 4.13 Response of Simulated Annealing tuned PI Control for CSTR process

4.11. Ant Colony Optimization

The search in ACO utilizes two evaluations: a static value and a dynamic one. The static evaluation is information specific to the target problem; usually, the reciprocal of the distance is adopted as the static evaluation value when ACO is applied to a route-searching problem. The dynamic evaluation, on the other hand, is the pheromone amount. Specifically, the optimization procedure of ACO is as follows. First, a random number q between 0 and 1 is generated. Next, q is compared with a parameter q0. When q is smaller than q0, the node with the largest value of the product of the static evaluation and the dynamic one is selected as the next destination; otherwise, the next node is selected probabilistically as follows.

The probability p_k(i, j) with which an ant k at node i selects node j as its next move is:

p_k(i, j) = τ(i, j) · η(i, j)^β / Σ_{u ∈ n_k} τ(i, u) · η(i, u)^β,  for j ∈ n_k    (4.21)

where τ(i, j) is the pheromone amount between node i and node j, η(i, j) is the reciprocal of the distance between node i and node j, β is a parameter which controls the balance between the static evaluation value and the dynamic one, and n_k is the set of unvisited nodes. The selection probability is therefore proportional to the product of the static evaluation and the dynamic one.

Regarding the dynamic evaluation, the pheromone amount on each route between nodes is calculated using two pheromone update rules: a local update rule and a global update rule. The local update rule is applied to the route selected by equation (4.21), and it is defined as follows:

τ(i, j) ← (1 − ξ) τ(i, j) + ξ τ0    (4.22)

where ξ is a decay parameter in the local update rule and τ0 is the initial value of the pheromone. Thus, the local update rule adds pheromone to the selected route as the ant moves. The global update rule adds pheromone to the best (completed) route of all routes; the best route usually indicates the shortest route. The global update rule is defined as follows:

τ(i, j) ← (1 − ρ) τ(i, j) + ρ / L+,  for all edges (i, j) ∈ T+    (4.23)

where ρ is a decay parameter in the global update rule, T+ is the best route, and L+ is the distance of the best route.

The ACO with Tabu search has good searching capability compared with other searching techniques. However, two problems arise:

(i) the moving destination is limited;

(ii) the ant agent can be trapped in a blind alley.

Regarding problem (i), the proposed algorithm handles the unvisited nodes, the visited nodes and the moving candidate nodes separately. Regarding the blind alley of problem (ii), the proposed algorithm adopts a Tabu search. The situation in which the ant agent is trapped in a blind alley means that the agent has not reached the final destination and no moving candidate node exists; in this case, the ant agent first returns to the previous node, i.e. it moves to another node with the blind-alley node set as tabu. The flowchart of the ACO outline is shown in Figure 4.14.

The ACO procedure for finding the optimal solution is:

Step 1: Select the start node and the destination (target) node.

Step 2: Select the next node using equation (4.21).

Step 3: Move to the selected node and mark the current node as visited.

Step 4: Execute the Tabu operation when no moving candidate node exists.

Step 5: Repeat Steps 2 to 4 until the ant agent reaches the final destination.

Step 6: Update the pheromone values using equations (4.22) and (4.23).

Step 7: Repeat Steps 2 to 6 until the generations are terminated.
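The node-selection and pheromone-update rules (4.21)-(4.23) translate directly into code. The sketch below is illustrative only, assuming tau (pheromone) and eta (reciprocal distance) are dictionaries keyed by (node, node) edges; it is not the simulation code behind Figure 4.15.

import random

def select_next(current, candidates, tau, eta, beta, q0):
    # Combine the static evaluation eta and the dynamic evaluation tau.
    scores = {j: tau[(current, j)] * eta[(current, j)] ** beta for j in candidates}
    if random.random() < q0:
        # Exploitation: the node with the largest tau * eta^beta product.
        return max(scores, key=scores.get)
    # Exploration: roulette selection proportional to the products (4.21).
    pick, acc = random.uniform(0, sum(scores.values())), 0.0
    for j, s in scores.items():
        acc += s
        if acc >= pick:
            return j
    return max(scores, key=scores.get)   # numerical fallback

def local_update(tau, edge, xi, tau0):
    # Local rule (4.22): partial evaporation plus a deposit toward tau0.
    tau[edge] = (1 - xi) * tau[edge] + xi * tau0

def global_update(tau, best_route, best_length, rho):
    # Global rule (4.23): reinforce only the edges of the best route.
    for edge in best_route:
        tau[edge] = (1 - rho) * tau[edge] + rho / best_length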

[Flow chart: Start → initialise the system (data input, initial conditions, preliminary calculations) → initialise parameters and counters (α, β and ρ = 0.75; iterations = 0; w = 0) → for each ant, execute the Tabu operation (local heuristic rule) and evaluate the objective function, updating the best location and resetting w = 0 on each improvement; after the last ant (m = M), apply the global heuristic rule, set w = w + 1 and start a new iteration (h = h + 1, m = 0) → stop when the criterion w = W is met.]

Figure 4.14 Flow Chart of the ACO Outline

4.11.1 Simulation results using ACO

The ACO simulation is shown in Figure 4.15, which shows the various searching routes explored in obtaining the feasible solution. In ACO, the search region is independent of the number of parameters and is given by the distance between the randomly selected initial position and the position corresponding to the optimal fitness value. The speed of computation is determined by the initial velocity of the ACO algorithm with which it reaches the best solution. After the optimization, the optimum controller parameters are found to be Kp = 0.8491 and Ki = 0.9340; these values are used for the controller tuning.

Figure 4.15 Ant colony optimization

The different iterations of the ant colony optimization are shown in Figure 4.15. Here, the global variables are taken as X1 and Y1. For the initialization of the pheromone values, the number of ants is taken as 20, the number of epochs (iterations) is taken as 5, and the range of the iteration values starts at 0 and ends at 1. Based on these parameters, the loop cycle finds the best epoch after the first iteration. Based on the values of the first iteration, the second iteration occurs, and finally the optimum values of Kp and Ki are obtained.

The optimally tuned controller output obtained using ant colony optimization is shown in Figure 4.16, which also shows the set-point tracking, i.e. if the set point changes, the controller output tracks it. This is essential for the CSTR process, and the ant colony optimization gives good performance on this process; it performs better than the PSO and GA-based tuning processes.

Figure 4.16 Controller output with ACO

Figure 4.17 Responses of ISE, IAE, and ITAE in ACO

Using these modern optimization techniques, it is possible to tune a PI controller based on the actual transfer function of the plant in order to optimize the closed-loop performance. The different error responses are shown in Figure 4.17: the Integral Square Error (ISE), Integral Absolute Error (IAE) and Integral Time Absolute Error (ITAE), which are used for the analysis of the controller output performance. With ant colony optimization, values of ISE = 768.8954, IAE = 129.7663 and ITAE = 1.3772e+003 are obtained.

Figure 4.18 Convergence of GA vs BF-PSO

It is observed from Figure 4.18 that the BF-PSO algorithm converges faster than the GA, showing that BF-PSO is stable. It is noted from Table 4.1 that the BF-PSO tuned method has a smaller delay time, rise time and settling time, but a larger peak overshoot, than both the GA and the ACO tuned methods. The GA tuned method has a smaller delay time, rise time, peak time and settling time than the ACO tuned method, although the ACO method gives the lowest peak overshoot.

Table 4.1 Comparison of various tuning methods

Tuning Method    Delay Time (s)    Rise Time (s)    Peak Time (s)    Settling Time (s)    Peak Overshoot (%)

BF-PSO                0.75              1.5              2.0               2.2                   4.2

GA                    0.8               1.7              1.75              3.2                   1.5

ACO                   1                 2                2.2               4                     1.1

SA                    0.9               1.75             2.5               3.4                   7.9

4.12 Conclusion

Tuning of a CSTR process using the BF-PSO, GA, SA and ACO techniques has been described in this chapter, with computer simulations carried out in MATLAB. The results show that the BF-PSO method achieves a short settling time with a small percentage of overshoot, and the speed of computation of the BF-PSO is much better than that of the other optimization techniques. The simulation results with the GA and ACO techniques also prove to be effective. In ACO, the search region is independent of the number of parameters and is given by the distance between the randomly selected initial position and the position corresponding to the optimal fitness value; the speed of computation is determined by the velocity initializing the ACO algorithm with which it reaches the best solution. It is also seen from the output responses that the peak overshoot with the GA and ACO strategies is very small, indicating better stability of the control system, although the settling time is longer with GA and ACO than with the BF-PSO tuned controller; the GA method is inferior to the ACO method but superior to the SA method. The obtained results are compared and tabulated in Table 4.1. The controller also maintains the output under set-point changes, with minimum values of ITAE, ISE and IAE. Overall, the results of this chapter reveal that the BF-PSO based tuning method is better than the other tuning methods in rise time, delay time and settling time, at the cost of a larger peak overshoot.