# Enumeration Deterministic Heuristic And Stochastic Optimisation Computer Science Essay


Optimization can be defined as a method of maximizing or minimizing a function of several variables, i.e. finding the values of the variables for which the function attains its minimum or maximum. The function f(x), called the merit function or objective function, is the quantity to be made as small as possible, such as cost, or as large as possible, such as net present worth (NPV). The components of x, known as design variables or decision variables, are the quantities that can be adjusted.

Optimization is a large topic with many books dedicated to it. Optimization is almost everywhere and optimization problems arise in almost every field. There is scarcely a modern journal, whether of engineering, economics, management, mathematics, physics or social sciences, in which the concept "optimization" is missing from the subject index (Schwefel, 1981).

Research into new optimization techniques is ongoing, and as a result a wide variety of techniques exists to solve optimization problems. The number of available algorithms is very large and new techniques are constantly being developed (Celeste, Suzuki et al. 2004). These techniques and algorithms are classified in various ways, for example on the basis of algorithmic structure, underlying principles or theory, smoothness of the objective function, or type of variables. The simplest classifications of optimization techniques are:

Deterministic and stochastic

Local and global

Sequential and parallel

Some other classifications of optimization techniques are;

Enumeration, deterministic, heuristic and stochastic

Analytical methods, mathematical programming, gradient methods, statistical optimization and evolutionary computations

Thomas (1992) classified optimization techniques as rigorous, heuristic, stochastic, static and dynamic.

Haupt (2004) classified optimization techniques into six categories as shown in Figure 3.1. These six categories and their branches are not necessarily mutually exclusive.

Figure 3.1 Categories of Optimization Techniques (Haupt and Haupt 2004)

Each of these classification categories may have multiple techniques. These techniques are listed in Tables 3.1 (a), 3.1 (b) and 3.1 (c).

## Classification of Optimization Techniques

Each optimization technique has advantages and disadvantages. While it is not within the scope of this research to detail all the possible optimization techniques, a brief introduction to the most important categories and a description of the techniques used in this dissertation is presented.

## Analytical Methods

Direct search

Lagrangian multipliers

Calculus of variations

## Mathematical Programming

Geometric programming

Linear programming

Nonlinear programming

Dynamic programming

Goal programming

Semidefinite and Second order cone programming

Steepest descent / ascent method

Sequential simplex method

## Statistical Optimization

Regression analysis

Correlation analysis

## Evolutionary Computations

Genetic algorithms

Evolution strategies

Differential evolution

Evolutionary programming

Genetic programming

Table 3.1 (a) Classification of optimization techniques

Table 3.1 (b) Classification of optimization techniques

## Unconstrained

Steepest Descent

Quasi-Newton

Newton Method

## Constrained

Simplex - linear

SLP - linear

SQP - nonlinear

Exterior Penalty - nonlinear

Interior Penalty - nonlinear

Method of Feasible Directions - nonlinear

Mixed Integer Programming

Table 3.1 (c) Classification of optimization techniques

## Enumeration Techniques

Exhaustive search

Dynamic programming

## Deterministic Techniques

Hill climbing

Branch and bound

Divide and conquer

Depth first (greedy, backtracking)

Dynamic programming

Best first

Tabu search

Informed search

## Stochastic Search Techniques

Simulated annealing

Monte Carlo

Evolutionary algorithms

Memetic algorithms

Swarm Intelligence

## Analytical Methods

These are the classical methods of optimization, used when the function to be optimized is analytical and the number of independent variables is small. A function of a small number of variables can be optimized by differentiating it and equating the resulting expression to zero; the expression is then solved to find the optimum conditions. A larger number of variables raises dimensionality issues and makes solving the resulting expressions prohibitive. Constraints and larger numbers of variables therefore curb the use of analytical methods, and they have limited use in practice.
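As a one-variable illustration of this classical approach (the function is a made-up example): for f(x) = x² - 4x + 5, setting the derivative f'(x) = 2x - 4 to zero gives the stationary point x* = 2 with f(x*) = 1, which a crude numerical check confirms:

```python
# Illustrative sketch (not from the source text): the calculus approach
# applied to a hypothetical one-variable objective f(x) = x^2 - 4x + 5.

def f(x):
    return x ** 2 - 4 * x + 5

# Analytical optimum from f'(x) = 2x - 4 = 0
x_star = 2.0
print(f(x_star))  # 1.0

# Crude numerical check over a grid on [-5, 5]
best_x = min((x * 0.01 for x in range(-500, 501)), key=f)
print(round(best_x, 2))  # 2.0
```

The grid check is only a sanity test; the point of the analytical method is that the optimum is obtained exactly, without search.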

## Gradient Methods

These are numerical search methods. They are universal, widely adopted, and prove most efficient in the search for extremal values of constrained and unconstrained nonlinear functions, including functions that are not known analytically at all. These methods work by moving along a gradient, a vector orthogonal to the performance contour surface at a given point. Gradient methods are based on computations that readily ascertain the direction ensuring the fastest approach to the optimum.

## Rigorous Algorithms

These are the ones which, given enough time, will always find the "optimum" solution to the problem for the supplied data and constraints, and for which a formal proof of the optimality of their solutions has been developed (Thomas, 1992).

## Enumeration Techniques

These involve a total enumeration of the search space, examining every possible solution to the optimization problem, and are therefore guaranteed to generate the optimal solution(s). A total enumeration is not the best approach if the space is too complex to cover within a reasonable amount of time or if the problem is constrained, because enumeration techniques typically do not detect infeasible combinations of decision variables prior to computing their fitness value. Since one would like to generate solutions to an optimization problem within a reasonable amount of time, and enumeration approaches cannot meet this requirement for complex, large scale optimization problems, they are not always a good technique to use (Zydallis 2003).

## Deterministic Techniques

These techniques typically incorporate problem domain knowledge to reduce the size of the search space that is actually analyzed. Throughout the deterministic search process, certain paths may be deemed inferior to others, and only paths of potentially better solution quality are fully explored. Many deterministic search techniques search the space through a tree- or graph-like process. While these techniques are typically more efficient than conducting an enumeration of the space, they are still time consuming when applied to large scale optimization problems (Zydallis 2003).
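The pruning idea described above can be sketched with a small branch-and-bound routine for a 0/1 knapsack instance (the problem data are hypothetical, chosen only for illustration); branches whose optimistic bound cannot beat the best solution found so far are discarded without being explored:

```python
# Minimal branch-and-bound sketch (illustrative, not from the source):
# deterministic tree search that prunes paths deemed inferior.

def knapsack_bb(values, weights, capacity):
    n = len(values)
    best = [0]  # incumbent: best objective value found so far

    def bound(i, value, room):
        # Optimistic bound: fill remaining room fractionally (LP relaxation).
        for j in range(i, n):
            if weights[j] <= room:
                room -= weights[j]
                value += values[j]
            else:
                return value + values[j] * room / weights[j]
        return value

    def search(i, value, room):
        if value > best[0]:
            best[0] = value
        if i == n or bound(i, value, room) <= best[0]:
            return  # prune: this branch cannot improve on the incumbent
        if weights[i] <= room:
            search(i + 1, value + values[i], room - weights[i])  # take item i
        search(i + 1, value, room)                               # skip item i

    # Sorting by value density makes the fractional bound tight.
    order = sorted(range(n), key=lambda j: -values[j] / weights[j])
    values = [values[j] for j in order]
    weights = [weights[j] for j in order]
    search(0, 0, capacity)
    return best[0]

print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # 220
```

The tree of take/skip decisions is explored depth first, but most of it is never visited: the fractional bound proves early that whole subtrees cannot beat the incumbent.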

## Heuristic Techniques

These are parts of an optimization algorithm that use the information gathered so far by the algorithm to decide which candidate solution should be tested next or how the next solution can be produced. Heuristics used in optimization are functions that help decide which of a set of possible solutions is to be examined next. These techniques work in nearly all cases but lack a rigorous mathematical proof of the optimality of their solutions, so they find only an approximate solution to the problem (which may or may not be close to the true optimum).

## Stochastic Search Techniques

These techniques make use of some type of randomness or random method in the search process. They base their analyses on probabilistic sampling of the range of possible solutions and maintain some record of good solutions found; this information is used to generate better solutions as the search progresses. A purely random search, by contrast, randomly generates solutions until some stopping criterion is met, and at the conclusion of the search the best solution found is presented to the user (Zydallis 2003). As this process is entirely random (knowledge of good solutions or promising areas of the search space is not recorded) and such techniques typically do not search the entire space of very large optimization problems, they are not guaranteed to find the optimal solution. These approaches typically generate good solutions for problems in which the search space is not totally chaotic.
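The purely random search described above can be sketched as follows; the objective function, bounds and evaluation budget are illustrative assumptions, not taken from the source:

```python
# Sketch of a purely random search: sample candidate solutions uniformly,
# remember the best one, and stop after a fixed budget of evaluations.
import random

def random_search(objective, bounds, budget=10_000, seed=1):
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(budget):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = objective(x)
        if fx < best_f:            # record the best solution found so far
            best_x, best_f = x, fx
    return best_x, best_f

# Minimise the sphere function on [-5, 5]^2; its optimum is 0 at the origin.
x, fval = random_search(lambda v: sum(t * t for t in v), [(-5, 5)] * 2)
print(fval)  # close to, but not guaranteed to reach, the optimum
```

Note that nothing learned during the run guides later samples, which is exactly why such a search cannot guarantee optimality.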

## Evolutionary Computations or Evolutionary Algorithms (EAs)

Evolutionary algorithms are a population-based class of algorithms with a direct link to biology and the concept of evolution as an optimizing process. EAs are stochastic algorithms designed to evolve a population over time (generations). Their use originated from attempts to find more efficient and effective methods for solving optimization problems, and they can be useful when applied to difficult optimization problems. Difficult optimization problems have one or more of the following features:

The search space is large and not known to be smooth, i.e. it may be multimodal

Time consuming to solve, while an acceptable solution is required in a reasonable time

The fitness functions exhibit noise

The problem is not well understood, so no better optimization method is known to work well on it

High dimensionality

Multiple local optima and a single or multiple global optima

EAs are not guaranteed to find the optimal solution unless given an infinite amount of time, but they offer the ability to find a good solution, and sometimes the optimal solution to difficult optimization problems in an acceptable amount of time.

## Multi Objective Optimization

Multi objective optimization problems involve the maximization or minimization of an objective or fitness function, where the fitness function allows one to compare the quality of a solution to other solutions. The fitness function has multiple, often conflicting, objectives. The overall goal is to find the global optimum over the entire search space, but there is usually no single solution that is optimum with respect to all objectives. Consequently there is a set of optimal solutions, known as Pareto optimal solutions, noninferior solutions, or effective solutions. Without additional information, all these solutions are equally satisfactory.

The principle of multi objective optimization was first developed for use in economics by the French-Italian economist Vilfredo Pareto in the late nineteenth century; his theories became collectively known as Pareto's optimality concept. There are two general approaches to multi objective optimization. One is to combine the individual objective functions into a single composite function, or to move all but one objective into the constraint set. The second is to determine an entire Pareto optimal solution set or a representative subset, the goal being to find as many of these solutions as possible. The Pareto based approach allows one to analyze the results of a simultaneous optimization of multiple objectives and determine the set of points that are optimal with respect to all of the other points generated.
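The first approach, combining objectives into a single composite function, can be illustrated with a hypothetical two-objective example (the objectives and weights below are invented for illustration):

```python
# Weighted-sum scalarization sketch: two conflicting one-variable
# objectives are combined into a single composite function.

def composite(x, w1=0.5, w2=0.5):
    f1 = x ** 2              # e.g. a cost term, minimized at x = 0
    f2 = (x - 2) ** 2        # e.g. deviation from a target, minimized at x = 2
    return w1 * f1 + w2 * f2

# With equal weights the composite optimum lies between the two
# individual optima (x = 0 and x = 2), here at x = 1.
best = min((i * 0.001 for i in range(-1000, 3001)), key=composite)
print(round(best, 3))  # 1.0
```

Varying the weights traces out different compromise solutions, which is why the Pareto based approach, which keeps the objectives separate, is often preferred.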

A feasible solution x1 is said to dominate another feasible solution x2 if x1 has a lower cost than x2 for at least one of the objective functions and is not worse with respect to the remaining objective functions. In other words, assuming all k objectives fi are to be minimized, x1 dominates x2 if

fi(x1) <= fi(x2) for all i = 1, ..., k, and (3.1)

fj(x1) < fj(x2) for at least one j in {1, ..., k} (3.2)

A solution is said to be Pareto optimal if it is not dominated by any other solution in the solution space. A Pareto optimal solution cannot be improved with respect to any objective without worsening at least one other objective. The set of all feasible non-dominated solutions is called a Pareto optimal set, and for a given Pareto optimal set, the corresponding objective function values in the objective space are called the Pareto front (Konak et al, 2006). The ultimate goal of a multi-objective optimization algorithm is to identify solutions in a Pareto optimal set.
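The dominance test and the Pareto optimal set described above translate directly into code; a minimal sketch, assuming minimization of every objective, with invented objective vectors:

```python
# Direct transcription of the dominance definition (minimization assumed):
# f1 dominates f2 if it is no worse in all objectives and strictly
# better in at least one.

def dominates(f1, f2):
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def pareto_front(points):
    # Keep the points not dominated by any other point.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(pts))  # [(1, 5), (2, 3), (4, 1)]
```

Here (3, 4) is dominated by (2, 3) and (5, 5) by (1, 5), so neither belongs to the front; the three remaining points are mutually non-dominated.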

Various optimization techniques are used by researchers from diverse fields to solve a wide spectrum of optimization problems, and there is a growing trend of using hybrid optimization techniques. The present research utilizes both a traditional technique (mixed integer linear programming) and a contemporary one (a multi objective Pareto based genetic algorithm) to search for optimum production schedules for cement quarry operations.

## Genetic Algorithms (GAs)

Genetic algorithms are a subclass of evolutionary algorithms. The beginnings of genetic algorithms are credited to John Holland, who developed the basic ideas in the late 1960s and early 1970s. Genetic algorithms evolved as a new approach to problem solving and eventually became widely recognized and popular. Today, many applications in engineering, science, economics, function optimization, medicine, robotics and scheduling can be tackled with genetic algorithms, and various forms, from simple and steady state GAs to complex multi objective GAs, have been developed. GAs and their many versions have been popular in academia and industry mainly because of their intuitiveness, ease of implementation, and ability to effectively solve the highly nonlinear, mixed integer optimization problems that are typical of complex engineering systems.

Genetic algorithms are stochastic search algorithms based on the mechanics of natural selection and natural genetics, subject to survival of the fittest. The algorithm iteratively transforms a set (population) of mathematical objects (string structures) into a new population of offspring. In GA terminology, a solution vector x ∈ X is called an individual or a chromosome. Chromosomes are made up of discrete units called genes, and each gene controls one or more features of the chromosome. Normally a chromosome corresponds to a unique solution x in the solution space, which requires a mapping mechanism between the solution space and the chromosomes. This mapping is called an encoding, and GAs work on an encoding of the problem, not on the problem itself (Konak, Coit et al. 2006).

GAs operate on a collection of chromosomes, called a population. The population is normally initialized randomly. As the search evolves, the population includes fitter and fitter solutions, and eventually it converges, i.e. it becomes dominated by a single solution. Two operators, crossover and mutation, are used for reproduction, i.e. to generate new solutions from existing ones. As sketched in Figure 3.1, the genotypes are used in the reproduction operations.

Figure 3.1 Working of GAs (modified from Chong and Zak 2001). The figure shows the GA cycle: an initial population of random individuals is created; the objective values of the candidate solutions are computed (evaluation); the objective values are used to determine fitness values (fitness assignment); the fittest individuals are selected for reproduction (selection); and new individuals are created from the mating pool by crossover and mutation (reproduction). A genotype-phenotype mapping (GPM) links each genotype g to its phenotype x.

The values of the objective functions fi are computed on the basis of phenotypes in the problem space X, which are obtained via the genotype-phenotype mapping. The working of a simple genetic algorithm can be summarized as follows (Chong and Zak 2001):

1. Set k := 0; form initial population P(0);

2. Evaluate P(k);

3. If the stopping criterion is satisfied, then stop;

4. Select M(k) from P(k);

5. Evolve M(k) to form P(k + 1);

6. Set k := k + 1; go to step 2.
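The steps above can be sketched as a short program; the binary chromosome encoding, the "OneMax" fitness function, and all parameter values below are illustrative choices, not taken from the source:

```python
# Minimal GA sketch: binary chromosomes, tournament selection, one-point
# crossover and bit-flip mutation, maximizing the number of 1-bits.
import random

rng = random.Random(0)
N, L, PC, PM = 30, 20, 0.9, 0.02   # population size, chromosome length, rates

def fitness(ch):                   # "OneMax": count the 1-bits
    return sum(ch)

def select(pop):                   # binary tournament selection
    a, b = rng.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

pop = [[rng.randint(0, 1) for _ in range(L)] for _ in range(N)]   # P(0)
for k in range(100):                                              # generations
    if max(map(fitness, pop)) == L:                               # stop test
        break
    nxt = []
    while len(nxt) < N:
        p1, p2 = select(pop), select(pop)
        if rng.random() < PC:                                     # crossover
            cut = rng.randrange(1, L)
            p1 = p1[:cut] + p2[cut:]
        child = [1 - g if rng.random() < PM else g for g in p1]   # mutation
        nxt.append(child)
    pop = nxt                                                     # P(k + 1)

print(max(map(fitness, pop)))  # typically reaches the optimum, L = 20
```

The loop mirrors the six steps: initialization, evaluation, stopping test, selection of a mating pool, evolution by crossover and mutation, and the generation counter update.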

A flow chart illustrating the above algorithm is shown in Figure 3.2

Figure 3.2 Flowchart for the genetic algorithm (Chong and Zak 2001). The flowchart traces the loop above: after initialization (k := 0, form P(0)), P(k) is evaluated; if the stopping criterion is satisfied the algorithm stops, otherwise fitness-based selection forms M(k), evolution via crossover (probability Pc) and mutation (probability Pm) produces P(k+1), and k is incremented.

## Multi Objective Genetic Algorithms

Genetic algorithms are well suited to solve multi-objective optimization problems (MOOPs) and are the most popular heuristic approach for MOOPs. Various versions of multi-objective GAs are in use and are listed in Table 3.2. Generally, multi-objective GAs differ in their fitness assignment procedure, elitism, or diversification approaches. Table 3.3 highlights the well-known multi-objective GAs with their advantages and disadvantages.

Table 3.2 Various Multi-objective Genetic Algorithms

Vector Evaluated GA (VEGA)

Multi-Objective Genetic Algorithm (MOGA)

Niched Pareto Genetic Algorithm (NPGA)

Weight-Based Genetic Algorithm (WBGA)

Random Weighted Genetic Algorithm (RWGA)

Non-dominated Sorting Genetic Algorithm (NSGA)

Fast Non-dominated Sorting Genetic Algorithm (NSGA-II)

Pareto-Archived Evolution Strategy (PAES)

Pareto Envelope-based Selection Algorithm (PESA)

Pareto Envelope Region-based Selection Algorithm (PESA-II)

Strength Pareto Evolutionary Algorithm (SPEA)

Improved SPEA (SPEA2)

Multi-Objective Genetic Local Search (MOGLS)

Multi-Objective Evolutionary Algorithm (MEA)

Micro-GA

Rank-Density Based Genetic Algorithm (RDGA)

Dynamic Multi-objective Evolutionary Algorithm (DMOEA)

Mendelian Multi-objective Simple Genetic Algorithm (MOSGA)

Multi-objective Messy Genetic Algorithm (MOMGA)

Modified Multi-objective Messy Genetic Algorithm (MOMGA-II)

The search for optimum long term production schedules presented in this research uses the Pareto based modified non-dominated sorting genetic algorithm (NSGA-II). Reasons for choosing this method are:

It does not require determining an optimal weight for the objective functions.

It is easy to implement and is computationally efficient.

It allows for a simultaneous optimization of multiple objective functions.

A general description of the working of the modified NSGA-II is as follows:

Table 3.3 List of well-known Multi-Objective GA (Konak, Coit et al. 2006)

| Algorithm | Fitness assignment | Diversity mechanism | Elitism | External population | Advantages | Disadvantages |
|---|---|---|---|---|---|---|
| VEGA | Each subpopulation is evaluated with respect to a different objective | No | No | No | First MOGA; straightforward implementation | Tends to converge to the extreme of each objective |
| MOGA | Pareto ranking | Fitness sharing by niching | No | No | Simple extension of single objective GA | Usually slow convergence; problems related to niche size parameter |
| WBGA | Weighted average of normalized objectives | Niching; predefined weights | No | No | Simple extension of single objective GA | Difficulties in non-convex objective function space |
| NPGA | No fitness assignment; tournament selection | Niche count as tie-breaker in tournament selection | No | No | Very simple selection process with tournament selection | Problems related to niche size parameter; extra parameter for tournament selection |
| RWGA | Weighted average of normalized objectives | Randomly assigned weights | Yes | Yes | Efficient and easy to implement | Difficulties in non-convex objective function space |
| PESA | No fitness assignment | Cell-based density | Pure elitist | Yes | Easy to implement; computationally efficient | Performance depends on cell sizes; prior information needed about objective space |
| PAES | Pareto dominance is used to replace a parent if offspring dominates | Cell-based density as tie-breaker between offspring and parent | Yes | Yes | Random mutation hill-climbing strategy; easy to implement; computationally efficient | Not a population based approach; performance depends on cell sizes |
| NSGA | Ranking based on non-domination sorting | Fitness sharing by niching | No | No | Fast convergence | Problems related to niche size parameter |
| NSGA-II | Ranking based on non-domination sorting | Crowding distance | Yes | No | Single parameter (N); well tested; efficient | Crowding distance works in objective space only |
| SPEA | Ranking based on the external archive of non-dominated solutions | Clustering to truncate external population | Yes | Yes | Well tested; no parameter for clustering | Complex clustering algorithm |
| SPEA2 | Strength of dominators | Density based on the k-th nearest neighbor | Yes | Yes | Improved SPEA; makes sure extreme points are preserved | Computationally expensive fitness and density calculation |
| RDGA | The problem is reduced to a bi-objective problem with solution rank and density as objectives | Forbidden region cell-based density | Yes | Yes | Dynamic cell update; robust with respect to the number of objectives | More difficult to implement than others |
| DMOEA | Cell-based ranking | Cell-based density | Yes (implicitly) | No | Includes efficient techniques to update cell densities; adaptive approaches to set GA parameters | More difficult to implement than others |

A population of initial feasible annual production schedules is generated. The population is sorted by non-domination into fronts: the first front is the completely non-dominated set in the current population, the second front is dominated only by individuals in the first front, and so on. Each individual is assigned a rank (fitness value) based on the front to which it belongs: individuals in the first front are given a fitness value of 1, individuals in the second a fitness value of 2, and so on. In addition to the fitness value, a parameter called the crowding distance is calculated for each individual. The crowding distance is a measure of how close an individual is to its neighbors; a large average crowding distance results in better diversity in the population.

Parents are selected from the population by binary tournament selection based on rank and crowding distance. An individual is selected if its rank is lower than the other's; crowding distance is compared only if the ranks of both individuals are the same, in which case the individual with the greater crowding distance is selected. The selected population generates offspring through the crossover and mutation operators. The combined population of current individuals and offspring is then sorted again by non-domination, and only the best N individuals are retained, where N is the population size, the selection again being based on rank and on crowding distance in the last front. The search continues until the population converges to an optimal or near optimal solution.
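The comparison used in the binary tournament just described can be sketched as follows; the rank and crowding-distance values are illustrative, not taken from the source:

```python
# Sketch of the NSGA-II crowded comparison: lower rank wins, and a tie
# in rank is broken by the larger crowding distance.

def crowded_compare(a, b):
    """a and b are (rank, crowding_distance) pairs; return the winner."""
    if a[0] != b[0]:
        return a if a[0] < b[0] else b      # lower (better) rank wins
    return a if a[1] >= b[1] else b         # tie: prefer the less crowded

print(crowded_compare((1, 0.4), (2, 5.0)))  # (1, 0.4): rank beats distance
print(crowded_compare((1, 0.4), (1, 2.0)))  # (1, 2.0): tie broken by distance
```

The same comparison drives survivor selection on the last front that fits into the new population of size N.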

## Mathematical Programming

Mathematical programming has long been recognized as a vital modeling approach for solving optimization problems. It can be defined as a mathematical representation aimed at planning the best possible allocation of scarce resources: the optimum allocation of limited resources among competing activities, under a set of constraints imposed by the nature of the problem being studied. These constraints could reflect financial, technological, marketing, organizational, or many other considerations. A mathematical program is an optimization problem of the form:

Maximize or minimize f(x): x in X,

subject to g(x) <= 0, h(x) = 0,

where X is a subset of R^n and is in the domain of the functions f, g and h, which map into real spaces. The relations g(x) <= 0 and h(x) = 0 are constraints, and f is the objective function. There are, however, forms that deviate from this paradigm, including fuzzy mathematical programs, goal programs, multiple objective programs, randomized programs and stochastic programs; it is typically a modeling issue to find an equivalent standard form for these types (Holder, 2008). A point x is feasible if it is in X and satisfies the constraints; a point x* is optimal if it is feasible and the value of the objective function at x* is at least as good as that of every other feasible solution.

There are several branches of mathematical programming. Each branch consists of the theory and application of a collection of optimization techniques suited to a specific class of optimization problems. The differences among the branches are closely linked to the structure of the optimization problem and to the mathematical nature of the objective and constraint functions (Antoniou and Lu 2007). A mathematical program is often extended to indicate a family of mathematical programs over a parameter space. One taxonomy of mathematical programming, by its defining ingredients, is given in Table 3.4.

Table 3.4 Classification of Mathematical Programming

## Types of Mathematical Programming

Abstract, biconvex, bilinear, composite concave, continuous, convex, discrete, disjunctive, dynamic, factorable, fractional, geometric, infinite, integer, lattice, linear, mixed-integer, multilevel, nonlinear, pseudo-Boolean, reverse convex, semi-definite, semi-infinite, separable

These mathematical programming types are a natural way to formulate and solve engineering problems, and many techniques have been developed that broaden the applications of these mathematical programming approaches.

## Mixed Integer Linear Programming (MILP)

A linear programming (LP) model is a mathematical representation using linear functions exclusively; an LP problem can have many variables and equations. In 1947, George B. Dantzig developed the Simplex method for solving the general linear programming problem. The extraordinary computational efficiency and robustness of the Simplex method, together with the availability of high-speed computers, have made linear programming a powerful and widely used optimization method: problems with hundreds of thousands to millions of equations and variables can be solved in a reasonable time.

Mixed integer linear programming problems are those in which some of the variables are integer valued and some are real valued, i.e. can take fractional values. A mixed integer linear program is given by vectors c and b, a matrix A, and the number p of integer variables; the goal is to find a vector x solving the following optimization problem:

max (or min) c'x

subject to: Ax <= b, with x1, ..., xp integer

Mixed integer linear programs can be used to formulate many discrete optimization problems. They are heavily used in practice for solving problems in transportation and manufacturing, production scheduling, vehicle routing, production planning, etc. General algorithms for solving integer linear programs are the cutting-plane method, branch and bound, branch and cut, and branch and price; these methods extend readily to the mixed-integer case. Branch and bound is the most common approach for solving an MILP problem. MILP solutions generally take longer than comparable linear programming solutions. A mixed integer linear programming method is used in this dissertation for generating optimum long term production schedules for cement quarry operations.
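As an illustration of the formulation, the toy problem below (the numbers are hypothetical) mixes one integer and one continuous variable; it is solved here by brute force over the integer variable rather than by branch and bound, since for each fixed integer x the remainder is a one-variable linear program:

```python
# Toy MILP sketch (illustrative data, not a real solver):
#   max 3x + 2y   s.t.  x + y <= 4,  x + 3y <= 6,  x integer >= 0,  y >= 0.

def solve_toy_milp():
    best = (float("-inf"), None, None)
    for x in range(0, 5):                        # x + y <= 4 bounds integer x
        y_max = min(4.0 - x, (6.0 - x) / 3.0)    # tightest bound on continuous y
        if y_max < 0:
            continue                             # fixed x is infeasible
        y = y_max            # the y-coefficient (2) is positive: push to bound
        obj = 3 * x + 2 * y
        if obj > best[0]:
            best = (obj, x, y)
    return best              # (objective, x, y)

print(solve_toy_milp())  # (12.0, 4, 0.0)
```

A real MILP solver replaces this exhaustive loop over integer values with branch and bound, but the structure of the problem, linear objective and constraints with an integrality restriction on some variables, is the same.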

Antoniou, A. and W.-S. Lu (2007). Practical Optimization - Algorithms and Engineering Applications, Springer.

Celeste, A. B., K. Suzuki, et al. (2004). "MATLAB Implementation of a Genetic Algorithm for Linearly Constrained Optimization Problems." Annual Journal of Engineering III.

Chong, E. K. P. and S. H. Zak (2001). An Introduction to Optimization, John Wiley & Sons, Inc.

Haupt, R. L. and S. E. Haupt (2004). Practical Genetic Algorithms, John Wiley & Sons, Inc., Publication.

Konak, A., D. W. Coit, et al. (2006). "Multi-objective optimization using genetic algorithms: A tutorial." Reliability Engineering & System Safety: 16.

Zydallis, J. B. (2003). Explicit building-block multiobjective genetic algorithms: theory, analysis and development School of Engineering and Management, Air Force Institute of Technology. Ohio, Air University. PhD: 391.

Holder, A., editor (2006-08). Mathematical Programming Glossary. INFORMS Computing Society, http://glossary.computing.society.informs.org/. Originally authored by Harvey J. Greenberg, 1999-2006.