Engine Optimisation Design Computer Science Essay


This project involves the use of optimisation methods to design a car engine. You can use Species Conserving Genetic Algorithms (SCGAs) to explore all possible optimal solutions. On completing the project, you will have gained a basic knowledge of optimisation theory and algorithms and of engineering optimisation design. The prerequisite knowledge for this project is basic optimisation theory and algorithms. (Available to 1-2 individual students.)

This project involves the optimisation of a fuel-injection gasoline engine. An optimisation study of camshaft control strategies at part throttle for a port fuel-injection gasoline engine equipped with variable camshaft timing has already been performed. We are provided with engine data at 9 different speed-load combinations in the form of an Excel sheet. The engine dataset includes Inlet Valve Opening (IVO), Exhaust Valve Closing (EVC), RPM (N), induced air charge per cylinder (MAF), air-fuel ratio (AFR), engine-out emissions (NOx), standard deviation of net mean effective pressure (SDNMEP), and torque. (Singh et al. 2010)

The objective is to explore all potential optimal solutions using the engine data we are provided with.

Optimisation theory

Optimisation is involved in every aspect of our lives. In everyday life, the decisions people make are based on what they believe will maximise or minimise their set of objectives: finding the lowest price for a particular item, taking the shortest or fastest route to minimise travel time, or searching for the hotel that best satisfies their requirements within a cost constraint. Most of these decisions are made not by consulting any methodical mathematical theory but on the basis of years of knowledge of the system. However, it becomes difficult to make optimal decisions from previous knowledge alone when more and more simultaneous decisions must be made in a much more constrained environment; designing an automobile, for example, involves decisions at every step of the process, taking into account durability, cost, safety, reliability and other factors. In such situations optimisation theory offers a better basis for decision making, as long as the system and the decisions can be represented mathematically (Diwekar 2008).

Introduction

A simple definition of optimisation is: solving a problem by minimising or maximising a function through the methodical selection of values of integer or real variables from an allowed set.

Optimisation is the act of obtaining the best result under given circumstances. In any process involving the design, manufacture and maintenance of engineering systems, many technological and managerial decisions have to be taken at several stages. The ultimate goal of all such decisions is either to maximise the desired benefit or to minimise the effort required. In any practical situation, the effort required and the benefit desired can be expressed as functions of certain decision variables.

No single method is capable of solving all optimisation problems. A number of optimisation techniques and methods have therefore been developed for solving different types of optimisation problems. (Rao 2009)

Table 1.1 below lists the classification of various mathematical programming techniques and other well-defined areas of operations research.

Table 1.1 Methods of operations research (Rao 2009)

Chronological development

Bernoulli, Euler, Lagrange and Weierstrass established the theory of the calculus of variations, which deals with the minimisation of functionals. Lagrange invented the method of optimisation for constrained problems involving the addition of unknown multipliers (Rao 2009); using this method, he formulated a way to solve optimisation problems involving equality constraints in 1760. A set of functions known as Hamiltonians was developed by W.R. Hamilton in 1834; it was used in the statement of the principle of optimality that unified optics and mechanics. The principle of optimality in relation to the equilibrium of thermodynamic systems was further elaborated by J.W. Gibbs in 1875. (Foulds 1981)

World War II made a big contribution to the advancement of numerical optimisation. In the early 1950s, Dantzig developed the simplex method for linear programming, while Kuhn and Tucker laid the foundations of nonlinear programming. (Diwekar 2008)

In the 1960s, Zener, Duffin and Peterson developed the method of geometric programming. (Rao 2009)

Charnes and Cooper proposed a method for solving multiobjective optimisation problems called Goal Programming in 1961. (Rao 2009)

Johann Bernoulli is considered the father of the calculus of variations. In June 1696 he challenged the mathematicians of the world to solve his brachistochrone problem; Galileo had studied the problem earlier without finding the correct answer, and Bernoulli himself, along with several other famous mathematicians including Leibniz, produced a solution. (Diwekar 2008)

Statement of an optimisation problem

An optimisation problem consists of at least one objective function and, usually, a set of constraints. Mathematically, a simple optimisation problem can be written as:

Minimise f(x)

Subject to: g_i(x) ≤ 0, i = 1, ..., m and h_j(x) = 0, j = 1, ..., p

where x is the vector of design variables.
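As a concrete toy illustration of this general form, the following sketch minimises an invented quadratic objective f(x) = (x - 3)^2 subject to a single inequality constraint g(x) = x - 2 ≤ 0, using a brute-force grid search (chosen for transparency, not efficiency; the functions and bounds are illustrative assumptions, not from the project data):

```python
def f(x):
    # invented objective: minimised (unconstrained) at x = 3
    return (x - 3.0) ** 2

def g(x):
    # inequality constraint: feasible when g(x) <= 0, i.e. x <= 2
    return x - 2.0

def brute_force_minimise(lo=-10.0, hi=10.0, steps=100000):
    # evaluate f on a uniform grid, keeping the best feasible point
    best_x, best_f = None, float("inf")
    for i in range(steps + 1):
        x = lo + (hi - lo) * i / steps
        if g(x) <= 0 and f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

x_star, f_star = brute_force_minimise()
```

Note that the constraint is active at the optimum: the unconstrained minimiser x = 3 is infeasible, so the solution sits on the boundary x = 2 with f(x) = 1.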

Modern Optimisation Methods

Modern methods of optimisation have emerged as powerful techniques for solving complex engineering optimisation problems. Such methods are also called non-traditional optimisation methods. They include:

Simulated annealing

Fuzzy optimisation

Particle swarm optimisation

Neural network-based optimisation

Genetic algorithms

Ant colony optimisation

Kirkpatrick, Gelatt and Vecchi developed the method of simulated annealing, based on the mechanics of the annealing of molten metals.

Genetic algorithms, initially proposed by John Holland in 1975, are founded on the mechanics of natural selection and natural genetics.

Ant colony optimisation is based on the cooperative behaviour of ant colonies, which are able to locate the shortest route from a food source to their nest. Marco Dorigo developed this method in 1992.

The fuzzy optimisation approach for both single- and multiobjective optimisation was presented by Rao in 1986; it was developed to deal with optimisation problems involving imprecise objective functions, design data and constraints.

Particle swarm algorithms imitate the behaviour of colonies of insects; the method was originally developed by Kennedy and Eberhart in 1995.

The neural-network method was first used by Hopfield and Tank in 1985. It is founded on the enormous computational capacity of the nervous system, which, through parallel processing, can solve perception problems in the presence of a substantial quantity of sensory data.

2.4 Applications of Optimisation

Optimisation is widely used in solving engineering problems and can be applied to virtually all kinds of engineering problems. A few applications from various engineering disciplines are listed below, indicating the significance of the subject:

Design of jet and aerospace structures for minimum weight.

Design of civil engineering construction for minimum cost.

Engine optimisation for maximum performance and minimum fuel consumption or emissions.

Design of water resources systems for maximum benefits.

Optimum design of mechanical parts and machine tools.

Optimum design of electrical networks.

Planning of maintenance and replacement of equipment to reduce operation costs.

Design of turbines, pumps and heat-transfer equipment for maximum efficiency.

Planning the best strategy to obtain maximum profit in the presence of a competitor.

Types of Optimisation problems

According to Variables

There are four main types of optimisation problems according to variables:

Optimisation problems (Minimisation)

It is mathematically represented as:

Minimise f(x) subject to g(x) ≤ 0

where f(x) is the objective function to be minimised, and the set of inequalities and equalities g(x) ≤ 0 are the constraints.

Optimisation problems (Maximisation)

Mathematical representation is:

Maximise f(x) subject to g(x) ≤ 0

Here the objective function is subject to maximisation.

Minimax optimisation problems

A continuous minimax optimisation problem can be written as:

Minimise over x the maximum over y of f(x, y), i.e. min_x max_y f(x, y), subject to the constraints.

Here the solution is a saddle point.

Mixed Optimisation Problems

Mathematically represented as:

Minimise f1(x), ..., fk(x) and maximise fk+1(x), ..., fm(x), subject to g(x) ≤ 0

There are a number of objective functions, each of which is subject to either maximisation or minimisation.

According to function

Linear Programming

It is an optimisation method in which the objective function and constraints are given as linear functions of the decision variables. Constraints can be either equalities or inequalities.

The general mathematical representation of a linear programming problem can be written as:

Maximise (or minimise) z = c1x1 + c2x2 + ... + cnxn

Subject to: ai1x1 + ai2x2 + ... + ainxn (≤, = or ≥) bi, for i = 1, ..., m, with x1, x2, ..., xn ≥ 0

The characteristics of a linear programming problem, as stated in standard form, are:

Each constraint may have only one sign (≥, ≤ or =), which can vary from one constraint to another in the same model.

All the decision variables are non-negative.

The objective and all constraints are linear functions.
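To make this concrete, here is a small invented LP (maximise 3x + 2y subject to x + y ≤ 4, x ≤ 2 and non-negativity), solved by enumerating the vertices of the feasible region. This toy approach works for two variables because an LP optimum lies at a vertex; practical solvers use the simplex or interior-point methods instead.

```python
from itertools import combinations

# Toy LP: maximise 3x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0.
# Constraints are stored as (a, b, c), meaning a*x + b*y <= c.
constraints = [(1, 1, 4), (1, 0, 2), (-1, 0, 0), (0, -1, 0)]

def objective(x, y):
    return 3 * x + 2 * y

def intersect(c1, c2):
    # intersection of the two boundary lines a*x + b*y = c (None if parallel)
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(x, y):
    return all(a * x + b * y <= c + 1e-9 for a, b, c in constraints)

vertices = []
for c1, c2 in combinations(constraints, 2):
    p = intersect(c1, c2)
    if p is not None and feasible(*p):
        vertices.append(p)

best_vertex = max(vertices, key=lambda p: objective(*p))
```

For this problem the feasible vertices are (0, 0), (2, 0), (0, 4) and (2, 2), and the maximum objective value 10 is attained at (2, 2).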

Nonlinear Programming

Nonlinear programming can be further divided into two subcategories according to the type of objective function.

Quadratic model

In this type the objective function is quadratic.

Complex models

In complex models the objective function is a more general nonlinear function of the decision variables.

2.6 Types of Optimisation Algorithms


Genetic Algorithms

Introduction

Genetic algorithms are one of the most appealing developments in the field of heuristic methods for optimisation. Genetic algorithms are stochastic in structure and are based on neo-Darwinism, i.e. the principle of natural selection, which is one reason they have become so prominent among the wider public, overshadowing the popularity of other optimisation techniques. Substantial funding has been given to projects based on genetic algorithms by the European Commission. The concept of genetic algorithms is very simple. The operation of a genetic algorithm starts with a population of strings, also called chromosomes, which are typically selected at random. These chromosomes are then evaluated and managed in such a way that the strings which provide a better solution to the given problem are given a higher probability of reproducing than those with poorer solutions. This evolution normally occurs in generations. The fitness of the chromosomes is evaluated in each generation and, based on this fitness, a number of candidate solutions are selected, recombined and randomly mutated to form new chromosomes, which are then used in the next iteration of the algorithm. This process continues until a suitable solution is found. The figure below illustrates the basic cycle of a genetic algorithm.

Fig. Basic cycle of a genetic algorithm (Weise 2009)
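The cycle in the figure can be sketched in a few lines of code. The sketch below applies it to the standard "OneMax" toy problem (maximise the number of 1-bits in a binary chromosome); the population size, rates and the use of tournament selection are illustrative choices, not values from the text.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible
CHROM_LEN, POP_SIZE, GENERATIONS = 20, 30, 60
CROSSOVER_RATE, MUTATION_RATE = 0.9, 1.0 / CHROM_LEN

def fitness(chrom):
    # OneMax: count the 1-bits
    return sum(chrom)

def select(pop):
    # binary tournament selection (used here for simplicity)
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # one-point crossover, producing a single child
    if random.random() < CROSSOVER_RATE:
        point = random.randint(1, CHROM_LEN - 1)
        return p1[:point] + p2[point:]
    return p1[:]

def mutate(chrom):
    # independent bit-flip mutation
    return [1 - g if random.random() < MUTATION_RATE else g for g in chrom]

# initial population: random binary strings
population = [[random.randint(0, 1) for _ in range(CHROM_LEN)]
              for _ in range(POP_SIZE)]

# evaluate, select, recombine and mutate, generation after generation
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
```

After a few dozen generations the best chromosome in the population should be at or near the all-ones optimum.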

Brief History

The original principle of natural selection came from Darwin's theory of 'survival of the fittest'. The Mendelian laws of genetics were at first considered unrelated to, and in opposition to, Darwin's theory, but the work of Chetverikov in the 1920s proved that Darwin's natural selection and Mendel's genetics are not in conflict, and their combination gave birth to modern evolutionary theory. (Michalewicz 1996)

John Holland invented genetic algorithms in 1975. His book 'Adaptation in Natural and Artificial Systems' provided the basis for research in this field, which has evolved and is now much broader than the original idea of genetic algorithms. It gave birth to the concepts of evolutionary programming (EP), classifier systems (CS), artificial life (AL), evolution strategies (ES) and genetic programming (GP). These related research fields are now clustered under the banner of evolutionary algorithms (EAs).

Apart from Holland's influential work in the field, several other scientists suggested similar ideas. Michael Vose's "The Simple Genetic Algorithm" is recognised as the main theoretical work on genetic algorithms. Two German scientists, Ingo Rechenberg and Hans-Paul Schwefel, came up with the idea of "evolution strategies" in the 1960s, while Lawrence Fogel in the USA put his idea of evolutionary programming into operation around the same time. All these proposals were based on the common concepts of mutation and selection. Later, Bremermann and Fraser used the concept of recombination, which was adopted by John Holland. However, evolutionary computing did not become popular at first, because these techniques were computationally expensive given the limited power of the computers of the day. David Fogel (son of Lawrence Fogel) tracked down and collected the work of these early pioneers in his book Evolutionary Computation: the Fossil Record. (Reeves & Rowe 2002)

Numerous other scientists developed algorithms based on these concepts in the 1950s and 1960s, including Bledsoe (1960), Bremermann (1961), and Reed, Toombs and Baricelli (1966). (Mitchell 1998)

The fascinating work of these early explorers provided the building blocks for modern genetic algorithms.

Elements of a Genetic Algorithm

A basic description of GA operation is illustrated in the figure below. It is apparent from the figure that genetic algorithms are considerably more complex than other search methods, with a number of interrelated elements or operators. These operators are listed and briefly explained below.

Fig. A genetic algorithm template (Reeves & Rowe 2002)

Initial population

The initial population, as the name suggests, is the set of chromosomes chosen for further processing. The questions which arise here are, first, the size of the population and, second, the method by which the chromosomes are chosen. Ideally the population size is a balance between efficiency and effectiveness. Several theoretical approaches have been used, but the bottom line is that the population size is kept neither too small nor too large: a very small population will not provide enough opportunity to explore the search space effectively, while a very large one damages efficiency and may not provide a solution in a sensible amount of computation. The initial population is normally chosen randomly; however, more sophisticated arithmetic methods can be used, as random selection does not always cover the search space evenly.
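A minimal random initialisation for binary chromosomes, as described above, might look like this (the sizes are illustrative):

```python
import random

def initial_population(pop_size, chrom_len, rng=random):
    # each chromosome is a random binary string of length chrom_len
    return [[rng.randint(0, 1) for _ in range(chrom_len)]
            for _ in range(pop_size)]
```

More even coverage of the search space can be obtained with stratified sampling schemes, at the cost of extra complexity.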

Termination

Genetic algorithms are stochastic search methods and, unlike simpler methods, could in theory run forever, so a termination criterion needs to be established. Commonly, the number of fitness evaluations is limited, or a preset criterion stops the process if the diversity of the population falls below a threshold.

Selection

The selection operator chooses chromosomes from a population and passes them on to the next generation. The fundamental idea behind selection is the fitness of the chromosome. A scheme commonly used to implement the selection operator is roulette-wheel selection (RWS), a distribution method in which the likelihood of a string being selected is directly proportional to its fitness. (Mitchell 1998)
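A sketch of roulette-wheel selection, assuming non-negative fitness values:

```python
import random

def roulette_wheel_select(population, fitnesses, rng=random):
    # probability of picking individual i is fitnesses[i] / sum(fitnesses)
    total = sum(fitnesses)
    pick = rng.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]  # guard against floating-point round-off
```

Selecting repeatedly from fitnesses [1.0, 3.0] returns the second individual roughly three times as often as the first, as the proportionality implies.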

Crossover

Once the parents are selected, the crossover operator recombines them and passes the result on to the next generation. The crossover points are chosen randomly to minimise distributional or positional bias. The individuals created by crossover can have better characteristics than their parents when the alleles of one parent with better qualities replace those of the other. This process mimics genetic recombination between single-chromosome organisms.
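As an illustration, classic one-point crossover on list-encoded chromosomes can be written as:

```python
import random

def one_point_crossover(parent1, parent2, rng=random):
    # choose a cut point strictly inside the string, then swap tails
    point = rng.randint(1, len(parent1) - 1)
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2
```

Whatever cut point is drawn, each gene position in the two children carries exactly the genes the parents held at that position.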

Mutation

The mutation process is even simpler than crossover. It is sometimes used in conjunction with crossover, or one of the two is used according to the needs of the algorithm. The mutation process randomly chooses an individual (chromosome) and changes one of its allele values. The purpose of the mutation operator is to maintain a sound level of population diversity and to explore the search space properly. It is not ideal to rely on mutation too heavily, because it slows down the convergence of the genetic algorithm and can prove computationally expensive when the population size is very large.
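A bit-flip mutation sketch for binary chromosomes; the default rate of 1/length is a common rule of thumb, not a requirement:

```python
import random

def bit_flip_mutate(chromosome, rate=None, rng=random):
    # each gene flips independently with probability `rate`
    if rate is None:
        rate = 1.0 / len(chromosome)  # common rule-of-thumb default
    return [1 - gene if rng.random() < rate else gene for gene in chromosome]
```

With rate 0 the chromosome is returned unchanged; with rate 1 every bit is flipped.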

New population

When selection, crossover and mutation are applied in a loop to a population of chromosomes, a new population is generated whose characteristics can be quite different from those of the parent population. This can prevent individuals of the parent population with good characteristics from taking part in further reproduction. For this reason a population-overlap strategy is often used, which ensures the survival of the best individuals by replacing only a fraction of the population at each generation. (Reeves & Rowe 2002)

Advantages of genetic algorithms

Criticism on genetic algorithms

The use of genetic algorithms instead of other optimisation algorithms has been criticised several times. Some of the criticisms are listed below.

Genetic algorithms use repeated fitness-function evaluations to solve complex and multi-objective problems, which can often be very expensive and time consuming.

Genetic algorithms cannot effectively solve decision problems, i.e. problems with only one fitness criterion that is either right or wrong. Such problems can often be solved just as quickly by random search methods.

The use of genetic algorithms is not always the best choice. For certain optimisation problems, other optimisation algorithms may produce better results.

In the case of dynamic data sets, genetic algorithm operation can be difficult because genomes tend to converge towards solutions which might not be suitable for later data. Employing evolutionary programming in such cases can be more effective on dynamic problems.

(From Wikipedia)

Non-dominated Sorting Genetic Algorithms (NSGAs)

Multi-objective evolutionary algorithms (MOEAs) have been in use for the past few decades. An early example is the vector evaluated genetic algorithm (VEGA) proposed by Schaffer in 1984. In the classical approach, a multi-objective problem is solved by averaging the objectives with a weight vector to scalarise the vector of objectives into a single objective (Deb et al. 2000); this is called the scalarisation process. VEGA struggled to solve some multi-objective problems because of its bias towards some Pareto-optimal solutions, but it encouraged further research in this field.

Goldberg suggested the use of non-dominated sorting in genetic algorithms to find multiple Pareto-optimal solutions concurrently. Srinivas and Deb (1994) proposed the NSGA based on Goldberg's idea.

In the NSGA the crossover and mutation operators work in the same way as in a normal genetic algorithm; the only difference is the way the selection operator works. Before selection is performed, all the non-dominated individuals are ranked and given a dummy fitness value, on the basis that they represent the first non-dominated front in the population. To give them equal reproductive potential, the same fitness value is assigned to all of these individuals. Sharing is then applied among these individuals to retain diversity in the population: a degraded fitness value is obtained for each individual, which the selection operator uses to achieve sharing. This produces multiple optimal points in the population. The rest of the population is then processed to identify the second set of non-dominated individuals; the same procedure is applied to them, with a dummy fitness value slightly smaller than the smallest degraded dummy value of the first non-dominated front. The process continues until the whole population has been classified into a number of fronts.
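The ranking step described above can be sketched as follows, assuming all objectives are to be minimised. This is a simple version that re-scans the remaining population for each front, not the fast non-dominated sort used in NSGA-II:

```python
def dominates(a, b):
    # a dominates b: no worse in every objective, strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objectives):
    # peel off successive non-dominated fronts (lists of indices)
    remaining = set(range(len(objectives)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objectives[j], objectives[i])
                            for j in remaining if j != i)]
        fronts.append(sorted(front))
        remaining -= set(front)
    return fronts
```

For example, with the objective vectors (1, 5), (2, 2), (5, 1), (3, 3) and (4, 4), the first three solutions are mutually non-dominated and form front 1, while (3, 3) and (4, 4) fall into fronts 2 and 3 respectively.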

To find the Pareto-optimal regions, this population is then reproduced: the first non-dominated front, having the highest dummy fitness value, is replicated more than the remaining fronts. In this way the population converges towards the non-dominated regions, which are distributed over the region by sharing. The figure below illustrates this process in the form of a flow chart.

Figure. Flow chart of NSGA (Srinivas & Deb 1994)

Criticism on NSGAs

NSGAs have been criticised mostly for the following reasons (Deb et al. 2002):

NSGAs are very complex and prove computationally very expensive when optimising problems with large population sizes.

Previous versions of the NSGA lack elitism, which makes the algorithm slow and also increases the probability of losing good solutions after they are found.

NSGAs do not specify a sharing parameter, which makes it harder to ensure population diversity.

All these problems with the NSGA were addressed in an improved version, NSGA-II, proposed in Deb et al. (2000).

Main Features of NSGA-II

NSGA-II uses an elitist approach and is one of the most capable algorithms, surpassing two other modern MOEAs: the Strength Pareto Evolutionary Algorithm (SPEA) and the Pareto Archived Evolution Strategy (PAES). A number of test problems were simulated in Deb et al. (2002), and the results demonstrate that NSGA-II maintained better population diversity and converged better to the non-dominated fronts. Key features of NSGA-II are:

NSGA-II allows both real-coded and binary-coded design variables.

NSGA-II applies an elitism-preserving method to explore multi-objective solutions.

NSGA-II gives the user the freedom to set up the discretisation.

NSGA-II uses an extension of dominance, instead of a penalty-based constraint-handling method, to solve constrained multi-objective problems.

Diversity is maintained by a crowding (niche) approach, which ensures population diversity without the use of sharing parameters.
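NSGA-II's sharing-free diversity mechanism is based on a crowding-distance measure. A simplified sketch, for a single front with each solution given as a tuple of objective values, is:

```python
def crowding_distance(objectives):
    # objectives: list of tuples, one per solution in a single front
    n = len(objectives)
    if n == 0:
        return []
    m = len(objectives[0])
    distance = [0.0] * n
    for k in range(m):
        # sort the front by objective k; boundary points get infinite distance
        order = sorted(range(n), key=lambda i: objectives[i][k])
        lo, hi = objectives[order[0]][k], objectives[order[-1]][k]
        distance[order[0]] = distance[order[-1]] = float("inf")
        if hi == lo:
            continue
        # interior points accumulate the normalised gap between neighbours
        for rank in range(1, n - 1):
            i = order[rank]
            prev_v = objectives[order[rank - 1]][k]
            next_v = objectives[order[rank + 1]][k]
            distance[i] += (next_v - prev_v) / (hi - lo)
    return distance
```

Solutions with a larger crowding distance lie in less populated regions of the front, so preferring them keeps the population spread out without any user-supplied sharing parameter.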

Port fuel Injection Gasoline Engine

Engine mapping

Engine mapping is the process of measuring important engine variables while the engine is operated throughout its speed and load ranges.

It is the process of characterising the steady-state performance (e.g. torque, exhaust gas emissions and fuel consumption) of an engine over its operating range as a function of engine design and control parameters. The maps produced are very useful for control-strategy development and for engine design analysis.

Procedure

The engine is connected to a dynamometer and operated throughout its speed and load range. Important engine variables are measured while quantities such as spark timing and air/fuel ratio (AFR) are varied in a known and systematic manner. This is done in engine test cells, where the engine dynamometer and complex instrumentation collect the data under computer control. A mathematical model is then developed which explains the influence of every measurable variable and parameter on engine performance. Care should be taken to select a control configuration, control variables and strategy that will satisfy all performance requirements as calculated from this model and that also lie within other design constraints such as cost, quality and reliability (Ribbens 2003).

In this analysis engine test data was collected at 9 different speed-load combinations, all at part throttle, in a port-injected gasoline engine. The data set consists of 9 sets of 24 data points, one set at each speed (N) and load (MAF) setting. The load setting (MAF), based on measured airflow, is the fraction of the maximum cylinder air charge possible at a given RPM.

Five different responses were gathered for each data set:

Torque: brake torque output from the engine [Nm].

SDNMEP: standard deviation of net mean effective pressure, measuring the combustion stability of the engine [bar].

NMEP: net indicated mean effective pressure [bar].

NOx: the emissions of nitrogen oxides [g/hr].

MAP: manifold absolute pressure, the pressure at the inlet manifold of the engine [bar].

All these recorded responses are functions of inlet valve opening (IVO) and exhaust valve closing (EVC), expressed in degrees of crank angle measured from top dead centre (TDC). These timing events are the experimental factors, or variables. (Singh et al. 2010)

Fig. 1.1 below shows the data points in the design space.

Fig.1.1: Experimental design projections (Singh et al. 2010).

Appendix reference of excel file.

Problem
