
Solving Practical Global Optimization Problems Information Technology Essay

Computational models usually require considerable effort due to the number of design variables that may be considered in the design process. Probabilistic engineering design involves simulating variables with some degree of uncertainty, and since the computational cost of design optimization is considerably high, using less expensive models such as surrogate models, also known as meta-models, serves as an alternative that reduces the cost of design optimization. This can be supported by sensitivity analysis, which is used to distinguish significant from insignificant variables. Sensitivity analysis (SA) is the study of how the variation in the output of a model, which may be numerical or otherwise, can be apportioned, qualitatively or quantitatively, to different sources of variation (Saltelli et al., 2008). It can be carried out through either local or global sensitivity analysis. Local sensitivity analysis focuses on relatively small or local perturbations near a fixed point in the domain, while global sensitivity analysis takes into account the effect of the range and shape of the probability distribution of each input (Saltelli et al., 2000).

Among the promising methods that provide accurate sensitivity information for most models are Sobol's variance-based approach and the Sobol' indices. The application of Sobol's approach for correlation analysis and the variance-based ANOVA leads to comparable results for linear models, and the obtained rankings of variables in terms of sensitivity are almost equal. In the nonlinear case, the Sobol' approach is expected to yield more satisfactory sensitivity measures. Sensitivity analysis based on variance decomposition (e.g. the Sobol' method) is relatively expensive due to uncertainty in the model input parameters; therefore an alternative method for quantifying the uncertainty of the response due to an input variable, without any reference to the response moments, is required. Meta-modelling is a proposed substitute that minimizes the cost of model evaluation.

High dimensional model representation (HDMR) is another set of quantitative model assessment and analysis tools for capturing high dimensional input-output system behaviour. It is a set of tools explored by Rabitz et al. (1999) to express the input-output relationships of complex models with a large number of input variables. The HDMR expansion is computationally very efficient, being based on the common assumption that only low order variable correlations play a significant role. Further studies have shown that the HDMR expansion up to second order already provides satisfactory results (Li et al., 2001). The optimization of meta-models can be considerably cheaper than that of the original functions.

Objectives and scope

This study aims to identify, review and evaluate sensitivity analysis methods capable of addressing complex global optimization based on meta-models. This will be based on the formulation of HDMR component functions, where all input variables can be considered independent.

HDMR meta-modelling will be used to develop new software tools for solving global optimization problems, incorporating techniques such as Quasi Monte Carlo (QMC) sampling for high quality optimization.

The application of ANOVA-HDMR, together with an integrated graphical user interface, will make the method more efficient for complex models with a large number of input parameters.

The project will also be concerned with advanced methods for global optimization and their application to practical problems.

CHAPTER TWO

LITERATURE SURVEY

This chapter provides an overview of the principles involved in building meta-models for global optimization. Section 2.1 describes Monte Carlo methods and their importance in the evaluation of integrals. Section 2.2 examines the role of global sensitivity analysis in model building. Meta-models, as a tool for approximating complex functions, are highlighted in section 2.3. The chapter concludes in section 2.4 with a discussion of the reviewed techniques and a summary of the research objectives.

2.1 Monte Carlo Methods

The Monte Carlo method is a numerical method for solving mathematical problems by random sampling. It is applicable in queuing theory, quality and reliability estimation, and numerical analysis. It involves the generation and exploration of a mapping from uncertain analysis inputs to uncertain analysis results; the important element to note is that the analysis results are functions of the uncertain analysis inputs (Helton et al., 2006). It is essential to note that the determination of sensitivity analysis results is usually more demanding than the presentation of uncertainty analysis results, due to the need to actually explore the mapping. The model input parameters can serve as a guide to any application of the model.

A Monte Carlo computation may be regarded as estimating the value of a multiple integral (Sobol', 1994). Suppose an unknown quantity $I$ is to be estimated, given by the one-dimensional integral (the extension to the multi-dimensional case is straightforward):

I = \int f(x)\, p(x)\, dx \qquad (2.1)

where $f$ is a square integrable function and $p(x)$ is a probability density function. A Monte Carlo approach to estimating $I$ consists of drawing a set of $N$ independent, identically distributed random variables $x_1, \dots, x_N$ according to the probability density $p(x)$, and computing the arithmetic mean:

I_N = \frac{1}{N} \sum_{k=1}^{N} f(x_k) \qquad (2.2)

The quantity $I_N$ is a crude Monte Carlo estimator of $I$, and the stochastic convergence $I_N \to I$ follows from the law of large numbers.

Since $f$ is square integrable, the variance of $f$ exists and is

\sigma^2 = \int \big( f(x) - I \big)^2\, p(x)\, dx \qquad (2.3)

The standard error of $I_N$ will thus be

\varepsilon_N = \frac{\sigma}{\sqrt{N}} \qquad (2.4)

Independent values of a random variable uniformly distributed in the interval $(0, 1)$ are called random numbers.
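As an illustration, the following is a minimal sketch of the crude Monte Carlo estimator of equations (2.1)-(2.4); the integrand and the sample size are arbitrary demonstration choices, with $p(x) = 1$ on $(0, 1)$.

```python
import numpy as np

def crude_monte_carlo(f, N, rng=np.random.default_rng(0)):
    """Crude Monte Carlo estimate of I = int_0^1 f(x) dx with p(x) = 1 on (0, 1)."""
    x = rng.random(N)                 # N independent random numbers on (0, 1)
    fx = f(x)
    I_N = fx.mean()                   # arithmetic mean, equation (2.2)
    sigma = fx.std(ddof=1)            # sample estimate of sigma, equation (2.3)
    return I_N, sigma / np.sqrt(N)    # estimate and standard error, equation (2.4)

# Example: estimate int_0^1 exp(x) dx = e - 1 ~ 1.71828
estimate, err = crude_monte_carlo(np.exp, N=100_000)
print(f"I_N = {estimate:.5f} +/- {err:.5f}")
```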

2.2 GLOBAL SENSITIVITY ANALYSIS

Global sensitivity analysis takes into account the effect of the range and shape of the probability distribution of each input, with the sensitivity estimates of individual inputs evaluated while all inputs vary simultaneously (Saltelli et al., 2000). Well designed modelling practice requires sensitivity analysis to ensure model quality by analysing the model structure, selecting the best type of model and effectively identifying the important model parameters. Global sensitivity analysis methods evaluate the effect of a factor while all other factors are varied as well; they account for interactions between variables and, unlike local sensitivity analysis methods, do not depend on the choice of a nominal point. GSA is applicable in model evaluation processes and is used to assess and improve the reliability of computer models. Many models used in engineering and other fields have a large number of uncertain parameters and are computationally expensive to run, which restricts the use of methods such as Monte Carlo analysis, both because of the huge computational expense and because of the difficulty of interpreting the results for large numbers of parameters (e.g. through scatter plots). The Monte Carlo method uses mathematical principles to solve problems through random sampling; it is applicable in queuing theory, quality and reliability estimation, and numerical analysis, and is used in various fields for problem solving, largely because of the simple structure of its computational algorithm.

Sudret (2008) has also classified global sensitivity approaches into two groups: regression-based and variance-based methods. Regression-based methods, also known as standardized regression coefficients (SRC), are based on a linear regression of the output on the input vector. Studies show that standardized regression coefficients behave much like linear regression, whereas Pearson correlation determines the effect of each design variable through its correlation with the structural response. In variance-based methods, the observed variance of the structural response is partitioned into components induced by the respective structural variables. The decomposition of this variance is usually called ANOVA (``Analysis of Variance''); another example is the Fourier amplitude sensitivity test (FAST) for general (nonlinear) models.

2.2.1 SENSITIVITY ANALYSIS: Definition and Concepts

Sensitivity analysis is the study of how the uncertainty in the output of a model can be allocated to the different sources of uncertainty in the model input. It involves determining the contributions of individual uncertain analysis inputs to the uncertainty in the analysis results (Helton et al., 2006). Sensitivity analysis is an essential part of the analysis of complex systems, particularly uncertainty analysis, which can be defined as the determination of the uncertainty in analysis results that derives from uncertainty in analysis inputs. Frey et al. (2004) have also defined SA as the assessment of the impact of changes in input values on model outputs. Different approaches to uncertainty and sensitivity analysis have been developed, including Monte Carlo analysis, variance decomposition and others (Saltelli et al., 1993).

Sensitivity analysis can be used to identify the important uncertainties for the purpose of prioritizing additional data collection or research (Frey et al., 2004). In the course of model development, sensitivity analysis ascertains the quality of the model as a tool for verification and validation, and can also provide insight into the robustness of model results when making decisions (Saltelli et al., 2000). Sensitivity analysis is essential for establishing how each input parameter contributes to the output variability, identifying parameters capable of interacting with each other, and ensuring that insignificant parameters are eliminated from the final model.

2.2.2 SCATTER PLOTS AND SENSITIVITY MEASURES

A scatter plot is used to depict how one variable is affected by another. It is a simple and informative way of examining the relative importance of an input on the output in sensitivity analysis. Scatter plots can be generated by running a Monte Carlo model, as in the sketch below.
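A minimal sketch of how such plots might be produced is shown here; the two-input model is purely hypothetical and chosen so that one input visibly dominates the output.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
N = 500
x1, x2 = rng.random(N), rng.random(N)      # uncertain inputs sampled uniformly
y = np.sin(2 * np.pi * x1) + 0.1 * x2      # hypothetical model: x1 dominates

fig, axes = plt.subplots(1, 2, figsize=(8, 3), sharey=True)
for ax, x, name in zip(axes, (x1, x2), ("x1", "x2")):
    ax.scatter(x, y, s=5)                  # one panel per input against the output
    ax.set_xlabel(name)
axes[0].set_ylabel("model output y")
plt.tight_layout()
plt.show()   # the strong pattern against x1 reveals its dominant influence
```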

2.2.3 SCREENING METHODS

There are various ways of classifying sensitivity analysis methods: they can be classified as mathematical, statistical (or probabilistic), and graphical (Frey and Patil, 2002). Screening methods are used to make a preliminary identification of the most sensitive model inputs. A screening method is very simple but may not be robust to key model characteristics such as nonlinearity, thresholds and interactions. For instance, the Morris method is used on large input space dimensions; it uses random sampling of points from a fixed grid for averaging elementary effects, which are calculated as finite differences with an increment delta comparable with the range of uncertainty, making it difficult to correctly account for the effects (Sobol' and Kucherenko, 2010). However, if a screening approach and RS-HDMR are combined, they give more efficient computational results (Ziehn et al., 2008a). Screening is applicable and significant when large parameter interactions exist. Local sensitivity analysis, by contrast, focuses on relatively small perturbations near a fixed point in the model domain, so for larger variations of the inputs the model response may be nonlinear and local measures can mislead.

Local methods concentrate on the local impact of the input parameters on the model. They are based on computing the gradient of the response with respect to its parameters around a nominal value. The gradient can be computed using techniques such as finite-difference schemes and direct differentiation (Cacuci, 2003). A simplified elementary-effects sketch in the spirit of the Morris method is shown below.
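The following is a simplified one-at-a-time sketch of elementary effects rather than Morris's full trajectory design; the test model, the number of repetitions r and the increment delta are illustrative assumptions.

```python
import numpy as np

def elementary_effects(model, n_inputs, r=20, delta=0.25,
                       rng=np.random.default_rng(2)):
    """Crude Morris-style screening: mean absolute elementary effect per input.
    Inputs are assumed scaled to the unit hypercube [0, 1]^n."""
    ee = np.zeros((r, n_inputs))
    for k in range(r):
        x = rng.random(n_inputs) * (1 - delta)   # keep x + delta inside [0, 1]
        y0 = model(x)
        for i in range(n_inputs):
            x_pert = x.copy()
            x_pert[i] += delta                   # large increment, unlike a local gradient
            ee[k, i] = (model(x_pert) - y0) / delta
    mu_star = np.abs(ee).mean(axis=0)   # screening measure of importance
    sigma = ee.std(axis=0)              # spread: signals nonlinearity/interactions
    return mu_star, sigma

model = lambda x: x[0] ** 2 + 0.1 * x[1] + x[0] * x[2]
print(elementary_effects(model, n_inputs=3))
```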

2.2.4 VARIANCE BASED METHODS

Variance-based methods are among the most efficient and popular GSA techniques. The ANOVA approach is based on response surface methodology: a design surface is fitted to predetermined response values using regression analysis, with least squares approximation used for this purpose. If a single response variable $y$ depends upon a number of variables $\mathbf{x}$, the exact functional relationship between these quantities is:

y = \eta(\mathbf{x}) \qquad (2.5)

The exact functional relationship is now approximated (e.g. polynomial approximation) as

\eta(\mathbf{x}) \approx \hat{y}(\mathbf{x}) \qquad (2.6)

The approximating function $\hat{y}$ is assumed to be a summation of basis functions:

\hat{y}(\mathbf{x}) = \sum_{i=1}^{L} a_i \phi_i(\mathbf{x}) \qquad (2.7)

where $L$ is the number of basis functions used to approximate the model. The constants $a_i$ have to be determined in order to minimize the sum of squared errors:

E = \sum_{p=1}^{P} \big( y(\mathbf{x}_p) - \hat{y}(\mathbf{x}_p) \big)^2 \qquad (2.8)

where $P$ is the number of experimental points and $y(\mathbf{x}_p)$ is the exact functional response at the experimental points (Stander et al., 2007; Reuter and Liebscher, 2008).
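A minimal sketch of this least squares fit of equations (2.7) and (2.8) might look as follows; the design points, the noisy response and the second-order polynomial basis are illustrative assumptions.

```python
import numpy as np

# Hypothetical experimental design: P points with exact responses y, cf. equation (2.8)
rng = np.random.default_rng(3)
P = 50
x = rng.uniform(-1, 1, size=(P, 2))
y = 1 + 2 * x[:, 0] - x[:, 1] + 0.5 * x[:, 0] * x[:, 1] + rng.normal(0, 0.05, P)

# Basis functions phi_i of equation (2.7): a second-order polynomial in two variables
Phi = np.column_stack([
    np.ones(P), x[:, 0], x[:, 1],
    x[:, 0] ** 2, x[:, 1] ** 2, x[:, 0] * x[:, 1],
])

# Least squares solution for the constants a_i minimizing equation (2.8)
a, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("fitted coefficients:", np.round(a, 3))
```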

The determination of the expansion coefficients for the orthonormal polynomials is based on Monte Carlo (MC) integration. Variance reduction methods can be applied to improve the accuracy of RS-HDMR, since the error of the MC integration controls the accuracy of the RS-HDMR expansion. The application of variance reduction methods allows improvements in the accuracy of the meta-model without increasing the sample size N. ANOVA has been used to measure the influence of adding fumed silica on the thermal degradation of an epoxy resin (Tarrio-Saavedra et al., 2011). One disadvantage of ANOVA, however, is that it requires a large number of function evaluations to achieve reasonable convergence and can become impractical for large engineering problems (Feil et al., 2009).

2.2.4.1 Fourier Amplitude Sensitivity Test

Methods such as the Fourier amplitude sensitivity test (FAST) can identify the contribution of individual inputs to the expected value of the output variance (Cukier et al., 1978). FAST is a procedure developed for uncertainty and sensitivity analysis (Cukier et al., 1973); it provides a way of estimating the expected value and variance of the output variable and the contribution of individual input parameters to this variance. The advantage of FAST is that the evaluation of sensitivity estimates can be carried out independently for each parameter using just one simulation, because all the terms in a Fourier expansion are mutually orthogonal (Chan et al., 1997). FAST does not assume a specific functional relationship, such as linearity, in the model structure, and thus works for both monotonic and non-monotonic models (Saltelli et al., 2000). Reports show that FAST is faster to apply than the Monte Carlo method, and the effect of all inputs varying together can be assessed by FAST. One disadvantage of FAST is that it is not efficient at addressing higher order interactions (Saltelli and Bolado, 1998; McRae et al., 1982); however, further research by Saltelli et al. (1999) has addressed this, showing that FAST can deal with higher order interactions between inputs. FAST is therefore used to estimate the ratio of the contribution of each input to the output variance as the first-order sensitivity index, which can be used to rank the inputs (Saltelli et al., 2000), but FAST suffers from computational complexity for a large number of inputs. A minimal sketch of the classical first-order procedure follows.
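The sketch below illustrates first-order FAST under simplifying assumptions: inputs uniform on the unit hypercube, the search curve of Saltelli et al. (1999), and demonstration values for the driving frequencies and interference order M.

```python
import numpy as np

def fast_first_order(model, omegas, N=1001, M=4):
    """First-order FAST indices (minimal sketch). omegas are integer driving
    frequencies chosen free of interferences up to order M; N must satisfy
    the Nyquist criterion N >= 2 * M * max(omegas) + 1."""
    s = np.pi * (2 * np.arange(1, N + 1) - N - 1) / N     # N samples of s in (-pi, pi)
    # Search curve mapping s onto the unit hypercube (Saltelli et al., 1999)
    x = 0.5 + np.arcsin(np.sin(np.outer(s, omegas))) / np.pi
    y = model(x)
    j = np.arange(1, (N - 1) // 2 + 1)                    # analysable frequencies
    A = y @ np.cos(np.outer(s, j)) / N                    # Fourier cosine coefficients
    B = y @ np.sin(np.outer(s, j)) / N                    # Fourier sine coefficients
    spectrum = A ** 2 + B ** 2
    D = 2 * spectrum.sum()                                # total variance estimate
    S = []
    for w in omegas:
        harmonics = [p * w - 1 for p in range(1, M + 1)]  # indices of w, 2w, ..., Mw
        S.append(2 * spectrum[harmonics].sum() / D)       # partial variance ratio
    return S

# Hypothetical additive test model: x1 should dominate (S1 close to 16/17)
model = lambda x: 4 * x[:, 0] + x[:, 1]
print(fast_first_order(model, omegas=[11, 35]))
```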

Another applicable method is the Sobol' method, which is conducted by defining the input parameters and output variables, assigning probability density functions to each input parameter, generating an input matrix through an appropriate random sampling method, evaluating the output, and then assessing the relative importance of each input parameter for the output variables (Sobol', 1997).

The response surface method (RSM) and the mutual information index (MII) are also reported as applicable GSA methods. RSM is applied to large models so that statistical methods requiring multiple model evaluations can be used; it is effectively employed as a step prior to the application of techniques that require many model evaluations, such as Monte Carlo simulation. A response surface can be linear or nonlinear and is typically classified as a first-order or second-order method (Myers and Montgomery, 1995). The mutual information index (MII) acts as a measure of the information about the output that is provided by a particular input, with the sensitivity calculated through conditional probabilistic analysis. MII is a computationally intensive method that takes into account the joint effects of variation in all inputs with respect to the output. The MII is typically used for models with dichotomous outputs but can also be used for continuous outputs (Critchfield and Willard, 1986).

2.2.4.2 Sobol’ Method

The Sobol' indices (Sobol', 1993) are ANOVA-based indices; the Sobol' method is computed directly through Monte Carlo or Quasi Monte Carlo simulation. One of its challenges is the difficulty encountered during application; as a result, other approximation methods based on analysis have been considered. Methods such as HDMR have been considered since they are reported to be more cost efficient. The Sobol' indices were developed to estimate the effect of the random input variables on the model output (Sobol', 2001). The model is described by a function $Y = f(\mathbf{X})$, where $\mathbf{X}$ is the random input vector consisting of $n$ random variables (i.e. structural parameters) and $Y$ denotes the random output vector (i.e. structural responses). Sobol' (1993) stated that $f$ can be decomposed into summands of increasing dimension:

f(x_1, \dots, x_n) = f_0 + \sum_{i=1}^{n} f_i(x_i) + \sum_{1 \le i < j \le n} f_{ij}(x_i, x_j) + \dots + f_{1 2 \dots n}(x_1, \dots, x_n) \qquad (2.9)

Each random model response is characterized by its variance $D$. According to equation (2.9), it is possible to decompose this variance into partial variances associated with the single random input variables as follows:

D = \sum_{i=1}^{n} D_i + \sum_{1 \le i < j \le n} D_{ij} + \dots + D_{1 2 \dots n} \qquad (2.10)

Each partial variance can then be related to a Sobol' index:

S_{i_1 \dots i_s} = \frac{D_{i_1 \dots i_s}}{D}, \quad \text{with} \quad \sum_{s=1}^{n} \sum_{i_1 < \dots < i_s} S_{i_1 \dots i_s} = 1 \qquad (2.11)

It should be noted that each Sobol' index represents a sensitivity measure describing what proportion of the variance is caused by the randomness of the corresponding input variables and their mapping onto the output variables. A minimal Monte Carlo sketch of the first-order indices follows.
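The sketch below estimates the first-order indices of equation (2.11) with two independent sample matrices (a pick-freeze scheme); the test model and sample size are illustrative assumptions, and the estimator shown is one of several published variants.

```python
import numpy as np

def sobol_first_order(model, n_inputs, N=2 ** 14, rng=np.random.default_rng(4)):
    """Monte Carlo estimate of the first-order Sobol' indices S_i = D_i / D
    (minimal sketch; inputs assumed uniform on the unit hypercube)."""
    A = rng.random((N, n_inputs))
    B = rng.random((N, n_inputs))
    yA = model(A)
    f0 = yA.mean()                    # zeroth-order term f_0
    D = yA.var()                      # total variance D of equation (2.10)
    S = np.empty(n_inputs)
    for i in range(n_inputs):
        ABi = B.copy()
        ABi[:, i] = A[:, i]           # keep x_i fixed, resample all other inputs
        D_i = (yA * model(ABi)).mean() - f0 ** 2   # partial variance D_i
        S[i] = D_i / D                # Sobol' index, equation (2.11)
    return S

# Hypothetical test model with an interaction term
model = lambda x: x[:, 0] + 2 * x[:, 1] + x[:, 0] * x[:, 2]
print(np.round(sobol_first_order(model, n_inputs=3), 3))
```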

2.3 META-MODELS

Difficulties encountered while using computationally expensive models can be tackled by the application of meta-modelling techniques. Meta-models are developed from statistical, mathematical and engineering principles. They are developed as ``surrogates'' of the expensive simulation process in order to improve overall computational efficiency. Meta-models are a valuable tool used to support a wide scope of activities in modern engineering design, especially design optimization (Wang and Shan, 2006).

2.3.1 META-MODELLING TECHNIQUE

Meta-modelling techniques involve sampling, model fitting, validation and exploration of the design space; multiple metrics can be used to assess the implementation of meta-models. The techniques described include Kriging, Radial-Basis Function (RBF) networks and Support Vector Machines (SVMs). The meta-models are then used for sensitivity analysis, incorporating the two popular sensitivity analysis methods, the Fourier Amplitude Sensitivity Test (FAST) and Sobol', to determine the influence of variance in the input variables on the variance of the output variables. Kriging is an interpolative approximation method based on an exponentially weighted sum of the sample data; Radial-Basis Function networks are a particular class of feed-forward networks that incorporate radial basis functions as nodal functions; and Support Vector Machines (SVMs) are rooted in statistical learning theory. Further details on the application of such methods can be found in Sathyanarayanamurthy (2009). The use of meta-models in place of the actual simulation model to evaluate designs for optimality is therefore an alternative way to reduce costs, as the following sketch illustrates.
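As an illustration of the surrogate idea, this sketch fits a Gaussian radial-basis-function interpolant to a handful of samples of a hypothetical "expensive" model; the shape parameter eps and the test function are arbitrary assumptions, not tuned values.

```python
import numpy as np

def fit_rbf(X, y, eps=2.0):
    """Fit a Gaussian RBF interpolant through the sample data (minimal sketch;
    eps is an assumed shape parameter)."""
    r = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    weights = np.linalg.solve(np.exp(-(eps * r) ** 2), y)   # interpolation system
    return lambda Xq: np.exp(-(eps * np.linalg.norm(
        Xq[:, None, :] - X[None, :, :], axis=-1)) ** 2) @ weights

rng = np.random.default_rng(5)
X = rng.random((30, 2))                       # sampled design points
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2        # "expensive" model evaluated once
surrogate = fit_rbf(X, y)

Xq = rng.random((5, 2))
print(surrogate(Xq))                          # cheap surrogate predictions
print(np.sin(3 * Xq[:, 0]) + Xq[:, 1] ** 2)   # true values for comparison
```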

2.3.2 HIGH DIMENSIONAL MODEL REPRESENTATION (HDMR)

High-dimensional model representation (HDMR) is a family of representations in which each term in the expansion reflects the independent and cooperative contributions of the inputs upon the outputs. The approach is based on optimization and projection operator theory, which can dramatically reduce the effort for learning the input-output behaviour of high dimensional systems. It is also based on a hierarchy of component functions of increasing dimension (Li et al., 2008).

Most distinct systems can use the HDMR approach, because only low order correlations of the input variables are normally expected to have an impact upon the output (Rabitz and Alis, 1999). They introduced the HDMR expansion for the purpose of representing the outputs of a physical system when the number of input variables is large; it is based on exploiting the correlated effects of the input variables, which are created by the input-output mapping. Each hierarchical level of HDMR can be obtained by applying a suitable projection operator to the output function.

The HDMR expansion takes the following form for representing the mapping between the input variables $\mathbf{x} = (x_1, \dots, x_n)$ defined on the domain $K^n$ (typically the unit hypercube) and the output $f(\mathbf{x})$:

f(\mathbf{x}) = f_0 + \sum_{i=1}^{n} f_i(x_i) + \sum_{1 \le i < j \le n} f_{ij}(x_i, x_j) + \dots + f_{1 2 \dots n}(x_1, \dots, x_n) \qquad (2.12)

where $f_0$ represents the zeroth-order effect, which is a constant everywhere in the domain; the function $f_i(x_i)$ gives the effect associated with the variable $x_i$ acting independently, although generally nonlinearly, upon the output; and the function $f_{ij}(x_i, x_j)$ describes the cooperative effects of the variables $x_i$ and $x_j$. Higher-order terms reflect the cooperative effects of increasing numbers of variables acting together to influence the output. The last term $f_{1 2 \dots n}(x_1, \dots, x_n)$ is determined by the difference between $f(\mathbf{x})$ and all the other component functions. Equation (2.12) has a finite number of terms and is always exact (Rabitz et al., 1999). Other expansions have been suggested, but they commonly have an infinite number of terms with some specified functions (for example Hermite polynomials) (Ghanem and Spanos, 1991). The basic fact underlying HDMR is that the component functions in equation (2.12) arising in typical real problems are not likely to exhibit high order cooperativity among the input variables, so that the significant terms in the HDMR expansion are expected to be of low order $l \ll n$. Li et al. (2002) have given the HDMR expansion to second order as:

f(\mathbf{x}) \approx f_0 + \sum_{i=1}^{n} f_i(x_i) + \sum_{1 \le i < j \le n} f_{ij}(x_i, x_j) \qquad (2.13)

This provides a satisfactory description of $f(\mathbf{x})$ for many high dimensional systems when the input variables are properly selected. HDMR is a tool used to enhance modelling where the interest centres on input-output relationships. Its application has been found useful in chemical kinetics (Shorter et al., 1999), materials discovery (Rabitz and Shim, 1999), and statistical analysis (Saltelli et al., 1995; Sobol', 1993). It can be useful for constructing a computational model directly from laboratory and field data. Other successful applications of the HDMR technique include semiconductor formulation (Shim and Rabitz, 1998) and amino acid mutation of proteins (Li et al., 2001).

Rabitz et al. (1999) explored this technique in order to express the input-output relationships of complex models with a large number of input variables. The HDMR expansion has been reported to be computationally very efficient when higher order input variable correlations are weak. Li et al. (2001) have shown that the expansion up to second order already provides satisfactory results; the HDMR component functions are independent, and a threshold is required to exclude unimportant components (Ziehn and Tomlin, 2008b). HDMR has been applied to aerosol thermodynamic equilibrium prediction (Cheng et al., 2010).

Previous work reported by Li et al. (2006) has shown that the HDMR expansion truncated at second order often provides a satisfactory description of the output for many high dimensional systems when the input variables are properly chosen. Thus, HDMR can reduce the originally perceived exponential difficulty of creating an input-output map to a problem of only low order polynomial sampling complexity. This makes the treatment of many high dimensional input-output problems feasible, even when the number of input variables n is in the thousands (Rabitz et al., 1999; Shorter and Rabitz, 1999; Wang et al., 1999).

2.3.3 Determining HDMR Component Functions

Li et al. (2001) have further noted that exploiting the expected low order variable cooperativity in high dimensional systems is only possible if practical formulations of the HDMR component functions can be found, meaning that the HDMR expansion components in equation (2.12) are optimally tailored to each particular $f(\mathbf{x})$ over the entire domain of $\mathbf{x}$. The decomposition of the model output in the form of equation (2.12) can utilize the HDMR expansion for some specific representation. In the case of uncertainty analysis of the model output (e.g. analysing the variance of the output), the component functions in the HDMR can be used to represent the independent contributions of the input variables to the overall uncertainty of the output. More research is needed to exclude unimportant component functions from the HDMR, to reduce the number of component functions to be approximated by polynomials, and to reduce the number of parameters so that a separate screening step can be avoided.

Practical construction of HDMR can be achieved through random sampling HDMR (RS-HDMR), an approach which employs random sampling of the input variables. In order to reduce the sampling effort, the RS-HDMR component functions are approximated in terms of a suitable set of basis functions, for example orthonormal polynomials. One of the advantages of RS-HDMR is that it provides a straightforward approach to exploring the input-output mapping of a model that is expensive to run (Rabitz et al., 1999; Li et al., 2002). This led to the development of an optimisation algorithm as an extension to the HDMR tools (Ziehn and Tomlin, 2008a) to improve the capabilities of the HDMR technique. The RS-HDMR component functions are approximated by an orthonormal polynomial expansion with the expansion coefficients determined by least squares regression, but insufficient data and significant errors in the data can prevent a unique solution for the expansion coefficients (Li et al., 2008). A minimal first-order RS-HDMR sketch is given below.
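The sketch below builds a first-order RS-HDMR meta-model using the first three orthonormal shifted Legendre polynomials on [0, 1] as basis functions, with the expansion coefficients determined by crude Monte Carlo integration (least squares regression, as described above, is an alternative); the test model is an illustrative assumption.

```python
import numpy as np

# Orthonormal shifted Legendre polynomials on [0, 1], used as basis functions
phis = [
    lambda x: np.sqrt(3) * (2 * x - 1),
    lambda x: np.sqrt(5) * (6 * x ** 2 - 6 * x + 1),
    lambda x: np.sqrt(7) * (20 * x ** 3 - 30 * x ** 2 + 12 * x - 1),
]

def rs_hdmr_first_order(model, n_inputs, N=2 ** 14, rng=np.random.default_rng(6)):
    """First-order RS-HDMR meta-model from one set of random samples (sketch).
    Expansion coefficients are obtained by crude Monte Carlo integration."""
    X = rng.random((N, n_inputs))
    y = model(X)
    f0 = y.mean()                                  # zeroth-order component f_0
    # alpha[i, r] = integral of f(x) * phi_r(x_i), estimated as a sample mean
    alpha = np.array([[(y * phi(X[:, i])).mean() for phi in phis]
                      for i in range(n_inputs)])
    def surrogate(Xq):
        out = np.full(len(Xq), f0)
        for i in range(n_inputs):
            for r, phi in enumerate(phis):
                out += alpha[i, r] * phi(Xq[:, i])   # add f_i(x_i) contributions
        return out
    # Squared coefficients give the first-order partial variances D_i
    S = (alpha ** 2).sum(axis=1) / y.var()
    return surrogate, S

model = lambda x: x[:, 0] ** 2 + 2 * x[:, 1]
surrogate, S = rs_hdmr_first_order(model, n_inputs=2)
print("first-order indices:", np.round(S, 3))
```

Note that the squared expansion coefficients directly supply the first-order partial variances, so the same construction yields the Sobol' indices of equation (2.11) essentially for free; this is the sense in which RS-HDMR meta-modelling makes GSA cheap.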

Furthermore, RS-HDMR is reported to be very efficient for treating high dimensional input-output mapping problems, and has been utilized in several modelling applications such as atmospheric chemistry (Li et al., 2002; Li et al., 2003), environmental metal bioremediation (Li et al., 2002) and kinetic modelling (Feng et al., 2004). RS-HDMR is thus a tool for building meta-models and can be used for GSA, which makes it cheaper than traditional variance-based methods in terms of computational time. However, studies and reports show that the existing set of RS-HDMR tools does not always provide the most efficient and most accurate way to construct a meta-model and to calculate sensitivity indices; hence the aim of this research is to study, identify and develop a suitable meta-model based on HDMR.

2.4 Conclusion

Several sensitivity analysis techniques have been reviewed, and it can be deduced that the most recent and efficient are HDMR and RS-HDMR, the latter being a practical approach for interpolating multi-dimensional functions based on randomly sampling the input variables. The other methods reported have drawbacks that limit their application; for example, FAST has an advantage over Monte Carlo in delivering output data faster, but is not very efficient at addressing higher-order interactions and suffers from computational complexity for large numbers of inputs. The Monte Carlo method has the advantage of a simple computational algorithm, but the huge computational expense required has limited its usage; combining Monte Carlo integration with the ANOVA decomposition, however, greatly improves the performance of RS-HDMR.

The RS-HDMR method demands a large amount of sampling time and effort, which is being addressed through approximated expansions based on orthonormal polynomials. The Monte Carlo integration it relies upon can suffer from oscillation of the estimates depending on the sample size; therefore, designing a technique that reduces the errors encountered during application will optimise the package and provide higher suitability and performance.

Hence, this project will use HDMR meta-modelling to develop new software tools for solving practical global optimization problems, and will further identify advanced methods for global optimization.
