Application of Non-Linear Programming Techniques in Engineering


Optimization is the technique of finding the best solution from a set of feasible solutions. Solutions are obtained by maximizing or minimizing an objective function. The objective function may be subject to a set of constraints, or, in some cases, no constraints at all.

The objective function is defined in terms of the parameters that are critical to evaluating its value. The nature of the objective function and of the constraints is pivotal in deciding which method is used to solve the problem.

The general definition of an optimization problem can be given as:

Minimize F(X), where X = (x1, x2, ..., xn),

or, Minimize F(X) subject to gi(X) <= 0, i = 1, ..., m,

or, Minimize F(X) subject to gi(X) <= 0, i = 1, ..., m, and hj(X) = 0, j = 1, ..., p.

(Maximizing F(X) is equivalent to minimizing -F(X).)

A given function can have more than one minimum and maximum over an unrestricted range. A minimum within a given neighbourhood is known as a local minimum, and the smallest of all local minima is known as the global minimum. Similarly, a maximum within a given neighbourhood is known as a local maximum, and the largest of all local maxima is known as the global maximum.

A function with a single minimum (or a single maximum) in the range of interest is known as a unimodal function.

1.2 Major Subfields

Some of the major optimization branches are:

a. Convex Optimization Problems:

This relates to finding the optimum value of functions that are convex in nature. It spans several sub-branches, such as linear programming, quadratic programming, and certain non-linear programming problems.

Example of a convex function: f(x) = x^2.

b. Non-Linear Optimization Problems:

A non-linear problem is one in which the objective function or any one of the constraints is non-linear. For example:

Minimize F(x1, x2) = (x1 - 2)^2 + (x2 - 1)^2, subject to x1^2 - x2 <= 0 and x1 + x2 <= 2.

The above is an example of a constrained minimization problem.

c. Integer Programming

An integer program deals with a problem in which the variables can only take integer values. Simply rounding off a continuous solution to the nearest whole number does not, in general, yield the optimal integer solution, which is what makes this kind of problem more complex to solve.

d. Trajectory Optimization

Trajectory optimization is the field of optimization in which trajectories, such as missile trajectories, are designed. The equation of the trajectory is determined by several deciding factors, such as the angle of release, the horizontal distance to be covered, vertical distance constraints, and other factors like muzzle velocity and the mass of the missile.

e. Dynamic Programming

Dynamic programming relates to solving complex real-life problems by breaking them down into simpler subproblems. These subproblems are solved step by step, and their solutions are combined to obtain the optimal solution. There are two approaches to this type of problem, namely the top-down approach and the bottom-up approach.
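As an illustration, here is a minimal sketch of both approaches applied to the Fibonacci numbers (a standard textbook example, assumed here purely for illustration): the top-down version caches recursive results, while the bottom-up version builds from the smallest subproblems upwards.

#include <iostream>
#include <vector>

// Top-down: recursion plus a cache (memoization).
long long fibTopDown(int n, std::vector<long long>& memo)
{
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];        // reuse a stored subproblem
    memo[n] = fibTopDown(n - 1, memo) + fibTopDown(n - 2, memo);
    return memo[n];
}

// Bottom-up: solve the smallest subproblems first and build upwards.
long long fibBottomUp(int n)
{
    if (n <= 1) return n;
    long long prev = 0, curr = 1;
    for (int i = 2; i <= n; ++i) {
        long long next = prev + curr;
        prev = curr;
        curr = next;
    }
    return curr;
}

int main()
{
    int n = 40;
    std::vector<long long> memo(n + 1, -1);
    std::cout << "Top-down:  " << fibTopDown(n, memo) << "\n";
    std::cout << "Bottom-up: " << fibBottomUp(n) << "\n";
    return 0;
}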

f. Genetic Algorithm

The genetic algorithm is one of the newest branches of optimization and is still under active development. In this method, a learning algorithm is prepared that adapts itself to suit the situation better.

Genetic algorithms apply the biological concept of survival of the fittest. Concepts like mutation, crossover, and inheritance are used to make constant changes to a population of candidate solutions. The approach is related in spirit to other nature-inspired techniques such as neural networks and fuzzy systems.
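Here is a minimal sketch of these ideas on a one-variable problem; the fitness function (sin(x) on [0, pi]), population size, mutation rate, and crossover rule are all illustrative assumptions, not taken from the text.

#include <algorithm>
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

// Illustrative fitness function (an assumption): maximize sin(x) on [0, pi].
double fitness(double x) { return std::sin(x); }

int main()
{
    const double PI = std::acos(-1.0);
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> gene(0.0, PI);
    std::uniform_real_distribution<double> unit(0.0, 1.0);
    std::normal_distribution<double> noise(0.0, 0.1);

    // Random initial population of candidate solutions.
    std::vector<double> pop(30);
    for (double& x : pop) x = gene(rng);

    for (int gen = 0; gen < 100; ++gen)
    {
        // Selection (survival of the fittest): sort by fitness, keep the better half.
        std::sort(pop.begin(), pop.end(),
                  [](double a, double b) { return fitness(a) > fitness(b); });
        std::size_t half = pop.size() / 2;
        std::uniform_int_distribution<std::size_t> pick(0, half - 1);

        // Crossover and mutation: each child blends two surviving parents
        // and occasionally receives a small random perturbation.
        for (std::size_t i = half; i < pop.size(); ++i)
        {
            double child = 0.5 * (pop[pick(rng)] + pop[pick(rng)]);  // crossover
            if (unit(rng) < 0.2) child += noise(rng);                // mutation
            pop[i] = std::min(PI, std::max(0.0, child));             // keep in range
        }
    }

    std::sort(pop.begin(), pop.end(),
              [](double a, double b) { return fitness(a) > fitness(b); });
    std::cout << "Best x: " << pop[0] << ", fitness: " << fitness(pop[0]) << "\n";
    return 0;
}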

g. Simulated Annealing

Simulated annealing is also an emerging trend in modern optimization. It is often used in cases where the variables take discrete values. It mirrors the physical process of annealing used to refine metals: a "temperature" parameter is lowered gradually, and worse solutions are accepted less and less often as the system cools.
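A minimal sketch of simulated annealing follows, shown on a continuous one-variable function for brevity; the objective, the cooling schedule, and the neighbour distribution are all illustrative assumptions.

#include <cmath>
#include <iostream>
#include <random>

// Illustrative objective (an assumption): minimize f(x) = x^4 - 3x^2 + x.
double f(double x) { return x*x*x*x - 3*x*x + x; }

int main()
{
    std::mt19937 rng(1);
    std::normal_distribution<double> step(0.0, 0.5);
    std::uniform_real_distribution<double> unit(0.0, 1.0);

    double x = 0.0, fx = f(x);
    for (double T = 1.0; T > 1e-3; T *= 0.99)         // cooling schedule
    {
        double xNew = x + step(rng);                  // random neighbour
        double fNew = f(xNew);
        // Always accept improvements; accept worse moves with probability
        // exp(-(fNew - fx)/T), which shrinks as the "metal" cools.
        if (fNew < fx || unit(rng) < std::exp(-(fNew - fx) / T)) {
            x = xNew;
            fx = fNew;
        }
    }
    std::cout << "Approximate minimum near x = " << x << ", f = " << fx << "\n";
    return 0;
}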

h. Stochastic Programming

This approach to optimization deals with uncertainty and uses probability distributions to arrive at a global optimal solution.

Stochastic programming stands in contrast to deterministic optimization techniques. Randomized search techniques are used to find a value close to the final optimal value.
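A minimal sketch of such a randomized search follows, keeping the best of many uniformly sampled candidates; the objective and the sampling range are illustrative assumptions.

#include <cmath>
#include <iostream>
#include <random>

// Illustrative objective (an assumption): maximize f(x) = sin(x) on [0, 4].
double f(double x) { return std::sin(x); }

int main()
{
    std::mt19937 rng(7);
    std::uniform_real_distribution<double> sample(0.0, 4.0);

    double bestX = 0.0, bestF = f(bestX);
    for (int i = 0; i < 10000; ++i)      // draw random candidates
    {
        double x = sample(rng);
        if (f(x) > bestF) {              // keep the best value seen so far
            bestF = f(x);
            bestX = x;
        }
    }
    std::cout << "Best x = " << bestX << ", f = " << bestF << "\n";
    return 0;
}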

Other ways to classify optimization techniques:

a. Multi-Objective Optimization Techniques.

b. Multi-Modal Optimization Techniques.

c. Dimensionless Optimization.

1.3 Applications

Optimization techniques have a wide range of applications. Here are some of them:

a. CPM - the Critical Path Method.

b. Logistics.

c. Inventory Management.

d. Rigid-Body Mechanics.

Introduction To C

C is one of the oldest programming languages still in use, and one of the most widely used. Nearly every computer architecture has a C compiler. C was later extended by Bjarne Stroustrup to create its enhancement, C++.

C is also used to build application software, as it can be applied across different platforms. Portability is one of its greatest features.

It is available on a wide range of platforms, from supercomputers to microcontrollers.

Such is the portability of C that compilers for other languages have been written in C. The language has also served as the foundation for other, higher-level programming languages.

Non-Linear Optimization

3.1 Introduction

A non-linear optimization program is one in which the objective function or any one of the constraints is non-linear. Here is an example:

Minimize F(x1, x2) = x1^2 + x2^2, subject to x1 * x2 >= 1 and x1 >= 0.

Non-Linear Optimization Programs can be further classified into:

a. One-Dimensional Optimization Techniques.

b. Unconstrained Optimization Techniques.

c. Constrained Optimization Techniques.

3.2 One-Dimensional Optimization Techniques

These are the elementary search methods in the branch of non-linear optimization. These techniques use numerical search to find the optimal solution: they proceed by trial and error, arriving at a solution after some number of iterations. They are suitable for finding solutions to unimodal functions, i.e. functions that have a single maximum or a single minimum in the given range of values.

Basic Algorithm of One-Dimensional Search Techniques:

Here is the basic algorithm; a minimal code sketch of this loop follows the steps below:

1. Start with an initial value Xi (with i = 0).

2. Evaluate the value of the function F(X) at Xi.

3. Select a suitable direction of movement, ki.

4. Select a step length of suitable tolerance, ji.

5. Let Xi+1 = Xi + (ki * ji).

6. Calculate F(Xi+1).

7. Compare the two values and select the optimal direction.
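As a minimal sketch of this loop (assuming a maximization problem, the objective F(x) = sin(x), and a fixed step length of 0.1, none of which are fixed by the steps above):

#include <cmath>
#include <iostream>

// Illustrative objective (an assumption): maximize f(x) = sin(x).
double f(double x) { return std::sin(x); }

int main()
{
    double x = 0.0;      // Step 1: initial value
    double j = 0.1;      // Step 4: step length
    int    k = +1;       // Step 3: direction of movement (+1 or -1)

    while (true)
    {
        double fx   = f(x);              // Step 2: evaluate F(Xi)
        double xNew = x + k * j;         // Step 5: Xi+1 = Xi + (ki * ji)
        double fNew = f(xNew);           // Step 6: calculate F(Xi+1)
        if (fNew > fx) {                 // Step 7: compare and move
            x = xNew;
        } else if (k == +1) {
            k = -1;                      // try the other direction once
        } else {
            break;                       // neither direction improves: stop
        }
    }
    std::cout << "Approximate maximum at x = " << x << "\n";
    return 0;
}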

Classification of One-Dimensional Search Techniques

a. Elimination Methods

b. Interpolation Methods

Elimination Methods

a. Golden Section Method.

b. Dichotomous Method.

c. Unrestricted Search Method.

d. Exhaustive Search.

e. Fibonacci Method.

Interpolation Methods

a. Methods requiring derivatives.

b. Methods not requiring derivatives.

3.3 Unconstrained Minimization Techniques

These can be classified as Direct Search Methods and Descent Methods.

Direct Search Methods do not require the derivatives of the function, while Descent Methods do. (A minimal steepest-descent sketch follows the lists below.)

Direct Search Methods

a. Random Search Method

b. Grid Search Method

c. Univariate Method

d. Pattern Search Method

e. Rosenbrock's Method

f. Simplex Method

Descent Methods

a. Steepest Descent Method

b. Fletcher-Reeves Method

c. Newton's Method

d. Marquardt Method

e. Quasi-Newton Method
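As an illustration of a descent method, here is a minimal steepest-descent sketch for an assumed two-variable quadratic; the objective, its gradient, the fixed step length, and the stopping tolerance are all illustrative assumptions.

#include <cmath>
#include <iostream>

// Illustrative objective (an assumption): f(x, y) = (x-1)^2 + 2(y+2)^2,
// with analytic gradient (2(x-1), 4(y+2)). The minimum is at (1, -2).
double f(double x, double y)    { return (x-1)*(x-1) + 2*(y+2)*(y+2); }
double dfdx(double x, double)   { return 2*(x-1); }
double dfdy(double, double y)   { return 4*(y+2); }

int main()
{
    double x = 0.0, y = 0.0;      // starting point
    double alpha = 0.1;           // fixed step length (assumption)

    for (int i = 0; i < 1000; ++i)
    {
        double gx = dfdx(x, y), gy = dfdy(x, y);
        if (std::sqrt(gx*gx + gy*gy) < 1e-6) break;  // gradient ~ 0: done
        x -= alpha * gx;          // move against the gradient,
        y -= alpha * gy;          // the direction of steepest descent
    }
    std::cout << "Minimum near (" << x << ", " << y << "), f = " << f(x, y) << "\n";
    return 0;
}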

3.4 Constrained Minimization Techniques

This is the most complex of all the non-linear optimization techniques.

The one-dimensional and unconstrained techniques described above form the basis for this class of methods.

It can be classified into direct and indirect methods; a minimal exterior-penalty sketch follows the lists below.

Direct Methods:

a. Random Search Methods

b. Heuristic Search Methods

c. Complex Method

d. Sequential Linear Programming

e. Sequential Quadratic Programming

f. Zoutendijk's Method

g. Rosen's Gradient Projection Method

h. Generalized Reduced Gradient Method

Indirect Methods:

a. Transformation of Variables Techniques

b. Sequential Unconstrained Minimization Techniques

c. Interior Penalty Method

d. Exterior Penalty Method

e. Augmented Lagrange Multiplier Method.
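As an illustration of the indirect approach, here is a minimal exterior-penalty sketch: an assumed problem, minimize F(x) = (x-3)^2 subject to g(x) = x - 1 <= 0, is solved by repeatedly minimizing F(x) + mu * max(0, g(x))^2 with an increasing penalty mu. The problem, the penalty schedule, and the crude inner minimizer are all illustrative assumptions.

#include <algorithm>
#include <cmath>
#include <iostream>

// Illustrative constrained problem (an assumption):
//   minimize F(x) = (x-3)^2   subject to   g(x) = x - 1 <= 0,
// whose constrained optimum is x = 1.
double F(double x) { return (x - 3) * (x - 3); }
double g(double x) { return x - 1; }

// Gradient of the penalized objective F(x) + mu * max(0, g(x))^2.
double gradPenalized(double x, double mu)
{
    double viol = std::max(0.0, g(x));
    return 2 * (x - 3) + 2 * mu * viol;   // dg/dx = 1 here
}

int main()
{
    double x = 3.0;                            // start at the unconstrained optimum
    for (double mu = 1.0; mu <= 1e6; mu *= 10)
    {
        // Inner loop: crude gradient descent on the penalized objective.
        for (int i = 0; i < 10000; ++i)
        {
            double step = 0.5 / (1.0 + mu);    // smaller steps as mu grows
            double gval = gradPenalized(x, mu);
            if (std::fabs(gval) < 1e-8) break;
            x -= step * gval;
        }
        std::cout << "mu = " << mu << "  ->  x = " << x << "\n";
    }
    std::cout << "Constrained minimum approx x = " << x << ", F = " << F(x) << "\n";
    return 0;
}

As mu grows, violating the constraint becomes ever more expensive, so the minimizer of the penalized function is pushed towards the feasible region and converges on x = 1.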

Unrestricted Search Method

4.1 Introduction

This is the simplest of all the search methods. In this method, no range for the optimal solution is given in advance. The optimum value is found by progressing towards the final solution using small incremental or decremental steps.

It is the slowest of all the methods available.

4.2 Algorithm

Step 1: Let Xi be the initial value of X.

Step 2: Calculate Xi+1 and Xi-1 by adding and subtracting the step length ki to and from Xi.

Step 3: Calculate the value of F(X) at Xi, Xi+1 and Xi-1.

Step 4: If F(Xi) > F(Xi+1) and F(Xi) > F(Xi-1),

then the solution is optimum;

go to Step 6.

Else

Step 5: If F(Xi) > F(Xi-1),

then Xi = Xi+1;

else

Xi = Xi-1;

repeat from Step 2.

Step 6: Print the optimum value.

4.3 Problem 1

Objective: To find the maximum value of the sine function, starting from 0 radians, using the Unrestricted Search Method.

Code:

#include<iostream>
#include<cmath>

using namespace std;

/* Evaluates the objective function a(x^3) + b(x^2) + cx + d*sin(x) on call */
float execute(float a, float b, float c, float d, float x)
{
    float s;
    s = (a*x*x*x) + (b*x*x) + (c*x) + (d*sin(x));
    return s;
}

int main()
{
    float a, b, c, d;
    cout << "\nInput the Function in the Form: a(x^3) + b(x^2) + cx + dsin(x) \n";
    cout << "a: ";
    cin >> a;
    cout << "b: ";
    cin >> b;
    cout << "c: ";
    cin >> c;
    cout << "d: ";
    cin >> d;
    cout << "Equation: \n" << a << "(x^3) + " << b << "(x^2) + " << c << "x + " << d << "sin(x)\n";

    float f, g, h;        // F(x), F(x+k) and F(x+j)
    float x = 0;          // starting point (0 radians)
    float j = -0.1;       // backward step
    float k = 0.1;        // forward step
    int flag = 0;         // becomes 1 when the optimum is found
    int i = 0;            // iteration counter

    do
    {
        f = execute(a, b, c, d, x);
        g = execute(a, b, c, d, x + k);   // one step forward
        h = execute(a, b, c, d, x + j);   // one step backward
        cout << x << "\n";
        if (f > g)
        {
            if (f > h)    // both neighbours are lower: a maximum has been found
            {
                flag = 1;
                cout << "\nOptimum Found";
                cout << "\nIterations: " << i;
            }
            else
            {
                x += j;   // move backward
            }
        }
        else
        {
            x += k;       // move forward
        }
        i += 1;
    } while (flag == 0);

    cout << "\nOptimal Value: " << x << " Radians\n";
    return 0;
}

Sample Input:

Input the Function in the Form: a(x^3) + b(x^2) + cx + dsin(x)

a:0

b:0

c:0

d:1

The Function is: sin(x)

Optimum Found

Iterations: 16

Optimal Value: 1.6 radians

Figure: Screenshot of Problem 1 using the Unrestricted Search Method

4.4 Problem 2

Objective:

To find the maximum value of a parabola using the Unrestricted Search Method.

Code:

#include<iostream>
#include<cmath>

using namespace std;

/* Evaluates the objective function a(x^3) + b(x^2) + cx + d on call */
float execute(float a, float b, float c, float d, float x)
{
    float s;
    s = (a*x*x*x) + (b*x*x) + (c*x) + d;
    return s;
}

int main()
{
    float a, b, c, d;
    cout << "\nInput the Function in the Form: a(x^3) + b(x^2) + cx + d \n";
    cout << "a: ";
    cin >> a;
    cout << "b: ";
    cin >> b;
    cout << "c: ";
    cin >> c;
    cout << "d: ";
    cin >> d;
    cout << "Equation: \n" << a << "(x^3) + " << b << "(x^2) + " << c << "x + " << d << "\n";

    float f, g, h;        // F(x), F(x+k) and F(x+j)
    float x = 0;          // starting point
    float j = -0.1;       // backward step
    float k = 0.1;        // forward step
    int flag = 0;         // becomes 1 when the optimum is found
    int i = 0;            // iteration counter

    do
    {
        f = execute(a, b, c, d, x);
        g = execute(a, b, c, d, x + k);   // one step forward
        h = execute(a, b, c, d, x + j);   // one step backward
        cout << x << "\n";
        if (f > g)
        {
            if (f > h)    // both neighbours are lower: a maximum has been found
            {
                flag = 1;
                cout << "\nOptimum Found";
                cout << "\nIterations: " << i;
            }
            else
            {
                x += j;   // move backward
            }
        }
        else
        {
            x += k;       // move forward
        }
        i += 1;
    } while (flag == 0);

    cout << "\nOptimum Value: " << x << "\n";
    return 0;
}

Sample Input:

Input the Function in the Form: a(x^3) + b(x^2) + cx + d

a:0

b:-1

c:0

d:4

The Function is: 4-(x^2)

Optimum Value: 0

Iterations: 0

Figure: Screenshot of Problem 2 using the Unrestricted Search Method

Golden Section Method

5.1 Introduction

The Golden Section Method is faster than the Unrestricted Search Method described above.

In this method, a range is given within which the optimal value lies. At each iteration the interval is reduced, narrowing down on the final value; when the interval becomes as small as the tolerance, the midpoint is taken as the optimal value. Strictly speaking, the classical golden section method places its trial points at the golden-ratio fractions (about 0.618) of the interval; the variant implemented here compares the values at the interval's endpoints and halves the interval at each step.
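For comparison, here is a minimal sketch of the classical golden-section rule, with an assumed objective f(x) = cos(x), interval, and tolerance; a production version would reuse one interior point per iteration instead of recomputing both.

#include <cmath>
#include <iostream>

// Illustrative unimodal objective (an assumption): maximize cos(x) on [-1, 2].
double f(double x) { return std::cos(x); }

int main()
{
    const double invphi = (std::sqrt(5.0) - 1.0) / 2.0;  // ~0.618, the golden ratio
    double a = -1.0, b = 2.0, tol = 1e-4;

    while (b - a > tol)
    {
        // Interior trial points at the golden-ratio fractions of [a, b].
        double x1 = b - invphi * (b - a);
        double x2 = a + invphi * (b - a);
        if (f(x1) < f(x2))
            a = x1;   // maximum lies in [x1, b]
        else
            b = x2;   // maximum lies in [a, x2]
    }
    std::cout << "Maximum near x = " << (a + b) / 2 << "\n";
    return 0;
}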

5.2 Algorithm

Step 1: Set the values of Xl and Xr, the left and right ends of the range.

Step 2: Compute F(Xl) and F(Xr).

Step 3: While (Xr - Xl > T), where T is the tolerance:

If F(Xr) > F(Xl),

then Xl = ((Xl + Xr)/2),

else Xr = ((Xl + Xr)/2).

Repeat Step 2.

Step 4: Print the optimum value, (Xl + Xr)/2.

5.3 Problem 1

Objective:

To Find the Optimum Value of a Cosine Function in a Given Range Using Golden Section Method

Program:

#include<iostream>
#include<cmath>

using namespace std;

/* Evaluates the objective function a(x^3) + b(x^2) + cx + d*cos(x) on call */
float execute(float a, float b, float c, float d, float x)
{
    float s;
    s = (a*x*x*x) + (b*x*x) + (c*x) + (d*cos(x));
    return s;
}

int main()
{
    float a, b, c, d, m, n;
    cout << "a: ";
    cin >> a;
    cout << "b: ";
    cin >> b;
    cout << "c: ";
    cin >> c;
    cout << "d: ";
    cin >> d;
    cout << "Equation: \n" << a << "(x^3) + " << b << "(x^2) + " << c << "x + " << d << "cos(x)";

    float f, g;
    float x, y;
    cout << "\nEnter Range:\na: ";
    cin >> m;
    cout << "\nb: ";
    cin >> n;

    int i = 0;    // iteration counter
    x = m;        // left end of the interval
    y = n;        // right end of the interval

    do
    {
        f = execute(a, b, c, d, x);
        g = execute(a, b, c, d, y);
        cout << "\n" << f << "\n" << g;
        if (f > g)
        {
            y = (x + y) / 2;   // keep the left half of the interval
        }
        else
        {
            x = (x + y) / 2;   // keep the right half of the interval
        }
        cout << "\nx: " << x;
        cout << "\ny: " << y;
        i += 1;
    } while ((y - x) > 0.1);   // stop when the interval is within tolerance

    cout << "\nOptimum Value of Cosine Function in the given range is: " << (x + y) / 2;
    cout << "\nNo. of Iterations: " << i << "\n";
    return 0;
}

Sample Input:

a:0

b:0

c:0

d:1

The equation is: cos(x)

Enter Range:

a:0

b:2

Optimum Value: 0.03125 radians

Iterations: 5

Figure: Screenshot of Problem 1 using the Golden Section Method

5.4 Problem 2

Objective:

To find the maximum value of a curve using Golden Section Method.

Program:

#include<iostream>
#include<cmath>

using namespace std;

/* Evaluates the objective function a(x^3) + b(x^2) + cx + d on call */
float execute(float a, float b, float c, float d, float x)
{
    float s;
    s = (a*x*x*x) + (b*x*x) + (c*x) + d;
    return s;
}

int main()
{
    float a, b, c, d, m, n;
    cout << "a: ";
    cin >> a;
    cout << "b: ";
    cin >> b;
    cout << "c: ";
    cin >> c;
    cout << "d: ";
    cin >> d;
    cout << "Equation: \n" << a << "(x^3) + " << b << "(x^2) + " << c << "x + " << d;

    float f, g;
    float x, y;
    cout << "\nEnter Range:\na: ";
    cin >> m;
    cout << "\nb: ";
    cin >> n;

    int i = 0;    // iteration counter
    x = m;        // left end of the interval
    y = n;        // right end of the interval

    do
    {
        f = execute(a, b, c, d, x);
        g = execute(a, b, c, d, y);
        cout << "\n" << f << "\n" << g;
        if (f > g)
        {
            y = (x + y) / 2;   // keep the left half of the interval
        }
        else
        {
            x = (x + y) / 2;   // keep the right half of the interval
        }
        cout << "\nx: " << x;
        cout << "\ny: " << y;
        i += 1;
    } while ((y - x) > 0.1);   // stop when the interval is within tolerance

    cout << "\nOptimum Value of the Function in the given range is: " << (x + y) / 2;
    cout << "\nNo. of Iterations: " << i << "\n";
    return 0;
}

Sample Input:

a: 0

b: -1

c: 0

d: 4

Equation is: 4-(x^2)

Enter Range:

a: -4

b: 4

Optimum Value: 0.03125

Iterations: 7

Figure: Screenshot of Problem 2 using the Golden Section Method

Conclusion

1. The two methods of optimization have been successfully implemented using C++.

2. The Golden Section Method is faster than the Unrestricted Search Method.

3. This is because it has a higher rate of convergence: it shrinks a bracketing interval geometrically, whereas the unrestricted search advances by a fixed step at a time.

4. These methods are best used on unimodal functions. In cases where there is more than one maximum or minimum, the above algorithms do not function well.

The following are some improvements that could be made to this program in the future:

a. The method could be augmented to detect more than one maximum or minimum. It should then also be able to select the global maximum from the set of local maxima.

b. The program should be able to keep running in the event of the function producing a value of infinity.
