# Multiple Objectives Linear Fractional Programming Problems Biology Essay



In this paper we present a feasible direction method to find all efficient extreme points for a special class of multiple objective linear fractional programming (MOLFP) problems in which all denominators are equal. Our method is based on the conjugate gradient projection method: we start with an initial feasible point and generate a sequence of feasible directions towards all efficient adjacent extreme points of the problem. Since methods based on vertex information may have difficulties as the problem size increases, we expect this method to be less sensitive to problem size.

A simple production example is given to clarify this method.

Keywords: multiple objective linear fractional programming, efficient solution, non-dominated solution.

## 1-Introduction:

Multiple objective linear fractional programming (MOLFP) arises when several linear fractional objectives (i.e. ratio objectives that have linear numerator and denominator) are to be maximized over a convex constraint polytope X. These programming problems have attracted considerable research interest, since they are useful in production planning, financial planning, corporate planning, health care, and hospital planning. For a single objective linear fractional program, the Charnes and Cooper [4] transformation can be used to transform the problem into a linear programming problem. Some other approaches have been reported for solving (MOLFP) problems. Kornbluth and Steuer [18] considered this problem and presented a simplex-based solution procedure to find all weakly efficient vertices of the augmented feasible region.
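The Charnes-Cooper transformation mentioned above can be sketched numerically. The snippet below is an illustration, not taken from the paper: it builds the transformed variables (y, t) for a made-up feasible point of a small fractional program (all data values are our own) and checks that the linear objective of the transformed problem reproduces the fractional objective.

```python
import numpy as np

# Single-objective LFP:  max (c^T x + alpha) / (d^T x + beta)  s.t.  A x <= b.
# Charnes-Cooper substitutes t = 1/(d^T x + beta), y = t*x, giving the LP
#   max c^T y + alpha*t   s.t.  A y - b*t <= 0,  d^T y + beta*t = 1,  t >= 0.
# Illustrative data (made up for this sketch):
c, alpha = np.array([4.0, 5.0, 3.0]), 10.0
d, beta = np.array([1.0, 2.0, 1.0]), 5.0
A, b = np.array([[1.0, 2.0, 1.0]]), np.array([10.0])

x = np.array([1.0, 1.0, 1.0])             # any feasible point of the original problem
t = 1.0 / (d @ x + beta)                  # Charnes-Cooper scaling factor
y = t * x

ratio_value = (c @ x + alpha) / (d @ x + beta)   # original fractional objective
lp_value = c @ y + alpha * t                     # transformed linear objective

# the image (y, t) is feasible for the transformed LP
lp_feasible = bool(np.all(A @ y - b * t <= 1e-12)
                   and np.isclose(d @ y + beta * t, 1.0))
```

Solving the transformed LP with any LP solver and mapping back via x = y/t would recover the optimum of the fractional program, provided the denominator is positive over the feasible set.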

Benson [2] showed that the procedure suggested by Kornbluth and Steuer for computing the numbers used to find break points may not always work, and he proposed a fail-safe method for computing these numbers. Geoffrion [8] introduced the notion of proper efficiency for (MOLFP), and Choo [3] later proved that every efficient solution of (MOLFP) is properly efficient. On the other hand, the problem of multiple objective linear programming (MOLP) arises when the objective functions to be maximized are linear. Different approaches have been suggested for solving this kind of problem, among which are those in [5, 6, 7, 9, 15]; most of these methods depend on the canonical simplex tableau in multiple objective form to find the efficient set of (MOLP). Recently more novel work on solving general multiple objective mathematical programs has been done; for more details we refer the reader to

[13, 16, 17, 20, 21]. In this paper we suggest a new method to find all efficient extreme points for (MOLFP) when all the denominators are equal. The method provides a feasible direction of movement from an efficient extreme point to its adjacent one, and is based mainly on the conjugate gradient projection method. In Section 2 some notation and definitions for the (MOLFP) problem are given, in Section 3 we give the main result of this method together with a simple production example, and finally Section 4 gives concluding remarks about the proposed method.

## 2-Definitions and Notation:

Multiple objective linear fractional programming arises when several linear fractional objectives (i.e. ratio objectives that have linear numerator and denominator) are to be maximized over a convex constraint polytope X. This problem can be formulated as

Maximize Z(x) = (z1(x), z2(x), …, zk(x))

Subject to

x ∈ X = {x ∈ Rn : Ax ≤ b} (2.1)

where zi(x) = (ciT x + αi) / (diT x + βi), i = 1, 2, …, k

Here ci, di are vectors in Rn, αi and βi are scalars, A is an (m + n) × n matrix and b ∈ Rm+n. We point out that the non-negativity conditions are included in the set of constraints; we also assume that X is a compact set and that diT x + βi > 0, i = 1, 2, …, k, for every x ∈ X.

Solving a multiple objective linear fractional program means finding the set of solutions called the efficient set; every member of this set satisfies the following definition [1, 3].

## Definition 2-1:

A solution x0 is said to be efficient for (MOLFP) if x0 ∈ X and there is no x ∈ X such that Z(x) ≥ Z(x0) and Z(x) ≠ Z(x0). We also say that z0 = Z(x0) is non-dominated if there is no z = Z(x) such that z ≥ z0 and z ≠ z0.
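Definition 2-1 translates directly into a small dominance test. The following sketch (helper names are our own) filters the non-dominated vectors out of a finite list of objective vectors:

```python
import numpy as np

def dominates(z, z0):
    """True if objective vector z dominates z0: z >= z0 componentwise and z != z0."""
    return bool(np.all(z >= z0) and np.any(z > z0))

def nondominated(points):
    """Filter a finite list of objective vectors down to the non-dominated ones."""
    pts = [np.asarray(p, dtype=float) for p in points]
    return [p for p in pts if not any(dominates(q, p) for q in pts if q is not p)]
```

For example, `nondominated([(1, 2), (2, 1), (0, 0)])` keeps (1, 2) and (2, 1) and discards (0, 0), which is dominated by both. Note that this enumerative test only applies to a finite sample; the method in this paper instead characterizes efficient extreme points of the polytope exactly.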

A characterization of the efficient solutions of (MOLFP) is given by the following lemma [1].

Lemma: Let x0 ∈ X. Then x0 is an efficient solution of (MOLFP) if and only if there exist λi > 0 for i = 1, 2, …, k and uj ≥ 0 for all j ∈ I0 such that

Σi=1..k λi ∇zi(x0) = Σj∈I0 uj AjT (2.2)

where I0 is the set of indices of the binding constraints at x0 and Aj denotes the j-th row of A.

In this paper we are interested in solving the above (MOLFP) when the denominators are all equal; in this case the (MOLFP) takes the form:

Maximize Z(x) = (z1(x), z2(x), …, zk(x))

Subject to

x ∈ X = {x ∈ Rn : Ax ≤ b} (2.3)

where zi(x) = (ciT x + β) / (dT x + β) … more precisely, zi(x) = (ciT x + αi) / (dT x + β), i = 1, 2, …, k

In (2.3) we shall assume that β ≠ 0; then an equivalent form of (2.3) can be formulated as

Maximize zi(x) = ((ciT − (αi/β) dT) x) / (dT x + β) + αi/β, i = 1, 2, …, k

Subject to

Ax ≤ b (2.4)

If we define y = x / (dT x + β), then (2.4) can be written in the form

Maximize zi(y) = (ciT − (αi/β) dT) y + αi/β, i = 1, 2, …, k

Subject to

(A + (1/β) b dT) y ≤ b/β (2.5)

More simply, (2.5) can be written as the multiple objective linear program (MOLP)

Maximize Z(y) = C y + (1/β) α, where α = (α1, α2, …, αk)T

Subject to

Gy ≤ g (2.6)

where C is the k × n matrix whose rows are (ciT − (αi/β) dT), the constraint matrix is G = A + (1/β) b dT, and g = b/β. From the above definition of y,

we can get x = βy / (1 − dT y) (2.7)

This can be derived as follows. Since y = x / (dT x + β), we have x = y (dT x + β). Operating on both sides with dT gives dT x = dT y (dT x + β), hence (dT x + β) − β = dT y (dT x + β), which gives (dT x + β) = β / (1 − dT y), and substituting in x = y (dT x + β) we get (2.7).
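The substitution and its inverse (2.7) can be sanity-checked numerically; the data below are arbitrary illustrative values, not taken from the paper:

```python
import numpy as np

# Round-trip check of the substitution y = x/(d^T x + beta) and its inverse
# x = beta*y/(1 - d^T y) from (2.7), on arbitrary data.
d, beta = np.array([1.0, 2.0, 1.0]), 5.0
x = np.array([3.0, 0.5, 2.0])

y = x / (d @ x + beta)              # forward map
x_back = beta * y / (1.0 - d @ y)   # inverse map (2.7)
```

Since dT x + β > 0 on the feasible set, 1 − dT y = β/(dT x + β) is strictly positive and the inverse map is well defined.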

The multiple objective linear programming problem (MOLP) defined by (2.6) is equivalent to the (MOLFP) when the denominators of the objective functions are all equal. Hence, by the well-known theorem for (MOLP) [7, 9, 23], a feasible point y is efficient for (2.6) if and only if there is a k-vector λ > 0 (of weights) such that y is an optimal solution of the linear program

Maximize λT C y

Subject to

G y ≤ g (2.8)

The dual of (2.8) can be written in the form

Minimize uTg

Subject to uTG = λT C (2.9)

u ≥ 0

Since the set of constraints of the dual problem is in matrix form, we multiply this set of constraints by a matrix T = (T1 | T2), where T1 = CT (C CT)−1 and the columns of the matrix T2 constitute a basis of N(C) = {v : C v = 0}.
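A minimal sketch of constructing T1 and T2, assuming C has full row rank with k < n; here the null-space basis is extracted from the SVD, which is our choice rather than any particular method in the paper. The matrix C used below is illustrative (it is the one that reappears in the worked example later):

```python
import numpy as np

# T1 = C^T (C C^T)^-1 is a right inverse of C; T2 spans N(C) = {v : Cv = 0}.
C = np.array([[2.0, 1.0, 1.0],
              [4.0, 1.0, 6.0]])
k, n = C.shape                          # assumes rank(C) = k < n

T1 = C.T @ np.linalg.inv(C @ C.T)       # satisfies C @ T1 = I_k
_, _, Vt = np.linalg.svd(C)             # rows k..n-1 of Vt span the null space
T2 = Vt[k:].T                           # satisfies C @ T2 = 0
```

Any basis of N(C) serves for T2; the SVD simply gives an orthonormal one.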

We get uT G T1 = λT, uT G T2 = 0 and u ≥ 0. In the case k = n we have G T2 = 0, and the dual of (2.9) takes the form

Maximize λ

Subject to

G T1 λ ≤ b (2.10)

where T1 is the inverse matrix of the given matrix C. On the other hand, if k < n then G T2 ≠ 0. In this case an L × (m + n) matrix P with non-negative entries is defined such that P G T2 = 0; this matrix P, which can be considered the polar matrix of the given matrix G T2, plays an important role in the construction of the objective space Y, written Y = {λ ∈ Rk : P G T1 λ ≤ P b}, or simply Y = {λ ∈ Rk : Q λ ≤ q}, where Q = P G T1 and q = P b. In this case the dual of (2.9) takes the form

Maximize λ

Subject to

Q λ ≤ q (2.11)

A row submatrix P̄ of the given matrix P satisfying

P̄ G T1 = λT > 0 (2.12)

plays an important role in specifying the positive weights needed to detect the non-dominated points of the multiple objective linear program (2.6); our proposed method depends mainly on these weights.

At a given optimal point of (2.8) we must find u ≥ 0, λ > 0 such that

uT Ĝ = λT C (2.13)

is satisfied, where Ĝ is an n × n sub-matrix of the given matrix G containing only the coefficients of the set of active constraints at the current point. Now consider the multiple objective linear program (MOLP) of the form

Maximize I λ

Subject to

Q λ ≤ q (2.14)

Here I is the k × k identity matrix. From the objective space point of view we have the following proposition.

## Proposition

A point λ0 is a non-dominated point of (2.14) if there exist u ≥ 0, λ > 0 satisfying

uT Q̂ = λT, where Q̂ represents the set of active constraints at the given point λ0.

Proof: straightforward.

Corollary: If Q″ represents the set of non-active constraints at the given point λ0, then uT Q″ = 0 has only the zero solution.

Now let us consider the linear programming (LP) problem defined by

Maximize F(y) = eTCy

Subject to:

G y ≤ g

where e ∈ Rk with all entries equal to 1. If eT C = pT, the above linear program is written as

Maximize F(y) = pTy

Subject to

G y ≤ g (2.15)

This problem can also be written in the form:

Maximize F(y) = pTy

Subject to

Gi y ≤ gi, i = 1, 2, …, m + n. (2.16)

Here Gi represents the i-th row of the given matrix G. In the non-degenerate case an extreme point (vertex) of the feasible region lies on some n linearly independent constraints. We shall give an iterative method for solving this problem; our task is to find the optimal extreme point of this (LP) program (the first efficient extreme point). Starting with an initial feasible point, a sequence of feasible directions towards optimality is generated to find this optimal extreme point. In general, if yk-1 is the feasible point obtained at iteration k-1 (k = 1, 2, …), then at iteration k our procedure finds a new feasible point yk given by

yk = yk-1 + γk-1 μk-1 (2.17)

where μk-1 is the direction vector along which we move, given by

μk-1 = Hk-1 p (2.18)

Here Hk-1 is an n × n symmetric matrix given by

Hk-1 = I for k = 1, and Hk-1 = H̄ for k > 1, (2.19)

where H̄ is defined as follows: for each active constraint row asT, s = 1, 2, …, q, at the current point, set

H̄s = H̄s-1 − (H̄s-1 as asT H̄s-1) / (asT H̄s-1 as), (2.20)

with H̄0 = I; then Hk-1 = H̄q. The step length γk-1 is given by
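The recursion (2.20) amounts to successively projecting the identity onto the null space of each active constraint normal. A sketch (function name ours):

```python
import numpy as np

def projection_matrix(active_rows, n):
    """Apply (2.20) repeatedly: starting from H = I, deflate H by each active
    constraint normal a in turn, so that the result satisfies H @ a = 0."""
    H = np.eye(n)
    for a in active_rows:
        a = np.asarray(a, dtype=float)
        Ha = H @ a
        denom = a @ Ha
        if denom > 1e-12:               # skip normals already annihilated by H
            H -= np.outer(Ha, Ha) / denom
    return H

# with normals (3, 6, 3) and (0, -1, 0) this reproduces the matrix H2 of the
# worked example in Section 3
H2 = projection_matrix([[3.0, 6.0, 3.0], [0.0, -1.0, 0.0]], 3)
```

The resulting H projects any vector onto the subspace orthogonal to all the supplied normals, which is what makes μ = H p a feasible direction along the active constraints.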

γk-1 = min { ei : ei = (gi − Gi yk-1) / (Gi μk-1), Gi μk-1 > 0 and ei > 0 } (2.21)

This relation states that γk-1 is always positive; such a positive value must exist whenever a feasible point exists. Also, by the well-known Kuhn-Tucker conditions [10, 14], for the point yk to be an optimal solution of the linear program (2.15) there must exist u ≥ 0 such that GrT u = p, which can be written in the form

u = (Gr GrT)−1 Gr p (2.22)

Here Gr is a sub-matrix of the given matrix G containing only the coefficients of the set of active constraints at the current point yk. This fact acts as the stopping rule of our proposed algorithm.
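The stopping rule can be checked directly. As an assumption for illustration, we take three constraint rows of the kind that become active at the final point of the worked example in Section 3, with p = eTC = (6, 2, 7)T, and compute the multipliers of (2.22); the resulting u values are computed here, not quoted from the paper:

```python
import numpy as np

# Active constraint rows Gr and gradient p (illustrative data); the point is
# optimal when Gr^T u = p has a solution with u >= 0.
Gr = np.array([[3.0, 6.0, 3.0],
               [-1.0, 0.0, 0.0],
               [0.0, -1.0, 0.0]])
p = np.array([6.0, 2.0, 7.0])

u = np.linalg.solve(Gr @ Gr.T, Gr @ p)   # u = (Gr Gr^T)^-1 Gr p
stationary = bool(np.allclose(Gr.T @ u, p))  # p lies in the cone of the normals
optimal = stationary and bool(np.all(u >= 0))
```

If some component of u were negative, the corresponding constraint could be dropped from the active set and the objective improved along the resulting projected direction.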

## 3- New algorithm for solving (MOLFP) problems

Phase I: First use the linear program (2.16) to find an initial efficient extreme point for the equivalent (MOLP) through the following steps:

Step 0: Set k = 1, H0 = I, μ0 = p, let y0 be an initial feasible point, and apply relation (2.21) to compute γ0.

Step 1: Apply relation (2.17) to find a new solution yk.

Step 2: Apply relation (2.22) to compute u. If u ≥ 0, stop; the current solution yk is optimal. Otherwise go to step 3.

Step 3: Set k = k + 1, apply relations (2.19), (2.18) and (2.21) to compute Hk-1, μk-1 and γk-1 respectively, and go to step 1.

Given an initial feasible point y0 and a vector p, step 0 computes γ0 in O(m + n) steps. Computing yk in step 1 requires O(n) steps, while testing the optimality of the current solution yk in step 2 requires O(n3) steps. Step 3 requires O(n3) steps to compute Hk-1, while computing μk-1, the feasible direction that increases the value of the objective function, requires O(n2) steps; finally, computing γk-1 requires O(m + n) steps. Hence each iteration of our algorithm requires O(max{m + n, n3}) steps.
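Phase I as a whole can be sketched as follows. This is a simplified reading of steps 0-3 (active constraints detected numerically, multipliers obtained by least squares, and the active-set handling of Remark 3-2 omitted), run on the equivalent (MOLP) of the production example from Section 3; all function and variable names are ours.

```python
import numpy as np

def phase_one(G, g, p, y0, tol=1e-9):
    """Feasible-direction Phase I: iterate y <- y + gamma*mu until the
    Kuhn-Tucker multipliers of the active set are nonnegative (2.22)."""
    y = np.asarray(y0, dtype=float)
    n = y.size
    for _ in range(n + 1):                       # at most n moves are needed
        active = np.isclose(G @ y, g, atol=tol)
        Gr = G[active]
        if Gr.size:
            u, *_ = np.linalg.lstsq(Gr.T, p, rcond=None)
            if np.allclose(Gr.T @ u, p, atol=1e-7) and np.all(u >= -tol):
                return y                         # optimal extreme point found
        H = np.eye(n)                            # projection matrix (2.19)-(2.20)
        for a in Gr:
            Ha = H @ a
            if a @ Ha > tol:
                H -= np.outer(Ha, Ha) / (a @ Ha)
        mu = H @ p                               # feasible direction (2.18)
        step = G @ mu
        ratios = [(g[i] - G[i] @ y) / step[i]
                  for i in range(len(g)) if step[i] > tol]
        y = y + min(ratios) * mu                 # ratio test (2.21), move (2.17)
    return y

# production example: max (6, 2, 7)^T y over 3y1+6y2+3y3 <= 2, y >= 0
G = np.array([[3.0, 6, 3], [-1, 0, 0], [0, -1, 0], [0, 0, -1]])
g = np.array([2.0, 0, 0, 0])
y_opt = phase_one(G, g, p=np.array([6.0, 2, 7]), y0=np.array([1/8, 1/8, 1/8]))
```

On this data the sketch reproduces the sequence of points derived by hand in the example and stops at the extreme point y = (0, 0, 2/3)T.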

Remark 3-1: Assuming that q is the number of active constraints at the point yk, if q < n and relation (2.22) is satisfied, then yk is an optimal non-extreme point; in this case the objective function cannot be improved along any feasible direction.

Remark 3-2: If yk-1 is an extreme point but not optimal (i.e. there are n active constraints at yk-1 and relation (2.22) is not satisfied), then a move is made along a direction μk-1 lying in the null space of a subset of the active constraints at yk-1; each constraint in this subset satisfies relation (2.21).

Next we prove that the number of iterations our algorithm requires to solve the (LP) problem is at most n.

## Proposition

Our algorithm solves the (L P) problem in at most n iterations.

Proof:

Since the allowed directions (given by (2.18)) which improve the value of the objective function lie in the null space of a subset of the rows of the given matrix G, in every iteration we move parallel to a certain subset of the (m + n) constraints, and since at least one constraint is added at a time, starting with H = I, an optimal extreme point is reached in at most n steps.

To find all efficient extreme points of the multiple objective linear program we proceed from the efficient point found in Phase I to its adjacent efficient extreme points. This is done by defining a frame of Cone(H), denoted by F, called a minimal spanning system. For an n × n matrix H denote the set of indices of the columns of H by IdH; hence if H = (h1, …, hn), then IdH = {1, 2, …, n}. For a matrix H we define the positive cone spanned by the columns of H (called a conical or positive hull by Stoer and Witzgall [19]) as

Cone(H) = Cone(hi ; i ∈ IdH) = {h ∈ Rn : h = Σi τi hi, τi ≥ 0}

A frame F of Cone(H) is a collection of columns of H such that Cone(hi ; i ∈ F) = Cone(H) and, for each j ∈ F, Cone(hi ; i ∈ F\{j}) ≠ Cone(H).

Based on the above definitions, we start Phase II to find all efficient extreme points of the equivalent (MOLP) problem through a finite number of steps, as follows.

## Phase II:

Step 1: Let yk be an efficient point; compute the matrix Hk corresponding to yk.

Step 2: Construct a frame F of Cone(Hk) using, e.g., the method of Wets and Witzgall [22].

Step 3: For each hi ∈ F determine γ* by solving the system of linear inequalities γ G hi ≤ g − G yk (the boundary point of this interval gives γ*).

Step 4: Compute y* = yk + γ* hi as an efficient extreme point of the (MOLP) problem, and go to step 1.
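Steps 3-4 reduce to a componentwise ratio test over the constraints. A sketch (names ours), demonstrated on the data of the production example that follows:

```python
import numpy as np

def adjacent_extreme(G, g, y, h, tol=1e-12):
    """Phase II, steps 3-4: largest gamma with G(y + gamma*h) <= g, and the
    resulting adjacent extreme point y + gamma*h."""
    y, h = np.asarray(y, dtype=float), np.asarray(h, dtype=float)
    slack = g - G @ y                   # right-hand side g - G y_k
    move = G @ h                        # left-hand side coefficients of gamma
    limits = [slack[i] / move[i] for i in range(len(g)) if move[i] > tol]
    gamma = min(limits)                 # boundary point of the feasible interval
    return gamma, y + gamma * h

# production-example data: from y3 = (0, 0, 2/3) along frame direction
# h = (1/2, 0, -1/2)
gamma, y_adj = adjacent_extreme(
    G=np.array([[3.0, 6, 3], [-1, 0, 0], [0, -1, 0], [0, 0, -1]]),
    g=np.array([2.0, 0, 0, 0]),
    y=[0.0, 0.0, 2/3], h=[0.5, 0.0, -0.5])
```

Rows with a non-positive coefficient of γ impose no upper bound and are skipped; if no row bounds γ, the edge is unbounded and the sketch above would fail, a case a full implementation must handle separately.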

## An application to a production example:

Let us consider a company that manufactures three kinds of products A1, A2 and A3, with profits of 4, 5 and 3 dollars per unit respectively, while the cost of one unit of each of the above products is 1, 2 and 1 dollars. It is assumed that a fixed amount of 10 dollars is added to the profit function, and a fixed cost of 5 dollars is added to the cost function, due to the expected duration of the production process. The first product needs 5 employment hours to be produced, the second needs 3 hours, and the third needs 7 hours; a fixed employment demand of 5 hours is added to the employment function. One objective of the company is to maximize profit relative to total cost; the company also wishes to maximize employment relative to total cost, in order to receive state aid for business development. The raw material needed per unit of the three products is 1, 2 and 1 tons respectively, with the supply of this raw material restricted to 10 tons.

If we let x1, x2 and x3 be the numbers of units of A1, A2 and A3 to be produced, then the above problem can be formulated as

Maximize z1(x) = (4x1 + 5x2 + 3x3 + 10) / (x1 + 2x2 + x3 + 5)

Maximize z2(x) = (5x1 + 3x2 + 7x3 + 5) / (x1 + 2x2 + x3 + 5)

Subject to:

x1 + 2x2 + x3 ≤ 10

x1 ≥ 0, x2 ≥ 0, x3 ≥ 0

For this (MOLFP) we have c1T = (4 5 3), c2T = (5 3 7), dT = (1 2 1), α1 = 10, α2 = 5, β = 5 and b = 10. Then the equivalent (MOLP) takes the form

Maximize z1(y) = 2y1 + y2 + y3 + 2

Maximize z2(y) = 4y1 + y2 + 6y3 + 1

Subject to:

3y1 + 6y2 + 3y3 ≤ 2

y1 ≥ 0, y2 ≥ 0, y3 ≥ 0
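As a numerical sanity check of the transformation, evaluating the fractional objectives at a feasible x and the linear objectives at y = x/(dT x + β) must give the same values; the test point below is our own choice:

```python
import numpy as np

# Check that the equivalent MOLP reproduces the fractional objectives.
d, beta = np.array([1.0, 2.0, 1.0]), 5.0
c1, a1 = np.array([4.0, 5.0, 3.0]), 10.0
c2, a2 = np.array([5.0, 3.0, 7.0]), 5.0

x = np.array([2.0, 1.0, 3.0])            # feasible: x1 + 2x2 + x3 = 7 <= 10
den = d @ x + beta
y = x / den                              # transformed point

z1_frac = (c1 @ x + a1) / den            # fractional objectives at x
z2_frac = (c2 @ x + a2) / den
z1_lin = 2*y[0] + y[1] + y[2] + 2        # linear objectives of the MOLP at y
z2_lin = 4*y[0] + y[1] + 6*y[2] + 1
constr = 3*y[0] + 6*y[1] + 3*y[2]        # transformed constraint, should be <= 2
```

The same check holds for any feasible x, since the objective and constraint data of the MOLP were derived exactly from the fractional problem.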

For this (MOLP) the first efficient extreme point is obtained by solving the (LP) problem

Maximize z = 6y1+2y2+7y3,

Subject to:

3y1 + 6y2 + 3y3 ≤ 2

-y1 ≤ 0, -y2 ≤ 0, -y3 ≤ 0

Step 0: k = 1, H0 = I (the 3 × 3 identity matrix), μ0 = p = (6, 2, 7)T, and y0 = (1/8, 1/8, 1/8)T is taken as an initial feasible point; then (2.21) gives γ0 = 1/102, and we go to step 1.

Step 1: Apply relation (2.17) to get y1 = (1/8, 1/8, 1/8)T + (1/102)(6, 2, 7)T = (150/816, 118/816, 158/816)T, and go to step 2.

Step 2: At y1 the first constraint is the only active constraint; since relation (2.22) is not satisfied, this point is not optimal, and we go to step 3.

Step 3: Set k = 2; then

H1 = (1/54) [ 45 -18 -9 ; -18 18 -18 ; -9 -18 45 ], μ1 = (1/54)(171, -198, 225)T

and γ1 = 6372/161568. We go to step 1 to get

y2 = (150/816, 118/816, 158/816)T + (6372/161568)(1/54)(171, -198, 225)T = (49878/161568, 0, 57834/161568)T.

At this point the first and the second constraints are the only active constraints; since (2.22) is not satisfied we go to step 3 to get

H2 = (1/2) [ 1 0 -1 ; 0 0 0 ; -1 0 1 ], μ2 = (-1/2, 0, 1/2)T and γ2 = 99756/161568,

and again we go to step 1 to get

y3 = (49878/161568, 0, 57834/161568)T + (99756/161568)(-1/2, 0, 1/2)T = (0, 0, 2/3)T.

For the point y3 relation (2.22) is satisfied with uT = [8 39], indicating that this point is optimal for the linear program; consequently y3 is the first efficient extreme point generated for this (MOLP), and we start Phase II.

Phase II: For the above (MOLP) problem we compute the matrices

T1 = (1/93) [ 46 -6 ; 38 -9 ; -37 21 ], T2 = (-5/2, 4, 1)T

The matrix P such that P G T2 = 0 can take the form

P = [ 8/47 0 39/47 0 ; 2/41 0 0 39/41 ; 0 8/13 5/13 0 ; 0 2/7 0 5/7 ]

This matrix P is used to compute

P G T1 = [ 6/47 3/47 ; 21/41 -9/41 ; -6/13 1/13 ; 1/7 -1/7 ]
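The defining properties of P can be verified directly: P G T2 = 0, and the first row of P G T1 is strictly positive, which is what supplies the weights in (2.12). A sketch with the example data typed in:

```python
import numpy as np

# Data of the production example: constraint matrix G, right inverse T1 of C
# (scaled by 1/93), null-space vector T2 of C, and the polar-type matrix P.
G = np.array([[3.0, 6, 3], [-1, 0, 0], [0, -1, 0], [0, 0, -1]])
T1 = np.array([[46.0, -6], [38, -9], [-37, 21]]) / 93
T2 = np.array([-5/2, 4.0, 1.0])
P = np.array([[8/47, 0, 39/47, 0],
              [2/41, 0, 0, 39/41],
              [0, 8/13, 5/13, 0],
              [0, 2/7, 0, 5/7]])

PGT2 = P @ G @ T2        # should vanish: P annihilates G T2
PGT1 = P @ G @ T1        # its first row gives the positive weights
```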

We note that the first row of P G T1 has only strictly positive values, so λT = [6 3] is the only positive weight defined for this problem. Thus at the point y3

a subset Ĝ of the active constraints is defined such that

[6 3] [ 2 1 1 ; 4 1 6 ] = [u1 u2] [ 3 6 3 ; 0 -1 0 ]

has the solution u1 = 8, u2 = 39, and this is used to compute

H3 = (1/2) [ 1 0 -1 ; 0 0 0 ; -1 0 1 ].

A frame of the columns of H3 is used to provide feasible directions towards the adjacent extreme points of y3, found by solving the system of linear inequalities

γ [ 3 6 3 ; -1 0 0 ; 0 -1 0 ; 0 0 -1 ] (1/2, 0, -1/2)T ≤ (0, 0, 0, 2/3)T.

With γ* = 4/3 the adjacent efficient extreme point

y1* = (0, 0, 2/3)T + (4/3)(1/2, 0, -1/2)T = (2/3, 0, 0)T

is explored. Repeating the above steps for the point y1* gives back y3 as its adjacent extreme point:

y3 = (2/3, 0, 0)T - (4/3)(1/2, 0, -1/2)T = (0, 0, 2/3)T.

We conclude that the two efficient extreme points y3 and y1* are the only efficient extreme points of this multiple objective linear program. Finally, using (2.7), we get the points x1* = (0, 0, 10)T and x2* = (10, 0, 0)T as the only efficient extreme points of the (MOLFP).

## 4-Conclusion

In this paper we present a new method to find all efficient extreme points for multiple objective linear fractional programming (MOLFP) problems with equal denominators, based on using a feasible direction of movement from an efficient extreme point to its adjacent ones. Our method starts with an initial efficient extreme point found by solving a single (LP) problem, moving through the constraint polytope using a modified conjugate gradient projection method [11, 12]. Since most of the proposed methods for solving multiple objective linear fractional programs depend on the simplex tableau, we expect that our method will be less sensitive to problem size.