# Multi-Objective Shortest Path Problem (MOSPP)


An algorithm is a well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. It is thus a sequence of computational steps that transforms the input into the output, and a tool for solving a well-specified computational problem. Algorithms should be analyzed to understand their behavior and to improve them. Important criteria are correctness (does the input/output relation match the algorithm's requirement?), the amount of work done (the basic operations needed to do the task), the amount of space used (memory), simplicity and clarity (ease of verification and implementation), and optimality (is it impossible to do better?). The complexity of an algorithm is simply the amount of work the algorithm performs to complete its task. Multi-objective shortest path problems, job sequencing problems, and linear programming problems are considered here, and these problems are solved by means of algorithms such as Dijkstra's algorithm, a new heuristic algorithm, and direct heuristic algorithms. All these algorithms will be explained later.

## 1.1 MULTI OBJECTIVE SHORTEST PATH PROBLEM (MOSPP)

A computer network is an interconnected group of computers with the ability to interchange data. At present, computer networks are the core of modern communication [70]. A path is a route along which data can be sent from one node to another, and finding the shortest path for exchanging data in the network is important. The problem of finding the best or shortest path in a network from a source node to a destination is called the shortest path problem (SPP) [49]. The shortest path problem in a network has attracted many researchers, since it is important to many applications such as communications, routing, and transportation [32]. For the traditional shortest path problem, the objective function is usually single, such as minimum cost or minimum distance [6]. A related practical concern is reducing the number of modal change nodes to be considered in the computation of shortest paths [10].

The problem of the shortest, quickest or cheapest path is ubiquitous, and we encounter it daily. When we are at a source and want to move to a destination, we ask for the quickest path from source to destination. A fire department may want to compute the quickest routes from a fire station s to all locations in town (the single-source problem). Sometimes we may even want a complete distance table from everywhere to everywhere (the all-pairs problem). In a road network, you will usually find an all-pairs distance table for the most important cities.

Shortest path problems are attractive to both researchers and practitioners for several reasons: they arise frequently in practice, they can be solved efficiently, they provide both a benchmark and a point of departure for studying more complex network models, and they arise frequently as subproblems when solving many combinatorial and network optimization problems. There are several variants, such as the all-pairs, point-to-point, and k-shortest path problems [8]. One such variant is the bi-objective shortest path (BSP) problem [2]. It is the natural extension of the single-objective problem and belongs to the class of multiple objective combinatorial optimization (MOCO) problems [2].

Early shortest path algorithms include Shimbel's work on information networks, Ford's RAND work on the economics of transportation, Dantzig's simplex method for linear programming, Bellman's dynamic programming, Moore's routing of long-distance telephone calls for Bell Labs, and Dijkstra's simpler and faster version of Ford's algorithm. The shortest path, or geodesic, between a pair of vertices is a path with the minimal number of edges (or minimal total weight). A shortest paths algorithm calculates the lengths of pairwise shortest paths from one set of vertices to another. Different algorithms apply depending on the edge weights of the graph: breadth-first search, which works only for unweighted graphs; Dijkstra's algorithm, which works for graphs with non-negative edge weights; and the Bellman-Ford and Johnson's algorithms, which work with arbitrary edge weights, but only for graphs with no negative cycles.

The most common algorithm for finding the shortest path is Dijkstra's algorithm. It finds the shortest paths from a source to all destinations in a directed graph (the single-source shortest path problem), and in the process also determines a spanning tree of the graph. A graph is made of nodes and edges; an edge defines a connection from one node to another. A node (or vertex) is a discrete position in a graph. Edges can be directed or undirected, and each edge has an associated cost (also called distance or weight). The distance between two nodes n1 and n2 is labeled [n1, n2]. The mathematical description of a graph is G = (V, E), meaning that a graph is defined by a set of vertices V and a collection of edges E. The "order" of a graph is the number of nodes; the "size" of a graph is the number of edges. Typical graph problems are finding the shortest path from a specific node to another node, and finding the maximum possible flow through a network where each edge has a pre-defined maximum capacity (the maximum-flow minimum-cut problem).
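As a sketch of the idea above, here is a minimal Dijkstra implementation in Python using a binary heap; the graph, its node names, and its edge weights are hypothetical examples, not taken from the text.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths for a directed graph with
    non-negative edge weights; also records spanning-tree parents."""
    dist = {source: 0}          # best known distance to each node
    parent = {source: None}     # spanning-tree edges
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue            # stale heap entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return dist, parent

# Hypothetical graph: adjacency list of (neighbor, weight) pairs.
graph = {
    "s": [("a", 1), ("b", 4)],
    "a": [("b", 2), ("t", 6)],
    "b": [("t", 3)],
}
dist, parent = dijkstra(graph, "s")
print(dist["t"])  # → 6  (path s → a → b → t, cost 1 + 2 + 3)
```

Following the `parent` pointers back from any destination reconstructs the shortest path tree mentioned above.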

Shortest paths are a broadly useful problem-solving model in maps, robot navigation, texture mapping, typesetting in TeX, urban traffic planning, optimal pipelining of VLSI chips, subroutines in advanced algorithms, telemarketer operator scheduling, routing of telecommunications messages, approximating piecewise linear functions, network routing protocols (OSPF, BGP, RIP), exploiting arbitrage opportunities in currency exchange, and optimal truck routing through a given traffic congestion pattern.

The most obvious applications arise in transportation or communications, such as finding the shortest route to drive between two cities or figuring out how to direct packets to a destination across a network. Another is the problem of image segmentation, that is, segregating two characters in a scanned, bit-mapped image of text. We need to find the separating line between two points that cuts through the fewest black pixels. The grid of pixels can be modeled as a graph, with any edge through a black pixel given a high cost. The shortest path from top to bottom then defines the best separation between left and right.

A major problem in speech recognition is distinguishing between words that sound alike (homonyms), such as to, two, and too. We can form a graph whose vertices correspond to possible words, with an edge between words that may appear next to each other. If the weight of each edge estimates the likelihood of the transition, the shortest path across the graph defines the best interpretation of a sentence. As another example, suppose we want to draw an informative image of a graph. The center of the page should match the "center" of the graph, whatever that means. A good definition of the center is the node that minimizes the maximum distance to any other node in the graph. Finding this center point requires determining the distance (i.e. shortest path) between all pairs of vertices.
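The center-finding idea can be sketched for an unweighted graph by running a breadth-first search from every node and picking the node with minimum eccentricity; the small path graph below is a hypothetical example.

```python
from collections import deque

def bfs_distances(adj, source):
    """Hop distances from source in an unweighted, undirected graph."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def graph_center(adj):
    """Node minimizing the maximum distance (eccentricity) to any other node."""
    best, best_ecc = None, float("inf")
    for v in adj:
        ecc = max(bfs_distances(adj, v).values())
        if ecc < best_ecc:
            best, best_ecc = v, ecc
    return best, best_ecc

# Hypothetical path graph 1 - 2 - 3 - 4 - 5: the middle node 3 is the center.
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(graph_center(adj))  # → (3, 2)
```

For weighted graphs the same loop applies with Dijkstra's algorithm in place of BFS.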

In many real applications it is often found that a single objective function is not sufficient to adequately characterize the problem. In such cases, multi-objective (multi-criteria) shortest path (MOSP) problems are used [33]. The multi-objective shortest path problem is one type of multiple objective network problem [35]. Most real-world problems in science, engineering, and business management are multi-objective optimization problems (MOPs) [1]. The multi-objective shortest path problem (MSPP) is an extension of the shortest path problem that aims to find efficient (non-dominated or Pareto-optimal) paths from a source vertex to a target vertex with multiple objectives in a single execution [5]. As an extension of the traditional shortest path problem, it is concerned with finding a set of efficient paths with respect to two or more objectives that are usually in conflict [31].

Objectives such as time, delay, cost, distance, risk, quality of service, reliability, and environmental effect may arise in network optimization problems [62]. Two types of objectives usually considered in MOSP are a linear function and a max-min function [9]. Route-associated metrics (or a single function of different metrics) such as cost, delay, number of hops, or blocking probability [4] are also considered, and several other objective functions such as max-sum, max-product, or max-min can be used [51]. In multi-objective problems there is typically no solution that is uniformly best in all objectives, but rather a trade-off between the different objectives [40]. Thus MOSP is one of the most intensively studied problems. A constraint optimization problem (COP) is the minimization (or maximization) of an objective function subject to a set of constraints (hard and soft) on the possible values of a set of independent decision variables [3].

The shortest path problem is an optimization problem of finding the shortest path connecting two specific nodes of a directed or undirected graph. The multi-objective shortest path problem is a kind of multi-objective network optimization problem [7]. In general, an instance of a multi-objective optimization problem is associated with a set of feasible solutions [39]. The aim is thus to find the efficient set (which consists of all the efficient solutions) or, more often, a reduced efficient set (which consists of only one solution for each non-dominated criterion vector) [38]. A solution is said to be Pareto-optimal if it cannot be improved on one objective without being degraded on another [50].

The solution to an instance of the standard shortest path problem is a single shortest route in a directed graph. Suppose, however, that each edge has both a distance and a cost, and that a route that is both short and inexpensive is to be determined. Unfortunately, a typical instance of this multi-criteria problem does not contain a route that is optimal for both the cost and distance attributes. Instead, a solution generally consists of multiple efficient or Pareto optimal routes, each with the characteristic that the value of one objective can be improved within the feasible region only at the expense of the value of the other objective. The objective function values of these points define a tradeoff curve or efficient frontier. There are various algorithms and approaches for solving the multi-objective shortest path problem, among them Dijkstra's algorithm, k-shortest path algorithms, and evolutionary algorithms. We make use of Geoffrion's approach; using it, we obtain the Pareto optimal solutions of the multi-objective shortest path problem.
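The dominance test behind the efficient frontier can be made concrete with a small sketch: given (distance, cost) pairs for candidate routes (the values below are hypothetical), a route is kept only if no other route is at least as good in both objectives and strictly better in one.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and
    strictly better in at least one (both objectives minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the non-dominated (distance, cost) vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (distance, cost) values of five candidate routes.
routes = [(10, 5), (8, 7), (12, 4), (9, 9), (8, 6)]
print(sorted(pareto_front(routes)))  # → [(8, 6), (10, 5), (12, 4)]
```

The three surviving points trace the tradeoff curve: shortening the route raises its cost and vice versa.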

## 1.2 JOB SEQUENCING PROBLEM

Job sequencing is the arrangement of tasks required to be carried out sequentially. A set of jobs must be served by a server that can process only one job at a time. Every job has a finite processing time and a per-unit cost of waiting time [12]. Each job has a specified processing order through the machines, i.e. a job constitutes an ordered list of operations, each of which is determined by the machine required and the processing time on it [16]. Processing times of some operations may be represented by a uniform distribution with given lower and upper bounds; the objective is then to find a predictive schedule that can deal with this uncertainty [13].

In job sequencing there may be a finite set of n jobs, where each job consists of a chain of operations, and a finite set of m machines. Each machine can handle at most one operation at a time, and each operation must be processed during an uninterrupted period of a given length on a given machine. The task is then to find a schedule, i.e. an allocation of the operations to time intervals on the machines, that has minimal length. To schedule several jobs, the intervals selected for the jobs must not overlap. The objective is to schedule as many jobs as possible under these constraints [11].

For more complex designs, you can build sequences of jobs to run. Job sequencing allows you to build in programming-type controls, such as branching and looping. A job sequencer allows you to specify a sequence of parallel jobs or several jobs to run. The sequence can also contain control information; for example, it can specify different courses of action to take depending on whether a job in the sequence succeeds or fails. Designing a job sequence is similar to designing a job: create the job sequence in the designer, add activities (as opposed to stages) from the tool palette, and join these together with triggers (as opposed to links) to define control flow. Every activity has properties that can be tested in trigger expressions and passed to other activities further on in the sequence. Activities can also have parameters, which are used to provide job parameters and routine arguments. The job sequence itself has properties and can have parameters, which are passed to the activities it sequences.

Job sequences are arbitrarily restartable. If you run a restartable sequence and one of the jobs fails to finish correctly, you can fix the problem and then re-run the sequence from the point at which it left off. The sequence records checkpoint information to enable this; checkpoint information allows the sequence to be restarted in a sensible way. Checkpointing can be enabled or disabled at a project-wide level or for individual job sequences. If checkpointing is enabled for a job, we can specify that individual components within a sequence are not checkpointed, forcing them to be re-executed whenever the sequence is restarted, regardless of whether they were executed successfully before. We can also specify that the sequence automatically handles any errors it encounters when executing jobs, which means we do not have to specify an error-handling trigger for every job in the sequence. This, too, can be enabled on a project-wide basis or for individual job sequences.

Parameters can also be specified for job sequences. Values for the parameters are collected when the job sequence is run. The parameters are available to all the activities in the job sequence, so where we are sequencing jobs that have parameters, we need to make these parameters visible here. For example, if we are scheduling three jobs, each expecting a file name to be provided at run time, we specify three parameters here, calling them, say, filename-1, filename-2, and filename-3. We then edit the Job page of each of these job activities in the sequence to map the job's filename parameter onto filename-1, filename-2, or filename-3 as appropriate. When the job sequence is run, the Job Run Options dialog box appears, prompting for values of filename-1, filename-2, and filename-3, and the appropriate filename is then passed to each job as it runs.

In multiprogramming systems, when there is more than one runnable (i.e., ready) process, the operating system must decide which one to activate. The decision is made by the part of the operating system called the scheduler, using a scheduling algorithm. Scheduling refers to a set of policies and mechanisms to control the order of work to be performed by a computer. Of all the resources in a computer system that are scheduled before use, the central processing unit (CPU) is by far the most important; multiprogramming is, in essence, the efficient scheduling of the CPU. The basic idea is to keep the CPU as busy as possible by executing a (user) process until it must wait for an event, and then switching to another process. Processes alternate between consuming CPU cycles (CPU bursts) and performing I/O (I/O bursts).

In general, job scheduling is performed in three stages: short-, medium-, and long-term; the activity frequency of these stages is implied by their names. Long-term (job) scheduling is done when a new process is created. It initiates processes and so controls the degree of multiprogramming (the number of processes in memory). Medium-term scheduling involves suspending or resuming processes by swapping (rolling) them out of or into memory. Short-term (process or CPU) scheduling occurs most frequently and decides which process to execute next. The scheduler is concerned with deciding policy, not providing a mechanism. In general, scheduling policies may be preemptive or non-preemptive. In a non-preemptive pure multiprogramming system, the short-term scheduler lets the current process run until it blocks (waiting for an event or a resource) or terminates. Preemptive policies, on the other hand, force the currently active process to release the CPU on certain events, such as a clock interrupt, some I/O interrupts, or a system call. Scheduling is extensively investigated in manufacturing systems, where it has been applied to meet customer demand, which plays an important role in customer satisfaction [77].

Generally, every job is different (an exception is an assembly line, where the jobs are identical). For example, in a make-to-order or an assemble-to-order production system, every order is placed at a different time, requires a different amount of processing time, and is delivered at a different time. Job characteristics are important inputs to batch production, job shop, and flow shop scheduling. Each job has a processing time (the time required to process the job), a due date (the time by which the job must be completed), and a ready time (the time when the job arrives at the shop floor).

In a job shop, it is common for several jobs to wait in a queue for processing at each work centre. The order in which the waiting jobs are processed can have a profound effect on the number of jobs waiting in the queue, the number of jobs that are delivered on time, the time the jobs spend in the queue, the work-in-process inventory, and the average lateness of deliveries to customers. Therefore, the sequencing of jobs at work centres is a significant decision. Since thousands of sequences must be determined every week in a moderate-sized plant, it is important to develop simple and practical rules for job sequencing. Job sequencing rules, or dispatching rules, are used for sequencing jobs at a work centre. Two common rules are the earliest due date rule (EDD), in which the job that is due first is started first (used when companies are sensitive to due date changes), and the shortest processing time rule (SPT), which gets the most work done rapidly and minimizes work-in-process.

Sequencing problems arise when there is a choice as to the order in which a number of tasks can be completed. In such problems, one must determine an appropriate order or sequence for a series of jobs to be done on a finite number of service facilities [63]. The job-shop scheduling problem (JSSP) is considered one of the difficult combinatorial optimization problems and is a member of the NP-complete problem class [14]; it has been known for years in scheduling theory as a particularly hard combinatorial optimization case [26]. The researcher must be concerned not only with obtaining an optimal solution but also with the practical and economical application of the solution technique [15]. Multi-objective scheduling problems often involve minimizing the makespan while considering setup costs and times [17].

Job sequencing is generally carried out by priority rules and Johnson's rule. Sequencing is a process that determines the priorities job orders should have in the construction process; it results in priority rules for job orders. The basic function of priority rules is to provide direction for developing the sequence in which jobs should be performed, which assists administration in ranking job-loading decisions for manufacturing centres. There are several priority rules that can be applied to job loading. The most widely used are:

- Due date (DD): the job with the earliest due date has the highest priority.
- First come, first served (FCFS): the first job reaching the production centre is processed first.
- Longest processing time (LPT): jobs with the longest processing time have the highest priority.
- Preferred customer order (PCO): a job from a preferred customer receives the highest priority.
- Shortest processing time (SPT): the job with the shortest processing time has the highest priority.
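Most of these dispatching rules are just sort orders on job attributes, so they can be sketched in a few lines; the job data below are hypothetical.

```python
def sequence(jobs, rule):
    """Order jobs by a dispatching rule. Each job is a dict with
    'id', 'pt' (processing time), 'due' (due date), 'arrival'."""
    keys = {
        "SPT": lambda j: j["pt"],        # shortest processing time first
        "LPT": lambda j: -j["pt"],       # longest processing time first
        "EDD": lambda j: j["due"],       # earliest due date first
        "FCFS": lambda j: j["arrival"],  # first come, first served
    }
    return [j["id"] for j in sorted(jobs, key=keys[rule])]

# Hypothetical jobs waiting at one work centre.
jobs = [
    {"id": "A", "pt": 6, "due": 8,  "arrival": 0},
    {"id": "B", "pt": 2, "due": 12, "arrival": 1},
    {"id": "C", "pt": 4, "due": 10, "arrival": 2},
]
print(sequence(jobs, "SPT"))  # → ['B', 'C', 'A']
print(sequence(jobs, "EDD"))  # → ['A', 'C', 'B']
```

Rules such as PCO would add a customer-priority attribute and sort on it in the same way.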

The critical ratio (CR) rule combines the SPT and EDD rules. The critical ratio method assigns a priority that is a continually updated ratio between the time remaining until the due date and the required job processing time. Each time a job is scheduled, CR is recalculated for every unscheduled job. The selection criteria are: jobs with the smallest CR are run first; jobs with negative CR are scheduled first; and if there is more than one job with negative CR, such jobs are sequenced in SPT order. Used in conjunction with other jobs waiting to be processed, CR is a relative measure of critical job-order priority. The critical ratio gives the highest priority to jobs that must be done to maintain a predetermined shipping schedule. Jobs that are falling behind schedule receive a ratio of less than 1, while a job with a critical ratio greater than 1 is ahead of schedule and is less critical. A job with a critical ratio score of exactly 1.0 is on schedule.
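The ratio itself is a one-line formula; this sketch computes it for a few hypothetical jobs at a hypothetical current time and sorts them smallest-CR-first, as the rule prescribes.

```python
def critical_ratio(due_date, now, processing_time):
    """CR = time remaining until due date / required processing time.
    CR < 1: behind schedule; CR = 1: on schedule; CR > 1: ahead."""
    return (due_date - now) / processing_time

# Hypothetical unscheduled jobs at time now = 10: id -> (due date, processing time).
jobs = {"A": (22, 6), "B": (15, 3), "C": (14, 5)}
now = 10
crs = {j: critical_ratio(due, now, pt) for j, (due, pt) in jobs.items()}
order = sorted(crs, key=crs.get)  # smallest CR runs first
print(order)  # → ['C', 'B', 'A']
```

Job C, with CR = 0.8 < 1, is behind schedule and therefore jumps to the front of the queue.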

Another rule, Johnson's rule, provides an optimal prioritization based on minimum processing time when n jobs have to be sequentially processed in two production centres. The final result of applying Johnson's rule is a minimization of the total idle time at a production centre. The procedure is: (1) list the processing times for all orders at each processing site; (2) find the job with the shortest processing time; if that time is at the first processing site, schedule the job first, and if it is at the second processing site, schedule it last; (3) once a job is scheduled, it receives no further consideration; (4) schedule the remaining jobs by repeating steps 2 and 3. Sequencing rules such as LCFS, EDD, SPT, FCFS and CR provide a specific sequence, and these simple rules often furnish good and useful results.
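The four steps above can be sketched directly; the processing times below are hypothetical two-machine examples.

```python
def johnsons_rule(jobs):
    """Sequence jobs through two work centres by Johnson's rule.
    jobs: {id: (time_on_machine_1, time_on_machine_2)}."""
    front, back = [], []
    remaining = dict(jobs)
    while remaining:
        # Step 2: find the globally shortest remaining processing time.
        job = min(remaining, key=lambda j: min(remaining[j]))
        t1, t2 = remaining.pop(job)     # step 3: no further consideration
        if t1 <= t2:
            front.append(job)           # shortest time on machine 1: early
        else:
            back.insert(0, job)         # shortest time on machine 2: late
    return front + back                 # step 4 happens via the loop

# Hypothetical processing times (machine 1, machine 2).
jobs = {"A": (3, 6), "B": (5, 2), "C": (1, 2), "D": (6, 6), "E": (7, 5)}
print(johnsons_rule(jobs))  # → ['C', 'A', 'D', 'E', 'B']
```

Jobs whose short operation is on machine 1 fill the sequence from the front, and jobs whose short operation is on machine 2 fill it from the back, which is what keeps both centres busy.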

Performance is normally measured by means of job flow time (the time a job is completed minus the time the job was first available for processing; average flow time measures responsiveness), average jobs in system (which measures the amount of work in progress and work-in-process inventory), makespan (the time required to finish a batch of jobs; a measure of efficiency), job lateness (whether the job is completed ahead of, on, or behind schedule), and job tardiness (how long after the due date a job was completed; this measures due-date performance).
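These measures can be computed from a sequence on a single machine with straightforward bookkeeping; the jobs and due dates below are hypothetical.

```python
def measures(order, jobs):
    """Flow time, lateness, and tardiness for jobs run in the given
    order on one machine. jobs: {id: (processing_time, due_date)};
    all jobs are assumed available at time 0."""
    clock, flow, late, tardy = 0, {}, {}, {}
    for j in order:
        pt, due = jobs[j]
        clock += pt
        flow[j] = clock           # completion time minus availability (0)
        late[j] = clock - due     # negative = early, positive = late
        tardy[j] = max(0, clock - due)
    makespan = clock
    avg_flow = sum(flow.values()) / len(flow)
    return makespan, avg_flow, late, tardy

# Hypothetical batch: id -> (processing time, due date).
jobs = {"A": (2, 5), "B": (4, 4), "C": (3, 12)}
makespan, avg_flow, late, tardy = measures(["A", "B", "C"], jobs)
print(makespan, tardy)  # makespan 9; only B is tardy, by 2
```

Note that lateness can be negative (job B finishes 2 late, A and C finish 3 early) while tardiness is clipped at zero, which is why the two are listed separately above.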

Scheduling and sequencing are similar terms. Scheduling provides a detailed plan over time; sequencing does not refer to time at all. Sometimes a unique schedule follows from a given sequence, in which case it is enough to solve the sequencing problem and not worry about the detailed scheduling problem. Job sequencing deals with determining the appropriate order in which waiting jobs should be served. Due to the combinatorial nature of sequencing problems, when the number of jobs is very large, the number of alternative job sequences also becomes very large; that is why numerous heuristic procedures have been developed for special-case problems. There are several priority sequencing rules (e.g., first in first out, last in first out, shortest processing time first, and longest processing time first) used in assigning jobs to one machine. A new heuristic technique called the time deviation method will be used to obtain a sequence of jobs for the job sequencing problem, by which the total elapsed time is reduced.

## 1.3 LINEAR PROGRAMMING PROBLEMS

Linear programming is a relatively young mathematical discipline, dating from the invention of the simplex method by G. B. Dantzig in 1947. Historically, development in linear programming has been driven by its applications in economics and management. Dantzig initially developed the simplex method to solve U.S. Air Force planning problems, and planning and scheduling problems still dominate the applications of linear programming. One reason that linear programming is a relatively new field is that only the smallest linear programming problems can be solved without a computer. There are two broad types of problems, linear programming and nonlinear programming problems; nonlinear programming in turn comprises constrained and unconstrained problems. Problem size can be small scale (five or fewer unknowns and constraints), intermediate scale (from five to a hundred variables), or large scale (from a hundred to thousands of variables). Most algorithms designed to solve large optimization problems are iterative.

For LP problems, the generated sequence is of finite length, reaching the solution point exactly after a finite number of steps. For non-LP problems, the sequence generally never reaches the solution point exactly, but converges toward it. Examples of LP applications include: a political election problem, where a candidate advertises on issues such as building roads, gun control, farm subsidies, and gasoline tax, and the goal is to minimize advertising cost while winning a majority in each area (urban, suburban, and rural); flight crew scheduling, where the number of crews is minimized subject to limits on consecutive flying hours and on flying one aircraft model per month; and oil well location, where each candidate location has a cost and a payoff in barrels of oil, and the budget is limited. A linear program with the additional constraint that all variables take integral values is an integer linear program. The general linear programming problem, however, is solvable in polynomial time.

A linear program (LP) is an optimization problem in which we are asked to minimize or maximize a given linear function subject to one or more linear inequality constraints. For a maximization problem, an optimal solution to an LP is a point in the feasible region with the largest objective function value; similarly, for a minimization problem, an optimal solution is a point in the feasible region with the smallest objective function value. Most LPs have only one optimal solution, but some have no optimal solution, and some have an infinite number of solutions. A key problem managers face is how to allocate scarce resources among various activities or projects. Linear programming is a method of allocating resources in an optimal way. It is one of the most widely used operations research tools and has been a decision-making aid in almost all manufacturing industries and in financial and service organizations.
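For a two-variable LP, the fact that an optimum of a bounded problem lies at a corner of the feasible region can be demonstrated by brute force: intersect every pair of constraint boundaries, keep the feasible points, and take the best. This is only an illustrative sketch (the simplex method does this far more cleverly), and the example problem below is a hypothetical one.

```python
from itertools import combinations

def solve_2d_lp(c, constraints):
    """Maximize c[0]*x + c[1]*y subject to a*x + b*y <= r for each
    (a, b, r), by enumerating vertices of the feasible region."""
    eps = 1e-9
    best, best_val = None, float("-inf")
    for (a1, b1, r1), (a2, b2, r2) in combinations(constraints, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < eps:
            continue  # parallel boundaries: no unique intersection
        x = (r1 * b2 - r2 * b1) / det   # Cramer's rule for the 2x2 system
        y = (a1 * r2 - a2 * r1) / det
        if all(a * x + b * y <= r + eps for a, b, r in constraints):
            val = c[0] * x + c[1] * y
            if val > best_val:
                best, best_val = (x, y), val
    return best, best_val

# Hypothetical problem: maximize 3x + 5y subject to
# x <= 4, 2y <= 12, 3x + 2y <= 18, x >= 0, y >= 0
# (the sign-flipped rows encode the non-negativity constraints).
constraints = [(1, 0, 4), (0, 2, 12), (3, 2, 18), (-1, 0, 0), (0, -1, 0)]
point, value = solve_2d_lp((3, 5), constraints)
print(point, value)  # → (2.0, 6.0) 36.0
```

The optimum sits at the vertex where the constraints 2y <= 12 and 3x + 2y <= 18 are both tight, illustrating the corner-point property mentioned above.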

Mathematical programming is used to find the best or optimal solution to a problem that requires a decision, or set of decisions, about how best to use a set of limited resources to achieve a stated set of objectives. The steps involved in mathematical programming are: converting the stated problem into a mathematical model that abstracts all the essential elements of the problem, exploring different solutions of the problem, and finding the most appropriate or optimum solution. Linear programming requires that all the mathematical functions in the model be linear. The assumptions of the linear programming model are: the parameter values are known with certainty; the objective function and constraints exhibit constant returns to scale; there are no interactions between the decision variables (the additivity assumption); and the continuity assumption, that variables can take on any value within a given feasible range.

One of the major applications of linear algebra involving systems of linear equations is in finding the maximum or minimum of some quantity, such as cost or profit. In mathematics, the process of finding an extreme value (maximum or minimum) of a quantity (normally called a function) is known as optimization. Linear programming (LP) is a branch of mathematics that deals with modeling a decision problem and subsequently solving it by mathematical techniques. In a linear programming problem, the linear function of the variables that is to be optimized is called the objective function; in linear programming terminology, the maximization or minimization of a quantity is referred to as the objective of the problem. The projects, products, services, etc. that compete with each other for a share of the given limited resources are called the variables or decision variables. The limitations on resources (such as production capacity, cash in hand, manpower, machines, and time), which are to be allocated among the various competing variables, take the form of linear equations or inequations (inequalities) and are called constraints or restrictions.

In Linear Programming problems there is a single objective function to be maximized or minimized (subject to constraints). In some problems there may be more than one competing objective (or goal) and we need to trade-off objectives against each other. One way of handling problems with multiple objectives is to choose one of the goals as the supreme goal and to treat the others as constraints to ensure that some minimal satisfying level of the other goals is achieved. However, Goal Programming provides a more satisfactory treatment where in many cases problems can still be solved using standard Linear Programming algorithms.

The first step in setting up the problem is to identify the objective: what is being minimized or maximized? (In the location homework you were minimizing shipping costs.) Next you must identify the scarce resources, since in any attempt to minimize or maximize an objective we are constrained by resources. These scarce resources are called constraints, and they may be anything required by a process (time, money, material, people, etc.). The requirements of an LP problem are: it seeks to maximize or minimize some quantity (usually profit or cost) expressed as an objective function; there are constraints or restrictions that limit the degree to which we can pursue the objective; there must be alternative courses of action to select from; and the constraints and objectives must be expressed in terms of linear equations or inequalities.

Many applications in business and economics involve a process called optimization, in which we are required to find the maximum profit, the minimum cost, or the minimum use of resources; one type of optimization problem is called linear programming. Linear programming finds many uses in industry and business, where a decision maker may want to utilize limited available resources in the best possible manner. The limited resources may include material, money, manpower, time, and space, and linear programming provides different methods for solving such problems. The formulation of a linear programming problem as a mathematical model involves the following key steps. First, identify the decision variables to be determined and express them in terms of algebraic symbols. Second, identify all the limitations in the given problem and express them as linear equations or inequalities in terms of the decision variables defined above. Finally, identify the objective to be optimized (maximized or minimized) and express it as a linear function of those decision variables.

Consider the various variants of linear programming problems and how they are related. The case where there are only linear equalities and all variables are free is one variant; we will call this a system of equations. Linear programming will refer to any problem in which there is either at least one non-negative variable or at least one linear inequality. For both systems of equations and linear programming, adding the constraint that at least one variable must be integral again changes things. Thus we can distinguish four qualitatively different variants: systems of equations, integral systems of equations, linear programming, and integer linear programming.

We can also distinguish between feasibility problems (find a solution to some linear system) and optimization problems (maximize or minimize a linear function over the set of solutions to some linear system, the feasible set). If we have a method for solving the optimization problem, then we can easily solve the feasibility problem with the same method: simply maximize the zero function. Interestingly, feasibility and optimization are 'equivalent' in each of the four cases, in the sense that a method for solving the feasibility problem can also be used to solve the optimization problem.
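In symbols, the reduction from feasibility to optimization mentioned above is simply:

```latex
\{x \mid Ax \le b\} \neq \emptyset
\quad\Longleftrightarrow\quad
\max\,\{\, 0^{\mathsf{T}} x \mid Ax \le b \,\} \text{ has an optimal solution,}
```

since every feasible point trivially maximizes the zero objective.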

The various forms {x | Ax = b, x ≥ 0}, {x | Ax ≤ b}, {x | Ax ≤ b, x ≥ 0}, and general linear systems with combinations of inequalities, equalities, non-negative and unrestricted variables can all easily be converted into one another. We will see from the duality theorem that we can solve the optimization problem by solving a related feasibility problem. If we take a linear optimization problem and in addition require that the variables take on integral values only, we get an integer linear optimization problem. A mixed integer programming problem requires that some, but not all, of the variables be integral. Except in the case of integral systems of equations, these problems are NP-complete. Fortunately, many graph and network problems formulated as integer linear programming problems are special cases where we can find nice theorems and efficient solutions.
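The standard conversions between these forms can be written out explicitly; each one preserves the solution set up to the obvious change of variables:

```latex
Ax \le b \;\Longleftrightarrow\; Ax + s = b,\; s \ge 0
\qquad (\text{slack variables } s)
\\[4pt]
x \text{ free} \;\Longleftrightarrow\; x = x^{+} - x^{-},\; x^{+} \ge 0,\; x^{-} \ge 0
\\[4pt]
Ax = b \;\Longleftrightarrow\; Ax \le b \;\text{ and }\; -Ax \le -b
```

Combining these three substitutions converts any of the mixed forms above into, say, the form {x | Ax = b, x ≥ 0} used by the simplex method.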

Some problems that can be formulated as integer programming problems are the Transportation Problem, the Assignment Problem, the Shortest Route Problem, the Maximum Flow Problem, and the Knapsack Problem. In the Transportation Problem, we are given a finite number of suppliers, each with fixed capacity, a finite number of demand centers, each with a given demand, and the costs of transporting a unit from a supplier to a demand center; the target is to find the minimum-cost method of meeting all of the demands without exceeding supplies. In the Assignment Problem, we are given equal numbers of people and jobs and the value of assigning any given person to any given job; the target is to find the job assignment (each person assigned to a different job) that maximizes the total value. In the Shortest Route Problem, we are given a collection of locations and the distance between each pair of locations; the target is to find the cheapest way to get from one location to another.
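The Shortest Route Problem described above can be sketched with Dijkstra's algorithm, which this essay discusses as one of its solution methods; the graph and edge weights here are illustrative, not taken from the text:

```python
import heapq

# Illustrative weighted graph: graph[u][v] is the distance of edge u -> v.
graph = {
    'A': {'B': 4, 'C': 2},
    'B': {'C': 5, 'D': 10},
    'C': {'D': 3},
    'D': {},
}

def dijkstra(graph, source):
    """Return the shortest distance from source to every node."""
    dist = {node: float('inf') for node in graph}
    dist[source] = 0
    heap = [(0, source)]  # priority queue of (distance, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

print(dijkstra(graph, 'A'))
```

With non-negative edge weights, each node's distance is final the first time it is popped from the queue, which is what makes the greedy choice safe.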

In the Maximum Flow Problem, we are given a series of locations connected by pipelines of fixed capacity and two special locations (an initial location, or source, and a final location, or sink); the target is to find the way to send the maximum amount from source to sink without violating the capacity constraints. In the Knapsack Problem, we are given a knapsack with fixed capacity and a collection of items, each with a weight and a value; the target is to find the items to put in the knapsack that maximize the total value carried, subject to the requirement that the weight limitation not be exceeded. The portfolio selection problem [25] is also a linear programming problem. A network problem can be represented as a linear programming model. Types of network problems include Shortest Path, Maximum Flow/Minimum Cut, Minimum Cost Flow, Minimum Spanning Tree, the special cases of Transportation and Assignment Problems, Partitioning/Set Covering, Traveling Salesperson, Facility Location, and many more.
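The 0/1 Knapsack Problem described above has a standard dynamic-programming solution; the item weights and values below are illustrative, not taken from the essay:

```python
# 0/1 knapsack by dynamic programming over capacities.
def knapsack(capacity, items):
    """items: list of (weight, value) pairs; returns the best total value."""
    best = [0] * (capacity + 1)  # best[c] = best value with capacity c
    for weight, value in items:
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

items = [(2, 3), (3, 4), (4, 5), (5, 6)]  # (weight, value)
print(knapsack(5, items))  # best value achievable with capacity 5
```

The table runs in O(capacity × items) time, which is pseudo-polynomial; this is consistent with the NP-completeness of the integer programming formulation mentioned earlier, since the capacity appears in the running time rather than its number of digits.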

The optimization problem is always transformed into a linear programming problem [21]. The metric labeling problem is one of the major problems in linear programming [18]. The Traveling Salesman Problem (TSP), the problem of finding a least-cost sequence, is also a linear programming problem [23]. A multi-objective linear programming model is more adequate for describing problems in the real world [22]. There are many equivalent ways to represent instances of the linear programming problem [20]. The methods used are applicable to other graphic and geometric problems as well as to quadratic programming [19]. Linear programming models have well-known successful applications in manufacturing, marketing, finance (investment), advertising, and agriculture.

There are various methods for solving linear programming problems, among them approximation algorithms [24], branch and bound methods, and the cutting plane method. In addition to Gomory's cutting plane method and the branch and bound method for LPP, the DHALP (Direct Heuristic Algorithm for Linear Programming) algorithm, which is more efficient than these existing methods, will be used for solving linear programming problems. An optimality test will also be included. Numerical experiments will depict the utility and scope of such a procedure.