The study of network flows, prompted by the classic book of Ford and Fulkerson and by developments in linear programming, has led to continuing improvements in the efficiency of network flow algorithms. Network flow problems are among the most studied problems in operations research, computer science, and engineering, and they arise in many real-world applications. Recent work in this tradition includes new scaling algorithms, for example for the feasibility problem. This report examines some of the recent developments in the field and the ideas behind them, concentrating on substantial results obtained within the last several years.
This chapter begins with some of the history of network flow research and comments on the main results. One is normally interested in algorithms whose running time is small as a function of the size of the network and of the numbers involved. As a measure of the network size, n is used to denote the number of vertices and m to denote the number of arcs. As measures of the number sizes, U is used to denote the maximum arc capacity, C to denote the maximum arc cost, and B (in the case of the generalized flow problem) to denote the maximum numerator or denominator of the arc capacities and gains. Whenever U, C, or B is used, it is assumed that the capacities and costs are integers and that the gains in the generalized flow problem are rational numbers.
A distinction among polynomial-time algorithms is useful for comparative study. An algorithm is pseudo-polynomial if its running time has a bound that is polynomial in n, m, and the appropriate subset of U, C, and B. An algorithm is polynomial if its running time has a bound that is polynomial in n, m, and the appropriate subset of log U, log C, and log B. An algorithm is strongly polynomial if its running time has a bound that is polynomial in n and m alone; when comparing polynomial algorithms with strongly polynomial ones, the similarity assumption that log U, log C, and log B are O(log n) is commonly used. The network flow problems considered in this study are special cases of linear programming, so they can be solved by general-purpose linear programming algorithms. However, the structure of these problems makes it possible to obtain more efficient algorithms.
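The distinction above is easy to see numerically. The following sketch (an illustration of ours, not from the report; the three bounds are generic examples) compares how a pseudo-polynomial bound such as O(mU), a polynomial bound such as O(m log U), and a strongly polynomial bound such as O(nm) grow as the maximum capacity U increases on a fixed network:

```python
import math

# Hypothetical fixed network size for illustration only.
n, m = 1_000, 10_000

def pseudo_poly(U):
    # e.g. O(m * U): grows with the magnitude of U
    return m * U

def poly(U):
    # e.g. O(m * log U): grows only with the number of bits of U
    return m * math.ceil(math.log2(U + 1))

def strongly_poly(U):
    # e.g. O(n * m): independent of U entirely
    return n * m

for U in (10, 10**3, 10**6):
    print(U, pseudo_poly(U), poly(U), strongly_poly(U))
```

As U grows from 10 to 10^6, the pseudo-polynomial bound explodes while the polynomial bound merely doubles a few times and the strongly polynomial bound is unchanged.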
Algorithms based on linear programming methods are discussed only briefly in this study. Nevertheless, the first algorithm designed for network flow problems was the network simplex method of Dantzig. It is a variant of the linear programming simplex method designed to take advantage of the combinatorial structure of network flow problems. Variants of the simplex method that avoid cycling give an exponential bound on the complexity of all the network flow problems. Recently, Goldfarb and Hao have designed a variant of the primal network simplex method for the maximum flow problem that runs in strongly polynomial time. The maximum flow problem is one of the most common combinatorial optimization problems, with applications in logistics, traffic, communications, computer networks, and electrical power. The problem is to identify a flow of maximum value from a source to a sink in a network. Orlin designed a variant of the dual network simplex method for the minimum-cost circulation problem that runs in strongly polynomial time. For a long time, the network simplex method has been the method of choice in practice, in particular for the minimum-cost circulation problem. For large instances of hard problems, however, the newer scaling algorithms perform better.
This report discusses the classical network flow problems and a less standard problem, the generalized flow problem, sometimes called the problem of flows with losses and gains. The next section is a literature review that examines the maximum flow problem. The third section summarizes a comparative study between network flow algorithms and another distinctive network design. The fourth and final section concludes the paper.
The maximum flow problem is one of the most fundamental problems in network optimization. Its intuitive appeal, mathematical simplicity, and wide applicability have made it a popular research topic among mathematicians, operations researchers and computer scientists.
The maximum flow problem arises in a wide variety of situations. It occurs directly in problems as diverse as the flow of commodities in pipeline networks, parallel machine scheduling, distributed computing on multi-processor computers, matrix rounding problems, the baseball elimination problem, and the statistical security of data. The maximum flow problem also occurs as a subproblem while solving more complex problems such as the minimum cost flow problem and the generalized flow problem. The maximum flow problem also arises in combinatorics, with applications to network connectivity, and to matching and coverings in bipartite networks. The book by Ahuja et al. (1993) describes these and other applications of the maximum flow problem.
Due to its wide applicability, designing efficient algorithms for the maximum flow problem has been a popular research topic. The maximum flow problem is distinguished by the long line of successive contributions researchers have made in obtaining algorithms with incrementally better worst-case complexity (see, e.g., Ahuja et al., 1993 for a survey of these contributions). Indeed, no other fundamental network optimization problem has witnessed as many incremental improvements in solution techniques as has the maximum flow problem. Some, but not all, of these theoretical improvements have produced improvements in practice. The purpose of this paper is to test some of the major algorithmic ideas developed in recent years and to assess their utility in practice.
Prior to the advent of the preflow-push algorithms of Goldberg and Tarjan (1986), the algorithms of Dinic (1970) and Karzanov (1974) were considered the fastest maximum flow algorithms. Developments from 1974 to 1986 included several algorithms with improved worst-case complexity, but these theoretical improvements did not translate into empirically faster algorithms. The novel concept of distance labels, proposed by Goldberg and Tarjan (1986) in contrast to the layered (or referent) network concept of Dinic's and Karzanov's algorithms, led to breakthroughs both theoretically and empirically. Using distance labels in preflow-push algorithms, Goldberg and Tarjan (1986) and, subsequently, Ahuja and Orlin (1989), Ahuja et al. (1989), Cheriyan and Hagerup (1989), and Alon (1990) obtained maximum flow algorithms with incrementally improved worst-case complexities. Some of these algorithms are also substantially faster than Dinic's and Karzanov's algorithms in practice, as the computational testing of Derigs and Meier (1989) and Anderson and Setubal (1993) revealed.
In this paper, we present the results of an extensive computational study of maximum flow algorithms. Our study differs from the previous computational studies in several ways. Whereas the previous studies focus primarily on CPU time analysis, our analysis goes farther and provides detailed insight into algorithmic behavior. It observes how algorithms behave and also tries to explain the behavior. We perform our empirical study using the representative operation counts, as presented in Ahuja and Orlin (1996) and Ahuja et al. (1993). The use of representative operation counts allows us (i) to identify bottleneck operations of an algorithm; (ii) to facilitate the determination of the growth rate of an algorithm; and (iii) to provide a fairer comparison of algorithms. This approach is one method of incorporating computation counts into an empirical analysis. We have limited our study to the best previous maximum flow algorithms and some recent algorithms that are likely to be efficient in practice. Our study encompasses ten maximum flow algorithms whose discoverers and worst-case time bounds are given in Table 1. In the table, we denote by n, the number of nodes; by m, the number of arcs; and by U, the largest arc capacity in the network. For Dinic's and Karzanov's algorithms, we used the computer codes developed by Imai (1983), and for other algorithms we developed our own codes.
The classical maximum flow problem is one of the most studied problems in network optimization, but the same cannot be said of its variants. One particular variant is the constrained maximum flow problem: send the maximum possible flow from a source node to a sink node in a directed capacitated network, subject to the budget constraint that the total cost of the flow cannot exceed the available budget D. The constrained maximum flow problem is important to study from both a theoretical and a practical point of view because it provides insight into similar problems in the networks and graphs area, such as the node-capacitated maximum flow problem. It may also provide ideas for important constrained variants of classical problems, such as the constrained shortest path problem, the constrained transportation problem, or the constrained assignment problem, all of which have important applications in practice. The constrained maximum flow problem itself has very important applications, for example in telecommunications and computer networks. Ahuja and Orlin study the constrained maximum flow problem and propose an O(mS(n, m, nC) log U) capacity scaling algorithm, where n is the number of nodes in the network; m, the number of arcs; C, the largest arc cost; U, the largest arc capacity; and S(n, m, nC), the time to find a shortest path from a single source to all other nodes when nonnegative reduced costs are used as arc lengths.
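To make the budget constraint concrete, here is a minimal sketch of ours (it is not Ahuja and Orlin's capacity scaling algorithm, and all names are our own): it repeatedly augments along a cheapest residual s-t path, shrinking the last augmentation when the remaining budget cannot pay for the whole bottleneck. Bellman-Ford is used for the path search because residual costs can be negative.

```python
from collections import defaultdict

class Arc:
    def __init__(self, to, cap, cost):
        self.to, self.cap, self.cost = to, cap, cost
        self.rev = None  # paired reverse residual arc

def add_arc(graph, u, v, cap, cost):
    a, b = Arc(v, cap, cost), Arc(u, 0, -cost)
    a.rev, b.rev = b, a
    graph[u].append(a)
    graph[v].append(b)

def constrained_max_flow(graph, s, t, budget):
    """Send as much s-t flow as possible without the total cost
    exceeding `budget`, augmenting along cheapest residual paths."""
    flow = 0
    while True:
        # Bellman-Ford: cheapest residual path from s
        dist, pred = {s: 0}, {}
        for _ in range(len(graph)):
            updated = False
            for u in list(dist):
                for arc in graph[u]:
                    if arc.cap > 0 and dist[u] + arc.cost < dist.get(arc.to, float('inf')):
                        dist[arc.to] = dist[u] + arc.cost
                        pred[arc.to] = arc
                        updated = True
            if not updated:
                break
        if t not in dist:
            break  # no augmenting path at all
        # bottleneck capacity along the path
        delta, v = float('inf'), t
        while v != s:
            arc = pred[v]
            delta = min(delta, arc.cap)
            v = arc.rev.to
        # shrink the augmentation if the budget cannot pay for all of it
        path_cost = dist[t]
        if path_cost > 0:
            delta = min(delta, budget // path_cost)
        if delta == 0:
            break  # budget exhausted
        budget -= delta * path_cost
        flow += delta
        v = t
        while v != s:
            arc = pred[v]
            arc.cap -= delta
            arc.rev.cap += delta
            v = arc.rev.to
    return flow
```

On a small example with two parallel routes of different cost, the budget caps how far the expensive route can be used:

```python
g = defaultdict(list)
add_arc(g, 0, 1, 10, 1); add_arc(g, 1, 3, 10, 1)  # cheap route, 2 per unit
add_arc(g, 0, 2, 10, 2); add_arc(g, 2, 3, 10, 2)  # expensive route, 4 per unit
print(constrained_max_flow(g, 0, 3, 30))  # 10 cheap units (cost 20) + 2 expensive (cost 8)
```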
The maximum flow algorithms of Dinic  and Edmonds and Karp  are strongly polynomial. Network flow problems with side constraints may sometimes be solved as pure network problems by replacing the side constraints with a number of flow balance equations and a few additional nodes and arcs (Glover et al., 1974; Klingman, 1977).
Table 1: Fastest currently known algorithms for network flow problems.

  Discoverer(s)                Running time
  Hopcroft and Karp            --
  Gabow and Tarjan             O(n(m + n log n))
  Goldberg and Tarjan          --
  Ahuja, Orlin, and Tarjan     O(nm log((n/m) sqrt(log U) + 2))
  Edmonds and Karp             O(m log U (m + n log n))
  Goldberg and Tarjan          O(nm log(n^2/m) log(nC))
  Orlin, and Tarjan            O(m log n (m + n log n))
After the labeling algorithm was introduced by Ford and Fulkerson (1956), numerous algorithms were proposed for the classical maximum flow problem, but its variants received little attention. A maximum flow problem with an additional convex budget constraint was described by Fulkerson (1959). Malek-Zavarei and Frisch (1971) describe a maximum flow problem with side constraints on some nodes and propose a heuristic solution. Ahuja and Orlin (1995) introduce the constrained maximum flow problem that we study in this paper and propose a polynomial-time capacity scaling algorithm. In Çalışkan (2009), we point out that in some cases Ahuja and Orlin (1995)'s algorithm does not converge to the optimal solution and propose a slight modification to the algorithm. To date, no specialized simplex algorithm has been proposed for the constrained maximum flow problem. In the simplex algorithm specialized for the maximum flow problem, dual variables and reduced costs need not be calculated explicitly because of the particular structure of the basis. If the simplex algorithm is specialized for the constrained maximum flow problem, the resulting algorithm will benefit from this structure as well.
Figure 1: An example of a flow network with a maximum flow. The source is s and the sink is t; the numbers denote flow and capacity.
The max-flow problem is, roughly speaking, to calculate the maximum amount of "stuff" that can move from one end of a network to another, given the capacity limitations of the network's links. The stuff could be data packets travelling over the Internet or boxes of goods travelling over the highways; the links' limitations could be the bandwidth of Internet connections or the average traffic speeds on congested roads.
More technically, the problem has to do with what mathematicians call graphs. A graph is a collection of vertices and edges, generally depicted as circles and the lines connecting them. The standard diagram of a communications network is a graph, as is, say, a family tree. In the max-flow problem, one of the vertices is designated the source, where all the stuff comes from, and another is designated the sink (or drain), where all the stuff is headed. Each edge has an associated capacity: how much stuff can pass over it.
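These definitions can be written down directly. The toy network below is an illustration of ours (the vertices and capacities are made up): vertex 's' is the source, 't' is the sink, and capacity[u][v] records how much stuff edge (u, v) can carry.

```python
# A toy flow network as a nested capacity map.
capacity = {
    's': {'a': 3, 'b': 2},  # two edges leave the source
    'a': {'t': 2, 'b': 1},
    'b': {'t': 3},
    't': {},                # nothing leaves the sink
}

# The total capacity leaving the source bounds the maximum flow from above:
out_of_source = sum(capacity['s'].values())
print(out_of_source)  # 5, so no s-t flow can exceed 5 units
```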
Table 2: Algorithms for solving the maximum flow problem.

  Method                            Complexity         Description
  Linear programming                --                 Constraints given by the definition of a legal flow.
  Ford-Fulkerson algorithm          O(E max|f|)        As long as there is an open path through the residual graph, send the minimum of the residual capacities on the path. The algorithm works only if all weights are integers; otherwise the Ford-Fulkerson algorithm may not converge to the maximum value.
  Edmonds-Karp algorithm            O(VE^2)            A specialization of Ford-Fulkerson, finding augmenting paths with breadth-first search.
  Dinitz blocking flow algorithm    O(V^2 E)           The maximum flow in a layered graph can be calculated in O(VE) time, and the maximum number of phases is V - 1. In networks with unit capacities, Dinic's algorithm terminates in O(E sqrt(V)) time.
  Dinitz blocking flow algorithm    O(VE log V)        The dynamic trees data structure speeds up the maximum
  with dynamic trees                                   flow computation in the layered graph to O(E log V).
  Push-relabel algorithm            O(VE log(V^2/E))   The algorithm builds limited-size trees on the residual
  with dynamic trees                                   graph with respect to the height function; these trees provide multilevel push operations.
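The augmenting-path idea behind the Ford-Fulkerson and Edmonds-Karp rows above can be sketched compactly. The following is a minimal illustration of ours, not tuned code: breadth-first search finds a shortest augmenting path in the residual graph, the bottleneck residual capacity is pushed along it, and reverse arcs let later paths undo earlier flow.

```python
from collections import deque

def edmonds_karp(capacity, s, t):
    """Edmonds-Karp: Ford-Fulkerson with BFS augmenting paths, O(VE^2).
    `capacity[u][v]` is the capacity of arc (u, v)."""
    # residual capacities, including zero-capacity reverse arcs
    res = {u: dict(adj) for u, adj in capacity.items()}
    for u, adj in capacity.items():
        for v in adj:
            res.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        pred = {s: None}
        q = deque([s])
        while q and t not in pred:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in pred:
                    pred[v] = u
                    q.append(v)
        if t not in pred:
            return flow  # no augmenting path left: flow is maximum
        # bottleneck = minimum residual capacity along the path
        bottleneck, v = float('inf'), t
        while pred[v] is not None:
            bottleneck = min(bottleneck, res[pred[v]][v])
            v = pred[v]
        # push the bottleneck along the path, updating reverse arcs
        v = t
        while pred[v] is not None:
            u = pred[v]
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
            v = u
        flow += bottleneck
```

For example, on the network {'s': {'a': 3, 'b': 2}, 'a': {'t': 2, 'b': 1}, 'b': {'t': 3}} the routine returns a maximum flow of 5.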
We conclude this section with a brief discussion of the generalized flow algorithm. The algorithm is based on ideas from two flow algorithms: the cycle-canceling algorithm of Goldberg and Tarjan, described in Chapter 4, and the fat-path maximum flow algorithm of Edmonds and Karp, which finds a maximum flow in a network by repeatedly augmenting along a highest-capacity residual path.
Each iteration of the algorithm starts with cycle-canceling. Canceling a flow-generating cycle in a generalized flow network creates an excess at some vertex of the cycle. If we cancel cycles other than the most efficient ones, the residual graph of the resulting generalized pseudoflow will have flow-generating cycles that do not contain the source. The algorithm cancels flow-generating cycles by an adaptation of the Goldberg-Tarjan algorithm. Using the dynamic tree data structure, this phase can be implemented to run in O(n^2 m log n log B) time. The resulting excesses at vertices other than the source are moved to the source along augmenting paths in the second phase of the algorithm.
Consider a generalized pseudoflow that has non-negative excesses and whose residual graph has no flow-generating cycles. The key idea of the second phase is to search for augmenting paths from vertices with excess to the source that result in a significant increase in the excess of the source. The algorithm maintains a scaling parameter Δ such that the maximum excess at the source is at most 2mΔ more than the current excess. It looks for an augmenting path that can increase the excess of the source by at least Δ. If the residual graph of the current generalized pseudoflow has no flow-generating cycles, then one can find a sequence of such paths such that, after augmenting the generalized flow along these paths, the maximum excess at the source is at most mΔ more than the current excess. This phase can be implemented in O(m(m + n log n)) time by a sequence of O(m) single-source shortest path computations on graphs with arcs of non-negative cost. Then Δ is divided by two and a new iteration is started. After O(m log B) iterations, when the excess at the source is very close to the optimum value, Truemper's algorithm can be applied to bring all of the remaining excesses to the source.
Maggs (personal communication, 1989) has observed that this algorithm can be improved through better balancing of the two phases. Dividing Δ by a factor of 2^a after each iteration decreases the number of iterations by a factor of a and increases the time required for the second phase by a factor of 2^a.
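The balancing trade-off can be seen with back-of-the-envelope arithmetic. The sketch below is an illustration of ours with made-up costs (the first phase is assumed much more expensive per iteration than the second): with scaling factor 2^a, the number of iterations shrinks like (log2 B)/a while the per-iteration path-phase work grows like 2^a, so the total is minimized at a moderate a rather than at a = 1.

```python
# Hypothetical per-iteration costs: phase 1 (cycle canceling) dominates.
def total_work(a, log2_B=64, phase1=1000.0, phase2=1.0):
    iterations = log2_B / a          # fewer, coarser scaling iterations
    return iterations * (phase1 + (2 ** a) * phase2)

for a in (1, 2, 4, 8, 16):
    print(a, total_work(a))
```

With these illustrative numbers, a = 8 gives roughly a sixfold improvement over a = 1, while a = 16 overshoots and the 2^a term dominates again.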