Several Algorithms For Computing The Shortest Path Computer Science Essay



Shortest path Background

Efficient management of networks requires that the shortest route from one point (node) to another is known; this is termed the shortest path. It is often necessary to determine alternative routes through the network, in case any part of the shortest path is damaged or busy. The analysis of transportation networks is one of many application areas in which the computation of shortest paths is one of the most fundamental problems, and these problems have been the subject of extensive research for many years. The shortest path problem was one of the first network problems studied in terms of operations research: given two specific nodes s and t in the network, the goal is to find a minimum-cost way to go from s to t. Several algorithms for computing the shortest path between two nodes of a graph are known; the classic one is due to Dijkstra (1959).[2] Each node is labeled with its distance from the source node along the best-known path. Initially, no paths are known, so all nodes are labeled with infinity. As the algorithm proceeds and paths are found, the labels may change, reflecting better paths. A label may be tentative or permanent. Initially, all labels are tentative. When it is discovered that a label represents the shortest possible path from the source to that node, it is made permanent and never changed thereafter.
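The labeling scheme just described can be sketched in a few lines of Python. This is a minimal, illustrative implementation (graph representation and node names are assumptions for the example, not part of the original description): tentative labels start at infinity and are tightened until a node is popped from the priority queue, at which point its label becomes permanent.

```python
import heapq

def dijkstra(graph, source, target):
    """Shortest path in a graph given as {node: [(neighbor, cost), ...]}.
    Labels are tentative until a node is popped from the heap, after which
    they are permanent and never change."""
    dist = {node: float("inf") for node in graph}  # tentative labels
    dist[source] = 0
    prev = {}
    permanent = set()
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node in permanent:
            continue
        permanent.add(node)                # label is now final
        if node == target:
            break
        for neighbor, cost in graph[node]:
            if d + cost < dist[neighbor]:  # found a better path
                dist[neighbor] = d + cost
                prev[neighbor] = node
                heapq.heappush(heap, (dist[neighbor], neighbor))
    path = [target]                        # walk predecessors back to source
    while path[-1] != source:
        path.append(prev[path[-1]])
    return dist[target], path[::-1]
```

With non-negative costs, each node is made permanent exactly once, which is what guarantees the final labels are true shortest distances.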

Clearly, this approach is not feasible for dynamic networks, where the travel cost is time-dependent or randomly varying. However, the majority of published research on shortest path algorithms has dealt with static networks that have fixed topology and fixed costs. A few early attempts at dynamic approaches, referenced by Chabini (1997), are Cooke and Halsey (1966) and Dreyfus (1979). Not much more than a decade ago, Van Eck (1990) reported several hours as the average time for a computer to churn through an all-to-all calculation on a small-scale static network of 250 nodes, and several days on a large-scale network of 16,000 nodes.[2,3]

One way of dealing with dynamic networks is splitting continuous time into discrete time intervals with fixed travel costs, as noted by Chabini (1997). Thus, understanding shortest path algorithms in static networks becomes fundamental to working with dynamic networks.[3]
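One way to make that discretization concrete is a time-dependent variant of Dijkstra, in which each link's cost is a function of the entry time. The sketch below is an assumption-laden illustration (the graph, the cost functions, and the FIFO requirement it relies on are all supplied for the example), not a published algorithm from the sources cited:

```python
import heapq

def time_dependent_dijkstra(graph, source, target, depart):
    """graph: {node: [(neighbor, cost_fn), ...]}, where cost_fn(t) is the
    travel time when entering the link at time t.  Label-setting stays
    correct under a FIFO assumption: leaving later never lets you arrive
    earlier."""
    best = {source: depart}                # earliest known arrival times
    heap = [(depart, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if node == target:
            return t - depart              # total travel time
        if t > best.get(node, float("inf")):
            continue                       # stale heap entry
        for neighbor, cost_fn in graph.get(node, []):
            arrival = t + cost_fn(t)       # cost depends on entry time
            if arrival < best.get(neighbor, float("inf")):
                best[neighbor] = arrival
                heapq.heappush(heap, (arrival, neighbor))
    return None
```

Each discrete time interval simply supplies a different value of `cost_fn(t)`, which is why understanding the static algorithm carries over so directly.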

Shortest path in different networks

Shortest Path In Static Networks

Several algorithms, and data structures for algorithms, have been put forward since the classic shortest path algorithm of Dijkstra (1959). In its modified version, this algorithm computes a one-to-all path in all directions from the origin node and terminates when the destination has been reached. Denardo and Fox (1979) introduced the data structures of reaching, pruning, and buckets.[3]

The original Dijkstra algorithm explores an unnecessarily large search area, which led to the development of heuristic searches, among them the A* algorithm, which searches in the direction of the destination node.[4] This avoids considering directions with unfavorable results and reduces computation time. A significant improvement is seen in the bi-directional search, which computes a path from both origin and destination, ideally meeting in the middle. In relation to this technique, it should be remarked that Jacob et al. (1998) discard bi-directional algorithms as impractical in their computational study of routing algorithms for realistic transportation networks.[4]
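The directional bias of A* comes from adding a heuristic estimate of the remaining distance to each label. A minimal sketch, assuming nodes carry planar coordinates so that straight-line distance can serve as the (admissible) heuristic:

```python
import heapq

def a_star(graph, coords, source, target):
    """A* search.  The heuristic h (straight-line distance to the target)
    must never overestimate the true remaining cost, or optimality is lost."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[target]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    g = {source: 0}                        # cost from source so far
    heap = [(h(source), source)]           # ordered by g + h
    done = set()
    while heap:
        _, node = heapq.heappop(heap)
        if node == target:
            return g[node]
        if node in done:
            continue
        done.add(node)
        for neighbor, cost in graph.get(node, []):
            tentative = g[node] + cost
            if tentative < g.get(neighbor, float("inf")):
                g[neighbor] = tentative
                heapq.heappush(heap, (tentative + h(neighbor), neighbor))
    return None
```

With `h` identically zero this degenerates to plain Dijkstra; the better the heuristic, the more of the unfavorable search area is skipped.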

Zhan and Noon (1996) conducted a comprehensive study of shortest path algorithms on 21 real road networks from 10 different states in the U.S., with networks ranging from 1600/500 to 93000/264000 nodes/arcs. In this study, Dijkstra-based algorithms, though differing in data structure, outperformed other algorithms on one-to-one and one-to-all fastest path problems.[5]

Shortest Path In Dynamic Networks

Recent advances in computer and communications technology, together with developments in ITS, have rekindled interest in dynamic networks. This interest in the dynamic management of transportation has also brought forward a set of algorithms that are particularly aimed at optimizing the run-time of computations on large-scale networks.[6]

Chabini (1998) lists the following types of dynamic shortest path problems, which differ in: (a) fastest versus minimum-cost (or shortest) path problems; (b) discrete versus continuous representation of time; (c) first-in-first-out (FIFO) networks versus non-FIFO networks, in which a vehicle departing later than a previous vehicle can arrive at the destination before it; (d) whether waiting is allowed at nodes; (e) the questions asked: one-to-all for a given departure time or for all departure times, and all-to-one for all departure times; and (f) integer versus real-valued link travel costs.[6]

Fu and Rilett (1996) investigate what they call the dynamic and stochastic shortest path problem by modeling link travel times as a continuous-time stochastic process. The aim of their research was to estimate the travel time for a particular path over a given time period. They deviate from the mainstream appraisal of the A* algorithm and advocate the k-shortest path approach instead. The reason is that standard shortest path algorithms may fail to find the minimum expected-time paths, particularly when dealing with non-linear optimization, as is the case in developing travel time estimation models. In the absence of real data, however, their research is based on a hypothetical change pattern in travel time.[6]
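The k-shortest path idea is to enumerate several low-cost candidates rather than committing to a single optimum. The sketch below is a deliberately simple illustration (the graph is invented for the example): partial paths are popped from a heap in cost order, so the first k complete paths reached are the k cheapest. Yen's algorithm would be the standard choice for larger networks.

```python
import heapq

def k_shortest_paths(graph, source, target, k):
    """Enumerate the k lowest-cost loop-free source-to-target paths by
    expanding partial paths from a heap in order of total cost."""
    heap = [(0, [source])]
    found = []
    while heap and len(found) < k:
        cost, path = heapq.heappop(heap)
        node = path[-1]
        if node == target:
            found.append((cost, path))     # cheapest remaining complete path
            continue
        for neighbor, c in graph.get(node, []):
            if neighbor not in path:       # keep candidate paths loop-free
                heapq.heappush(heap, (cost + c, path + [neighbor]))
    return found
```

Having the second- and third-best paths on hand is exactly what makes it possible to evaluate expected travel times across alternatives instead of trusting one deterministic optimum.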

Chabini (1997, 1998) places emphasis on the all-to-one minimum cost path as the key algorithm in relation to ITS, the reason being that only a limited set of all network nodes are destination nodes in realistic road networks, while a considerably larger number of nodes are origin nodes (moving vehicles tend to converge on the same goal rather than spread in all directions).[5]
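For static costs, an all-to-one computation needs no new machinery: running Dijkstra from the destination over the reversed graph yields the minimum cost from every origin at once. A sketch under that assumption (graph representation invented for the example):

```python
import heapq

def all_to_one(graph, destination):
    """Minimum cost from every node to one destination, via Dijkstra on
    the reversed graph: an arc u->v of cost c becomes v->u of cost c."""
    reverse = {node: [] for node in graph}
    for node, edges in graph.items():
        for neighbor, cost in edges:
            reverse[neighbor].append((node, cost))
    dist = {destination: 0}
    heap = [(0, destination)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                       # stale heap entry
        for pred, cost in reverse[node]:
            if d + cost < dist.get(pred, float("inf")):
                dist[pred] = d + cost
                heapq.heappush(heap, (dist[pred], pred))
    return dist
```

One such run per destination covers all origins, which is why the all-to-one formulation is so economical when destinations are few and origins are many.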

Horn (1999) continues along the research trails of Chabini (1997) and Fu and Rilett (1996), but uses a less detailed articulation of travel dynamics, reflecting, as he puts it, the recognition that information about network conditions in most parts of the world is likely to be sparse, and that in most cases only estimates of average speed on individual network links are available. On the presumption that these estimates allow for variation in speed, congestion, and delays at nodes, he studied a number of Dijkstra-variant algorithms that address these particular conditions. Most importantly, he proposes an algorithm that calculates an approximation of shortest-time path travel duration (path travel time), independent of the particular navigation between nodes. For an experienced vehicle driver, the estimated travel time may be more important than the exact route to be followed. This is a noteworthy addition to the fastest path algorithms in dynamic networks.[5]

Shortest path constraints


When too many packets are present in (a part of) the subnet, performance degrades. This situation is called congestion. When the number of packets dumped into the subnet by the hosts is within its carrying capacity, they are all delivered (except for a few that are afflicted with transmission errors), and the number delivered is proportional to the number sent. However, as traffic increases too far, the routers are no longer able to cope, and they begin losing packets. This tends to make matters worse. At very high traffic, performance collapses completely, and almost no packets are delivered.[7]

Congestion can be brought about by several factors. If, all of a sudden, streams of packets begin arriving on three or four input lines and all need the same output line, a queue will build up. If there is insufficient memory to hold all of them, packets will be lost. Adding more memory may help up to a point, but Nagle (1987) discovered that if routers have an infinite amount of memory, congestion gets worse, not better, because by the time packets get to the front of the queue, they have already timed out (repeatedly) and duplicates have been sent. All these packets will be dutifully forwarded to the next router, increasing the load all the way to the destination.
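The buffer-overflow effect described above is easy to demonstrate with a toy single-output router (the model, tick-based timing, and parameters are assumptions made for illustration, not from any cited source): under light load everything gets through, while bursts exceeding the buffer are simply lost.

```python
from collections import deque

def simulate_router(arrivals, buffer_size, service_rate=1):
    """Toy router with one output line.  Each tick, arrivals[t] packets
    arrive; packets that find the buffer full are dropped, then up to
    service_rate packets are forwarded.  Leftovers drain at the end."""
    queue = deque()
    delivered = dropped = 0
    for burst in arrivals:
        for _ in range(burst):
            if len(queue) < buffer_size:
                queue.append(1)
            else:
                dropped += 1               # buffer full: packet lost
        for _ in range(service_rate):
            if queue:
                queue.popleft()
                delivered += 1
    delivered += len(queue)                # queued packets drain eventually
    return delivered, dropped
```

With arrivals within capacity nothing is dropped; sustained bursts of four packets per tick against a three-packet buffer and a one-packet-per-tick line lose most of the traffic, mirroring the collapse described above.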

Slow processors can also cause congestion. If a router's CPU is slow at performing the bookkeeping tasks required of it (queuing buffers, updating tables, etc.), queues can build up even though there is excess line capacity. Similarly, low-bandwidth lines can cause congestion. Upgrading the lines but not the processors, or vice versa, often helps a little, but frequently just shifts the bottleneck; likewise, upgrading part, but not all, of the system often just moves the bottleneck somewhere else. The real problem is frequently a mismatch between parts of the system, and it will persist until all the components are in balance.

Congestion tends to feed upon itself and become worse. If a router has no free buffers, it must ignore newly arriving packets. When a packet is discarded, the sending router (a neighbor) may time out and retransmit it, perhaps many times. Since it cannot discard the packet until it has been acknowledged, congestion at the receiver's end forces the sender to refrain from releasing a buffer it would normally have freed. In this manner, congestion backs up, like cars approaching a toll booth.[7]

General principles of congestion control

Many problems in complex systems, such as computer networks, can be viewed from a control theory point of view. This approach leads to dividing all solutions into two groups: open loop and closed loop. Open loop solutions attempt to solve the problem by good design, in essence, to make sure it does not occur in the first place. Once the system is up and running, midcourse corrections are not made.[8]

Tools for doing open-loop control include deciding when to accept new traffic, deciding when to discard packets and which ones, and making scheduling decisions at various points in the network. All of these have in common the fact that they make decisions without regard to the current state of the network.

In contrast, closed loop solutions are based on the concept of a feedback loop. This approach has three parts when applied to congestion control:

1. Monitor the system to detect when and where congestion occurs.

2. Pass this information to places where action can be taken.

3. Adjust system operation to correct the problem.

Various metrics can be used to monitor the subnet for congestion. Chief among these are the percentage of all packets discarded for lack of buffer space, the average queue lengths, the number of packets that time out and are retransmitted, the average packet delay, and the standard deviation of packet delay. In all cases, rising numbers indicate growing congestion.

The second step in the feedback loop is to transfer the information about the congestion from the point where it is detected to the point where something can be done about it. The obvious way is for the router detecting the congestion to send a packet to the traffic source or sources, announcing the problem. Of course, these extra packets increase the load at precisely the moment more load is not needed, namely, when the subnet is congested.

An alternative is to reserve a bit or field in every packet: when a router detects the congested state, it fills in this field in all outgoing packets to warn its neighbors.

Still another approach is to have hosts or routers send probe packets out periodically to explicitly ask about congestion. This information can then be used to route traffic around problem areas. Some radio stations have helicopters flying around their cities to report on road congestion in the hope that their listeners will route their packets (cars) around hot spots.

In all feedback schemes, the hope is that knowledge of congestion will cause the hosts to take appropriate action to reduce it. To work correctly, the time scale must be adjusted carefully. If, every time two packets arrive in a row, a router yells STOP, and every time it is idle for 20 microseconds it yells GO, the system will oscillate wildly and never converge. On the other hand, if it waits 30 minutes to make sure before saying anything, the congestion control mechanism will react too sluggishly to be of any real use. To work well, some kind of averaging is needed, but getting the time constant right is a nontrivial matter.
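The usual averaging device for this kind of signal is an exponentially weighted moving average; the sketch below (the smoothing constant and sample values are illustrative assumptions) shows how the choice of time constant trades responsiveness against oscillation.

```python
def ewma(samples, alpha=0.2):
    """Exponentially weighted moving average of a congestion signal.
    alpha near 1 tracks the raw signal closely (risking oscillation);
    alpha near 0 reacts sluggishly but damps out transient spikes."""
    avg = samples[0]
    smoothed = []
    for s in samples:
        avg = alpha * s + (1 - alpha) * avg
        smoothed.append(avg)
    return smoothed
```

A brief queue-length spike in the samples is heavily damped with a small alpha, so the router does not yell STOP at every two-packet burst; a large alpha recreates the oscillation problem described above.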

Many congestion control algorithms are known. To organize them in a sensible way, Yang and Reddy (1995) developed a taxonomy of congestion control algorithms. They begin by dividing all algorithms into open loop or closed loop, as described above. They further divide the open loop algorithms into ones that act at the source and ones that act at the destination. The closed loop algorithms are also divided into two subcategories: explicit feedback versus implicit feedback. In explicit-feedback algorithms, packets are sent back from the point of congestion to warn the source. In implicit-feedback algorithms, the source deduces the existence of congestion by making local observations, such as the time needed for acknowledgements to come back.[7]

The presence of congestion means that the load is (temporarily) greater than the resources (in part of the system) can handle. Two solutions come to mind: increase the resources or decrease the load. For example, the subnet may start using dial-up telephone lines to temporarily increase the bandwidth between certain points. In systems like SMDS, it may ask the carrier for additional bandwidth for a while. On satellite systems, increasing transmission power often gives higher bandwidth. Splitting traffic over multiple routes instead of always using the best one may also effectively increase the bandwidth. Finally, spare routers that are normally used only as backups (to make the system fault tolerant) can be put on-line to give more capacity when serious congestion appears.[8]

However, sometimes it is not possible to increase the capacity, or it has already been increased to the limit. The only way then to beat back the congestion is to decrease the load. Several ways exist to reduce the load, including denying service to some users, degrading service to some or all users, and having users schedule their demands in a more predictable way.