DHT algorithms are a promising approach because of their routing performance. Nevertheless, they are not yet commonly used in commercial systems; most P2P file-sharing systems still rely on unstructured P2P mechanisms. This is because current DHT algorithms suffer from two major issues.
The first issue is the overhead of maintaining routing information. In a large-scale P2P system, especially a dynamic one, this maintenance traffic can be very high. Whenever the system detects a node join, it sends update messages to all affected nodes. For a node that joins or leaves frequently, the system must repeatedly send join or leave messages to the affected nodes, forcing them to modify their routing tables again and again. This causes a routing-information thrashing problem in the network.
The paper "Reducing Maintenance Overhead in DHT Based Peer-to-Peer Algorithms" proposes a solution to this issue: an optimal node-availability measurement combined with a routing-table maintenance strategy. First, a simple scheme measures a node's availability using an average live time table (ALT), which records how long the node stays in the system. Each ALT has 10 entries, recording the durations of the node's 10 most recent appearances in the system; when the node joins again and the table already holds ten records, the oldest entry is removed. From these entries an average duration time, T, is computed. Every minute, the node updates the current ALT entry, so that T remains nearly correct even if the node fails suddenly. An alternative strategy is to let one of the node's neighbours perform this bookkeeping; this adds no extra overhead, since neighbouring nodes must already exchange messages periodically. In this proposal, each routing-table entry also records the T of the corresponding node. When a new node joins, an affected node first compares the T of the new node with the T of the node currently occupying the affected entry; the entry is updated only if the new node has a higher T, otherwise nothing is done. A node joining the system for the first time has its T set to zero and is therefore not suitable for inclusion in routing tables. Another way to reduce the overhead is to delay updates to routing information. The nodes already present in routing tables are relatively stable, and a large fraction of routing failures are caused by other problems, such as network congestion, many of which recover quickly.
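The ALT bookkeeping described above can be sketched in a few lines. This is a minimal illustration of the scheme, not the paper's implementation; the class name and the once-per-minute tick are our own framing of what the text describes:

```python
from collections import deque

class AvailabilityTracker:
    """Sliding window of a node's most recent session durations (the ALT)."""

    WINDOW = 10  # the scheme keeps the 10 most recent appearances

    def __init__(self):
        # A bounded deque drops the oldest entry automatically,
        # matching the "remove the oldest value" rule.
        self.alt = deque(maxlen=self.WINDOW)

    def start_session(self):
        # A new appearance begins with a duration of 0 minutes.
        self.alt.append(0)

    def tick_minute(self):
        # Called once per minute while the node is online, so that T
        # stays nearly correct even after a sudden node failure.
        if self.alt:
            self.alt[-1] += 1

    def average_duration(self):
        """T: the average live time over the recorded appearances."""
        return sum(self.alt) / len(self.alt) if self.alt else 0.0
```

With this sketch, a node that stayed online for 5 minutes and then rejoined for 1 minute would report T = 3.0, and a first-time node reports T = 0, making it unsuitable for routing tables as the text requires.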
In the original DHT algorithm, if the node chosen for the next routing hop does not respond, the current node concludes that it has failed, and the affected entry is replaced by another node in the routing table. In the proposed solution, we assume that most such events are not real node failures. The system first checks the T of the node in the entry: if T exceeds a certain threshold, the entry is not replaced but marked as a possible failure, and it can still be chosen for subsequent requests. Only after a number of consecutive failures do we conclude that the node has really failed and start the replacement procedure. A routing-table entry is also updated when a node confirms that one of its neighbours has failed; since neighbouring nodes exchange messages periodically, such a failure is detected almost immediately. If the failed node has a high T, meaning it was relatively stable, the detecting node broadcasts the failure message quickly so that the affected routing tables are updated immediately. If the failed node has a low T, the detecting node rechecks it several times, and only after a specified time has passed does it broadcast the failure message to all nodes. This strategy effectively reduces the maintenance overhead, since the original DHT algorithm performs much useless maintenance work for nodes that join and leave frequently.
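The suspect-then-replace decision above might look like the following sketch. The threshold and failure-count values are assumptions for illustration only, since the text speaks merely of "a certain threshold" and "a number of failures":

```python
T_THRESHOLD = 60.0        # minutes; assumed value, not from the paper
MAX_SUSPECT_FAILURES = 3  # assumed consecutive-failure limit

class RoutingEntry:
    """One routing-table entry, carrying the target node's T value."""
    def __init__(self, node_id, avg_duration):
        self.node_id = node_id
        self.T = avg_duration     # the node's average live time
        self.failures = 0         # consecutive unanswered requests

def on_lookup_timeout(entry):
    """Decide whether a non-responding entry is really dead."""
    if entry.T > T_THRESHOLD and entry.failures < MAX_SUSPECT_FAILURES:
        # Stable node: assume a transient problem (e.g. congestion),
        # keep the entry but mark it as a possible failure.
        entry.failures += 1
        return "suspect"
    # Unstable node, or too many consecutive failures: replace it.
    return "replace"
```

A stable node (high T) survives a few timeouts before being replaced, while a node with low T is replaced on the first timeout, which is exactly the asymmetry the strategy relies on.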
The second issue is high lookup latency, because each lookup must contact many nodes in sequence. We discuss two peer-to-peer routing algorithms with short lookup paths, a one-hop routing scheme and a two-hop routing scheme, both described in the "Efficient routing for peer-to-peer overlays" research paper.
First, we discuss the one-hop routing scheme. We show how information about node joins and leaves can be spread immediately, so that the affected nodes maintain correct routing tables with complete membership information. Joins and leaves raise two important problems that the algorithm must address. First, local information about the membership change must be updated, so that each node in the system can determine precisely which interval in the identifier space it is responsible for. Second, information about the change must be conveyed to all nodes in the ring, so that they maintain correct information about the system membership and can consequently route in a single hop. To maintain correct local information, every node n periodically runs a stabilization routine in which it sends keep-alive messages to its successor s and predecessor p. Node s checks whether n is indeed its predecessor and, if not, notifies n of the existence of another node between them; similarly, p checks whether n is indeed its successor and, if not, notifies n. If either s or p does not respond, n pings it repeatedly until a time-out period expires, at which point n decides the node is unreachable or dead. A joining node contacts some node already in the system to obtain its view of the current membership; this protocol is the same as in Chord [15, 16]. This information enables the joining node to contact its predecessor and successor and inform them of its presence. To maintain accurate full routing tables, notifications of membership-change events, i.e., joins and leaves, must reach every node in the system within a specified amount of time. The goal is to do this with low notification delay but reasonable bandwidth consumption, since bandwidth is likely to be the scarcest resource in the system. We achieve this goal by superimposing a well-defined hierarchy on the system.
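The stabilization round can be illustrated with a small simulation. Here the sorted `ring` list stands in for the actual network, so this sketch only shows the logic of the successor/predecessor check, not real keep-alive message exchange or time-outs:

```python
def neighbours(ring, node_id):
    """True successor and predecessor of node_id on a sorted identifier ring."""
    i = ring.index(node_id)
    succ = ring[(i + 1) % len(ring)]
    pred = ring[(i - 1) % len(ring)]
    return succ, pred

def stabilize(ring, node_id, believed_succ, believed_pred):
    """One stabilization round: compare the node's local view with the
    true ring state and return any corrections, as the keep-alive
    exchange in the text would report them."""
    succ, pred = neighbours(ring, node_id)
    corrections = {}
    if believed_succ != succ:
        # A node has appeared (or vanished) between us and our
        # believed successor; the keep-alive reply points us at it.
        corrections["successor"] = succ
    if believed_pred != pred:
        corrections["predecessor"] = pred
    return corrections
```

For example, if node 25 joins between nodes 20 and 30, node 20's next stabilization round reports the corrected successor, which is how the join becomes visible locally.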
This hierarchy is used to form dissemination trees, which propagate event information. We impose the hierarchy on a system with dynamic membership by dividing the 128-bit circular identifier space into k equal contiguous intervals called slices. The i-th slice contains all nodes currently in the overlay whose identifiers lie in the range [i · 2^128/k, (i + 1) · 2^128/k). Since nodes have uniformly distributed random identifiers, these slices contain about the same number of nodes at any time. Each slice has a slice leader, chosen dynamically as the node that succeeds the mid-point of the slice's identifier space; for example, the leader of the i-th slice is the successor of the key (i + 1/2) · 2^128/k. When a new node joins the system, it learns about its slice leader from one of its neighbours, along with other information such as the data it is responsible for and its routing table.
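The slice arithmetic is simple enough to state directly. This sketch assumes integer node identifiers and computes both the slice index and the slice leader's key in exact integer arithmetic, so it avoids floating-point error in the 128-bit space:

```python
ID_BITS = 128
ID_SPACE = 2 ** ID_BITS

def slice_index(node_id, k):
    """Which of the k equal slices a 128-bit identifier falls into:
    floor(node_id / (2^128 / k))."""
    return node_id * k // ID_SPACE

def slice_leader_key(i, k):
    """Mid-point of slice i, i.e. (i + 1/2) * 2^128 / k; the slice
    leader is the ring successor of this key."""
    return (2 * i + 1) * ID_SPACE // (2 * k)
```

As a sanity check, the mid-point key of any slice must itself fall inside that slice, which the integer form preserves.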
"Similarly, each slice is divided into equal-sized intervals called units. Each unit has a unit leader, which is dynamically chosen as the successor of the mid-point of the unit identifier space. Figure 1 depicts how information flows in the system. When a node (labelled X in Figure 1) detects a change in membership (its successor failed or it has a new successor), it sends an event notification message to its slice leader (1). The slice leader collects all event notifications it receives from its own slice and aggregates them for t_big seconds before sending a message to other slice leaders (2). To spread out bandwidth utilization, communication with different slice leaders is not synchronized: the slice leader ensures only that it communicates with each individual slice leader once every t_big seconds. Therefore, messages to different slice leaders are sent at different points in time and contain different sets of events. The slice leaders aggregate messages they receive for a short time period t_wait and then dispatch the aggregate message to all unit leaders of their respective slices (3). A unit leader piggybacks this information on its keep-alive messages to its successor and predecessor (4)." This paragraph is quoted from the "Efficient routing for peer-to-peer overlays" research paper.
This design has several benefits. First, a structure with well-defined event dissemination trees is imposed on the system; this structure ensures there is no redundancy, which leads to efficient bandwidth usage. Second, aggregating many events into one message avoids small messages; small messages are a problem because the protocol overhead becomes significant relative to the message size, leading to higher bandwidth usage. The scheme is a three-level hierarchy. The choice of the number of levels involves a trade-off: a large number of levels implies a bigger delay in propagating the information, whereas a small number of levels produces a heavy load on the nodes in the upper levels. A three-level hierarchy is chosen because it has low delay while keeping bandwidth consumption at the top-level nodes reasonable.
If the one-hop scheme is scaled beyond a few million nodes, its bandwidth requirement becomes too large. To overcome this problem, we now describe the design of a two-hop lookup scheme, which still provides faster lookups than existing peer-to-peer routing algorithms.
This two-hop design is based on a structure similar to the one-hop scheme described above. Every slice leader chooses, for each other slice, a group of nodes from its own slice; with k slices there are (k - 1) such groups, each of size L. The groups may be chosen based on a proximity metric or at random. Each slice leader sends one group's routing information to the corresponding slice leader, which disseminates it to all members of its slice exactly as in the one-hop lookup algorithm. Thus each node holds routing information for L nodes in every other slice. Each node orders these L nodes by network distance to itself; maintaining such an ordering for every slice, it builds a table of (k - 1) nodes that are close to it, one from every other slice.
In addition, every node in the system stores full routing information about all the nodes in its own slice. When a node wants to find the successor of a key, it sends the request to its chosen node in the slice containing the key. That node searches its own routing table to identify the key's successor and forwards the request to it; these chosen nodes thus act as the intermediate forwarding nodes. Information flows through the system as in the one-hop lookup algorithm; the only difference lies in what a slice leader sends to the other slice leaders in step 2, as described previously. The message it sends to the i-th slice leader is empty unless one or more of the L nodes it previously sent to that slice leader have left the system; in that case, it sends all information about the departed nodes and the nodes it selects to replace them. When a node learns of different nodes for some other slice, it probes the newly learned nodes and updates its proximity information for that slice. Slice-leader failures and unit-level fault tolerance are handled in the same way as in the one-hop lookup algorithm.
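A toy model of the two-hop lookup may help. It uses a 16-bit identifier space instead of 128 bits and plain dictionaries for the routing state, and it ignores the corner case where a key's true successor lies in the next slice; all names here are illustrative, not from the paper:

```python
import bisect

ID_BITS = 16            # toy identifier space for the sketch
ID_SPACE = 2 ** ID_BITS

def successor(sorted_ids, key):
    """First node id >= key in a sorted list, wrapping around the ring."""
    i = bisect.bisect_left(sorted_ids, key)
    return sorted_ids[i % len(sorted_ids)]

def two_hop_lookup(key, my_slice, k, contacts, slice_members):
    """Resolve successor(key). `contacts` maps every other slice index
    to this node's chosen contact there; `slice_members[s]` is the
    complete sorted membership of slice s, which only nodes inside
    slice s actually store. Returns (forwarding path, result)."""
    key_slice = key * k // ID_SPACE
    if key_slice == my_slice:
        # Key falls in our own slice: resolved from complete local
        # state with no forwarding hops.
        return [], successor(slice_members[my_slice], key)
    # Hop 1: send the request to our chosen contact in the key's slice.
    relay = contacts[key_slice]
    # Hop 2: the relay knows its whole slice and forwards to the successor.
    target = successor(slice_members[key_slice], key)
    return [relay, target], target
```

The point of the sketch is the bound on path length: a lookup touches at most one relay plus the target, regardless of system size, at the cost of every node storing L contacts per foreign slice plus its full own slice.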
Zhiyong Xu, Rui Min and Yiming Hu, "Reducing Maintenance Overhead in DHT Based Peer-to-Peer Algorithms".
Anjali Gupta, Barbara Liskov and Rodrigo Rodrigues, "Efficient Routing for Peer-to-Peer Overlays".