Prefetching Scheme in Ad Hoc Networks

ABSTRACT

In a Mobile Ad-hoc Network (MANET), wireless mobile nodes form a temporary network without any infrastructure support, and nodes forward packets for each other to allow multi-hop communication between nodes that are not directly within wireless transmission range. Due to the lack of infrastructure support, routing and resource constraints are challenging issues in MANETs. Even though the mobile nodes act as routers and many proactive and reactive protocols have been developed, the ultimate goal of an ad-hoc network is access to data anytime and anywhere. This can be achieved by caching schemes; caching is the process of storing frequently used data for future use. Most popular caching schemes do not consider prefetching in their algorithm implementation. Here an extension of path caching, called the Path PreFetching (PPF) algorithm, is proposed to improve the caching technique with a prefetching factor. In the PPF algorithm, a node prefetches paths from its neighbors for future use. To prefetch a path, the neighbor should be at one-hop distance from the source, and it stores the path until the TTL (Time to Live) value expires. Storing the path in a nearby node increases the cache hit ratio, because a mobile node may move far away from the actual source node after T seconds. Simulation with the NS-2 software is used to analyze the performance of the proposed scheme, and finally the proposed scheme is compared with other schemes.

INTRODUCTION

In the ad-hoc network architecture, there is no pre-existing fixed network infrastructure. Nodes of an ad-hoc network are mobile hosts, typically with similar transmission power, bandwidth and computation capabilities. Direct communication between any two nodes is possible when adequate radio propagation conditions and network channel assignment exist; otherwise the nodes communicate through multi-hop routing, in which every node acts as a router. Because of the lack of infrastructure, several routing protocols have been developed: proactive, reactive and hybrid. The main challenges in MANETs are limited memory, bandwidth consumption and limited battery power [1]. To utilize the limited resources effectively, caching techniques are adopted. Caching can be broadly classified into cache data and cache path: CacheData stores the actual data in cache memory, while CachePath stores only the path to where the actual data resides. In the basic case, a source node caches routes so that a route is available when an application running within the same node demands it; we call this source route caching. As an extension, many on-demand routing protocols, such as AODV and DSR, allow an intermediate node that has a cached route to the destination to reply to the source with the cached route; we call this intermediate route caching.

In this paper, we design and evaluate a path prefetching technique to efficiently support data access in ad-hoc networks. The DSR algorithm saves some of the paths discovered during the route discovery phase for future use. In the same way, the proposed PPF algorithm saves the path and broadcasts it only to the one-hop neighbor nodes for effective caching. By implementing PPF, the problem of stale paths and the problem of continuing to use low-quality paths even when significantly shorter paths become available are both avoided.

PROBLEM STATEMENT

When a source node does not have a route to a destination node in its link cache, it initiates route discovery by packet flooding; this is the backbone of all on-demand routing protocols. Caching is used to avoid unnecessary data flooding and resource wastage. Route discovery can potentially cover a large part of the network, yielding a high flooding cost. In ad hoc networks, the flooding cost translates not only into delay and control overhead but also into consumption of node power resources. Minimizing transmissions is crucial for power conservation and for extending the overall network lifetime. The problems associated with a frequent route discovery process are listed below:

Stale path

Data flooding

Limited power

In order to avoid the above-mentioned problems, an efficient cache optimization that reduces the frequency of route request floods should be adopted.

OBJECTIVES

The primary objectives of MANET routing protocols are to maximize network throughput, to minimize energy consumption and to minimize delay. Network throughput is usually measured by the packet delivery ratio, while the most significant contribution to energy consumption is measured by the routing overhead, i.e. the number or size of routing control packets. Caching the data path can reduce bandwidth, power and memory usage because nodes can obtain the required data using fewer hops. One such path caching technique, called PPF, is discussed in this work. PPF can improve CachePath's performance in the following ways:

Avoiding stale paths

Prefetching popular data

Bandwidth reduction

Reducing delay

The rest of the paper is organized as follows. Section 2 presents the traditional CacheData and CachePath schemes. Section 3 discusses related work. Section 4 describes the proposed PPF scheme. Section 5 presents the simulation model and evaluates the performance of PPF against other algorithms. Section 6 concludes the paper.

2 TRADITIONAL CACHING SCHEMES

In CacheData, intermediate nodes cache the data to serve future requests instead of fetching it from the data center. To avoid wasting storage space, a conservative rule should be followed: a node does not cache the data if all requests for the data come from the same node. When the cache size is very large, or for particular data items that most nodes are interested in, the conservative rule may decrease cache performance because data are not cached at every intermediate node. In mobile networks, however, nodes usually have limited cache space and store only frequently accessed data.

Path caching is also an effective scheme for reducing data request delays without creating a large number of data copies. In CachePath, mobile nodes cache the data path and use it to redirect future requests to a nearby node that has the data, instead of to the faraway data center.
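To make the contrast concrete, the following is a minimal sketch of how a node might serve a request under the two schemes. The Node class, its fields and handle_request are hypothetical names introduced purely for illustration, not part of either scheme's original specification.

from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    data_cache: dict = field(default_factory=dict)  # item_id -> data  (CacheData)
    path_cache: dict = field(default_factory=dict)  # item_id -> nearby Node holding the data  (CachePath)

    def handle_request(self, item_id, data_center):
        # CacheData: serve the item directly from the local cache.
        if item_id in self.data_cache:
            return self.data_cache[item_id]
        # CachePath: redirect the request to a nearby node known to hold the item.
        if item_id in self.path_cache:
            return self.path_cache[item_id].handle_request(item_id, data_center)
        # Otherwise the request travels all the way to the data center.
        return data_center[item_id]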

3 RELATED WORKS

Reference [2] shows that data caching can significantly improve the efficiency of information access in a wireless ad hoc network by reducing access latency and bandwidth usage. However, designing efficient distributed caching algorithms is non-trivial when network nodes have limited memory. That work considers the cache placement problem of minimizing total data access cost in ad hoc networks with multiple data items and nodes with limited memory capacity. Defining benefit as the reduction in total access cost, it presents a polynomial-time centralized approximation algorithm that provably delivers a solution whose benefit is at least one-fourth (one-half for uniform-size data items) of the optimal benefit.

Reference [3] proposes an aggregate cache management policy including cache admission control and a cache replacement policy. In IMANETs, caching data items in the local cache helps to reduce latency and increase accessibility. If a Mobile Terminal (MT) is located along the path on which a request packet travels to an Access Point (AP) and has the requested data item in its cache, it can serve the request without forwarding it to the AP; in the absence of caching, all requests would be forwarded to the appropriate APs. Since the local caches of the MTs virtually form an aggregate cache, the decision whether to cache a data item depends not only on the MT itself but also on the neighboring MTs. In the aggregate cache, a cache hit can be of two types: a local cache hit or a remote cache hit.
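The two hit types amount to a simple lookup order, sketched below under assumed names (lookup, caches_along_path); the actual policy in [3] also includes the admission-control and replacement decisions, which are not shown.

def lookup(item_id, local_cache, caches_along_path):
    # Local cache hit: the requesting MT holds the item itself.
    if item_id in local_cache:
        return "local hit", local_cache[item_id]
    # Remote cache hit: an MT along the path to the AP holds the item.
    for cache in caches_along_path:
        if item_id in cache:
            return "remote hit", cache[item_id]
    # Aggregate cache miss: the request must be forwarded to the AP.
    return "miss", None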

Each node in a PCache system has a cache of a limited, predefined size, used to store a fraction of all the advertised data items [4]. Each data item is composed of a key, a value, an expiration time and a version number with application-dependent semantics. Nodes continuously pursue a better distribution of the items by varying the contents of their caches. The goal of PCache is to provide a distribution of data items such that each node can find a significant proportion of the total items in its own cache or in the caches of the neighbors within its transmission range. PCache provides two distinct operations: data dissemination and data retrieval. The protocol first verifies whether the value is stored in the local cache and, if it is not, broadcasts query messages; nodes that have the corresponding value in their cache address a reply message to the source of the query.

COACS is a distributed caching scheme that relies on indexing cached queries to make the task of locating the desired database data more efficient and reliable [5]. Nodes can take on one of two possible roles: Caching Nodes (CNs) and Query Directories (QDs). A QD's task is to cache queries submitted by the requesting mobile nodes, while a CN's task is to cache data items (responses to queries). When a node requests data that is not cached in the system (a miss), the database is accessed to retrieve the information; upon receiving the response, the node that requested the data acts as a CN by caching it, and the QD nearest to the CN caches the query and makes an entry in its hash table linking the query to its response. The CachePath and CacheData schemes discussed earlier have nodes with functions similar to a CN, but they do not offer functionality for searching the contents of all the CNs: to find data in a system of only CNs, all the nodes in the network would need to be searched. This is where the QDs come into play; they act as distributed indexes for previously requested and cached data by storing queries along with the addresses of the CNs containing the corresponding data. That work refers to the node requesting the data as the RN, which could be any node, including a CN or a QD.

Several other studies discuss caching techniques [1] [6] [7] for effectively caching data and paths. CHAMP [7] introduces cooperative packet caching, a technique that exploits temporal locality in dropped packets, aimed at reducing packet loss due to route breakage. References [8] [9] address the issue of minimizing delay in on-demand routing protocols by optimizing the Time-to-Live (TTL) interval for route caching. Reference [10] discusses a utility-based cache replacement policy, Least Utility Value (LUV), to improve data availability and reduce the local cache miss ratio.

4.1 PROPOSED SYSTEM

In this work, we focus on caching as an attractive technique for efficiently meeting challenges such as battery consumption and limited memory in ad-hoc networks. Whenever the access latency and energy cost of data transfer are high, the best approach is to cache the requested information at a limited number of nodes distributed across the network. Caching, in fact, allows us to trade off optimally between access latency and system energy expenditure.

PREFETCH BUFFER

A prefetch buffer is a separate buffer used to store data prefetched from main memory. Its purpose is to reduce interference between prefetched data and demand-fetched data in the cache. Since all prefetched data are streamed into the prefetch buffer, the working set of demand-fetched data in the cache is not disturbed by prefetching. Prefetch buffers are usually found in instruction cache designs, but not in data caches. Here, path caching is performed under a prefetching scenario.
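A minimal sketch of this idea follows; the PrefetchBuffer class, its FIFO eviction and the promote step are illustrative assumptions rather than a description of any particular hardware design.

from collections import OrderedDict

class PrefetchBuffer:
    """Separate buffer so prefetched data cannot evict demand-fetched data."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def put(self, key, value):
        # Stream a prefetched entry in, dropping the oldest one if full.
        if key in self.entries:
            self.entries.pop(key)
        elif len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)
        self.entries[key] = value

    def promote(self, key, demand_cache):
        # On the first demand reference, move the entry into the demand cache,
        # leaving the demand-fetched working set otherwise untouched.
        value = self.entries.pop(key, None)
        if value is not None:
            demand_cache[key] = value
        return value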

4.1.1 PPF INTEGRATED WITH DSR

The simulator models DSR according to [10], with nodes storing route information in a path cache. DSR is an on-demand routing protocol that allows caching multiple routes for a single destination, with nodes adding complete source-to-destination paths as they are learned. Cache entries are timed out after 300 s (the TTL value), as the cache has a limited capacity. A node with multiple entries for a destination selects the route with the smallest number of hops as the preferred route over which to transmit a data packet. This shortest path is broadcast by the source node to its one-hop neighbors for future use, but only when the destination is far from the source node (i.e. the number of hops is large). The DSR route discovery process is initiated at the source node on an as-needed basis by issuing a route request broadcast packet. Intermediate nodes receiving this request rebroadcast it if they have not already received it. The path from the source to the destination node is built up in the route request header, with nodes appending their own address before rebroadcasting the packet. The destination node replies to a route request packet by reversing the route record in the request packet header; alternatively, an intermediate node with a cached path to this destination can generate a route reply by concatenating its path with the route record. The source then uses this route to forward data packets. Fig 1 and Fig 2 depict a typical DSR exchange with RREQ and RREP packets; A is the source node, sending the RREQ to its neighbor nodes D, C and B in order to find the path.

Fig 1: Source broadcasting RREQ

Fig 2: Source receiving RREPs along different paths

In Fig 2, RREP packets are received along different paths. The DSR algorithm chooses the shortest of these paths to send the data, and the other discovered paths are saved for future use. Fig 3 shows the data transmission along the shortest path selected by source node A, i.e. A-C-E-J-L.
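A minimal sketch of such a path cache is given below; the class and method names are hypothetical, and only the fewest-hops selection and the 300 s timeout are taken from the description above.

import time

class PathCache:
    TTL = 300.0  # seconds; matches the cache entry timeout used above

    def __init__(self):
        self.routes = {}  # destination -> list of (path, expiry) tuples

    def add(self, destination, path):
        self.routes.setdefault(destination, []).append((path, time.time() + self.TTL))

    def shortest(self, destination):
        now = time.time()
        live = [(p, t) for (p, t) in self.routes.get(destination, []) if t > now]
        self.routes[destination] = live  # drop expired entries
        # Prefer the route with the fewest hops, as DSR does here.
        return min(live, key=lambda entry: len(entry[0]))[0] if live else None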

Fig 3: Sending data along the shortest path

Fig 4: Broadcasting the current path (A-C-E-J-L) to one-hop neighbors

Fig 4 shows the shortest path being broadcast to the one-hop neighbors of A, i.e. D, C and B, which prefetch the path [11]. Here we assume that the number of hops in the discovered path is greater than the threshold value. Because of the node mobility characteristic of MANETs, a node can move far away from its current position after T seconds, so replicating the path at nearby nodes dramatically increases the efficiency of caching and the cache hit ratio.

4.1.2 ALGORITHM DESCRIPTION

The use of caching can substantially reduce the overhead of the routing protocol and also reduces the latency in delivering data packets when a cached route is already available. The PPF algorithm uses the DSR protocol for routing. Optimizing the operation of the routing caches requires measures that reduce the effect of latency and of invalid or missing cache information; the proposed PPF uses a replication technique to meet these criteria. In PPF, the source node saves all the paths discovered in the route discovery phase of DSR in its local cache. It then sends the current route information to its one-hop neighbors if the number of hops in the cached path is greater than the threshold value. If the hop count is very small, the broadcast would create unwanted traffic in the network, and the requester could fetch the data from the original source instead of using the replica; to avoid this, the source node broadcasts the path to its one-hop neighbors only when the destination is far away from the source. A saved path is deleted when its TTL value expires, and PPF uses the LRU algorithm for cache replacement.
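The combination of TTL expiry and LRU replacement described above might be sketched as follows; the class name and its store/fetch interface are assumptions made for illustration.

import time
from collections import OrderedDict

class PrefetchedPathCache:
    """Paths prefetched from a neighbor: LRU replacement plus TTL expiry."""

    def __init__(self, capacity, ttl=300.0):
        self.capacity, self.ttl = capacity, ttl
        self.paths = OrderedDict()  # destination -> (path, expiry)

    def store(self, destination, path):
        if destination in self.paths:
            self.paths.pop(destination)
        elif len(self.paths) >= self.capacity:
            self.paths.popitem(last=False)  # evict the least recently used path
        self.paths[destination] = (path, time.time() + self.ttl)

    def fetch(self, destination):
        entry = self.paths.pop(destination, None)
        if entry is None or entry[1] < time.time():
            return None  # miss, or the saved path's TTL has expired
        self.paths[destination] = entry  # refresh the LRU position
        return entry[0]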

4.1.3 TTL VALUE

One approach to minimizing the effect of invalid route caches is to purge cache entries after some Time-to-Live (TTL) interval [12]. If the TTL is set too small, valid routes are likely to be discarded, and a large routing delay and traffic overhead may result from the new route search. If, on the other hand, the TTL is set too large, invalid route caches are likely to be used, and additional routing delay and traffic overhead may result before the broken route is discovered. Thus, an algorithm that optimizes the TTL setting is necessary for the optimal performance of an on-demand routing protocol.

4.1.4 ALGORITHM

Nh : number of hops in the path

Th : threshold hop count

Step 1: Receive a new DSR packet.

Step 2: Source sends an RREQ packet.

    IF the node's IP address matches the destination
        send a Route Reply;
    ELSE IF the RREQ table already has an entry with this
        request ID from the same source
        fetch the path from the PATH CACHE;
    ELSE
        rebroadcast the RREQ packet until it reaches the destination;
    END IF

Step 3: Destination sends an RREP message.

Step 4: Source saves all the paths for future use.

Step 5: IF Nh of the current path > Th

Step 6:     send a copy of the current path to the one-hop neighbors;

Step 7: IF no path is currently available
            send an RERR packet to the source;
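As a rough, runnable rendering of Steps 2-6 (all names such as handle_rreq, rreq_table and one_hop_neighbors are hypothetical; the path_cache could be the PrefetchedPathCache sketched in Section 4.1.2):

def handle_rreq(node, rreq):
    # Step 2: each node receiving an RREQ runs this check.
    if node.ip == rreq.destination_ip:
        node.send_route_reply(rreq)  # destination reached: send RREP
    elif (rreq.request_id, rreq.source_ip) in node.rreq_table:
        # Duplicate request: try to answer from the (prefetched) path cache.
        path = node.path_cache.fetch(rreq.destination_ip)
        if path is not None:
            node.send_route_reply(rreq, cached_path=path)
    else:
        node.rreq_table.add((rreq.request_id, rreq.source_ip))
        node.rebroadcast(rreq)  # flood onward until the destination is hit

def handle_rrep(source, path, threshold_hops):
    # Step 4: save every discovered path for future use.
    source.path_cache.store(path[-1], path)
    # Steps 5-6: replicate to one-hop neighbors only when Nh > Th.
    if len(path) - 1 > threshold_hops:
        for neighbor in source.one_hop_neighbors:
            neighbor.path_cache.store(path[-1], path)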

5 SIMULATION MODEL

The network topology initially consists of 50 nodes uniformly positioned over a 700 m x 400 m area, with mobility defined by the Random Waypoint model. Each node moves from a random location, scattered uniformly over the network area, towards an arbitrary destination at a random speed (uniformly distributed over 0-5 m/s). After reaching its destination, the node stays there for a pause time and then moves again. Network nodes generate 20 data packets with a mean inter-arrival time of 30 s (normally distributed with a variance of 5 s) for random destinations, with packet sizes of 512 bytes. Note that the nominal radio range for two directly communicating nodes in the ns-2 simulator is about 250 meters; the two-ray ground reflection approximation is used as the radio propagation model. Various simulation scenarios were studied by varying the parameters related to the geometry and mobility of the network nodes. The simulation time for each scenario was 900 s, and the simulated traffic was constant bit rate (CBR). Over the mobility range, the cache size is stable around a value of 75. The results of this approach are compared against DSR without cache provisioning for new network nodes.
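For reference, a minimal sketch of the Random Waypoint model with the parameters above; the pause value is a placeholder since the paper does not state it, and the small minimum speed is an assumption to avoid a degenerate zero-speed node.

import random

def random_waypoint(area=(700.0, 400.0), max_speed=5.0, pause=10.0, sim_time=900.0):
    """Yield (arrival_time, x, y) waypoints for one node."""
    t = 0.0
    x, y = random.uniform(0, area[0]), random.uniform(0, area[1])
    while t < sim_time:
        nx, ny = random.uniform(0, area[0]), random.uniform(0, area[1])
        speed = random.uniform(0.1, max_speed)  # small floor avoids a stuck node
        t += ((nx - x) ** 2 + (ny - y) ** 2) ** 0.5 / speed  # travel time to waypoint
        yield t, nx, ny
        x, y = nx, ny
        t += pause  # wait at the destination before choosing the next waypoint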

5.1 PERFORMANCE EVALUATION

5.1.1 AVERAGE DELAY

In most previous path cache algorithms, updating a data path may result in many cache misses [13]. We address this problem by asking the clients to prefetch data that may be used in the near future. In this section, the performance of the PPF scheme is compared with the SimpleCache scheme and the CachePath scheme in terms of delay in seconds. In Fig 5(a), when the cache size increases beyond 1000 KB, all three algorithms reduce the delay and maintain a constant level. In SimpleCache the maximum delay is nearly 0.3 s because the requests and replies have to travel a larger number of hops. CachePath is an effective technique for reducing the number of hops traveled by requests and replies, though it still incurs a minimum delay of 0.2 s. In the case of PPF, the delay is reduced to 0.1 s when the cache size reaches 1000 KB; the reason behind the reduction is the prefetching of cached paths. Our PPF algorithm performs 10% better than the traditional CachePath algorithm.

Fig 5(b) shows the delay variation with respect to the number of nodes; the delay increases as the number of nodes increases. Fig 5(c) shows the delay with respect to the TTL value: when the TTL value is below 1000 s, all three algorithms behave the same, i.e. they produce more delay, after which the delay decreases gradually.

Fig 5(a): Cache size vs. average delay

Fig 5(b): Number of nodes vs. average delay

Fig 5(c): Mean TTL vs. average delay

5.1.2 PREFETCHING RATIO

In Fig 6, for the non-prefetch approach the number of prefetches increases as the mean update arrival time decreases. In our PPF algorithm, when the mean update arrival time decreases, data are updated more frequently and more clients have cache misses. Prefetching data into the local cache can improve the cache hit ratio, while marking some data items as non-prefetch may reduce it. Although there is a trade-off between the cache hit ratio and the number of prefetches, our approach outperforms the non-prefetch approach in general. For example, PPF gives 100 prefetches for a mean arrival time of 0 s, whereas the non-prefetching algorithms give only 50 prefetches; thus PPF produces a 50% improvement in the cache hit ratio.

Fig 6: Number of prefetches vs. update arrival time

5.1.3 CACHE HIT RATIO

Prefetching data into the local cache can improve the cache hit ratio [14] [15]; when some data items are marked as non-prefetch, the cache hit ratio may be reduced. In Fig 7, the X-axis shows the update arrival time and the Y-axis shows the cache hit ratio. When the mean update arrival time is high, there are not many data updates and most queried data can be served locally. When the mean update arrival time decreases, data are updated more frequently. With prefetching, the cache hit ratio increases rapidly until it reaches the maximum threshold value. When the update arrival rate is 10, the non-prefetching algorithms provide a cache hit ratio of 0.2%, but PPF produces 2%.

Fig 7: Update arrival time vs. cache hit ratio

5.1.4 THROUGHPUT

Fig 8 compares the throughput of three schemes: no-caching, no-prefetching and our proposed algorithm. The X-axis shows the pause time and the Y-axis shows the throughput as a percentage. With caching, there is a high probability of the requested data being cached in the MT's local cache or at other MTs. The no-caching scenario produces very low throughput, a maximum of 15%; no-prefetching gives a maximum of 50% throughput, while PPF produces nearly 60% at a 50 ms pause time. Compared to the no-prefetching algorithm, our algorithm generates 10% higher throughput. Fig 9 shows the number of hops required to transfer data with respect to the pause time; throughput decreases as the hop count increases. Because of prefetching, our PPF algorithm obtains the data with the minimum number of hops. Compared to the no-caching algorithm, the PPF algorithm performs 30% better.

Fig 8: Throughput vs. pause time

Fig 9: Average number of hops vs. pause time

CONCLUSION

The path caching problem is analyzed and an optimal form of path prefetching is implemented. The proposed technique is based on a prefetching implementation called the Path PreFetching (PPF) algorithm. Although such prefetching has been used extensively in various applications in the past, it has never been used effectively in this setting. Specifically, in the PPF implementation we take advantage of one-hop neighbors to prefetch paths that will be used in the near future, and we also take the cached path size into account. We evaluated the impact of this implementation through extensive simulation, using the DSR protocol, which is the most representative of the protocols utilizing caching.