Extended Adaptive TTL Based Cache Invalidation Strategy in MANET


An ad-hoc network consists of a set of wireless nodes without any infrastructure support such as access points or base stations. Caching frequently accessed data items at mobile nodes has been shown to be an effective technique for enhancing communication performance in Mobile Ad-hoc Networks (MANETs). The objective of a caching policy is to provide efficient data access to users, but this becomes a critical issue when data updates occur. When caching techniques are used, data consistency must be addressed to ensure that clients see only valid states of the data, or at least do not unknowingly access data that is stale according to the rules of the consistency model. One effective technique for avoiding stale data is cache invalidation, which maintains consistency between the client cache and the server. This thesis discusses Time-To-Live (TTL) based invalidation. There are two basic types of TTL based schemes, namely Fixed TTL and Adaptive TTL. The Fixed TTL scheme always sets the same TTL value for every kind of data item and also requires additional processing time for cache invalidation. The Adaptive TTL scheme refreshes the cached data and resets the TTL; this cache refresh and TTL reset impose processing overhead on the cache node. This study therefore proposes a new scheme called Extended Adaptive TTL (Ex-ATTL), in which every node within one hop of a data cache node maintains a hash table for cache invalidation. The hash table contains the data item name as its key and the TTL as its value. Finally, a simulation using ns-2 is conducted to evaluate the performance of the proposed scheme, which is then compared with the Fixed TTL and Adaptive TTL schemes.


Keywords: MANET, Cache Consistency, Cache Invalidation, Stale Data, TTL Value


In the ad-hoc network architecture, there is no pre-existing fixed network infrastructure. Nodes of an ad-hoc network are mobile hosts, typically with similar transmission power and computation capabilities. Direct communication between any two nodes is allowed when adequate radio propagation conditions and network channel assignment exist; otherwise the nodes communicate through multi-hop routing. Without pre-existing infrastructure, a MANET can be easily deployed for applications such as battlefield communication and disaster recovery. It can also be used as a wireless extension that lets users access the Internet in an area without network infrastructure [1]. MANETs feature multi-hop message relay, frequent link disconnection and power-constrained operation, which complicate the design of both network protocols [2] and applications.

The concept of caching frequently accessed data is adopted to provide efficient communication among mobile nodes. With caching, average data access latency is reduced because access requests are satisfied from the local cache, which makes data access efficient. However, node mobility and frequent disconnections make cache consistency a challenging issue. Therefore, effective cache invalidation mechanisms are required to ensure consistency between the source node and the data cached at clients. The TTL based cache invalidation problem has been well studied in mobile computing environments [3] [4]. The existing strategies use Fixed TTL and Adaptive TTL techniques for cache invalidation. These previous works depend mainly on the data cache node: the cache node maintains the state of the data based on its TTL. Running the invalidation process in the cache node adds overhead to the cache and also decreases the performance of the network.


While a tremendous amount of research has been done on routing protocols [2] in MANETs, little work has been done on providing data services. Recently, there has been growing interest in data caching to promote data services in MANETs. Several caching techniques and routing protocols have been implemented to tackle problems of the MANET environment such as node mobility and resource (battery and bandwidth) consumption [6] [7] [8] [9]. However, these policies ignore important factors such as the TTL and the distance of the requester from the cache/source satisfying the request.

Caches represent learned portions of the network topology, but a cache entry may become invalid due to changes such as two nodes moving out of wireless transmission range of each other. A node is not notified when one of its cache entries becomes invalid unless it actually attempts to use that entry in routing a packet it sends [10]. Although a periodic routing protocol such as a link-state or distance-vector protocol could distribute updated information in a somewhat timely manner, periodic protocols have been shown to have higher overhead in a number of studies, and they still take some amount of time to detect a link failure and distribute that information. When a node uses information from its route cache that was learned from overheard packets, the cache staleness problem is compounded, since stale information can circulate in the network indefinitely.


Caching data based on access frequency on the client side is an effective technique for improving communication in a mobile environment. Caching is a well-accepted method to ease ever-growing bandwidth needs, reduce server load, and decrease client access latency. However, the frequently changing topology makes cache consistency a challenging issue [9]. Here, a TTL based cache invalidation scheme is discussed in which a data item is considered invalid once its TTL expires. The Ex-ATTL scheme also focuses on the following objectives.

To reduce the cache node's computing overhead

To reduce request delay

To avoid stale information

To provide data reliability

The rest of this paper is organized as follows. Section 2 presents the background of this paper, including the system model for data services in MANETs and a brief review of related work. In Section 3, the Extended Adaptive TTL scheme is presented. In Section 4, the simulation results evaluating the performance of Extended Adaptive TTL are discussed. Finally, Section 5 concludes this paper.



The system model for data applications in a MANET has been discussed by several researchers [11] [12]. The system model is summarized in the following.


There are n mobile nodes in the system; N = {N1, N2, . . . , Nn} denotes the set of mobile nodes. A mobile node, or simply a node, has a cache in its local storage and may issue queries for data items from the data source. It is assumed that each query requests only one data item. The mobile nodes are free to move in a designated area.


One of the mobile nodes assumes the role of data source. The data source, or simply the server, contains a database. A mobile node can communicate with the data source directly if the source is within its radio range, or indirectly through multi-hop relay via other mobile nodes. It is assumed that there are m data items in the database. Data items are numbered from 1 to m, and these numbers serve as their identifiers. D = {D1, D2, . . ., Dm} denotes the set of data items in the database.


The data source may repeatedly update the data items in the database. To help a mobile node keep track of the version of a cached copy of a data item, each copy is associated with a TTL and also records the time Tc when the copy was created. Tc is the copy's creation time, taken from the node's timer immediately after the copy of the data item has been successfully created in the mobile node's cache.
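The cached-copy record described above can be sketched as a small Python structure; the class and method names are illustrative, not from the paper, but the validity test (elapsed time since Tc compared against the TTL) follows the model directly.

```python
from dataclasses import dataclass

@dataclass
class CachedCopy:
    """A cached copy of a data item, as in the system model:
    the item id, its TTL, and Tc, the local time at which the
    copy was created in this node's cache."""
    item_id: int
    ttl: float  # seconds during which the copy is considered fresh
    tc: float   # creation time read from the node's timer

    def is_valid(self, now: float) -> bool:
        # The copy is fresh while the time elapsed since creation
        # does not exceed its TTL.
        return (now - self.tc) <= self.ttl

# Example: a copy created at t = 100 with a 30 s TTL
copy = CachedCopy(item_id=7, ttl=30.0, tc=100.0)
assert copy.is_valid(120.0)      # 20 s elapsed: still fresh
assert not copy.is_valid(140.0)  # 40 s elapsed: stale
```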


Caching data, or the path to the data, at the mobile client side is an essential mechanism for improving system performance in a mobile computing environment [13], [8]. In the following, existing cache invalidation strategies for mobile clients are discussed.

As categorized in [14], [15], there are two kinds of cache invalidation methods for mobile databases:

Temporal-dependent invalidation

Location-dependent invalidation

Location-dependent invalidation always requires the geographical position of the mobile nodes and is well supported only for single-hop communication; cellular networks generally use location-dependent schemes for cache invalidation. Temporal-dependent invalidation is driven by data updates. To carry it out, the data source keeps track of the update history (for a reasonable length of time) and sends it, in the form of an invalidation report (IR), to the clients, either by periodic/aperiodic broadcasting or upon individual requests from the clients [8], [16], [17], [18]. In the basic IR approach, the server broadcasts a list of IDs of the items that have changed within a history window. A mobile client, if active, listens to the IRs and updates its cache accordingly.
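A minimal sketch of the basic IR approach just described: the server keeps an update log, builds a report of the ids changed within a history window, and a listening client drops any cached entry named in the report. The function names and the window length are illustrative assumptions, not from the cited papers.

```python
HISTORY_WINDOW = 60.0  # seconds; illustrative value only

def build_invalidation_report(update_log, now):
    """update_log: list of (item_id, update_time) pairs kept by the
    server; return the set of ids updated within the history window."""
    return {item_id for item_id, t in update_log if now - t <= HISTORY_WINDOW}

def apply_invalidation_report(cache, report):
    """cache: dict mapping item_id -> data; drop invalidated entries."""
    for item_id in report & cache.keys():
        del cache[item_id]

# Items 2 and 3 were updated within the last 60 s of now = 100.
log = [(1, 10.0), (2, 95.0), (3, 99.0)]
cache = {1: "a", 2: "b", 4: "d"}
report = build_invalidation_report(log, now=100.0)
apply_invalidation_report(cache, report)
assert cache == {1: "a", 4: "d"}  # the stale copy of item 2 is gone
```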

TTL is the heart of temporal-dependent invalidation. In [3], CachePath and CacheData handle cache consistency with a simple weak consistency model based on the time-to-live mechanism. In this model, a routing node considers a cached copy up-to-date if its TTL has not expired. If the TTL has expired, the node removes the map from its routing table (or removes the cached data), and so forwards future requests for this data to the data source. The authors of [3] optimize this model by allowing nodes to refresh a cached data item when a fresh copy of the same data passes by. If the fresh copy contains the same data but a newer TTL, the node updates only the cached data's TTL field; if the data source has updated the data item, the node replaces both the cached data item and its TTL with the fresh copy. This scheme requires more round trips between the cache node and the data source for invalidation and TTL refreshment, which decreases the performance of the overall network.

In [3] [5], the authors also use an Adaptive TTL based cache invalidation scheme for HybridCache and Cooperative Cache. In these schemes, cached data whose TTL has expired are invalidated, and the node usually removes such invalid data from the cache. To save space, when a cached data item expires, it is removed from the cache while its id is kept in an "invalid" state as an indication of the node's interest. Certainly, the node's interest may change, and expired data should not be kept in the cache forever. For example, caching a data item indicates the mobile node's interest in it; while forwarding data, if a mobile node finds an invalid copy of that data in its cache, it deletes the old copy and stores the new copy for future use. If an expired path or data item has not been refreshed for the duration of its original TTL, it is removed from the cache.



In this section, the fundamental building blocks that inspired the development of the Ex-ATTL scheme are discussed.


In a typical ad hoc environment, each node acts as both router and transceiver, communicating with the data source over multiple hops. Because ad hoc networks are dynamic, this can often result in a shortage of communication bandwidth as well as unacceptable response times. To reduce the number of remote data retrieval operations over long distances, copies of data objects are stored locally. This concept of locally storing copies of remote data objects for faster retrieval is called caching.


Generally, data items are replicated over all cache nodes, which makes the system redundant and available. Replication eliminates the single point of failure that arises in the 'one copy per data item' scenario, and it improves response time since the workload can be distributed among the different cache nodes. However, it introduces complexity in maintaining cache consistency, as more than one node is now accountable for the same data item.


With multiple nodes caching the same data item, cache consistency must be maintained. A cached data item changes when it is updated as the result of a transaction, and these changes make the other cached copies stale. To rectify this problem, the change must be propagated to all other caches. The policy employed to notify all caches about an update is called the invalidation policy.


In this scheme, whenever a cached data item is retrieved from the data source, it is associated with a time to live (TTL): a number indicating the time interval during which the item is considered fresh. When a data item is requested for the first time, the data source sends it together with its TTL, and the requesting node caches both. Whenever the data item is referenced again, the client checks its TTL; if the TTL is still valid, the cached object is used, otherwise a fresh copy is requested from the source. This is an instance of the weak consistency model: when the data is distributed and replicated, it may have been updated on another client node, but the other caches learn of the update only when the TTL for the data item expires. After the TTL has expired on a client, the object is treated as stale and a fresh value is retrieved from the data source.

In the data source, the TTL value of a data item is fixed by the original nature (application) of the data. For example, weather forecasts, stock market data, sports news and battlefield condition monitoring update their data items at very short intervals, so these kinds of data items change very often at the source; due to these updates, the replicated items in cache nodes quickly become stale. Other kinds of data items are not updated over short periods; for example, video films and the geographical information of particular places do not change in a short time.

Based on this updating nature, data item TTLs are divided into two categories.

Short TTL

Long TTL


Such items have a very small TTL threshold value because of the quickly updating nature of the data. These TTLs are set manually by the data administrator or field reporter.


Such items have long TTL threshold values for data invalidation. The data access rate and the popularity of the data items are the main factors in fixing the TTL threshold for this kind of data; it is predicted manually by the data source administrator.
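The two categories can be illustrated with a small sketch. The threshold values and category names below are hypothetical; the text states only that they are assigned manually by the administrator.

```python
# Hypothetical TTL thresholds for the two categories described above.
SHORT_TTL = 30.0     # seconds: fast-changing data (weather, stock quotes)
LONG_TTL = 86400.0   # seconds: rarely updated data (video films, maps)

# Illustrative set of fast-changing application categories.
FAST_CHANGING = {"weather", "stock", "sports", "battlefield"}

def assign_ttl(category: str) -> float:
    """Return the TTL an administrator might assign to a data item,
    based on its application category."""
    return SHORT_TTL if category in FAST_CHANGING else LONG_TTL

assert assign_ttl("stock") == 30.0    # short TTL item
assert assign_ttl("video") == 86400.0  # long TTL item
```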


This section discusses the working and the pseudo-code of the Extended Adaptive TTL scheme.


In the Extended Adaptive TTL scheme, each and every node maintains a cache invalidation hash table. This hash table contains the data item as its key and the TTL of the data item as its value. It also stores Tc after the node caches the data item, where Tc is the time when a successful copy of the data item was created in the mobile node. The proposed scheme is divided into three phases. The first phase verifies the current request against the cache invalidation hash table. The second phase retrieves the data item from the appropriate source. In the third phase, the cache node broadcasts the data item names and their current TTL values to its one-hop neighbors.

Phase 1

When a node generates a data request, the request is first checked against the node's cache invalidation hash table. If the hash table contains the requested data item, the algorithm retrieves its TTL and Tc. The request generation time Treq is also read from the node's timer, and Tc is subtracted from Treq. If this difference is greater than the TTL, the scheme removes the data item, its TTL and its Tc from the hash table and forwards the request to the data source; the data source calculates a new TTL and sends it along with the data item. If the difference is less than the TTL, the scheme moves to phase 2.

Phase 2

In this phase, the algorithm checks the local cache for the node's request. If the local cache contains the requested data item, the node simply uses its local copy. Otherwise, this phase uses the currently implemented caching algorithm to retrieve the data item from the nearest cache node or from the data source. This phase also recalculates the TTL of the particular data item. The following calculation is used to find the new TTL:

NewTTL = Treq - Tc

The new TTL is appended to the corresponding data item and the response is sent to the client node. The data cache node also updates the new TTL in its cache table for that particular data item.

Phase 3

After a successful copy of the data item has been created in the local cache, or in an intermediate cache node for future requests, this phase reads Tc from the cache node's timer and updates it in the cache invalidation hash table. The cache node also broadcasts the newly cached data item's id, TTL and Tc to its one-hop neighbors only.

ALGORITHM: The pseudo-code of Extended Adaptive TTL

SET TTL, Tc, Treq                        /* declarations */
Treq = getCurrentTime ()                 /* node's time when the request is generated */
if (local hash table contains requested data item Di)
    /* Phase 1: validate the cached copy */
    TTL = retrieveFromHashTable (Di's TTL)
    Tc  = retrieveFromHashTable (Di's Tc)
    if (Treq - Tc > TTL)
        /* copy expired: purge it and fetch from the data source */
        deleteFromHashTable (Di's id, Di's TTL, Di's Tc)
        deleteFromLocalCache (Di)
        /* Phase 2 */
        SET hopcount = 1
        SendRequestToDataSource (Di's id)
        newTTL = Treq - Tc               /* done by the data source */
        ReplyFromServer (Nj's id, Di, newTTL)
        dataCache (Di)
        /* Phase 3 */
        localHashTableUpdation (Di, newTTL, Tc)
        broadCast (Di, newTTL, Tc, hopcount)
    else
        /* Phase 2: copy still valid */
        if (localCacheContains (Di) == TRUE)
            retrieveFromLocalCache (Di)
        else
            SET hopcount = 1
            SendRequestToDataSource (Di's id)
            newTTL = Treq - Tc           /* done by the data source */
            ReplyFromServer (Nj's id, Di, newTTL)
            dataCache (Di)
            /* Phase 3 */
            localHashTableUpdation (Di, newTTL, Tc)
            broadCast (Di, newTTL, Tc, hopcount)
else
    /* Phase 2: Di has never been cached here */
    SET hopcount = 1
    SendRequestToDataSource (Di's id)
    newTTL = Treq - Tc                   /* done by the data source */
    ReplyFromServer (Nj's id, Di, newTTL)
    dataCache (Di)
    /* Phase 3 */
    localHashTableUpdation (Di, newTTL, Tc)
    broadCast (Di, newTTL, Tc, hopcount)


Figure 1: Extended Adaptive TTL block architecture
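The three phases can also be sketched as runnable Python for a single node, under the paper's model. The data-source call and the neighbor objects are stubbed out, and all names (`ExATTLNode`, `fetch_from_source`, etc.) are illustrative, not from a real API; this is a sketch of the scheme, not a network implementation.

```python
class ExATTLNode:
    def __init__(self, clock):
        self.clock = clock       # callable returning the node's current time
        self.hash_table = {}     # item_id -> (ttl, tc): the invalidation table
        self.cache = {}          # item_id -> data

    def request(self, item_id, fetch_from_source, neighbors):
        t_req = self.clock()
        # Phase 1: validate the request against the invalidation hash table.
        if item_id in self.hash_table:
            ttl, tc = self.hash_table[item_id]
            if t_req - tc > ttl:           # expired: purge, then go to source
                del self.hash_table[item_id]
                self.cache.pop(item_id, None)
            elif item_id in self.cache:    # Phase 2: valid local copy
                return self.cache[item_id]
        # Phase 2: fetch from the data source (or nearest cache node),
        # which replies with the data and a freshly calculated TTL.
        data, new_ttl = fetch_from_source(item_id, t_req)
        self.cache[item_id] = data
        # Phase 3: record Tc and advertise (id, TTL, Tc) one hop away.
        tc = self.clock()
        self.hash_table[item_id] = (new_ttl, tc)
        for nb in neighbors:
            nb.hash_table[item_id] = (new_ttl, tc)
        return data

# Usage with a fake clock and a stub source returning a 50 s TTL.
t = [0.0]
node, peer = ExATTLNode(lambda: t[0]), ExATTLNode(lambda: t[0])
source = lambda i, t_req: (f"data-{i}", 50.0)
node.request(3, source, [peer])
assert 3 in peer.hash_table                         # neighbor learned (id, TTL, Tc)
t[0] = 40.0
assert node.request(3, source, [peer]) == "data-3"  # still fresh: local hit
```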


Each and every node maintains an up-to-date TTL.

The data source calculates a new TTL for every response.

Every data access from the data source returns the newly calculated TTL together with the corresponding data item.

There is no chance of accessing a stale data item from the cache, because the scheme maintains updated TTLs.

Every node performs cache verification by itself and, according to the result, deletes expired data items from its cache and from its hash table.

After every successful data caching operation, the cache node sends the data item's id and TTL to its 1-hop neighbors, which reduces the data cache node's processing overhead.

It reduces unnecessary round trips between the data cache node and the client node.

No additional hardware is required to implement this scheme.

Nodes use a hash table for data invalidation, which reduces the search latency of the local verification process.



In order to evaluate the efficiency of the Ex-ATTL invalidation algorithm, a system similar to [2] and [12] is employed in this work. It consists of a single data source that serves multiple client nodes. The database can be updated only by the server, whereas queries are generated and served on the client side. There are 1,000 data items in the database, divided into two subsets: the short TTL subset and the long TTL subset. The short subset includes data items 1 to 150 (out of 1,000 items) and the long subset includes the remaining items of the database. Clients have a high probability (75 percent) of accessing data in the short TTL set and a low probability (20 percent) of accessing data in the long TTL set.
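The access pattern above can be sketched as a small workload generator. Note the stated probabilities (75 percent short-set, 20 percent long-set) do not sum to one; this sketch assumes the remainder also falls in the long TTL set, which is a labeled assumption rather than something the text specifies.

```python
import random

SHORT_SET = range(1, 151)     # items 1..150: the short TTL subset
LONG_SET = range(151, 1001)   # items 151..1000: the long TTL subset

def next_query(rng):
    """Draw one queried item id: short-set with probability 0.75,
    long-set otherwise (assumption: remainder goes to the long set)."""
    if rng.random() < 0.75:
        return rng.choice(SHORT_SET)
    return rng.choice(LONG_SET)

# Sanity check: over many draws, roughly 75 percent hit the short set.
rng = random.Random(42)
queries = [next_query(rng) for _ in range(10000)]
short_fraction = sum(q <= 150 for q in queries) / len(queries)
assert 0.72 < short_fraction < 0.78
```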


The server generates a single stream of updates separated by exponentially distributed update interarrival times. All updates are randomly distributed within the short TTL and long TTL subsets, with 65.5 percent of the updates applied to the short TTL subset. In the experiments, the server processing time (not the data transmission time) is assumed to be negligible.


Each client generates a single stream of read-only queries, with exponentially distributed query generation times, and processes the generated queries one by one. If the referenced data are not cached on the client side, the data ids are sent to the server to fetch the data items. Once the requested data items arrive on the channel, the client brings them into its cache. Client cache management follows the Extended Adaptive TTL based cache invalidation policy.


Experiments were run using different workloads and system settings. The performance analysis presented here is designed to reveal the effects of workload parameters such as the mean update arrival time and the mean query generation time, and of system parameters such as the cache size, the number of replicas (m), and the short TTL data access probability (pq). The performance of Adaptive TTL and Ex-ATTL is then compared in terms of the cache hit ratio, the query delay, and the throughput. Note that minimizing the number of uplink requests is a desirable goal, as clients in a mobile environment have limited battery power and transmitting data requires a large amount of power.

Query Delay

The query delay is measured as a function of the mean query generation time and the mean update arrival time. As shown in Fig. 2, the Ex-ATTL algorithm significantly outperforms the Adaptive TTL scheme. As explained before, each client generates queries according to the mean query generation time. If the queried data is in the local cache, the client can serve the query locally; otherwise, it has to request the data from the server. If the client cannot process a generated query because it is waiting for the server's reply, it queues the query. Since the broadcast bandwidth is fixed, the server can transmit only a limited amount of data during one transaction interval, and thus serve only a maximum number (α) of queries per interval. If the server receives more than α queries during one transaction interval, some queries are delayed to the next interval; if this happens in every interval, many queries may never be served and the query delay may grow without bound. Fig. 2 shows the query delay as a function of the mean query generation time with Tu = 10s and c = 100 items. When the query generation time is lower than 70s (e.g., 60s), the query delay of the Adaptive TTL algorithm becomes infinitely long. However, even when the query generation time reaches 30s, the query delay of the proposed algorithm is still less than 20s. This is because the cache hit ratio of the Ex-ATTL algorithm remains high, so a large number of queries can be served locally.

Figure 2: A comparison of the query delay as a function of the mean query generation time (Tu = 10s, c = 100 items).

The Query Delay Evaluation with Mean Update Arrival Time

As shown in Fig. 3, as the mean update arrival time increases, the cache hit ratio increases and the query delay decreases. Since the proposed algorithm has a higher cache hit ratio than the Adaptive TTL algorithm, its query delay is shorter. For example, with Tu = 10,000s, the Ex-ATTL algorithm reduces the query delay by a factor of 3 compared to the Adaptive TTL algorithm. Although the cache hit ratio of the Adaptive TTL algorithm more than doubles from Tu = 10s to Tu = 33s, its query delay does not drop much (from 18.7s to 16.1s). Since the query generation time is exponentially distributed, multiple queries may arrive at a client during one transaction interval; in case of a cache miss, the query delay may exceed the normal 20s interval due to queueing effects. Since requests are generated following an exponential distribution, the server may build up a long queue of requests during some periods; the requests at the back of the queue experience much higher latency, which increases the average query latency.

Figure 3: A comparison of the query delay as a function of the mean update arrival time (Tu = 100s, c = 100 items).

Cache Hit Ratio under Different Number of Node Size

Figure 4 shows the cache hit ratio as a function of the number of clients. The cache hit ratio of the Ex-ATTL algorithm increases as the number of clients increases, but that of the Adaptive TTL does not change with the number of clients; e.g., Adaptive TTL (n = 1) and Adaptive TTL (n = 100) have the same cache hit ratio. When the number of clients drops to one, the cache hit ratio of the proposed algorithm is similar to that of the Adaptive TTL algorithm. In the Adaptive TTL algorithm, a client downloads only the data (and its TTL) that it has itself requested from the server; in the proposed algorithm, clients also download data with the newly calculated TTL. Because every node in the Ex-ATTL algorithm maintains a cache invalidation hash table, the hit ratio for unexpired data items increases; if a data item has expired, the Ex-ATTL algorithm forwards the request to the data source, so stale data is always discarded. As the number of clients decreases, clients have less opportunity to download data requested by others and, hence, the cache hit ratio decreases. This explains why the Ex-ATTL algorithm has a similar cache hit ratio when the number of clients drops to one.

Figure 4: A comparison of the cache hit ratio as a function of the number of clients when the cache size is 100 items (Treq = 100s).

A Comparison of the Cache Hit Ratio under Different Cache Size

Figure 5 shows the cache hit ratio under different cache sizes when the number of clients is 100. The cache hit ratio of the Ex-ATTL algorithm is always higher than that of the Adaptive TTL algorithm for any particular cache size (50 items, 100 items). The figure shows that the cache hit ratio grows as the cache size increases, but the growth differs between the Adaptive TTL and Ex-ATTL algorithms. For example, in the Adaptive TTL, when the update arrival time is 1s, the cache hit ratio shows no difference when the cache size changes from 50 data items to 100 data items; under the same conditions, the hit ratio of the proposed scheme increases from about 40 percent to 58 percent. In the Ex-ATTL algorithm, clients may download data of interest for future use, so a larger cache can increase the hit ratio. When the server updates data frequently, increasing the cache size does not help, which explains why different cache sizes do not affect the cache hit ratio of the Adaptive TTL when Tu = 1s.

As shown in Fig. 5, the cache hit ratio drops as the update arrival time decreases, but the cache hit ratio of the Adaptive TTL algorithm drops much faster than that of the proposed algorithm. When the update arrival time is 10,000s, both algorithms have a similar cache hit ratio for any particular cache size. With c = 300 items, as the update arrival time reaches 1s, the cache hit ratio of the Ex-ATTL algorithm remains around 58 percent, whereas that of the Adaptive TTL drops to near 0. This can be explained as follows: when the update arrival time is very low (e.g., 1s), most cache misses are due to short TTL data accesses; when it is very high (e.g., 10,000s), most cache misses are due to long TTL accesses. Since the Ex-ATTL algorithm is very effective at improving cache performance for short TTL data, the cache hit ratio of the proposed scheme improves significantly when the update arrival time is low. However, as the mean update arrival time drops further (Tu < 1s), the Ex-ATTL's cache hit ratio drops much faster than before: the short TTL data changes so fast that downloaded short-TTL data may be updated before the client can use it and, hence, fails to improve the cache hit ratio.

Figure 5: A comparison of the cache hit ratio under different cache sizes when the number of clients is 100 (Treq = 100s).

Performance Analysis in the Number of Uplink Requests

Figure 6: The number of uplink requests per IR interval (Tq = 100s, c = 100 items).

Figure 6 shows the uplink cost of both algorithms. Since the Ex-ATTL algorithm has a lower cache miss rate than the Adaptive TTL, and clients send uplink requests only on cache misses, the proposed algorithm has a lower uplink cost. It is interesting that both algorithms have a similar uplink cost when the mean update arrival time is very high (e.g., 10,000s), but a significant difference when it is 10s. From Fig. 5, both algorithms have a similar cache miss ratio (1 - cache hit ratio) when Tu = 10,000s, but differ significantly when Tu = 10s. As shown in Fig. 6, the Ex-ATTL algorithm can cut the uplink cost by a factor of 3 (with Tu = 10s) and, hence, clients can save a large amount of energy and bandwidth. When the update arrival time is smaller than 1s, the uplink cost of the Adaptive TTL does not increase, but that of the proposed algorithm rises much faster than before. This can be explained by the fact that the cache hit ratio of the Adaptive TTL has already dropped to near 0 when Tu = 1s, while that of the proposed algorithm continues dropping when Tu < 1s.


In this thesis, a TTL based cache invalidation technique is designed to efficiently support data reliability in ad hoc networks. TTL-based cache invalidation techniques have received considerable attention due to their reliability, scalability and simple implementation. In this work, the problem of cache staleness is studied and a new scheme called Ex-ATTL is implemented to overcome it. Existing techniques such as the Fixed TTL and Adaptive TTL schemes do not fully guarantee valid data access, because they lack a clear TTL calculation mechanism. In the proposed scheme, whenever a node caches a data item, it broadcasts the item's id and its current valid TTL to its one-hop neighbors for future requests. This mechanism improves network performance and also reduces the cache node's overheads. Simulation results showed that the Ex-ATTL algorithm can cut the query delay by a factor of 3 and double the throughput compared to the Adaptive TTL.


[1] I. Chlamtac, M. Conti and J. J.-N Liu, Mobile ad hoc networking: imperatives and challenges, Ad Hoc Networks, Vol.1, 13-64, 2003.

[2] M. Abolhasan, T. Wysocki and E. Dutkiewicz, A review of routing protocols for mobile ad hoc networks, Ad Hoc Networks, Vol. 2, pp.1-22, 2004.

[3] G. Cao, L. Lin and C. Das, Cooperative cache-based data access in ad hoc networks, IEEE Computer, 37(2), pp.32-39, 2004.

[4] L. Yin and G. Cao, Supporting cooperative caching in ad hoc networks, IEEE Transactions on Mobile Computing, 5(1), pp.77-89, 2006.

[5] Y. Du, and S. K. S. Gupta, A cooperative caching service in MANETs, Autonomous Systems and International Conference on Networking and Services, ICAS/ICNS 2005.

[6] G.H. Forman and J. Zahorjan, "The Challenges of Mobile Computing," Computer, Vol. 27, No. 6, pp. 38-47, April 1994.

[7] S. Acharya and S. Muthukrishnan, "Scheduling On-Demand Broadcasts: New Metrics and Algorithms," Proc. ACM MobiCom '98, pp. 43-54, October 1998.

[8] D. Barbara and T. Imielinski, "Sleepers and Workaholics: Caching Strategies for Mobile Environments," Proc. ACM SIGMOD, pp.1- 12, 1994.

[9] G. Cao, "Proactive Power-Aware Cache Management for Mobile Computing Systems," IEEE Trans. Computers, Vol. 51, No. 6, pp. 608-621, 2002

[10] N. Vaidya and S. Hameed, "Scheduling Data Broadcast in Asymmetric Communication Environments," ACM/Baltzer Wireless Networks (WINET), pp.171-182, May 1999

[11] Y. Du, and S. K. S. Gupta, A cooperative caching service in MANETs, Autonomous Systems and International Conference on Networking and Services, ICAS/ICNS 2005.

[12] S. Lim, W. Lee, G. Cao, and C. Das, Performance Comparison of Cache Invalidation Strategies for Internet based Mobile Ad Hoc Networks, IEEE International Conference on Mobile Ad-hoc and Sensor Systems (MASS), pp.104-113, 2004.

[13] S. Acharya, R. Alonso, M. Franklin, and S. Zdonik, "Broadcast Disks: Data Management for Asymmetric Communications Environments," Proc. ACM SIGMOD Conf. Management of Data, pp. 199-210, May 1995.

[14] J. Xu, X. Tang, and D.L. Lee, "Performance Analysis of Location- Dependent Cache Invalidation Schemes for Mobile Environments," "IEEE Transaction on Knowledge and Data Engineering"

[15] J. Xu, X. Tang, D.L. Lee, and Q.L. Hu, "Cache Coherency in Location-Dependent Information Services for Mobile Environments", Proceedings of First Int'l Conf. Mobile Data Access (MDA '99), pp. 182- 193, December1999

[16] G. Cao, "A Scalable Low-Latency Cache Invalidation Strategy for Mobile Environments," Proc. Sixth Ann. ACM/IEEE Int'l Conf. Mobile Computing and Networking (MobiCom 2000), pp. 200-209, August 2000.

[17] J. Jing, A.K. Elmagarmid, A. Helal, and R. Alonso, "Bit-Sequences: A New Cache Invalidation Method in Mobile Environments," ACM/Baltzer J. Mobile Networks and Applications (MONET), Vol. 2, No. 2, pp. 115-127, 1997.

[18] A. Kahol, S. Khurana, S.K.S. Gupta, and P.K. Srimani, "A Strategy to Manage Cache Consistency in a Distributed Mobile Wireless Environment," IEEE Trans. Parallel and Distributed Systems, Vol.12, No.7, pp. 686-700, July 2001.