Sensor network deployments may contain hundreds or thousands of nodes. Because deploying such large-scale networks is expensive, sensor networks are increasingly likely to be shared by many applications and to gather multiple types of data: temperature, the presence of lethal chemical gases, audio or video feeds, etc. Hence the data generated in a sensor network may not all be equally important.
With large deployment sizes, congestion becomes an important problem, and it becomes worse when a specific area produces data at a high rate. An example of the problem is shown in Figure 1.1. The portion of the sensor field in which an important event occurs is called the critical area; it will typically contain several nodes. In such a situation, a data processing center collects sensitive data from the critical area, and this information is assigned a higher priority than other information. Several other nodes may collect various types of Low Priority (LP) information from other parts of the network. In the presence of this background LP traffic, congestion will degrade the service provided to High Priority (HP) data unless the two priority classes are differentiated: HP data may be dropped, or delayed so long that it is of no use to the data processing center. The area containing the shortest paths from the critical area to the sink is called the conzone. HP data will ideally traverse the conzone, where it faces competition for the medium from LP traffic.
This project proposes a class of algorithms that enforce differentiated routing based on the congested parts of the network and the priority of the data. The basic protocol, Congestion Aware Routing (CAR), discovers the congested zone of the network that lies between the high-priority data sources and the data sink using simple forwarding rules, and dedicates that portion of the network primarily to forwarding high-priority traffic. CAR requires some overhead to establish the high-priority (HP) routing zone, which makes it unsuitable for highly mobile data sources.
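The differentiated forwarding rule at the heart of CAR can be sketched in Java. This is only an illustrative sketch under assumed names (CarForwarder, Packet, onConzone); the actual protocol logic is described in later chapters.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of CAR's differentiated forwarding: a node that lies on
// the conzone forwards only high-priority (HP) packets, while nodes off the
// conzone carry low-priority (LP) traffic around it.
public class CarForwarder {
    static class Packet {
        final boolean highPriority;
        Packet(boolean highPriority) { this.highPriority = highPriority; }
    }

    private final boolean onConzone;            // set during conzone discovery
    private final Queue<Packet> txQueue = new ArrayDeque<>();

    CarForwarder(boolean onConzone) { this.onConzone = onConzone; }

    /** Returns true if the packet is accepted for forwarding at this node. */
    boolean enqueue(Packet p) {
        if (onConzone && !p.highPriority) {
            return false;                       // conzone nodes reject LP traffic
        }
        if (!onConzone && p.highPriority) {
            return false;                       // HP traffic stays on conzone paths
        }
        return txQueue.offer(p);
    }

    int queued() { return txQueue.size(); }
}
```

The sketch captures only the admission decision; real CAR nodes would also deflect rejected LP packets onto routes around the conzone rather than drop them outright.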
Figure 1.1 Sensor Network in a Critical Area.
To accommodate such sources, the project defines MAC-Enhanced CAR (MCAR), which includes MAC (Media Access Control) layer enhancements and a protocol for forming high-priority paths on the fly for each burst of data. MCAR handles the mobility of high-priority data sources effectively, at the expense of degraded performance for low-priority traffic.
A regular complaint in sensor networks is congestion, which becomes worse when a particular area generates data at a high rate. This may occur in deployments where the sensors in one area of interest are requested to gather and transmit data at a higher rate than others. In this case the routing dynamics can lead to congestion on specific paths. These paths are usually close to each other, which leads to an entire zone of the network facing congestion.
Since current systems do not differentiate between Low Priority and High Priority data, congestion in the presence of background Low Priority traffic degrades the service provided to High Priority data: High Priority data may be dropped while Low Priority data is delivered.
These factors motivated this project, which develops the CAR and MCAR routing protocols to ensure that high-priority data is delivered even in the presence of congestion caused by Low Priority packets.
In the present world, Wireless Sensor Networks are used in many industrial and civilian application areas. Because deploying and maintaining a sensor network is costly, sensors are shared by multiple applications to gather various types of data. Not all the data generated in a sensor network is equally important; some information matters more than the rest. Thus differentiated data delivery is required in sensor networks for High Priority and Low Priority data.
To accomplish this, the project uses the CAR and MCAR routing mechanisms, which aim to increase the delivery ratio of High Priority information and to improve the performance of the system in the presence of congestion.
The scope of the project is thus confined to integrating the existing technology with the proposed routing protocols to the maximum extent possible, so that time and cost are reduced significantly and the system can be used in the most suitable way.
The objective of the project is to overcome the progressive performance failure in overcrowded sensor networks using CAR and MCAR and to increase the delivery ratio of High Priority data. Energy is used more uniformly across the deployment and the energy consumed by the nodes that lie on the conzone is reduced, which leads to an increase in connectivity lifetime.
Provides lower jitter.
Increases the delivery ratio of high priority data.
Lowers energy consumption.
The Waterfall Model, popularly known as the "Linear Sequential Model", suggests a sequential and systematic approach to software development, and the work was carried out according to its specifications. The task consists of the following phases of the waterfall model.
Study on Feasibility
A detailed feasibility study was conducted to determine the financial and technical feasibility of the task; the work was found feasible to design, develop, use, and maintain in all respects.
Project Planning & Requirement Analysis
To start the detailed design of the project, the requirements were first analyzed, including the system requirement specification. Project planning for the software and hardware requirements was done with the help of this requirement analysis. As part of requirement collection, the following items were identified and need to be implemented:
A protocol that differentiates and routes High Priority and Low Priority packets, for measuring the performance of the network and signal.
Different modules, each for measuring the performance of the general traffic flowing in the network.
Separate modules to form the network, discover the congestion zone, and route the data.
A good Graphical User Interface (GUI) to control the setup and use of the tool in different situations.
After successful analysis of the system requirements, the design of the project started, and various design constraints were analyzed. The design phase consists of two main modules, CAR and MCAR. In turn, CAR consists of three sub-modules (network formation, conzone discovery, and differentiated routing) and MCAR consists of two sub-modules (network formation, and setting modes and routing data), each designed to perform a specific task in monitoring and analyzing the performance of the network. A functional design methodology and a top-down strategy were used in this design phase.
The design produced during the design phase was converted into code in the Java environment, using Java Development Kit 1.5 (JDK 1.5). The coding follows the design strategy, i.e., the code is written module-wise.
The coding covers the routing of HP and LP data across the different modules of the project. The source code uses standard built-in Java packages, classes, functions, and structures to ease coding.
The program was tested by executing a set of test cases in different network environments and on a standalone machine; the output of the program for each test case was then evaluated to determine whether the program performs as expected.
An incremental testing strategy was used to ensure functional testing. First, some main parts of the project were tested independently. These parts were then combined into subsystems, which were tested separately.
1.7 Limitations of the Project
Limitation with CAR is it requires some overhead to discover the conzone. While this overhead is reasonable, it may still be too heavy-weight if the data source is moving often and the conzone is changing frequently or if the HP traffic is short lived. Hence, CAR is designed for static or nearly static networks with long-lived HP flows.
To address a mobile conzone (i.e., the conzone formed when sources of HP traffic are mobile) and/or bursty HP traffic, MCAR forms high-priority paths on the fly for each burst of data. Limitation in MCAR is this protocol handles mobility effectively but at the cost of drastically degrading the delivery of LP traffic, because there is no opportunity to establish alternate routes for such data.
1.8 Organization Profile
This project work was carried out at Mindset IT Solutions (P) Ltd, Bangalore.
1.8.1 About the Company
At Mindset IT Solutions, they go beyond providing software solutions: they work with their clients' technologies and the business changes that shape their competitive advantage. Founded in 2000, Mindset IT Solutions (P) Ltd. is a software and service provider that helps organizations deploy, manage, and support their business-critical software more effectively. Using a combination of proprietary software, services, and specialized expertise, Mindset IT Solutions (P) Ltd. helps mid-to-large enterprises, software companies, and IT service providers improve the consistency, speed, and transparency of service delivery at lower cost.
Mindset IT Solutions (P) Ltd. helps companies avoid many of the delays, costs and risks associated with the distribution and support of software on desktops, servers and remote devices. Company automated solutions include rapid, touch-free deployments, ongoing software upgrades, fixes and security patches, technology asset inventory and tracking, software license optimization, application self-healing and policy management.
1.8.2 About the People
As a team, they have the prowess to form a clear vision and to realize it. Statistically, the team has more than 40,000 hours of expertise in providing real-time solutions in the fields of Embedded Systems, Control Systems, Micro-Controllers, C-based interfacing, Programmable Logic Controllers, VLSI (Very Large Scale Integration) design and implementation, networking with C, C++, and Java, client/server technologies in Java (J2EE (Java 2 Enterprise Edition), J2ME (Java 2 Micro Edition), J2SE (Java 2 Standard Edition), EJB (Enterprise Java Beans)), VB (Visual Basic) and VC++ (Visual C++), Oracle, and operating system concepts with Linux.
1.8.3 Company Vision
"Our goal is to dream a vision of possibility and to realize it."
1.8.4 Company Mission
They have achieved this by creating and perfecting processes that are on par with global standards, delivering high-quality, high-value, reliable, and cost-effective IT (Information Technology) products and services to clients around the world.
Inquirre consultancy (U.S.A)
K square consultancy pvt Ltd (U.S.A)
Vertex Business Machines
1.9 Organization of the Report
The remainder of the report is organized as follows:
Chapter 2 describes the Literature Survey.
Chapter 3 deals with the System Requirement Specification, which includes details about the hardware and software used for building the application and about the technologies that are part of this project.
Chapter 4 describes the System Analysis of the project.
Chapter 5 describes the Software Design Description, covering both the high-level and the low-level design of the project.
Chapter 6 deals with Implementation, covering the CAR and MCAR implementations. This chapter describes in detail the conzone discovery algorithm and the routing of data in CAR and MCAR.
Chapter 7 explains the Testing Strategy and the various unit and integration test cases used to complete the work.
Chapter 8 is the Conclusion. It briefly states what the system does and what has been accomplished.
Chapter 9 explains Future Enhancements of the system.
Apart from the above, the books and links referred to are listed in the References. An Appendix gives a complete working view of the system with screenshots, the Graphical User Interface, and the acronyms and symbols used in the UML diagrams.
2.1 Wireless Network
Wireless networks are computer networks that are not connected by cables of any kind; the term is commonly associated with a telecommunications network whose interconnections between nodes are implemented without the use of wires. The use of a wireless network enables enterprises to avoid the costly process of introducing cables into buildings or as a connection between different equipment locations. The basis of wireless systems is radio waves, an implementation that takes place at the physical level of the network structure. Wireless networks have had a significant impact on the world as far back as World War II. Through the use of wireless networks, information could be sent overseas or behind enemy lines easily, efficiently, and more reliably. Since then, wireless networks have continued to develop and their uses have grown significantly.
Wireless networks use radio waves to connect devices such as laptops to the Internet, the business network and applications. When laptops are connected to Wi-Fi hot spots in public places, the connection is established to that business's wireless network. Sending information overseas is possible through wireless network systems using satellites and other signals to communicate across the world. Emergency services such as the police department utilize wireless networks to communicate important information quickly.
People and businesses use wireless networks to send and share data quickly whether it be in a small office building or across the world. Another important use for wireless networks is as an inexpensive and rapid way to be connected to the Internet in countries and regions where the telecom infrastructure is poor or there is a lack of resources, as in most developing countries.
Compatibility issues also arise when dealing with wireless networks. Different components not made by the same company may not work together, or might require extra work to resolve the incompatibilities. Wireless networks are typically slower than those that are directly connected through an Ethernet cable.
A compatibility issue between the Microsoft Vista operating system and a wireless router adapter can have a number of potential causes.
A wireless network is more vulnerable, because anyone can try to break into a network broadcasting a signal. Many networks offer WEP (Wired Equivalent Privacy) security, which has been found to be vulnerable to intrusion. Though WEP does block some intruders, the security problems have caused some businesses to stick with wired networks until security can be improved.
Another type of security for wireless networks is WPA (Wi-Fi Protected Access). WPA provides more security to wireless networks than a WEP setup. The use of firewalls also helps guard against breaches in wireless networks that are more vulnerable.
2.2 Wireless Sensor Network
A wireless sensor network (WSN) consists of spatially distributed autonomous sensors to monitor physical or environmental conditions, such as temperature, sound, and pressure, and to cooperatively pass their data through the network to a main location. The more modern networks are bi-directional, also enabling control of sensor activity. The development of wireless sensor networks was motivated by military applications such as battlefield surveillance; today such networks are used in many industrial and consumer applications, such as industrial process monitoring and control, machine health monitoring, and so on.
In addition to one or more sensors, each node in a sensor network is typically equipped with a radio transceiver or other wireless communications device, a small microcontroller, and an energy source, usually a battery. The envisaged size of a single sensor node can vary from shoebox-sized nodes down to devices the size of a grain of dust, although functioning 'motes' of genuinely microscopic dimensions have yet to be created. The cost of sensor nodes is similarly variable, ranging from hundreds of dollars to a few cents, depending on the size of the sensor network and the complexity required of individual sensor nodes. Size and cost constraints on sensor nodes result in corresponding constraints on resources such as energy, memory, computational speed, and bandwidth. A typical Wireless Sensor Network architecture is given in Figure 2.1. A sensor node is a node in a wireless sensor network that is capable of performing some processing, gathering sensory information, and communicating with other connected nodes in the network. The typical architecture of a sensor node is shown in Figure 2.2; its main components are the microcontroller, transceiver, external memory, power source, and one or more sensors.
The microcontroller performs tasks, processes data, and controls the functionality of the other components in the sensor node. The functionality of both transmitter and receiver is combined into a single device known as a transceiver, which is used in sensor nodes. Memory requirements are very much application dependent. There are two categories of memory based on the purpose of storage: a) user memory, used for storing application-related or personal data, and b) program memory, used for programming the device.
Figure 2.1: Wireless Sensor Network Architecture
Figure 2.2: Sensor Node Architecture
Power is consumed in a sensor node by sensing, communication, and data processing. Data communication requires the most energy; the expenditure for sensing and data processing is lower. Batteries are the main source of power supply for sensor nodes. Sensors are hardware devices that produce a measurable response to a change in a physical condition such as temperature or pressure; they sense or measure physical data of the area to be monitored. The continuous analog signal sensed by the sensors is digitized by an analog-to-digital converter and sent to controllers for further processing. A sensor node should be small in size, consume extremely low energy, operate in high volumetric densities, be autonomous, operate unattended, and be adaptive to its environment.
2.3 Routing
Routing is the process of selecting paths in a network along which to send network traffic. Routing is performed for many kinds of networks, including the telephone network, electronic data networks (such as the Internet), and transportation networks.
Routing is a way to get a packet from one point to the next. Routers, or software in a computer, determine the next network point to which a packet should be forwarded toward its final destination. A router is connected to at least two networks and decides which way to send each data packet based on the current state of the networks it is connected to. A router is located at any junction of networks or gateway, including each Internet point of presence (POP).
A router creates or maintains a table of the available routes and their conditions and uses this information along with distance and cost algorithms to determine the best route for a given packet. Typically, a packet may travel through a number of network points with routers before arriving at its destination.
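The table-maintenance behaviour described above, keeping the best-known route per destination, can be sketched in Java. Class and method names here are assumptions for illustration only, not part of any real router software; "cost" stands in for whatever distance/cost metric the algorithm uses.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a router's route table: for each destination it
// keeps the next hop with the lowest known cost.
public class RouteTable {
    static class Route {
        final String nextHop;
        final int cost;
        Route(String nextHop, int cost) { this.nextHop = nextHop; this.cost = cost; }
    }

    private final Map<String, Route> routes = new HashMap<>();

    /** Install the route only if it is cheaper than the one currently stored. */
    void update(String dest, String nextHop, int cost) {
        Route cur = routes.get(dest);
        if (cur == null || cost < cur.cost) {
            routes.put(dest, new Route(nextHop, cost));
        }
    }

    /** Next hop for a destination, or null if no route is known. */
    String nextHop(String dest) {
        Route r = routes.get(dest);
        return r == null ? null : r.nextHop;
    }
}
```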
2.4 Network Congestion
When one part of the subnet (e.g. one or more routers in an area) becomes overloaded, congestion results. Because routers are receiving packets faster than they can forward them, one of two things must happen:
The subnet must prevent additional packets from entering the congested region until those already present can be processed.
The congested routers can discard queued packets to make room for those that are arriving.
Network protocols which use aggressive retransmissions to compensate for packet loss tend to keep systems in a state of network congestion even after the initial load has been reduced to a level which would not normally have induced network congestion. Thus, networks using these protocols can exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse.
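The first of the two reactions listed above, preventing additional packets from entering the congested region until the backlog drains, can be sketched as a queue-occupancy gate with hysteresis. The thresholds and class names here are illustrative assumptions, not taken from any particular router implementation.

```java
// Minimal sketch of congestion gating: once the transmit backlog crosses a
// high-water mark the node refuses new packets, and it resumes admission
// only after the backlog drains below a low-water mark (hysteresis).
public class CongestionGate {
    private final int highWater;
    private final int lowWater;
    private int backlog = 0;
    private boolean congested = false;

    CongestionGate(int highWater, int lowWater) {
        this.highWater = highWater;
        this.lowWater = lowWater;
    }

    /** Try to admit a packet into the queue; false means "back off". */
    boolean admit() {
        if (congested) return false;
        backlog++;
        if (backlog >= highWater) congested = true;   // start throttling upstream
        return true;
    }

    /** Called whenever a queued packet has been forwarded. */
    void drained() {
        if (backlog > 0) backlog--;
        if (backlog <= lowWater) congested = false;   // resume admission
    }

    boolean isCongested() { return congested; }
}
```

The hysteresis gap between the two thresholds avoids rapid oscillation between the congested and uncongested states as single packets arrive and depart.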
2.5 Congestive Collapse
Congestive collapse is a condition that a packet-switched computer network can reach when little or no useful communication is happening due to congestion. When a network is in this condition, it has settled (under overload) into a stable state where traffic demand is high but little useful throughput is available, and there are high levels of packet delay and loss.
Congestion collapse and pathological congestion are not normally seen in the ARPANET / MILNET system because these networks have substantial excess capacity. Where connections do not pass through IP gateways, the IMP-to-host flow control mechanisms usually prevent congestion collapse, especially since TCP implementations tend to be well adjusted for the time constants associated with the pure ARPANET case. However, other than ICMP Source Quench messages, nothing fundamentally prevents congestion collapse when TCP is run over the ARPANET / MILNET and packets are being dropped at gateways. It is worth noting that a few badly-behaved hosts can by themselves congest the gateways and prevent other hosts from passing traffic; this problem has been observed repeatedly with certain hosts (whose administrators were contacted privately) on the ARPANET.
2.6 Mobile Adhoc Networks
A mobile ad-hoc network (MANET) is a self-configuring, infrastructure-less network of mobile devices connected by wireless links. Ad hoc is Latin and means "for this purpose". The routers are free to move randomly and organize themselves arbitrarily; thus, the network's wireless topology may change rapidly and unpredictably. Such a network may operate in a standalone fashion, or may be connected to the larger Internet. Since the nodes are mobile, the network topology may change rapidly and unpredictably over time. The network is decentralized: all network activity, including discovering the topology and delivering messages, must be executed by the nodes themselves, i.e., routing functionality is incorporated into the mobile nodes.
The set of applications for MANETs is diverse, ranging from small, static networks that are constrained by power sources, to large-scale, mobile, highly dynamic networks. The design of network protocols for these networks is a complex issue. Regardless of the application, MANETs need efficient distributed algorithms to determine network organization, link scheduling, and routing. However, determining viable routing paths and delivering messages in a decentralized environment where the network topology fluctuates is not a well-defined problem. While the shortest path (based on a given cost function) from a source to a destination in a static network is usually the optimal route, this idea is not easily extended to MANETs. Factors such as variable wireless link quality, propagation path loss, fading, multiuser interference, power expended, and topological changes become relevant issues. The network should be able to adaptively alter the routing paths to alleviate any of these effects. Moreover, in a military environment, preservation of security, latency, reliability, intentional jamming, and recovery from failure are significant concerns. Military networks are designed to maintain a low probability of intercept and/or a low probability of detection. Hence, nodes prefer to radiate as little power as necessary and transmit as infrequently as possible, thus decreasing the probability of detection or interception. A lapse in any of these requirements may degrade the performance and dependability of the network.
MANETs play an important role for applications that need to access shared data where no fixed network infrastructure is available. Even where such infrastructure exists, it might not be desirable to use it due to, for example, communication and/or service costs. In such situations, MANET-based applications provide a viable alternative. Mobile ad hoc networks possess a number of characteristics that impose new challenges on the design of algorithms for distributed data management. High node mobility leads to frequent topology changes that are hard to predict, so the network may become partitioned at any time. Poor connectivity between arbitrary network nodes cannot be treated as an error and must be considered the normal case for which new algorithms must be developed. In addition, algorithms running on mobile devices must use the available energy resources efficiently, since these devices are commonly equipped with conventional batteries.
2.7 Adhoc On-Demand Distance Vector (AODV) Routing Protocol
Ad hoc On-Demand Distance Vector (AODV) routing is a routing protocol for mobile ad-hoc networks and other wireless ad-hoc networks. It was developed jointly at the Nokia Research Center, the University of California, Santa Barbara, and the University of Cincinnati by C. Perkins, E. Belding-Royer, and S. Das. AODV is capable of both unicast and multicast routing. It is a reactive routing protocol, meaning that it establishes a route to a destination only on demand. In contrast, the most common routing protocols of the Internet are proactive, meaning they find routing paths independently of the usage of the paths. AODV is, as the name indicates, a distance-vector routing protocol.
In AODV, the network is silent until a connection is needed. At that point the network node that needs a connection broadcasts a request for connection. Other AODV nodes forward this message, and record the node that they heard it from, creating an explosion of temporary routes back to the needy node. When a node receives such a message and already has a route to the desired node, it sends a message backwards through a temporary route to the requesting node. The needy node then begins using the route that has the least number of hops through other nodes. Unused entries in the routing tables are recycled after a time. When a link fails, a routing error is passed back to a transmitting node, and the process repeats. Much of the complexity of the protocol is to lower the number of messages to conserve the capacity of the network. For example, each request for a route has a sequence number. Nodes use this sequence number so that they do not repeat route requests that they have already passed on. Another such feature is that the route requests have a "time to live" number that limits how many times they can be retransmitted. Another such feature is that if a route request fails, another route request may not be sent until twice as much time has passed as the timeout of the previous route request.
The advantage of AODV is that it creates no extra traffic for communication along existing links. Also, distance vector routing is simple and doesn't require much memory or calculation. However, AODV requires more time to establish a connection, and the initial communication to establish a route is heavier than in some other approaches. AODV uses an on-demand approach for finding routes; that is, a route is established only when it is required by a source node for transmitting data packets. It employs destination sequence numbers to identify the most recent path. The major difference between AODV and Dynamic Source Routing (DSR) stems from the fact that DSR uses source routing, in which a data packet carries the complete path to be traversed, whereas in AODV the source node and the intermediate nodes store the next-hop information corresponding to each flow for data packet transmission. In an on-demand routing protocol, the source node floods the RouteRequest packet into the network when a route is not available for the desired destination, and it may obtain multiple routes to different destinations from a single RouteRequest.
The major difference between AODV and other on-demand routing protocols is that it uses a destination sequence number (DestSeqNum) to determine an up-to-date path to the destination. A node updates its path information only if the DestSeqNum of the current packet received is greater than the last DestSeqNum stored at the node. A RouteRequest carries the source identifier (SrcID), the destination identifier (DestID), the source sequence number (SrcSeqNum), the destination sequence number (DestSeqNum), the broadcast identifier (BcastID), and the time-to-live (TTL) field. DestSeqNum indicates the freshness of the route that is accepted by the source. When an intermediate node receives a RouteRequest, it either forwards it or prepares a RouteReply if it has a valid route to the destination.
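The freshness rule above, accepting an advertised route only if its destination sequence number is newer (or equally new with a shorter path), can be sketched in Java. The field names follow the text (DestSeqNum, hop count); the class itself and its method are illustrative assumptions, not an actual AODV implementation.

```java
// Sketch of AODV's route-update rule: prefer a higher destination sequence
// number, and among routes with the same sequence number, a lower hop count.
public class AodvRouteEntry {
    int destSeqNum = -1;                 // -1: no route known yet
    int hopCount = Integer.MAX_VALUE;
    String nextHop = null;

    /** Apply an advertised route; returns true if the entry was updated. */
    boolean maybeUpdate(int advSeqNum, int advHops, String advNextHop) {
        boolean fresher = advSeqNum > destSeqNum;
        boolean shorter = advSeqNum == destSeqNum && advHops < hopCount;
        if (fresher || shorter) {
            destSeqNum = advSeqNum;
            hopCount = advHops;
            nextHop = advNextHop;
            return true;
        }
        return false;
    }
}
```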
The validity of a route at the intermediate node is determined by comparing the sequence number at the intermediate node with the destination sequence number in the Route Request packet. If a Route Request is received multiple times, which is indicated by the BcastID-SrcID pair, the duplicate copies are discarded. All intermediate nodes having valid routes to the destination, or the destination node itself, are allowed to send RouteReply packets to the source.
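The duplicate-suppression rule above, discarding copies of a RouteRequest already seen for a given BcastID-SrcID pair, can be sketched as a small filter. The class and method names are illustrative assumptions for this example only.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of RouteRequest duplicate suppression: a node remembers each
// (source ID, broadcast ID) pair and discards repeats of the same request.
public class RreqFilter {
    private final Set<String> seen = new HashSet<>();

    /** Returns true if this RouteRequest is new and should be processed. */
    boolean accept(String srcId, int bcastId) {
        // Set.add returns false if the pair was already present.
        return seen.add(srcId + "#" + bcastId);
    }
}
```

A real node would also age entries out of the set after a timeout, as described below for the intermediate-node timer.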
Every intermediate node, while forwarding a RouteRequest, enters the previous node address and its BcastID. A timer is used to delete this entry in case a RouteReply is not received before the timer expires. This helps in storing an active path at the intermediate node as AODV does not employ source routing of data packets. When a node receives a RouteReply packet, information about the previous node from which the packet was received is also stored in order to forward the data packet to this next node as the next hop toward the destination.
The Ad hoc On-Demand Distance Vector (AODV) routing algorithm is a routing protocol designed for ad hoc mobile networks. AODV is capable of both unicast and multicast routing. It is an on-demand algorithm, meaning that it builds routes between nodes only as desired by source nodes. It maintains these routes as long as they are needed by the sources. Additionally, AODV forms trees which connect multicast group members. The trees are composed of the group members and the nodes needed to connect the members. AODV uses sequence numbers to ensure the freshness of routes. It is loop-free, self-starting, and scales to large numbers of mobile nodes.
AODV builds routes using a route request / route reply query cycle. When a source node desires a route to a destination for which it does not already have a route, it broadcasts a route request (RREQ) packet across the network. Nodes receiving this packet update their information for the source node and set up backwards pointers to the source node in the route tables. In addition to the source node's IP address, current sequence number, and broadcast ID, the RREQ also contains the most recent sequence number for the destination of which the source node is aware. A node receiving the RREQ may send a route reply (RREP) if it is either the destination or if it has a route to the destination with corresponding sequence number greater than or equal to that contained in the RREQ. If this is the case, it unicasts a RREP back to the source. Otherwise, it rebroadcasts the RREQ. Nodes keep track of the RREQ's source IP address and broadcast ID. If they receive a RREQ which they have already processed, they discard the RREQ and do not forward it.
As the RREP propagates back to the source, nodes set up forward pointers to the destination. Once the source node receives the RREP, it may begin to forward data packets to the destination. If the source later receives a RREP containing a greater sequence number, or the same sequence number with a smaller hop count, it may update its routing information for that destination and begin using the better route.
As long as the route remains active, it will continue to be maintained. A route is considered active as long as there are data packets periodically travelling from the source to the destination along that path. Once the source stops sending data packets, the links will time out and eventually be deleted from the intermediate node routing tables. If a link break occurs while the route is active, the node upstream of the break propagates a route error (RERR) message to the source node to inform it of the now unreachable destination(s). After receiving the RERR, if the source node still desires the route, it can reinitiate route discovery.
Multicast routes are set up in a similar manner. A node wishing to join a multicast group broadcasts a RREQ with the destination IP address set to that of the multicast group and with the 'J'(join) flag set to indicate that it would like to join the group. Any node receiving this RREQ that is a member of the multicast tree that has a fresh enough sequence number for the multicast group may send a RREP. As the RREPs propagate back to the source, the nodes forwarding the message set up pointers in their multicast route tables. As the source node receives the RREPs, it keeps track of the route with the freshest sequence number, and beyond that the smallest hop count to the next multicast group member. After the specified discovery period, the source node will unicast a Multicast Activation (MACT) message to its selected next hop. This message serves the purpose of activating the route. A node that does not receive this message that had set up a multicast route pointer will timeout and delete the pointer. If the node receiving the MACT was not already a part of the multicast tree, it will also have been keeping track of the best route from the RREPs it received. Hence it must also unicast a MACT to its next hop, and so on until a node that was previously a member of the multicast tree is reached.
AODV maintains routes for as long as the route is active. This includes maintaining a multicast tree for the life of the multicast group. Because the network nodes are mobile, it is likely that many link breakages along a route will occur during the lifetime of that route. The papers listed below describe how link breakages are handled. The WMCSA paper describes AODV without multicast but includes detailed simulation results for networks up to 1000 nodes. The Mobicom paper describes AODV's multicast operation and details simulations which show its correct operation. The internet drafts include descriptions of unicast and multicast route discovery, as well as mentioning how QoS and subnet aggregation can be used with AODV. Finally, the IEEE Personal Communications paper and the Infocom paper detail an in-depth study of simulations comparing AODV with the Dynamic Source Routing (DSR) protocol, and examine each protocol's respective strengths and weaknesses.
2.8 Dynamic Source Routing
Dynamic Source Routing (DSR) is a routing protocol for wireless mesh networks. It is similar to AODV in that it forms a route on-demand when a transmitting computer requests one. However, it uses source routing instead of relying on the routing table at each intermediate device.
Determining source routes requires accumulating the address of each device between the source and destination during route discovery. The accumulated path information is cached by nodes processing the route discovery packets. The learned paths are used to route packets. To accomplish source routing, the routed packets contain the address of each device the packet will traverse. This may result in high overhead for long paths or large addresses, such as IPv6 (Internet Protocol version 6) addresses. To avoid using source routing, DSR optionally defines a flow id option that allows packets to be forwarded on a hop-by-hop basis.
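The per-hop address accumulation, and the header overhead it implies for long paths or large addresses, can be sketched as follows. The class and method names here are hypothetical, for illustration only.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a DSR route record: the Route Request
// accumulates the address of every device it traverses, and the
// finished record is carried in each data packet's source-route header.
public class DsrRouteRecord {
    private final List<String> hops = new ArrayList<String>();

    // Each forwarding node appends its own address before rebroadcasting.
    public void append(String nodeAddr) {
        hops.add(nodeAddr);
    }

    // Header size grows with path length and address size -- the
    // overhead concern noted above for long paths or IPv6 addresses.
    public int headerOverheadBytes(int bytesPerAddress) {
        return hops.size() * bytesPerAddress;
    }

    public List<String> route() {
        return hops;
    }
}
```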
DSR is truly based on source routing: all routing information is maintained (and continually updated) at the mobile nodes. It has only two major phases, Route Discovery and Route Maintenance. A Route Reply is generated only when the message has reached the intended destination node (the route record initially contained in the Route Request is inserted into the Route Reply). To return the Route Reply, the destination node must have a route to the source node. If such a route is in the destination node's route cache, that route is used. Otherwise, the node reverses the route based on the route record in the Route Reply message header (assuming symmetric links). In the event of a transmission failure, the Route Maintenance phase is initiated, and Route Error packets are generated at a node. The erroneous hop is removed from the node's route cache, and all routes containing that hop are truncated at that point. The Route Discovery phase is then initiated again to determine the most viable route. For information on other similar protocols, see the ad hoc routing protocol list. DSR is an on-demand protocol designed to restrict the bandwidth consumed by control packets in ad hoc wireless networks by eliminating the periodic table-update messages required in the table-driven approach. The major difference between DSR and the other on-demand routing protocols is that it is beacon-less: it does not require the periodic hello packet (beacon) transmissions that a node would otherwise use to inform its neighbors of its presence. The basic approach of this protocol (and all other on-demand routing protocols) during the route construction phase is to establish a route by flooding Route Request packets in the network. The destination node, on receiving a Route Request packet, responds by sending a Route Reply packet back to the source, which carries the route traversed by the Route Request packet received.
Consider a source node that does not have a route to the destination. When it has data packets to be sent to that destination, it initiates a Route Request packet. This Route Request is flooded throughout the network. Each node, upon receiving a Route Request packet, rebroadcasts the packet to its neighbors if it has not already forwarded it and is not the destination node, provided the packet's time to live (TTL) has not been exceeded. Each Route Request carries a sequence number generated by the source node and the path it has traversed. A node, upon receiving a Route Request packet, checks the sequence number on the packet before forwarding it. The packet is forwarded only if it is not a duplicate Route Request. The sequence number on the packet is used to prevent loop formations and to avoid multiple transmissions of the same Route Request by an intermediate node that receives it through multiple paths. Thus, all nodes except the destination forward a Route Request packet during the route construction phase. A destination node, after receiving the first Route Request packet, replies to the source node through the reverse path the Route Request packet had traversed. Nodes can also learn about the neighboring routes traversed by data packets if operated in the promiscuous mode (the mode of operation in which a node can receive the packets that are neither broadcast nor addressed to itself). This route cache is also used during the route construction phase. If an intermediate node receiving a Route Request has a route to the destination node in its route cache, then it replies to the source node by sending a Route Reply with the entire route information from the source node to the destination node.
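The duplicate-suppression rule above, forward a Route Request only the first time a given (source, sequence number) pair is seen, can be sketched like this. The class and method names are illustrative assumptions.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of DSR's duplicate Route Request suppression.
// A node forwards a Route Request only on first sight of its
// (source address, sequence number) pair.
public class DsrDuplicateFilter {
    private final Set<String> seen = new HashSet<String>();

    // Returns true if the request should be forwarded, false if it is
    // a duplicate arriving over another path and must be discarded.
    public boolean shouldForward(String sourceAddr, int seqNum) {
        // Set.add returns false when the element was already present.
        return seen.add(sourceAddr + "#" + seqNum);
    }
}
```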
2.9 Distance Vector Routing
A distance vector routing protocol is one of the two major classes of routing protocols used in packet-switched networks for computer communications, the other major class being the link-state protocol. In distance vector routing, each router maintains a routing table indexed by, and containing one entry for, each router in the subnet. This entry contains two parts: the preferred outgoing line to use for that destination and an estimate of the time or distance to that destination. The metric used might be the number of hops, the time delay in milliseconds, and so on. The router is assumed to know the distance to each of its neighbors; if the metric is hops, that distance is just one hop.
A distance-vector routing protocol requires that a router informs its neighbors of topology changes periodically. Compared to link-state protocols, which require a router to inform all the nodes in a network of topology changes, distance-vector routing protocols have less computational complexity and message overhead.
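The per-destination update a distance-vector router performs on hearing a neighbor's advertisement reduces to taking a minimum over the current estimate and the route via that neighbor. A minimal sketch, with an invented class and method name:

```java
// Minimal sketch of the distance-vector update rule: on hearing a
// neighbor's distance estimate for some destination, a router keeps
// the smaller of its own estimate and
// (cost to neighbor + neighbor's estimate). Purely illustrative.
public class DistanceVector {
    public static int update(int currentEstimate,
                             int costToNeighbor,
                             int neighborEstimate) {
        int viaNeighbor = costToNeighbor + neighborEstimate;
        return Math.min(currentEstimate, viaNeighbor);
    }
}
```

With a hop-count metric, the cost to a neighbor is simply 1, matching the observation above that a neighbor is one hop away.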
Numerous applications run in a client-server environment. A server is a computer system that selectively shares its resources; a client is a computer or computer program that initiates contact with a server in order to make use of a resource. Data, CPUs, printers, and data storage devices are some examples of resources.
This sharing of computer resources is called time-sharing, because it allows multiple people to use a computer (in this case, the server) at the same time. Because a computer does a limited amount of work at any moment, a time-sharing system must quickly prioritize its tasks to accommodate the clients.
A network socket is a lot like an electrical socket. Various plugs around the network have a standard way of delivering their payload. Anything that understands the standard protocol can "plug in" to the socket and communicate.
Internet Protocol (IP) is a low-level routing protocol that breaks data into small packets and sends them to an address across a network, but does not guarantee delivery of those packets to the destination. Transmission Control Protocol (TCP) is a higher-level protocol that manages the reliable transmission of data. A third protocol, User Datagram Protocol (UDP), sits next to TCP and can be used directly to support fast, connectionless, but unreliable transport of packets.
The notion of a socket allows a single computer to serve many different clients at once, as well as serve many different types of information. This feat is managed by the introduction of a port, which is a numbered socket on a particular machine. A server process is said to "listen" to a port until a client connects to it. A server may accept multiple clients connected to the same port number, although each session is unique. To manage multiple client connections, a server process must be multithreaded or have some other means of multiplexing the simultaneous I/O (input/output).
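The listen/connect cycle described above can be demonstrated with a small Java loopback sketch. This is illustrative only: a real service would listen on a well-known port rather than asking the OS for a free one, and would serve many clients rather than a single request.

```java
import java.io.*;
import java.net.*;

// Illustrative sketch of the client/server socket model: the server
// listens on a numbered port, a client "plugs in" by connecting to
// that port, and the server echoes the request back.
public class EchoDemo {
    public static String echoOnce(String request) throws Exception {
        // Port 0 asks the OS for any free port; the client then looks
        // the chosen port up via getLocalPort().
        final ServerSocket server = new ServerSocket(0);
        Thread serverThread = new Thread(new Runnable() {
            public void run() {
                try {
                    Socket client = server.accept();  // block until a client connects
                    BufferedReader in = new BufferedReader(
                        new InputStreamReader(client.getInputStream()));
                    PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                    out.println(in.readLine());       // echo the request back
                    client.close();
                    server.close();
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }
        });
        serverThread.start();

        // Client side: contact the server using its address and port.
        Socket socket = new Socket("localhost", server.getLocalPort());
        PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
        BufferedReader in = new BufferedReader(
            new InputStreamReader(socket.getInputStream()));
        out.println(request);
        String reply = in.readLine();
        socket.close();
        serverThread.join();
        return reply;
    }
}
```

A multithreaded server would spawn one such handler thread per accepted client, which is the multiplexing requirement noted above.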
The client/server model is particularly recommended for networks requiring a high degree of reliability, the main advantages being:
Centralized resources: given that the server is the centre of the network, it can manage resources that are common to all users, for example: a central database would be used to avoid problems caused by redundant and inconsistent data
Improved security: because the number of entry points giving access to the data is smaller
Server level administration: as clients do not play a major role in this model, they require less administration
Scalable network: thanks to this architecture it is possible to remove or add clients without affecting the operation of the network and without the need for major modification
Client/Server architecture also has the following drawbacks:
Increased cost: due to the technical complexity of the server
Weak link: the server is the single weak link in the client/server network, given that the entire network is built around it; this risk is usually mitigated by making the server highly fault tolerant.
A client/server system operates as outlined in the following Figure 2.3. The client sends a request to the server using its IP address and the port, which is reserved for a particular service running on the server. The server receives the request and responds using the client IP address and port.
System Requirement Specification
The requirement specification of the software is generated at the time of the analysis task. Software that is correct in performance and function is built by establishing a detailed information description: a functional representation, a representation of system behavior, an indication of performance requirements and design constraints, and appropriate validation criteria.
Swing - Swing is a collection of classes that provides more powerful and flexible components than those available with the AWT (Abstract Window Toolkit). In addition to familiar components such as buttons, check boxes, and labels, Swing adds new facilities including tabbed panes, scroll panes, trees, and tables.
Hard disk : 40 GB
RAM : 512 MB
Processor Speed : 3.00GHz
Processor : Pentium IV Processor
Language used - Java (Java Development Kit JDK 1.5)
Operating System - Windows XP/2000
3.5 Java and JDK
Java is a platform-independent language which extends its features across the network and supports object-oriented programming (OOP) concepts. It is a high-level language with built-in libraries of reusable software components. Java was introduced by James Gosling, Patrick Naughton, Chris Warth, Ed Frank, and Mike Sheridan in the early 1990s at Sun Microsystems.
A Java program undergoes two steps in its execution process: compilation and interpretation. The compiler translates the Java program into an intermediate language called Java bytecodes, the platform-independent codes interpreted by the Java interpreter. The interpreter parses the Java bytecode instructions and executes them on the system. Compilation happens only once, whereas interpretation happens every time the program is executed. The below Figure 3.1 illustrates how the process flows.
Java bytecodes can be thought of as the machine code instructions for the Java Virtual Machine (Java VM). Every Java interpreter, whether it is a Java development tool or a Web browser that can run Java applets, is an implementation of the Java VM. The JVM can also be implemented in hardware.
Java byte codes help make "write once, run anywhere" possible. The Java program can be compiled into byte codes on any platform that has a Java compiler. The byte codes can then be run on any implementation of the Java VM. For example, the same Java program can run on Windows NT, Solaris, and Macintosh.
Figure 3.1 Java Program Compilation and Interpretation
The Java 2 version came with a new component set called "Swing". It is a collection of classes that provides more powerful and flexible components than those available with the AWT (Abstract Window Toolkit). It is a lightweight package, as its components are not implemented by platform-specific code. Related Swing classes are contained in javax.swing and its sub-packages, such as javax.swing.tree. The components described in Swing have more capabilities than those of the AWT.
JDK - The JDK contains the software and tools needed to compile, debug, and execute applets and applications written in the Java language. The tools in the JDK include javac, the compiler that compiles Java source code into bytecodes; java, the interpreter used to run Java bytecodes; appletviewer, used to view and test applets; and javadoc, the Java documentation tool.
System analysis is the careful study of the different operations performed by a system and their relationships within and outside the system. One feature of system analysis is setting the boundaries of the system and determining whether a candidate system should consider other related systems. The aim of system analysis is to identify what is required from the system, not how the system will reach its goal. During analysis, data are collected on the available files, decision points, and transactions.
The study of the current system is carried out with a view to making the system more effective by identifying inefficiencies.
This project has two important modules, CAR and MCAR. CAR consists of three sub-modules and MCAR consists of two sub-modules, which together represent the entire workflow of the system. The following system design document focuses on these modules. The sub-modules of CAR are:
A. Network Formation
B. Conzone Discovery
C. Differentiated Routing
The sub-modules of MCAR are:
A. Network Formation
B. Setting Modes and Routing Data
CAR is built from three steps: network formation, conzone discovery, and differentiated routing. Together these functions segregate the network into on-conzone and off-conzone nodes. Only HP traffic is routed through on-conzone nodes, and LP traffic is routed out of the conzone.
4.1.1 Network Formation in CAR
During network formation all the nodes are connected and each node is assigned a depth. At the beginning all the nodes are off-conzone. Node N1 is considered a critical area node. Node N1 is connected to N2; N2 is connected to N3, N4, and N5; and nodes N3, N4, and N5 are connected to the sink. The sink has three JPanels: two LP JPanels and one HP JPanel.
CAR generates an HP network (the set of nodes forwarding HP data) by dividing the nodes in the network into on-conzone and off-conzone nodes. Only on-conzone nodes forward HP data. LP data generated inside the conzone is routed out of the conzone.
HP network formation is problematic if the data source moves often, if the congestion zone changes frequently, or if the HP traffic is short-lived. One disadvantage of CAR is therefore the overhead required to rediscover the congestion zone whenever the conzone changes. MCAR overcomes this by addressing the mobility of data sources. Most sensor network applications are characterized by low mobility, in which case CAR is used; for high-mobility applications MCAR is used.
Figure 4.1: Network Formation in CAR
4.1.2 Conzone Discovery in CAR
Using the conzone discovery mechanism, each node determines whether or not it is on the conzone. The conzone must then be discovered from the critical area to the sink for HP data delivery. To do this, critical area nodes broadcast "discover conzone to sink" (To_Sink) messages. Each message includes the ID of the source and its depth and is sent to all neighbors. When a node hears more than a threshold Thre_Alpha of distinct To_Sink messages from its neighbors, it marks itself as on-conzone and, if it is not the sink, propagates a To_Sink message with its own ID and depth to its neighbors. If it is the sink, it does not broadcast a To_Sink message; it simply marks itself as being in the conzone. If the number of To_Sink messages received by a node does not exceed Thre_Alpha, the node marks itself as off-conzone.
Thre_Alpha is computed as Dx*Bx*Nx, where Dx is the depth, Bx is a constant set according to the requirements, and Nx is the neighborhood size (the number of nodes within communication range).
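The conzone-membership test above can be sketched as a small predicate. This is an illustrative sketch of the rule as described here, not code from the CAR paper; the class and parameter names are assumptions.

```java
// Hypothetical sketch of the conzone-membership test: a node marks
// itself on-conzone when it has heard more than
// Thre_Alpha = Dx * Bx * Nx distinct To_Sink messages.
public class ConzoneDiscovery {
    public static double threAlpha(int depth, double bx, int neighborhoodSize) {
        return depth * bx * neighborhoodSize;
    }

    // True when the node should mark itself on-conzone and (unless it
    // is the sink) rebroadcast a To_Sink message of its own.
    public static boolean isOnConzone(int distinctToSinkMsgs,
                                      int depth, double bx, int neighborhoodSize) {
        return distinctToSinkMsgs > threAlpha(depth, bx, neighborhoodSize);
    }
}
```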
4.1.3 Differentiated Routing in CAR
Once the conzone is discovered, differentiated routing routes HP data on the conzone and LP data off the conzone. LP data generated inside the conzone is routed out of the conzone. HP data is received from the sink in the HP JPanel, and LP data is received in either of the two LP JPanels.
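The core forwarding decision reduces to matching a packet's priority against a neighbor's conzone membership. A minimal illustrative sketch, with invented names:

```java
// Illustrative sketch of CAR's differentiated routing rule: HP packets
// stay on the conzone, LP packets are kept off (and pushed out of) it.
public class DifferentiatedRouting {
    // Decide whether a neighbor is an acceptable next hop for a packet.
    public static boolean acceptableNextHop(boolean packetIsHp,
                                            boolean neighborOnConzone) {
        // HP data must traverse the conzone; LP data must avoid it.
        return packetIsHp == neighborOnConzone;
    }
}
```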
MCAR consists of two steps: network formation, and setting modes and routing data.
4.2.1 Network Formation in MCAR
During network formation all the nodes are connected to each other and each node is allocated a depth. Instead of forming an HP network, HP paths are created dynamically, since the sources or sinks are expected to be mobile in MCAR; this is not the case in CAR.
4.2.2 Setting Modes and Routing Data in MCAR
Each node in a network can be set into one of these three modes: LP mode, HP mode or shadow mode. The state of a node will change dynamically and the appropriate data is routed.
LP Mode: In this mode, nodes forward LP data. Initially all nodes are in LP mode. As long as only LP data is received, a node remains in LP mode and forwards the LP data. If a node in LP mode receives HP data, it transitions to HP mode and forwards the HP data.
HP Mode: Nodes on the path of the HP data are in HP mode. A node that handles HP data changes to HP mode. It remains in this mode as long as it receives and forwards HP data. If it receives LP data instead, it either changes to LP mode and forwards the LP data, or changes to shadow mode and drops the LP data.
Shadow Mode: A node in shadow mode transitions back to HP mode if it receives HP data, and forwards that HP data. If it receives LP data, the LP data is dropped. The major disadvantage of MCAR is that it neglects the service provided to LP data: once a node turns to shadow mode, it drops LP data.
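The three modes and their transitions can be sketched as a small state machine. This is an illustrative sketch: the choice an HP-mode node makes on seeing LP data (step down to LP mode, or shield the HP path by entering shadow mode) is modeled here as an explicit policy flag, an assumption of this sketch, and all names are invented.

```java
// Hypothetical sketch of the three MCAR node modes described above.
public class McarNode {
    public enum Mode { LP, HP, SHADOW }

    public Mode mode = Mode.LP;  // every node starts in LP mode

    // Returns true if the packet is forwarded, false if it is dropped.
    // protectHpPath selects, for an HP-mode node receiving LP data,
    // between entering shadow mode (drop) and stepping down to LP
    // mode (forward) -- an assumed policy knob, not from the protocol.
    public boolean receive(boolean isHighPriority, boolean protectHpPath) {
        if (isHighPriority) {
            mode = Mode.HP;      // HP data pulls any node into HP mode
            return true;         // and is always forwarded
        }
        switch (mode) {
            case LP:
                return true;     // LP-mode nodes forward LP data
            case HP:
                if (protectHpPath) {
                    mode = Mode.SHADOW;
                    return false;
                }
                mode = Mode.LP;
                return true;
            default:             // SHADOW
                return false;    // shadow-mode nodes drop all LP data
        }
    }
}
```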
The requirements are translated into a representation of the software, which is assessed for quality by the design process before coding begins. Once the requirements have been gathered and analyzed, the design must clearly identify how the system will be developed to perform the necessary tasks.
5.1 High Level Design
CAR operates entirely in the network layer. Packets are categorized as HP or LP by the data sources, and nodes within a conzone only forward HP traffic. LP traffic is routed out of and/or around the conzone. In effect, the network is divided into two partitions: conzone nodes and non-conzone nodes. An activity diagram for CAR is given in the below Figure 5.1.
Congestion Zone Discovery
Differentiated Routing of HP and LP Data
Figure 5.1: Activity Diagram Showing CAR
MCAR is a combination of a MAC and a routing scheme, which provides support for situations in which the sensors generating HP data may move or the critical events themselves may move. MCAR relies on MAC-layer enhancements capable of forming the conzone on the fly with each burst of data. The ultimate goal is to effectively preempt the flow of LP data, even though this results in serious degradation of LP service.
Route discovery is performed dynamically when an HP event is detected. Once a route is discovered, HP data moves along that path. Route rediscovery may take place if a route break occurs due to node failure or mobility.
In MCAR, it is not necessary to route LP data out of the HP zone. Instead, MCAR is highly aggressive in dropping LP data and clears all contention for the shared channel between LP and HP packets. This is one of the trade-offs between CAR and MCAR. An activity diagram for MCAR is given in the below Figure 5.2.
Figure 5.2: Activity Diagram Showing MCAR
5.2 Low Level Design
The internal logic of each of the modules mentioned in the system design is decided during the low level design. Low level design concentrates on designing the logic of each module. During the implementation phase, the components of the system design are realized with the help of this low level design.
5.2.1 CAR Design
The CAR design performs three steps: network formation, conzone discovery, and differentiated routing. Together these functions divide the network into on-conzone and off-conzone nodes. All HP traffic is routed by on-conzone nodes and LP traffic is routed by off-conzone nodes.
All the nodes are connected and each node is allocated a depth during the formation of the network. During the initial stage all the nodes are off-conzone. An activity diagram for network formation in CAR is depicted in the Figure 5.3. The conzone discovery algorithm then determines whether each node is on-conzone or off-conzone; its flowchart is represented in the Figure 5.4. After this step some nodes will be on-conzone and some nodes will be off-conzone.
Figure 5.3: Activity Diagram for Network Formation in CAR
Figure 5.4: Flowchart for Conzone Discovery Algorithm in CAR
Once the conzone is identified, in differentiated routing, HP data is routed on the conzone and LP data is routed off the conzone. LP data generated inside the conzone is routed out of the conzone. An activity diagram representing differentiated routing in CAR is given in the below Figure 5.5.
Figure 5.5: Activity Diagram for Differentiated Routing in CAR
5.2.2 MCAR Design
MCAR consists of two steps: network formation, and setting modes and routing data. During the formation of the network, the independent nodes are connected and each node is assigned a depth. An activity diagram for network formation in MCAR is given in the Figure 5.6.
When setting modes, each node in the network can be in one of three modes: LP mode, HP mode, or shadow mode. The state of a node changes dynamically and the appropriate data is routed. An activity diagram for setting modes and routing data is given in the below Figure 5.7.
LP mode: In this mode, nodes forward LP data. In the network all the nodes are initially in the LP mode.
HP mode: Nodes in the path of HP data are in the HP mode.
Shadow mode: Nodes in the shadow mode will drop the LP data.
Figure 5.6: Activity Diagram for Network Formation in MCAR
Figure 5.7: Activity Diagram for Setting Modes and Routing Data in MCAR
5.3 Sequence Diagram for CAR and MCAR
A sequence diagram describes system behavior by representing interactions in time sequence. A sequence diagram comprises two dimensions: the vertical dimension, which represents time, and the horizontal dimension, which represents the various objects involved. Each vertical line is an object's lifeline, representing the object's existence during the interaction. Each message is represented by an arrow between the lifelines of two objects. The order of occurrence of these messages is shown top to bottom in the figure. The objects in this sequence diagram are network formation, message transformation, and conzone discovery. Sequence diagrams for CAR and MCAR are given in Figure 5.8 and Figure 5.9.