Flow State in Dynamic Source Routing is an extension to the on-demand DSR routing protocol. It allows most packets to be routed without an explicit source route header in the packet, further reducing the overhead of the protocol while still preserving the fundamental properties of DSR's operation.
A source node sending packets to some destination node MAY use the DSR flow state extension described here to establish a route to that destination as a flow. A "flow" is a route from the source to the destination represented by hop-by-hop forwarding state within the nodes along the route. Each flow is uniquely identified by a combination of the source node address, the destination node address, and a flow identifier (flow ID) chosen by the source node. Each flow ID is a 16-bit unsigned integer.
A DSR Flow State header in a packet identifies the flow ID to be followed in forwarding that packet. From a given source to some destination, any number of different flows MAY exist and be in use, for example following different sequences of hops to reach the destination. One of these flows may be considered to be the "default" flow from that source to that destination. A packet that carries neither a DSR header specifying the route to be taken (with a Source Route option in the DSR header) nor a DSR Flow State header specifying the flow to be followed is forwarded along the default flow for the source and destination addresses specified in the packet's IP header.
In establishing a new flow, the source node generates a nonzero 16-bit flow ID greater than any unexpired flow IDs for this (source, destination) pair. If the source wishes for this flow to become the default flow, the low bit of the flow ID MUST be set (the flow ID is an odd number); otherwise, the low bit MUST NOT be set (the flow ID is an even number).
The source node establishing the new flow then transmits a packet containing a DSR header with a Source Route option, as defined in the base specification for DSR. To establish the flow, the source node also MUST include in the packet a DSR Flow State header, with the Flow ID field set to the chosen flow ID for the new flow, and MUST include a Timeout option in the DSR header, giving the lifetime after which information about this flow is to expire.
The source node also records this flow in its Flow Table for future use, setting the TTL in this Flow Table entry to be the value used in the TTL field in the packet's IP header and setting the Lifetime in this entry to be the lifetime specified in the Timeout option in the DSR header.
Any further packets sent with this flow ID before the timeout that also contain a DSR header with a Source Route option MUST use this same source route in the Source Route option.
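The flow-ID selection rules above (nonzero, 16-bit, greater than any unexpired ID for the same source/destination pair, with the low bit marking a default flow) can be sketched as follows. The function name is illustrative, not part of the DSR specification:

```python
def choose_flow_id(unexpired_ids, default=False):
    """Pick a nonzero 16-bit flow ID greater than all unexpired flow IDs
    for this (source, destination) pair; an odd ID marks the default flow."""
    candidate = max(unexpired_ids, default=0) + 1
    # Force the low bit: odd for a default flow, even otherwise.
    if default and candidate % 2 == 0:
        candidate += 1
    elif not default and candidate % 2 == 1:
        candidate += 1
    if not 0 < candidate <= 0xFFFF:
        raise ValueError("flow ID space exhausted for this pair")
    return candidate
```

For example, the first default flow to a new destination gets ID 1, and a later non-default flow established while ID 3 is still unexpired gets ID 4.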
3.2.3 Flow oriented protocols
This type of protocol finds a route on demand by following present flows. One option is to unicast consecutively when forwarding data while promoting a new link. The main disadvantages of such algorithms are:
Exploring new routes takes a long time when no prior knowledge is available.
They may rely on estimates of existing traffic to compensate for missing knowledge of routes.
The Multipath On-demand Routing (MOR) protocol connects nodes in wireless sensor networks. It is an ad hoc routing protocol that is reactive, or on-demand, meaning that it establishes routes as needed. The advantage of this approach is obvious if only a few routes are needed, since the routing overhead is lower than with the proactive approach of establishing routes whether or not they are needed. The disadvantage of on-demand route establishment is that connections take more time when the route first needs to be established.
MOR lessens the disadvantages of on-demand routing in wireless sensor networks by having the likely targets of communication perform an initial broadcast. This allows all recipients to have a route to these nodes.
The main characteristic distinguishing MOR from other ad-hoc routing protocols is that it maintains multiple routes to each destination, when available, whereas most other such protocols only keep a single route. There are many advantages to having multiple routes when possible, including
Potentially better load balancing
More even energy consumption (a consequence of better load balancing)
Each node in MOR remembers all next-hop nodes that are closer to a given destination for which a route exists. It then sends successive packets to each such node in round-robin fashion. If a next-hop node fails to acknowledge a given packet, the retransmission is attempted to another such node, if possible. This allows automatic and graceful recovery from occasional localized congestion as well as from longer-term node unavailability.
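The round-robin forwarding with failover described above can be sketched roughly as follows. The class name and the `send` callback are invented for illustration; the real protocol operates on link-layer acknowledgements rather than a Python function:

```python
import itertools

class MorForwarder:
    """Round-robin forwarding over all known next hops that are closer
    to the destination, with failover on missing acknowledgements."""

    def __init__(self, next_hops):
        self.next_hops = list(next_hops)
        self._rr = itertools.cycle(self.next_hops)

    def forward(self, packet, send):
        # `send(hop, packet)` returns True if the hop acknowledged the packet.
        for _ in range(len(self.next_hops)):
            hop = next(self._rr)
            if send(hop, packet):
                return hop
        return None  # no next hop acknowledged; delivery failed
```

Successive calls to `forward` spread packets across the next hops, and an unacknowledged transmission is simply retried at the next hop in the cycle.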
Lightweight Underlay Network Ad-Hoc Routing (LUNAR) is a routing protocol based on ARP forwarding. A source node that wants to send an IP packet to a destination in the same LUNAR network internally emits a standard ARP request. The LUNAR routing layer intercepts this ARP request and maps it to a LUNAR Route Request (RREQ) message that is broadcast and forwarded inside the network. Once the destination node receives this RREQ message, it sends back a Route Reply (RREP) and builds two independently managed unicast routes (one for each direction) between the source and the destination nodes. When the source receives the RREP message, it maps it back to an ARP reply message, permitting the source's IP stack to send IP datagrams to the destination node. Thus, none of the involved IP stacks is aware of the potential multi-hop nature of the underlying LUNAR network.
LUNAR routes have a limited lifetime and need to be re-established at regular intervals. The corresponding LUNAR system parameter has been set to 3 seconds, i.e., a data route is in use for 3 seconds before it is replaced by a newly constructed one.
The routing and forwarding strategy of LUNAR is simpler than that of other MANET routing protocols: LUNAR features neither route repair, route caching, nor packet salvaging. Nevertheless, simulations revealed that LUNAR scales without problems to networks of 40 nodes and 10 hops. Despite these observations, this document conservatively recommends LUNAR operation for networks with fewer than 15 nodes and even imposes a network diameter of 3 hops. Factors like TCP unfairness over IEEE 802.11 (WiFi) become so dominant beyond this range that the best thing the LUNAR protocol can do is to impose an artificial 3-hop zone in order to let users operate with parameters yielding acceptable network performance.
LUNAR has four building blocks:
The IP adaption and steering layer (ASL) is responsible for translating events at the border to the IP stack into LUNAR-specific actions and vice versa. Among other things, this includes intercepting ARP and DHCP requests and mapping them to the corresponding LUNAR primitives. This layer also takes care of the forced rediscovery of routes every 3 seconds.
The LUNAR protocol engine implements the core routing algorithm. It handles the generation of LUNAR RREQ and RREP messages and their processing at intermediate and destination nodes. As a result, data routes will be created.
The eXtensible Resolution Protocol (XRP) is the request/reply protocol and data format through which the LUNAR protocol engines interact with each other.
The Selector Network (SelNet) is a forwarding layer which supports the creation of data routes composed of Network Pointers [NETPTR]. It is independent of IP addresses and the IP routing layer and works directly on top of the link layer.
The basic operation of SrcRR is similar to DSR with link caches: SrcRR is a reactive routing protocol with source-routed data traffic.
Every node running SrcRR maintains a link cache, which tracks the ETX metric values for links it has heard about recently. Whenever a change is made to the link cache, the node locally runs Dijkstra's weighted shortest-path algorithm on this database to find the current minimum-metric routes to all other nodes in the network. To ensure that only fresh information is used for routing, a link metric that has not been updated within 30 seconds is dropped from the link cache.
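A minimal sketch of such a link cache, assuming the 30-second freshness window from the text; the class shape and directed-link representation are illustrative, not taken from the SrcRR implementation:

```python
import heapq
import time

LINK_TIMEOUT = 30.0  # seconds; stale links are dropped, per the text

class LinkCache:
    def __init__(self):
        self.links = {}  # (a, b) -> (etx_metric, last_update_time)

    def update(self, a, b, etx, now=None):
        self.links[(a, b)] = (etx, now if now is not None else time.time())

    def expire(self, now=None):
        now = now if now is not None else time.time()
        self.links = {k: v for k, v in self.links.items()
                      if now - v[1] <= LINK_TIMEOUT}

    def best_route(self, src, dst):
        """Dijkstra over cached links; returns (total ETX, hop list) or None."""
        dist = {src: (0.0, [src])}
        heap = [(0.0, src)]
        visited = set()
        while heap:
            d, node = heapq.heappop(heap)
            if node in visited:
                continue
            visited.add(node)
            if node == dst:
                return dist[node]
            for (a, b), (etx, _) in self.links.items():
                if a == node and b not in visited:
                    nd = d + etx
                    if b not in dist or nd < dist[b][0]:
                        dist[b] = (nd, dist[node][1] + [b])
                        heapq.heappush(heap, (nd, b))
        return None
```

Because ETX sums link transmission counts, the minimum-metric route may well take more hops than the shortest path, e.g. preferring two good links over one lossy one.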
When a node wants to send data to a node to which it does not have a route, it floods a route request. When a node receives a route request, it appends its own node ID, as well as the current ETX metric from the node from which it received the request, and rebroadcasts it. A node will always forward a given route request the first time it receives it. If it receives the same route request again over a different route, it will forward it again if the accumulated route metric is better than the best metric it has forwarded so far. This ensures that the target of the route request will receive the best routes.
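The duplicate-suppression rule above (always forward the first copy of a route request, and re-forward only when a later copy carries a better accumulated metric) can be sketched like this; the class and field names are invented for illustration:

```python
class RreqFilter:
    """Decide whether to rebroadcast a route request: forward the first
    copy seen, and any later copy with a strictly better (lower) metric."""

    def __init__(self):
        self.best_forwarded = {}  # request id -> best accumulated ETX forwarded

    def should_forward(self, req_id, accumulated_metric):
        best = self.best_forwarded.get(req_id)
        if best is None or accumulated_metric < best:
            self.best_forwarded[req_id] = accumulated_metric
            return True
        return False
```

This trades some extra flooding for route quality: a worse copy is dropped, but a better late arrival still propagates, so the target ends up seeing the best routes.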
When a node receives a route request for which it is the target, it reverses the accumulated route and uses this as the source-route for a route reply. When the original source node receives this reply, it adds each of the links to its link cache, and then source-routes data over the minimum-metric path to the destination.
When a SrcRR node forwards a source-routed data packet, it updates its entry in the source route to contain the latest ETX metric for the link on which it received the packet. This allows the source and destination to maintain up-to-date link caches and to discover when a route's quality has declined enough that an alternate route would be better; it also lets them learn of the existence and metric of some alternate links. As with all changes to the link cache, this prompts re-computation of all the best routes using Dijkstra's algorithm.
3.2.4 Hybrid protocols
This type of protocol combines the advantages of proactive and reactive routing. The routing is initially established with some proactively prospected routes and then serves the demand from additionally activated nodes through reactive flooding. The choice of one or the other method requires predetermination for typical cases. The main disadvantages of such algorithms are:
The advantage depends on the number of nodes activated.
The reaction to traffic demand depends on the gradient of the traffic volume.
The Temporally-Ordered Routing Algorithm (TORA) is an adaptive routing protocol for multihop networks that possesses the following attributes:
Reactive or proactive route establishment and maintenance, and
Minimization of communication overhead via localization of algorithmic reaction to topological changes.
TORA is distributed, in that routers need only maintain information about adjacent routers (i.e., one-hop knowledge). Like a distance-vector routing approach, TORA maintains state on a per-destination basis. However, TORA does not continuously execute a shortest-path computation and thus the metric used to establish the routing structure does not represent a distance. The destination-oriented nature of the routing structure in TORA supports a mix of reactive and proactive routing on a per-destination basis. During reactive operation, sources initiate the establishment of routes to a given destination on-demand. This mode of operation may be advantageous in dynamic networks with relatively sparse traffic patterns, since it may not be necessary (nor desirable) to maintain routes between every source/destination pair at all times. At the same time, selected destinations can initiate proactive operation, resembling traditional table-driven routing approaches. This allows routes to be proactively maintained to destinations for which routing is consistently or frequently required (i.e., servers or gateways to hardwired infrastructure).
TORA has been designed to work on top of lower layer mechanisms or protocols that provide the following basic services between neighboring routers:
Link status sensing and neighbor discovery
Reliable, in-order control packet delivery
Link and network layer address resolution and mapping
Events such as the reception of control messages and changes in connectivity with neighboring routers trigger TORA's algorithmic reactions.
A logically separate version of TORA is run for each "destination" to which routing is required. The following discussion focuses on a single version of TORA running for a given destination. The term destination is used herein to refer to a traditional IP routing destination, which is identified by an IP address and mask (or prefix). Thus, the route to a destination may correspond to the individual address of an interface on a specific machine (i.e., a host route) or an aggregation of addresses (i.e., a network route).
TORA assigns directions to the links between routers to form a routing structure that is used to forward datagrams to the destination. A router assigns a direction ("upstream" or "downstream") to the link with a neighboring router based on the relative values of a metric associated with each router. The metric maintained by a router can conceptually be thought of as the router's "height" (i.e., links are directed from the higher router to the lower router). The significance of the heights and the link directional assignments is that a router may only forward datagrams downstream. Links from a router to any neighboring routers with an unknown or undefined height are considered undirected and cannot be used for forwarding.
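The height-based link direction rule above can be sketched as follows. In the full protocol a height is an ordered tuple compared lexicographically; a plain number stands in for it here, and the function names are invented for illustration:

```python
def link_direction(my_height, neighbor_height):
    """Classify the link to a neighbor as seen by the local router."""
    if my_height is None or neighbor_height is None:
        return "undirected"   # unknown/undefined height: unusable for forwarding
    if neighbor_height < my_height:
        return "downstream"   # datagrams may be forwarded toward this neighbor
    return "upstream"

def usable_next_hops(my_height, neighbor_heights):
    # Only downstream links may carry datagrams toward the destination.
    return [n for n, h in neighbor_heights.items()
            if link_direction(my_height, h) == "downstream"]
```

A router at height 5 with neighbors at heights 3, 7, and unknown thus forwards only toward the height-3 neighbor.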
Zone Routing Protocol (ZRP) is a protocol that divides the topology into zones and seeks to utilize different routing protocols within and between the zones, based on the weaknesses and strengths of these protocols. ZRP is totally modular, meaning that any routing protocol can be used within and between zones. The size of the zones is defined by a parameter r describing the radius in hops. Figure 5 illustrates a ZRP scenario with r set to 1. Intra-zone routing is done by a proactive protocol, since these protocols keep an up-to-date view of the zone topology, which results in no initial delay when communicating with nodes within the zone. Inter-zone routing is done by a reactive protocol. This eliminates the need for nodes to keep a proactively fresh state of the entire network.
Fig. 5. ZRP scenario with r set to 1
ZRP defines a technique called the Bordercast Resolution Protocol (BRP) to control traffic between zones. If a node has no route to a destination provided by the proactive intra-zone routing, BRP is used to spread the reactive route request.
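The resulting lookup logic can be sketched as follows, using r = 1 as in Figure 5; the table shape and the `bordercast` callback are illustrative, not part of the ZRP specification:

```python
ZONE_RADIUS = 1  # the parameter r from the text (Figure 5)

def route_lookup(dest, intra_zone_table, bordercast):
    """intra_zone_table maps nodes within r hops to (next_hop, hop_count),
    maintained proactively; bordercast(dest) triggers the reactive
    inter-zone route request and returns the discovered next hop."""
    entry = intra_zone_table.get(dest)
    if entry is not None and entry[1] <= ZONE_RADIUS:
        return entry[0]        # proactive route: no discovery delay
    return bordercast(dest)    # destination outside the zone: go reactive
```

This captures ZRP's trade-off: destinations inside the zone are reached with no setup delay, while only out-of-zone destinations pay the cost of reactive discovery.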
3.2.5 Hierarchical protocols
With this type of protocol, the choice between proactive and reactive routing depends on the hierarchical level at which a node resides. The routing is initially established with some proactively prospected routes and then serves the demand from additionally activated nodes through reactive flooding on the lower levels. The choice of one or the other method requires proper attribution to the respective levels. The main disadvantages of such algorithms are:
The advantage depends on the depth of nesting and the addressing scheme.
The reaction to traffic demand depends on the meshing parameter.
Cluster Based Routing Protocol (CBRP) is a routing protocol designed for use in mobile ad hoc networks. The protocol divides the nodes of the ad hoc network into a number of overlapping or disjoint 2-hop-diameter clusters in a distributed manner. A cluster head is elected for each cluster to maintain cluster membership information. Inter-cluster routes are discovered dynamically using the cluster membership information kept at each cluster head. By clustering nodes into groups, the protocol efficiently minimizes the flooding traffic during route discovery and speeds up this process as well.
Furthermore, the protocol takes into consideration the existence of unidirectional links and uses these links for both intra-cluster and inter-cluster routing.
Fisheye State Routing (FSR) is a table-driven, or proactive, routing protocol. It is based on the link state protocol and can immediately provide route information when needed. The fisheye scope technique allows link state messages to be exchanged at different intervals for nodes at different fisheye scope distances, which reduces the link state message size. A further optimization lets FSR broadcast topology messages only to neighbors in order to reduce the flooding overhead. With these optimizations, FSR significantly reduces the topology exchange overhead and scales well to large networks.
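A minimal sketch of the fisheye update scheduling: nearby entries are exchanged often, distant ones rarely. The scope boundaries and interval values below are invented for illustration and are not FSR's actual parameters:

```python
# (max hop distance, update interval in seconds) per fisheye scope;
# everything beyond the last scope uses the outer interval.
SCOPES = [(2, 1.0), (6, 5.0)]
OUTER_INTERVAL = 15.0

def update_interval(hop_distance):
    for max_hops, interval in SCOPES:
        if hop_distance <= max_hops:
            return interval
    return OUTER_INTERVAL

def entries_due(topology, now, last_sent):
    """Select destinations whose link-state entry belongs in this update.
    topology maps destination -> hop distance; last_sent maps
    destination -> time its entry was last included."""
    return [dst for dst, hops in topology.items()
            if now - last_sent.get(dst, 0.0) >= update_interval(hops)]
```

Since each periodic message carries mostly inner-scope entries, the message size shrinks while nearby routes, which matter most for the next hop, stay accurate.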
FSR routes each data packet according to a locally computed routing table that uses the most recent topology information. The fisheye scope update scheme loses no routing accuracy for inner-scope nodes. For outer-scope nodes, the information in routing entries may blur due to the longer exchange interval, but the extra work of "finding" the destination (as in on-demand routing) is not necessary, so low single-packet transmission latency can be maintained. In a mobile environment, this inaccuracy for remote nodes increases. However, as a packet approaches its destination, it finds increasingly accurate routing instructions as it enters sectors with a higher refresh rate.
FSR does not trigger any control messages when a link failure is reported, which makes it suitable for environments with frequent topology changes. The broken link is simply not included in the next fisheye scope link state message exchange. Sequence numbers and table refreshment enable FSR to maintain the latest link state information and remain loop-free over an unreliable propagation medium and in a highly mobile network.
The protocol is distributed and works independently of the IP packet format. It can be implemented either in the network layer or in the application layer, and it deals only with routing table management for the network system.
The characteristic features of Hierarchical State Routing (HSR)  are multilevel clustering and logical partitioning of mobile network nodes. The physical network nodes are partitioned into clusters and clusterheads are elected as in a cluster-based algorithm.
Additionally, clusterheads at a low level become members of the next higher level. These new virtual cluster members organize themselves into clusters again, and so on; this process leads to a hierarchical topology. The IDs at level 0 are physical addresses (similar to MAC addresses) and thus unique. Level 1 and Level 2 clusters, for example, are generated by the recursive selection of clusterheads. Those upper-level clusters are therefore only virtual, with so-called virtual links between nodes.
Nodes belonging to a physical cluster broadcast their link information to each other. Each clusterhead summarizes all information about its cluster and sends it to neighboring clusterheads via gateways. This knowledge of neighboring clusterheads leads to the formation of the next-level clusters.
As already mentioned, clusterheads are members of a virtual cluster on the next higher level, and they exchange their own link information as well as the summarized lower-level information among each other.
A node at a virtual cluster level floods the information that it obtains down to its lower level, so the lower level learns about the hierarchical topology. As a result, each node has a hierarchical address, called an HID (Hierarchical ID).
A hierarchical address is assigned by using the clusterhead (or node) IDs on the way from the root (top level) down to the node (physical level). An HID can be considered a series of MAC addresses.
As a gateway can be reached from the top-level via more than one path, it can have more than one HID. A hierarchical address is enough to ensure delivery from anywhere in the network to a specific host. Each node will dynamically keep its HID up-to-date upon receiving updates from higher-level nodes.
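A toy sketch of this addressing scheme, with the HID written as a dotted series of IDs from the top level down to the physical node; the helper names and the dotted notation are invented for illustration:

```python
def make_hid(path_from_root):
    """Build an HID from the clusterhead (or node) IDs on the way
    from the top level down to the physical node."""
    return ".".join(path_from_root)

def next_level_hop(hid, levels_resolved):
    """Given how many levels of the HID have already been resolved,
    return the next clusterhead (or node) to forward toward."""
    parts = hid.split(".")
    return parts[levels_resolved] if levels_resolved < len(parts) else None
```

Delivery then amounts to resolving the HID level by level, which is why a single hierarchical address suffices to reach a host from anywhere in the network.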
The Landmark Routing Protocol (LANMAR) utilizes the concept of a landmark for scalable routing in large, mobile ad hoc networks.
It relies on the notion of group mobility: i.e., a logical group (for example, a team of coworkers at a convention) moves in a coordinated fashion. The existence of such a logical group can be efficiently reflected in the addressing scheme. LANMAR assumes that an IP-like address is used, consisting of a group ID (or subnet ID) and a host ID, i.e., <Group ID, Host ID>. A landmark is dynamically elected in each group. The route to a landmark is propagated throughout the network using a Distance Vector mechanism.
Separately, each node in the network uses a scoped routing algorithm (e.g., FSR) to learn about routes within a given scope (a maximum number of hops). To route a packet to a destination outside its scope, a node directs the packet to the landmark corresponding to the group ID of that destination. Once the packet approaches the landmark, it will typically be routed directly to the destination. A solution for nodes outside the scope of their landmark (i.e., drifters) is also addressed in the draft. Thus, by summarizing the routing information of remote groups of nodes in the corresponding landmarks and by using the truncated local routing table, LANMAR dramatically reduces routing table size and routing update overhead in large networks. The dynamic election of landmarks enables LANMAR to cope with mobile environments.
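The forwarding decision described above can be sketched as follows, with destinations written as (group ID, host ID) pairs as in the text; the table names are invented for illustration:

```python
def lanmar_next_hop(dest, local_table, landmark_table):
    """dest is a (group_id, host_id) pair. local_table holds the scoped
    (e.g. FSR-style) routes; landmark_table maps a group ID to the
    address of that group's elected landmark."""
    if dest in local_table:
        return local_table[dest]       # within scope: route directly
    group_id = dest[0]
    landmark = landmark_table.get(group_id)
    if landmark is None:
        return None                    # no landmark known for this group
    # Out of scope: forward toward the landmark; the distance-vector
    # mechanism keeps a route to every landmark in the local table.
    return local_table.get(landmark)
```

As the packet nears the landmark it enters the destination's scope, at which point the first branch takes over and the packet is delivered directly.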
LANMAR is well suited to provide an efficient and scalable routing solution in large, mobile, ad hoc environments in which group behavior applies and high mobility renders traditional routing schemes inefficient.