In general, the IEEE 802.11 standard covers the MAC sub-layer and the physical layer of the Open Systems Interconnection (OSI) network reference model; the Logical Link Control (LLC) sub-layer is specified in the IEEE 802.2 standard. This architecture provides a transparent interface to higher-layer users: stations may move and roam through an 802.11 wireless network while still appearing stationary to the 802.2 LLC sub-layer and above. This allows existing network protocols (such as TCP/IP) to run over IEEE 802.11 without any special considerations, just as if IEEE 802.3 wired Ethernet were deployed.
At the PHY layer, the original IEEE 802.11 standard provides three options in the 2.4 GHz band: an infrared (IR) baseband PHY, a Frequency Hopping Spread Spectrum (FHSS) radio, and a Direct Sequence Spread Spectrum (DSSS) radio. All three support both 1 and 2 Mbps operation. In 1999, the IEEE defined 802.11b, offering up to 11 Mbps in the 2.4 GHz licence-exempt ISM (Industrial, Scientific, and Medical) band, and 802.11a, offering up to 54 Mbps using OFDM in the 5 GHz band. The ongoing 802.11g effort will extend the 2.4 GHz 802.11b PHY layer to support rates of at least 20 Mbps. Moreover, 802.11h will enhance 802.11a in the 5 GHz band, adding indoor and outdoor channel selection for the 5 GHz licence-exempt bands in Europe. At the MAC layer, the ongoing 802.11e effort adds QoS support to 802.11 wireless networks, and 802.11i will enhance the security and authentication mechanisms of the 802.11 MAC. The IEEE 802.11 MAC sub-layer defines two related medium access coordination functions: the Distributed Coordination Function (DCF) and the optional Point Coordination Function (PCF). The transmission medium can operate both in contention mode (DCF) and in contention-free mode (PCF), and the IEEE 802.11 MAC protocol accordingly provides two types of transmission: asynchronous and synchronous.
The asynchronous type of transmission is provided by the DCF, which implements the basic access method of the 802.11 MAC protocol. The DCF is based on the CSMA/CA protocol and must be implemented in all stations. The synchronous service is provided by the PCF, which implements a polling-based access method.
3.10.1 Distributed Coordination Function (DCF)
The basic scheme for DCF is Carrier Sense Multiple Access (CSMA). This protocol has two variants: Collision Detection (CSMA/CD) and Collision Avoidance (CSMA/CA). A collision can be caused by two or more stations using the same channel at the same time after waiting a channel idle period, or (in wireless networks) by two or more hidden terminals emitting at the same time. CSMA/CD is used in Ethernet (IEEE 802.3) wired networks. Whenever a node detects that the transmitted signal is different from the one on the channel, it aborts transmission, saving useless collision time. This mechanism is not possible in wireless communications, as nodes cannot listen to the channel while transmitting, due to the big difference between transmitted and received power levels. In this case, after each frame transmission the sender waits for an acknowledgment (ACK) from the receiver, as shown in Figure 3.5.
Figure 3.5 Basic Access Scheme
The Source axis shows data transmitted by the source. The destination responds with an ACK, represented on the Destination axis. The third axis represents the network state as seen by other nodes. Propagation delay is not shown in the figure.
If no ACK is returned, a collision must have occurred and the frame is retransmitted. However, this technique can waste a lot of time for long frames, keeping the transmission going while congestion is taking place (caused by a hidden terminal, for example). This can be solved by introducing an optional RTS/CTS scheme (Request to Send and Clear to Send, respectively) in addition to the basic scheme. In the optional RTS/CTS scheme, a station sends an RTS before each frame transmission to reserve the channel. The destination responds with a CTS if it is ready to receive and the channel is idle for the packet duration. When the source receives the CTS, it starts transmitting its frame, assured that the channel is "reserved" for the frame duration. All other nodes update their Network Allocation Vector (NAV) each time they hear an RTS, CTS or data frame. The NAV is used for virtual carrier sensing, detailed in the next paragraph.
This scheme is shown in Figure 3.6. The overhead caused by the transmission of RTS/CTS frames becomes considerable when data frames are small, and suboptimal channel usage results. Reference  discusses optimal data-frame sizes (the RTS threshold) above which use of the RTS/CTS scheme is recommended.
Figure 3.6 RTS/CTS Access Scheme
Not all packet types have the same priority. For example, ACK packets should have priority over RTS or data packets. This is achieved by assigning to each packet type a certain Inter-frame Spacing (IFS) that must elapse, once the channel becomes idle, before a packet of that type may be transmitted. In DCF two IFSs are used: the Short IFS (SIFS) and the DCF IFS (DIFS), where SIFS is shorter than DIFS. As a result, if an ACK (governed by SIFS) and a new data packet (governed by DIFS) are waiting simultaneously for the channel to become idle, the ACK is transmitted before the new data packet (the former waits only SIFS whereas the data packet waits DIFS). Carrier sensing is performed on both layers. On the physical layer, physical carrier sensing detects any channel activity caused by other sources. On the MAC sub-layer, virtual carrier sensing is done by updating a local NAV with the value of other terminals' transmission durations. This duration is declared in data, RTS and CTS frames. Using the NAV, a node's MAC knows when the current transmission will end. The NAV is updated upon hearing an RTS from the sender and/or a CTS from the receiver, so the hidden node problem is avoided.
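The NAV mechanism described above can be sketched as a small state machine. The following is a minimal illustration (class and method names are my own, not from the standard): the node records the channel-reservation expiry advertised in overheard frames and combines it with physical carrier sensing.

```python
class VirtualCarrierSense:
    """Tracks the Network Allocation Vector (NAV) for virtual carrier sensing."""

    def __init__(self):
        self.nav_expiry = 0.0  # time until which the medium is considered reserved

    def on_frame_heard(self, now, duration):
        """RTS, CTS and data frames carry a duration field; keep the latest expiry."""
        self.nav_expiry = max(self.nav_expiry, now + duration)

    def medium_idle(self, now, physical_idle):
        """The medium counts as idle only if physical CS and the NAV both agree."""
        return physical_idle and now >= self.nav_expiry
```

A node overhearing an RTS thus defers even if it cannot physically sense the subsequent data frame, which is exactly how the hidden-node problem is mitigated.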
The collision avoidance part of CSMA/CA consists of avoiding packet transmission right after the channel is sensed idle (plus DIFS time), so that the packet does not collide with other "waiting" packets. Instead, a node with a packet ready for transmission waits a random back-off time after the channel has been idle for DIFS, as shown in Figure 3.5 and Figure 3.6. The back-off time of each node is decreased as long as the channel is sensed idle (during the so-called contention window). When the channel is busy, the back-off time is frozen. When the back-off time reaches zero, the node transmits its frame; but if the channel is sensed busy because of another "waiting" frame, the node computes a new random back-off time with a new range. This range increases exponentially as 2^(2+i), where i (initially equal to 1) is the transmission attempt number. Therefore, the back-off time equation is:
Back-off time = rand() × Slot time, where Slot time is a function of physical layer parameters and rand() is a random function with a uniform distribution over [0, CW]. There is an upper limit on the number of retransmission attempts i, above which the frame is dropped. Collision avoidance is applied to data packets in the basic scheme and to RTS packets in the RTS/CTS scheme. All nodes have an equal probability of accessing the channel, and thus share it equally, but this method gives no guarantees on queuing delay and provides no service differentiation.
3.10.2 QoS Issues in DCF
DCF can only support best-effort services, with no QoS guarantees. Typically, time-bounded services such as Voice over IP, audio and video conferencing require specified bandwidth, delay and jitter, but can tolerate some losses. However, in DCF mode, all the stations in the network, and all the flows within one station, compete for the resources and the channel with the same priority. There is no differentiation mechanism to guarantee bandwidth, packet delay or jitter for high-priority stations or multimedia flows. Throughput degradation and high delay are caused by the increasing time spent contending for channel access.
3.10.3 IEEE802.11e (EDCF)
EDCF  is a main part of the 802.11e  standard for service differentiation. It prioritizes traffic categories through different contention parameters, including the Arbitration Interframe Space (AIFS), the maximum and minimum back-off window sizes (CWmax/CWmin), and a multiplication factor for expanding the back-off window.
Although all traffic categories use the same DCF access method, the differentiated contention parameters give them different probabilities of winning the channel contention.
EDCF makes two improvements for providing differentiation. First, it includes a QoS parameter set element which sets the contention window values and AIFS (Arbitration Inter frame Space) values for prioritized EDCF channel access during the contention period. Classes with smaller AIFS have higher priority. Second, to achieve better medium utilization, packet bursting is used, i.e., when a station has gained access to the medium, it can be allowed to send more than one frame without contending for the medium again. EDCF provides good traffic differentiation, but it causes starvation of low priority flows under high traffic load.
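The per-class differentiation can be illustrated with a small parameter table. The class names and numeric values below are illustrative assumptions, not the actual 802.11e default tables; the point is only that a smaller AIFS and a smaller CWmin jointly imply a statistically earlier channel access.

```python
# Hypothetical EDCF access-category parameters (illustrative values only).
EDCF_CLASSES = {
    "voice":       {"aifs_slots": 2, "cw_min": 7,  "cw_max": 15},
    "video":       {"aifs_slots": 2, "cw_min": 15, "cw_max": 31},
    "best_effort": {"aifs_slots": 3, "cw_min": 31, "cw_max": 1023},
    "background":  {"aifs_slots": 7, "cw_min": 31, "cw_max": 1023},
}

def contention_rank(cls):
    """Smaller AIFS, then smaller CWmin, means higher access priority."""
    p = EDCF_CLASSES[cls]
    return (p["aifs_slots"], p["cw_min"])

# Sorting by this rank orders classes from highest to lowest priority.
ordered = sorted(EDCF_CLASSES, key=contention_rank)
```

Under high load, the low-rank classes win contention so rarely that the starvation effect mentioned above appears.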
3.11 Service Differentiation
Service differentiation is an important aspect of providing QoS in wireless networks. In this Section we look at service differentiation mechanisms and how they provide QoS in wireless ad-hoc networks. We then highlight the challenges this aspect faces in regard to flow reservation and admission control.
In many ad-hoc network applications, such as disaster rescue, communication terminals may have different priority ranks. Many applications that are deployable in ad hoc networks, such as multimedia applications, may have different delivery requirements, i.e., low delay and jitter, and high throughput. For instance, a typical Voice over IP (VoIP) traffic session has the requirement of very low transmission delay. While multimedia streaming traffic is more tolerant to latency than VoIP traffic, it requires more bandwidth. We can therefore label different traffic classes with different priority levels and provide service differentiation among traffic flows.
The essential problem of providing QoS in multi-hop ad-hoc networks is trying to admit as many traffic flows as possible in order to achieve high efficiency of the channel usage, while at the same time providing service quality guarantees according to traffic priority.
A number of recent proposals allow service differentiation among stations, or even among traffic classes, within the 802.11 standard. This differentiation is achieved by assigning different priorities for wireless medium access to the stations that contend for it. These proposals suggest modifications to the DCF mode.
These techniques can be classified according to the parameter used to achieve differentiation: DIFS, back-off, frame size, and RTS/CTS threshold. The DIFS-based scheme configures wireless stations with different DIFS values according to the priority one wishes to assign to each station: the larger the DIFS (in number of slots), the lower the station's priority. To avoid contention among stations with different priorities, the maximum contention window of a station with priority j is chosen such that DIFS_j plus that window is never larger than the DIFS of the next lower priority. This guarantees that a higher-priority station has no frames left to send by the time a lower-priority station starts transmitting.
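The non-overlap condition for the DIFS-based scheme can be checked mechanically. The sketch below (my own formulation of the constraint stated above, with everything expressed in slot units) verifies that for each priority level j, DIFS_j plus the maximum contention window fits before DIFS of the next lower priority begins:

```python
def non_overlapping(difs, cw_max):
    """Check DIFS_j + CW_max(j) <= DIFS_{j+1} for consecutive priority
    levels (index 0 = highest priority), all values in slot units.
    If it holds, a high-priority station always drains its queue before
    any lower-priority station may start transmitting."""
    for j in range(len(difs) - 1):
        if difs[j] + cw_max[j] > difs[j + 1]:
            return False
    return True
```

For example, DIFS values of 2 and 10 slots with a high-priority CWmax of 7 slots satisfy the condition, while DIFS values of 2 and 5 do not.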
The back-off-based scheme consists of assigning different intervals (minimum and maximum) for the contention window of each station, or of determining how the contention window evolves with station/flow priority, number of retransmission retries, and other factors. In , the contention window intervals are calculated according to the priority established for each station. Aad et al  also present a mechanism that assigns different priorities to different destinations, i.e., per-flow differentiation. In , the authors propose a scheme where the priority of the next frame to be sent is included in the RTS and CTS control frames, the data frame, and the corresponding ACK. Since all stations in the same coverage area hear this information, they can maintain a table of the current head-of-line frames of all stations contending for the medium. The contention window interval is then calculated by each station according to the position (rank), in terms of priority, of its frame in that table. This scheme does not provide an admission control mechanism, resulting in performance degradation as the traffic load increases.
Bensaou et al  propose a scheme in which each station differentiates its back-off according to an estimate of its own bandwidth share and the shares obtained by the other stations. The main idea is to allow all stations to transmit using the default configuration as long as the total load is smaller than the link capacity. If the link capacity is exceeded, each station should obtain an access share proportional to the sharing index previously established at admission control.
The two schemes described below establish a coarser differentiation. In the technique based on the frame size, stations with higher priority use larger frame sizes in their transmissions. This scheme controls the time a station retains the medium after winning a contention for it.
The technique based on the RTS/CTS threshold consists of the use of medium reservation through the RTS/CTS handshake. Stations with threshold values larger than frame sizes of a certain flow will not use RTS/CTS. These frames will have higher collision probability and consequently a lower priority.
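The threshold-based technique reduces to a single comparison per frame. A minimal sketch (function name mine) of the decision the MAC makes:

```python
def use_rts_cts(frame_size, rts_threshold):
    """The RTS/CTS handshake is performed only for frames at least as large
    as the configured RTS threshold; smaller frames use the basic access
    scheme and therefore face a higher collision probability."""
    return frame_size >= rts_threshold
```

Setting a station's threshold above the typical frame size of a flow thus implicitly demotes that flow, since its frames forgo the medium reservation.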
3.12 Admission Control
Admission control aims to provide a path, from source to destination, containing enough free resources to carry a flow, without interfering with nearby ongoing traffic.
Since we are assuming a shared medium, the routing protocol (AODV) must be able to access bandwidth related information of every node on the path, as well as their first hop neighbours.
Admission control schemes can be broadly classified into measurement-based and calculation-based methods. In measurement-based schemes, admission decisions are made based on measurements of the existing network status, such as throughput and delay. Calculation-based schemes, on the other hand, construct performance metrics or criteria for evaluating the status of the network.
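A measurement-based decision can be illustrated with a one-line rule. The following sketch is a generic example under assumed inputs (the safety margin of 0.9 is my own illustrative choice, not from the text): admit a flow only if the measured load plus the requested bandwidth stays below a fraction of the channel capacity.

```python
def admit_flow(requested_bw, measured_load, channel_capacity, margin=0.9):
    """Measurement-based admission: admit only if measured utilisation plus
    the new flow stays within a safety fraction of the channel capacity,
    leaving headroom for contention overhead."""
    return measured_load + requested_bw <= margin * channel_capacity
```

Calculation-based schemes would instead derive `measured_load` analytically from a model of the contending flows rather than from observation.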
Service differentiation is helpful in providing better QoS for multimedia traffic under low to medium traffic load. However, due to the inefficiency of the IEEE 802.11 MAC, service differentiation does not perform well under high traffic load, as stated in the problem statement. Service differentiation mechanisms alone cannot give higher-priority traffic any assurance of throughput or delay performance. Admission control is therefore an important tool for maintaining the QoS experienced by users. Our admission control algorithm takes into account the problem of determining the interference caused by transmission between two nodes. By predicting the achievable throughput of data flows and avoiding channel overloading, the QoS of existing flows can be maintained.
Wireless networks generally have limited resources in terms of both device capabilities and available network bandwidth. Consequently, it is beneficial to have call admission to prevent unprovisioned traffic from being injected into the network beyond the saturation point. If a flow has rigid QoS requirements, an admission mechanism will prevent the waste of resources of both the source node itself and the whole network, if the network cannot support the flow.
Wireless communication channels are shared by all nodes within transmission range; consequently, all nodes within a transmission area contend for the limited channel bandwidth. In a multi-hop scenario, an admitted flow at a source node does not only consume the source's bandwidth, but the bandwidth of all the neighboring nodes along the data propagation path, thereby affecting ongoing flows of other nodes. Hence, it is essential to perform admission control along the entire path.
Efficient connection admission control and reservation are essential in providing QoS. Papers such as  do address admission control, but a simple and efficient built-in admission control and reservation mechanism has not so far been provided for ad-hoc networks. Performing admission control and flow reservation in a distributed manner, without burdening any single node with the task, is the real challenge in ad-hoc networks.
3.15 Flow Reservation
The resource reservation arranges for the allocation of suitable end-system and network resources to satisfy the user QoS specification. In doing so, the resource reservation interacts with the QoS routing protocol to establish a path through the network in the first instance, then, based on admission control at each node, end-to-end resources are allocated.
For in-band signaling protocols for MANET such as INSIGNIA  , the reservation control message is integrated together with the data packet. In our approach we are proposing using HELLO messages which are extended to include bandwidth field, which carries bandwidth information from neighbours. Reservation Request and Reply messages are integrated in AODV as described in : the bandwidth reservation is included in a Route Request (RREQ) message as an extension object. The RREQ QoS extensions include a session-ID to identify the flows together with the Source and Destination addresses.
Upon receiving a RREQ, intermediate nodes apply the admission control algorithm. If the reservation is accepted, the RREQ is forwarded; otherwise it is discarded. The reservation itself, however, is only made when the RREP is received. Unlike standard AODV, an intermediate node that has a route to the destination should not answer the sender with a route reply, since the intermediate node does not know whether the further nodes can accomplish the bandwidth reservation. To avoid this situation, the D flag of the RREQ is set, indicating that only the destination may send a RREP.
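The intermediate-node behaviour just described can be sketched as a small dispatch function. Field names here are illustrative, not the actual AODV wire format or the thesis's implementation:

```python
def handle_rreq(node, rreq):
    """Sketch of intermediate-node handling of a bandwidth-extended RREQ."""
    # Admission control: discard the RREQ if this hop cannot carry the flow.
    if node["available_bw"] < rreq["requested_bw"]:
        return "discard"
    # With the D flag set, an intermediate node never replies from its route
    # cache, because it cannot vouch for bandwidth on the remaining hops.
    if node["has_cached_route"] and not rreq["d_flag"]:
        return "reply"
    return "forward"
```

With the D flag set, only the destination generates the RREP, and reservations are then committed hop by hop as the RREP travels back.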
Figure 3.7 Reservation Procedure
RSVP (Resource Reservation Protocol)  is a signaling mechanism that carries the QoS parameters from the sender to the receiver in order to make resource reservations along the path. The mechanism works as follows:
The sender of an application sends PATH messages containing the traffic specifications to the receiver(s) of the application that will use this reservation.
The receiver receives this PATH message and sends a RESV message back to the sender, specifying the flow it wants to receive.
As the RESV message flows back to the sender, reservations are made at every node along the way. If at any point along the path the request cannot be supported, that request is blocked.
At every router/host along the way, path and reservation states are maintained for every application session. Periodically sent PATH and RESV messages refresh the path and reservation states.
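The RESV phase of the steps above can be sketched as a hop-by-hop reservation with rollback. This is a simplified illustration of the idea, not RSVP's actual state machine (`path` is a hypothetical map from node name to free bandwidth, in hop order from sender to receiver):

```python
def reserve_along_path(path, requested_bw):
    """RSVP-style reservation: the RESV message travels back from receiver
    to sender, reserving at each node; any single node without capacity
    blocks the whole request and partial reservations are rolled back."""
    reserved = []
    for node in reversed(list(path)):          # RESV flows receiver -> sender
        if path[node] < requested_bw:
            for n in reserved:                 # undo partial reservations
                path[n] += requested_bw
            return False
        path[node] -= requested_bw
        reserved.append(node)
    return True
```

Real RSVP additionally keeps the resulting state soft, so the periodic PATH/RESV refreshes mentioned above are what keep a reservation alive.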
RSVP is designed to provide integrated services for packet-switched networks such as IEEE 802.3. However, because of the scarcity of bandwidth and the high link error rates in wireless networks, directly applying RSVP may lead to high overhead and unstable performance.
A QoS oriented Bandwidth Estimation technique
The throughputs reached today by mobile ad-hoc networks based on the IEEE 802.11 standard enable the execution of complex applications (video conferencing, distribution of multimedia flows). However, these applications consume significant amounts of resources and can suffer from inefficient and unfair use of the wireless channel. Therefore, new QoS solutions need to be developed that take into account the dynamic nature of ad-hoc networks. Since these networks must deal with the limited radio range and the mobility of their nodes, we believe that the best way to offer QoS is to integrate it into the routing protocols. These protocols then have to take into consideration the QoS required by the applications, such as bandwidth constraints, in order to select adequate routes.
Considering the requirement for a robust system architecture to enhance QoS in MANETs, this research defines several aims: to develop a simulation framework for bandwidth estimation and end-to-end delay measurement, and then to enhance the QoS of the ad-hoc network. In this work the author proposes a novel protocol architecture for bandwidth estimation and end-to-end delay calculation.
The general objective of this research work is to show that it is feasible to obtain some bandwidth related end-to-end measures in a MANET path, and to infer from them both the Bandwidth (BW) and the Available Bandwidth (ABW) of the path in such a way that the measurement procedure is efficient in terms of network resource utilization (so it does not affect the network performance), the inference procedure has a small computational cost, and the resulting estimates are highly accurate.
This chapter is dedicated to the research development and system architecture implemented to accomplish the goal of this work. As presented in the problem definition of this thesis, the author initially implemented a system architecture to transmit streaming audio over the IEEE 802.11 protocol. Then, realizing the need for an effective bandwidth estimation technique to enhance the QoS of the network, he set out to develop a robust technique that can effectively improve system throughput and optimize the mobile ad-hoc network.
In the following sections of this chapter, the author's research is presented so that readers can gain an in-depth understanding of the research implementation for enhancing network parameters and resource estimation for QoS.
4.1 QoS investigation for Voice communication in IEEE 802.11b
It seems the days of wired networks are numbered. Wireless LANs are rapidly gaining in popularity: the ease of deployment, low cost, high capacity, and mobility offered by a wireless infrastructure make them a highly attractive alternative to the old wired network. Voice is already being touted as a potential application for these networks, primarily for lower-priced calls. The question that arises is: are wireless networks well suited for voice communication?
While developing a complete and functional system architecture for QoS optimization, the author here seeks an answer to the question asked above: can the network provide adequate QoS? The wireless networks under study in this work are IEEE 802.11b wireless LANs. They provide transmission rates of up to 11 Mbps in the 2.4 GHz band using direct sequence spread spectrum (DSSS) techniques. This specific frequency band was selected because it is unlicensed, meaning that nobody has to pay to use it. Even though this is a major advantage, there is a clear trade-off: 802.11b networks have to share the frequency with other technologies, such as microwave ovens, cordless phones and Bluetooth devices. Several studies have shown that the dominant drawback of the IEEE 802.11b protocol is its limited QoS support. Nevertheless, these drawbacks have not stopped the deployment of this technology in business, institutional and home wireless networks.
The main work in this phase is to investigate whether 802.11b networks can be used for voice communication despite the lack of QoS support. At the same time, the author studies how the inherent characteristics of the 802.11b design affect voice quality parameters. For this purpose, measurement scenarios were developed under different network set-ups, load conditions and environmental situations.
In this research phase the author further focused on the performance of 802.11b at the Medium Access Control (MAC) layer and its consequences for higher layers. While 802.11b networks have proved their appropriateness for best effort traffic, i.e. e-mail, browsing, chat or file transfer, their lack of QoS support makes it questionable whether the use of real-time multimedia applications, such as voice communication, is possible using these wireless networks.
In order to establish a successful voice session, three critical network parameters must be kept under certain levels: loss, jitter and delay. A loss can be defined, in the scope of multimedia applications, as a packet that never reaches its destination, or a packet that arrives too late to be used to play out the multimedia content. Most real-time multimedia applications are loss-tolerant, i.e. they can still provide good perceived quality to the user even if a few packets are not delivered to the application. Jitter is defined as the variation of inter-packet arrival times compared to the inter-packet times of the original transmission. A buffer is commonly used to absorb this variation, at the cost of some delay. Jitter's effect on voice communication is undesirable because it can lead either to additional packet losses or to additional delay. Finally, delay is the time it takes the voice to travel from the sender to the receiver. This parameter is important in voice communication because one-way delays over 150 ms lessen interactivity, meaning that neither participant knows when the other has started or finished.
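The jitter definition above can be computed directly from packet timestamps. The sketch below averages the absolute difference between receive-side gaps and the original send-side gaps; this is a simplified metric in the spirit of (but not identical to) the RTP interarrival-jitter estimator, and the function name is mine.

```python
def interarrival_jitter(send_times, recv_times):
    """Mean variation of inter-packet arrival spacing relative to the
    original inter-packet send spacing (times in any consistent unit)."""
    deltas = []
    for i in range(1, len(send_times)):
        sent_gap = send_times[i] - send_times[i - 1]
        recv_gap = recv_times[i] - recv_times[i - 1]
        deltas.append(abs(recv_gap - sent_gap))
    return sum(deltas) / len(deltas) if deltas else 0.0
```

A constant network delay yields zero jitter, however large that delay is; only the variation in spacing matters for the playout buffer.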
In this research phase the author uses the terms wireless LANs, wireless networks and WLANs interchangeably to refer to the generic 802.11b protocol. Since packet losses, delay and jitter are of such importance for good perceived quality in voice communication, it is critical to know what causes the deterioration of these parameters in a wireless network, and how they are affected by the 802.11b design. While in wired networks the deterioration is generally caused by congestion, in wireless networks it can be caused by degradation of the signal (due to fading or interference), by congestion, or by both. The 802.11b design provides reliability when environmental conditions are poor, through mechanisms such as the ACK and RTS/CTS schemes detailed in the following paragraphs. However, the protocol design at the MAC layer does not provide any explicit QoS support. The IEEE 802.11 standard was created to provide wireless connectivity between different devices. It defines the physical (PHY) and medium access control (MAC) layers, corresponding to the first and second layers of the Open Systems Interconnection (OSI) model. The physical layer specifies communication parameters such as the frequencies to use, the channel bandwidth, modulation schemes, transmission rates and frame formats. The MAC layer's main function is to coordinate how the stations gain access to the medium, in order to avoid collisions and to give a reasonably fair distribution of the medium to all stations. There are two modes of medium access, called the distributed coordination function (DCF) and the point coordination function (PCF); the latter is not supported by many 802.11 devices, and the author does not consider it in this phase.
DCF mode is based on the carrier sense multiple access with collision avoidance (CSMA/CA) protocol. In a typical wireless LAN, several stations try to access the medium, and if two of them transmit at the same time there is a collision and neither succeeds in transmitting its packet. Unlike with the CSMA/CD (collision detection) protocol used in Ethernet networks, stations cannot detect a collision because they are usually not full duplex, i.e. they cannot listen to the medium while transmitting. For this reason the protocol has to provide other means to detect (or avoid) such collisions. This is achieved through a positive acknowledgement (ACK) scheme: when a station has properly received a packet, it sends an ACK frame back to the sender, so that the sender knows about the successful delivery. If for some reason the ACK frame does not arrive, the sender assumes that the packet was not delivered and retransmits it later. This ACK scheme provides a certain reliability at the MAC layer, although it does not guarantee that a packet arrives at its destination; ensuring that is the responsibility of other layers. The drawback of this mechanism is the overhead it adds to the communication, but it is necessary since link conditions are highly variable in wireless networks. In order to resolve contention between several stations in an optimal and fair way, the 802.11 standard defines an exponential back-off algorithm that works as follows: every time a station attempts to transmit, it waits for a random number of time slots (between 0 and a contention-window limit) before transmitting. If the medium is sensed free, the station transmits; if not, it increases the contention-window limit and waits for another random number of slots, repeating the procedure until the packet is finally transmitted, or discarded once the number of attempts reaches a predefined limit.
This mechanism increases the probability of a successful transmission when the network is somewhat loaded. IEEE 802.11b defines two modes of building wireless LANs: 'infrastructure' and 'ad-hoc'. The former relies on an access point (AP), the node to which all the mobile stations within the same cell are connected. In this mode, all traffic passes through the AP regardless of the destination. The size of a wireless LAN can be extended by interconnecting several APs (via wire or wirelessly), thus permitting stations to roam between adjacent cells. It is worth noting that only three channels can be used simultaneously without interference, so network designers have to be especially careful not to overlap frequencies used in areas that are close to one another. In the so-called ad-hoc mode, the network consists of end stations only.
In this research phase the author has carried out an analytical study of DCF's performance. The author evaluates both the basic and RTS/CTS access mechanisms and concludes that the latter should be used in almost all practical cases, due to its superior capabilities in coping with the 'hidden node' problem and its performance in large network scenarios.
This stage focuses on performance issues of real-time voice measurements over IEEE 802.11b wireless networks. The goal of these measurements is to assess whether there are significant issues with 802.11b wireless networks regarding real-time voice communication. In particular, the author investigates the behaviour and performance of the network at the MAC layer, in conjunction with the application layer, when a packet loss is reported. Similarly, he looks at how the 802.11b MAC protocol affects the jitter and delay of real-time voice. With this information the author intends to obtain a better understanding of wireless networks' suitability for voice communication. The data gathered as part of these investigations will be made available for further study, and the investigation of these issues will benefit future research on voice measurements. The analytical study comprises a comparison of the suitability of the DCF and PCF protocols for audio transmission, for which the author utilized a simulation framework. In the same research phase the author also studied a methodology for capacity estimation of VoIP channels in wireless networks. The results obtained from this work highlight the need for an effective QoS solution in wireless networks. Considering these results and their analysis, the author decided to pursue QoS enhancement in wireless ad-hoc networks, so that an optimized and effective solution for resource estimation can be obtained.
The research carried out for resource estimation is presented in the following sections.
Wireless Measurement Scheme for Bandwidth Estimation in Multi-hop Wireless Ad-hoc Networks:
This section discusses the techniques and approach used to estimate bandwidth in ad-hoc networks, and covers the relevant research work related to bandwidth estimation and end-to-end delay estimation.
The section is arranged in two parts: the first part discusses the bandwidth estimation mechanism, while the second presents the end-to-end delay estimation technique, here called EDEAN.
In physical layer communications, the term bandwidth relates to the spectral width of electromagnetic signals or to the propagation characteristics of communication systems. In the context of data networks, the term bandwidth quantifies the data rate that a network link or a network path can transfer. Here we focus on estimation of bandwidth metrics in this latter data network context. The concept of bandwidth is central to digital communications, and specifically to packet networks, as it relates to the amount of data that a link or network path can deliver per unit of time. For many data intensive applications, such as file transfers or multimedia streaming, the bandwidth available to the application directly impacts application performance. Even interactive applications, which are usually more sensitive to lower latency than to higher throughput, can benefit from the lower end-to-end delays associated with high bandwidth links and low packet transmission latencies.
Bandwidth is also a key factor in several network technologies, and several applications can benefit from knowing the bandwidth characteristics of their network paths. For example, peer-to-peer applications form their dynamic user-level networks based on the available bandwidth between peers. Overlay networks can configure their routing tables based on the bandwidth of overlay links. Network providers lease links to customers and usually charge based on the bandwidth purchased. Service-Level-Agreements (SLAs) between providers and customers often define service in terms of available bandwidth at key interconnection (network boundary) points. Carriers plan capacity upgrades in their network based on the rate of growth of bandwidth utilization of their users. Bandwidth is also a key concept in content distribution networks, intelligent routing systems, end-to-end admission control, and video/audio streaming. Note that the term bandwidth is often imprecisely applied to a variety of throughput-related concepts.
In a distributed ad hoc network, a host's available bandwidth is not only decided by the raw channel bandwidth, but also by its neighbors' bandwidth usage and by interference caused by other sources, each of which reduces the host's available bandwidth for transmitting data. Therefore, applications cannot properly optimize their coding rate without knowledge of the status of the entire network. Bandwidth is a fundamental resource. When flows are routed through the network, estimating the remaining bandwidth is often required before performing admission control, flow management, congestion control or routing based on bandwidth constraints. However, bandwidth estimation is extremely difficult, because each host has imprecise knowledge of the network status and links change dynamically. Therefore, an effective bandwidth estimation scheme is highly desirable. Bandwidth estimation can be done using various methods: for example, some schemes use a cross-layer design of the routing and MAC layers, while in others the available bandwidth is estimated in the MAC layer and sent to the routing layer for admission control. Bandwidth estimation can therefore be performed in several different network layers. To determine whether there is enough bandwidth available for a new flow, all we need to know is the available link capacity and the bandwidth to be consumed by the requesting flow. In wired networks this is a trivial task, since the underlying medium is a dedicated point-to-point link with fixed capacity. However, in wireless networks the radio channel of each node is shared with all its neighbours. Because of the shared medium, a node can successfully use the channel only when none of its neighbours is transmitting or receiving packets at the same time. We call this the aggregation effect.
The need to support real-time and multimedia applications for users of Mobile Ad-hoc Networks (MANETs) is becoming essential. A mobile ad-hoc network is a kind of decentralized network that can provide multimedia users with the mobility they demand, provided efficient QoS multicast strategies are developed. An efficient bandwidth estimation technique plays a vital role in ensuring QoS in ad-hoc networks. The presented research for QoS optimization introduces a novel technique for estimating bandwidth in ad-hoc networks, which are decentralized in nature. Unlike in a centralized structure, bandwidth estimation in ad-hoc networks is difficult, and this ultimately influences the QoS of network communication. The presented admission control and dynamic bandwidth management scheme provides fairness and rate guarantees in the absence of distributed link layer fair scheduling. The scheme is especially suited to smart-rooms where peer-to-peer multimedia transmissions need to adapt their transmission rates co-operatively. The presented research work exhibits better accuracy and effectiveness in bandwidth estimation and management in mobile ad-hoc networks.
Available bandwidth estimation is a vital component of admission control for quality-of-service (QoS) in both wireline and wireless networks. In wireless networks, the available bandwidth undergoes fast time-scale variations due to channel fading and errors caused by physical obstacles. These effects are not present in wireline networks, and they make estimation of available bandwidth in wireless networks a challenging task. Furthermore, the wireless channel is a shared-access medium, so the available bandwidth also varies with the number of hosts contending for the channel.
Wireless last-hop networks employing the IEEE 802.11 protocol in Distributed Co-ordination Function (DCF) mode are becoming increasingly popular. In DCF mode, the 802.11 protocol does not require any centralized entity to co-ordinate users' transmissions. The MAC layer uses a CSMA/CA algorithm for shared use of the medium.
The presented research work provides an available bandwidth estimation scheme for IEEE 802.11-based wireless ad-hoc networks; the work has been designed especially for decentralized (ad-hoc) networks. The scheme does not modify the CSMA/CA MAC protocol in any manner, but instead gauges the effect on the available bandwidth of phenomena such as medium contention, channel fading and interference. Based on the effect of these phenomena on the operation of the medium-access scheme, we estimate the available bandwidth of a wireless host to each of its neighbors.
4.2.2 Bandwidth-Related Metrics
In this section we introduce three bandwidth metrics: capacity, available bandwidth, and Bulk-Transfer-Capacity (BTC). The first two are defined both for individual links and for end-to-end paths, while the BTC is usually defined only for an end-to-end path. In the following discussion we distinguish between links at the data link layer ("layer-2") and links at the IP layer ("layer-3"). We call the former segments and the latter hops. A segment normally corresponds to a physical point-to-point link, a virtual circuit, or a shared access local area network (e.g., an Ethernet collision domain, or an FDDI ring). In contrast, a hop may consist of a sequence of one or more segments, connected through switches, bridges, or other layer-2 devices. We define an end-to-end path from an IP host (source) to another host (sink) as the sequence of hops that connects the source to the sink.
A. Capacity
A layer-2 link, or segment, can normally transfer data at a constant bit rate, which is the transmission rate of the segment. For instance, this rate is 10Mbps on a 10BaseT Ethernet segment, and 1.544Mbps on a T1 segment. The transmission rate of a segment is limited both by the physical bandwidth of the underlying propagation medium and by its electronic or optical transmitter/receiver hardware. At the IP layer a hop delivers a lower rate than its nominal transmission rate, due to the overhead of layer-2 encapsulation and framing. Specifically, suppose that the nominal capacity of a segment is CL2. The transmission time ΔL3 for an IP packet of size LL3 bytes is

ΔL3 = (LL3 + HL2) / CL2    (1)
Here, HL2 is the total layer-2 overhead (in bytes) needed to encapsulate the IP packet. So the capacity CL3 of that segment at the IP layer is:

CL3 = LL3 / ΔL3 = CL2 × LL3 / (LL3 + HL2)    (2)
Note that the IP layer capacity depends on the size of the IP packet relative to the layer-2 overhead. For the 10BaseT Ethernet, CL2 is 10Mbps, and HL2 is 38 bytes (18 bytes for the Ethernet header, 8 bytes for the frame preamble, and the equivalent of 12 bytes for the inter frame gap). So the capacity that the hop can deliver to the IP layer is 7.24Mbps for 100-byte packets, and 9.75Mbps for 1500-byte packets.
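The arithmetic above can be reproduced with a short sketch (illustrative Python, using only the 10BaseT figures quoted in the text):

```python
def ip_layer_capacity(c_l2_mbps, ip_packet_bytes, l2_overhead_bytes=38):
    """Capacity seen at the IP layer (Mbps) for a segment with nominal
    rate c_l2_mbps, accounting for the layer-2 encapsulation overhead."""
    return c_l2_mbps * ip_packet_bytes / (ip_packet_bytes + l2_overhead_bytes)

# 10BaseT Ethernet: 10 Mbps nominal, 38 bytes of framing overhead
print(round(ip_layer_capacity(10, 100), 2))   # 100-byte IP packets
print(round(ip_layer_capacity(10, 1500), 2))  # 1500-byte IP packets
```

Note how the IP-layer capacity approaches the nominal 10Mbps as the packet size grows and the fixed framing overhead is amortized.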
The capacity Ci of a hop i is defined to be the maximum possible IP layer transfer rate at that hop. From equation (2), the maximum transfer rate at the IP layer results from MTU-sized packets. So we define the capacity of a hop as the bit rate, measured at the IP layer, at which the hop can transfer MTU-sized IP packets. Extending the previous definition to a network path, the capacity of an end-to-end path is the maximum IP layer rate that the path can transfer from source to sink. In other words, the capacity of a path establishes an upper bound on the IP layer throughput that a user can expect to get from that path. The minimum link capacity in the path determines the end-to-end capacity C, i.e.

C = min(i=1..H) Ci    (3)
Here Ci is the capacity of the i-th hop, and H is the number of hops in the path. The hop with the minimum capacity is the narrow link of the path. Some paths include traffic shapers or rate limiters, which complicate the definition of capacity. Specifically, a traffic shaper at a link can transfer at a "peak" rate P for a certain burst length B, and at a lower "sustained" rate S for longer bursts. Since the capacity is viewed as an upper bound on the rate that a path can transfer, it is natural to define the capacity of such a link based on the peak rate P rather than the sustained rate S. On the other hand, a rate limiter may deliver only a fraction of its underlying segment capacity to an IP layer hop. For example, ISPs often use rate limiters to share the capacity of an OC-3 link among different customers, charging each customer based on the magnitude of their bandwidth share. In that case the capacity of the hop is defined to be the IP layer rate limit of that hop. Finally, note that some layer-2 technologies do not operate at a constant transmission rate. For instance, IEEE 802.11b wireless LANs transmit their frames at 11, 5.5, 2, or 1Mbps, depending on the bit error rate of the wireless medium. The previous definition of link capacity can still be used for such technologies during time intervals in which the capacity remains constant.
B. Available bandwidth
Another important metric is the available bandwidth of a link or end-to-end path. The available bandwidth of a link relates to the unused, or "spare", capacity of the link during a certain time period. So even though the capacity of a link depends on the underlying transmission technology and propagation medium, the available bandwidth of a link additionally depends on the traffic load at that link, and is typically a time-varying metric.
At any specific instant in time, a link is either transmitting a packet at the full link capacity or it is idle, so the instantaneous utilization of a link can only be either 0 or 1. Thus any meaningful definition of available bandwidth requires averaging the instantaneous utilization over the time interval of interest. The average utilization u(0,T) for a time period (0,T) is given by

u(0,T) = (1/T) ∫₀ᵀ u(t) dt    (4)
Here u(t) is the instantaneous utilization of the link at time t. We refer to the time length T as the averaging timescale of the available bandwidth. Figure 4.1 illustrates this averaging effect.
Fig 4.1: Instantaneous utilization for a link during a time period (0, T).
In this example the link is used during 8 out of 20 time intervals between 0 and T, yielding an average utilization of 40%. Let us now define the available bandwidth of a hop i over a certain time interval. If Ci is the capacity of hop i and ui is the average utilization of that hop in the given time interval, the average available bandwidth Ai of hop i is given by the unutilized fraction of capacity:

Ai = (1 − ui) Ci    (5)
Extending the previous definition to an H-hop path, the available bandwidth of the end-to-end path is the minimum available bandwidth of all H hops:

A = min(i=1..H) Ai    (6)
The hop with the minimum available bandwidth is called the tight link of the end-to-end path. Figure 4.2 shows a "pipe model with fluid traffic" representation of a network path, where each link is represented by a pipe. The width of each pipe corresponds to the relative capacity of the corresponding link. The shaded area of each pipe shows the utilized part of that link's capacity, while the unshaded area shows the spare capacity. The minimum link capacity C1 in this example determines the end-to-end capacity, while the minimum available bandwidth A3 determines the end-to-end available bandwidth. As shown in Figure 4.2, the narrow link of a path may not be the same as the tight link.
Fig. 4.2: Pipe model with fluid traffic for 3-hop network path.
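The pipe model lends itself to a direct computation. The sketch below (illustrative Python, with made-up capacities and utilizations for a 3-hop path) computes the end-to-end capacity and available bandwidth and locates the narrow and tight links, showing that the two need not coincide:

```python
# Hypothetical 3-hop path: (capacity Ci in Mbps, average utilization ui)
hops = [(10, 0.2), (100, 0.3), (50, 0.9)]

capacities = [c for c, _ in hops]
available  = [c * (1 - u) for c, u in hops]  # Ai = (1 - ui) Ci

C = min(capacities)           # end-to-end capacity, set by the narrow link
A = min(available)            # end-to-end available bandwidth, set by the tight link
narrow = capacities.index(C)
tight  = available.index(A)

print(C, round(A, 2))   # capacity 10 Mbps, about 5 Mbps available
print(narrow, tight)    # narrow link is hop 0, tight link is hop 2
```

Here the first hop bounds the path capacity, but the heavily loaded third hop bounds the available bandwidth, exactly the distinction the pipe model illustrates.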
Several methodologies for measuring available bandwidth make the assumption that the link utilization remains constant when averaged over time, i.e., they assume a stationary traffic load on the network path. While this assumption is reasonable over relatively short time intervals, diurnal load variations will impact measurements made over longer time intervals.
Also note that constant average utilization (stationary) does not preclude traffic variability (burstiness) or long-range dependency effects. Since the average available bandwidth can change over time it is important to measure it quickly. This is especially true for applications that use available bandwidth measurements to adapt their transmission rate. In contrast, the capacity of a path typically remains constant for long time intervals, e.g., until routing changes or link upgrades occur. Therefore the capacity of a path does not need to be measured as quickly as the available bandwidth.
C. TCP Throughput & Bulk transfer capacity
Another key bandwidth-related metric in TCP/IP networks is the throughput of a TCP connection. TCP is the major transport protocol in the Internet, carrying almost 90% of the traffic. A TCP throughput metric would thus be of great interest to end users. Unfortunately it is not easy to define the expected throughput of a TCP connection. Several factors may influence TCP throughput, including transfer size, type of cross traffic (UDP or TCP), number of competing TCP connections, TCP socket buffer sizes at both sender and receiver sides, congestion along the reverse (ACK) path, as well as the size of router buffers and the capacity and load of each link in the network path. Variations in the specification and implementation of TCP, such as New Reno, Reno, or Tahoe, the use of SACKs versus cumulative ACKs, the selection of the initial window size, and several other parameters also affect the TCP throughput. For instance, the throughput of a small transfer such as a typical Web page primarily depends on the initial congestion window, Round-Trip Time (RTT), and slow-start mechanism of TCP, rather than on the available bandwidth of the path. Furthermore, the throughput of a large TCP transfer over a certain network path can vary significantly when using different versions of TCP even if the available bandwidth is the same.
The Bulk-Transfer-Capacity (BTC) defines a metric that represents the achievable throughput of a TCP connection. BTC is the maximum throughput obtainable by a single TCP connection. The connection must implement all TCP congestion control algorithms as specified in RFC 2581. However, RFC 2581 leaves some implementation details open, so a BTC measurement should also specify in detail several other important parameters about the exact implementation (or emulation) of TCP at the end hosts.
Note that the BTC and available bandwidth are fundamentally different metrics. BTC is TCP-specific whereas the available bandwidth metric does not depend on a specific transport protocol. The BTC depends on how TCP shares bandwidth with other TCP flows, while the available bandwidth metric assumes that the average traffic load remains constant and estimates the additional bandwidth that a path can offer before its tight link is saturated. To illustrate this point suppose that a single-link path with capacity C is saturated by a single TCP connection. The available bandwidth in this path would be zero due to path saturation, but the BTC would be about C/2 if the BTC connection has the same RTT as the previous TCP connection.
In the developed research work the author has implemented a bandwidth estimation scheme as an essential component in the construction of:
(a) A dynamic bandwidth management scheme for single-hop mobile ad hoc networks, and
(b) An explicit rate-based flow control scheme for multi-hop mobile ad hoc networks.
4.2.3 Available Bandwidth Estimation:
Figure 4.3 shows the stages in the transmission of a single packet using the IEEE 802.11 DCF MAC protocol. Details of the individual messages and inter-frame gaps are specified by the IEEE 802.11 standard. We measure the throughput of transmitting a packet as TP = S / (Tr − Ts), where S is the size of the packet, Tr is the time the ACK is received, and Ts is the time the packet is ready at the MAC layer. The interval Tr − Ts includes the channel busy and contention time. We keep separate throughput estimates for different neighbors because the channel conditions may be very different for each one.
Fig. 4.3: IEEE 802.11 Unicast packet transmission sequence
A neighbor of a wireless host is defined as any other wireless host within its transmission range. This link layer measurement mechanism captures the effect of contention on available bandwidth: if contention is high, Tr − Ts will increase and the throughput TP will decrease. The mechanism also captures the effect of fading and interference errors, because if these errors affect the RTS or DATA packets, they have to be re-transmitted. This increases Tr − Ts and correspondingly decreases the available bandwidth. Our available bandwidth measurement mechanism thus takes into account the phenomena causing it to decrease from the theoretical maximum channel capacity. It should be noted that the available bandwidth is measured using only successful link layer transmissions of an ongoing data flow.
It is clear that the measured throughput of a packet depends on its size: a larger packet has a higher measured throughput because it sends more data once it grabs the channel. To make the throughput measurement independent of packet size, we normalize the throughput of a packet to a pre-defined packet size. Note that S/B is the actual time for the channel to transmit the DATA packet, where B is the channel's bit-rate, which we assume to be a pre-defined value. The transmission times of two packets should differ only in their times to transmit the DATA packets. Therefore, we have:

S2/TP2 = S1/TP1 − S1/B + S2/B
Here S1 is the actual data packet size and S2 is a pre-defined standard packet size. From the above equation, the normalized throughput TP2 can be calculated for the standard size packet. In this process CBR traffic is sent from one host to another, with the packet size varied from 64 bytes to 640 bytes during the course of the simulation. The measured raw throughput is normalized against a standard size of 512 bytes. Figure 4.4 shows the measured raw throughput and its corresponding normalized throughput. Obviously, the raw throughput depends on the packet size: a larger packet size leads to a higher measured throughput. The normalized throughput, on the other hand, does not depend on the data packet size. Hence, we use the normalized throughput to represent the bandwidth of a wireless link, filtering out the noise introduced into the measured raw throughput by packets of different sizes. Another important issue is the robustness of the MAC layer bandwidth measurement. The author therefore measures the bandwidth of a link in discrete time intervals, averaging the throughputs of recent packets in the past time window and using this average to estimate the bandwidth in the current time window.
Fig. 4.4: Raw and normalized throughput at MAC layer.
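The normalization step can be sketched in a few lines (illustrative Python; the 2Mbps channel bit-rate is an assumed value, while the 512-byte standard size follows the text):

```python
def normalize_throughput(tp1, s1, s2=512 * 8, bitrate=2e6):
    """Normalize the measured throughput tp1 (bits/s) of an s1-bit packet
    to a standard packet size s2 (bits), assuming channel bit-rate `bitrate`.
    Uses the relation: s2/tp2 = s1/tp1 - s1/bitrate + s2/bitrate."""
    t2 = s1 / tp1 - s1 / bitrate + s2 / bitrate  # time to send the standard packet
    return s2 / t2

# A small 64-byte packet measured at 100 kbps maps to a higher normalized
# throughput, reflecting the same channel state as a larger packet would.
tp_small = normalize_throughput(100e3, 64 * 8)
print(round(tp_small))
```

A useful sanity check of the relation is that normalizing a packet that already has the standard size leaves its throughput unchanged.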
Obviously, this estimate may not be accurate, because the channel condition may have changed. In order to calculate the estimation error, we run a CBR flow over UDP with a data rate of 160 kbps from one node to another in a 10-node, one-hop environment. Background traffic consists of 1 greedy TCP flow in the light channel contention case, and 7 TCP flows in the heavy contention case. Here we use TCP only to generate bursty cross-traffic for the UDP flow. The throughput of the CBR flow is measured every 2 seconds and normalized using the average of packet throughputs in the past time window.
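The past-window averaging used here can be sketched as follows (a hypothetical helper, not the author's implementation; the 2-second window matches the measurement interval mentioned above):

```python
from collections import deque

class WindowedBandwidthEstimator:
    """Average per-packet throughput samples over a sliding time window
    (a sketch of the measurement scheme; the window length is an assumption)."""
    def __init__(self, window_s=2.0):
        self.window_s = window_s
        self.samples = deque()  # (timestamp, throughput) pairs

    def add_sample(self, t, throughput):
        self.samples.append((t, throughput))
        # Drop samples that have fallen out of the past time window
        while self.samples and self.samples[0][0] < t - self.window_s:
            self.samples.popleft()

    def estimate(self):
        if not self.samples:
            return 0.0
        return sum(tp for _, tp in self.samples) / len(self.samples)

est = WindowedBandwidthEstimator(window_s=2.0)
for t, tp in [(0.0, 1.2e6), (0.5, 1.0e6), (1.0, 0.8e6), (2.6, 0.9e6)]:
    est.add_sample(t, tp)
print(est.estimate())  # only samples newer than t = 0.6 remain in the window
```

The estimate produced at the end of one window is then used as the predicted bandwidth for the next interval, which is exactly where the estimation error discussed above arises.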
4.2.4 Channel Time Proportion and Admission Control
The bandwidth estimation mechanism outlined in the previous section is implemented for admission control in single and multi-hop wireless networks. First, the concept of channel time proportion (CTP) is introduced with a simple illustration. Assume that the throughput TP over a particular wireless link is 10 MAC frames of a particular size S per second, based on the level of contention and physical error experienced on this link. Assume that a particular flow requires 3 frames per second over this link between neighbors. It thus needs to be active on the sending host's interface for 30% of unit time, on average. This leaves only 70% of unit time available to other flows out of this interface, which directly affects their admission. This logic also extends to bits per second: if K bits can be transmitted over a wireless link in a second, given a certain level of contention and physical errors, and a user requires a minimum throughput of E bits per second, then in effect the user requires E/K of unit time on the source interface.
The CTP requirement of a flow can thus be obtained by simply dividing its bandwidth requirement in bits per second by the estimated available bandwidth; the CTP requirement is therefore a fraction. Admission control divides up 100% of the channel time on an interface among the different flows, based on their requirements and a certain fairness criterion.
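The admission test can be sketched as follows (illustrative Python; the flow rates and the estimated available bandwidth are made-up numbers):

```python
def ctp(required_bps, available_bps):
    """Channel time proportion a flow needs on its source interface."""
    return required_bps / available_bps

def admit(flows, available_bps, budget=1.0):
    """Greedily admit flows while the summed CTP stays within the budget."""
    used, admitted = 0.0, []
    for name, rate in flows:
        share = ctp(rate, available_bps)
        if used + share <= budget:
            used += share
            admitted.append(name)
    return admitted, used

# Hypothetical flows on a link with 3 Mbps estimated available bandwidth
flows = [("voice", 160e3), ("video", 1.2e6), ("bulk", 2.0e6)]
admitted, used = admit(flows, available_bps=3e6)
print(admitted, round(used, 2))  # "bulk" is rejected: it would push CTP past 100%
```

The key point is that admission is decided in units of channel time, not raw bits per second, so the same flow costs more channel time when contention lowers the estimated available bandwidth.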
A. Dynamic Bandwidth Management in Single-hop Ad hoc Networks
The developed admission control and dynamic bandwidth management scheme provides fairness as well as rate guarantees in the absence of distributed link layer fair scheduling. The scheme is especially suited to smart-rooms where peer-to-peer multimedia transmissions need to adapt their transmission rates co-operatively. The minimum and maximum bandwidth requirements of a flow are mapped to the corresponding CTP requirements. The centerpiece of the scheme, a Bandwidth Manager (BM), allots each flow a share of the channel depending on its requirements relative to those of other flows. The BM uses a max-min fair algorithm with minimum CTP guarantees. Admitted flows co-operatively control their transmission rates so that they occupy the channel only for the fraction of time allotted to them. As the available bandwidth in the network and the traffic characteristics change, the BM dynamically re-allocates the channel access time to each individual flow. Simulations showed that, at a very low cost and with high probability, every flow in the network receives at least its minimum requested share of the network bandwidth. The bandwidth estimation procedure was embedded in the device driver of the Lucent IEEE 802.11b network card.
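The BM's allocation policy can be illustrated with a standard progressive-filling computation over CTP shares (a sketch of max-min fairness with minimum guarantees; the demands and minimums below are hypothetical, and this is not the author's actual code):

```python
def max_min_fair(demands, minimums, capacity=1.0):
    """Max-min fair CTP allocation with per-flow minimum guarantees.
    demands/minimums: dicts mapping flow name -> CTP fraction."""
    alloc = dict(minimums)                    # start from the guaranteed minimums
    remaining = capacity - sum(alloc.values())
    active = {f for f in demands if alloc[f] < demands[f]}
    while active and remaining > 1e-9:
        share = remaining / len(active)       # progressive filling step
        for f in sorted(active):
            grant = min(share, demands[f] - alloc[f])
            alloc[f] += grant
            remaining -= grant
        active = {f for f in active if demands[f] - alloc[f] > 1e-9}
    return alloc

# Hypothetical CTP demands and minimum guarantees for three flows
demands  = {"voice": 0.10, "video": 0.60, "bulk": 0.50}
minimums = {"voice": 0.05, "video": 0.20, "bulk": 0.10}
print(max_min_fair(demands, minimums))
```

In this example the small voice flow is fully satisfied, while the two larger flows split the leftover channel time evenly above their minimums, which is the max-min fair outcome.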