When congestion occurs, queues build up and packets are dropped. Loss due to congestion is controlled by managing the traffic load and then applying appropriate queuing and scheduling techniques; this paper identifies congestion management and congestion avoidance tools. Over-provisioning is not a complete solution: although bandwidth has become cheap and extra capacity allows the network to grow in the future, adding bandwidth does not by itself prevent congestion, and networks with limited bandwidth remain difficult to manage.
Standard internet routers place excess packets in a buffer that works on a first in, first out (FIFO) basis and drop packets if the queue is full, causing packet loss that is unacceptable for voice and video traffic. A network is said to be congested, from the perspective of a user, if the service quality noticed by the user decreases because of an increase in network load.
2. Causes of network congestion
Congestion occurs when demand exceeds capacity. As users come and go, so do the packets they send. Standard internet routers place excess packets in FIFO buffers and only drop packets when a queue is full. Two problems occur during this process. First, storing packets in a queue adds significant delay, which grows with the length of the queue. Second, packet loss can occur no matter how long the maximum queue is. Queues should therefore be kept short: when queues grow, the network is said to be congested, which increases both delay and packet loss.
3. Congestion management tools
Queuing algorithms are used as a method of prioritising traffic. Some common methods are first in, first out (FIFO), priority queuing (PQ), weighted round robin (WRR), weighted fair queuing (WFQ), class based weighted fair queuing (CBWFQ), and low latency queuing (LLQ). These algorithms will be identified by their acronyms from here on in this paper. Queuing algorithms take effect when congestion is experienced: the queuing mechanism is activated at the hardware buffer of the outgoing interface.
FIFO stores packets when the network becomes congested and forwards them in order of arrival. This is the default on most networks. FIFO makes no decision based on packet priority, which can cause delays for time-sensitive traffic such as voice or video. Also, when queues are full, tail drops occur, and routers have no way of preventing this.
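The tail-drop behaviour described above can be sketched in a few lines. This is a minimal simulation, not router firmware; the class and packet names are invented for the example:

```python
from collections import deque

class FifoQueue:
    """FIFO queue with tail drop: packets arriving at a full queue are lost."""

    def __init__(self, max_depth):
        self.max_depth = max_depth
        self.queue = deque()
        self.drops = 0

    def enqueue(self, packet):
        if len(self.queue) >= self.max_depth:
            self.drops += 1   # tail drop: queue is full, packet is discarded
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        # Packets leave strictly in arrival order, regardless of priority
        return self.queue.popleft() if self.queue else None
```

Note that the queue has no notion of packet importance: a voice packet arriving at a full queue is dropped just as readily as a bulk-transfer packet.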
PQ allows important traffic to be allocated the fastest service. Traffic can be prioritised based on a number of criteria, such as network protocol, incoming interface, packet size, access lists, and source and destination address. There are four queues, with priority values of high, medium, normal and low; traffic that is not classified is allocated to the normal queue, and the highest-priority queue always gets preferential treatment.
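The four-queue, strict-priority service described above can be sketched as follows (a simplified illustration; the classification step that maps packets to priorities is assumed to have already happened):

```python
from collections import deque

PRIORITIES = ["high", "medium", "normal", "low"]

class PriorityQueuing:
    """Strict priority queuing: the highest non-empty queue is always served first."""

    def __init__(self):
        self.queues = {p: deque() for p in PRIORITIES}

    def enqueue(self, packet, priority="normal"):
        # Unclassified traffic defaults to the normal queue
        self.queues[priority].append(packet)

    def dequeue(self):
        for p in PRIORITIES:          # scan from high to low
            if self.queues[p]:
                return self.queues[p].popleft()
        return None
```

A known drawback of strict priority is visible in the scan loop: while the high queue stays non-empty, lower queues are never serviced at all.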
WRR shares network bandwidth among applications, dividing it proportionally among classes. Each class is assigned a specific amount of queue space, and up to sixteen queues at a time are serviced in round robin fashion, each queue in turn. Higher-priority traffic may be given a greater weight so that it receives preferential treatment. WRR does not allocate bandwidth accurately, and it is not a common method, as the router can often forward a packet even if the queue is already full.
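One simple way to realise the weighted round-robin service above is to let a queue with weight w send up to w packets per visit in each cycle. A minimal sketch, with illustrative class names:

```python
from collections import deque

class WeightedRoundRobin:
    """Each queue is visited in turn; a queue with weight w may
    send up to w packets per scheduling cycle."""

    def __init__(self, weights):
        self.weights = weights                       # e.g. {"voice": 2, "data": 1}
        self.queues = {name: deque() for name in weights}

    def enqueue(self, name, packet):
        self.queues[name].append(packet)

    def schedule_cycle(self):
        sent = []
        for name, weight in self.weights.items():    # round robin over classes
            for _ in range(weight):
                if self.queues[name]:
                    sent.append(self.queues[name].popleft())
        return sent
```

Counting packets rather than bytes is exactly why WRR allocates bandwidth inaccurately: a class sending large packets gets more bandwidth than its weight suggests.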
WFQ services each queue fairly in terms of byte count, so queues do not starve for bandwidth and all traffic receives a predictable service. WFQ classifies traffic into flows, applies weights to identified traffic, and determines how much bandwidth each flow is allocated. It does not work with tunnelling or encryption, as these methods alter the packet fields that WFQ needs for classification. WFQ has two methods of dropping packets: the congestive discard threshold (CDT), which drops packets once that threshold is reached, and the hold-queue limit, beyond which all new packets are dropped. WFQ is supported on most platforms and guarantees throughput to all flows; traffic does not need to be classified manually, although this can also be seen as a drawback. It cannot provide fixed bandwidth guarantees. It is the default queuing method on serial interfaces at speeds below E1 (2 Mbps), where Cisco routers configure it automatically.
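The byte-count fairness above can be illustrated with a simplified virtual-finish-time scheduler: each packet is stamped with finish = max(virtual_time, flow's last finish) + size / weight, and the packet with the smallest finish time is sent next. This is a sketch of the idea only (real WFQ implementations track virtual time more carefully):

```python
class WfqScheduler:
    """Simplified WFQ: smallest virtual finish time is served first,
    so a flow's share of bytes is proportional to its weight."""

    def __init__(self):
        self.last_finish = {}
        self.pending = []        # (finish_time, seq, flow, size)
        self.seq = 0             # tie-breaker preserving arrival order
        self.virtual_time = 0.0

    def enqueue(self, flow, size, weight):
        start = max(self.virtual_time, self.last_finish.get(flow, 0.0))
        finish = start + size / weight        # heavier weight -> earlier finish
        self.last_finish[flow] = finish
        self.pending.append((finish, self.seq, flow, size))
        self.seq += 1

    def dequeue(self):
        if not self.pending:
            return None
        self.pending.sort()
        finish, _, flow, size = self.pending.pop(0)
        self.virtual_time = finish
        return flow, size
```

Because the finish time divides size by weight, a flow sending large packets is served later, which is how fairness in bytes (rather than packets) is achieved.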
CBWFQ allows a minimum amount of bandwidth to be assigned to each predefined class: the network administrator creates a minimum bandwidth guarantee for each class. Traffic can be classified based on access control lists, protocols and input interfaces. Each class has its own queue and a specific amount of bandwidth assigned to it, which is the minimum it will receive during congestion. A queue limit is set for each class; this is the maximum number of packets allowed in the class queue. Queuing guarantees the minimum bandwidth, but a class can also use more bandwidth when it is available; weights are calculated from the configured bandwidth. CBWFQ uses tail drops within class queues, so voice traffic is still prone to unacceptable delay with this method, as tail drops can occur during congested periods.
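The allocation rule described above (guaranteed minimum per class, with spare bandwidth also available) can be sketched as a two-pass calculation. The function name and figures are invented for the example, and this is a simplification of how a real scheduler apportions link capacity:

```python
def cbwfq_allocate(link_kbps, guarantees, demands):
    """Pass 1: each class gets min(guarantee, demand).
    Pass 2: classes with unmet demand share the leftover bandwidth
    in proportion to their configured guarantees (their weights)."""
    alloc = {c: min(guarantees[c], demands[c]) for c in guarantees}
    leftover = link_kbps - sum(alloc.values())
    hungry = {c: demands[c] - alloc[c] for c in guarantees if demands[c] > alloc[c]}
    if leftover <= 0 or not hungry:
        return alloc
    total_weight = sum(guarantees[c] for c in hungry)
    for c in hungry:
        extra = leftover * guarantees[c] / total_weight
        alloc[c] += min(extra, hungry[c])   # never give more than demanded
    return alloc
```

For example, on a 1000 kbps link where voice is guaranteed 300 kbps but only demands 200, a data class guaranteed 200 kbps can absorb the unused capacity.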
LLQ (low latency queuing) combines the behaviour of PQ and CBWFQ. The algorithm has a priority queue which always has precedence over the other queues; this class is assigned its required bandwidth before any other queue is serviced. It is designed for networks carrying voice or video and contains two parts: a strict priority queue for real-time traffic and CBWFQ queues for data applications. During periods of congestion the priority queue is metered to make sure that its allocated bandwidth is not exceeded; when the network is not congested, priority traffic is allowed to exceed its allocated amount.
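The metering behaviour is the key point: the priority queue is only policed while the link is congested. A minimal per-cycle sketch (the budget here counts packets per cycle as a stand-in for a bandwidth meter, and a single FIFO stands in for the CBWFQ data queues):

```python
from collections import deque

class Llq:
    """LLQ sketch: a strict-priority queue for real-time traffic,
    policed to priority_budget packets per cycle only when congested."""

    def __init__(self, priority_budget, cycle_capacity):
        self.priority_budget = priority_budget
        self.cycle_capacity = cycle_capacity
        self.rt = deque()      # real-time (priority) queue
        self.data = deque()    # stand-in for the CBWFQ data queues

    def schedule_cycle(self, congested):
        sent = []
        # Meter the priority queue only during congestion
        budget = self.priority_budget if congested else self.cycle_capacity
        while self.rt and len(sent) < budget and len(sent) < self.cycle_capacity:
            sent.append(self.rt.popleft())
        while self.data and len(sent) < self.cycle_capacity:
            sent.append(self.data.popleft())
        return sent
```

The policing prevents a misbehaving real-time class from starving the data classes, while still giving it absolute priority up to its allocation.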
4. Congestion avoidance tools
Congestion avoidance is a form of queue management. It monitors network traffic loads in an effort to avoid congestion at common network bottlenecks, whereas congestion management techniques operate to control congestion after it occurs. Random early detection (RED), weighted random early detection (WRED) and tail drops are common congestion avoidance tools. Router interfaces experience congestion when the output queue is full, and dropping packets can cause performance problems, as the dropped packets may be real-time traffic such as voice.
RED – RED is a dropping mechanism that randomly discards packets before a queue is full, attempting to avoid congestion before it becomes a problem. It monitors the traffic load and randomly discards packets if congestion begins to increase; as traffic is dropped, the sources of that traffic slow down their transmission. Packets are dropped based on the average queue length, so more aggressive, higher-rate flows see proportionally more drops. When traffic bursts occur and queues are already full, tail drops still occur. RED aims to distribute loss over time and maintain a low queue depth while absorbing traffic spikes. RED works best when most traffic is TCP: with TCP, dropped packets indicate congestion, so the packet source reduces its transmission rate. With other protocols, packet sources might not respond, or might resend dropped packets at the same rate, so dropping packets might not decrease congestion. The drop rate increases as the average queue size increases. Because the drops are random, TCP sessions slow down and back off at different times rather than synchronising, which keeps the average queue size small.
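The RED drop decision can be written directly from the description above: no drops below a minimum threshold, certain drop above a maximum threshold, and a drop probability rising linearly in between. A sketch with conventional parameter names (min_th, max_th, max_p are standard RED parameters; the helper names are illustrative):

```python
import random

def update_avg(avg, qlen, w=0.002):
    """EWMA of the instantaneous queue length: RED decides on the
    average depth, so short bursts are absorbed without drops."""
    return (1 - w) * avg + w * qlen

def red_drop(avg_qlen, min_th, max_th, max_p, rng=random.random):
    """Return True if the arriving packet should be dropped."""
    if avg_qlen < min_th:
        return False                 # no congestion: never drop
    if avg_qlen >= max_th:
        return True                  # severe congestion: behave like tail drop
    # In between: drop probability rises linearly from 0 to max_p
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return rng() < p
```

The `rng` parameter is injected only so the behaviour can be tested deterministically; in use, the default `random.random` supplies the randomness.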
WRED – WRED combines RED with IP precedence, providing preferential handling to high-priority packets and selectively discarding packets when an interface starts to become congested. Packets are dropped based on IP precedence. WRED monitors the average queue length and determines when to begin discarding packets based on the length of the interface queue: when the queue length exceeds the minimum threshold, it randomly drops packets with a certain probability, and if the queue continues to grow beyond the maximum threshold, it reverts to tail drops. WRED tries to maintain the queue length below the maximum threshold and to implement different drop policies for different classes of traffic, discarding low-priority traffic first when the interface is congested. This reduces the chance of tail drop by selectively dropping a few packets early, when the output interface begins to show signs of congestion, rather than waiting until the queue is full and dropping many packets at once. Non-IP traffic is generally dropped first.
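WRED's weighting amounts to giving each precedence its own RED thresholds, so low-precedence traffic starts being dropped earlier. A sketch (the profile values are invented for the example; real platforms ship their own defaults per precedence):

```python
import random

def wred_drop(avg_qlen, precedence, profiles, rng=random.random):
    """RED decision with per-precedence thresholds: profiles maps an
    IP precedence to (min_th, max_th, max_p). Higher precedence is
    given higher thresholds, so it is dropped later and less often."""
    min_th, max_th, max_p = profiles[precedence]
    if avg_qlen < min_th:
        return False
    if avg_qlen >= max_th:
        return True                  # past this class's max threshold: tail-drop behaviour
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return rng() < p
```

At the same average queue depth, a packet with precedence 0 may already be past its maximum threshold while a precedence 5 packet is still below its minimum, which is the selective discard behaviour described above.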
Tail drops – Tail drops are packets discarded because the queue they would join is full. This is not an appropriate method for important traffic, as the dropped packets may have been high priority and the router had no chance to queue them. By managing the number of packets in a queue, the router does its best to make sure that queues do not fill up and tail drops do not occur. Tail drop is not a congestion avoidance mechanism and should be avoided where possible; it does not suit real-time traffic, as dropping voice or video packets is unacceptable in today's networks.
Network congestion is caused by a number of problems, including limited bandwidth and traffic demand that exceeds capacity; queues then build up, adding delay and packet loss.