Study On Defining Network Congestion Computer Science Essay

Consider that you are a privileged user of the Internet and your Internet Service Provider (ISP) delivers data to you as efficiently as possible. Even so, the ISP cannot guarantee a constant transmission rate. For the same data there can be large fluctuations between different ends of the same network: one end receives the data quickly, while another end of the same network receives it more slowly. For example, someone may download a file from a website at night at 500 kbps, yet in the morning the rate for the same file might be well above or below 500 kbps. There are several reasons for this kind of fluctuation: some Internet links become unavailable, packets travel over different routers, many connections share the same network, and, most commonly in wireless networks, the bandwidth itself fluctuates. A network that exhibits these problems is said to be congested [1].


According to Michael Welzl's book "Network Congestion Control" [1]: "Congestion occurs when resource demands exceed the capacity. As users come and go, so do the packets they send; Internet performance is therefore largely governed by these inevitable natural fluctuations."

For example [1], suppose an ISP serves 500 customers simultaneously. Each customer's line can transmit at up to 700 kbps, but the average rate per customer is only 200 kbps. Now assume the ISP obtains bandwidth cheaply and provisions a 600 kbps link. If the ISP provides service over this link and some customers use their maximum rate at the same time, demand exceeds the provisioned capacity and the gateway sees traffic spikes. Packets that cannot be transferred over the link immediately must be buffered or dropped; standard routers place them in a buffer that works as a basic FIFO (First In First Out) queue, and packets are dropped only when the queue is full. Reserving a large buffer seems like a good idea because it absorbs traffic spikes [1]. But there are two problems. First, storing packets in the queue causes significant delay, which depends on the length of the queue. Second, Internet traffic does not strictly follow a Poisson distribution, so the assumption that there are as many upward fluctuations as downward ones may be wrong. For these reasons the queue should be kept as short as possible: a growing queue increases queuing delay and packet loss, which causes congestion.
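To make the oversubscription argument concrete, here is a minimal simulation sketch in Python. It uses the per-customer figures above but assumes a provisioned capacity of 120 Mbps (the 600 kbps figure in the text is ambiguous, so this number is invented for illustration): capacity comfortably covers the 100 Mbps average demand, yet random bursts still exceed it in a small fraction of intervals.

import random

CUSTOMERS = 500          # from the example above
PEAK_KBPS = 700          # per-customer maximum rate
AVG_KBPS = 200           # per-customer average rate
CAPACITY_KBPS = 120_000  # assumed provisioned capacity (not from the text)

def aggregate_demand():
    # Each customer bursts at peak rate with probability avg/peak,
    # so the long-run mean per customer equals AVG_KBPS.
    p_active = AVG_KBPS / PEAK_KBPS
    return sum(PEAK_KBPS for _ in range(CUSTOMERS) if random.random() < p_active)

overloads = sum(aggregate_demand() > CAPACITY_KBPS for _ in range(10_000))
print(f"capacity exceeded in {overloads / 100:.2f}% of intervals")

Even with capacity 20% above the average demand, the gateway occasionally sees spikes it cannot forward immediately, which is exactly the buffering situation described above.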

Congestion can occur anywhere in a network where there is a speed mismatch, aggregation, or a confluence of flows [2]. This can be explained with an example of a network with two service providers, ISP1 and ISP2, connected by a 300 kbps link; the scenario is shown in Figure 1. ISP1 has two customers, Customer1 and Customer2, each connected to ISP1 by a 100 kbps link. ISP2 likewise has two customers, Customer3 and Customer4, each connected by a 100 kbps link. Customer1 sends packets to Customer4 and Customer2 sends packets to Customer3. As it stands, there is no risk of congestion. But if we increase Customer1's bandwidth from 100 kbps to 1 Mbps, it sends packets far faster than the receiving side can absorb them, so packets must be buffered at the ISPs and the queue grows. Storing packets in the queue causes significant delay, depending on the queue length, and if the queue is full, packets may be dropped. This delaying and dropping of packets is congestion, in this case called speed mismatch congestion [2].

Figure 1 [1]: Network scenario.

As one more example [1], assume in the same scenario that we disconnect one customer from ISP2 and connect it to ISP1 with the same bandwidth, and that we connect two more customers to ISP1. All customers are connected to their ISPs by 100 kbps links, and the two ISPs are connected to each other by a 300 kbps link. Now ISP1 forwards packets from five customers towards ISP2, while on the other side only one customer receives them, so ISP2 becomes a point of aggregation and congestion occurs [2]. This type of congestion mostly occurs in WANs. Joining more than one traffic stream together creates a confluence, and such a confluence can likewise cause congestion in a network [2].

2.2 Congestion control theoretic terms:


Traffic can be controlled at two kinds of points: the sender, where the first decisions are made (the group of controllers), and the intermediate nodes, where performance measurements can be taken (the group of measuring points) [1]. So in any congestion control scheme, at least one controller and one measuring point must participate [1] [3]. Some important congestion control theoretic terms and mechanisms are described below:

2.2.1 Closed loop versus Open loop:

In congestion control theory, systems that use feedback are called closed loop and systems that do not use feedback are called open loop [1].

The open loop approach involves decisions such as the following [3]:

Deciding when to accept new traffic.

Deciding when to discard packets, and which ones.

Making scheduling decisions at various points in the network.

The closed loop is the opposite of the open loop. Closed loop solutions are based on three parts [3]:

Monitoring the system to detect when and where congestion occurs.

Passing this information to the places where action can be taken.

Adjusting the system's operation to correct the problem.

2.2.2 Congestion control and flow control:

Congestion control concerns the network as a whole rather than any individual sender or receiver. If the receiver has some means to slow down the sender (when the sender sends more packets than the receiver can handle), for instance by advertising a receive window, that function is called flow control [1] [4]. The size of the receive window tells the sender how much data it may send [4]. The goal of flow control is to protect the receiver from overload, while the goal of congestion control is to protect the network [1].

2.2.3 Implicit feedback:

Implicit feedback is a type of measurement taken at the receiver end to infer what happened inside the network [1].

Feedback can be derived from what happens to packets in transit from source to destination. There are three possibilities for a packet during transmission; the possibilities and their causes are as follows [1]:

The packet can be delayed because of distance, queuing, processing in the involved nodes, or transmission at the link layer.

The packet can be dropped, because of queue growth, equipment malfunction, or link noise that causes a checksum failure at an intermediate system.

The packet can be changed, either through header modification or through errors on the link; such errors may be caused by an attacker or by broken equipment.

Any of these three events (a dropped, delayed, or changed packet) can serve as implicit feedback about congestion [1].

2.2.4 MIMD, AIAD, AIMD and MIAD [1]:

What happens when a sender sends more packets than the network can carry? Generally, the sender decreases its transmission rate. But what about the Internet, or any scenario where many senders connect and disconnect and send more or fewer packets at any time? "So, we end up with a sender that keeps sending, a receiver that keeps submitting binary yes/no feedback, and a rule for the sender that says: increase the rate if the receiver says that there was no congestion, otherwise decrease the rate" [1].

The rate update function can be expressed as a piecewise linear control:

x(t + 1) = ai + bi * x(t)   if y(t) = 0 (no congestion)

x(t + 1) = ad + bd * x(t)   if y(t) = 1 (congestion)

where x(t) denotes the rate of the sender at time t and y(t) is the binary feedback. The observation is restricted to linear control; ai, bi, ad, and bd are constants.

Here the linear control has both an additive (a) and a multiplicative (b) component. If only one component is allowed to have influence at a time, we obtain the following possibilities (a simulation sketch follows the list):

ai = 0; ad = 0; bi > 1; 0 < bd < 1: Multiplicative Increase, Multiplicative Decrease (MIMD)

ai > 0; ad < 0; bi = 1; bd = 1: Additive Increase, Additive Decrease (AIAD)

ai > 0; ad = 0; bi = 1; 0 < bd < 1: Additive Increase, Multiplicative Decrease (AIMD)

ai = 0; ad < 0; bi > 1; bd = 1: Multiplicative Increase, Additive Decrease (MIAD)
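A minimal sketch of this linear control in Python (the constants are illustrative, not taken from the text):

def linear_control(x, congested, ai=0.0, bi=1.0, ad=0.0, bd=1.0):
    # One step of x(t+1) = a + b*x(t); which (a, b) pair applies
    # depends on the binary feedback y(t).
    if congested:              # y(t) = 1
        return ad + bd * x
    return ai + bi * x         # y(t) = 0

# The four pure strategies listed above, with example constants:
MIMD = dict(bi=1.5, bd=0.5)    # ai = ad = 0
AIAD = dict(ai=1.0, ad=-1.0)   # bi = bd = 1
AIMD = dict(ai=1.0, bd=0.5)    # ad = 0, bi = 1
MIAD = dict(ad=-1.0, bi=1.5)   # ai = 0, bd = 1

x = 10.0
for congested in (False, False, True, False):
    x = linear_control(x, congested, **AIMD)
    print(round(x, 2))         # 11.0, 12.0, 6.0, 7.0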

2.2.5 RTT Estimation [1]:

The RTT (round-trip time) is the duration between sending a packet and receiving the corresponding ACK: the time it took for the packet to reach the receiver plus the time for the ACK to come back.


The RTT is an important component of several functions, which are described below:

For reliable transmission, a sender must retransmit packets that are dropped somewhere in the network. To realize this, each packet is acknowledged; if an ACK is missing, the sender must assume that the packet was dropped and retransmission is required. This mechanism is called Automatic Repeat Request (ARQ). The mechanism needs a timeout, but if the timeout is too large it can take unnecessarily long until a packet is retransmitted, and if it is too small, spurious retransmissions occur. Both conditions waste network capacity and cause errors at the receiver side. Neglecting delay fluctuations from queuing and other interfering factors, the ideal timeout value seems to be one RTT.

Here, a timeout value that is too small is a disadvantage because spurious congestion is detected, causing an unnecessary rate reduction. A timeout value that is too large is also a disadvantage because it can harm the network: congestion is detected late, and sources should reduce their rates as soon as possible. So the ideal value is most probably a function of the RTT.

In a reliable protocol based on ARQ, the time from sending a packet to receiving the corresponding ACK is one RTT.
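The essay does not spell out a concrete estimator; a common choice is the EWMA-based calculation standardized in RFC 6298, sketched here with its usual gains (the clock granularity G is an assumed value):

ALPHA, BETA = 1/8, 1/4   # RFC 6298 gains
G = 0.1                  # clock granularity in seconds (assumed)

srtt = rttvar = rto = None

def on_rtt_sample(r):
    # Update the smoothed RTT, the RTT variance, and the
    # retransmission timeout from one RTT measurement r (seconds).
    global srtt, rttvar, rto
    if srtt is None:     # first sample
        srtt, rttvar = r, r / 2
    else:                # RTTVAR is updated before SRTT, per the RFC
        rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - r)
        srtt = (1 - ALPHA) * srtt + ALPHA * r
    rto = max(1.0, srtt + max(G, 4 * rttvar))   # 1 s lower bound

for sample in (0.100, 0.120, 0.300, 0.110):
    on_rtt_sample(sample)
    print(f"SRTT={srtt:.3f}s RTO={rto:.3f}s")

Note how the variance term makes the timeout a function of the RTT and its fluctuations rather than exactly one RTT.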

2.2.6 Explicit congestion notification [13] [1]: ECN (Explicit Congestion Notification) is a scheme in which congestion is signalled by setting bits in the packet header: if these bits are set to 1, congestion has been detected in the network [1] [13]. ECN is found in a variety of technologies such as Frame Relay, ATM, and IP [1]. It differs from the AQM mechanisms described in the next topic, where congestion is indicated by dropping packets; here congestion is indicated by marking packets. This is done using the two-bit ECN field in the IP header, giving four ECN code points from '00' to '11'. A data sender sets ECT(0) or ECT(1) to indicate that the end points of the transport protocol are ECN capable, where ECT(0) and ECT(1) are the code points '10' and '01' of ECN Capable Transport (ECT), respectively [9]. Routers can use either of these code points freely, and senders are also free to use either ECT(1) or ECT(0) to indicate ECT [13]. The advantage of this mechanism is that it causes less loss, but the ECN bits do not suffice for every case, since queues can still overflow. In effect, ECN turns implicit feedback into explicit feedback. Without ECN, a dropped packet is an ambiguous signal, because drops can also result from unwanted effects such as signal noise that causes checksum failures, as well as from AQM decisions when queues overflow [1]. An ECN mark, by contrast, conveys a clear message that cannot be misinterpreted by any node traversed by the packets that carry it, precisely because of its explicit nature.
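A small sketch of the four code points and the mark-instead-of-drop decision described above (the constant and function names are illustrative):

# The two-bit ECN field of the IP header (RFC 3168 names).
NOT_ECT = 0b00   # transport is not ECN-capable
ECT_1   = 0b01   # ECN-Capable Transport (1)
ECT_0   = 0b10   # ECN-Capable Transport (0)
CE      = 0b11   # Congestion Experienced, set by a router

def on_congested(ecn_bits):
    # A router's choice for a packet it would otherwise drop:
    # mark it CE if the endpoints declared ECN capability.
    if ecn_bits in (ECT_0, ECT_1):
        return CE       # explicit congestion signal
    return None         # not ECN-capable: fall back to dropping

print(on_congested(ECT_0))     # 3, i.e. the packet is marked CE
print(on_congested(NOT_ECT))   # None, i.e. the packet is dropped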

2.3. Congestion avoidance algorithms:

2.3.1 TCP/IP Congestion Avoidance Terms [2]:

TCP congestion avoidance techniques are used to avoid basic congestion in networks. When congestion occurs, TCP supports traffic-management mechanisms such as slow start, fast retransmit, congestion avoidance, and fast recovery, which are discussed after the basic terms and avoidance strategies are introduced.

If an interface on a router cannot transmit a packet immediately, the packet is queued, either in the interface transmit (TX) ring or in the interface output hold queue, depending on the switching path. Packets are then taken out of the queue and transmitted on the interface sooner or later. If more traffic arrives from the input interfaces than the router can buffer and forward, the queues grow and congestion occurs. Tail drop is the default queuing response to congestion: when the output queue is full and tail drop is in effect, all packets trying to enter the queue are dropped until the queue is no longer full.

Tail drop limitations: Tail drop does not work well when the network has a large number of TCP flows or when selective dropping is required. When congestion occurs, most TCP sessions are affected: they simultaneously back off and then restart, while aggressive flows temporarily seize all the buffers. This causes TCP global synchronization and TCP starvation, respectively. Tail drop also does not differentiate between packets, so important packets are sometimes dropped as well.

What are TCP global synchronization and TCP starvation?

TCP global synchronization [2]: If a router receives too much data, say more than the queue limit, then by default many TCP sessions simultaneously go into slow start, which slows down the traffic, and later all flows speed up and back off again together. This behaviour is called global synchronization. It occurs when packets are dropped all at once and multiple TCP hosts reduce their transmission rates in response; when congestion eases, they all increase their transmission rates again, with the result that the transmission link is never fully used.

TCP starvation: Consider, for example, a file-transfer application with a large TCP transmit window. TCP sends a number of large packets to the destination, which immediately fill the queue on the router, and the remaining flows are starved because there is no policy governing which packets are dropped and which are transmitted. As a result, less aggressive flows are dropped at the output interface.

Traffic Management mechanisms [2] [5]:

Slow start, congestion avoidance, fast retransmit, and fast recovery are the four TCP/IP congestion avoidance algorithms [5].

Slow start and Congestion avoidance algorithms [5] [1] [6]:

When two hosts are on the same LAN, the old TCP behaviour of injecting multiple segments into the network is good enough, but when there are routers and slower links between sender and receiver, those routers must queue the packets and may run out of buffer space. The slow start algorithm is used to avoid this situation. It operates by observing the rate at which new packets should be injected into the network, as confirmed by acknowledgements returned from the other end. The slow start and congestion avoidance algorithms must be used at the sender side to control the amount of outstanding data in the network. These algorithms use two variables: CWND (the congestion window, on the sender side) and RWND (the receiver's advertised window). CWND is a sender-side limit on the data that can be sent before an ACK is received, while RWND is a receiver-side limit on the outstanding data.

One question arises here: which of the two algorithms controls data transmission at any given moment? The answer is that a single variable, SSTHRESH (slow start threshold), determines which algorithm is in control. The sender chooses according to three conditions:

CWND < SSTHRESH: the sender uses the slow start algorithm.

CWND > SSTHRESH: the sender uses the congestion avoidance algorithm.

CWND = SSTHRESH: the sender can use either of the two algorithms.

Beginning transmission into a network with unknown conditions requires TCP to probe the network slowly; this slow probing also determines the available capacity, in order to avoid congesting the network with an inappropriately large burst of data. The slow start algorithm serves this purpose. The IW (initial window, the size of the sender's congestion window after the three-way handshake is completed) must be less than or equal to 2*SMSS bytes and must not be more than two segments.

Using the equation given below, a TCP sender may instead use an initial window of three or four segments, provided the total size does not exceed 4380 bytes:

IW = min (4*SMSS, max (2*SMSS, 4380 bytes)) ..................................................................(1)

During slow start, TCP may increase CWND by up to SMSS bytes for each received ACK that acknowledges new data. According to the three conditions on CWND and SSTHRESH above, slow start ends once CWND grows beyond SSTHRESH. In congestion avoidance, by contrast, CWND is incremented by roughly one full-sized segment per round-trip time (RTT), and this continues until congestion is detected. The most common formula used to update CWND during congestion avoidance is shown below:

CWND += SMSS*SMSS/CWND ..........................................................................................(2)

This adjustment is executed for every incoming non-duplicate ACK. "Equation (2) provides an acceptable approximation to the underlying principle of increasing CWND by 1 full-sized segment per RTT. (Note that for a connection in which the receiver acknowledges every data segment, (2) proves slightly more aggressive than 1 segment per RTT, and for a receiver acknowledging every-other packet, (2) is less aggressive.)" However, equation (2) can fail to increase CWND when the congestion window is very large: if equation (2) yields 0, the result should be rounded up to 1 byte. Adding a constant on the right-hand side of the equation may be incorrect and can reduce performance. So how can CWND still be increased? The answer is to count the number of bytes that have been acknowledged by ACKs for new data: CWND can be incremented by up to SMSS bytes once the number of acknowledged bytes reaches CWND. During congestion avoidance, however, CWND must not be increased by more than either one full-sized segment per RTT or the value given by equation (2).

Note that when a TCP sender detects segment loss using the retransmission timer, the value of SSTHRESH must be set to no more than the value given in equation (3):

SSTHRESH = max (FlightSize / 2, 2*SMSS)............................................................................(3)

where FlightSize is the amount of outstanding data in the network.
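Equations (1) to (3) can be collected into a small sketch (assuming SMSS = 1460 bytes, an initially high SSTHRESH, and a reset of CWND to one segment after a timeout, as RFC 2581 prescribes):

SMSS = 1460  # sender maximum segment size in bytes (assumed)

class CongestionWindow:
    def __init__(self):
        # Equation (1): the initial window, 4380 bytes here.
        self.cwnd = min(4 * SMSS, max(2 * SMSS, 4380))
        self.ssthresh = 65535      # arbitrarily high to start

    def on_ack_of_new_data(self):
        if self.cwnd < self.ssthresh:      # slow start
            self.cwnd += SMSS              # one SMSS per ACK
        else:                              # congestion avoidance
            # Equation (2), rounded up to 1 byte if it yields 0.
            self.cwnd += max(1, SMSS * SMSS // self.cwnd)

    def on_timeout(self, flightsize):
        # Equation (3): halve the outstanding data, floor 2*SMSS.
        self.ssthresh = max(flightsize // 2, 2 * SMSS)
        self.cwnd = SMSS                   # restart in slow start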

Fast Retransmit and Fast Recovery [6] [5] [7]:

When a segment arrives out of order, a TCP receiver sends an immediate duplicate ACK to inform the sender of the out-of-order segment and of the sequence number it expects. Duplicate ACKs are caused by three network problems: first, a dropped segment, in which case all segments after the dropped one trigger duplicate ACKs; second, reordering of data segments by the network; and third, replication of ACK or data segments by the network.

The fast retransmit algorithm is used to detect and repair loss on the basis of these duplicate ACKs. Fast retransmit waits for three incoming duplicate ACKs; once they arrive, TCP concludes that the segment has been lost and retransmits what appears to be the missing segment, without waiting for the retransmission timer to expire. Once fast retransmit has finished, the fast recovery algorithm takes over: it governs the transmission of new data until a non-duplicate ACK arrives.

The implementation of these algorithms, as specified in RFC 2581 [5], proceeds as follows (a code sketch follows the list):

On receipt of the third duplicate ACK, set SSTHRESH to no more than half the amount of outstanding data in the network, but at least 2*MSS.

Set CWND to SSTHRESH plus 3*MSS and retransmit the lost segment.

Increment CWND by MSS for each additional duplicate ACK received.

Transmit a segment, if allowed by the new value of CWND and the receiver's advertised window.

When the next ACK arrives that acknowledges new data, set CWND to SSTHRESH; this is called deflating the window.
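A sketch of these steps follows; the state dictionary and the retransmit_lost_segment helper are hypothetical scaffolding, not part of RFC 2581:

SMSS = 1460  # bytes (assumed)

def retransmit_lost_segment(state):
    print("fast retransmit of the presumed-lost segment")

def on_duplicate_ack(state, dupacks):
    # state holds cwnd, ssthresh and flightsize, all in bytes.
    if dupacks == 3:
        # Steps 1-2: halve the outstanding data, retransmit the
        # missing segment, inflate cwnd by the three dup ACKs.
        state["ssthresh"] = max(state["flightsize"] // 2, 2 * SMSS)
        retransmit_lost_segment(state)
        state["cwnd"] = state["ssthresh"] + 3 * SMSS
    elif dupacks > 3:
        # Step 3: each further dup ACK means one more packet has
        # left the network, so inflate the window by one segment.
        state["cwnd"] += SMSS

def on_new_ack(state):
    # Step 5: deflate the window once new data is acknowledged.
    state["cwnd"] = state["ssthresh"]

state = {"cwnd": 10 * SMSS, "ssthresh": 64000, "flightsize": 10 * SMSS}
on_duplicate_ack(state, dupacks=3)   # enters fast recovery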

2.3.2 Active Queue Management (AQM) [2] [1] [8] [3] [9]: Active queue management disciplines avoid both TCP global synchronization and TCP starvation, the drawbacks of tail drop. AQM is a general mechanism that admits several alternatives for congestion indication; in the absence of ECN, however, it is restricted to using packet drops to indicate congestion.

Random Early Detection (RED): RED is a dropping mechanism that drops packets before a queue is full. The mechanism is based on the average queue length: as the queue grows, RED starts dropping packets until the queue length becomes shorter. RED drops packets randomly, so it has no per-flow intelligence. The logic of the algorithm is that aggressive flows are over-represented in the arriving traffic, so a randomly dropped packet will most likely belong to an aggressive session.

RED dropping strategy: packet dropping is based on the average queue length and on three configurable parameters contained within the RED profile:

Minimum Threshold: when the average queue length rises above the minimum threshold, RED starts dropping packets; the drop probability increases as the average queue length grows towards the maximum threshold.

Maximum Threshold: RED drops all packets when the average queue size goes above the maximum threshold.

Mark Probability Denominator: this number determines the fraction of packets that are dropped when the average queue depth is at the maximum threshold. For example, if the denominator is 512, one out of every 512 packets is dropped when the average queue is at the maximum threshold. The linear increase of the drop probability from the minimum threshold to the maximum threshold depends on the mark probability denominator and on the queue sizes at the two thresholds.

The minimum threshold value should be set high enough to maximize link utilization; if it is set too low, packets are dropped unnecessarily and the transmission link is not fully used. In addition, the difference between the maximum and minimum thresholds should be large enough; otherwise many packets may be dropped at once and global synchronization will occur. RED thus has three dropping modes. The first, no drop, applies when the average queue size is between 0 and the configured minimum threshold: all packets are queued. The second, random drop, applies when the average queue size is between the configured minimum and maximum thresholds: packets are dropped with a probability linearly proportional to the average queue size and governed by the mark probability denominator. The third, full drop, drops all packets when the average queue size is above the maximum threshold.
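The three profile parameters translate into a drop probability roughly as follows (a sketch; the example threshold values are arbitrary):

def red_drop_probability(avg_qlen, min_th, max_th, mpd):
    # mpd is the mark probability denominator: the linear ramp
    # between the thresholds reaches 1/mpd at the maximum threshold.
    if avg_qlen < min_th:
        return 0.0       # no-drop region
    if avg_qlen >= max_th:
        return 1.0       # full-drop region
    return (avg_qlen - min_th) / (max_th - min_th) / mpd

# A profile with min = 20, max = 40 packets, denominator 512:
for q in (10, 20, 30, 40):
    print(q, red_drop_probability(q, 20, 40, 512))
# prints 0.0, 0.0, about 0.001, and 1.0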

The difference between RED and plain TCP/IP avoidance under tail drop: with tail drop, when all sessions slow down, the TCP sessions restart their transmissions at the same time; after a while the same congestion recurs and the sessions back off again. This behaviour cycles constantly, leaving the link poorly utilized.

If RED is used instead, packets are dropped randomly before the interface reaches congestion, which influences only a small number of sessions at a time. This increases the number of concurrent sessions and the average link utilization, and global synchronization occurs only rarely.

Weighted RED (WRED): WRED combines RED with IP Precedence or DSCP and performs packet dropping based on IP Precedence or DSCP markings.

WRED monitors the average queue length, and packet dropping is based on that length. If the queue length is below the minimum threshold, no packets are dropped and packets are enqueued. If the length is between the minimum and maximum thresholds, packets are dropped randomly. If the length goes above the maximum threshold, WRED follows the tail drop mechanism and drops all arriving packets, leaving retransmission to the senders. This resembles the RED algorithm; the difference is that RED does not classify packets, whereas WRED classifies packets and assigns drop policies to the traffic, so that when queue thresholds are exceeded, packets of the selected (lower-priority) traffic are dropped first. When the interface is congested, different classes of traffic therefore receive different performance.

On an interface configured for RSVP (Resource Reservation Protocol), WRED can drop other flows rather than the RSVP flows. Traffic can be selected by IP Precedence or DSCP, so lower-priority traffic is dropped before higher-priority traffic. Under this algorithm, packets of heavy users are dropped in preference to those of light users, and so it slows down the traffic sources that generate the most traffic rather than the sources that generate little.

As mentioned before, dropping packets selectively when interfaces are congested reduces the chance of tail drop. WRED also reduces the chance of global synchronization, because dropping some packets early avoids dropping large numbers of packets at once when the queue fills. In this way WRED helps maximize the utilization of transmission lines.

Non-IP traffic is treated as precedence 0, the lowest precedence, so non-IP traffic is more likely to be dropped than IP traffic. WRED is used in the core routers of a network rather than at the edge: edge routers assign IP Precedence or DSCP values to packets as they enter the network, and WRED uses these assigned values to determine how to treat different types of traffic.

Working of WRED:

WRED defines the drop characteristics through parameters configured in the traffic profile; these parameters also define the WRED probability slopes. A sketch of the WRED algorithm is shown below:


Figure 2 [2]: WRED Sketch

When a packet arrives at the output queue, the correct WRED profile for the packet is selected, and the packet is then passed to WRED for processing. On the basis of the selected traffic profile and the average queue length, the dropping probability is calculated, and the packet is either dropped or passed to the output queue according to three conditions (a sketch in code follows the list):

If the queue is full, all arriving packets are dropped.

If the queue is not full and the average length is below the minimum threshold, packets are simply transmitted out of the interface.

If the queue is not full but the average queue size is between the minimum and maximum thresholds, the decision is based on the dropping probability: WRED either queues the packet or drops it at random.
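A sketch of this per-profile decision, with hypothetical profiles keyed by IP precedence (the threshold values are invented for illustration):

import random

# Lower-priority traffic gets a lower minimum threshold,
# so it is dropped earlier as the average queue grows.
WRED_PROFILES = {
    0: dict(min_th=10, max_th=40, mpd=10),   # best effort
    5: dict(min_th=30, max_th=40, mpd=10),   # higher priority
}

def wred_decision(avg_qlen, queue_full, precedence):
    # Apply the three conditions above using the per-class profile.
    if queue_full:
        return "drop"                        # tail drop
    p = WRED_PROFILES.get(precedence, WRED_PROFILES[0])
    if avg_qlen < p["min_th"]:
        return "enqueue"                     # below minimum threshold
    if avg_qlen >= p["max_th"]:
        return "drop"                        # tail-drop region
    # Between the thresholds: linear ramp up to 1/mpd at max_th.
    ramp = (avg_qlen - p["min_th"]) / (p["max_th"] - p["min_th"])
    return "drop" if random.random() < ramp / p["mpd"] else "enqueue"

print(wred_decision(avg_qlen=35, queue_full=False, precedence=0))
print(wred_decision(avg_qlen=35, queue_full=False, precedence=5))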

Drawback of WRED:

WRED is generally not recommended for interfaces carrying voice traffic. Voice is carried over the User Datagram Protocol, which does not react to packet loss the way TCP does, so dropping voice packets does not slow the sources; it only degrades voice quality. In most networks, therefore, nobody wants voice packets dropped.

2.4. Enhancing Active Queue Management:

Adaptive RED (ARED)

Dynamic RED (DRED)

Stabilized RED (SRED)

BLUE

Adaptive Virtual Queue (AVQ)

Flow Random Early Drop (FRED)

RED with Preferential dropping (RED-PD)

CHOKe

Random Early Marking (REM)

A comparison of these enhanced active queue management mechanisms is shown in the table below:

Table 1 Comparison of AQM Mechanisms [1]:

No. | Mechanism | What it monitors | How it works
1 | RED, ARED, DRED | Queue length | Packets are randomly dropped based on the average queue length.
2 | SRED | Queue length, packet header | The number of flows is estimated by comparing flow identifiers with those of random packets in the queue; the estimate is an input to the drop function.
3 | BLUE | Packet loss, 'link idle' events, packet header | The drop probability is increased upon packet loss and decreased when the link is idle.
4 | AVQ | Packet arrival rate | A virtual queue is maintained; its capacity is updated on the basis of the packet arrival rate.
5 | RED-PD | Queue length, packet header, packet drops | The history of packet drops is checked for flows with a high rate; such flows are monitored and specially controlled.
6 | FRED | Queue length, packet header | Flows that have packets buffered are monitored and controlled using per-flow thresholds.
7 | CHOKe | Queue, packet header | If a packet is from the same flow as a randomly picked one in the queue, both are dropped.
8 | REM | Arrival rate (optional), queue length | A 'price' variable is calculated on the basis of rate and queue mismatch; the drop probability is exponential in the price, and the end-to-end drop probability is exponential in the sum of all link prices.

Chapter 3 Methods:

3.1 The RED Algorithm:

The RED algorithm calculates the average queue size using a low-pass filter with an exponentially weighted moving average [9]. As mentioned before, this average queue size is compared against two thresholds: the minimum threshold (minth) and the maximum threshold (maxth). The marking scheme depends on these two thresholds: when the average queue size is less than minth, no packets are marked; when it is greater than maxth, every packet is marked; and when it lies between the two thresholds, each arriving packet is marked with a probability Pa that is a function of the average queue size [9]. "The probability that a packet is marked from a particular connection is roughly proportional to that connection's share of the bandwidth at the router" [9] [8]. The detailed RED algorithm is shown in Figure 3 below [8]:

Initialization:
    avg ← 0
    count ← -1

for each packet arrival:
    calculate the new avg:
        if the queue is nonempty
            avg ← (1 - wq) * avg + wq * q
        else
            m ← f(time - q_time)
            avg ← [(1 - wq)^m] * avg
    if minth ≤ avg < maxth
        increment count
        calculate probability Pa:
            Pb ← maxp * (avg - minth) / (maxth - minth)
            Pa ← Pb / (1 - count * Pb)
        with probability Pa:
            mark the arriving packet
            count ← 0
    else if maxth ≤ avg
        mark the arriving packet
        count ← 0
    else count ← -1

when queue becomes empty
    q_time ← time

Saved variables:
    avg: average queue size
    q_time: start of the queue idle time
    count: packets since the last marked packet

Fixed parameters:
    wq: queue weight
    minth: minimum threshold for queue
    maxth: maximum threshold for queue
    maxp: maximum value for Pb

Other:
    Pa: current packet marking probability
    q: current queue size
    time: current time
    f(t): a linear function of the time t

Figure 3 [9]: The RED Algorithm

The RED algorithm has two separate parts: computing the average queue size, and calculating the packet marking probability, which determines how frequently the router marks packets at the current level of congestion. The computation of the average queue size determines the degree of burstiness that will be allowed in the router queue. The calculation of the packet marking probability aims to mark packets at fairly evenly spaced intervals, to avoid biases and global synchronization, and to mark packets frequently enough to control the average queue size [9].

An empty queue indicates an idle period of the router. The calculation of the average queue size takes idle periods into account by estimating the number m of small packets that the router could have transmitted during the idle period [8] [9].

As the average queue size (avg) varies from minth to maxth, the packet marking probability Pb varies linearly from 0 to maxp [9]:

Pb ← maxp * (avg - minth) / (maxth - minth)

The final packet marking probability Pa increases slowly as the count of packets since the last marked packet increases [8]:

Pa ← Pb / (1 - count * Pb)

3.2 Calculating parameters of the RED algorithm:

The calculation of parameters such as avg, wq, minth, maxth, and the packet marking probability is described below:

The average queue length is calculated by the RED gateway using a low-pass filter, so that transient congestion does not produce a significant increase in the average queue size, even though bursty traffic (explained in the next section) causes short-term increases in the actual queue size. The equation for calculating the average queue length is shown below:

Avg (1-wq)avg + wq x q.

Here, if wq is too low, the estimated average queue size responds too slowly to transient congestion. If wq is too high, transient congestion is not filtered out by the estimated average at the router [10]. From these effects, an upper bound and a lower bound on wq can be set. The average queue length under these parameters can be found as follows:

When the average queue size is 0, the queue is empty. Now assume that the queue length increases from 0 to L packets over L packet arrivals. After those L packets have arrived, the average queue size, denoted avgL, can be derived as follows [11] [8]:

avgL = sum over i = 1..L of i * wq * (1 - wq)^(L - i)

     = L + 1 + [{(1 - wq)^(L + 1) - 1} / wq]

This equation gives avgL as a function of wq and L, plotted in Figure 4, where the x-axis shows wq from 0.001 to 0.005 and the y-axis shows L from 10 to 100. According to Sally Floyd and Van Jacobson's 1993 paper on RED, if the queue increases from 0 to 100 packets and wq = 0.001, then the average queue size avgL is 4.88.

Fig 4 [8]: avgL as a function of wq and L

One rule for choosing wq is this: given minth, if bursts of L packets arriving at the router are to be allowed, then wq should satisfy the following condition [8]:

avgL < minth

and, substituting the expression for avgL,

L + 1 + [{(1-wq)^(L+1) - 1}/wq] < minth

This condition gives the upper bound on wq.

The lower bound on wq guards against setting wq too low: if wq is too low, the average queue size responds too slowly to changes in the actual queue size, as described before, and a RED router (one in which the RED algorithm is implemented) cannot detect the initial stages of congestion [10]. Starting from an average of zero, it takes -1/ln(1 - wq) packet arrivals until the average queue size reaches 63% (that is, 1 - 1/e) of the actual queue size [12] [8].
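Both bounds are easy to check numerically; the sketch below reproduces the avgL = 4.88 example from Figure 4 and the arrival counts of Table 2:

import math

def avg_after_burst(wq, L):
    # avgL after a burst of L packets into an empty queue
    # (the closed form derived above).
    return L + 1 + ((1 - wq) ** (L + 1) - 1) / wq

def packets_to_63_percent(wq):
    # Arrivals needed for the average to reach 1 - 1/e (about 63%)
    # of a constant actual queue size.
    return -1 / math.log(1 - wq)

print(round(avg_after_burst(0.001, 100), 2))          # 4.88
for wq in (0.001, 0.002, 0.003):
    print(wq, f"{packets_to_63_percent(wq):.1f}")
# 999.5, 499.5 and 332.8: Table 2 rounds these to 1000, 500 and 333.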

The table below shows the number of arriving packets required for several values of wq [8]:

Value of wq | Arriving packets
0.001 | 1000
0.002 | 500
0.003 | 333

Table 2 [8]

Packets are marked when the average queue size is between minth and maxth, and dropped when the average queue size exceeds maxth, as described before. The question now is how the parameters minth and maxth should be set for a RED router to work most efficiently.

As a rule of thumb, Floyd and Jacobson's 1993 paper suggests that maxth should be at least twice minth; according to [10], Floyd's current rule of thumb is to set maxth to three times minth. The RED router functions most effectively when maxth - minth is larger than the typical increase in the calculated average queue size during one round-trip time (round-trip time is explained in section 2.2.5) [8]. The value of maxth depends on the maximum average delay that the router can allow [8]. The value of minth depends on the link speed, the propagation delay, the maximum buffer size, and on the desired trade-off at that router between low average delay and high link utilization. If minth is too small, little burstiness is allowed in the arrival process; the larger minth is, the burstier the traffic that can be tolerated while still achieving a given average link utilization [10]. A minth of one packet would result in unacceptably low link utilization, so according to Floyd's ns-1 and ns-2 simulator results, minth should be set to at least five packets for efficient use of the link [8] [10].

Calculating the packet marking probability [8]:

The initial packet marking probability Pb is calculated by the equation given below:

Pb ← maxp * (avg - minth) / (maxth - minth)

The final marking probability can be computed by two methods: the first uses geometric random variables and the second uniform random variables. In both, the parameter maxp gives the maximum value of the packet marking probability Pb, reached when the average queue size reaches maxth. In the first method, each packet is marked with probability Pb, so

Prob[X = n] = [(1 - Pb)^(n - 1)] * Pb

where the intermarking time X is the number of packets that arrive after a marked packet until the next packet is marked. X is thus a geometric random variable with parameter Pb, and E[X] = 1/Pb [8].

In the second method, the intermarking time X becomes a uniform random variable on {1, 2, ..., 1/Pb} (assuming 1/Pb is an integer for simplicity). This is achieved by marking each arriving packet with probability Pb / (1 - count * Pb), where count is the number of unmarked packets that have arrived since the last marked packet. In this method,

Prob[X = n] = {Pb / [1 - (n - 1) * Pb]} * product over i = 0..n-2 of {1 - Pb / (1 - i * Pb)}

            = Pb   for 1 ≤ n ≤ 1/Pb (and 0 otherwise)

and, for method 2, E[X] = 1/(2Pb) + 1/2.

A comparison of the two methods is shown in Figure 5, where the top and bottom rows correspond to method 1 and method 2 respectively. For method 1, each packet is marked with probability p = 0.02; for method 2, each packet is marked with probability p / (1 + i*p) with p = 0.01, where i is the number of unmarked packets since the last marked packet. Each dot in the figure shows a marked packet, and roughly 100 out of the 5000 arriving packets are marked by each method. Marks that arrive in clusters, separated by long gaps, can create the situation called global synchronization; the more evenly spaced marks of method 2 help avoid it.

Figure 5 [8]: Comparison of two methods
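The two intermarking-time distributions can be reproduced with a short simulation (a sketch; the trial count is arbitrary):

import random

def intermark_method1(pb):
    # Geometric: mark each packet independently with probability pb.
    n = 1
    while random.random() >= pb:
        n += 1
    return n

def intermark_method2(pb):
    # Uniform: mark with probability pb / (1 - count*pb); the gap
    # comes out uniform on {1, ..., 1/pb}.
    count = 0
    while random.random() >= pb / (1 - count * pb):
        count += 1
    return count + 1

trials = 100_000
m1 = [intermark_method1(0.02) for _ in range(trials)]
m2 = [intermark_method2(0.01) for _ in range(trials)]
print(sum(m1) / trials)   # about 1/0.02 = 50
print(sum(m2) / trials)   # about 1/(2*0.01) + 1/2 = 50.5
print(max(m1), max(m2))   # method 1 has a long tail; method 2 never exceeds 100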

Bursty Traffic: bursty traffic means "traffic from a connection where the amount of data transmitted in one round trip time is small compared to the delay bandwidth product, but where multiple packets from that connection arrive at the gateway in a short period of time" [8]. An FTP connection with a large delay-bandwidth product but a small window can produce bursty traffic at the router, with a whole window of packets sent before the ACK packets return. Other examples of bursty traffic are variable-bit-rate video and some forms of interactive traffic.