Advanced congestion control algorithm


Abstract—TCP-Illinois is an advanced congestion control algorithm designed for high-speed networks. It is based on the idea of using packet loss information to determine whether the congestion window should be increased or decreased, and packet queuing delay information to determine the amount of the increment or decrement. We perform experiments to demonstrate the performance of TCP-Illinois and alter various parameters, such as the receiver buffer size.

Keywords-TCP, Congestion Control, Throughput


TCP, the most widely used transport-layer protocol on the Internet (underlying, e.g., HTTP and SMTP), plays an important role in determining overall network performance. TCP performance is strongly influenced by its congestion control algorithms, which limit the amount of transmitted traffic based on the estimated network capacity and utilization. The four standard algorithms are Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery.

When a connection is set up, the Slow Start algorithm initializes the sender's congestion window to one segment. As acknowledgements are returned, the congestion window is increased by one segment for each ACK received. At some point the congestion window may become too large for the network, or network conditions may change such that packets are dropped. Lost packets trigger a timeout at the sender; when this happens, the sender enters congestion avoidance.
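The growth pattern just described can be sketched as follows (the variable and function names are our own illustration, not kernel code): one extra segment per ACK means the window doubles every loss-free round trip.

```python
# Sketch of slow start: the congestion window (cwnd, in segments)
# starts at 1 and grows by one segment per ACK, so it doubles each
# round trip until loss forces the sender into congestion avoidance.

def slow_start(cwnd, acks_this_rtt):
    """Return the congestion window after one RTT of slow start."""
    for _ in range(acks_this_rtt):
        cwnd += 1  # one segment per ACK received
    return cwnd

cwnd = 1
for rtt in range(4):               # four loss-free round trips
    cwnd = slow_start(cwnd, cwnd)  # every segment in flight is ACKed
    print(rtt + 1, cwnd)           # cwnd doubles: 2, 4, 8, 16
```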

In the Congestion Avoidance algorithm, the reception of duplicate ACKs or the expiry of the retransmission timer indicates to the sender that network congestion is occurring. When duplicate ACKs arrive at the sender, they indicate that congestion is imminent. To cope with congestion, two further algorithms are used: Fast Retransmit and Fast Recovery.

If three or more duplicate ACKs are received, the sender does not wait for the retransmission timer it set when sending the segment to expire before retransmitting; receipt of three duplicate ACKs indicates that at least one segment has been lost. This process is called the Fast Retransmit algorithm [1].

The Fast Recovery algorithm is invoked when duplicate ACKs are being received: the TCP sender knows that data is still flowing to the receiver, so congestion is not severe. Rather than restarting with a window of one segment as in Slow Start, the sender resumes transmission with a larger window and increments it as if in Congestion Avoidance mode [2].
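A minimal sketch of these two loss reactions (names and structure are illustrative, not an implementation of any real stack): three duplicate ACKs trigger fast retransmit and fast recovery with a halved window, while a timeout forces a return to slow start.

```python
DUP_ACK_THRESHOLD = 3  # standard fast-retransmit trigger

def on_loss_signal(cwnd, dup_acks, timeout):
    """Return (new_cwnd, action) for a loss indication."""
    if timeout:
        # Timeout: congestion is severe, restart from slow start.
        return 1, "slow_start"
    if dup_acks >= DUP_ACK_THRESHOLD:
        # Fast retransmit + fast recovery: retransmit at once and
        # resume with a halved window in congestion-avoidance mode.
        return max(cwnd // 2, 1), "fast_recovery"
    return cwnd, "none"

print(on_loss_signal(32, dup_acks=3, timeout=False))  # (16, 'fast_recovery')
print(on_loss_signal(32, dup_acks=0, timeout=True))   # (1, 'slow_start')
```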

Most new methods developed for congestion control are based on the additive-increase multiplicative-decrease (AIMD) algorithm. Being a window-based algorithm, TCP controls the send rate by maintaining a window size variable W, which limits the number of unacknowledged packets in the network from a single user. This window size is adjusted by the AIMD algorithm in the following manner: W is increased by α/W (α = 1 in the standard setting) for each ACK, and thus increases by a constant α/b per round-trip time (RTT) if all packets are acknowledged within an RTT, where b is the number of packets acknowledged by each ACK (b = 1 for original TCP, and b = 2 for delayed ACK [3]). On the other hand, W is decreased by a fixed proportion βW (β = 1/2 in the standard setting) once packet loss is detected in the last RTT. With this algorithm, senders probe the network for spare bandwidth by cautiously increasing their send rates, and sharply reduce their send rates when congestion is detected. Along with slow start, fast recovery, and fast retransmission, TCP achieves congestion control successfully in current low-speed networks.
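The AIMD update just described can be sketched as follows (α = 1 and β = 1/2 as in the standard setting; function names are ours). Adding α/W per ACK means roughly α extra segments per RTT, while a loss removes a fixed fraction βW.

```python
ALPHA = 1.0  # additive-increase parameter (standard TCP)
BETA = 0.5   # multiplicative-decrease parameter

def on_ack(w):
    """Increase W by ALPHA/W per ACK -> roughly +ALPHA per RTT."""
    return w + ALPHA / w

def on_loss(w):
    """Decrease W by the fixed proportion BETA*W on loss."""
    return w - BETA * w

w = 10.0
for _ in range(10):   # ten ACKs, about one RTT with W = 10
    w = on_ack(w)
print(round(w, 2))    # close to 11: one RTT adds roughly ALPHA segments
print(on_loss(w))     # a loss event halves the window
```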

Several alternatives to current versions of TCP have been proposed for implementation in high-speed networks. Some, like XCP, also require modifications to router algorithms, while others modify the sender side only, like HSTCP, Scalable TCP, TCP Vegas, and Compound TCP. Although each of these has shown advantages over standard TCP in some aspects, none has yet provided convincing evidence of being overwhelmingly better than standard TCP and suitable for general deployment.

In this paper, we introduce the TCP-Illinois algorithm, which uses packet loss information as the primary congestion signal to determine the direction of window size change, and queuing delay information as the secondary congestion signal to adjust the pace of that change (the amount of window size increment or decrement). We then show how TCP-Illinois can be configured in Linux to maximize receiver throughput, by downloading a file and using Wireshark to analyze the packets.


TCP-Illinois uses the AIMD idea, but its increase and decrease parameters are based on estimates of the buffer size and the packet queuing delay.

To achieve better throughput in the network, we need to know whether congestion is imminent, and the only information that indicates approaching congestion is queuing delay.

The key idea is the following: when the average queuing delay da is small, the sender assumes that congestion is not imminent and sets a large α and a small β; when da is large, the sender assumes that congestion is imminent and sets a small α and a large β. As a result, α = f1(da) and β = f2(da), where f1(·) is a decreasing function and f2(·) is an increasing function.

TCP-Illinois is thus a special form of AIMD in which the parameters are given by the two functions f1(·) and f2(·).

In the congestion avoidance phase, the sender measures the RTT T for each acknowledgement and averages the RTT measurements over the last W acknowledgements (one RTT interval) to derive the average RTT Ta. The sender records the maximum and minimum average RTT ever seen as Tmax and Tmin, respectively, and computes the maximum average queuing delay dm = Tmax − Tmin and the current average queuing delay da = Ta − Tmin.
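The per-RTT bookkeeping above can be sketched like this (class and variable names are illustrative, not taken from the kernel implementation):

```python
class DelayEstimator:
    """Sketch of the RTT bookkeeping described above: track the
    average RTT per RTT interval, remember the minimum and maximum
    averages seen, and derive d_m = T_max - T_min (maximum queuing
    delay) and d_a = T_a - T_min (current average queuing delay)."""

    def __init__(self):
        self.t_min = float("inf")
        self.t_max = 0.0

    def update(self, rtt_samples):
        """Process one RTT interval's worth of RTT measurements."""
        t_avg = sum(rtt_samples) / len(rtt_samples)
        self.t_min = min(self.t_min, t_avg)
        self.t_max = max(self.t_max, t_avg)
        d_m = self.t_max - self.t_min  # maximum queuing delay
        d_a = t_avg - self.t_min       # current average queuing delay
        return d_a, d_m

est = DelayEstimator()
est.update([0.100, 0.102, 0.098])             # baseline RTTs, ~100 ms
d_a, d_m = est.update([0.150, 0.160, 0.140])  # queue building up
print(round(d_a, 3), round(d_m, 3))
```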

The sender picks the following parameters: 0 < αmin ≤ 1 ≤ αmax, 0 < βmin ≤ βmax ≤ 1/2, Wthresh > 0, 0 ≤ h1 < 1, 0 ≤ h2 ≤ h3 ≤ 1. It sets the delay thresholds di = hi dm (i = 1, 2, 3), computes the curve constants ki (i = 1, 2, 3, 4), and from these computes α and β. The window is then updated as follows:

W ← W + α/W for each ACK.

W ← W − βW, if in the last RTT packet loss is detected through triple duplicate ACKs.

Here α = f1(da) is a decreasing function of the average queuing delay and β = f2(da) is an increasing function of it, as described above.

Once there is a timeout, the sender sets the slow-start threshold to W/2, enters the slow-start phase, and resets α = 1 and β = 1/2; the α and β values remain unchanged until one RTT after the slow-start phase ends.
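One way to realize the curves f1 and f2 with the continuity the text requires can be sketched as follows. This is only an illustration under stated assumptions: the parameter defaults below (αmin, αmax, βmin, βmax, h1–h3) are example choices, not the kernel's values, and f1 is taken to be a hyperbola that falls continuously from αmax at d1 to αmin at dm, while f2 rises linearly from βmin at d2 to βmax at d3.

```python
def illinois_params(d_a, d_m,
                    a_min=0.3, a_max=10.0,
                    b_min=0.125, b_max=0.5,
                    h1=0.01, h2=0.1, h3=0.8):
    """Compute (alpha, beta) from the current average queuing delay d_a
    and the maximum queuing delay d_m: alpha = f1(d_a) is decreasing,
    beta = f2(d_a) is increasing. Defaults are illustrative only."""
    d1, d2, d3 = h1 * d_m, h2 * d_m, h3 * d_m

    # f1: alpha_max on [0, d1], then a continuous hyperbolic decrease
    # that reaches alpha_min at d_a = d_m.
    k1 = (d_m - d1) * a_min * a_max / (a_max - a_min)
    k2 = k1 / a_max - d1
    alpha = a_max if d_a <= d1 else k1 / (k2 + d_a)

    # f2: beta_min below d2, linear increase up to beta_max at d3.
    k4 = (b_max - b_min) / (d3 - d2)
    k3 = b_min - k4 * d2
    if d_a <= d2:
        beta = b_min
    elif d_a < d3:
        beta = k3 + k4 * d_a
    else:
        beta = b_max
    return alpha, beta

# Small queuing delay -> aggressive increase, gentle decrease.
print(illinois_params(d_a=0.0005, d_m=0.1))   # (10.0, 0.125)
# Large queuing delay -> cautious increase, sharp decrease.
print(illinois_params(d_a=0.09, d_m=0.1))
```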


These experiments on tuning the Illinois congestion control algorithm were performed on Linux kernel 2.6.

During these experiments, a file of size 100 MB is downloaded from a Web site.

As the file is downloaded, different graphs are obtained, giving valuable information about link delay, slow start, link bandwidth, and window size. The throughput graph gives the overall performance of the link. For the first file download we did not change any parameters.

We then tune the parameters by increasing the receiver buffer size, doubling it. The command used to increase the buffer size is sudo sysctl -w net.ipv4.tcp_rmem. By increasing the receive buffer size, the receiver can hold more packets without yet acknowledging them. The receive buffer also determines the receiver's advertised window.

To obtain maximum throughput we also have to make changes on the sender side. We use the command sudo sysctl -w net.ipv4.tcp_wmem and double the present value of the send buffer. The send buffer must be increased as well, because it holds unacknowledged packets for possible retransmission.
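Each of these sysctls holds a triple of min/default/max byte values, so "doubling" means doubling each field. The arithmetic can be sketched as follows (the starting triple below is just an example value, not our machine's actual setting):

```python
def double_buffer(sysctl_value):
    """Double each field of a 'min default max' sysctl triple."""
    return " ".join(str(2 * int(v)) for v in sysctl_value.split())

# Example: a typical tcp_rmem triple before tuning.
rmem = "4096 87380 6291456"
print(double_buffer(rmem))  # 8192 174760 12582912
```

The resulting string would then be written back with, e.g., sudo sysctl -w net.ipv4.tcp_rmem="8192 174760 12582912".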

IV. Analysis Results

We increased the receiver buffer size to three times its present value, while the sender buffer size was doubled. With more packets held in the receiver buffer, the bandwidth available for downloading the file increases, as can be observed in the figures captured with the help of Wireshark.

Before tuning, the time-sequence graph shown in Figure 1 does not show constant output: the line is not straight. After increasing the buffer size, the line in the time-sequence graph is straight. The irregularities in the throughput graph are due to delayed packets, which force slow start to begin again; this slow-start behavior can be observed clearly in Figure 3.

Since a wireless Internet service was used, low signal strength also affected the throughput of the system.


We can conclude that by increasing the receiver buffer size we obtained more bandwidth, and the irregularities in the time-sequence graph were also removed, as shown in Figure 2.


  1. V. Jacobson. Modified TCP Congestion Avoidance Algorithm.
  2. M. Allman, V. Paxson, and W. Stevens. TCP Congestion Control, RFC 2581.
  3. W. Stevens. TCP/IP Illustrated, Vol. 1: The Protocols. Addison-Wesley.
  4. S. Floyd. HighSpeed TCP for Large Congestion Windows. Internet