The demand for fast transfer of large volumes of data, and for the network infrastructure to carry it, is ever increasing. However, the dominant transport protocol of today, TCP, does not meet this demand because it favors reliability over timeliness and fails to fully utilize network capacity due to the limitations of its conservative congestion control algorithm. TCP's slow response in fast long-distance networks leaves sizeable bandwidth unused. A large variety of TCP variants have been proposed to improve connection throughput by adopting more aggressive congestion control algorithms.
Moving bulk data quickly over high-speed networks is a requirement for many applications, which in turn require high-bandwidth links between network nodes. To maintain the stability of the Internet, all applications should be subject to congestion control. There has been growing interest in studying Internet congestion control ever since the first congestion collapse occurred.
Classification of congestion control algorithms
Many algorithms have been proposed in the literature to allocate the available network resources in a fair manner among the competing users, without overloading the network. The main idea behind all these algorithms is more or less the same: each user measures some feedback signal, such as packet loss or queueing delay, and accordingly adapts its transmission rate.
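The feedback loop described above — measure a congestion signal, then adapt the sending rate — is classically realized as additive-increase/multiplicative-decrease (AIMD). The sketch below is illustrative only; the window sizes, increase/decrease constants, and the single loss event are assumed values, not taken from any particular TCP implementation.

```python
# Minimal AIMD sketch: grow the congestion window by a constant per RTT,
# halve it when the feedback signal reports a loss.

def aimd_step(cwnd, loss_detected, increase=1.0, decrease=0.5):
    """Return the next congestion window given a loss/no-loss signal."""
    if loss_detected:
        return max(1.0, cwnd * decrease)   # back off multiplicatively
    return cwnd + increase                 # probe for bandwidth additively

cwnd = 1.0
trace = []
for rtt in range(10):
    loss = (rtt == 6)            # assume a single loss event at RTT 6
    cwnd = aimd_step(cwnd, loss)
    trace.append(cwnd)

print(trace)   # sawtooth: climbs to 7.0, halves to 3.5, climbs again
```

The resulting sawtooth — linear growth punctuated by sharp backoffs — is the characteristic shape of loss-based congestion control.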
TCP congestion control comes in several flavors: loss-based, high-speed algorithms that use packet loss as an indication of congestion, and delay-based algorithms that emphasize packet delay rather than packet loss as the signal for determining the sending rate. Some efforts combine features of loss-based and delay-based algorithms to achieve fair bandwidth allocation among flows.
There are many ways to classify congestion control algorithms:
By the type and amount of feedback received from the network: Loss; delay; single-bit or multi-bit explicit signals
By incremental deployability on the current Internet: Only sender needs modification; sender and receiver need modification; only router needs modification; sender, receiver and routers need modification.
By the aspect of performance it aims to improve: high bandwidth-delay product networks; lossy links; fairness; advantage to short flows; variable-rate links
By the fairness criterion it uses: max-min, proportional, "minimum potential delay"
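To make the first fairness criterion in the list concrete, a max-min fair allocation can be computed by "water-filling": satisfy the smallest demands first and split the leftover capacity equally among the still-unsatisfied flows. The capacities and demands below are made-up example values.

```python
# Water-filling sketch of max-min fair allocation.

def max_min_fair(capacity, demands):
    """Allocate capacity so no flow can gain without shrinking a
    smaller-or-equal allocation of another flow."""
    n = len(demands)
    alloc = [0.0] * n
    remaining = float(capacity)
    unsat = sorted(range(n), key=lambda i: demands[i])  # smallest demand first
    while unsat:
        share = remaining / len(unsat)
        i = unsat[0]
        if demands[i] <= share:
            alloc[i] = float(demands[i])   # small demand fully satisfied
            remaining -= demands[i]
            unsat.pop(0)
        else:
            for j in unsat:                # everyone left gets an equal share
                alloc[j] = share
            break
    return alloc

print(max_min_fair(10.0, [2.0, 8.0, 10.0]))   # [2.0, 4.0, 4.0]
```

Here the small flow's demand of 2 is met in full, and the remaining 8 units of capacity are split equally between the two larger flows.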
The prevention of network congestion and collapse requires two major components:
A mechanism in routers to reorder or drop packets under overload,
End-to-end flow control mechanisms designed into the end points which respond to congestion and behave appropriately.
Practical network congestion avoidance
Implementations of connection-oriented protocols, such as the widely used TCP, generally watch for packet errors, losses, or delays in order to adjust the transmission rate. There are many different network congestion avoidance processes, since a number of different trade-offs are available.
TCP/IP congestion avoidance
The TCP congestion avoidance algorithm is the primary basis for congestion control in the Internet.
TCP is a well-developed, extensively used, and widely available Internet transport protocol. It is fast, efficient, and responsive to network congestion conditions.
Problems occur when many concurrent TCP flows experience tail drops at a port's queue buffer. Then TCP's automatic congestion avoidance is not enough: all flows that experience the tail drop back off and re-enter slow start at the same moment. This is called TCP global synchronization.
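Global synchronization can be illustrated with a toy simulation: several AIMD flows share one tail-drop buffer, so when the buffer overflows every flow sees a loss in the same round and halves its window together. All parameters (flow count, buffer capacity, round count) are assumed illustrative values.

```python
# Toy illustration of TCP global synchronization under a shared
# tail-drop queue: all flows lose, and halve, at the same moment.

def simulate(flows=3, capacity=30, rounds=12):
    cwnd = [1.0] * flows
    history = []
    for _ in range(rounds):
        cwnd = [w + 1.0 for w in cwnd]       # additive increase per flow
        if sum(cwnd) > capacity:             # buffer overflow: tail drop
            cwnd = [w / 2.0 for w in cwnd]   # ...hits every flow at once
        history.append(tuple(cwnd))
    return history

for row in simulate():
    print(row)
```

Because every flow backs off simultaneously, aggregate demand oscillates in lockstep and the link sits underutilized right after each synchronized backoff — the inefficiency that AQM (next section) is designed to break up.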
Active Queue Management (AQM)
Fewer packets will be dropped with Active Queue Management (AQM).
The link utilization will increase because less TCP global synchronization will occur.
By keeping the average queue size small, queue management will reduce the delays and jitter seen by flows.
The connection bandwidth will be more equally shared among connection oriented flows, even without flow-based RED or WRED.
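The RED scheme mentioned above avoids synchronized tail drops by dropping (or marking) packets probabilistically before the queue is full. A sketch of its drop-probability rule follows; the thresholds and maximum probability are illustrative assumptions, not recommended operational settings.

```python
# Sketch of the RED (Random Early Detection) drop-probability rule
# commonly used in AQM. Parameter values here are assumed examples.

def red_drop_probability(avg_queue, min_th=5.0, max_th=15.0, max_p=0.1):
    """Probability of dropping/marking an arriving packet, given the
    exponentially weighted average queue size."""
    if avg_queue < min_th:
        return 0.0          # short queue: never drop
    if avg_queue >= max_th:
        return 1.0          # long queue: always drop
    # linear ramp from 0 to max_p between the two thresholds
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```

Because drops are spread randomly over time and over flows, only a few flows back off at any instant, which is why less global synchronization occurs and the average queue stays small.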
Internetworking is the practice of connecting a computer network with other networks through the use of gateways that provide a common method of routing information packets between the networks. The resulting system of interconnected networks is called an internetwork, or simply an internet.
The Internet is a computer network consisting of a worldwide system of interconnected computer networks that use the TCP/IP protocols to facilitate data transmission and exchange.
Interconnection of networks
Internetworking started as a way to connect disparate types of networking technology, but it became widespread through the growing need to connect two or more local area networks via some sort of wide area network. Today the definition of an internetwork also includes the connection of other types of computer networks, such as personal area networks.
The network elements used to connect individual networks in the ARPANET, the predecessor of the Internet, were originally called gateways, but the term has been deprecated in this context, because of possible confusion with functionally different devices. Today the interconnecting gateways are called Internet routers.
Another type of interconnection of networks often occurs within enterprises at the Link Layer of the networking model, i.e. at the hardware-centric layer below the level of the TCP/IP logical interfaces. Such interconnection is accomplished with network bridges and network switches. This is sometimes incorrectly termed internetworking, but the resulting system is simply a larger, single sub-network, and no internetworking protocol, such as the Internet Protocol, is required to traverse these devices. However, a single computer network may be converted into an internetwork by dividing the network into segments and logically dividing the segment traffic with routers.

The Internet Protocol is designed to provide an unreliable (not guaranteed) packet service across the network. The architecture avoids intermediate network elements maintaining any state of the network; instead, this function is assigned to the endpoints of each communication session. To transfer data reliably, applications must use an appropriate Transport Layer protocol, such as the Transmission Control Protocol (TCP), which provides a reliable stream. Some applications use a simpler, connectionless transport protocol, the User Datagram Protocol (UDP), for tasks that do not require reliable delivery or that require real-time service, such as video streaming.
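The reliable-stream versus connectionless-datagram distinction above shows up directly in the standard sockets API: the same address family offers both service types, selected by the socket type. A minimal sketch using Python's standard socket module:

```python
import socket

# TCP: connection-oriented, reliable byte stream (SOCK_STREAM).
tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# UDP: connectionless datagrams (SOCK_DGRAM); no delivery guarantee,
# but lower latency -- suited to real-time traffic such as video.
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

print(tcp_sock.type, udp_sock.type)
tcp_sock.close()
udp_sock.close()
```

An application chooses between the two based on whether it needs the Transport Layer to recover lost packets (TCP) or prefers timeliness over completeness (UDP).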
Two architectural models are commonly used to describe the protocols and methods used in internetworking.
The Open Systems Interconnection (OSI) reference model was developed under the auspices of the International Organization for Standardization (ISO) and provides a rigorous description for layering protocol functions from the underlying hardware to the software interface concepts in user applications. Internetworking is implemented in the Network Layer (Layer 3) of the model.
The Internet Protocol Suite, also called the TCP/IP model of the Internet was not designed to conform to the OSI model and does not refer to it in any of the normative specifications in Requests for comment and internet standards. Despite similar appearance as a layered model, it uses a much less rigorous, loosely defined architecture that concerns itself only with the aspects of logical networking. It does not discuss hardware-specific low-level interfaces, and assumes availability of a Link Layer interface to the local network link to which the host is connected. Internetworking is facilitated by the protocols of its Internet Layer.
Internetworking is the art and science of connecting individual local area networks (LANs) to create wide area networks (WANs), and connecting WANs to form even larger WANs. It can be extremely complex because it generally involves connecting networks that use different protocols, and it is accomplished with routers, bridges, and gateways.
A network address serves as a unique identifier for a computer on a network. When set up correctly, computers can determine the addresses of other computers on the network and use these addresses to send messages to each other.
One of the best known forms of network addressing is the Internet Protocol (IP) address. IPv4 addresses consist of four bytes (32 bits) that uniquely identify each computer on the public Internet.
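The four-byte structure of an IPv4 address is easy to see with Python's standard ipaddress module; the example address is an arbitrary private-range value chosen for illustration.

```python
import ipaddress

# An IPv4 address is four bytes (32 bits); the dotted-quad notation
# is just a human-readable rendering of those bytes.
addr = ipaddress.IPv4Address("192.168.1.10")

packed = addr.packed   # the raw 4-byte representation
as_int = int(addr)     # the same address as one 32-bit integer

print(list(packed))    # [192, 168, 1, 10]
print(as_int)          # 3232235786
```

The integer form makes it clear why exactly 2^32 distinct IPv4 addresses exist, and why address exhaustion motivated the much larger IPv6 space.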
Address mapping translates network addresses from one format to another. This methodology permits different protocols to operate interchangeably.
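A familiar instance of address mapping is ARP-style resolution, which translates a network-layer (IP) address into a link-layer (MAC) address. The toy lookup table below uses made-up example entries; a real stack would populate it by broadcasting ARP requests on the local link.

```python
# Toy sketch of ARP-style address mapping: IP address -> MAC address.
# Table entries are fabricated example values for illustration only.

arp_table = {
    "192.168.1.10": "aa:bb:cc:dd:ee:01",
    "192.168.1.11": "aa:bb:cc:dd:ee:02",
}

def resolve(ip):
    """Return the MAC address mapped to an IP, or None if unknown
    (a real implementation would issue an ARP request here)."""
    return arp_table.get(ip)

print(resolve("192.168.1.10"))   # aa:bb:cc:dd:ee:01
print(resolve("10.0.0.1"))       # None -- no mapping known
```

Keeping the two address formats decoupled in this way is what lets IP run unchanged over Ethernet, Wi-Fi, or any other link technology with its own native addressing.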