Network Simulation Modeling And Analysis Of An Ethernet-LAN Using OPNET
A data network consists of two or more nodes (computers, workstations, printers, servers, and other devices) that are connected to each other and transmit or receive data. Data networks can be categorized according to their geographic coverage, or scale: Local Area Networks (LAN), Metropolitan Area Networks (MAN), Wide Area Networks (WAN), etc.
Local Area Network (LAN)
In a Local Area Network the first node can be up to about 3 km from the last one. These networks usually cover home, office or campus environments (LANs usually do not extend beyond a single building). The LAN data transfer rate depends on the technology used and can be 10 Mbps, 100 Mbps, 155 Mbps or 1 Gbps. A node (e.g. a computer) must have a network card (usually an Ethernet card) to establish a connection with the network. The network interface card (NIC) is placed into a slot and is connected to the medium (wire or optical fiber).
Local area networks are low-budget systems because their installation, implementation and maintenance have low cost. Most LANs are built from inexpensive hardware such as Ethernet cables (RJ-45 connectors), network adapters, and hubs.
“Ethernet is a local area network (LAN) technology that allows you to connect a variety of computers together with a low-cost and extremely flexible network system.” (Charles E. Spurgeon, p. xi) It is covered by the IEEE 802.3 standard. Ethernet has three data rates over optical fiber and twisted-pair cables: 10 Mbps (10Base-T Ethernet), 100 Mbps (Fast Ethernet) and 1000 Mbps (Gigabit Ethernet). The latest Ethernet standard is Ten-Gigabit Ethernet, which operates at 10 Gbps.
Ethernet is the most widely used LAN technology because it has the following characteristics:
· Is easy to understand, implement, manage, and maintain
· Allows low-cost network implementations
· Provides extensive topological flexibility for network installation
· Guarantees successful interconnection and operation of standards-compliant products, regardless of manufacturer  (CISCO)
“The term physical topology refers to the way in which a network is laid out physically: Two or more devices connect to a link; two or more links form a topology. The topology of a network is the geometric representation of the relationship of all the links and linking devices (usually called nodes) to one another. There are four basic topologies possible: mesh, star, bus, and ring.”  (Behrouz A.Forouzan p.8-9)
The most common LAN topologies are bus, star and ring.
Each network can connect with another network, and so on. This type of connection is called internetworking, or an internet. The best-known communication protocols are the Transmission Control Protocol (TCP) and the Internet Protocol (IP); together they allow one network to communicate with another. The TCP/IP protocol suite was developed before the OSI model.
The TCP/IP protocol suite consists of four or five layers, whereas the OSI model consists of seven. RFC 1122, which specifies the Internet protocol suite, is not explicit about the number of layers, so opinions vary on whether the physical layer should be represented or not.
Each upper-level protocol of TCP/IP is supported by one or more lower-level protocols.
The TCP protocol is found at the transport layer. TCP is used to deliver data across IP networks and is at the core of the internet. “UDP and TCP are transport level protocols responsible for delivery of a message from a process (running program) to another process.” (Behrouz A. Forouzan, p. 44)
TCP is a reliable, connection-oriented stream transport protocol. This means that a connection must be established between both ends before transmitting. Its data units are called segments because TCP divides a stream of data into smaller units. Each segment carries a sequence number so the data can be reordered after receipt. TCP also retransmits any segments that might be lost and reassembles them into the correct order. These operations make TCP slower than UDP.
UDP stands for User Datagram Protocol and, like TCP, is found at the transport layer. It is also responsible for delivering a message from one process (running program) to another. The main difference is that UDP offers a connectionless service and is unreliable.
“It does not add anything to the services of IP except to provide process-to-process communication instead of host-to-host communication. Also, it performs very limited error checking.” (Behrouz A. Forouzan, p. 709)
These are the main reasons why UDP is faster than the TCP protocol.
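The contrast between TCP's connection establishment and UDP's connectionless delivery can be sketched with a small loopback example. This is illustrative only and not part of the OPNET simulation; all names are my own:

```python
import socket
import threading

# Illustrative loopback sketch (not part of the OPNET model) contrasting
# UDP's independent datagrams with TCP's connection establishment.

def udp_echo_once(srv):
    # UDP server: no handshake; it simply waits for one independent datagram.
    data, addr = srv.recvfrom(2048)
    srv.sendto(data, addr)
    srv.close()

def tcp_echo_once(srv):
    # TCP server: accept() completes the three-way handshake before any data.
    conn, _ = srv.accept()
    conn.sendall(conn.recv(2048))
    conn.close()
    srv.close()

def run_demo():
    results = {}

    # UDP: the client just sends a datagram; no prior connection exists.
    usrv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    usrv.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
    uport = usrv.getsockname()[1]
    t = threading.Thread(target=udp_echo_once, args=(usrv,))
    t.start()
    uc = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    uc.sendto(b"hello", ("127.0.0.1", uport))
    results["udp"], _ = uc.recvfrom(2048)
    uc.close()
    t.join()

    # TCP: connect() must establish the connection before sendall().
    tsrv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tsrv.bind(("127.0.0.1", 0))
    tport = tsrv.getsockname()[1]
    tsrv.listen(1)
    t = threading.Thread(target=tcp_echo_once, args=(tsrv,))
    t.start()
    tc = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tc.connect(("127.0.0.1", tport))     # handshake happens here
    tc.sendall(b"hello")
    results["tcp"] = tc.recv(2048)
    tc.close()
    t.join()

    return results
```

Both transports echo the same payload here; the difference is that the TCP client cannot send anything until `connect()` succeeds, while the UDP client's `sendto()` carries no such guarantee.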
Despite its disadvantages, UDP has some advantages too. It is a very simple protocol with minimal overhead. UDP can be used by a process that wants to send a small message and does not care much about reliability. Sending a small message with UDP takes much less time, since UDP is faster than TCP for the reasons mentioned above.
UDP packets are called user datagrams. As mentioned before, UDP is connectionless, which means that UDP datagrams are independent of each other and are not numbered. “Also, there is no connection establishment and no connection termination, as is the case for TCP. This means that each user datagram can travel on a different path.” (Behrouz A. Forouzan, p. 713)
UDP is used by TFTP (which has its own internal flow and error control mechanisms), is suitable for multicasting, is used by SNMP (management processes), and is used by some route-updating protocols (e.g. the Routing Information Protocol, RIP). (Forouzan 2007)
Ethernet Network With 2 Stations
Perform the simulation runs with an Ethernet network having two stations using the UDP protocol and derive the three throughput-load curves corresponding to the following packet sizes: L1 = 46, L2 = 500 and L3 = 1500. Repeat the same simulations using the TCP protocol.
Note: In order to change the offered load of the network you need to:
1. Double click on one of the Ethernet stations.
2. Right click and edit the attributes of the bus_rxX and bus_txX (where X is either 0 or 1 for the initial network).
3. Change the Data Rate (bps) of the channel to the required offered load
The required offered load can be calculated using:
Offered Load = (Number of Stations × Length of data field in bits) / Average Interarrival Time
The current task simulates a LAN with two stations using the TCP protocol. Three different packet sizes are transmitted, each with a different offered load. From the offered load equation it can be seen that the offered load increases as the packet size increases, since the average interarrival time and the number of stations are constant.
Packet size L1 = 46: Offered Load = (2 × 8 × 46) / 0.2 = 3680 bps
Packet size L2 = 500: Offered Load = (2 × 8 × 500) / 0.2 = 40000 bps
Packet size L3 = 1500: Offered Load = (2 × 8 × 1500) / 0.2 = 120000 bps
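The three offered-load values above can be checked with a short helper function (a sketch; the variable names are my own):

```python
def offered_load_bps(stations, packet_bytes, interarrival_s):
    # Offered load = number of stations * packet size in bits / avg interarrival time.
    return stations * 8 * packet_bytes / interarrival_s

# Two stations, 0.2 s average interarrival time, as in the task above.
loads = {size: offered_load_bps(2, size, 0.2) for size in (46, 500, 1500)}
# → {46: 3680.0, 500: 40000.0, 1500: 120000.0}
```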
In the above figure, the blue line indicates the L1 = 46 packet size, the red line indicates L2 = 500, and the green line indicates L3 = 1500.
Throughput is a measure of how fast data can be sent through a network. It can be clearly seen that throughput increases as the packet size increases.
As mentioned above, TCP has the ability to retransmit lost segments and to recover from errors, which makes TCP slower. From the above figure it can be seen that each of the three curves has a high peak. This happens because the packets were transmitted at a given data rate (a different rate for L1, L2 and L3) and the TCP protocol detected congestion or a collision (packet loss or an error). In order to retransmit the packet, it continues the transmission with lower throughput (it slows down the transmission rate).
Finally, the slow start visible in the above figures is due to the TCP protocol: TCP uses slow start to avoid congestion collapse.
From the utilization figure it can be seen that small packets use the channel bandwidth less efficiently than large packets. The blue line, indicating the L1 = 46 packet size, reaches a peak utilization of 43%, higher than the green line indicating L3 = 1500. This happens because small packets generate more acknowledgements: a large number of small packets is transmitted over time, so more bandwidth is consumed by overhead. Small packets therefore do not use the channel capacity efficiently.
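The inefficiency of small packets can also be estimated from the fixed per-frame overhead of standard Ethernet framing (preamble + SFD 8 bytes, MAC header 14, FCS 4, interframe gap 12, i.e. 38 bytes per frame). This back-of-the-envelope sketch is independent of the OPNET model and ignores acknowledgement traffic:

```python
# Fixed per-frame cost of standard Ethernet framing (bytes):
# preamble+SFD (8), MAC header (14), FCS (4), interframe gap (12).
ETHERNET_OVERHEAD_BYTES = 8 + 14 + 4 + 12  # = 38

def frame_efficiency(payload_bytes):
    # Fraction of wire time carrying payload rather than framing overhead.
    return payload_bytes / (payload_bytes + ETHERNET_OVERHEAD_BYTES)

eff_small = frame_efficiency(46)    # ~0.55: nearly half the wire time is overhead
eff_large = frame_efficiency(1500)  # ~0.98: overhead is only a few percent
```

This matches the qualitative observation above: a stream of minimum-size frames spends almost half its channel time on framing, while maximum-size frames waste only a few percent.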
Ethernet Network With 100 Stations
Perform the simulation for an Ethernet network having 100 stations for the TCP protocol using the same packet sizes, i.e. L1 = 46, L2 = 500, L3 = 1500. (Hint: use the Select Similar Nodes option by right-clicking any node.)
The current task simulates a LAN with 100 stations using the TCP protocol. Three different packet sizes are transmitted, each with a different offered load. As before, the offered load increases as the packet size increases, since the average interarrival time and the number of stations are constant.
Packet size L1 = 46: Offered Load = (100 × 8 × 46) / 0.2 = 184000 bps
Packet size L2 = 500: Offered Load = (100 × 8 × 500) / 0.2 = 2000000 bps
Packet size L3 = 1500: Offered Load = (100 × 8 × 1500) / 0.2 = 6000000 bps
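As with the two-station case, these values follow directly from the offered-load equation; a quick check (a sketch with my own variable names):

```python
def offered_load_bps(stations, packet_bytes, interarrival_s):
    # Offered load = number of stations * packet size in bits / avg interarrival time.
    return stations * 8 * packet_bytes / interarrival_s

# 100 stations, same 0.2 s average interarrival time as in the previous task.
loads_100 = {size: offered_load_bps(100, size, 0.2) for size in (46, 500, 1500)}
# → {46: 184000.0, 500: 2000000.0, 1500: 6000000.0}
```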
For the current task the Ethernet network has 100 stations; this is the only difference from the previous task. The offered load is therefore much larger for each packet size (L1, L2, L3) than in task 1.1, and the throughput is correspondingly higher. As the number of stations increases, more speed and more bandwidth are needed, so the data rate must be higher.
As in the previous case, it can be observed that the channel capacity is used more efficiently by large packets and less efficiently by small packets.
For an Ethernet model with 2 stations and a message length (size of data field in the Ethernet frame) of:
• L1 = 46 bytes (minimum length)
• L2 = 500 bytes (mean length)
• L3 = 1500 bytes (maximum length)
a) Calculate interarrival times for the following loads (use equation in 3.1)
b) Find the throughput curves for the above scenarios. Three simulations with three different interarrival times must be executed to derive each of the curves for the different frame lengths.
c) Compare all the throughput graphs obtained in part b commenting on performance. Make sure you use the simulation times in column 3.
Average Interarrival Time = (Number of Stations × Length of data field in bits) / Offered Load
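The interarrival times used in the runs below follow from this inverted relation; a quick sketch (the names are my own):

```python
def interarrival_s(stations, packet_bytes, offered_load_bps):
    # Average interarrival time = stations * packet size in bits / offered load.
    return stations * 8 * packet_bytes / offered_load_bps

# Two stations, the three frame lengths, and the three offered loads of part a.
table = {
    size: [interarrival_s(2, size, load) for load in (1e6, 5e6, 10e6)]
    for size in (46, 500, 1500)
}
# e.g. table[46]  → [0.000736, 0.0001472, 0.0000736] (seconds)
#      table[500] → [0.008, 0.0016, 0.0008]
```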
Answer 2b & c
Packet size=46 bytes, Offered Load 1, 5, 10 Mbits/s
In this case packets of 46 bytes are sent through the Ethernet network. The blue line corresponds to an offered load of 1 Mbit/s and an average interarrival time of 0.000736 s; the red line to 5 Mbits/s and 0.0001472 s; and the green line to 10 Mbits/s and 0.0000736 s.
Throughput can be analyzed for three different values of offered load in the current simulation. Since the packet size is constant, it can be seen that throughput increases as the offered load increases; throughput and offered load therefore have a linear relation.
For an offered load of 1 Mbit/s the average interarrival time is 0.000736 s, while for 10 Mbits/s it is 0.0000736 s. The average interarrival time decreases as the offered load increases, which makes the network work faster because packets can be sent sooner.
Utilization shows that channel capacity is used efficiently in the current simulation.
In this case packets of 500 bytes are sent through the Ethernet network. The blue line corresponds to an offered load of 1 Mbit/s and an average interarrival time of 0.008 s; the red line to 5 Mbits/s and 0.0016 s; and the green line to 10 Mbits/s and 0.0008 s.
It can be seen that throughput is lower than in the previous case. This happens because the packet size is larger, which makes the network work less efficiently. The utilization figure confirms this, since its peak is lower than in the previous case.
In this case packets of 1500 bytes are sent through the Ethernet network. The blue line corresponds to an offered load of 1 Mbit/s and an average interarrival time of 0.024 s; the red line to 5 Mbits/s and 0.0048 s; and the green line to 10 Mbits/s and 0.0024 s.
Throughput is lower than in the previous case, and even lower than in the first case. This happens because the packet size is larger than in the previous cases, which makes the network work less efficiently. The utilization figure confirms this, since its peak is lower than in the previous case and much lower than in the first case.
In this assignment a local area Ethernet network was simulated for different scenarios. The network performance was studied and some useful results and observations were obtained.
Performance is often evaluated by the throughput and delay network metrics. Usually higher throughput and lower delay are desired, but this is not guaranteed: if a large amount of data is sent into the network, throughput increases but so does delay, because of traffic congestion in the network.
Throughput and bandwidth are not the same thing. Bandwidth is a potential measurement of a link, while throughput is an actual measurement of how fast data can be sent. The simulations in tasks 1 and 2 also showed that throughput and offered load have a linear relation.
The TCP protocol is slower than the UDP protocol because TCP detects errors and retransmits packets. It also uses slow start so that it can avoid the congestion collapse that stations (users) might cause.
Small packets travel through the network faster than large packets, as can be seen in task 2: the interarrival time is much shorter for small packets, meaning that more packets can be sent in a given time.
1. Charles E. Spurgeon (February 2000) - Ethernet: The definitive guide, (First Edition), Published by O'Reilly & Associates, Printed in the United States of America
2. EdrawSoft, Local Area Network Technologies Retrieved 30th March 2009 from http://www.edrawsoft.com/Local-Area-Network.php
3. Behrouz A. Forouzan (2007), Data Communications and Networking (Fourth Edition), DeAnza College, Published by McGraw-Hill, New York
4. CISCO, Internetworking Technology Handbook - Internet Protocols (IP), Retrieved 31st March 2009 from http://www.cisco.com/en/US/docs/internetworking/technology/handbook/Internet-Protocols.html#wp3632
5. ADBH Web, T171 TMA4 - The importance of TCP/IP in the development of the Internet, Retrieved 31st of March 2009 from http://www.adbh.co.uk/t171/tma4.php
6. Dr. Evangelos V.Kopsachilis (April 2005) - Technologies of Transport, Processing and Depiction of Data, (Version 1.00), Theory Elements of the “Transport and Depiction of Data” course, Alexandrium Technological Institute of Thessaloniki, Department of Automatism
7. William Stallings (2005), Wireless Communications & Networks (Second Edition), Protocols and the TCP/IP Suite (Chapter 4, p. 69-93), Published by Pearson Prentice Hall
8. TrainSignalTraining, Networking Basics: TCP,UDP,TCP/IP & OSI models, Retrieved 31st of March 2009 from http://www.trainsignaltraining.com/networking-basics-tcp-udp-tcpip-osi-models/2007-10-29/
9. CISCO, Internetworking Technology Handbook - Ethernet, Retrieved 1st of April 2009 from http://www.cisco.com/en/US/docs/internetworking/technology/handbook/Ethernet.html
10. Jochen H. Schiller (2003), Mobile Communications (Second Edition), Mobile Transport Layer (Chapter 9, p. 351-372), Published by Pearson Prentice Hall