Cell Pattern With A Frequency Reuse Computer Science Essay

LAN bridging is defined as the connection of multiple physical LANs to support a single logical LAN environment. Cisco defines a bridge as "a device used to connect two separate Ethernet networks into one extended Ethernet. Bridges only forward packets between networks that are destined for the other network."

Bridges and switches primarily operate at layer 2 of the OSI model and are widely referred to as layer 2 devices. Layer 2 handles transmission errors, controls data flow and provides physical addressing. These functions are provided by various data-link layer protocols, each of which specifies how these tasks are carried out.

When networks are geographically separated into several parts, bridges can prove very useful: something is required to join the segments so that the whole network stays connected. Bridges also come in handy because certain LANs, such as Ethernet, are limited in distance. Furthermore, bridges make network administrators' jobs easier: they can control the amount of traffic sent across the network media, and no device configuration is required because a bridge is a plug-and-play device.


Undoubtedly, many possibilities and benefits are offered when bridging connections between different LANs. Unfortunately, there are also some issues involved such as:

Data Rate Mismatch

Poor Throughput

Intermittent Connectivity

Data Rate Mismatch

When bridging between LANs, connectivity issues may be encountered if the bridges are misconfigured. If the devices are configured with incorrect data rate settings, a data rate mismatch is created and the bridges fail to communicate.

For instance, when one bridge is configured with a fixed data rate and the other bridge is set to a different fixed data rate, both will attempt to transmit at their configured rates, and communication fails because of the bridge with the lower data rate setting. Where the rates can adapt and there is some form of interference, communication takes place at the highest rate that still allows data transmission.

In a similar scenario where one bridge has a high fixed data rate and the other bridge is configured to select its data rate automatically, communication will take place at the rate of the bridge with the fixed data rate. However, if some form of error requires the devices to drop below that fixed rate, the bridge with the fixed data rate cannot follow, leading to a communication failure.
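The two scenarios above can be captured in a small sketch. The function below is purely illustrative (no vendor firmware works exactly like this); "auto" stands for a bridge that adapts to its peer, while a number stands for a fixed rate in Mbps.

```python
def negotiate(rate_a, rate_b):
    """Return the rate the link settles on, or None if the bridges cannot agree."""
    if rate_a == "auto" and rate_b == "auto":
        return "highest mutually supported rate"
    if rate_a == "auto":
        return rate_b          # the auto side follows the fixed side
    if rate_b == "auto":
        return rate_a
    if rate_a == rate_b:
        return rate_a          # identical fixed rates communicate normally
    return None                # two different fixed rates: data rate mismatch

print(negotiate(54, "auto"))   # 54   - the auto side matches the fixed rate
print(negotiate(54, 11))       # None - mismatched fixed rates, communication fails
```

Note that the `None` case also covers the error scenario: once the auto side is forced below the peer's fixed rate, the link behaves as if two different fixed rates were configured.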

Bit Ordering 

Both Ethernet and Token Ring use 48-bit Media Access Control addresses. Yet, there is a difference in the internal hardware representation of these MAC addresses. Token Ring treats the first bit as a high-order bit; this is referred to as the noncanonical method. On the contrary, Ethernet uses the canonical method, as it treats the first bit as a low-order bit. Translation between noncanonical and canonical formats involves reversing the order of the bits in every byte of the address. When bridging between an Ethernet LAN and a Token Ring LAN, correct translation of addresses must take place in order for communication to occur.
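The translation is simply a per-byte bit reversal, which a short sketch can demonstrate (the sample address is arbitrary):

```python
def reverse_bits(byte):
    """Reverse the order of the 8 bits in a single byte."""
    result = 0
    for _ in range(8):
        result = (result << 1) | (byte & 1)
        byte >>= 1
    return result

def translate_mac(mac):
    """Translate a MAC address between canonical (Ethernet) and
    noncanonical (Token Ring) form by reversing the bits of each byte.
    The operation is its own inverse."""
    return ":".join(f"{reverse_bits(int(b, 16)):02x}" for b in mac.split(":"))

print(translate_mac("00:a0:c9:14:c8:29"))  # 00:05:93:28:13:94
```

Because the translation is its own inverse, applying it twice returns the original address, which is why the same logic serves frames crossing the bridge in either direction.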

Incompatible MTUs

Similarly to the previous issue, incompatible maximum transmission unit (MTU) sizes can be a problem when bridging Token Ring and Ethernet, as they support different maximum frame sizes. The default MTU of Ethernet is 1500 bytes, whereas Token Ring frames are generally larger. Bridges cannot perform fragmentation and reassembly of frames, so packets are dropped when their sizes exceed the egress MTU.
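The consequence can be sketched in a few lines. The Token Ring limit shown is the commonly cited figure for 4 Mbps rings; the check itself is the only point, since a bridge either forwards a frame whole or drops it:

```python
ETHERNET_MTU = 1500    # default Ethernet payload limit, in bytes
TOKEN_RING_MTU = 4464  # commonly cited limit for 4 Mbps Token Ring

def bridge_forward(frame_payload_len, egress_mtu):
    """A transparent bridge cannot fragment: a frame either fits
    within the egress MTU or is silently dropped."""
    return frame_payload_len <= egress_mtu

# A Token-Ring-sized frame is dropped when bridged onto Ethernet:
print(bridge_forward(4000, ETHERNET_MTU))   # False - dropped
print(bridge_forward(1400, ETHERNET_MTU))   # True  - forwarded
```

This is why hosts on the Token Ring side typically have to cap their MTU at the Ethernet limit for bridged traffic to get through.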

Q2.

Cellular Systems

Cell Pattern with a Frequency-Reuse Factor of 3


A cell is a section of the total coverage area, usually spanning from 1 to 20 miles in radius. Conceptually, a cell is represented by a hexagonal shape, though in reality its shape is based on geographic considerations. The cell site is the base station of a cell; it consists of an antenna, transceivers and other important equipment required to service the mobile units in that cell. A coverage area, also known as a service area, is the geographic area which the wireless system covers. (Tbaytel, 2012)


Neighbouring cells cannot use the same frequency set for communication because interference would be created, affecting users located close to the cell boundaries. The frequency sets available are limited; this is where frequency reuse comes in. Capacity increases in a cellular network because the same radio frequency can be used again in different areas for completely separate transmissions. A frequency reuse pattern is an assembly of N cells in which each cell uses a unique frequency set; once this pattern is repeated, the frequencies can be reused.

"The frequency reuse factor is the rate at which the same frequency can be used in the network." "In case of N sector antennas on the same base station site, each with a different direction, the base station site can serve N different sectors. A reuse pattern of N/K denotes a further division in frequency among N sector antennas per site." (scribd, n.d.)

N - the frequency reuse factor

K - the number of cells which cannot use the same frequencies

The higher the frequency reuse factor, the lower the network capacity, but the further apart cells using the same frequency set are located, resulting in lower interference. The lower the frequency reuse factor, the higher the network capacity, but the closer together cells using the same frequency set are located, resulting in higher interference. On the whole, choosing a frequency reuse factor is a trade-off between capacity and interference.
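The trade-off can be made concrete with the standard hexagonal-geometry result that co-channel cells are separated by D = R·√(3N), while each cell in an N-cell cluster gets only 1/N of the total channels. The channel count below is an arbitrary example figure:

```python
import math

def reuse_distance(cell_radius, reuse_factor):
    """Distance between nearest co-channel cell centres in a hexagonal
    layout: D = R * sqrt(3 * N). Larger N pushes interferers further away."""
    return cell_radius * math.sqrt(3 * reuse_factor)

def channels_per_cell(total_channels, reuse_factor):
    """Each cell in an N-cell cluster receives an equal share of the spectrum."""
    return total_channels // reuse_factor

print(reuse_distance(2.0, 3))     # 6.0 - co-channel cells 3 radii apart
print(channels_per_cell(120, 3))  # 40 channels per cell
print(channels_per_cell(120, 7))  # 17 channels per cell: less capacity, less interference
```

Comparing N = 3 with N = 7 shows both effects at once: the larger cluster spreads co-channel cells further apart but leaves each cell with fewer channels.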

P4.2

Q1.

TCP

Transmission Control Protocol is currently the most used transport protocol on the Internet. TCP is a byte-oriented, connection-based protocol. When a message is sent, it will be delivered unless the connection fails, thanks to the error correction and flow control TCP provides. When using TCP, messages are received in the same order in which they were sent. TCP works with stream-oriented data: data is treated as a continuous stream, so the beginning and end of a packet are indistinguishable to the application. Being stream-oriented, TCP automatically breaks data up into packets and sends them across the network. Some examples of protocols that run over TCP are HTTP (TCP port 80), SMTP (TCP port 25) and FTP (TCP port 21).
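A minimal loopback echo, using Python's standard `socket` module, shows TCP's reliable, ordered byte-stream behaviour. The reversed-echo server is just an example workload; the receive loops reflect the fact that TCP gives no guarantee about how the stream is chunked:

```python
import socket
import threading

def echo_server(server_sock):
    """Accept one connection, read the whole stream, echo it back reversed."""
    conn, _ = server_sock.accept()
    with conn:
        data = b""
        while chunk := conn.recv(1024):   # read until the client shuts down
            data += chunk
        conn.sendall(data[::-1])

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))                # port 0: let the OS pick a free port
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.create_connection(srv.getsockname())
cli.sendall(b"hello tcp")
cli.shutdown(socket.SHUT_WR)              # signal end of the outgoing stream

reply = b""
while chunk := cli.recv(1024):            # TCP may split the reply arbitrarily
    reply += chunk
print(reply)                              # b'pct olleh'
cli.close()
srv.close()
```

Every byte sent arrives, in order, exactly once; what TCP does not preserve is where one `sendall` ended and the next began, which is why both sides read in loops.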

UDP

The User Datagram Protocol is a message-oriented, connectionless protocol and is commonly used for streaming media and other operations which favour speed over reliability. UDP is faster than TCP because no flow control or error correction is present. When using UDP, packets can be duplicated, delivered out of order or not delivered at all. UDP adds little on top of the Internet Protocol, but a datagram either arrives at the destination whole or does not arrive at all; there is no in-between. Some examples of protocols that run over UDP are DNS (UDP port 53), VoIP and TFTP (UDP port 69).

Message Boundary Protection

Although Transmission Control Protocol provides a greater sense of reliability through mechanisms such as flow control and error detection, it is the User Datagram Protocol which protects the boundaries of a message. When using TCP, data is read as a byte stream without any distinguishing indications transmitted to signal segment boundaries. With UDP, however, messages must be broken into datagrams by the application before being sent; datagrams are sent individually and checked for integrity once they arrive. Furthermore, with UDP, packets have definite boundaries which are honoured when they are received, so a receiver's read operation yields an entire message as it was originally sent.

"UDP was created to solve the message boundary problem of TCP. UDP preserves data boundaries of all messages sent from an application to the network. Because UDP was specifically designed not to worry about reliable data transport, it does not need to use local buffers to keep sent or received data. Instead, each message is forwarded as a single packet as it is received from the application program. Also, each message received from the network is forwarded to the application program as a single message." Handling message boundaries is thus slightly easier with UDP, but UDP is more complicated in other respects because the application must manually check for lost packets. (vendetta, 2012)
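The boundary-preserving behaviour described above can be demonstrated over loopback (where UDP delivery is dependable in practice, though the protocol itself makes no such promise): each `sendto` arrives as exactly one `recvfrom`.

```python
import socket

# Receiver: a UDP socket bound to an OS-chosen loopback port.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
addr = recv_sock.getsockname()

# Sender: two separate datagrams, back to back.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"first", addr)
send_sock.sendto(b"second", addr)

first, _ = recv_sock.recvfrom(1024)   # one read returns one whole message
second, _ = recv_sock.recvfrom(1024)  # the next read returns the next message
print(first, second)                  # b'first' b'second'
send_sock.close()
recv_sock.close()
```

Under TCP the same two sends could arrive coalesced into a single read; UDP never merges or splits datagrams, which is exactly the boundary protection the quote describes.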


Q2.

Network Security

Network Security is not simple. There are a number of vulnerable areas spread throughout the different layers of the OSI model, with every layer involving unique security challenges. The most common attacks found at the transport layer are Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks. Some other forms of attack include:

SSL Man-in-the-Middle Attack

Land Attack

TCP Connection Hijacking

UDP Flood Attack

Port Scan Attack

However, the screenshot below displays a packet capture affected by none of the above, but rather by a TCP "SYN" attack. Keeping in mind that TCP is probably the most used transport layer protocol, an attack of this sort is quite common.

TCP SYN Attack 

A TCP SYN attack is also referred to as SYN flooding. In this method, the 3-way handshake between communicating hosts is exploited and manipulated. SYN is one of the flags in the TCP flags field, and it is used in the 3-way handshake for synchronization purposes.

When a host receives a SYN request from a sending host, it must keep track of the partially opened connection in what is called a "listen queue" for at least 75 seconds (1 minute and 15 seconds). Generally, the number of connections which can be tracked at once is rather limited.

"A malicious host can exploit the small size of the listen queue by sending multiple SYN requests to a host, but never replying to the SYN&ACK the other host sends back. By doing so, the other host's listen queue is quickly filled up, and it will stop accepting new connections, until a partially opened connection in the queue is completed or times out." (Javvin, n.d.)
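The mechanics in the quote can be modelled with a toy listen queue. The backlog size and 75-second timeout below are illustrative only; real operating systems tune both and add defences such as SYN cookies:

```python
from collections import deque

class ListenQueue:
    """Toy model of a TCP listen queue: half-open connections occupy a
    fixed-size backlog until they complete or time out."""

    def __init__(self, backlog=5, timeout=75):
        self.backlog = backlog
        self.timeout = timeout       # seconds a half-open entry survives
        self.half_open = deque()     # (source, created_at) pairs, oldest first

    def on_syn(self, source, now):
        """Handle an incoming SYN at time `now`; return True if accepted."""
        self._expire(now)
        if len(self.half_open) >= self.backlog:
            return False             # queue full: the SYN is dropped
        self.half_open.append((source, now))
        return True                  # a SYN&ACK would be sent back

    def _expire(self, now):
        while self.half_open and now - self.half_open[0][1] >= self.timeout:
            self.half_open.popleft() # half-open entry timed out

q = ListenQueue(backlog=5)
# An attacker sends spoofed SYNs and never answers the SYN&ACKs:
for i in range(5):
    q.on_syn(f"spoofed-{i}", now=0)
print(q.on_syn("legitimate-client", now=1))   # False - queue is full
print(q.on_syn("legitimate-client", now=80))  # True  - stale entries timed out
```

The two final calls show both halves of the attack: while the backlog is saturated the victim refuses new connections, and only after the timeout window does service recover.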

The 75-second window during which a host can be kept from accepting new connections could be used as an opportunity to launch other forms of attack, such as denial of service attacks, or even as a tool to carry out further attacks such as IP spoofing.