Computer networks

Computer networks can be regarded as a means of organizing computer systems; they merge computing and communications and blur the distinction between tools that process and store information and tools that collect and transport it. They can be used to build a communication and information environment in which the barriers between information held on separate computers are removed. Computer networks allow an organization to enhance its information-sharing practices, including cost reduction through the sharing of hardware and software resources and improved reliability through multiple sources of data and redundancy. In today's business environment, it is important to study and understand the various technologies used in computer networks.

Today, organizations rely heavily on information technology to enhance their processes and decision-making capabilities. Networking allows an organization to communicate quickly and efficiently across its entirety and to share resources among offices and departments regardless of geographical distance. However, as more and more data passes over network connections, the probability that a packet contains sensitive information increases. It is therefore important to secure data on any network in order to prevent such information from leaking to unwanted entities.

The purpose of this assignment is to look at various networking technologies and services, to gain an understanding of their requirements and implementation, and to assess the role they play in providing some of the most important functions in a computer network: security, addressing and the transport of data. Technologies discussed in this assignment include TLS, IPSec, IP and IDS.

IPSec and TLS are two security specifications used by network administrators worldwide to secure connections between two communicating devices. TLS works at the transport layer of the TCP/IP protocol stack and is a comparatively simple protocol to use, so it is favoured for providing security where bandwidth and the placement of network infrastructure are a concern. The most common example of the use of TLS is Internet banking, where a customer is able to update his or her account securely over the Internet through a TLS-secured connection. IPSec defines a more complex architecture for protecting data over public networks. It offers much more powerful data protection than TLS but requires special entities within its architecture. This requirement means that IPSec is preferred in cases where special connections must be established and where data security absolutely cannot be compromised. One such application is a VPN, in which IPSec provides secure connectivity by creating a protective tunnel over a public network connection (Tipton & Krause, 2005).

The network infrastructure used for this task belongs to Nexus Edge Pte Ltd. (NEPL), an Australian medical diagnostic equipment company headquartered in Sydney, with branch offices in Brisbane and an international presence in Hong Kong and Malaysia. The significant geographical separation of its offices creates a need for constant connectivity among them to coordinate business activities and to enable the sharing of information. NEPL therefore has both LAN and WAN implementations, which provide computer connectivity within its various offices and allow resources to be shared. A secure connection over the Internet is used for connectivity between headquarters and the branch offices. Headquarters also hosts a file server and a web server for NEPL's website www.nexusedge.com. The web server, which is accessible from the Internet, has been configured to support the TLS protocol, while other servers, including the file server, are accessible only to the branch office networks through the use of IPSec over a VPN.

NEPL's website www.nexusedge.com is hosted on a separate system in the DMZ of the Sydney headquarters network and is accessible from the Internet. The website allows customers to browse the catalogue of products and services provided by NEPL, which includes imported medical diagnostic equipment, custom-made medical equipment and company information. Customers can maintain an online account on the website to place orders for equipment and to enquire about the status of their orders. This is done over a secure channel established through the TLS protocol. The TLS channel is a secure, encrypted channel established between the client's browser and the web server placed in the DMZ, and is initiated when the customer tries to log in to www.nexusedge.com. The process of establishing a secure connection prevents the data from being intercepted by an unauthorized user. This is illustrated in Figure-2. It begins with a handshake procedure in which the client and the web server agree on a number of parameters used to establish the security of the connection. This exchange of messages enables both the web server and the customer's browser to share the security parameters, and a secured connection is thereby established (Tipton & Krause, 2005).
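
As an illustration, the client side of this handshake can be sketched in a few lines of Python using the standard ssl module; the hostname below is taken from the scenario, and the sketch assumes the server actually exposes TLS on port 443.

```python
import socket
import ssl

# Default context: negotiates the protocol version, cipher suites and
# verifies the server certificate against the system trust store.
context = ssl.create_default_context()

with socket.create_connection(("www.nexusedge.com", 443)) as raw_sock:
    # wrap_socket performs the TLS handshake: hello messages, certificate
    # verification and key exchange all happen before any application data.
    with context.wrap_socket(raw_sock, server_hostname="www.nexusedge.com") as tls_sock:
        print("Negotiated protocol:", tls_sock.version())
        print("Cipher suite:", tls_sock.cipher())
```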

Figure-3 shows the use of IPSec in the NEPL network. The interconnectivity between NEPL's headquarters and branch offices allows the resources in the headquarters network to be shared with the other offices. For example, the file server that stores the company's current and historical databases and important financial documentation has been a central repository over the years. These files and databases are accessible to employees in the branch networks, who often require them when dealing with business transactions and order processing. Although the file server itself is not reachable at any public address, the interconnection of the branch offices over the Internet demands that a secure connection be established whenever access to the file server is required. This need for secure connectivity has been met by connecting NEPL's offices through VPN technology. A secure VPN connection is achieved by implementing the IPSec standards in the routers present in all offices (Tipton & Krause, 2005).

Routers in the branch offices maintain a security association, a logical connection with the router at headquarters that allows communication between the routers to take place within the parameters of the defined security connection. All routers use the Encapsulating Security Payload (ESP) in IPSec to secure the connection between the file server and the remote computer by encrypting the whole packet they receive from their internal network. This packet is then forwarded onto the public network (here the Internet). The ESP service translates the data from both routers into an encrypted format to make it unreadable. The ESP header is inserted between the IP header and the contents of any subsequent layer. Once the ESP packet is received, the router removes the encapsulation header, decrypts the packet and forwards the IP datagram onto its local network, where it is received by the file server (Tipton & Krause, 2005).
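
The encapsulation order described above can be sketched as follows. This is purely illustrative Python, with a placeholder "encrypt" function standing in for the cipher negotiated by the security association; it is not a real ESP implementation.

```python
import os

def encrypt(data: bytes) -> bytes:
    # Placeholder cipher: a real gateway would use the algorithm and keys
    # agreed in the security association (e.g. AES), not a toy XOR.
    return bytes(b ^ 0xAA for b in data)

def build_esp_packet(inner_ip_packet: bytes, spi: int, seq: int) -> bytes:
    # ESP header = Security Parameters Index + sequence number.
    esp_header = spi.to_bytes(4, "big") + seq.to_bytes(4, "big")
    # In tunnel mode the whole inner IP datagram is encrypted as the payload.
    encrypted_payload = encrypt(inner_ip_packet)
    auth_data = os.urandom(12)          # placeholder for the integrity check value
    return esp_header + encrypted_payload + auth_data

# Field ordering on the wire: new outer IP header, then ESP header, then the
# encrypted inner packet and trailer/authentication data.
outer_packet = b"<outer IP header>" + build_esp_packet(b"<inner IP datagram>", spi=0x100, seq=1)
```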

Both TLS and IPSec are security protocols meant for securing data communications; however, they have been built differently and therefore offer advantages over each other in different scenarios.

The use of TLS for NEPL's website connections offers several advantages; the biggest advantage of TLS over IPSec is that it was built for use with web communication and HTTP, so support for it is built into the HTTPS scheme. A secure connection with the web server can be initiated simply by using the 'https://' prefix before the fully qualified web address www.nexusedge.com. Furthermore, TLS is a very mature protocol that has been implemented by a number of vendors and enjoys wide implementation support. In addition, TLS is much simpler than IPSec; it minimizes the wait time when establishing security and when sending and receiving data over the secure channel. In contrast, although IPSec offers more security than TLS, its complexity not only makes the architecture heavy but would also adversely affect the performance of the web server (Fung, 2004).

IPSec, in turn, offers several advantages over TLS. The primary requirement for NEPL's file server was maximum security, as the file server stores critical business information; IPSec was chosen because it provides more complete security than TLS. IPSec encrypts the entire IP packet, rather than operating at layer 4 as TLS does. Because IPSec operates in a protocol-independent manner, it can encrypt packets from any higher-layer protocol; it can therefore work with a range of network technologies and protocols, allowing NEPL to change the implementation at any layer above it seamlessly, which would not have been possible with TLS (Fung, 2004).

The most basic architecture of a network that supports video streaming services relies on two routers with multicast capabilities. One is used to route the streaming traffic from the network on which the streaming server exists. The other router provides an external interface to the network to which the streaming service will be delivered. Computers present in this network communicate with external networks through this router and are interconnected to each other through a Layer 2 switch capable of IGMP snooping. An illustration of this architecture is shown in Figure-4. This architecture uses IGMP to deliver the contents of the service on the local network link among the client computer, switch and the local multicast router while the communication between the multicast routers will take place through Protocol Independent Multicast (PIM) (Stevens & Wright, 1995).

The multicast mechanism in the network works by the transmission of an IP datagram to a "host group," which consists of one or more hosts or computers identified by a single IP destination address. Membership of the host group is dynamic, allowing any host to leave or join the group at any time. Also, a multicast packet can be sent to any multicast group by any host on the network, even if it is not a member of that group (Stevens & Wright, 1995).

As shown in Figure-5, the routers at the edge of each of the networks also act as multicasting agents. These multicast agents are responsible for the creation and maintenance of host groups and their membership information. In order for multicasting to work, at least one multicast agent is required in the receiving network. A host in the network can subscribe to or unsubscribe from the video service by exchanging messages with its corresponding multicast agent (Stevens & Wright, 1995).

In this architecture, the multicast routers are also responsible for cross network delivery of the multicast datagram. When a multicast IP datagram containing the contents of the video stream needs to be sent, the streaming server will transmit it to the local network multicast address, which identifies all the neighbouring members of the destination host groups. If the members of the group are on other networks, the multicast router that receives this transmission will also relay it to the multicast agent of the network on which the service is to be delivered (Stevens & Wright, 1995).
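
A minimal sketch of the sending side, assuming an example group address and port (239.1.1.1:5004) rather than values from the scenario, could look like this in Python:

```python
import socket

GROUP, PORT = "239.1.1.1", 5004   # example multicast group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The multicast TTL limits how many router hops the datagrams may cross.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 4)

# A single send reaches every member of the host group; the routers and
# IGMP-snooping switch take care of replicating it only where needed.
sock.sendto(b"video frame payload", (GROUP, PORT))
```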

Multicast agents or routers use IGMP to determine the members of groups on each of the networks to which they are physically attached. These routers maintain a list of multicast group memberships for each of the attached networks, as well as a timer for each membership.

Three types of IGMP messages are exchanged between multicast agents and hosts. These include: Membership Queries, which are used by multicast agents to learn about the memberships, Membership Reports, used by hosts to report their membership information, as well as to join a group, and Leave Group messages, used by hosts to leave a multicast group. Whenever a multicast router starts on a network, it broadcasts General Membership Query messages on the entire network. This message is then retransmitted at regular intervals, throughout the period of connectivity between the router and the rest of the network. The purpose of this message is to create, maintain or update the multicast group membership list maintained by the router (Blank, 2004).

When a host receives a General Query message, instead of sending reports immediately, it starts a report delay timer for each group in which it has a membership on the interface on which the query arrived. Each of the timers is set to a different, randomly chosen value. When a timer expires, the host generates a report for the corresponding host group and sends it out. However, if the host hears a report being broadcast by another host for a group of which it is a member, it stops its timer for that particular group and does not generate a report, in order to avoid duplicate reports. Thus, under normal circumstances, only one report is generated for each group on a particular network, and it is created by the member host whose timer expires first. When the router receives a report, it adds the group to the list of multicast group memberships it maintains for the network on which the report was received and sets the timer for that membership (Blank, 2004).
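
The report-suppression behaviour can be illustrated with a toy simulation; the host names and maximum response time below are invented for illustration.

```python
import random

# Toy simulation of IGMPv2 report suppression: each member host picks a random
# delay; only the host whose timer expires first sends a report, the rest cancel.
MAX_RESPONSE_TIME = 10.0   # seconds, carried in the General Query

hosts = ["PC1", "PC2", "PC3"]
timers = {host: random.uniform(0, MAX_RESPONSE_TIME) for host in hosts}

reporter = min(timers, key=timers.get)
print(f"{reporter} reports for the group after {timers[reporter]:.2f}s; "
      "the other members suppress their reports")
```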

Also, when a host wants to join a multicast group, it immediately transmits an unsolicited IGMP version 2 Membership Report for that group rather than waiting for a query message from the router; this covers the possibility that the host is the first member of the group on the network. To guard against the report being lost or damaged, it is repeated once or twice at a pre-determined interval (Blank, 2004).
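
On the receiving side, joining a group typically comes down to a single socket option; the sketch below, using the same example group address as the sender sketch above, relies on the host's IP stack to emit the unsolicited IGMP Membership Report when IP_ADD_MEMBERSHIP is set.

```python
import socket
import struct

GROUP, PORT = "239.1.1.1", 5004   # example values matching the sender sketch

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Joining the group: the kernel sends the IGMP Membership Report on our behalf.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, addr = sock.recvfrom(2048)   # blocks until a multicast datagram arrives
print(f"Received {len(data)} bytes from {addr}")
```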

Some of the products available on the market that support multicast and are used in real networks include Firefly Media Server, Quicktime Broadcaster, SHOUTcast, Peercast and XMMS.

PIM is a family of protocols, each of which has been developed to perform optimally in a different environment. Both PIM Sparse Mode (PIM-SM) and PIM Dense Mode (PIM-DM) can be used either throughout the multicast domain or in combination with each other, forming a network in which some devices operate in Sparse Mode while others use Dense Mode. The PIM family shares a common control message format. These control messages can be sent as raw IP datagrams, multicast to the link-local ALL-PIM-ROUTERS group or even unicast to a particular destination (Miller, 2009).

The basic assumption on which PIM-DM operates is that a multicast packet stream has receivers at most locations in the network; that is, the receivers of the stream are distributed densely throughout the network (Miller, 2009).

Conversely, PIM-SM operates on the assumption that most of the subnets in the network will not be interested in receiving the multicast stream packets (Miller, 2009).

PIM-DM makes use of only source-based trees. The use of source-based trees requires the building of a separate multicast distribution tree for each source sending multicast data. This tree is rooted at the router adjacent to the source, to which the source sends the data directly. The use of source-based trees also enables the use of Source-Specific Multicast (SSM), which also allows hosts to specify the source from which they want to receive data (Miller, 2009).

In PIM-SM, however, shared trees are used by default. These shared trees are multicast distribution trees rooted at a selected router, called the Rendezvous Point (RP), and are used by all sources to send data to the multicast group. To send data to the RP, the designated router on the source network encapsulates the data in PIM control messages and sends them by unicast to the RP. This additional mechanism adds complexity to the architecture of a network using the PIM-SM protocol (Miller, 2009).

In PIM-DM, each router on the source LAN receives the data from the source and forwards it to all of its PIM neighbours. Each of the neighbouring routers performs the same task, although a check is made to determine whether a packet has arrived on the correct upstream interface. Routers that have no links to hosts interested in the stream, and that do not require the data themselves, respond by sending a PIM Prune message upstream, which creates a Prune state in the upstream router for that particular link. This "broadcast and prune" behaviour in a PIM-DM network ensures that the data is sent only to those parts of the network where it is required (Miller, 2009).
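
The flood-and-prune idea can be illustrated with a toy example; the topology and the set of interested routers below are invented purely for illustration.

```python
# Toy flood-and-prune sketch for PIM-DM: traffic is initially flooded to every
# neighbour, then links with no interested receivers downstream are pruned back.
topology = {"R1": ["R2", "R3"], "R2": ["R4"], "R3": [], "R4": []}
interested = {"R4"}                      # routers with group members downstream

def has_receivers(router: str) -> bool:
    # A router keeps forwarding if it, or anything downstream of it, is interested.
    return router in interested or any(has_receivers(n) for n in topology[router])

pruned = [r for r in topology if not has_receivers(r)]
forwarding = [r for r in topology if has_receivers(r)]
print("Prune sent by:", pruned)          # ['R3'] - no receivers downstream
print("Keep forwarding:", forwarding)    # ['R1', 'R2', 'R4']
```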

In contrast, in PIM-SM, data is only forwarded to those routers and hosts that have declared their interest in receiving the stream to their upstream neighbours. This means that in PIM-SM, routers are required explicitly to tell their neighbours about their interest in particular groups and sources. This is done by using PIM Join and PIM Prune messages to join specific multicast distribution trees in the neighbouring router (Miller, 2009).

In a PIM-DM enabled network it is assumed that the multicast packet stream will be received by most hosts in the destination network. A real life example could be a video conference by the CEO or director of a company in which all employees are required to participate.

In contrast, PIM-SM assumes relatively fewer receivers. An example would be a company orientation training video for new employees. This difference shows up in the initial behaviour and mechanisms of the two protocols: PIM-SM only sends multicast traffic when requested to do so, whereas PIM-DM starts by flooding the multicast traffic and then prunes it back on each link where it is not needed, using a Prune message. One can think of the Prune message as one router telling another, "we don't need that multicast over here right now" (Goralski, 2008).

Today, email has become one of the foremost tools on the Internet for efficient communication. Email not only allows much quicker delivery of messages compared with conventional means, but also enables important documents to be delivered across large geographical distances within minutes. The purpose of this task is to look at the structure of an email and the infrastructure involved in its delivery, in order to understand what gives email such a capability.

Figure-6 shows the four email hosts involved in the given scenario. The email that Ruba writes is transferred by Ruba's email client, acting as a Mail User Agent (MUA), to the Simple Mail Transfer Protocol (SMTP) server of Ruba's ISP. This SMTP server serves as a Mail Transfer Agent (MTA) and is responsible for exchanging emails with other MTAs worldwide. Once the email has been passed to the MTA of Kisba's ISP, it resides on that server until it is retrieved by Kisba's email client, another MUA, through Post Office Protocol version 3 (POP3) or the Internet Message Access Protocol (IMAP) (Kozierok, 2005).

There are three protocols commonly used in email exchanges. SMTP is the standard protocol for sending email to servers, while either IMAP or POP3 can be used to retrieve it. Each retrieval protocol has advantages and disadvantages. For example, IMAP allows multiple clients to connect to a single mailbox at the same time and allows a single user to maintain multiple mailboxes. However, these capabilities make the IMAP protocol complex; if it is not implemented carefully, an IMAP client can end up consuming a major part of the IMAP server's resources. POP3 is a much simpler protocol and does not consume many server resources. It also enables email to be read and responded to offline, in contrast to IMAP (Kozierok, 2005).
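
The division of labour between the protocols can be sketched as follows; the server names, accounts and passwords are hypothetical placeholders, not details from the scenario.

```python
import smtplib
import poplib

# Sending side (Ruba's MUA handing the message to her ISP's MTA over SMTP).
with smtplib.SMTP("smtp.rubas-isp.example", 587) as smtp:
    smtp.starttls()
    smtp.login("ruba", "password")
    smtp.sendmail("ruba@rubas-isp.example", ["kisba@kisbas-isp.example"],
                  "Subject: Order update\r\n\r\nPlease see attached.")

# Receiving side (Kisba's MUA retrieving the message with POP3).
pop = poplib.POP3("pop.kisbas-isp.example")
pop.user("kisba")
pop.pass_("password")
count, size = pop.stat()
print(f"{count} message(s), {size} bytes waiting on the server")
pop.quit()
```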

The structure of an email can be divided into two parts: the header and the body. The header contains information about the sender and the receiver, as well as information about the content of the email. Since the email in this scenario contains both text and attachments in proprietary formats, the text is carried in its original form while the attachments are encoded in MIME format, so that only a standard stream of text is transferred. The typical structure of an email is shown in Figure-7.
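
The header/body split and the MIME encoding of attachments can be illustrated with Python's standard email package; the addresses and the attachment content below are placeholders.

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "ruba@rubas-isp.example"
msg["To"] = "kisba@kisbas-isp.example"
msg["Subject"] = "Quotation and drawings"
msg.set_content("Hi Kisba, the documents are attached.")   # plain-text body

# A binary attachment is base64-encoded into a MIME part, so only a standard
# stream of text is transferred between the mail agents.
msg.add_attachment(b"%PDF-1.4 ...", maintype="application",
                   subtype="pdf", filename="quotation.pdf")

print(msg.as_string()[:500])   # header block followed by the MIME parts
```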

The received email message differs from the sent message in a number of ways, namely in its size and in the information contained in its header. For example, the mail sent from Ruba's email client contained only information about the sender and receiver of the email, along with its date, time and encoding. However, as the email was exchanged between the different MTAs involved in its delivery to Kisba's ISP, each agent added a "Received" line to the header, indicating the name of the agent that received the mail, the name of the agent from which it was received and the date and time at which the exchange took place. This additional information also changes the size of the email received by Kisba. As illustrated in Figure-8, the change in size varies according to the number of agents involved in the transfer, and can therefore range from a few hundred bytes to a few kilobytes (Kozierok, 2005).
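
A hypothetical received copy might carry headers such as the ones below, where each "Received:" line corresponds to one transfer agent on the path; the host names and timestamps are invented for illustration.

```python
from email import message_from_string

# Each MTA on the path prepends a "Received:" header, which is why the
# delivered message is larger than the one Ruba originally sent.
raw = """\
Received: from mta.kisbas-isp.example by mail.kisbas-isp.example; Tue, 1 Mar 2011 10:05:00 +1100
Received: from smtp.rubas-isp.example by mta.kisbas-isp.example; Tue, 1 Mar 2011 10:04:58 +1100
From: ruba@rubas-isp.example
To: kisba@kisbas-isp.example
Subject: Quotation and drawings

Hi Kisba, the documents are attached.
"""
msg = message_from_string(raw)
for hop in msg.get_all("Received"):
    print(hop)
```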

Though IPv4 was considered more than sufficient at the time it was standardized, the development and adoption of Internet technologies among the general public led to the realization that IPv4 would become insufficient to meet the needs of future networks. It was felt that future generations of networks would suffer from a shortage of IP addresses, and that the lack of support for current and future network services would prove a major hindrance to the development of network technologies. It was also felt that, with performance improvements at other layers of computer networks and the development of standards such as IPTV, large-scale multimedia support would become mandatory. Hence, an effort was made in the 1990s to develop a new addressing standard that would not only cater to the need for a large number of IP addresses, but would also be flexible enough to support future technologies. The recommendation for the IP Next Generation (IPng) protocol was approved by the IETF Steering Group and published as a Proposed Standard on 17 November 1994 (Kozierok, 2005).

IPv6 provides a platform for the new Internet functionalities that will be required in the near future. IPv6 also includes transition mechanisms designed to permit users to adopt and deploy the new generation of the IP protocol in a highly diffused environment, and to address the interoperability issues between the two versions of the IP standard (IPv4 and IPv6). This implies that IPv6 will be much easier to adopt than its predecessor. Furthermore, IPv6 supports priority traffic routing, allowing certain packets to take precedence over others. This feature greatly facilitates streaming over an IPv6 network, allowing real-time streaming of video or audio to take place without delay or disruption (Kozierok, 2005). An illustration of the differences between the IPv4 and IPv6 datagram is shown in Figure-9.

One of the major differences in the capabilities of IPv4 and IPv6 is the number of devices each standard can support. The older IPv4 reserves 32 bits for the source and destination addresses in its packet, which makes it capable of supporting about 4.29 billion (2^32) addresses (Kozierok, 2005). Though this range was seen as sufficient by researchers when IPv4 was standardized, the unforeseen technology boom of the 1990s meant that it was predicted to become insufficient.

In comparison, IPv6 extends the capabilities of IPv4 to a new level. One of the major advances of IPv6 over IPv4 is the extension of the IP address range available for assignment. IPv6 reserves 128 bits for source and destination addressing, through which it can support approximately 3.4 x 10^38 addresses, which is more than sufficient for the foreseeable future. It is predicted that, after the transition to IPv6 is complete worldwide, each person on earth could have his or her own IP address and there would still be room for assigning addresses to additional devices (Kozierok, 2005).
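
The arithmetic behind these figures is straightforward:

```python
# Address-space arithmetic behind the IPv4/IPv6 comparison above.
ipv4_addresses = 2 ** 32        # 32-bit source/destination addresses
ipv6_addresses = 2 ** 128       # 128-bit addresses

print(f"IPv4: {ipv4_addresses:,} addresses (~4.29 billion)")
print(f"IPv6: {ipv6_addresses:.3e} addresses (~3.4 x 10^38)")
```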

IPv4 only guarantees that hosts can accept datagrams of up to 576 bytes; to send larger packets, IPv4 relies on packet fragmentation, which breaks a large packet into smaller packets that are reassembled at the receiver. In IPv4, fragmentation is performed both by the host that sends the data and by the routers that switch the data in between. Fragmentation support at the routers means that a router may have to perform the additional work of fragmenting packets in transit, slowing down the transport of packets (Kozierok, 2005).

In IPv6, however, the minimum link MTU has been raised to 1280 bytes, which means there is far less need for packet fragmentation; moreover, in IPv6 only the sending host performs IP datagram fragmentation (Kozierok, 2005). The absence of a fragmentation mechanism at the routers means that routers can switch packets faster; hence an IPv6 network offers significantly lower round-trip delay than its predecessor.

Standardized in the 1980s, IPv4 lacks support for newer security and traffic manipulation mechanisms. For example, in IPv4 support for IPSec is optional, which means that IPSec is added to an IPv4 network as an add-on and affects the performance of the network. The IPv4 header also lacks the fields needed to implement QoS at routers; implementing QoS at higher layers degrades the overall performance of the network (Kozierok, 2005).

Support for IPSec is mandatory in IPv6, which makes it more secure than IPv4. IPv6 also fully supports implementing QoS at end devices and at the routers in between, through the introduction of fields such as Flow Label and Traffic Class that are used to set the priority of data packets. Implementing QoS at the network layer means that traffic priorities can be set, allowing efficient switching of packets with little or no delay tolerance (e.g. multimedia streaming) (Kozierok, 2005).

When IPv4 was standardized, research into network protocols was not far advanced, and there were few higher- or lower-layer protocols that could perform data integrity checks. To deal with scenarios in which IPv4 was used with protocols incapable of such checks, the IPv4 header included a checksum field to perform an integrity check at the network layer (Kozierok, 2005).

As time passed, however, research progressed and gave rise to new protocols capable of performing data integrity checks more efficiently at other levels of the protocol stack. In IPv6, a network-layer checksum was therefore deemed redundant, and IPv6 datagrams no longer carry one. Removing the checksum field also frees space for other required features in IPv6, giving it a further improvement over IPv4 (Kozierok, 2005).

In a mobile environment, a node using IPv4 for addressing requires the presence of a Foreign Agent (FA) in order to communicate successfully with its home network (Kozierok, 2005). The FA maintains the address of the visiting node, and all packets routed to the node in the visited network go through the FA. This has the potential to create a bottleneck in a network with many visiting nodes, delaying the delivery of data.

IPv6, however, does not require the FA present in the IPv4 mobility architecture. Through the auto-configuration mechanism built into IPv6, the mobile node is able to obtain an address in the foreign network without any external help. This saves network resources and reduces delays in delivering packets to the visiting node, as the home network is able to route packets directly to it (Kozierok, 2005).

One more feature that sets IPv6 apart from its predecessor is the use of extension headers. In IPv4, the IP datagram header contains an options field used to carry additional information; however, this field is limited to 40 octets, which means that only a limited amount of additional information can be inserted per datagram. Another disadvantage of IPv4 is the variable size of the datagram header, which means that switching devices such as routers have to read the whole header in order to obtain complete information about a datagram.

IPv6 makes use of extension headers, as shown in Figure-10. With their introduction, IPv6 has done away with the options field present in IPv4. Extension headers provide a more powerful way of attaching additional information to an IP datagram and result in a fixed-length IPv6 base header. They allow a greater amount of supplementary information to be carried per datagram, which is very efficient. The fixed size of the base header and the placement of additional information in the extension headers mean that routers do not have to scan the whole IP header looking for options, as was the case with IPv4, but can skip straight past the base header to any extension headers they need. Hence, in IPv6, the switching and routing of IP packets is significantly faster than in IPv4 (Kozierok, 2005).
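
The "Next Header" chaining that makes this possible can be illustrated with a toy example; the particular chain of headers shown is invented for illustration.

```python
# Toy illustration of the IPv6 Next Header chain: the fixed 40-byte base header
# points to the first extension header, which points to the next, and so on,
# so a router that only needs the base header can skip the rest entirely.
NEXT_HEADER = {"base": "routing", "routing": "fragment", "fragment": "tcp"}

header = "base"
chain = [header]
while NEXT_HEADER.get(header) is not None:
    header = NEXT_HEADER[header]
    chain.append(header)

print(" -> ".join(chain))   # base -> routing -> fragment -> tcp
```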

IPv4 has sufficient capabilities to support current network services and technologies. However, with the emergence of concepts such as Fixed-Mobile Convergence and IP-based telecommunication networks, IPv4 cannot provide enough capacity or services to support them. IPv6 has therefore been designed with future networks in mind; it is capable of supporting the deployment of IP networks on a massive scale and of supporting all the services these networks provide.

Intrusion Detection Systems (IDS) are used in combination with other measures to provide a second line of defence and to bolster security against attacks carried out internally. An IDS is a device, in most cases a separate computer system, that monitors network traffic and system activity to identify malicious or suspicious events. When such an event is detected, the IDS issues an alarm to the authority to which it has been configured to report. A typical IDS performs a number of functions, as shown in Figure-11. These include monitoring user and system activity on the network and auditing system configurations for problems and possible points of vulnerability. It is also responsible for assessing the integrity of critical systems and data files, recognizing known attack patterns in system activity, identifying abnormal activity through statistical analysis, correcting system configuration errors, and installing and operating traps used to record information about intruders when a possible attack is detected (Frincke, 2002).

IDS can be classified in a number of ways, but mainly they are classified either by the technique they use for intrusion detection or by their scope of protection (placement). Classified by technique ("how to detect"), an IDS can be signature-based or heuristic-based; classified by scope of protection ("where to detect"), it can be network-based or host-based. The following sections provide a brief description and explanation of each type (Frincke, 2002).

Signature-based IDS (SB-IDS) use pattern matching to detect malicious activity and report every event in which a pattern corresponding to a known attack is detected. An SB-IDS typically maintains a database containing the data patterns of a number of known attacks. These systems take sample measurements of several key indicators, which are then matched against the patterns in the database to determine whether an attack is being carried out (Frincke, 2002).

This can be illustrated with an example: a packet sent to port 59 of any host will trigger the SB-IDS, whereas a packet sent to other ports will not be detected as an intrusion. This limitation arises because the SB-IDS database only contains a signature identifying packets sent to port 59 as an attack. It also implies that it is difficult to make an SB-IDS work with a large number of protocols, since that would require a very large database (Frincke, 2002).
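
In essence, a signature-based engine is a set of pattern-matching rules applied to each observed event. The sketch below is a toy illustration of that idea, with invented signature names and packet fields; it is not the detection logic of any particular product.

```python
# Each "signature" is a predicate over simple packet metadata.
SIGNATURES = {
    "suspicious-port-59": lambda pkt: pkt["proto"] == "tcp" and pkt["dst_port"] == 59,
    "oversized-icmp":     lambda pkt: pkt["proto"] == "icmp" and pkt["length"] > 1024,
}

def inspect(pkt: dict) -> list[str]:
    # Return the names of all signatures the packet matches.
    return [name for name, rule in SIGNATURES.items() if rule(pkt)]

alerts = inspect({"proto": "tcp", "dst_port": 59, "length": 60})
print("Alerts raised:", alerts)       # ['suspicious-port-59']
```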

An example of an SB-IDS is NIKSUN NetDetector. NetDetector monitors network traffic on a continuous basis and raises alerts when pre-defined signatures or traffic patterns are detected.

Heuristic-based IDS (HB-IDS), also called anomaly-based IDS, work by building a model of acceptable behaviour and then detecting and reporting any exceptions. The reported exceptions are used by the administrator to improve the IDS's response through a learning mechanism that lets the IDS draw on the administrator's decisions about similar events on previous occasions. The core of an HB-IDS is the differentiation of normal from abnormal behaviour.

An example of how an HB-IDS works is a port scan. A port scan involves sending SYN packets to a large number of ports on a host to discover which ports are responsive, in other words to detect the presence of services on the host. However, the scan could actually be a user or the network administrator testing the connectivity or service availability of another system. An HB-IDS initially classifies the event as a malicious attempt; the network administrator can then view the results of the classification and set a threshold for such an event. The HB-IDS uses this feedback to improve its detection, so the next time such an event takes place the threshold is used for classification. The classification and use of feedback in a heuristic-based system are performed by the IDS's inference engine, which continuously analyses the activities taking place on the network or on the system and raises an alarm whenever malicious behaviour is detected (Pfleeger & Pfleeger, 2003).
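
The port-scan example can be reduced to a toy anomaly rule: count how many distinct ports each source probes and flag sources that exceed a threshold learned from administrator feedback. The threshold and traffic below are invented for illustration.

```python
# Toy anomaly detector for the port-scan example.
SCAN_THRESHOLD = 20          # distinct ports per source; tuned from admin feedback

def detect_scans(syn_packets):
    # syn_packets: iterable of (source_ip, destination_port) pairs.
    ports_per_source = {}
    for src, dst_port in syn_packets:
        ports_per_source.setdefault(src, set()).add(dst_port)
    return [src for src, ports in ports_per_source.items() if len(ports) > SCAN_THRESHOLD]

traffic = [("10.0.0.5", p) for p in range(1, 100)] + [("10.0.0.9", 80), ("10.0.0.9", 443)]
print("Possible port scanners:", detect_scans(traffic))   # ['10.0.0.5']
```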

One example of HB-IDS is Snort. Snort is a very popular IDS that allows the network administrator or manager to configure the software for the required level of detection.

A host-based IDS (H-IDS) works by placing special software on a specific host or computer. This software monitors system files, processes, log files and changes to user privileges in order to detect malicious activity. Figure-12 illustrates a typical deployment of an H-IDS on the network; here, the H-IDS is shown running on PC3. Being a software application, an H-IDS typically starts when the computer starts and stays memory-resident until the computer shuts down. Because it runs on a single host, its scope is limited to the boundaries of PC3, so it has no knowledge of, or ability to detect, traffic that flows beyond this PC. Whenever malicious or suspicious activity is detected, the H-IDS triggers an alarm and forwards it to the monitoring system, where the relevant personnel investigate the problem and take the necessary preventive measures (Frincke, 2002).

An example of an H-IDS is the popular software Zone Alarm. Zone Alarm, though primarily a firewall, also has a built-in IDS that is able to detect malicious activity on a host by monitoring processes and files, as well as the data they generate. Any probable malicious activity is displayed to the user of the system through its interface.

A network-based IDS (N-IDS) operates by placing sensors on various segments of the network. These sensors analyse the traffic flowing through the different segments to determine whether any malicious activity is being carried out. N-IDS are flexible in the sense that they can support different architectures and devices such as routers, bridges and switches; their sensors can connect to any of these devices to perform intrusion detection and to protect the other devices or computers attached to them. Figure-13 illustrates a network with an N-IDS (Ghorbani, 2009).

The best example of N-IDS is Snort. Snort uses a combination of pattern matching and Heuristic-based detection techniques to monitor data passing through the network to provide protection against malicious activities. Snort is simply an application that typically runs on a dedicated computer that is connected to the segment of the network on which protection is required (Koziol, 2003).

The IDS have both strengths and weaknesses depending on the type of IDS. A summary of these is presented in Figure-14, while a detailed discussion is presented in the next section.

H-IDS operate by placing sensors on a system through the installation of special software, so no additional network infrastructure is needed for them to operate. Because of their limited scope, H-IDS are also capable of monitoring specific activities on the local host, which is not possible with an N-IDS. For example, Zone Alarm's IDS can monitor all the running processes on the host, block their access to different resources of the computer and control their external access (Pfleeger & Pfleeger, 2003).

H-IDS are capable of performing their duties even when the data on the network is encrypted. An H-IDS monitors the data entering and leaving the host, so it can perform detection before encryption and after decryption of the data has taken place (Techtopia, 2009).

An H-IDS maintains its data in a log file within the context of the overall system, so it provides a better assessment of whether an attack has been successful. This type of detection is considered more accurate; hence, H-IDS are less prone to generating false alarms (Internet Security Systems, 1998).

An N-IDS can be deployed at points of access that are critical for network security. It can then be used to monitor network traffic destined for the numerous hosts that require protection (Ghorbani, 2009).

Furthermore, N-IDS offer a reduced cost of ownership: unlike an H-IDS, which must be installed on every host on the network to achieve significant protection, a single N-IDS is capable of protecting the entire network, which not only costs less to deploy but also reduces maintenance overhead (Internet Security Systems, 1998).

In addition, an N-IDS scans the headers of all packets travelling through the network for malicious and suspicious activity, so it is able to detect many of the common forms of attack (such as DoS) with ease. Furthermore, its placement on the network allows it to act as a first line of defence, detecting an attack before the attacker reaches any host (Techtopia, 2009).

When an N-IDS is used, it collects and analyses data in real time, so it is difficult for an attacker to remove evidence of an attack in its presence. In addition, N-IDS are independent of the hosts' operating systems (Internet Security Systems, 1998).

Since an H-IDS is local to a host, it makes use of the host's resources, such as CPU and memory, to perform its activity. Though not a serious issue for typical users, this can have a significant impact when the host computer is required to perform resource-hungry tasks or meet real-time demands (Frincke, 2002).

An H-IDS can be turned off if a user is able to gain administrative access to the host computer, something beyond the control of the IDS. Moreover, because deployment is at the level of individual hosts, scalability and manageability become very cumbersome in a large corporate network environment, increasing maintenance overheads in general.

H-IDS rely on log-based analysis, which takes away their ability to detect anomalies in real time. The logs themselves can also be manipulated by an attacker, which removes the effectiveness of an H-IDS entirely and leaves it vulnerable. Furthermore, an H-IDS does not inspect network traffic, so it does not trigger an alarm if an attack is carried out on another host (Pfleeger & Pfleeger, 2003).

An N-IDS is unable to monitor encrypted traffic. Since such traffic uses encryption keys known only to the source and destination systems, these IDS are unable to perform any detection on that data (Techtopia, 2009).

An N-IDS can only detect attacks that pass through the network segment on which it is placed. Even if an attack is carried out on a host on the organization's internal network, the IDS cannot guarantee that the attack will be detected. This is made more significant by the fact that IDS are among the systems most studied by attackers, and information about a particular IDS is highly sought after; attackers devise new and innovative ways to bypass the IDS during an attack, and it is often the first system to be disabled. Moreover, an N-IDS is unable to detect local host attacks, since no traffic passes over the network when these attacks are carried out (Frincke, 2002).

Here, we present the advantages and disadvantages of IDS as they are deployed on the network of Nexus Edge, the company discussed in Task 1.

To provide comprehensive network security, NEPL's Network Manager has used both network-based and host-based IDS. Figure-15 shows how the IDS have been deployed in the network architecture. For the N-IDS, Enterasys Dragon was selected and placed directly behind the head office router that connects the network to the Internet. The Network Manager is able to monitor the status and configuration of the N-IDS through the Enterasys Network Management Service (NMS) Server. GFI LANGuard Security Event Log Manager (LANGuard-SELM) is used as the H-IDS and is installed and running on each of the servers on the network.

The Enterasys Dragon N-IDS was selected by the Network Manager because of its network behavioural anomaly-based detection, which allows packets to be tracked, analysed and recorded so that threats that are hard to recognize by signature can be detected and blocked in NEPL's networks (Enterasys, 2010).

In addition, the N-IDS can detect an attack and notify the Network Manager before any damage is done to any of the servers on NEPL's network. Here, the IDS has also been configured to display an alert on the Enterasys NMS Server and to send an automated notification email with details of the activity to the Network Manager.

N-IDS deployment enables the Network Manager to proactively protect NEPL's networks against threats and vulnerabilities with a single device. This is cost-effective and saves time, as the Network Manager only has to configure and maintain one device centrally.

The N-IDS performs detection in real time, so an attacker cannot modify the evidence once it has been recorded by the IDS. This helps the Network Manager precisely identify the source and type of attack and take effective measures to eliminate the loopholes that caused it (Peter & Rebecca, n.d.).

Furthermore, a large number of attacks can be detected from the information contained in the headers of the IP datagrams sent out during the attack. By performing a complete analysis of both the content and the headers of these datagrams, an N-IDS can gather evidence about attackers and remove their access to NEPL's network with ease; by using an N-IDS, the Network Manager therefore has peace of mind against the common types of attack that could occur (Ghorbani, 2009).

LANGuard-SELM was chosen as the H-IDS because of its real-time monitoring of critical security events and regular analysis of host security logs, which enable NEPL's Network Manager to detect and respond immediately to external and internal threats and to any activity on a server that could compromise its security (GFI, 2010).

In addition, intelligent policies with event-processing rules in the H-IDS give the Network Manager the flexibility to configure which behaviour to monitor and what action to take when an alert is triggered, according to each host's requirements and needs, something that is not possible with an N-IDS (GFI, 2010).

Furthermore, an H-IDS is able to detect errors in server configuration. This means the IDS helps protect the server itself and helps the Network Manager identify ways in which the server's security can be improved, making the system less vulnerable; see the example in Figure-16 (GFI, 2010).

An H-IDS is capable of inspecting network traffic before encryption and after decryption, enabling the Network Manager to protect the servers from attacks that originate from secure connections, such as an attack carried out from one of NEPL's branch networks (Peter & Rebecca, n.d.).

Furthermore, H-IDS are easy to operate, as these systems do not require any complex configuration. A simple installation of the software and configuration of a network connection for regular updates is all that is needed for an H-IDS to function, so the Network Manager does not need to engage specialist expertise to perform these tasks.

When using an N-IDS, the Network Manager has to ensure that measures are taken to protect the hosts from attackers who do not use the network. For example, if an attack is carried out locally on a system, such as by attempting to log in as a user, it cannot be detected by an N-IDS; the Network Manager has to protect the servers against such attacks by other means.

Not all N-IDS are capable of remaining invisible; an N-IDS can be detected and disabled before an attack is carried out. The Network Manager therefore has to implement H-IDS as a second line of defence to meet the protection needs of NEPL's servers.

In addition, an N-IDS is unable to monitor encrypted traffic such as a TLS connection or IPSec connectivity. To cover this gap, the Network Manager has used H-IDS, since NEPL uses both TLS and IPSec connectivity in its operations.

An N-IDS has limited scalability, which means it can only monitor traffic up to a certain speed. This is a hardware limitation; exceeding the maximum data rate of the N-IDS results in "skips" in packet analysis, which create loopholes in security. Therefore, if NEPL's network grows, the Network Manager will need to buy another N-IDS device to handle the additional traffic and prevent security loopholes caused by the increase (Techtopia, 2009).

It is always difficult for a Network Manager to achieve an optimum level of security for a network; logs must be reviewed constantly and the settings tweaked. If the sensitivity is set too high, the N-IDS generates a lot of false-positive alarms, while setting it too low reduces the level of protection it offers, which can be quite troublesome when the Network Manager is engaged in other important activities (Ghorbani, 2009).

Since an H-IDS cannot detect malicious activity on NEPL's network itself, it allows an attacker to penetrate deep into the network, and even gain access to servers, without being detected. Hence, the Network Manager cannot rely on H-IDS for complete protection. If such an attack is successful, the attacker can also delete the H-IDS log file afterwards, causing problems for the Network Manager as the attack cannot then be traced.

The limited scope of the H-IDS requires a separate application to be run on each host. Hence, in NEPL's network there are six instances of the IDS, one running on each of the servers. This is quite an expensive measure for the Network Manager, as a separate licence has to be bought for each copy of the IDS.

Furthermore, the Network Manager is also required to manage and configure all the IDS instances separately. This is cumbersome and may become difficult to manage as the network grows, possibly requiring additional administrators for support (Techtopia, 2009).

An H-IDS with an outdated pattern database represents a severe flaw in network security, so the Network Manager has to ensure the database is kept up to date. Comprehensive protection is only possible if a massive database of patterns is available, which would not only be impractical to maintain on each host but would also degrade the performance of the IDS. The Network Manager must therefore also concentrate on improving network security using other devices (Ghorbani, 2009).
