Transport Layer & Network Layer Protocol Attacks
Disclaimer: This essay has been submitted by a student. This is not an example of the work written by our professional essay writers. You can view samples of our professional work here.
Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of UK Essays.
Chapter 1: Introduction
Data communication technologies and their underlying protocols are among the critical elements that act as the backbone for electronic commerce and use of the World Wide Web in the twenty-first century (Todd and Johnson, 2001). The growth of electronic commerce and other forms of Internet-based secure communication has increased the risks associated with network attacks, which can result in the loss of personal information and possible financial loss to the victims. One of the major components of Internet communication is the underlying protocol that governs the compilation and communication of information from the source computer to the target and back (Nash et al, 2001). The role of protocols in networking also has a key influence on the ability to deliver information securely as part of the overall communication architecture. The robustness of a protocol, and the extent to which a given protocol architecture can resist intruder attacks through encryption efficiency and similar measures, therefore dictates the security of the information transfer, as argued by Todd and Johnson (2001). In this report a critical overview of the transport layer and network layer protocols of the TCP/IP architecture is presented to the reader. The research aims to throw light on the possible security attacks on these protocols and the countermeasures that can prevent such attacks. The attacks in these cases mainly concern the infringement of information through unauthorised access, either by bypassing security or by breaking the encryption surrounding the information being transported.
1.2: Aim and Objectives
The aim of this research is to investigate the possible attacks on the Transport layer & Network layer protocols and present possible countermeasures on overcoming the threat of these attacks on the day-to-day Internet-based data communication.
The above aim of the research is accomplished through the following objectives:
- To conduct a literature review on the Transport and Network layers of the TCP/IP protocol architecture.
- To conduct a critical overview on the possible types of attacks on the Transport Layer and Network Layer protocols.
- To present a critical analysis on the possible countermeasures to prevent the attacks on the Transport layer protocols.
1.3: Research Methodology
A qualitative approach is used to conduct the research. Although there are five layers to the TCP/IP model, the research investigates only the protocols associated with the Transport and Network layers; a qualitative approach is deemed effective because the infrastructure required to simulate tests for a quantitative study is not available. As an analysis of all five layers of the TCP/IP model is beyond the scope of this report, the research focuses mainly on the key threats and possible types of attacks on the protocols of the two layers discussed.
1.4: Chapter Overview
Chapter 1: Introduction
This is the current chapter that presents the aim, objectives and brief overview on the research conducted to the reader.
Chapter 2: Literature Review
This chapter presents an overview on the layers of the TCP/IP model followed by a detailed overview on the key Transport layer and Network layer protocols. The chapter also presents a brief overview on the network attacks and the possible threats associated with the Internet data transfer.
Chapter 3: Protocol Attacks
This chapter presents a critical overview on the types of attacks on the Transport Layer and Network Layer protocols. The chapter presents a critical analysis on the methods used and the potential losses that may result due to the attacks.
Chapter 4: Countermeasures
This chapter presents a critical overview on the possible countermeasures that are implemented in order to prevent the attacks discussed in chapter 3. A comparative study on the countermeasures discussed is also presented in this chapter.
Chapter 2: Literature Review
2.1: Internet Security in the twenty-first century
The increase in the need for Internet security against unauthorised access and malicious attacks is due not only to the need for protecting the personal and sensitive information of users but also that of the service providers (Ganesh and Thorsteinson, 2003). This is because service providers can perform effectively only when the requests sent to the server are valid, thus making justifiable use of the resources (Rayns et al, 2003). The use of resources, in terms of the number of connections and the memory allocated to each connection established with the service provider's web server, determines the extent to which a given website performs effectively. This makes it clear that the need for Internet security is not only a matter of protecting personal information but also of effective utilisation of the computing resources dedicated for the purpose, as argued by Rayns et al (2003).
Walden (2007) further argues that security over the Internet is mainly accomplished by implementing security measures on the connection-oriented and connectionless protocols used for transferring information from one end to another. It is interesting to note that the above focuses especially on resource utilisation and the protection of computers from malicious attacks, by ensuring that communications to and from the computer are not only secure but also valid. It is necessary to ensure both the validity and the security of a given connection over the Internet, because the former corresponds to the availability of the service whilst the latter attributes to the reliability of the available service (Walden, 2007). It is also interesting to note that preventing unauthorised access to information systems connected to the Internet is deemed more effective than implementing access control on each individual system, as argued by Todd and Johnson (2001). This makes it clear that security over the Internet is implemented mainly through preventive measures against malicious attacks, by strengthening the protocols used in the various layers of the TCP/IP model. As the TCP/IP model forms the basis for communication over the Internet, it is apparent that the robustness of the protocols implemented in each layer of the TCP/IP stack dictates the effectiveness of the Internet security achieved (Walden, 2007). In the next section a critical overview of the TCP/IP model is presented to the reader.
2.2: TCP/IP Model
‘TCP/IP is a set of rules that defines how two computers address each other and send data to each other’, as argued by Blank (2004, p1). Naturally, the above makes it clear that TCP/IP is merely a framework that governs the methods to be deployed in order to enable communication over the Internet between two computing devices. As TCP/IP is platform independent in nature, it provides a communication framework that can be deployed on any operating system on a computing device connected to the Internet, or even to a dedicated network as opposed to the World Wide Web. This further opens room for the development of new protocols and communication standards/rules that can be implemented using the TCP/IP model in any one of its five layers, as argued by Rayns et al (2003). Hence, securing the information being transferred from one end to another over a given network or the Internet can be accomplished by implementing a combination of protocols operating within the layers of the TCP/IP framework. The five layers of the TCP/IP model are:
- Application Layer
- Transport Layer
- Network Layer
- Data Link Layer and
- Physical Layer.
From the above it is evident that TCP/IP can be implemented in a given network using any number of protocols in each layer of the model, depending upon the level of security required and the speed of data transfer. This is because an increase in the number of protocols naturally increases the size of the data packet being transferred, thus having a direct impact on the speed of communication, as argued by Rayns et al (2003). It must also be noted that the protocols shown in each layer of the TCP/IP model in Fig 1 are merely a selection and not an exhaustive list of the protocol suite.
From the model represented in Fig 1 one should also appreciate that the layers of the TCP/IP model are arranged in a logical fashion so that the protocols closer to the top at the layer 1 associate themselves with the computing applications that handle data encryption and security. The protocols to the bottom of the TCP/IP stack on layer 5 on the other hand associate themselves with the actual data transfer from one end to another through establishing connection and enabling communication between sender and receiver as argued by Blank (2004).
As the research presented in this report focuses on the Transport and Network layers of the TCP/IP model a detailed overview on the five layers is beyond the scope of this report. A brief overview on each TCP/IP layer is presented below.
Application Layer – This layer of the TCP/IP model comprises the protocols that handle the data and its representation, including any encryption applied to the content, in order to transfer information effectively from one end to another. The application layer is the layer of the TCP/IP model that communicates with the actual application handling the information prior to its transfer over the Internet. Its protocols enable the interaction between the computer and the web application that performs the business logic associated with the application, preparing the information for transfer. The segmentation of data into packets and the allocation of the associated headers, however, are performed by the lower layers, so the security associated with the transfer itself is not implemented at the application layer. Application layer protocols are extensively used in client-server applications where the data transfer between the client and the server is in full-duplex mode (Feit, 1998).
Transport Layer – This is the actual layer that manages the connection between the two computers and the success or failure of the information being transferred as argued by Blank (2004). The purpose of the Transport layer protocol as the name suggests is to ensure the secure and successful transfer of information over the Internet between the communicating parties as argued by Ganesh and Thorsteinson (2003). The process of enabling end-to-end communication for successful data transfer is the major task that is accomplished using the Transport layer of the TCP/IP model.
It is also interesting to note that the transport layer of the TCP/IP model provides error tracking, flow control and data fragmentation capabilities independent of the underlying network, as argued by Feit (1998). The transport layer also performs the task of assigning a header to each data fragment of the overall information being transferred from one end to another.
The transport layer of the TCP/IP model implements two forms of communication strategies. These are connection-oriented and connectionless implementation as discussed below.
Connection-Oriented Implementation – The TCP (Transmission Control Protocol) of the transport layer accomplishes the connection-oriented strategy of data communication. The connection-oriented approach to data communication corresponds to the process where a connection must be available between the communicating parties, in conformance with the authentication and association rules, prior to actually performing data transfer. Data transfer in a connection-oriented implementation therefore depends on the extent to which the established connection remains live between the communicating computers, which makes the transfer reliable, as argued by Feit (1998). This is because termination or loss of the connection established during the course of the communication would trigger a request to resend the information, thus ensuring that all the information is transferred from one end to another. Session-based communication is one of the key security features of the connection-oriented implementation, as prolonged inactivity or termination of the session will naturally terminate the established connection, thus protecting the information transferred over the Internet. Public Key Infrastructure (PKI), which will be discussed in the next section, depends on the establishment of a connection-oriented communication strategy to ensure that the information being transferred by the transport layer protocol is protected.
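As a concrete illustration of the connection-oriented behaviour described above, the following sketch uses Python's standard `socket` module over the loopback interface; the one-shot echo server and the message contents are assumptions made purely for the example. The `connect()` call returns only once the connection with the listener has been established, and data then flows over that established connection:

```python
import socket
import threading

# A one-shot echo server: accept a single TCP connection and echo one message.
def run_echo_server(srv):
    conn, _ = srv.accept()         # returns only once a connection is established
    conn.sendall(conn.recv(1024))  # reliable, ordered delivery back to the peer
    conn.close()
    srv.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=run_echo_server, args=(srv,))
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))   # blocks until the connection is live
cli.sendall(b"hello")
reply = cli.recv(1024)
cli.close()
t.join()
print(reply)
```

If the connection is lost mid-transfer, the operating system's TCP implementation retransmits unacknowledged segments or reports an error, which is the reliability property the text describes.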
As discussed earlier, the transfer of information from one end to another in a communication channel is accomplished by segmenting the information into equal-sized segments of data called packets, each of which is assigned a header containing the details of the packet as well as its sequence in the information being transferred. The connection-oriented implementation of the transport layer has the following key features:
- Sequential data transfer – This method follows the First-In First-Out (FIFO) strategy: the sequence in which the data packets are received is the same as that in which they are sent from the source computer. This approach helps ensure that the information being transferred is not tampered with, and the loss of one of the packets will prompt the sender to resend the information. However, the major disadvantage is that an increase in the size of the information will result in poorer performance in terms of the speed of data transfer.
- Higher level of error control – As the connection-oriented approach ensures that the connection between the sender and the receiver remains live throughout the entire communication process, error control is accomplished by enabling the sender to resend the packets that were not received in the initial transfer. Controlling the loss of packets using this resend strategy naturally minimises the errors associated with the data transfer.
- Duplication Control – The connection-oriented strategy also has the inherent ability to eliminate duplicate data packets transferred thus allowing the connection-oriented architecture to ensure consistency in the information being transferred.
- Congestion Control – The TCP protocol monitors the network traffic as part of the transport layer activities. This ensures that the session established between the sender and the receiver can transfer the required information successfully before reaching a session time-out situation, as argued by Feit (1998).
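The sequential delivery and duplication control described in the list above can be sketched with a toy reassembly routine; the `reassemble` helper and the `(sequence number, payload)` packet format are hypothetical simplifications, since real TCP tracks byte offsets rather than packet indices:

```python
# Toy reassembly: restore sender order and drop duplicate segments.
# (Hypothetical helper; real TCP reassembles by byte offset, not packet index.)
def reassemble(packets):
    seen = {}
    for seq, payload in packets:
        if seq not in seen:          # duplication control: keep the first copy only
            seen[seq] = payload
    # sequential delivery: hand data to the application in sender order
    return b"".join(seen[s] for s in sorted(seen))

# Segments arrive out of order, with one duplicate of segment 2.
arrived = [(2, b"lo wo"), (1, b"hel"), (3, b"rld"), (2, b"lo wo")]
print(reassemble(arrived))  # b'hello world'
```

A real receiver would additionally detect a gap in the sequence numbers and withhold acknowledgement, triggering the resend behaviour described above.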
The client-server communication over the internet is a classical example for the implementation of a connection oriented strategy in the Transport layer of the TCP/IP model. The use of the PKI in the communication is one of the key aspects of the connection-oriented implementation that makes the TCP protocol a key element in the secure data transfer strategies of the day.
Connectionless Implementation – As the name suggests, the connectionless implementation is the case where a dedicated connection is not required to complete the data transfer between the communicating computers, as argued by Blank (2004). The User Datagram Protocol (UDP) is used in the connectionless implementation, where the data packets transferred merely carry the packet order and the source/target details alone. Transfer of data can therefore be achieved at a higher rate, as the authentication and validation of the data transferred is not restricted to a time frame or a session controlling the communication. However, the major issues are the lack of security and the potential inaccuracy of the data transferred. Alongside this, the key issue with the UDP protocol and the connectionless implementation is the lack of traceability of the information, resulting in a non-reliable communication channel, as argued by Blank (2004). UDP is thus deemed an insecure mode of communication over the Internet, lacking security measures beyond the identification of the communicating parties. It is further important to appreciate that implementing PKI using the connectionless approach would result in the exposure of the information and the lack of effective acknowledgement of the authentication between the communicating computers, thus affecting information security and providing room for network attacks that can directly affect the information being transferred.
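The contrast with the connection-oriented case can be seen in a minimal UDP exchange, again using Python's standard `socket` module; note the absence of `listen()`, `accept()` or any handshake (the loopback addresses and message contents are assumptions for the example):

```python
import socket

# Receiver: a bound UDP socket -- no listen(), no accept(), no session state.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
port = recv_sock.getsockname()[1]

# Sender: fire a datagram at the address with no prior handshake.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"datagram", ("127.0.0.1", port))

data, sender = recv_sock.recvfrom(1024)  # delivery is not guaranteed in general
send_sock.close()
recv_sock.close()
print(data)
```

Because no acknowledgement flows back to the sender, a lost or duplicated datagram goes unnoticed at the transport layer, which is the lack of traceability and reliability the text describes.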
Network Layer – Blank (2004) argues that the network layer of the TCP/IP model performs the task of delivering the data within the network once the data packet has reached the appropriate network subnet. The network layer therefore plays a critical role in identifying the correct target/destination network in order to enable effective communication between the communicating parties, as argued by Feit (1998). In the case of the World Wide Web, the network layer plays the vital role of identifying the destination network and routing the packets through the network so that they reach the destination without being tampered with by unauthorised users. The protocols widely used in the network layer include the Internet Protocol (IP) and the Internet Control Message Protocol (ICMP). The Routing Information Protocol (RIP) of the TCP/IP model, which operates predominantly at the application layer, also plays a vital role at the network layer by enabling the routing of information across networks so that it reaches the target computer in the communication channel established over the Internet. It is further critical to appreciate that the task of the network layer protocols is not only to route the packets but also to enable the transport layer protocol to carry out the communication and data transfer between the communicating computers. This makes it clear that network attacks by hackers, aimed at affecting the performance of the communicating computers in order to gain unauthorised access, are accomplished through manipulating the communication strategies implemented by the protocols in the Transport and Network Layers of the TCP/IP model.
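The header information that the network layer relies on for routing can be illustrated by unpacking the fixed 20-byte IPv4 header with Python's standard `struct` module; the `parse_ipv4_header` helper and the hand-built sample header below are assumptions made for illustration only:

```python
import struct

def parse_ipv4_header(raw):
    # First 20 bytes of an IPv4 header: version/IHL, type of service,
    # total length, identification, flags/fragment offset, TTL, protocol,
    # header checksum, source address, destination address.
    fields = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": fields[0] >> 4,
        "ttl": fields[5],
        "protocol": fields[6],  # 6 = TCP, 17 = UDP, 1 = ICMP
        "src": ".".join(str(b) for b in fields[8]),
        "dst": ".".join(str(b) for b in fields[9]),
    }

# A hand-built sample header: version 4, TTL 64, protocol 6 (TCP),
# 192.168.0.1 -> 10.0.0.2 (checksum left as zero for illustration).
sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     bytes([192, 168, 0, 1]), bytes([10, 0, 0, 2]))
hdr = parse_ipv4_header(sample)
print(hdr["src"], "->", hdr["dst"], "proto", hdr["protocol"])
```

These plain-text header fields are exactly what a spoofing attacker rewrites, since the network layer itself applies no encryption to them.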
The access to information and the actual infringement of the information which is deemed as the consequence of the hacking or network attack is related to the infringement of the information at the application layer protocols that hold the actual information being transferred (Blank, 2004). However, the attacks themselves that facilitate the aforementioned are accomplished through manipulating the procedures associated with the Transport Layer and Network Layer protocols. The attacks typically include spoofing, overloading, flooding etc., which are discussed in detail at chapter 3 of this report.
The Data Link Layer and the Physical Layer of the TCP/IP model involve the actual hardware based communication strategies that are beyond the scope of this research. Hence these two layers of the TCP/IP model are not discussed any further. It is important to appreciate the fact that the top three layers of the TCP/IP stack interact frequently in order to enable the secure communication and allocation of computing resources on the computing devices involved in the communication (Blank, 2004).
2.3: Public Key Infrastructure – an overview
PKI implements a form of cryptography known as asymmetric cryptography in order to enable secure communication between two computers over the Internet, as argued by Todd and Johnson (2001). This process mainly involves the use of a public key and a private key for encrypting and decrypting the information at the client and server ends respectively (Blank, 2004). The mechanics of encryption are beyond the scope of this research, although its role in secure communication, and the extent to which a hacker can manipulate the authentication strategies to launch an attack, are relevant. Hence the discussion in this section mainly concerns the handshake and communication strategies deployed, along with an overview of the players in the PKI. This will help in identifying the various plausible attacks and the level of manipulation that a hacker can exert over the protocols used in order to infringe the communication between the client and server computers.
It is deemed that PKI is a reliable strategy for implementing secure communication through the use of Trusted Third Party (TTP) authentication and approval of the overall communication process between the server and client computers. The key components of the PKI infrastructure that enable successful and reliable communication over the Internet are discussed below:
- Certificate Authority (CA) – The CA is the issuer and controller of the public key and the digital certificate associated with the authentication and transfer of secure information over the connection established using the TCP protocol. The primary role of the CA is to generate the public and private keys simultaneously for a given server computer or service provider (Blank, 2004). The public key, as the name suggests, is made available in the public domain for encryption/decryption of the information at the client end of the connection. The private key is not shared; it is stored at the server and used for encryption/decryption of the information, as applicable, at the server end of the connection. From the above it is evident that the role of the CA is pivotal to the effective implementation of PKI for secure communication free of network attacks. If the server hosting the CA application is attacked, for instance using cross-site scripting or flood attacks, the stored public keys as well as the associated verification certificates are compromised, resulting in the hacker gaining control over the communication channel without the knowledge of the server or the client, as argued by Blank (2004). This makes it clear that security at the CA computer is critical to establishing a reliable TTP for implementing connection-oriented communication using the TCP protocol of the TCP/IP model.
- Registering Authority (RA) – The RA, as the name implies, is the verifier of the digital certificate before it is issued to a requestor, as argued by Todd and Johnson (2001). The role of the RA in the PKI implementation is to provide independent authorisation of the digital certificates issued, thus offering a secondary verification of the information prior to communicating with the server. The presence of an independent verifying program or computer as part of the communication makes PKI a reliable strategy for implementing connection-oriented communication over the Internet in a secure fashion. It is also deemed to be the key weakness of the PKI strategy, since the reliability of the RA as a TTP in the communication process dictates the effectiveness of the communication and the protection of the server from intruder attacks, as argued by Todd and Johnson (2001). However, the matter of debate in this research is not the reliability of the CA or RA but the potential attacks that threaten the stability of the computers hosting the CA and RA programs. The key area where hackers can attack and disable the RA or CA computer, eventually compromising the information held within, is the handshake process, in which the RA or CA computer is expected to receive an acknowledgement (ACK) from the requesting computer for each message successfully communicated. It is through manipulating these handshake communications that a CA or RA can be compromised, as the communication channel is expected to remain open for a specific time period to receive the ACK, and sufficient resources must be allocated to complete the data transfer. Abuse of this feature is one of the major areas where connection-oriented communication faces the threat of attack. These attacks are discussed elaborately in chapter 3.
- Directories – The directories are the locations in the public domain that host the public keys used for encrypting the information. The keys are normally held in more than one location in order to enable easy and quick access to the information, and to serve as a verification strategy to ensure that the key retrieved is indeed the valid one for data transfer between the client and a given server computer.
- Certificate Management System (CMS) – This is the application that controls or monitors the certificates issued and facilitates the verification process. The CMS forms the core of the PKI infrastructure as the CA and RA computers in the given PKI implementation are expected to host a validated CMS program to enable the connection-oriented communication between the client and the server. The key issue associated with the case described above is the fact that the CMS program itself is an independent application and hence its reliability/robustness to prevent malicious attacks alone dictates the extent to which a given CA or RA is reliable over the Internet.
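The handshake abuse described under the RA discussion above, in which a listener holds resources open while waiting for an ACK that never arrives, can be modelled with a toy simulation; the `Listener` class, backlog size and timeout are hypothetical simplifications, and no real network traffic is involved:

```python
class Listener:
    """Toy model of a server's half-open (awaiting-ACK) connection queue."""
    def __init__(self, backlog, timeout):
        self.backlog = backlog
        self.timeout = timeout
        self.half_open = {}            # client id -> time the request arrived

    def _expire(self, now):
        # Free slots whose final ACK never arrived within the timeout.
        self.half_open = {c: t for c, t in self.half_open.items()
                          if now - t < self.timeout}

    def on_request(self, client, now):
        self._expire(now)
        if len(self.half_open) >= self.backlog:
            return False               # queue full: legitimate clients refused
        self.half_open[client] = now   # hold resources while awaiting the ACK
        return True

# A flood of spoofed requests that never complete the handshake.
srv = Listener(backlog=5, timeout=30.0)
for i in range(5):
    srv.on_request(f"spoofed-{i}", now=0.0)
print(srv.on_request("legitimate", now=1.0))   # refused: backlog exhausted
print(srv.on_request("legitimate", now=31.0))  # accepted: stale entries expired
```

The model shows why the fixed time window and resource allocation mentioned above are attackable: filling the queue faster than entries expire denies service to everyone else.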
The key security strategy in the PKI implementation is the sharing of the public key whilst retaining the private key at the server computer, as argued by Burnett and Paine (2001). This strategy allows the server computer to encrypt or decrypt information without depending upon the public key, enabling a two-pronged approach: information encrypted using the private key can be deciphered using the public key, and vice versa. Although the use of two independent keys helps overcome the security threats to the information being transferred, the transfer process itself is not governed by the PKI. This is the major weakness of the PKI infrastructure, as it leaves room for malicious attacks that can hamper the performance of the CA, RA or host server computers, as argued by Burnett and Paine (2001).
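The two-way property of the key pair described above can be demonstrated with a textbook RSA example using deliberately tiny primes; this is an illustration of the arithmetic only and must never be used for real security:

```python
# Toy RSA with tiny primes -- illustration only, never use for real security.
p, q = 61, 53
n = p * q                  # 3233: the public modulus, shared with everyone
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent (coprime with phi)
d = pow(e, -1, phi)        # private exponent: modular inverse of e (Python 3.8+)

m = 65                     # a message encoded as a number smaller than n
c = pow(m, e, n)           # encrypt with the public key (e, n)
assert pow(c, d, n) == m   # only the private key (d, n) recovers it

s = pow(m, d, n)           # conversely, "sign" with the private key...
assert pow(s, e, n) == m   # ...and anyone holding the public key can verify
print("round trips ok")
```

The second round trip is the basis of digital signatures: because only the key holder could have produced `s`, verification with the public key authenticates the sender.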
From the above discussion it is evident that the security established using PKI is mainly dependent on the following key elements of the PKI infrastructure:
- CA and RA – The validity and reliability of these computers play a vital role in the effective implementation of PKI. Apart from the fact that the client computer sending the information depends upon these computers for the security of the information in its entirety, the availability of these computers and their responses, in terms of session time control and the prevention of session time-outs, are critical to successful communication in a connection-oriented implementation using the TCP protocol. An attack on the server hosting the CA or RA, mainly flooding or Denial of Service, will result in the failure of the PKI infrastructure through lack of availability. This situation is one of the major elements that must be addressed as part of the security strategies implemented on the transport layer protocols.
- Encryption Algorithm – The encryption algorithm used for issuing the public and private keys is another element that influences the security and reliability of PKI, as argued by Burnett and Paine (2001). The effectiveness of the algorithm used is essential not only for ensuring the security of the information through encryption; it also dictates the size of the information after encryption, and hence the speed of data transfer for a given encryption strategy. As the complexity of the encryption algorithm naturally increases the size of the data being transferred, thus affecting the speed of communication, it is critical to establish a balance between security and speed in order to enable effective communication over an established connection. The encryption algorithm also dictates the extent to which a hacker can break into the information being transferred whilst launching a transport/network layer attack, as argued by Burnett and Paine (2001). Hackers launching malicious attacks at the transport or network layer tend to use the time gained to decipher the information being transferred in order to use the data for personal benefit. Code breaking at the protocol level therefore mainly depends on the speed with which a given payload transferred over a connection can be decrypted prior to the termination of the connection itself, as argued by Burnett and Paine (2001). From the above arguments we can deduce that the encryption poses the threat of a single point of failure to the PKI, in terms of either being too weak to prevent infringement or so strong that it affects the communication speed, as argued by Nash et al (2001).
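The trade-off between cryptographic strength and processing cost noted in the list above can be observed with Python's standard `hashlib` and `timeit` modules; the one-megabyte payload and the set of algorithms compared are arbitrary choices made for the illustration:

```python
import hashlib
import timeit

data = b"x" * 1_000_000  # one megabyte of payload to digest

# Compare digest size (a proxy for strength/overhead) against hashing time.
for name in ("md5", "sha1", "sha256", "sha512"):
    h = hashlib.new(name, data)
    elapsed = timeit.timeit(lambda: hashlib.new(name, data), number=5)
    print(f"{name:7s} digest = {h.digest_size * 8:4d} bits   5 runs: {elapsed:.4f}s")
```

Longer digests and more rounds cost more time per message, which is exactly the security-versus-speed balance the text argues a PKI deployment must strike.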
Advantages or benefits of PKI
The major benefits of the PKI include the following
- The TTP presence enables higher level of security through verification by independent entities in the communication process. The CA and RA in the PKI play a vital role in achieving the aforementioned.
- The dedication of resources to developing stronger algorithms for generating reliable public and private keys is yet another advantage of the PKI implementation. This makes it clear that the weaknesses of the transport layer protocol (TCP) or the network layer protocols (IP, ICMP), in terms of resend requests and other key elements of connection verification, can be overcome through robust algorithms. The exponential growth of electronic commerce is one of the key factors behind the availability of resources dedicated to the development of PKI security strategies (Nash et al, 2001).
- The security infrastructure behind the storage and retrieval of the public keys is yet another area where the reliability and effectiveness of PKI is evident. Given a reliable CA and RA, the security is indeed robust, and both the information being transferred and the communication process itself are secure, as argued by Nash et al (2001).
Constraints, Weaknesses and threats
- TTP reliability and costs – As discussed before, the major issue is TTP reliability. The involvement of the TTP not only raises questions of reliability but also increases the operational costs of implementing and continuously maintaining the CA and RA (Todd and Johnson, 2001). These maintenance costs naturally hamper the associated development process by limiting the funds available for developing the PKI infrastructure.
- Encryption limitations – The encryption applied is limited to the data being transferred: only the contents of the application layer are encrypted, leaving the transport and network layers unsecured. This provides room for hackers and intruders to tamper with the header information at the network or transport layer to trigger a man-in-the-middle attack. Since a hacker can mask the header information of packets at the transport layer, especially in a connection-oriented architecture, by launching a Trojan attack, the security of the data being transferred cannot be relied upon entirely if the information is infringed at the transport level. This is plausible if the hacker can successfully intrude into the network switch or hub of the service provider in a given geography, which would let the hacker view the traffic as an administrator, affecting both the information transfer and the performance of the service provider through prolonged wait times or session time-out extensions.
- The fact that PKI can be implemented on a model like TCP/IP makes it vulnerable: if all service providers deploy the same communication model, hackers gain a common ground to attack. This situation normally arises with application layer or transport layer protocols, as argued by Todd and Johnson (2001), because the protocols in these two layers are expected to interact seamlessly in a PKI set-up, so compromising the protocol security of one layer can compromise the other.
2.4: Protocol Attacks
In the overview presented in sections 2.2 and 2.3 the attacks on the protocols were mentioned only briefly. This section provides an overview on the kind of attacks that are typically launched over the Transport layer and Network layer protocols prior to analysing specific attacks in chapter 3.
The attack on the transport layer and network layer protocols is mainly accomplished through abusing the handshake principles that underlie the protocol's effectiveness for data accuracy and security, as argued by Conway (2004). This is mainly because the handshake principle behind each protocol is the major attribute influencing the protocol's effectiveness, and hence forfeiting or discarding the handshake rules is not plausible at either the server or the client end of the communication. One such popular attack on the transport layer's TCP protocol is the SYN flooding attack. The key strategy behind this attack is to flood the server with malicious SYN requests from unauthorised users or hackers, exhausting the pool of connections available on the server side of the application. This naturally makes it impossible for a legitimate client to establish a connection, as the server has already reached its connection limit.
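The backlog-exhaustion mechanic described above can be illustrated with a minimal sketch. This is a toy model of the server's half-open connection queue, not a real attack tool; the backlog capacity, class name and the example addresses are assumptions made for illustration only.

```python
# Illustrative model of SYN-flood backlog exhaustion (hypothetical class;
# capacity and addresses are arbitrary values chosen for the sketch).

class SynBacklog:
    """Models a server's queue of half-open (SYN_RECEIVED) connections."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.half_open = set()  # (src_ip, src_port) of pending handshakes

    def on_syn(self, src_ip, src_port):
        """Return True if the SYN is queued, False if the backlog is full."""
        if len(self.half_open) >= self.capacity:
            return False  # a legitimate client is refused at this point
        self.half_open.add((src_ip, src_port))
        return True

backlog = SynBacklog(capacity=128)

# The attacker floods spoofed SYNs that will never complete the handshake.
for port in range(128):
    backlog.on_syn("203.0.113.7", 40000 + port)

# A legitimate client now finds the backlog exhausted.
accepted = backlog.on_syn("198.51.100.2", 55123)
print(accepted)  # False: the server cannot queue the legitimate SYN
```

The model shows why the attack works without the attacker ever completing a connection: the half-open entries alone consume the finite backlog.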
From the above overview it is clear that attacks on the transport layer mainly aim to render the server or the client computer inaccessible to authorised users in order to extend the session time-out period for the hacker. Upon achieving this at the transport or network layer level, the hacker can then utilise the time gained to decrypt the information in the payload and gain unauthorised access to sensitive information or resources, as argued by Conway (2004). Code hacking is yet another procedure prominent in the case of transport layer and network layer protocols. Because the algorithms used by these protocols are not encryption standards, a hacker can effectively launch an attack that modifies the header information so that the packets are delivered to a server location different from the actual destination. Although similar to spoofing, this procedure does not replicate the server's environment to trick the client into revealing personal information; instead it intercepts the information on the channel by acting as a server at the hub or switch level, where the network traffic can be filtered and traced to a single user effectively (Conway, 2004).
Other attacks on the transport layer and network layer protocols include the classical Denial of Service (DoS) attacks (Conway, 2004). Increasing the number of connections by flooding the server, as in SYN flooding, is one method of making the service unavailable. However, using code hacking to launch scripts at the server side (web server) that change the application state to busy is yet another successful strategy for implementing DoS attacks. The methods for launching a DoS attack are limited only by the creativity and resources available to the hacker, as the attempt to make a web service unavailable to its intended users can be accomplished at every layer of the TCP/IP model (Conway, 2004). A detailed overview of the range of attacks plausible on the transport and network layer protocols is presented in chapter 3.
Chapter 3: Protocol Attacks
This chapter presents an overview of the various attacks plausible on the transport layer and network layer protocols, focusing mainly on the TCP, UDP, ICMP, RIP and IP protocols. The attacks are described in overview rather than through the actual process of implementing them on a network. The countermeasures presented in chapter 4 provide a detailed overview of that aspect, as it is prudent to discuss each attack alongside its remedy. This approach is deemed to develop a better appreciation of the techniques behind the attacks.
3.1: Spoofing and Flooding
Spoofing is deemed to be the hackers' principal network layer attack strategy, as the process aims to divert the communication to the hacker's server computer as opposed to the intended server (Conway, 2004). Spoofing is mainly implemented by intercepting the Internet Protocol or the RIP protocol whilst it performs the handshake with the communicating server. Intercepting a network layer protocol is plausible only at the router or switch level, as the transfer of information from one end to another over the Internet is routed through internet hubs. This process normally communicates over dedicated ports across the globe, which gives hackers the ability to target the specific communication ports of a given channel connecting to the server and so divert the communication to the hacker's server effectively. The variants of the spoofing attack include the following
- ICMP floods
- Permanent Denial-of-Service attacks
From the aforementioned it is evident that spoofing attacks on the network layer can be achieved by various strategies intended for the one purpose of diverting the communication to a computer other than the intended one. Spoofing attacks can therefore affect communication not only to the server but also from it, if the hacker can effectively mask the client computer or introduce cross-site scripting by hacking into it. Cross-site scripting, as argued by McClure et al (2005), is the process by which a hacker can successfully run untrusted scripts or code on the client computer without the user's notice. The major issue with cross-site scripting in the case of transport layer and network layer protocols is that the information infringement is achieved at the application layer itself, and hence the TCP or IP protocol can be manipulated with ease not only to communicate with the server but also to act as a conduit for the hacker to gain access to the server unnoticed.
3.2: Tiny Fragment Attack
This is a kind of network attack that is targeted mainly on the TCP protocol (Miller, 2001). The scope of the attack is mainly focused on cases ‘where the filtering rules allow incoming connections to a machine AND there other ports which allow only outgoing connections on the same host, the attack allows incoming connections to the supposedly outgoing-only ports’ (Miller, 2001, p1).
One should note that only the initial connection message needs to be fragmented in order to launch the aforementioned attack. Upon establishing the connection, the hacker can communicate with the server and access information through the outgoing ports without the server or the attacked computer realising the infringement. However, the security policy deployed on the network in question must also be taken into account, as it can help control the attack even if it cannot prevent it.
The process involved in the attack is as follows
“Fragment 1: (Fragment offset = 0; length >= 16) Includes whole header and is entirely legal. Typically it describes a SYN packet initiating a new TCP connection to a port on the target host that is allowed to receive incoming connections. e.g., Incoming connection to port 25 SMTP.
Fragment 2: (Fragment offset = 0; length = 8) Is only the first 8 bytes and could be legal depending on the other 8-bytes of the header, but is NOT legal combined with the corresponding bytes from Fragment 1. Such a fragment includes only the port numbers and sequence number from the TCP header. Typically this packet replaces the destination port number with a port number on which the destination host that is not allowed to receive incoming connections.
Fragment 3: (Fragment offset >= 2; length = rest of message) Contains no header and completes the message. (This third fragment is not part of the attack. However Fragment 1 cannot be the complete message or it would be passed up to the application before Fragment 2 arrived so a third fragment is necessary.)” (Miller, 2001, p2).
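The fragment sequence quoted above can be made concrete with a small reassembly sketch. It models the naive "later fragment overwrites earlier bytes" behaviour that the attack relies on; the port numbers (25 allowed, 23 filtered) and sequence number are hypothetical values chosen for the example, and offsets are in 8-byte units as in the IP fragment offset field.

```python
import struct

# Naive reassembly: later fragments overwrite earlier bytes at the same offset,
# which is exactly what the overlapping-fragment variant exploits.
def reassemble(fragments):
    buf = bytearray(64)
    for offset_units, data in fragments:  # offset in 8-byte units (as in IP)
        start = offset_units * 8
        buf[start:start + len(data)] = data
    return bytes(buf)

# Fragment 1: a full TCP header, destination port 25 (allowed by the filter).
# Fields: source port, destination port, sequence number, ack number.
frag1 = struct.pack("!HHII", 12345, 25, 1000, 0) + bytes(8)

# Fragment 2: only the first 8 bytes, rewriting the destination port to 23.
frag2 = struct.pack("!HHI", 12345, 23, 1000)

packet = reassemble([(0, frag1), (0, frag2)])
src, dst = struct.unpack("!HH", packet[:4])
print(dst)  # 23 - the filtered port, although the filter only ever saw port 25
```

The filter inspected Fragment 1 and saw an allowed port; after reassembly the host delivers the segment to the port the filter would have blocked.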
From the description above it is clear that the effective implementation of the attack is accomplished through fragmenting the header data in order to lead the host computer to divert the communication from the incoming port to the outgoing port. As opposed to spoofing or the flooding attacks, the Tiny Fragment attack is performed through focusing on a specific target computer and not a landscape of computers. This makes it clear that the servers or host computers with dedicated incoming and outgoing ports must implement rigid security policies in order to flag such attacks if not prevent them as argued by Conway (2004).
3.3: Denial of Service Attacks
The DoS attacks discussed in chapter 2 vary in nature depending upon the protocol layer being attacked and the nature of the application (Conway, 2004). The classical example is the Tiny Fragment Attack discussed in section 3.2. In this case, the outgoing port is not only turned into a dedicated communication channel for the hacker but also prevents the attacked computer from providing further connections to intended users. This naturally results in a DoS, especially on server computers where the communication ports are dedicated to incoming or outgoing messages, as argued by Conway (2004).
Another variant of the DoS attack targets the RIP protocol, especially in the network switches and routers used in a given network, as argued by Conway (2004). RIP attacks inflicting denial of service are mainly accomplished at the hub level. The attack typically diverts all requests from the intended users to the hacker's server, which provides a busy response, thus preventing the intended users from connecting to the server.
3.4: Blind connection-reset attack
This is a TCP attack intended to reset the TCP connection by using ICMP error messages. When a TCP connection established between two computers handles an ICMP error message, it performs one of the following two fault recovery functions
- Hard-Error – This is the case where there is a failure that cannot be recovered within the session time-out limits (Conway, 2004). If the network problem being reported is a hard error, TCP will abort the corresponding connection, making the port available for communication.
- Soft-Error – This is the case where the error reported by the ICMP message is a soft error, i.e. the error is only temporary and recoverable before session time-out. In this case TCP will just record the information and repeatedly retransmit its data until it is either acknowledged or the connection times out (Johnston, 2006).
From the aforementioned it is evident that an attacker can effectively terminate the TCP connection by sending a hard error message using the ICMP protocol, performing a blind connection-reset attack. The attacker can even be off-path, i.e. the attacker need not be connected directly to the network but merely needs the facility to trigger ICMP hard error messages frequently in order to reset any TCP connection taking place in a given communication channel, as argued by Conway (2004). To perform such an attack, an attacker needs only to send an ICMP error message indicating a "hard error" to either of the two TCP endpoints of the connection. This will naturally terminate the connection, as the TCP fault recovery policy is to abort immediately (Conway, 2004). To launch the attack successfully, all the attacker needs to know is the socket pair that identifies the TCP connection to be attacked. In some scenarios the IP addresses and port numbers in use may be easily guessed or known to the attacker, making this a key threat to TCP connections and transport layer communication.
3.5: Blind throughput-reduction attack
This attack uses the ICMP protocol to affect TCP communication without resetting or terminating the connection; it aims instead to reduce the effective bandwidth the connection uses to transfer information. Since RFC1122 requires 'that hosts MUST react to ICMP Source Quench messages by slowing transmission on the connection', an attacker who triggers such an error message using ICMP (type 4, code 0) against a TCP endpoint can reduce the rate at which data is sent over the connection. This causes the host to deliberately reduce its bandwidth usage even when resources are available, because of the ICMP error message the attacker transmitted to one of the endpoints of the socket pair (Conway, 2004). RFC1122 also recommends putting the corresponding connection into the slow-start phase of TCP's congestion control algorithm (RFC2581). A connection repeatedly pushed back to the initial congestion window can therefore have its throughput reduced to a great extent without the server facing any resource constraints (Conway, 2004). This approach has been one of the major issues faced by e-commerce service providers and by the CAs and RAs of the PKI infrastructure. The introduction of such an error message can drastically reduce the number of online transactions handled by the server, affecting performance both technically and monetarily, as argued by Conway (2004).
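The slow-start reaction described above can be sketched as a minimal congestion-window model. The class name, the initial window and threshold values are illustrative assumptions; the reaction itself follows the RFC1122/RFC2581 behaviour the text describes, where one forged Source Quench collapses the window to a single segment.

```python
class TcpSender:
    """Minimal congestion-window model, in segment units (RFC 2581 style)."""

    def __init__(self, cwnd=32, ssthresh=64):
        self.cwnd = cwnd          # congestion window, in segments
        self.ssthresh = ssthresh  # slow-start threshold, in segments

    def on_source_quench(self):
        # RFC 1122 reaction: slow transmission on the connection. A common
        # implementation re-enters slow start, collapsing cwnd to 1 segment.
        self.ssthresh = max(self.cwnd // 2, 2)
        self.cwnd = 1

sender = TcpSender(cwnd=32)
sender.on_source_quench()  # effect of a single forged ICMP type 4, code 0
print(sender.cwnd)         # 1 - throughput collapses with no real congestion
```

Because the attacker can repeat the forged message, the connection never climbs back out of slow start, which is exactly the sustained throughput reduction described above.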
3.6: Blind performance-degrading attack
This attack focuses on subverting the "Path MTU Discovery" (PMTUD) mechanism used by IP hosts to determine the Path MTU (PMTU), i.e. the datagram size that does not require fragmentation across the connection established for data transfer. In a blind performance-degrading attack, the attacker uses the ICMP protocol to send a forged 'Destination Unreachable, fragmentation needed and DF set' packet (or its ICMPv6 counterpart) to the sending host, advertising a small Next-Hop MTU.
This naturally causes the attacked system to reduce the size of the packets it sends in subsequent communication, affecting the performance of the host server without having to increase the number of outgoing ports or flood the connections.
The attack is achieved by abusing the PMTUD mechanism used by TCP to determine the maximum packet size that can be sent without further fragmentation. It is especially effective where a given IP host intends to send a large quantity of data from one end to another, as argued by Conway (2004). The consequences of the blind performance-degrading attack include the following
- It increases the header/data ratio and hence the overhead needed to send data to the remote TCP endpoint. This makes the overall process tedious and presents the server with false information on which to decide whether to terminate the connection or reduce the bandwidth allocated for sending the information.
- The second and most drastic consequence is actual performance degradation through increased CPU utilisation at the server computer. If the attacked system intends to keep the same throughput it had before the attack, it must increase its packet rate (Conway, 2004). The IRQ (Interrupt Request) rate then increases significantly on virtually all the systems involved in the connection, which naturally raises processor utilisation and degrades overall system performance at the hosting server.
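The increase in header/data ratio and packet rate can be quantified with a short calculation. The header sizes assume IPv4 (20 bytes) plus TCP with no options (20 bytes); the transfer size and the attacker-advertised MTU of 296 are illustrative values, not figures from the text.

```python
# Overhead caused by a forged "fragmentation needed" message advertising a
# small Next-Hop MTU. Header sizes assume IPv4 (20) + TCP (20), no options.
IP_HEADER, TCP_HEADER = 20, 20

def segments_and_overhead(payload_bytes, mtu):
    mss = mtu - IP_HEADER - TCP_HEADER      # payload carried per packet
    packets = -(-payload_bytes // mss)      # ceiling division
    header_bytes = packets * (IP_HEADER + TCP_HEADER)
    return packets, header_bytes / payload_bytes

# A 1 MiB transfer at a normal Ethernet MTU vs. an attacker-advertised MTU.
print(segments_and_overhead(1_048_576, 1500))  # 719 packets, ~2.7% overhead
print(segments_and_overhead(1_048_576, 296))   # 4096 packets, ~15.6% overhead
```

The same payload now requires roughly 5.7 times as many packets, which is the source of both the header overhead and the interrupt-rate increase described above.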
3.7: TCP Port 80 attacks
This is one of the areas most extensively targeted by hackers, as argued by Conway (2004). This is naturally because the web servers of electronic commerce implementations across the World Wide Web communicate through port 80 of the client and server computers, providing a common operational ground for attackers to launch a variety of attacks on Port 80.
The graph presented in fig 1 provides an insight on the extent to which the TCP port 80 attacks have been utilised by hackers across the globe.
From the above it is evident that the requests targeting TCP Port 80 are not only voluminous in nature but also vary drastically with the intention of the attacker, making the process complex and equally vulnerable, as argued by Bellamy (2002). Fig 2 further presents a geographical segmentation of the range of Port 80 attacks
From the figure above it is evident that the range of attacks, however varied in nature, predominantly involves HTTP header attacks and FTP attacks. These application layer protocols are attacked by communicating over TCP Port 80, providing room for malicious information to enter the client or server computer (Conway, 2004).
From the discussion presented in this chapter it is evident that the range of attacks on the transport and network layers can directly or indirectly attack end users through any number of methods, from merely introducing a DoS at the server up to affecting processor utilisation at the attacked computer, resulting in performance degradation. In the next chapter a range of countermeasures, both attack-specific and generic to the protocols concerned, is presented to the reader.
Chapter 4: Counter-Measures
4.1: Spoofing countermeasures
4.1.1: Detect multiple replies (IDS)
The detection of multiple replies using an intrusion detection system (IDS) is deemed to be one of the major strategies for identifying and preventing spoofing, as argued by Conway (2004). It is especially prominent in man-in-the-middle attacks, where both the client and the server are spoofed by the attacker, as argued by Miles et al (2007). Detection is accomplished by monitoring the network traffic and the rate at which a connected computer receives requests from the communicating computer (Miles et al, 2007). In a spoofing or man-in-the-middle attack the attacker must send multiple requests to both the client and the server simultaneously in order to control the communication between both ends without the connection breaking through session time-out (Miles et al, 2007). This sending of multiple replies floods the victim computers and increases the level of CPU resources dedicated to the task. The procedure is deemed effective for identifying and controlling spoofing, especially DNS spoofing, as argued by Miles et al (2007).
4.1.2: Header Verification
Another popular and effective spoofing countermeasure is the careful treatment of the TCP connection combined with verification of the header data and port randomization (Conway, 2004); the latter is discussed later in this chapter. Header verification, especially checking the sequence numbers and verifying the payload, provides the details of the communicating computer as well as the level of requests sent or received. As the spoofing computer must communicate simultaneously with the client and the server computers to implement a man-in-the-middle attack, verification of the header information will in most cases help identify a potential spoofing attack on the client or the server computer (Conway, 2004).
4.1.3: URL verification
This is the approach where the details transferred over the URL and the actual IP address being connected to are monitored in order to identify a potential attack. Conway (2004) argues that the use of firewalls and the listing of trusted sites mapped against their DNS schemes enables the firewall program in place to effectively identify a spoofing attack on a computer, preserving network security and preventing potential loss of information or connectivity to the intended server.
Miles et al (2007) further argue that listing trusted sites and storing their DNS data as part of the firewall program will also help overcome the issues associated with the allocation of resources and URL spoofing. This is because the policies drafted as per RFC1122 expect the host not to transfer sensitive information over the URL as part of the communication (Conway, 2004).
4.2: Tiny Fragment Attack – Countermeasure
The most critical attack faced by the transport layer protocol at the server side that can affect the performance as well as infringe information is the Tiny Fragment Attack discussed in section 3.2.
The validation method proposed in RFC3128 to overcome the Tiny Fragment attack and Overlapping Fragment attacks is mainly accomplished by rejecting packets whose Fragment Offset (FO) value is 1. Rejecting packets with FO = 1 blocks the fragments used to rewrite the header, since a legitimate first fragment is expected to have FO = 0 and a legitimate TCP packet is never fragmented at an offset of only 8 bytes. However, if none of the fragments sent has FO = 1, the packets are not rejected, which naturally leaves the communication channel open to the attacker.
The RFC1858 states that ‘The indirect method relies on the observation that when a TCP packet is fragmented so as to force "interesting" header fields out of the zero-offset fragment, there must exist a fragment with FO equal to 1’.
The above statement holds because, when a TCP packet is fragmented so as to force the interesting header fields out of the zero-offset fragment, there must exist a fragment with FO = 1, allowing the filter to reject the communication to the outgoing port, as argued by Conway (2004). In the case of deliberate attacks, however, relying on this observation alone is the major weakness of the countermeasure. Hence an extended countermeasure is implemented as follows.
In addition to blocking fragments with FO = 1, the server must also reject, at the outgoing ports, all fragments with FO = 0 that carry an incomplete header. This strategy helps overcome the Tiny Fragment attack as well as the Overlapping Fragment attack on the TCP protocol.
The driver program thus required as part of the protocol configuration algorithm is as follows
‘IF FO=0 AND PROTOCOL=TCP AND TRANSPORTLEN < tmin THEN
DROP PACKET
IF FO=1 AND PROTOCOL=TCP THEN
DROP PACKET’ (RFC 1858, RFC 3128)
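The two filtering rules can be expressed as a small runnable predicate. The field names and the `TMIN` value are local assumptions for the sketch; `TMIN` must at least cover the bytes up to and including the TCP flags field.

```python
# Runnable sketch of the RFC 1858 / RFC 3128 filtering rules.
TCP_PROTOCOL = 6
TMIN = 16  # assumed minimum transport-header bytes in a zero-offset fragment

def should_drop(fragment_offset, protocol, transport_len):
    # Direct method (RFC 1858): a zero-offset TCP fragment too short to
    # carry the interesting header fields is dropped.
    if fragment_offset == 0 and protocol == TCP_PROTOCOL and transport_len < TMIN:
        return True
    # Indirect method (RFC 3128): any TCP fragment with offset 1 is dropped,
    # since a legitimate TCP packet never fragments at an offset of 8 bytes.
    if fragment_offset == 1 and protocol == TCP_PROTOCOL:
        return True
    return False

print(should_drop(0, TCP_PROTOCOL, 8))   # True  - tiny first fragment
print(should_drop(1, TCP_PROTOCOL, 8))   # True  - overlapping second fragment
print(should_drop(0, TCP_PROTOCOL, 20))  # False - legitimate full header
```

Note that the rules apply only to TCP: fragments of other protocols pass through unchanged, which is why the protocol field is checked in both conditions.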
4.3: Denial of Service Attacks
The countermeasure for DoS attacks on the server involves assessing the waiting time a client incurs before the server establishes the connection. The client guarantees established as part of the service policies on the server can help achieve this. As the client requests an attacker sends to flood the server normally originate from a single DNS, the connection service policy of the server must be programmed to distinguish such anomalies and restrict multiple connections from a single user (Johnston, 2006).
Johnston (2006) further argues that assessing the waiting times helps the server identify the connection and activity of the users connected in order to overcome DoS attacks. The following waiting-time variables, when assessed as part of the connectivity policies and client guarantees, help control DoS attacks to a great extent.
Maximum Waiting Time (MWT) – This variable expresses the guarantee that a client connection must be accepted within a preset MWT value of T. The agreed MWT value and the policies surrounding the provision of this guarantee must be registered as part of the configuration of the web server catering for client requests.
Finite Waiting Time (FWT) – This is the case where the client request is eventually accepted for service.
Probabilistic Waiting Time (PWT) – This is the guarantee that the probability of a client's request being accepted for service within time T is not less than p (where p is independent of the attack). Defining this rule in a given network environment helps the server determine whether a DoS attack is under way by detecting denied connections for intended clients. The effective use of the PWT value thus helps detect and prevent TCP "SYN" flooding attacks as well as other DoS attacks on the server (Johnston, 2006).
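A waiting-time admission policy of this kind can be sketched as follows. The threshold values, the per-source connection cap and the class name are illustrative assumptions; the sketch only shows the shape of the MWT check and the single-source restriction described above.

```python
import time
from collections import defaultdict

# Hypothetical admission policy combining the MWT guarantee with a cap on
# connections per source, to flag flooding from a single origin.
class AdmissionPolicy:
    def __init__(self, max_wait_s=5.0, per_source_cap=10):
        self.max_wait_s = max_wait_s          # MWT: accept within T seconds
        self.per_source_cap = per_source_cap  # restrict one source's share
        self.pending = defaultdict(int)

    def request(self, src_ip, enqueued_at, now):
        if now - enqueued_at > self.max_wait_s:
            return "violated-MWT"   # guarantee broken: likely under attack
        if self.pending[src_ip] >= self.per_source_cap:
            return "rejected"       # a single source is flooding the queue
        self.pending[src_ip] += 1
        return "accepted"

policy = AdmissionPolicy()
t0 = time.monotonic()
for _ in range(12):
    print(policy.request("203.0.113.7", t0, t0), end=" ")
# the first 10 requests are accepted; the flooding source is then rejected
```

An MWT violation is the server's signal that its guarantee can no longer be met, which is the condition the text uses to infer an ongoing DoS attack.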
As DoS is deemed to be the end result of almost all server-side attacks on the transport layer protocol, the countermeasures discussed here are only generic, and attack-specific countermeasures are critical for successful prevention. Since the TCP attacks using ICMP error messages (sections 3.4, 3.5 and 3.6) also inflict DoS on the intended clients, the countermeasures specific to those attacks also contribute to preventing DoS attacks.
4.4: Countermeasures specific for Blind connection-reset attack
- Port Randomization – In order to perform the ICMP attacks on a TCP connection or the server machine, an attacker would need to guess (or know) the four-tuple that identifies the connection to be attacked. By increasing the port number range used for each outgoing TCP connection, the process of identifying the required four-tuple can be made harder for the attacker, posing a significant barrier to the ICMP attacks (Johnston, 2006).
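The effect of widening the ephemeral port range can be shown with a short calculation. The ranges used here are assumptions for illustration: a narrow legacy-style range versus the IANA ephemeral range 49152-65535.

```python
import random

# Sketch of ephemeral-port randomization and its effect on an off-path
# attacker's chance of guessing the client port in a single attempt.
def pick_port(lo, hi):
    return random.randint(lo, hi)

def guess_probability(lo, hi):
    return 1 / (hi - lo + 1)

print(guess_probability(1024, 4999))    # narrow legacy range: 1 in 3,976
print(guess_probability(49152, 65535))  # randomized IANA range: 1 in 16,384
```

Since the attacker usually knows or can guess the two IP addresses and the server port, the client port is often the only unknown in the four-tuple, so widening and randomizing its range directly multiplies the attacker's work.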
- Filtering ICMP error messages based on the ICMP payload – Conway (2004) argues that the source address of ICMP error messages does not need to be spoofed to perform the attack. It follows that firewall programs performing ingress and egress packet filtering based on the source IP address of the IP header contained in the payload of the ICMP error message would help prevent this attack.
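The payload-based check can be sketched as a lookup against the host's table of live connections: the IP/TCP header quoted inside the ICMP error must correspond to a connection this host actually has open. The addresses, ports and function name below are hypothetical values for the example.

```python
# Filtering ICMP errors by their payload: the quoted four-tuple inside the
# ICMP message must match a connection this host actually has open.
active_connections = {
    # (local_ip, local_port, remote_ip, remote_port)
    ("192.0.2.10", 443, "198.51.100.2", 55123),
}

def accept_icmp_error(embedded_src, embedded_sport, embedded_dst, embedded_dport):
    """Accept the error only if it quotes one of our live connections."""
    return (embedded_src, embedded_sport,
            embedded_dst, embedded_dport) in active_connections

# A forged error quoting a guessed but non-existent connection is dropped.
print(accept_icmp_error("192.0.2.10", 443, "198.51.100.2", 55123))  # True
print(accept_icmp_error("192.0.2.10", 443, "198.51.100.2", 40000))  # False
```

This check is complementary to port randomization: randomization makes the four-tuple hard to guess, and the payload filter rejects errors that quote a wrongly guessed one.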
- Changing the reaction to hard errors – The major error codes transmitted by ICMP protocol that correspond to hard-error include
- ICMP type 3 (Destination Unreachable), code 2 (protocol unreachable)
- ICMP type 3 (Destination Unreachable), code 3 (port unreachable)
- ICMP type 3 (Destination Unreachable), code 4 (fragmentation needed and DF bit set)
- ICMPv6 type 1 (Destination Unreachable), code 1 (communication with destination administratively prohibited) and
- ICMPv6 type 1 (Destination Unreachable), code 4 (port unreachable)
The above errors, when encountered by TCP, must be treated as soft errors as opposed to hard errors in order to ensure that the connection is not reset (Conway, 2004).
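This reaction change can be sketched as a small classification function. The tuples mirror the error codes listed above; the function name and the `connection_established` parameter are assumptions made for the sketch, reflecting the idea that the downgrade applies to connections already past the handshake.

```python
# Sketch: classify ICMP errors, downgrading the listed "hard" errors to soft
# so that a forged message triggers retransmission instead of an abort.
HARD_ERRORS = {
    ("icmp", 3, 2),    # protocol unreachable
    ("icmp", 3, 3),    # port unreachable
    ("icmp", 3, 4),    # fragmentation needed and DF set
    ("icmpv6", 1, 1),  # communication administratively prohibited
    ("icmpv6", 1, 4),  # port unreachable
}

def reaction(family, icmp_type, code, connection_established=True):
    """Return how TCP should react to the ICMP error: 'soft' or 'abort'."""
    if (family, icmp_type, code) not in HARD_ERRORS:
        return "soft"
    # Countermeasure: once the connection is established, treat these hard
    # errors as soft, so an off-path forgery cannot reset the connection.
    return "soft" if connection_established else "abort"

print(reaction("icmp", 3, 2))                                # soft
print(reaction("icmp", 3, 2, connection_established=False))  # abort
```

A genuine hard error on an established connection then simply causes retransmissions until the connection times out, while the blind connection-reset attack of section 3.4 is neutralised.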