The Transport Layer Security (TLS) protocol is used for communication between client-server applications across a network. TLS protects that communication against the following:

  • Tampering,
  • Eavesdropping,
  • Message forgery.

TLS provides authentication of the endpoints and confidentiality over the network using cryptography; it also supports RSA key exchange at 1024- and 2048-bit key strengths.

In typical end-user/browser usage, TLS authentication is unilateral: only the server is authenticated (the client knows the server's identity), but not vice versa (the client remains unauthenticated or anonymous). TLS uses a handshake protocol to establish secure communication over the Internet.

The TLS Handshake Protocol involves the following steps:

  1. The client and server exchange Hello messages to agree on algorithms, exchange random values, and check for session resumption.
  2. The client and server exchange the necessary cryptographic parameters to agree on a premaster secret.
  3. The client and server exchange certificates and cryptographic information to authenticate themselves, then generate a master secret from the premaster secret and the exchanged random values.
  4. Security parameters are provided to the record layer.
  5. The client and server verify that their peer has calculated the same security parameters and that the handshake occurred without tampering by an attacker.
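The handshake steps above can be exercised from application code. The sketch below uses Python's standard ssl module, which runs the full handshake (Hellos, key exchange, certificate verification, and the final verification message) inside wrap_socket(); the host name is only an illustration.

```python
import socket
import ssl

def tls_connect(host: str, port: int = 443) -> str:
    """Perform the TLS handshake and return the negotiated protocol
    version. Certificate verification covers step 3 (server
    authentication); the library handles the other steps internally."""
    context = ssl.create_default_context()  # secure defaults, verifies certs
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            # wrap_socket() blocks until the handshake completes.
            return tls.version()

# Example (requires network access):
# print(tls_connect("example.com"))   # e.g. "TLSv1.3"

# Even without a network, the context shows that server authentication
# (unilateral, as described above) is the default for clients:
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)
```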

Note that higher layers should not be overly reliant on TLS always negotiating the strongest possible connection between two peers. There are a number of ways a man-in-the-middle attacker can attempt to make two entities drop down to the least secure method they support. The protocol has been designed to minimize this risk, but there are still attacks available: for example, an attacker could block access to the port a secure service runs on, or attempt to get the peers to negotiate an unauthenticated connection. The fundamental rule is that higher levels must be cognizant of their security requirements and never transmit information over a channel less secure than what they require. The TLS protocol is secure in that any cipher suite offers its promised level of security: if you negotiate 3DES with a 1024-bit RSA key exchange with a host whose certificate you have verified, you can expect to be that secure.

The message that ends the handshake contains a hash of all the handshake data seen by both parties. The pseudo-random function splits the input secret into two halves and processes each with a different hashing algorithm (MD5 and SHA-1), then XORs the results together. This protects the construction in the event that one of these algorithms is found to be vulnerable.
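This split-and-XOR construction (the TLS 1.0 PRF from RFC 2246) can be sketched directly with the standard library; the secret, label, and seed values below are illustrative.

```python
import hmac

def p_hash(hash_name: str, secret: bytes, seed: bytes, length: int) -> bytes:
    """HMAC-based expansion (P_MD5 / P_SHA-1, RFC 2246 section 5):
    A(0) = seed, A(i) = HMAC(secret, A(i-1)); output is the
    concatenation of HMAC(secret, A(i) + seed)."""
    out = b""
    a = seed  # A(0)
    while len(out) < length:
        a = hmac.new(secret, a, hash_name).digest()           # A(i)
        out += hmac.new(secret, a + seed, hash_name).digest()
    return out[:length]

def tls10_prf(secret: bytes, label: bytes, seed: bytes, length: int) -> bytes:
    """TLS 1.0 PRF: split the secret in half, expand each half with a
    different hash (MD5 and SHA-1), and XOR the two streams together."""
    half = (len(secret) + 1) // 2       # halves overlap by one byte if odd
    s1, s2 = secret[:half], secret[-half:]
    md5_stream = p_hash("md5", s1, label + seed, length)
    sha_stream = p_hash("sha1", s2, label + seed, length)
    return bytes(x ^ y for x, y in zip(md5_stream, sha_stream))

master = tls10_prf(b"premaster secret", b"master secret",
                   b"client_random" + b"server_random", 48)
print(len(master))  # TLS master secrets are 48 bytes
```

Because the two streams are XORed, an attacker would have to break both MD5 and SHA-1 simultaneously to predict the output.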

The Windows Server 2003 operating system can use three related security protocols to provide authentication and secure communications over the Internet:

  • Transport Layer Security Version 1.0 (TLS v1.0)
  • Secure Sockets Layer Version 3.0 (SSL 3.0)
  • Secure Sockets Layer Version 2.0 (SSL 2.0)


IPsec is designed to provide interoperable, high quality, cryptographically-based security for IPv4 and IPv6. The set of security services offered includes access control, connectionless integrity, data origin authentication, protection against replays (a form of partial sequence integrity), confidentiality (encryption), and limited traffic flow confidentiality. These services are provided at the IP layer, offering protection for IP and/or upper layer protocols.

These objectives are met through the use of two traffic security protocols, the Authentication Header (AH) and the Encapsulating Security Payload (ESP), and through the use of cryptographic key management procedures and protocols. The set of IPsec protocols employed in any context, and the ways in which they are employed, will be determined by the security and system requirements of users, applications, and/or sites/organizations.

When these mechanisms are correctly implemented and deployed, they ought not to adversely affect users, hosts, and other Internet components that do not employ these security mechanisms for protection of their traffic. These mechanisms also are designed to be algorithm-independent. This modularity permits selection of different sets of algorithms without affecting the other parts of the implementation. For example, different user communities may select different sets of algorithms (creating cliques) if required.

A standard set of default algorithms is specified to facilitate interoperability in the global Internet. The use of these algorithms, in conjunction with IPsec traffic protection and key management protocols, is intended to permit system and application developers to deploy high quality, Internet layer, cryptographic security technology.

The IPSec process

This topic provides an overview of IPSec concepts that are central to understanding the IPSec process, including IPSec policy configuration and the Internet Key Exchange (IKE) protocol. In addition, this topic describes how IPSec network traffic processing works, using two intranet computers as an example.

IPSec Policy Configuration

In Windows 2000, Windows XP, and the Windows Server 2003 family, IPSec is implemented primarily as an administrative tool that you can use to enforce security policies on IP network traffic. A security policy is a set of packet filters that define network traffic as it is recognized at the IP layer. A filter action defines the security requirements for the network traffic. A filter action can be configured to Permit, Block, or Negotiate security (negotiate IPSec).

IPSec filters are inserted into the IP layer of the computer TCP/IP networking protocol stack so that they can examine (filter) all inbound or outbound IP packets. Except for a brief delay required to negotiate a security relationship between two computers, IPSec is transparent to end-user applications and operating system services.

A collective set of IPSec security settings is known as an IPSec policy. Windows 2000, Windows XP, and the Windows Server 2003 family provide a graphical user interface and several command-line tools that you can use to configure an IPSec policy, and then assign it to a computer.

To ensure that IPSec communication is successful and that IPSec meets the security requirements of your organization, you must carefully design, configure, coordinate, and manage IPSec policies. In many organizations, one administrator might be responsible for configuring and managing IPSec policies for many, if not all, computers.

Internet Key Exchange (IKE) security associations

The IKE protocol is designed to securely establish a trust relationship between the computers, negotiate security options, and dynamically generate shared, secret cryptographic keying material. The agreement of security settings associated with keying material is called a security association, also known as an SA. These keys provide authenticity, integrity, and, optionally, encryption of IP packets that are sent using the security association. IKE negotiates two types of security associations:

  • A main mode security association (the IKE security association that is used to protect the IKE negotiation itself).
  • IPSec security associations (the security associations that are used to protect application traffic).

You can configure IPSec policy settings for both types of security associations.

The IPSec service interprets an IPSec policy, expanding it into the components that it needs to control the IKE negotiation. The IPSec policy contains one definition of a packet filter. The packet filter is interpreted in two ways: one uses only the address and identity information to allow IKE to establish a main mode SA (the IKE security association); the other allows IKE to establish the IPSec security associations (also known as quick mode security associations).

IPSec network traffic processing

The following illustration shows how IPSec works in terms of the IPSec components for two intranet computers.

For simplicity, this example is of an intranet in which two computers have an active IPSec policy.

  1. Alice, using a data application on ComputerA, sends an application IP packet to Bob on ComputerB.
  2. The IPSec driver on ComputerA checks its outbound IP filter lists and determines that the packets should be secured.
  3. The action is to negotiate security, so the IPSec driver notifies IKE to begin negotiations.
  4. The IKE service on ComputerA completes a policy lookup, using its own IP address as the source and the IP address of ComputerB as the destination. The main mode filter match determines the main mode settings that ComputerA proposes to ComputerB. ComputerA sends the first IKE message in main mode, using UDP source port 500, destination port 500. IKE packets receive special processing by the IPSec driver to bypass filters.
  5. ComputerB receives an IKE main mode message requesting secure negotiation. It uses the source IP address and the destination IP address of the UDP packet to perform a main mode policy lookup, to determine which security settings to agree to. ComputerB has a main mode filter that matches, and so replies to begin negotiation of the main mode SA.
  6. ComputerA and ComputerB now negotiate options, exchange identities, verify trust in those identities (authentication), and generate a shared master key. They have now established an IKE main mode SA. ComputerA and ComputerB must mutually trust each other.
  7. ComputerA then performs an IKE quick mode policy lookup, using the full filter to which the IPSec driver matched the outbound packet. ComputerA selects the quick mode security settings and proposes them, and the quick mode filter, to ComputerB.
  8. ComputerB also performs an IKE quick mode policy lookup, using the filter description offered by ComputerA. ComputerB selects the security settings required by its policy and compares those settings to those offered by ComputerA. ComputerB accepts one set of options and completes the remainder of the IKE quick mode negotiation to create a pair of IPSec security associations.
  9. One IPSec SA is inbound and one IPSec SA is outbound. The IPSec SAs are identified by a Security Parameter Index (SPI), which is inserted into the IPSec header of each packet sent.
  10. The IPSec driver on ComputerA uses the outbound SA to sign and, if required, encrypt the packets. If the network adapter can perform hardware offload of IPSec cryptographic functions, the IPSec driver formats the packets, but does not perform the IPSec cryptographic functions.
  11. The IPSec driver passes the packets to the network adapter driver, indicating whether the adapter must perform the IPSec cryptographic functions. The network adapter transmits the packets into the network.
  12. The network adapter driver at ComputerB receives the encrypted packets from the network. The SPI is used by the receiver of an IPSec packet to find the corresponding IPSec security association, with the cryptographic keys required to verify and decrypt the packets. If the network adapter can decrypt the packets in hardware, it verifies whether it can recognize the SPI. If it cannot decrypt the packets in hardware, or if it cannot recognize the SPI, it passes the packets up to the IPSec driver.
  13. The IPSec driver on ComputerB uses the inbound SA SPI to retrieve the keys required to validate authentication and integrity and, if required, to decrypt the packets.
  14. The IPSec driver converts the packets from IPSec format back to standard IP packet format. It passes the validated and decrypted IP packets to the TCP/IP driver, which passes them to the receiving application on ComputerB.
  15. The IPSec SAs continue to provide very strong, transparent protection for application data traffic. The IPSec SAs are automatically refreshed by an IKE quick mode negotiation for as long as the application sends and receives data. When the application stops sending and receiving data, the IPSec SAs become idle and are deleted.
  16. Typically, the IKE main mode SA is not deleted. By default, the main mode SA has a lifetime of 8 hours. You can configure the main mode SA lifetime to as short as 5 minutes to a maximum of 48 hours. Whenever more traffic is sent, a new quick mode is negotiated automatically to create two new IPSec SAs to protect application traffic. This process is rapid, because the main mode SA already exists. If a main mode SA expires, it is automatically renegotiated as needed.

Advantages of TLS

  • Encryption — Both request and response bodies are protected from intermediate prying eyes.
  • Server authenticated — Clients who record the server's SSL certificate can monitor it to ensure it does not change over time (which could indicate a man-in-the-middle attack). Using a certificate signed by a signing authority can also provide a similar level of assurance for the client application.
  • Easy setup — No additional coding is required; you just configure the web server. An SSL VPN has the further advantage that no client software is needed on the client computer: a web browser that supports the SSL protocol is enough. Because no client software is required, there is no additional license cost for a client PC to connect to the host.
  • Besides that, it is easy to use and set up, so the IT department staff need not worry about configuring the machines of workers who want to use the VPN.
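The certificate-monitoring idea above can be sketched as a simple fingerprint pin: record a hash of the server's certificate on first contact and compare it on later connections. The certificate bytes below are dummies; a real client would obtain the DER bytes from SSLSocket.getpeercert(binary_form=True).

```python
import hashlib

def cert_fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate, as recorded on
    first contact and compared on later connections (pinning)."""
    return hashlib.sha256(der_cert).hexdigest()

def check_pin(der_cert: bytes, pinned: str) -> bool:
    """True if the presented certificate matches the recorded fingerprint.
    A mismatch could indicate a man-in-the-middle attack (or a legitimate
    certificate rotation, so alert rather than silently failing)."""
    return cert_fingerprint(der_cert) == pinned

# Dummy certificate bytes stand in for the real DER data.
recorded = cert_fingerprint(b"dummy-der-certificate")
print(check_pin(b"dummy-der-certificate", recorded))  # True
print(check_pin(b"different-certificate", recorded))  # False
```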

Advantages of IPsec

There are, however, advantages to providing security at the IP level instead of, or as well as, at other levels.

IPsec is the most general way to provide these services for the Internet.

  • Higher-level services protect a single protocol; for example PGP protects mail.
  • Lower level services protect a single medium; for example a pair of encryption boxes on the ends of a line make wiretaps on that line useless unless the attacker is capable of breaking the encryption.

IPsec, however, can protect any protocol running above IP and any medium which IP runs over. More to the point, it can protect a mixture of application protocols running over a complex combination of media. This is the normal situation for Internet communication; IPsec is the only general solution.

IPsec can also provide some security services "in the background", with no visible impact on users. To use PGP encryption and signatures on mail, for example, the user must at least:

  • remember his or her passphrase,
  • keep it secure, and
  • follow procedures to validate correspondents' keys.

These systems can be designed so that the burden on users is not onerous, but any system will place some requirements on users. No such system can hope to be secure if users are sloppy about meeting those requirements.


The Internet Group Management Protocol (IGMP) is a communications protocol used to manage the membership of Internet Protocol multicast groups. IGMP is used by IP hosts and adjacent multicast routers to establish multicast group memberships.

It is an integral part of the IP multicast specification, operating above the network layer, though it does not actually act as a transport protocol. It is analogous to ICMP for unicast connections. IGMP can be used for online streaming video and gaming, and allows more efficient use of resources when supporting these types of applications.

IP multicast is a technique for one-to-many communication over an IP infrastructure in a network. It scales to a larger receiver population by not requiring prior knowledge of who or how many receivers there are. Multicast uses network infrastructure efficiently by requiring the source to send a packet only once, even if it needs to be delivered to a large number of receivers. The nodes in the network take care of replicating the packet to reach multiple receivers only when necessary. The most common low-level protocol to use multicast addressing is User Datagram Protocol (UDP). By its nature, UDP is not reliable—messages may be lost or delivered out of order. Reliable multicast protocols such as Pragmatic General Multicast (PGM) have been developed to add loss detection and retransmission on top of IP multicast.

Key concepts in IP multicast include an IP multicast group address, a multicast distribution tree and receiver driven tree creation.

An IP multicast group address is used by both sources and receivers to send and receive content. Sources use the group address as the IP destination address in their data packets. Receivers use this group address to inform the network that they are interested in receiving packets sent to that group; the receiver is said to "join" the group. The protocol used by receivers to join a group is called the Internet Group Management Protocol (IGMP).

Once receivers join a particular IP multicast group, a multicast distribution tree is constructed for that group. The protocol most widely used for this is Protocol Independent Multicast (PIM). It sets up multicast distribution trees such that data packets from senders to a multicast group reach all receivers that have joined the group. There are several variations of PIM: Sparse Mode (SM), Dense Mode (DM), Source-Specific Mode (SSM), and Bidirectional Mode (Bidir). Of these, PIM-SM was the most widely deployed as of 2006; SSM and Bidir, simpler and more scalable variations developed more recently, are gaining in popularity.

IP multicast operation does not require a source sending to a given group to know about the receivers of the group. The multicast tree construction is initiated by network nodes that are close to the receivers, i.e., it is receiver driven. This allows it to scale to a large receiver population. The IP multicast model has been described by Internet architect Dave Clark as follows: "You put packets in at one end, and the network conspires to deliver them to anyone who asks."

Multicast (top) compared with unicast broadcasting (bottom). Orange circles represent endpoints, and green circles represent routing points.

IP multicast creates state information ("state") per multicast distribution tree in the network; that is, current IP multicast routing protocols do not aggregate state corresponding to multiple distribution trees. So if a router is part of 1000 multicast trees, it has 1000 multicast routing and forwarding entries. As a result, there are concerns about scaling multicast to large numbers of distribution trees. However, because multicast state exists only along the distribution tree, it is unlikely that any single router in the Internet maintains state for all multicast trees.

This is a common misunderstanding by comparison with unicast. A unicast router needs to know how to reach all other unicast addresses in the Internet, even if it does so using just a default route; for this reason, aggregation is key to scaling unicast routing, and there are core routers that carry routes in the hundreds of thousands because they hold the full Internet routing table. A multicast router, on the other hand, does not need to know how to reach all other multicast trees in the Internet. It only needs to know about multicast trees for which it has downstream receivers. This is key to scaling multicast-addressed services: it is very unlikely that core Internet routers would need to keep state for all multicast distribution trees, since they only need state for trees with downstream membership. When such a router joins a shared forwarding tree the operation is referred to as a graft, and when it is removed it is called a prune.

Multicast Process

Figure 2 illustrates the process whereby a client receives a video multicast from the server.

  1. The client sends an IGMP join message to its designated multicast router. The destination MAC address maps to the Class D address of the group being joined, rather than being the MAC address of the router. The body of the IGMP datagram also includes the Class D group address.
  2. The router logs the join message and uses PIM or another multicast routing protocol to add this segment to the multicast distribution tree.
  3. IP multicast traffic transmitted from the server is now distributed via the designated router to the client's subnet. The destination MAC address corresponds to the Class D address of the group.
  4. The switch receives the multicast packet and examines its forwarding table. If no entry exists for the MAC address, the packet will be flooded to all ports within the broadcast domain. If an entry does exist in the switch table, the packet will be forwarded only to the designated ports.
  5. With IGMP V2, the client can cease group membership by sending an IGMP leave to the router. With IGMP V1, the client remains a member of the group until it fails to send a join message in response to a query from the router. Multicast routers also periodically send an IGMP query to the "all multicast hosts" group or to a specific multicast group on the subnet to determine which groups are still active within the subnet. Each host delays its response to a query by a small random period and will then respond only if no other host in the group has already reported. This mechanism prevents many hosts from congesting the network with simultaneous reports.


Protocol Independent Multicast (PIM) is a collection of multicast routing protocols, each optimized for a different environment. There are two main PIM protocols, PIM Sparse Mode and PIM Dense Mode. A third PIM protocol, Bi-directional PIM, is less widely used.

Typically, either PIM Sparse Mode or PIM Dense Mode will be used throughout a multicast domain. However, they may also be used together within a single domain, using Sparse Mode for some groups and Dense Mode for others. This mixed-mode configuration is known as Sparse-Dense Mode. Similarly, Bi-directional PIM may be used on its own, or it may be used in conjunction with one or both of PIM Sparse Mode and PIM Dense Mode.

All PIM protocols share a common control message format. PIM control messages are sent as raw IP datagrams (protocol number 103), either multicast to the link-local ALL PIM ROUTERS multicast group, or unicast to a specific destination.
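That common control message format starts with a fixed 4-byte header, which can be sketched as follows (field layout per the PIM specification; the checksum is left at zero here rather than computed over a real message body).

```python
import struct

PIM_VERSION = 2
# Message type codes from the PIM specification.
PIM_TYPES = {"hello": 0, "register": 1, "register_stop": 2,
             "join_prune": 3, "bootstrap": 4, "assert": 5}

def pim_header(msg_type: str, checksum: int = 0) -> bytes:
    """Pack the common 4-byte PIM header carried in IP protocol 103
    datagrams: version and type share the first byte, followed by a
    reserved byte and a 16-bit checksum (zero in this sketch)."""
    ver_type = (PIM_VERSION << 4) | PIM_TYPES[msg_type]
    return struct.pack("!BBH", ver_type, 0, checksum)

hdr = pim_header("join_prune")
print(len(hdr), hdr[0] >> 4, hdr[0] & 0x0F)  # 4 bytes, version 2, type 3
```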

PIM Sparse Mode

PIM Sparse Mode (PIM-SM) is a multicast routing protocol designed on the assumption that recipients for any particular multicast group will be sparsely distributed throughout the network. In other words, it is assumed that most subnets in the network will not want any given multicast packet. In order to receive multicast data, routers must explicitly tell their upstream neighbors about their interest in particular groups and sources. Routers use PIM Join and Prune messages to join and leave multicast distribution trees.

PIM-SM by default uses shared trees, which are multicast distribution trees rooted at some selected node (in PIM, this router is called the Rendezvous Point, or RP) and used by all sources sending to the multicast group. To reach the RP, multicast data from a source must be encapsulated in PIM control messages and sent by unicast to the RP. This is done by the source's Designated Router (DR), which is a router on the source's local network. A single DR is elected from all the PIM routers on a network, so that unnecessary control messages are not sent.

One of the important requirements of PIM Sparse Mode, and Bi-directional PIM, is the ability to discover the address of an RP for a multicast group using a shared tree. Various RP discovery mechanisms are used, including static configuration, Bootstrap Router, Auto-RP, Anycast RP, and Embedded RP.

PIM-SM also supports the use of source-based trees, in which a separate multicast distribution tree is built for each source sending data to a multicast group. Each tree is rooted at a router adjacent to the source, and sources send data directly to the root of the tree. Source-based trees enable the use of Source-Specific Multicast (SSM), which allows hosts to specify the source from which they wish to receive data, as well as the multicast group they wish to join. With SSM, a host identifies a multicast data stream with a source and group address pair (S,G), rather than by group address alone (*,G).

PIM-SM may use source-based trees in the following circumstances.

  • For SSM, a last-hop router will join a source-based tree from the outset.
  • To avoid data sent to an RP having to be encapsulated, the RP may join a source-based tree.
  • To optimize the data path, a last-hop router may choose to switch from the shared tree to a source-based tree.

PIM-SM is a soft-state protocol. That is, all state times out a set period after receipt of the control message that instantiated it. To keep the state alive, PIM Join messages are periodically retransmitted.

Version 1 of PIM-SM was created in 1995, but was never standardized by the IETF. It is now considered obsolete, though it is still supported by Cisco and Juniper routers. Version 2 of PIM-SM was standardized in RFC 2117 (in 1997) and updated by RFC 2362 (in 1998). Version 2 is significantly different from and incompatible with version 1. However, there were a number of problems with RFC 2362, and a new specification of PIM-SM version 2 is currently being produced by the IETF. There have been many implementations of PIM-SM and it is widely used.

PIM Dense Mode

PIM Dense Mode (PIM-DM) is a multicast routing protocol designed with the opposite assumption to PIM-SM, namely that the receivers for any multicast group are distributed densely throughout the network. That is, it is assumed that most (or at least many) subnets in the network will want any given multicast packet. Multicast data is initially sent to all hosts in the network. Routers that do not have any interested hosts then send PIM Prune messages to remove themselves from the tree.

When a source first starts sending data, each router on the source's LAN receives the data and forwards it to all its PIM neighbors and to all links with directly attached receivers for the data. Each router that receives a forwarded packet also forwards it likewise, but only after checking that the packet arrived on its upstream interface. If not, the packet is dropped. This mechanism prevents forwarding loops from occurring. In this way, the data is flooded to all parts of the network.

Some routers will have no need of the data, either for directly connected receivers or for other PIM neighbors. These routers respond to receipt of the data by sending a PIM Prune message upstream, which instantiates Prune state in the upstream router, causing it to stop forwarding the data to its downstream neighbor. In turn, this may cause the upstream router to have no need of the data, triggering it to send a Prune message to its upstream neighbor. This 'broadcast and prune' behavior means that eventually the data is only sent to those parts of the network that require it.

Eventually, the Prune state at each router will time out, and data will begin to flow back into the parts of the network that were previously pruned. This triggers further Prune messages to be sent, and the Prune state is instantiated once more.
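This soft-state behavior can be sketched as a timer-driven table: an interface stays pruned until its timer expires, after which forwarding (and re-flooding) resumes. The lifetime value and interface names are illustrative.

```python
PRUNE_LIFETIME = 210.0  # seconds; illustrative hold time, not a mandated value

class PruneState:
    """Soft prune state per downstream interface: forwarding resumes
    automatically once a prune expires, triggering the re-flood (and
    fresh Prune messages) described above."""
    def __init__(self) -> None:
        self.pruned_until: dict = {}

    def prune(self, interface: str, now: float) -> None:
        # A Prune message restarts the interface's hold timer.
        self.pruned_until[interface] = now + PRUNE_LIFETIME

    def should_forward(self, interface: str, now: float) -> bool:
        # Forward unless a still-live prune covers this interface.
        return now >= self.pruned_until.get(interface, 0.0)

state = PruneState()
state.prune("eth1", now=0.0)
print(state.should_forward("eth1", now=10.0))   # False: still pruned
print(state.should_forward("eth1", now=300.0))  # True: prune timed out
print(state.should_forward("eth2", now=10.0))   # True: never pruned
```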

PIM-DM only uses source-based trees. As a result, it does not use RPs, which makes it simpler than PIM-SM to implement and deploy. It is an efficient protocol when most receivers are interested in the multicast data, but does not scale well across larger domains in which most receivers are not interested in the data.

The development of PIM-DM has paralleled that of PIM-SM. Version 1 was created in 1995, but was never standardized. It is now considered obsolete, though it is still supported by Cisco and Juniper routers. Version 2 of PIM-DM is currently being standardized by the IETF. As with PIM-SM, version 2 of PIM-DM is significantly different from and incompatible with version 1. PIM Dense Mode (PIM DM) is less common than PIM-SM, and is mostly used for individual small domains.

The current version of the Internet Protocol, IPv4, was first developed in the 1970s, and the main protocol standard, RFC 791, which governs IPv4 functionality, was published in 1981.

With the unprecedented expansion of Internet usage in recent years, especially in population-dense countries such as India and China, the impending shortage of address space was recognized by 1992 as a serious limiting factor to the continued growth of an Internet run on IPv4.

The following table shows how quickly the address space has been consumed over the years since 1981, when the IPv4 protocol was published.

With admirable foresight, the Internet Engineering Task Force (IETF) initiated, as early as 1994, the design and development of a suite of protocols and standards now known as Internet Protocol version 6 (IPv6), intended to phase out and supplant IPv4 over the coming years. There has been an explosion in the number and range of IP-capable devices released in the market, and in their usage by an increasingly tech-savvy global population. The new protocol aims to support the ever-expanding Internet usage and functionality effectively, and also to address security concerns.

IPv6 uses a 128-bit address size compared with the 32-bit system used in IPv4, and allows for as many as 3.4×10^38 possible addresses, enough to cover every inhabitant of planet Earth several times over. The 128-bit system also provides for multiple levels of hierarchy and flexibility in hierarchical addressing and routing, a feature found wanting on the IPv4-based Internet.

Internet Protocol version 6 (IPv6) is the next-generation Internet Protocol version designated as the successor to IPv4, the first implementation used in the Internet, which is still in dominant use. It is an Internet Layer protocol for packet-switched internetworks. The main driving force for the redesign of the Internet Protocol is the foreseeable IPv4 address exhaustion. IPv6 was defined in December 1998 by the Internet Engineering Task Force (IETF) with the publication of an Internet standard specification, RFC 2460.

IPv6 has a vastly larger address space than IPv4. This results from the use of a 128-bit address, whereas IPv4 uses only 32 bits. The new address space thus supports 2^128 (about 3.4×10^38) addresses. This expansion provides flexibility in allocating addresses and routing traffic and eliminates the primary need for network address translation (NAT), which gained widespread deployment as an effort to alleviate IPv4 address exhaustion.

IPv6 also implements new features that simplify aspects of address assignment (stateless address autoconfiguration) and network renumbering (prefix and router announcements) when changing Internet connectivity providers. The IPv6 subnet size has been standardized by fixing the size of the host identifier portion of an address to 64 bits to facilitate an automatic mechanism for forming the host identifier from Link Layer media addressing information (MAC address).
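The modified EUI-64 mechanism mentioned above, forming the 64-bit host identifier from a 48-bit MAC address, can be sketched directly: insert 0xFFFE between the two halves of the MAC and flip the universal/local bit.

```python
def eui64_interface_id(mac: str) -> str:
    """Form the 64-bit IPv6 interface identifier from a 48-bit MAC:
    insert 0xFFFE between the two halves and flip the universal/local
    bit (bit 1 of the first octet), per the modified EUI-64 rule."""
    octets = bytes(int(b, 16) for b in mac.split(":"))
    eui = bytes([octets[0] ^ 0x02]) + octets[1:3] + b"\xff\xfe" + octets[3:6]
    # Group into the four 16-bit pieces used in IPv6 text notation.
    return ":".join(eui.hex()[i:i + 4] for i in range(0, 16, 4))

print(eui64_interface_id("00:11:22:33:44:55"))  # 0211:22ff:fe33:4455
```

A host combines this identifier with the 64-bit network prefix learned from router announcements to autoconfigure its full address.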

Network security is integrated into the design of the IPv6 architecture. Internet Protocol Security (IPsec) was originally developed for IPv6, but found widespread optional deployment first in IPv4 (into which it was back-engineered). The IPv6 specifications mandate IPSec implementation as a fundamental interoperability requirement.

In December 2008, despite marking its 10th anniversary as a Standards Track protocol, IPv6 was only in its infancy in terms of general worldwide deployment. A 2008 study by Google Inc. indicated that penetration was still less than one percent of Internet-enabled hosts in any country. IPv6 has been implemented on all major operating systems in use in commercial, business, and home consumer environments.

IPv6 header format

The new IPv6 header is illustrated in the figure below, while the IPv4 header is shown in Figure 2 to facilitate comparison between the two protocols.

The IPv6 header fields are as follows:

  • Version (4 bit): Indicates the protocol version, and will thus contain the number 6.
  • DS byte (8 bit): Used by the source and by routers to identify packets belonging to the same traffic class and thus distinguish between packets with different priorities.
  • Flow label (20 bit): Labels packets belonging to the same data flow.
  • Payload length (16 bit): Indicates the length of the packet data field.
  • Next header (8 bit): Identifies the type of header immediately following the IPv6 header.
  • Hop limit (8 bit): Decremented by one by each node that forwards the packet; when it reaches zero, the packet is discarded.
  • Source address (128 bit): The address of the originator of the packet.
  • Destination address (128 bit): The address of the intended recipient of the packet.
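The field layout above can be made concrete with a short packing routine. The following is a minimal sketch in Python (the addresses and field values are placeholders, and the traffic class parameter stands in for the DS byte):

```python
import struct
import ipaddress

def build_ipv6_header(payload_len, next_header, hop_limit, src, dst,
                      traffic_class=0, flow_label=0):
    """Pack the fixed 40-octet IPv6 header described above."""
    # First 32 bits: Version (4) | DS byte / traffic class (8) | Flow label (20)
    first_word = (6 << 28) | (traffic_class << 20) | flow_label
    return struct.pack(
        "!IHBB16s16s",
        first_word,
        payload_len,                          # Payload length (16 bit)
        next_header,                          # Next header (8 bit), e.g. 6 = TCP
        hop_limit,                            # Hop limit (8 bit)
        ipaddress.IPv6Address(src).packed,    # Source address (128 bit)
        ipaddress.IPv6Address(dst).packed,    # Destination address (128 bit)
    )

hdr = build_ipv6_header(1200, 6, 64, "2001:db8::1", "2001:db8::2")
assert len(hdr) == 40   # fixed 40-octet header; note there is no checksum field
```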

Compared to IPv4, the header format is simpler, which permits better performance.

The decision to eliminate the checksum springs from the fact that it is already computed at layer 2, which is sufficient in view of the error rate of current networks. Better performance is thus achieved, as the routers no longer need to re-compute the checksum for each packet. On the debit side, eliminating the checksum means that there is no protection against the errors routers can make in processing packets. However, these errors are not dangerous for the network, as they cause only the packet itself to be lost if there are fields with invalid values (e.g., nonexistent addresses).

The hop limit field indicates the maximum number of nodes (hops) that a packet can cross before reaching its destination. The corresponding IPv4 field is expressed in seconds (TTL: Time To Live), even though it serves the same function. The change was made for two reasons. First, for the sake of simplicity: even in IPv4, routers in fact translate the seconds into a number of hops, which are then translated back into seconds. Second, the change ensures independence from physical network characteristics such as bandwidth. As the hop limit field consists of 8 bits, the maximum number of nodes that a packet can cross is 255.

The advantages IPv6 offers over IPv4:-

Larger address space

The most important feature of IPv6 is a much larger address space than that of IPv4: addresses in IPv6 are 128 bits long, compared to 32-bit addresses in IPv4.

An illustration of an IP address (version 6), in hexadecimal and binary.

The very large IPv6 address space supports a total of 2^128 (about 3.4×10^38) addresses, or approximately 5×10^28 (roughly 2^95) addresses for each of the roughly 6.5 billion (6.5×10^9) people alive in 2006. In another perspective, there is the same number of IP addresses per person as the number of atoms in a metric ton of carbon.

The size of a subnet in IPv6 is 2^64 addresses (64-bit subnet mask), the square of the size of the entire IPv4 Internet. Thus, actual address space utilization rates will likely be small in IPv6, but network management and routing will be more efficient because of the inherent design decisions of large subnet space and hierarchical route aggregation.
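These figures are easy to verify directly; a quick check in Python:

```python
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128

print(ipv6_space)                   # about 3.4e38 addresses

# addresses per person, assuming ~6.5 billion people (2006)
per_person = ipv6_space // 6_500_000_000
print(format(per_person, ".2e"))    # roughly 5e28, i.e. about 2**95

# one /64 subnet is the square of the entire IPv4 address space
assert 2 ** 64 == ipv4_space ** 2
```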

Stateless address autoconfiguration

IPv6 hosts can configure themselves automatically when connected to a routed IPv6 network using ICMPv6 router discovery messages. When first connected to a network, a host sends a link-local multicast router solicitation request for its configuration parameters; if configured suitably, routers respond to such a request with a router advertisement packet that contains network-layer configuration parameters.
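The related mechanism of forming the 64-bit host identifier from the Link Layer (MAC) address, mentioned earlier, is the modified EUI-64 procedure: insert ff:fe in the middle of the 48-bit MAC and flip the universal/local bit. A minimal sketch (the MAC value is an example):

```python
def eui64_interface_id(mac: str) -> bytes:
    """Form the 64-bit interface identifier from a 48-bit MAC address
    (modified EUI-64): insert ff:fe in the middle and flip the
    universal/local bit of the first octet."""
    octets = bytes(int(b, 16) for b in mac.split(":"))
    assert len(octets) == 6
    return bytes([octets[0] ^ 0x02]) + octets[1:3] + b"\xff\xfe" + octets[3:6]

iid = eui64_interface_id("00:1a:2b:3c:4d:5e")
print(iid.hex())   # 021a2bfffe3c4d5e
```

Combined with a router-advertised /64 prefix, this identifier yields the host's full 128-bit address.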


Multicast

Multicast, the ability to send a single packet to multiple destinations, is part of the base specification in IPv6. This is unlike IPv4, where it is optional (although usually implemented).

IPv6 does not implement broadcast, which is the ability to send a packet to all hosts on the attached link. The same effect can be achieved by sending a packet to the link-local all hosts multicast group. It therefore lacks the notion of a broadcast address—the highest address in a subnet (the broadcast address for that subnet in IPv4) is considered a normal address in IPv6.

Mandatory network layer security

Internet Protocol Security (IPsec), the protocol for IP encryption and authentication, forms an integral part of the base protocol suite in IPv6. IPsec support is mandatory in IPv6; this is unlike IPv4, where it is optional (but usually implemented). IPsec, however, is not widely used at present except for securing traffic between IPv6 Border Gateway Protocol routers.

Simplified processing by routers

A number of simplifications have been made to the packet header, and the process of packet forwarding has been simplified, in order to make packet processing by routers simpler and hence more efficient. Concretely,

  • The packet header in IPv6 is simpler than that used in IPv4, with many rarely used fields moved to separate options; in effect, although the addresses in IPv6 are four times larger, the (option-less) IPv6 header is only twice the size of the (option-less) IPv4 header.
  • IPv6 routers do not perform fragmentation. IPv6 hosts are required to either perform PMTU discovery, perform end-to-end fragmentation, or to send packets smaller than the IPv6 minimum MTU size of 1280 octets.
  • The Time-to-Live field of IPv4 has been renamed to Hop Limit, reflecting the fact that routers are no longer expected to compute the time a packet has spent in a queue.
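As a consequence of the second point, a host that skips PMTU discovery can simply keep every packet within the 1280-octet minimum MTU. A toy sketch of that fallback, using the 40-octet header size from the format above:

```python
IPV6_MIN_MTU = 1280   # minimum MTU every IPv6 link must support, in octets
IPV6_HEADER = 40      # fixed header size, in octets

def chunk_payload(payload: bytes, mtu: int = IPV6_MIN_MTU):
    """Split application data so each packet (header + chunk) fits the
    given MTU -- a crude stand-in for host-side fragmentation."""
    max_chunk = mtu - IPV6_HEADER
    return [payload[i:i + max_chunk] for i in range(0, len(payload), max_chunk)]

chunks = chunk_payload(b"x" * 5000)
assert all(len(c) + IPV6_HEADER <= IPV6_MIN_MTU for c in chunks)
assert sum(len(c) for c in chunks) == 5000   # nothing lost in the split
```

In practice hosts prefer PMTU discovery, since larger packets make better use of high-MTU paths.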


Mobility

Unlike Mobile IPv4, Mobile IPv6 (MIPv6) avoids triangular routing and is therefore as efficient as normal IPv6. IPv6 routers may also support Network Mobility (NEMO) [RFC 3963], which allows entire subnets to move to a new router connection point without renumbering. However, since neither MIPv6, MIPv4 nor NEMO is widely deployed today, this advantage is mostly theoretical.

Options extensibility

IPv4 has a fixed size (40 octets) of option parameters. In IPv6, options are implemented as additional extension headers after the IPv6 header, which limits their size only by the size of an entire packet. The extension header mechanism allows IPv6 to be easily 'extended' to support future services for QoS, security, mobility, etc. without a redesign of the basic protocol.


Jumbograms

IPv4 limits packets to 65535 (2^16 - 1) octets of payload. IPv6 has optional support for packets over this limit, referred to as jumbograms, which can be as large as 4294967295 (2^32 - 1) octets. The use of jumbograms may improve performance over high-MTU links. The use of jumbograms is indicated by the Jumbo Payload Option header.
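A jumbogram is signalled by setting the normal 16-bit Payload Length field to zero and carrying the true 32-bit length in a Hop-by-Hop Jumbo Payload option (option type 0xC2, per RFC 2675). A minimal sketch of packing that 8-octet extension header:

```python
import struct

def jumbo_hopbyhop(next_header: int, jumbo_len: int) -> bytes:
    """Hop-by-Hop extension header carrying the Jumbo Payload option.
    Used together with Payload Length = 0 in the fixed IPv6 header."""
    assert 65_535 < jumbo_len <= 4_294_967_295
    return struct.pack("!BBBBI",
                       next_header,   # header following this one, e.g. 6 = TCP
                       0,             # ext-header length: 0 means 8 octets total
                       0xC2,          # Jumbo Payload option type
                       4,             # option data length in octets
                       jumbo_len)     # true payload length, 32 bit

ext = jumbo_hopbyhop(6, 100_000)     # a 100 kB TCP jumbogram
assert len(ext) == 8
```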


An intrusion detection system (IDS) is a device (or application) that monitors network and/or system activities for malicious activities or policy violations.

Intrusion detection is the process of monitoring the events occurring in a computer system or network and analyzing them for signs of possible incidents, which are violations or imminent threats of violation of computer security policies, acceptable use policies, or standard security practices.[1] Intrusion prevention is the process of performing intrusion detection and attempting to stop detected possible incidents.[1] Intrusion detection and prevention systems (IDPS) are primarily focused on identifying possible incidents, logging information about them, attempting to stop them, and reporting them to security administrators.[1] In addition, organizations use IDPSs for other purposes, such as identifying problems with security policies, documenting existing threats, and deterring individuals from violating security policies.[1] IDPSs have become a necessary addition to the security infrastructure of nearly every organization.

IDSes are classified in many different ways, including active and passive, network-based and host-based, and knowledge-based and behavior-based:

Active and passive IDS

An active IDS (now more commonly known as an intrusion prevention system — IPS) is a system that's configured to automatically block suspected attacks in progress without any intervention required by an operator. IPS has the advantage of providing real-time corrective action in response to an attack but has many disadvantages as well. An IPS must be placed in-line along a network boundary; thus, the IPS itself is susceptible to attack. Also, if false alarms and legitimate traffic haven't been properly identified and filtered, authorized users and applications may be improperly denied access. Finally, the IPS itself may be used to effect a Denial of Service (DoS) attack by intentionally flooding the system with alarms that cause it to block connections until no connections or bandwidth are available.

Intrusion prevention systems evolved in the late 1990s to resolve ambiguities in passive network monitoring by placing detection systems in-line. Early IPSs were IDSs that could push prevention commands to firewalls and access control changes to routers. This technique fell short operationally, for it created a race condition between the IDS and the exploit as it passed through the control mechanism. Inline IPS can be seen as an improvement upon firewall technologies: an IPS can make access control decisions based on application content, rather than on IP address or port as traditional firewalls do. However, to improve performance and accuracy of classification mapping, most IPSs use the destination port in their signature format. As intrusion prevention systems were originally a literal extension of intrusion detection systems, they remain closely related.

Intrusion prevention systems may also serve secondarily at the host level to deny potentially malicious activity. There are advantages and disadvantages to host-based IPS compared with network-based IPS. In many cases, the technologies are thought to be complementary.

An Intrusion Prevention system must also be a very good Intrusion Detection system to keep its rate of false positives low. Some IPS systems can also prevent attacks that have yet to be discovered, such as those caused by a buffer overflow.

A passive IDS is a system that's configured only to monitor and analyze network traffic activity and alert an operator to potential vulnerabilities and attacks. It isn't capable of performing any protective or corrective functions on its own. The major advantages of passive IDSes are that these systems can be easily and rapidly deployed and are not normally susceptible to attack themselves.

Network-based and host-based IDS

A Network Intrusion Detection System (NIDS) is an intrusion detection system that tries to detect malicious activity such as denial of service attacks, port scans or even attempts to crack into computers by monitoring network traffic.

A NIDS reads all the incoming packets and tries to find suspicious patterns known as signatures or rules. If, for example, a large number of TCP connection requests to a very large number of different ports are observed, one could assume that there is someone conducting a port scan of some or all of the computer(s) in the network. It also (mostly) tries to detect incoming shellcodes in the same manner that an ordinary intrusion detection system does.
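The port-scan heuristic just described reduces to counting distinct destination ports per source address. A toy sketch, with an illustrative (not standard) threshold:

```python
from collections import defaultdict

SCAN_THRESHOLD = 100   # distinct ports per source -- an illustrative value

ports_by_source = defaultdict(set)

def observe_syn(src_ip: str, dst_port: int) -> bool:
    """Record a TCP connection request; return True once the source
    has probed enough distinct ports to look like a port scanner."""
    ports_by_source[src_ip].add(dst_port)
    return len(ports_by_source[src_ip]) >= SCAN_THRESHOLD

alerts = [observe_syn("203.0.113.9", p) for p in range(1, 151)]
assert not alerts[0] and alerts[-1]   # quiet at first, alert after 100 ports
```

A real NIDS would add time windows and decay so that slow scans are caught and long-lived busy hosts are not flagged.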

A NIDS is not limited to inspecting incoming network traffic only. Often valuable information about an ongoing intrusion can be learned from outgoing or local traffic as well. Some attacks might even be staged from the inside of the monitored network or network segment, and are therefore not regarded as incoming traffic at all.

A network-based IDS usually consists of a network appliance (or sensor) with a Network Interface Card (NIC) operating in promiscuous mode and a separate management interface. The IDS is placed along a network segment or boundary and monitors all traffic on that segment.

A host-based IDS requires small programs (or agents) to be installed on individual systems to be monitored. The agents monitor the operating system and write data to log files and/or trigger alarms. A host-based IDS can only monitor the individual host systems on which the agents are installed; it doesn't monitor the entire network.

A host-based IDS monitors all or parts of the dynamic behaviour and the state of a computer system. Much as a NIDS will dynamically inspect network packets, a HIDS might detect which program accesses what resources and discover that, for example, a word-processor has suddenly and inexplicably started modifying the system password database. Similarly a HIDS might look at the state of a system, its stored information, whether in RAM, in the file system, log files or elsewhere; and check that the contents of these appear as expected.

One can think of a HIDS as an agent that monitors whether anything or anyone, whether internal or external, has circumvented the system's security policy.

Monitoring dynamic behavior

Many computer users have encountered tools that monitor dynamic system behaviour in the form of anti-virus (AV) packages. While AV programs often also monitor system state, they do spend a lot of their time looking at who is doing what inside a computer - and whether a given program should or should not have access to particular system resources. The lines become very blurred here, as many of the tools overlap in functionality.

Monitoring state

The principal operation of a HIDS depends on the fact that successful intruders (crackers) will generally leave a trace of their activities. In fact, such intruders often want to own the machine they have attacked, and will establish their "ownership" by installing software that grants them future access to carry out whatever activity (keystroke logging, identity theft, spamming, botnet activity, spyware usage, etc.) they envisage.

In theory, a computer user has the ability to detect any such modifications, and the HIDS attempts to do just that and reports its findings.

Ideally a HIDS works in conjunction with a NIDS, such that a HIDS finds anything that slips past the NIDS.

Ironically, most successful intruders, on entering a target machine, immediately apply best-practice security techniques to secure the system which they have infiltrated, leaving only their own backdoor open, so that other intruders can not take over their computers.

Knowledge-based and behavior-based IDS

A knowledge-based (or signature-based) IDS references a database of previous attack profiles and known system vulnerabilities to identify active intrusion attempts. Knowledge-based IDS is currently more common than behavior-based IDS. Advantages of knowledge-based systems include the following:

  • It has lower false alarm rates than behavior-based IDS.
  • Alarms are more standardized and more easily understood than behavior-based IDS.

Disadvantages of knowledge-based systems include these:

  • Signature database must be continually updated and maintained.
  • New, unique, or original attacks may not be detected or may be improperly classified.

A behavior-based (or statistical anomaly-based) IDS references a baseline or learned pattern of normal system activity to identify active intrusion attempts. Deviations from this baseline or pattern cause an alarm to be triggered. Advantages of behavior-based systems include that they

  • Dynamically adapt to new, unique, or original attacks.
  • Are less dependent on identifying specific operating system vulnerabilities.

Disadvantages of behavior-based systems include

  • Higher false alarm rates than knowledge-based IDSes.
  • Usage patterns may change often and may not be static enough for effective behavior-based detection.
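The behavior-based approach can be reduced to a statistical caricature: learn the mean and spread of some activity metric from a baseline, then alarm on large deviations. A sketch, where the three-standard-deviation threshold and the login counts are illustrative choices:

```python
import statistics

class AnomalyDetector:
    """Toy behavior-based detector: alarm when a metric deviates from
    its learned baseline by more than n_sigma standard deviations."""

    def __init__(self, training, n_sigma=3.0):
        self.mean = statistics.mean(training)
        self.stdev = statistics.stdev(training)
        self.n_sigma = n_sigma

    def is_anomalous(self, value):
        return abs(value - self.mean) > self.n_sigma * self.stdev

# baseline: typical logins per hour observed during a training period
det = AnomalyDetector([12, 15, 11, 14, 13, 12, 16, 14])
assert not det.is_anomalous(15)   # within normal variation
assert det.is_anomalous(90)       # far outside the baseline -> alarm
```

The tight coupling to the training data illustrates both listed disadvantages: a shifted usage pattern raises false alarms until the model is retrained.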


In today's corporate market, the majority of businesses consider the Internet a major tool for communicating with their customers, business partners and the corporate community. This mentality is here to stay; as a result, businesses need to consider the risks associated with using the Internet as a communication tool, and the methods available to mitigate those risks. Many businesses are already aware of the types of risks they face, and have implemented measures such as firewalls, virus detection software and access control mechanisms. However, it is all too apparent that although these measures may deter the "hobby hacker", the real danger and threat comes from the "determined hacker". The determined hacker is just that: determined. They will find a way of penetrating a system, sometimes with malicious intent but mostly because they can and it is a test of skill. Whilst the above-mentioned tools are preventative measures, an IDS is more of an analysis tool that will give you the following information:

  • Instance of attack
  • Method of attack
  • Source of attack
  • Signature of attack

This type of information is becoming increasingly important when trying to design and implement the right security programme for an organization. Some of this information can be found in devices such as firewalls and access control systems, as they all contain log information on system activity. In these instances the onus is on the administrator to check the logs to determine whether an attempted attack has occurred or, after the event, to find out when the attack occurred and its source. Usually, information pertaining to the method and signature of the attack cannot be found in the logs. This is because devices such as firewalls are designed to check the IP packet header information and not the payload portion of the IP packet. An IDS will check the payload of the packet to determine whether the pattern of data held within matches that of a known attack signature. The benefits of the above information are as follows:

Instance of attack: An IDS will alert you when an attack is in progress. This gives you the benefit of counteracting the attack as it happens, without having to go through lengthy logs to find out when that particular attack occurred.

Method of attack: An IDS will let you know what area of your network, or which system on your network, is under attack and how it is being attacked. This enables you to react accordingly and, hopefully, limit the damage, for example by disabling communications to the affected systems.

Source of attack: An IDS will let you know the source of an attack; it is then down to the administrator to determine whether it is a legitimate source and, on that basis, whether to disable communications from it.

Signature of attack: An IDS will identify the nature of the attack, and the pattern of the attack and alert accordingly. This information alerts the organization to the types of vulnerabilities that they are susceptible to and permits them to take precautions accordingly.
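The payload inspection described above amounts to searching packet contents for known byte patterns. A toy signature matcher (the signatures are invented examples, not real attack rules):

```python
# invented example signatures, keyed by a descriptive name
SIGNATURES = {
    "example-cmd-injection": b"/bin/sh -c",
    "example-sql-probe": b"' OR 1=1",
}

def match_signatures(payload: bytes):
    """Return the names of every known signature found in the payload."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern in payload]

hits = match_signatures(b"GET /login?user=admin' OR 1=1-- HTTP/1.1")
assert hits == ["example-sql-probe"]
```

Production engines use far more than substring search (stream reassembly, protocol decoding, regular expressions), but the principle of matching payloads against a signature database is the same.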

The above information allows an organisation to:

  • Build a vulnerability profile of their network and the required precautions
  • Plan its corporate defence strategy
  • Budget for security expenditure.


Network intrusion detection systems are unreliable enough that they should be considered only secondary systems that back up the primary security systems. Primary systems such as firewalls, encryption, and authentication are rock solid: bugs or misconfiguration often lead to problems in these systems, but the underlying concepts are "provably" accurate. The underlying concepts behind NIDS are not absolutely accurate. Intrusion detection systems suffer from two problems: normal traffic causes many false positives (crying wolf), and careful hackers can evade or disable the intrusion detection systems. Indeed, there are many arguments showing that network intrusion detection systems can never be fully accurate.

Switched network (inherent limitation)

Switched networks pose dramatic problems for network intrusion detection systems. There is no easy place to "plug in" a sensor in order to see all the traffic. For example, somebody on the same switched fabric as the CEO has free rein to attack the CEO's machine all day long, such as with a password grinder targeting File and Print Sharing. There are some solutions to this problem, but not all of them are satisfactory.

Resource limitations

Network intrusion detection systems sit at centralized locations on the network. They must be able to keep up with, analyze, and store information generated by potentially thousands of machines. A NIDS must effectively emulate the combined state of all the machines sending traffic through its segment. Obviously, it cannot do this fully, and must take shortcuts.

Network traffic loads

Current NIDS have trouble keeping up with fully loaded segments. The average website sees a frame size of around 180 bytes, which translates to roughly 50,000 packets/second on a 100 Mbps Ethernet. Most IDS units cannot keep up with this speed. Most customers see less traffic than this, but it can still occasionally be a concern.

TCP connections

An IDS must maintain connection state for a large number of TCP connections. This requires an extensive amount of memory. The problem is exacerbated by evasion techniques, which often require the IDS to maintain connection information even after the client and server have closed the connection.
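The state in question is keyed by the TCP 4-tuple; a skeletal tracker showing why entries must linger even after a close (the hold-down timeout is an illustrative value):

```python
import time

class ConnectionTable:
    """Track TCP connections by (src ip, src port, dst ip, dst port).
    Closed entries are kept until a hold-down timeout expires, since
    evasion techniques can replay segments after the close."""
    HOLD_DOWN = 60.0   # seconds, illustrative

    def __init__(self):
        self.conns = {}

    def open(self, key):
        self.conns[key] = {"state": "ESTABLISHED", "closed_at": None}

    def close(self, key):
        self.conns[key] = {"state": "CLOSED", "closed_at": time.time()}

    def expire(self):
        now = time.time()
        for key in [k for k, v in self.conns.items()
                    if v["state"] == "CLOSED"
                    and now - v["closed_at"] > self.HOLD_DOWN]:
            del self.conns[key]

table = ConnectionTable()
table.open(("10.0.0.1", 40000, "10.0.0.2", 80))
table.close(("10.0.0.1", 40000, "10.0.0.2", 80))
table.expire()
assert len(table.conns) == 1   # still held after close, until the timeout
```

Multiply this record by tens of thousands of concurrent connections and the memory pressure described above becomes apparent.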

Reasons to Acquire IDSs

Intrusion detection capabilities are rapidly becoming necessary additions to every large organization's security infrastructure. The question for security professionals should not be whether to use intrusion detection, but which features and capabilities to use. However, one must still justify the purchase of an IDS. There are at least three good reasons to justify the acquisition of IDSs: to detect attacks and other security violations that cannot be prevented, to prevent attackers from probing a network, and to document the intrusion threat to an organization.

Detecting attacks that cannot be prevented

Attackers, using well-known techniques, can penetrate many networks. This often happens when known vulnerabilities in the network cannot be fixed. For instance, in many legacy systems, the operating systems cannot be updated. In updateable systems, administrators may not have or take the time to install all the necessary patches in a large number of hosts. In addition, it is usually not possible to perfectly map an organization's computer use policy to its access control mechanisms and thus authorized users often can perform unauthorized actions. Users may also demand network services and protocols that are known to be flawed and subject to attack. Although, ideally, we would fix all vulnerabilities, this is seldom possible. Therefore, an excellent approach for protecting a network may be to use an IDS to detect when an attacker has penetrated a system using an uncorrectable flaw. It is better at least to know that a system has been penetrated so that administrators can perform damage control and recovery than not to know that the system has been penetrated.

Preventing attackers from probing a network

A computer or network without an IDS may allow attackers to leisurely and without retribution explore its weaknesses. If a single, known vulnerability exists in such a network, a determined attacker will eventually find and exploit it. The same network with an IDS installed is a much more formidable challenge to an attacker. Although the attacker may continue to probe the network for weaknesses, the IDS should detect these attempts, may block these attempts, and can alert security personnel who can take appropriate action.

Documenting the threat

It is important to verify that a network is under attack or likely to be attacked to justify spending money for securing the network. Furthermore, it is important to understand the frequency and characteristics of attacks in order to understand what security measures are appropriate for the network. IDSs can itemize, characterize, and verify the threat from both outside and inside attacks, thereby providing a sound foundation for computer security expenditures. Using IDSs in this manner is important, since many people mistakenly believe that no one (outsiders or insiders) would be interested in breaking into their networks.


Implementations of IDS vary based on the security needs of the network or host it is being implemented on. As we have seen, there isn't a universal implementation of an IDS model that can provide the best intrusion detection monitoring in all environments.

Complex architectures require complex IDS implementations, which in turn require a high degree of IDS expertise to deploy and maintain. However, even with the highest level of IDS expertise, intrusions cannot be fully shut out.

The IDS techniques themselves do not offer a foolproof system to detect ALL the intrusions an attack can consist of. The information below details some of these shortcomings.

Anomaly Detection Disadvantages

  1. Since anomaly detection operates by defining a "normal" model of system or network behavior, it usually suffers from a large number of false alarms due to the unpredictable behaviors of users and networks. These behaviors may not have malicious intent.
  2. In fact, an anomaly-based IDS that has a detection rate of 20 false alarms to 1 real intrusion detection is considered good. This is due to the fact that normal system and network activity is, for the most part, very dynamic and very difficult to capture and predict.

  3. Anomaly detection approaches often require extensive training sets of network or system event records in order to characterize normal behavior patterns. These training sets can consist of various logs that capture the normal usage of the subject or object being monitored. Once the training sets are defined, they need to be fed into the anomaly detection engine to create a model of the normal system usage.
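The 20-to-1 figure above has a sobering consequence for analysts, easy to see with a base-rate calculation:

```python
# with 20 false alarms per real detection (the "good" ratio cited above),
# the fraction of alarms worth an analyst's attention is:
false_per_true = 20
precision = 1 / (1 + false_per_true)
print(f"{precision:.1%}")   # about 4.8% -- fewer than one alarm in twenty is real
```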

Misuse Detection Disadvantages

  1. Since misuse detection operates by comparing known intrusive signatures against the observed log, misuse detectors are limited to detecting attacks that are already known. Therefore, they must constantly be updated with signatures representing newly discovered attacks or modified existing attacks.
  2. Vulnerable to evasion. Once a security hole has been discovered and a signature has been written to capture it, several other iterations of "copycat" exploitations usually surface to take advantage of the same security hole. Since the attack method is a variant of the original attack method, it usually goes undetected by the original vulnerability signature, requiring the constant rewrite of signatures.
  3. Many misuse detectors are designed to use tightly defined signatures that prevent them from detecting variants of common attacks.

Host-Based IDS Disadvantages

  1. The implementation of HIDS can get very complex in large networking environments. With several thousand possible endpoints in a large network, collecting and auditing the generated log files from each node can be a daunting task.
  2. If the IDS system is compromised, the host may cease to function, halting all logging activity. And if the IDS is compromised while logging continues to function, the trustworthiness of that log data is severely diminished.

Network-Based IDS Disadvantages

Network-based intrusion detection seems to offer the most detection coverage while minimizing the IDS deployment and maintenance overhead. However, the main problem with implementing a NIDS with the techniques described in the previous sections is the high rate of false alarms. Modern day enterprise network environments amplify this disadvantage due to the massive amounts of dynamic and diverse data that needs to be analyzed.

All the previously defined IDS techniques have their share of disadvantages. There just isn't a single IDS model that offers 100% intrusion detection with a 0% false alarm rate that can be applied in today's complex networking environment. However, incorporating multiple IDS techniques can, to a certain extent, minimize many of the disadvantages illustrated in the previous section.