Chapter # 1
In this project we designed a network topology in OPNET Modeler 14.5 with traffic engineering features enabled, intended to operate at the core of a network. The design optimizes network utilization and handles unexpected congestion as well as link and node failures.
Chapter 2 covers the background knowledge of MPLS. It consists of a literature review of this technology. MPLS stands for Multi-Protocol Label Switching; it is a layer-2.5 protocol that has become very popular. In MPLS, labels are used instead of IP addresses, and packets are forwarded on the basis of these labels. The chapter also briefly discusses label switch routers and MPLS label operations.
Chapter 3 discusses the main feature of MPLS examined in this project: traffic engineering. It comprises an overview of MPLS-TE, its features, and its objectives, together with the signaling protocols used for traffic engineering. It also presents practical aspects of MPLS traffic engineering through which unexpected congestion and link or node failures can be handled.
Chapter 4 discusses IPv6 over MPLS. IPv6 is a protocol that enlarges the address range and provides a simplified packet format for routing efficiency. A comparison of IPv4 with IPv6 is also given.
Chapter 5 specifies the list of requirements used for the development of the project. These requirements are categorized into several groups on the basis of their functionality.
Chapter 6 specifies the network topology needed for the development of the project.
The final chapter implements the network topology and simulates it in OPNET 14.5 to obtain the required results. It evaluates the project and presents the results obtained by implementing OSPF, MPLS-TE, and IPv6 over an MPLS backbone.
Chapter # 2
Multiprotocol Label Switching (MPLS)
Multi-Protocol Label Switching (MPLS) provides a mechanism to forward packets of any network protocol. In conventional IP forwarding, each router partitions packets into Forwarding Equivalence Classes (FECs) and maps each FEC to a next hop, repeating this decision at every hop. In MPLS, by contrast, the assignment of a packet to a particular FEC is done just once, as the packet enters the network. The FEC is encoded as a fixed-length value known as a label: information attached to each data packet at the entry node that tells the intermediate nodes how to treat the packet. The label is sent along with the packet as it is forwarded to the next hop. The Label Distribution Protocol (LDP) is the set of procedures and messages by which Label Switching Routers (LSRs) establish Label Switched Paths (LSPs) through a network; LDP defines several options for label allocation, LSP trigger strategy, label distribution control mode, and label retention mode. Traditional IP networks are connectionless, whereas MPLS networks are connection-oriented: packets are routed along pre-configured Label Switched Paths (LSPs). MPLS is thus a technique used to solve problems faced by present-day networks, meeting requirements for bandwidth management and service quality.
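The once-at-ingress FEC assignment described above can be sketched in a minimal Python example; the prefixes, label values, and router names below are purely hypothetical:

```python
import ipaddress

# Hypothetical ingress FEC table: destination prefix -> (label, next hop)
FEC_TABLE = {
    ipaddress.ip_network("10.1.0.0/16"): (100, "LSR-B"),
    ipaddress.ip_network("10.2.0.0/16"): (200, "LSR-C"),
}

# Hypothetical core LFIB: incoming label -> (outgoing label, next hop)
LFIB = {100: (101, "LSR-D"), 200: (201, "LSR-E")}

def ingress_classify(dst):
    """Assign the packet to an FEC once, at the network edge
    (longest-prefix match on the destination address)."""
    addr = ipaddress.ip_address(dst)
    matches = [n for n in FEC_TABLE if addr in n]
    best = max(matches, key=lambda n: n.prefixlen)
    return FEC_TABLE[best]

def core_forward(label):
    """Core LSRs never re-examine the IP header: label in, label out."""
    return LFIB[label]

label, hop = ingress_classify("10.1.5.9")   # (100, "LSR-B")
print(core_forward(label))                   # (101, 'LSR-D')
```

Every subsequent hop repeats only the cheap `core_forward` lookup; the expensive prefix match happens once at the ingress LSR.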
There are some advantages of MPLS:
§ One unified Infrastructure.
§ Better IP over ATM integration.
§ BGP (Border Gateway Protocol)-core free.
§ MPLS Traffic Engineering.
§ MPLS VPN 
2.3 Label Distribution Protocol:
In an MPLS network, a protocol is needed to distribute label binding information to the label switch routers (LSRs), so LDP must run on all LSRs. Once all the LSRs have labels for a particular Forwarding Equivalence Class (FEC), packets can be forwarded on the Label Switched Path (LSP) by label-switching them at each LSR.
2.4 MPLS Label Architecture
An MPLS label is a 32-bit field, divided as shown below:

Field                      No. of bits
Label                      20
Experimental bits (EXP)    3
Bottom of Stack (BoS)      1
Time to Live (TTL)         8

Table 2.1: Bits division in MPLS Label
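The 32-bit layout in Table 2.1 can be illustrated with a small Python encoder/decoder for the label entry, using the standard 20/3/1/8 field widths:

```python
def encode_mpls_label(label, exp, bos, ttl):
    """Pack the four fields into a 32-bit MPLS label entry:
    label (20 bits) | EXP (3 bits) | BoS (1 bit) | TTL (8 bits)."""
    assert 0 <= label < 2**20 and 0 <= exp < 8 and bos in (0, 1) and 0 <= ttl < 256
    return (label << 12) | (exp << 9) | (bos << 8) | ttl

def decode_mpls_label(word):
    """Unpack a 32-bit label entry back into its four fields."""
    return {
        "label": word >> 12,
        "exp": (word >> 9) & 0b111,
        "bos": (word >> 8) & 0b1,
        "ttl": word & 0xFF,
    }

w = encode_mpls_label(100, 0, 1, 64)
print(decode_mpls_label(w))  # {'label': 100, 'exp': 0, 'bos': 1, 'ttl': 64}
```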
2.5 Label Stacking
Label stacking is the placing of one or more labels on top of a packet, needed by MPLS-enabled routers to send the packet through the MPLS network. Only the bottom label of the stack carries the value 1 in its BoS bit; all other labels, including the top label, carry the value 0, as shown in the figure below:
2.6 MPLS and OSI Reference Model
OSI reference model consists of seven layers as shown in figure:-
Layer 1, the physical layer, deals with cabling and mechanical and electrical characteristics, whereas Layer 2, the data link layer, deals with the formatting of frames. The significance of the data link layer is limited to a single link between two machines, and the data link header is always replaced by the machine at the other end of the link. Layer 3, the network layer, deals with the formatting of packets end to end and has significance beyond any single data link. MPLS fits cleanly into neither Layer 2 nor Layer 3: labeled packets still carry a Layer 2 encapsulation, yet the label is not part of the Layer 3 header. MPLS is therefore viewed as a "layer 2.5" protocol for the sake of convenience.
2.7 Label switched router (LSR)
A Label Switch Router (LSR) is an MPLS-enabled router which receives labeled packets and transmits them toward their destination. There are three types of LSRs: ingress, egress, and intermediate.
2.7.1 Ingress LSR
It assigns label to an unlabeled packet and then forwards it.
2.7.2 Egress LSR
It removes the label from the labeled packet and then sends it.
NOTE: Ingress and Egress LSRs both are edge routers.
2.7.3 Intermediate LSR
It replaces the label of the packet with another existing label and then sends it.
2.8 Label operations
There are three label operations in MPLS: swap, push, and pop.
2.8.1 SWAP Operation
The top label of the packet is replaced with another label, as shown in the figure:
2.8.2 PUSH Operation
A new label is pushed onto the stack on top of the already existing label(s), as shown in the figure.
2.8.3 POP Operation
The top label is removed from the stack of attached labels; this is called popping, as in the figure:
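A minimal sketch of the three operations, modeling the label stack as a Python list with the top label at index 0 (the label values are arbitrary):

```python
# A label stack modeled as a list; index 0 is the top label.
def push(stack, label):
    """PUSH: add a new label on top of any existing ones."""
    return [label] + stack

def swap(stack, new_label):
    """SWAP: replace the top label, leaving the rest of the stack intact."""
    return [new_label] + stack[1:]

def pop(stack):
    """POP: remove the top label; the packet continues with the rest."""
    return stack[1:]

stack = push([], 16)        # ingress pushes the first (bottom) label
stack = push(stack, 30)     # a second level, e.g. a TE tunnel label
stack = swap(stack, 31)     # a core LSR swaps the top label
stack = pop(stack)          # a later hop pops the top label
print(stack)                # [16]
```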
Chapter # 3
Multiprotocol Label Switching (MPLS) Traffic Engineering
Traffic engineering is the art of moving traffic in the most optimal way from edge to edge in a network. It can move traffic away from a congested path onto a less congested one. Traffic engineering is used for reliable network operation, and it also optimizes resource utilization and network performance. When traffic growth exceeds all expectations and we are unable to upgrade the network rapidly, traffic engineering lets us shift load onto underutilized paths.
How is traffic engineered?
§ Control of internet traffic
There are several objectives of traffic engineering:
Minimizing congestion:
Congestion is caused by a lack of network resources or by mapping traffic onto the network inefficiently. Inefficient mapping leads to over-utilization of some links and under-utilization of others. With traffic engineering, we can send traffic through the network in a more optimal way.
Reliable network operation:
When a link fails in a network, service restoration is needed to recover. With traffic engineering we can quickly reroute traffic onto redundant paths and avoid data loss.
3.3 Problems in traditional IP routing
In IP routing, least-cost routing protocols are used: every router finds the shortest path to the destination. Traditional IP routing has several problems:
Forwarding of IP packets at each hop depends entirely on the destination IP address.
Packets are forwarded only along the least-cost path.
The available bandwidth of a link is not taken into account, so some links may suffer data loss through over-utilization while others remain under-utilized.
Bandwidth cannot be added to links at short notice; planning and time are required, because traffic flow is highly random.
3.4 MPLS TE as a Solution
MPLS-TE is a solution to the problems faced in traditional IP routing:
§ It allows data to flow in the most optimal way and avoids under-utilization and over-utilization of links in the network.
§ MPLS-TE can learn the configured bandwidth of the links.
§ It also learns the link attributes.
§ MPLS-TE can adapt automatically to changes in link attributes and bandwidth.
3.5 MPLS TE Features
MPLS-TE has the following features:
In a network, some tunnels are more important than others.
For example, suppose there are two tunnels, one for video and one for audio, competing for the same resources. If we give the video tunnel higher priority than the audio tunnel, the video tunnel is established first; another path is then calculated for the audio tunnel, and if none can be found, its data is dropped. Each tunnel actually has two priority levels: a setup priority and a hold priority, each ranging from 0 (most important) to 7 (least important), so the higher the priority number, the lower the importance of the tunnel. When a tunnel is set up, its setup priority is compared with the hold priority of already established tunnels: if the new tunnel's setup priority is better (numerically lower) than an established tunnel's hold priority, the established tunnel is preempted.
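The preemption rule above reduces to a one-line check; recall that 0 is the most important priority and 7 the least:

```python
def can_preempt(new_setup_priority, existing_hold_priority):
    """A new tunnel may preempt an established one only if its setup
    priority is numerically LOWER (i.e. more important) than the
    established tunnel's hold priority. Priorities range 0 (best) to 7."""
    return new_setup_priority < existing_hold_priority

# A video tunnel with setup priority 1 vs. an audio tunnel holding at 5:
print(can_preempt(1, 5))   # True  -> the video tunnel may preempt audio
print(can_preempt(5, 1))   # False -> the audio tunnel cannot preempt video
```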
One of the important features of MPLS-TE is explicit routing, through which traffic can be steered away from the path the IGP would select; it is also a vital tool for load balancing.
When a link or node failure occurs in a network, interior gateway protocols (IGPs) may take on the order of 10 seconds to reconverge. Fast Reroute pre-signals an alternative path alongside the primary path, so traffic can be switched over almost immediately.
3.6 Signaling Protocols for TE
The two important Signaling Protocols for MPLS TE are:
§ Resource Reservation Protocol with Traffic engineering Extension (RSVP-TE)
§ Constraint-based Router Label Distribution Protocol (CR-LDP) 
RSVP-TE checks in its Path message whether the requested bandwidth is available along the route; if it is, a confirming Resv message is returned, and the tunnel can then be established. Explicit routing (using the Explicit Route Object, ERO) is also a feature of RSVP-TE, as is bandwidth reservation for the LSP.
CR-LDP is an extension of LDP that provides explicit routing and reservation of resources along the route. An LSP can be set up based on constraints such as an explicit route, QoS constraints, and others. Constraint-based routing is used to meet the requirements of traffic engineering, and these requirements are met by extending LDP to CR-LDP.
3.7 Traditional IP Routing
In traditional IP routing, each router in the network makes an independent routing decision for each and every incoming packet. When a packet arrives at a router, the router consults its routing table to find the next hop for that packet based on the destination address in the packet's IP header (longest-prefix-match lookup). To build routing tables, each router runs IP routing protocols such as Border Gateway Protocol (BGP), Open Shortest Path First (OSPF), or Intermediate System-to-Intermediate System (IS-IS). As a packet passes through the network, each router repeats the same next-hop lookup. The main issue with traditional routing protocols is that they do not take capacity constraints or traffic characteristics into account when routing decisions are made. As a result, some segments of a network can become congested while segments along alternative routes remain underutilized. Even on congested links, traditional routing protocols continue to forward traffic until packets are dropped.
Conventional IP packet forwarding has several drawbacks. It has limited capability to deal with addressing information beyond just the destination IP address carried on the packet in the network. Because all traffic to the same IP destination - prefix is usually treated similarly, various problems arise. For example, it becomes difficult to do traffic engineering on IP networks. Also, IP packet forwarding does not easily take into account extra addressing-related information such as Virtual Private Network membership. 
To accommodate highly interactive application flows with low delay and packet loss threshold, it creates the need to more efficiently utilize the available network resources. This is possible through traffic engineering and MPLS.
Assuming every link in the above network has the same cost, the minimum-cost path from router R1 to R5 is R1-R2-R5, and IP traffic will follow this path while the alternative path R1-R3-R4-R5 remains unused. If such a network carries traffic whose bandwidth requirements exceed the capacity of the links on the shortest path, data loss will occur. In short, plain IP routing is incapable of handling the high traffic loads that are the need of the hour.
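Assuming equal link costs, a plain SPF computation over this five-router example confirms that all traffic lands on R1-R2-R5; the sketch below runs Dijkstra's algorithm on a hypothetical adjacency table matching the topology described:

```python
import heapq

# Equal-cost links from the example topology (cost 1 each).
GRAPH = {
    "R1": {"R2": 1, "R3": 1},
    "R2": {"R1": 1, "R5": 1},
    "R3": {"R1": 1, "R4": 1},
    "R4": {"R3": 1, "R5": 1},
    "R5": {"R2": 1, "R4": 1},
}

def shortest_path(graph, src, dst):
    """Plain SPF (Dijkstra): cost is all that matters; bandwidth is ignored."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, c in graph[node].items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + c, nbr, path + [nbr]))
    return None

print(shortest_path(GRAPH, "R1", "R5"))  # (2, ['R1', 'R2', 'R5'])
```

The longer path R1-R3-R4-R5 (cost 3) is never chosen, no matter how overloaded R1-R2-R5 becomes: that is exactly the blind spot MPLS TE addresses.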
3.8 Forwarding using MPLS TE
To understand MPLS TE forwarding, suppose routers R6 and R7 are attached in front of router R1, and both intend to send traffic to R5. If the network is based on plain IP routing, this traffic follows the path R1-R2-R5, as mentioned earlier, because the shortest-path-first algorithm applies regardless of what is configured on R6 and R7: the IP forwarding decision is taken independently at every hop. If R6 wants to send data along R6-R1-R2-R5 while R7 sends along R7-R1-R3-R4-R5, that is not possible in a plain IP network. With MPLS configured, however, the two paths can be set up as two distinct LSPs using different labels. At router R1, the value of the incoming label indicates whether the packet belongs to the LSP with R6 as head-end router or the LSP with R7 as head-end router, so R1 forwards each packet along the corresponding label switched path.
3.9 Overview of the Operation of MPLS TE
The main reason to deploy MPLS TE is to route traffic according to the resources available in the network: the bandwidth of the links and certain link attributes that the operator assigns. Instead of creating a new protocol to carry this information, OSPF and IS-IS are extended to advertise it between LSRs. When a TE tunnel is configured on an LSR, that LSR becomes the head-end LSR of the tunnel; the configuration includes the identity of the destination (tail-end) LSR and the resources the tunnel requires, e.g. its bandwidth.
3.10 Practical aspects of MPLS TE
Three basic benefits of MPLS traffic engineering are
§ Optimization of network utilization
§ Handling of unexpected congestion
§ Handling link and node failures
3.10.1 Optimizing your network utilization:
In this case, a full mesh of TE LSPs is built between a given set of routers. Each LSP is sized according to the bandwidth available between its pair of routers, and the LSPs then find the best paths through the network that meet the required bandwidth demands. By building such a full mesh, congestion is avoided because the LSPs are spread across the network along paths chosen with prior knowledge of bandwidth.
3.10.2 Handling unexpected congestion:
In this case, no full mesh is configured in advance; TE LSPs are built only when congestion is detected. This is simpler than the full-mesh approach, but it requires someone to react to network congestion as it happens. If a major network event congests some links while leaving others empty, MPLS TE tunnels can be deployed to move some traffic off the congested links and onto uncongested paths.
3.10.3 Handling link and node failures:
Another important use of MPLS TE is to recover the network quickly from node and link failures. Its Fast Reroute (FRR) feature reduces packet loss when a link or node fails. MPLS TE can even be deployed just for FRR, without being used to steer traffic along engineered paths.
3.11 Requirement of IGP
The Interior Gateway Protocol must be capable of sending the topology information to all TE-enabled routers. The IGP floods the link state of each router throughout the area, so every router in the network/area knows all the optional paths to a destination.
3.12 Flooding by the IGP
The IGP floods the TE information in the following cases:
§ Link status change
§ Configuration change
§ Periodic flooding
§ Changes in the reserved bandwidth 
3.13 MPLS TE Tunnel Attributes
The TE tunnel attributes are as follows:
§ Tunnel destination
§ Desired bandwidth
§ Setup and holding priorities
§ Path options 
3.14 Components of MPLS Traffic Engineering
The main components of MPLS traffic engineering are as following:
§ Information distribution
§ Path calculation and setup
§ Packet forwarding
3.14.1 Information Distribution Component:
Traffic engineering requires detailed knowledge of the network topology as well as dynamic information about network loading. The distributed information is divided as follows:
§ TE metric
§ Band width info
§ Administrative group info
3.14.1.1 TE metric:
It is used to construct a TE topology, which may differ from the IP topology. MPLS traffic engineering therefore maintains dual link metrics: the normal IGP metric and the TE metric.
3.14.1.2 Path Calculation (PACL):
Path calculation depends on following algorithms:
With SPF (shortest path first), OSPF and IS-IS calculate the shortest path to the destination. The SPF algorithm runs on every router, using the OSPF or IS-IS database to build a routing table. SPF works on the criterion of minimum cost per IP prefix, which is not sufficient for MPLS traffic engineering.
Since plain SPF is not sufficient for MPLS traffic engineering, another criterion is needed: resources, i.e. constraints. SPF is therefore extended to constrained shortest path first (CSPF), and OSPF and IS-IS are extended accordingly so that they not only compute the shortest path but also determine whether sufficient bandwidth is available in the network.
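CSPF can be sketched as "prune, then SPF": drop every link that cannot meet the bandwidth constraint, and run ordinary Dijkstra on what remains. The link costs and available-bandwidth figures below are hypothetical:

```python
import heapq

# Hypothetical links: (a, b) -> {"cost": IGP cost, "bw": available Mbps}
LINKS = {
    ("R1", "R2"): {"cost": 1, "bw": 10},
    ("R2", "R5"): {"cost": 1, "bw": 10},
    ("R1", "R3"): {"cost": 1, "bw": 100},
    ("R3", "R4"): {"cost": 1, "bw": 100},
    ("R4", "R5"): {"cost": 1, "bw": 100},
}

def cspf(links, src, dst, need_bw):
    """CSPF = prune every link that cannot satisfy the bandwidth
    constraint, then run ordinary SPF (Dijkstra) on what is left."""
    graph = {}
    for (a, b), attrs in links.items():
        if attrs["bw"] >= need_bw:          # the constraint check
            graph.setdefault(a, {})[b] = attrs["cost"]
            graph.setdefault(b, {})[a] = attrs["cost"]
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr, c in graph.get(node, {}).items():
            heapq.heappush(queue, (cost + c, nbr, path + [nbr]))
    return None  # no path satisfies the constraint

print(cspf(LINKS, "R1", "R5", 5))    # ['R1', 'R2', 'R5']  (shortest still wins)
print(cspf(LINKS, "R1", "R5", 50))   # ['R1', 'R3', 'R4', 'R5']
```

With a 50 Mbps requirement, the shortest path is pruned away and the tunnel is routed over the longer but adequately provisioned path.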
There must be a guarantee that the required bandwidth is available on each link over which a tunnel is built. This guarantee is provided by a signaling protocol, RSVP. For example, if we want to build a 20 Mbps tunnel, the information about whether 20 Mbps is available on each link is supplied by RSVP.
RSVP sends and receives two messages for this purpose, Path and RESV message. 
Path message: this message is sent by the head-end router and travels to the tail-end router, checking whether the path is available for reservation; if the desired path is available, it is reserved for the tunnel.
Resv message: this message is sent in the opposite direction along the same path. It is the confirmation message that confirms the availability of the path, allowing the tunnel to be established on that path and data to be transmitted.
3.14.1.3 Bandwidth info:
Bandwidth info in MPLS TE includes the following parameters:
Maximum bandwidth info:
The total bandwidth of the link.
Maximum reservable bandwidth info:
The total bandwidth of the link that can be reserved for tunnels.
Unreserved bandwidth info:
The remainder: the bandwidth on the link still available for TE reservations.
3.14.1.4 Administrative group info:
It is a 32-bit field, and it is not extendable. The operator can assign a meaning to each bit of the 32-bit field; for example, one bit might mark a POS link, an intercontinental link, or a link with a delay of about 100 ms.
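Link attribute flags are matched against a tunnel's affinity through a simple masked comparison; the bit meanings below are invented for illustration:

```python
def link_matches(link_attr_flags, tunnel_affinity, tunnel_mask):
    """A TE tunnel may use a link only if, on every bit selected by the
    mask, the link's 32-bit attribute flags equal the tunnel's affinity
    bits. Bits outside the mask are "don't care"."""
    return (link_attr_flags & tunnel_mask) == (tunnel_affinity & tunnel_mask)

POS_LINK       = 0x1   # hypothetical meaning for bit 0
LONG_HAUL_LINK = 0x2   # hypothetical meaning for bit 1

# A tunnel that insists on POS links and ignores all other bits:
print(link_matches(POS_LINK | LONG_HAUL_LINK, POS_LINK, POS_LINK))  # True
print(link_matches(LONG_HAUL_LINK, POS_LINK, POS_LINK))             # False
```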
3.14.2 T.E tunnel Path calculations:
The main factors on which tunnel path calculation depends are the following:
§ Path setup option
§ Setup and holding priority
§ Attribute flags and affinity bits
3.14.2.1 Path setup option:
A tunnel path can be set up in two ways:
Explicit: we specify every router through which the TE tunnel must be routed; in other words, we specify the path it has to follow from head end to tail end. The TE router ID or the IP address of a link can be specified.
Dynamic: the TE tunnel is routed dynamically through the network wherever it finds best from the head-end to the tail-end router. In the dynamic way, only the destination of the tunnel is configured.
3.14.2.2 Setup and holding priority
Suppose a tunnel with more hops (a longer path) is more important than one on a shorter path, for example because the shorter path does not have enough bandwidth. Without priorities, the more important tunnel might be routed after the less important one, which would not be an optimal way of routing. TE therefore has priorities to avoid such situations: the most important tunnels can be routed optimally by preempting less important tunnels. Two priorities can be configured for each tunnel.
Setup priority: it indicates how important the tunnel is when it comes to preempting other tunnels. If the setup priority of a new tunnel is better (numerically lower) than the hold priority of an established tunnel, the new tunnel preempts the established one.
Hold priority: it indicates how strongly the tunnel holds on to its reservations on the links in the network.
Re-optimization: if a tunnel ended up on a path that is not optimal, for example because some links were down or lacked sufficient bandwidth at setup time, but those links are now available again, re-optimization allows the tunnel to be rerouted along the most optimal path.
3.14.3 Packet-Forwarding down the tunnels:
Once the path is set-up, packet forwarding process begins at the Label Switch Router and is based on the concept of label switching.
TE tunnel is enabled to forward traffic in three ways:
Static routing: the simplest way to route traffic onto an MPLS TE tunnel is to configure a static route on the tunnel's head-end router.
Policy-based routing (PBR): suppose we have two types of data, video and text, and we want to send each on a different tunnel; this is possible using policy-based routing. It is done by configuring policy route maps on the incoming interface: match the video traffic with a match statement, then use set interface tunnel to transmit the video data on the desired tunnel. With PBR we can send particular traffic over a particular link without modifying the router's routing table.
Auto route Announce:
The mpls traffic-eng autoroute announce command is configured on the tunnel interface of the MPLS TE tunnel's head-end router so that the LSR inserts IP destinations into its routing table with the TE tunnel as next hop. Essentially, autoroute announce modifies the SPF algorithm so that IP prefixes downstream of the TE tunnel's tail-end router are installed in the head-end router's routing table with that TE tunnel as next hop.
Chapter # 4
IPv6 over MPLS
IPv6 is the next major version of the Internet Protocol. It is the successor to IPv4, which has been in continuous use since the 1980s. IPv6 addresses several limitations of IPv4 and also provides many new features and concepts that improve Internet communications. Some of the key features of IPv6 are a large address space, efficient addressing, a simplified protocol header, support for end-to-end QoS, and improved security. IPv6 was developed in the mid-1990s by the Internet Engineering Task Force (IETF). It was primarily engineered to eliminate the fundamental address space limitation of IPv4: IPv6 uses 128 bits for IP addresses versus the 32 bits used in IPv4, providing a practically unlimited address space that allows any device to have a unique IP address. This removes the need for network address translation (NAT) as a means of coping with limited address space, although today NAT is also viewed as a component of network security. IPv6 improves routing efficiency through better address aggregation, which results in smaller Internet routing tables, and provides better end-to-end security, improved QoS support, and increased mobility.
Following are the main features of IPv6:
§ IPv6 stack can obtain information about other hosts so that it won't duplicate their IP address, if it needs to use auto-configuration.
§ IPv4 addresses cannot be represented as such in IPv6. A transition feature called Teredo allows tunneling IPv6 traffic over IPv4 networks.
§ DHCP servers should support DHCPv6 to assign IPv6 addresses.
§ In IPv6, only the sending host fragments the packets. Routers do not, unlike IPv4.
§ IPv6 supports multicasting - The ability to send a single packet to multiple destinations and it is a part of the base specification of IPv6.
§ Unlike IPv4 addresses, which were distributed on a first-come-first-serve basis, IPv6 addresses are expected to be distributed by regional internet registries and that enables the possibility to have a particular range of IP addresses specific to a continent or a country.
IPv6 provides the following advantages to network and IT professionals:
§ Larger address space for global reachability and scalability
§ Simplified header format for efficient packet handling
§ Hierarchical network architecture for routing efficiency
§ Support for widely deployed routing protocols
§ Auto-configuration and plug-and-play support
§ Elimination of the need for network address translation (NAT) and application-level gateways (ALG)
§ Embedded security with mandatory IPSec implementation
§ Enhanced support for Mobile IP and Mobile Computing Devices
§ Increased number of multicast addresses
4.4 Comparison of IPv6 with IPv4 Header Format
The obvious change between the two headers is the length of the addresses: the source and destination addresses are four times bigger in the IPv6 header than in IPv4. The header is also simplified, because certain fields have been eliminated. For example, the IPv6 header has no checksum, which means the IP header checksum does not need to be recalculated at every hop, providing faster and simpler forwarding; the burden of checking the header now lies entirely with the upper-layer protocols, i.e. TCP and UDP. Besides the checksum, the Fragment Offset field has been removed: routers cannot fragment packets in IPv6 as they did in IPv4, and fragmentation is avoided by the compulsory use of path MTU discovery. The IHL (Internet Header Length) field has also been eliminated, as the IPv6 header is always 40 bytes long, whereas the IPv4 header was 20 bytes long but extendable. Options have been replaced by extension headers in IPv6. They are similar to the options of IPv4 and are chained after the IPv6 header: a Next Header field in the IPv6 header indicates that an extension header follows, instead of the usual TCP or UDP header.
4.5 Address notation:
4.5.1 IPv4 Address Notation:
IPv4 addresses are represented by four octets (8-bit fields), each written in standard decimal notation and separated by dots.
4.5.2 IPv6 Address Notation:
The preferred convention for IPv6 addresses is represented by eight 16-bit fields written in hexadecimal where each field is separated by colons.
4.5.3 Mixed IPv6 and IPv4 Address:
In this case, six hexadecimal segments are followed by a dotted-decimal IPv4 part.
Example: 0:0:0:0:0:FFFF:192.168.1.1
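These notations can be checked with Python's standard ipaddress module, which converts between the full (exploded), compressed, and IPv4-mapped forms:

```python
import ipaddress

# Full form vs. compressed form of the same IPv6 address.
addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr.compressed)    # 2001:db8::1  (the longest zero run collapses to ::)
print(addr.exploded)      # 2001:0db8:0000:0000:0000:0000:0000:0001

# A mixed IPv6/IPv4 (IPv4-mapped) address.
mapped = ipaddress.IPv6Address("::ffff:192.168.1.1")
print(mapped.ipv4_mapped)  # 192.168.1.1
```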
4.6 IPv6 Addressing:
Three types of address are used for IPv4, namely unicast, broadcast, and multicast addresses. In IPv6, a multicast address is used instead of broadcast address. In addition, a new type of address known as anycast address is also utilized in IPv6.
An IPv6 address can be classified into one of three kinds:
A unicast address uniquely identifies one interface of an IPv6 node. A packet sent to a unicast address is delivered to the interface identified by that address.
A multicast address identifies a set of IPv6 interfaces. A packet sent to a multicast address is processed by all members of the multicast group.
An anycast address is assigned to multiple interfaces. A packet sent to an anycast address is delivered to only one of those interfaces.
4.7 Carrying IPv6 over MPLS Backbone
After the huge success of MPLS VPN, MPLS runs in most networks today. If customers connected to a service provider's network want to run IPv6 and the provider wants to carry IPv6 across its network, IPv6 could simply be enabled on all its routers. However, this approach has drawbacks. First, the provider has to enable a new protocol (IPv6) on all its routers; with both IPv4 and IPv6 running, every router runs a dual stack. Second, the customers who still run IPv4 are not going to replace their networks with IPv6 overnight, so IPv4 and IPv6 have to run in parallel long into the future. Moreover, if the provider also needs to run MPLS for IPv6, LDP support for IPv6 would be required, which was not yet implemented. Multiprotocol Label Switching, as its name indicates, can transport more than just IPv4 as payload: with MPLS already running in the network, the labeled packets can be IPv6 packets, without the provider's core routers needing to run IPv6.
There are three methods of carrying IPv6 over MPLS backbone
§ Any Transport over MPLS (AToM).
§ 6PE and 6VPE (IPv6 carried by dual-stack provider edge routers over the MPLS core).
§ IPv6 in IPv4 tunnels between customer edge routers over MPLS VPN.
With AToM, the MPLS payload is actually a Layer 2 frame. On the edge LSRs, the frames are labeled and then transported across the MPLS backbone through a virtual circuit or pseudowire. The transported Layer 2 frames can be Ethernet, High-Level Data Link Control (HDLC), ATM, Frame Relay, etc. All these solutions have the advantage that the core routers in the MPLS backbone do not need to run IPv6, because they switch only labeled packets; they are more popular than running IPv6 natively across the backbone. The AToM solution has two drawbacks compared to 6PE and 6VPE. The first is that the MPLS payload consists of frames instead of IPv6 packets, so an extra Layer 2 header is transported across the backbone. The second is that pseudowires and virtual circuits are point-to-point in nature, whereas the 6PE and 6VPE solutions are any-to-any. The last method of carrying IPv6 over an MPLS backbone is to use MPLS VPN: IPv4 is carried inside VPNs over the MPLS backbone, and to carry IPv6 traffic over IPv4, the Customer Edge routers need tunnels between them. This means these routers must be dual-stack routers; the Provider Edge routers see only IPv4 packets coming from the Customer Edge routers. In short, the advantage is that MPLS VPN is already deployed in most service provider networks and the provider routers do not need to run IPv6; the disadvantage is that the Customer Edge routers need tunnels configured, and the extra IPv4 header adds overhead.
4.7.1 6to4 Tunnel
A 6to4 tunnel allows IPv6 domains to be connected over an IPv4 network and allows connections to remote IPv6 networks, such as the 6BONE. The simplest deployment of 6to4 tunnels is to interconnect multiple IPv6 sites, each of which has at least one connection to a shared IPv4 network. This IPv4 network could be the global Internet or your corporate backbone.
The 6to4 tunnel treats the IPv4 infrastructure as a virtual nonbroadcast link, using an IPv4 address embedded in the IPv6 address to find the other end of the tunnel. Each IPv6 domain requires a dual-stack router that automatically builds the IPv4 tunnel, using the reserved routing prefix 2002::/16 with the IPv4 address of the tunnel destination concatenated to it. The key requirement is that each site has a 6to4 IPv6 address; each site, even one with just a single public IPv4 address, thus has a unique IPv6 routing prefix. Figure 24 shows the configuration of a 6to4 tunnel interconnecting 6to4 domains.
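The prefix construction described above can be sketched in a few lines. The helper below is a hypothetical illustration (not part of the simulation): it embeds a site's public IPv4 address into its 6to4 /48 routing prefix, using Python's standard ipaddress module.

```python
import ipaddress

def sixto4_prefix(ipv4: str) -> ipaddress.IPv6Network:
    """Build a site's 6to4 routing prefix 2002:V4ADDR::/48.

    The 32-bit IPv4 address is placed directly after the reserved
    16-bit 2002::/16 prefix, giving each site a unique /48.
    """
    v4 = int(ipaddress.IPv4Address(ipv4))
    v6 = (0x2002 << 112) | (v4 << 80)  # 2002 | IPv4 | 80 zero bits
    return ipaddress.IPv6Network((v6, 48))

print(sixto4_prefix("192.0.2.1"))  # 2002:c000:201::/48
```

This is why even a site with a single public IPv4 address automatically owns a unique IPv6 routing prefix: the uniqueness of the IPv4 address carries over into the 6to4 prefix.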
We recommend that each site have only one 6to4 address, assigned to the external interface of the router. All sites need to run an IPv6 interior routing protocol, such as Routing Information Protocol next generation (RIPng), for routing IPv6 within the site; exterior routing is handled by the relevant IPv4 exterior routing protocol.
Routing and addressing architectures are closely related and inseparable in IP-based networks. They are among the first considerations when planning the migration from IPv4 to IPv6: revisiting the IPv4 routing and addressing approach is critical to taking advantage of the new capabilities associated with IPv6.
PART-II SIMULATION & RESULTS ON OPNET MODELER 14.5
Chapter # 5
The routers used are the following:
The workstations used are the following:
The links used are the following:
§ DS3 (44.736 Mbps) for edge links
§ DS1 (1.544 Mbps) for core links
Chapter # 6
The topology followed throughout the simulations is shown in Figure 6.1. MPLS runs in the core routers, while the edge workstations run only OSPF. Core links are DS1, while edge links are DS3.
Chapter # 7
Simulations & Results
7.1 Open Shortest Path First (OSPF)
In this scenario we analyze the traffic by configuring a dynamic routing protocol, Open Shortest Path First (OSPF), an adaptive routing protocol for Internet Protocol networks. It follows a link-state routing algorithm, belongs to the group of interior routing protocols, and operates within a single autonomous system (AS). In conventional IP routing, each router makes an independent routing decision for every incoming packet. When a packet arrives at a router, the router consults its routing table to find the next hop for that packet, based on the destination address in the packet's IP header. To build these routing tables, each router runs an IP routing protocol such as Border Gateway Protocol (BGP), Open Shortest Path First (OSPF), or Intermediate System-to-Intermediate System (IS-IS). As a packet passes through the network, each router repeats the same step of finding the next hop for the packet.
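The shortest-path-first computation that each OSPF router performs over its link-state database can be sketched as a standard Dijkstra search. The router names below mirror the scenario, but the link costs are hypothetical, chosen only for illustration.

```python
import heapq

def spf(lsdb, source):
    """Dijkstra's shortest-path-first over a link-state database.

    lsdb maps each router to a list of (neighbor, cost) links.
    Returns the minimum cost to every reachable router and the
    predecessor tree from which next hops are derived.
    """
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was found
        for v, cost in lsdb.get(u, ()):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return dist, prev

# Toy link-state database with hypothetical costs
lsdb = {
    "ER-1": [("R-1", 10), ("R-2", 20)],
    "R-1":  [("ER-2", 10)],
    "R-2":  [("ER-2", 10)],
    "ER-2": [],
}
dist, prev = spf(lsdb, "ER-1")
print(dist["ER-2"], prev["ER-2"])  # minimum-cost path goes via R-1
```

Because every router picks only the minimum-cost path, all traffic converges on the same links; this is exactly the behavior examined in the scenarios below.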
7.1.1 OSPF without Load Balancing
First we deal with a simple case in which traffic of 500 Kbps is sent from three transmitting nodes, W-1, W-2, and W-3, over the OSPF-enabled network, as shown in Figure 7.1.
It is obvious that the traffic is completely received at the receiving node, because the OSPF link-state algorithm and the link capacities can accommodate the traffic sent, as shown in Figure 7.2.
Then we double the traffic sent, to 1 Mbps. OSPF routes traffic on the minimum-cost path, so the traffic is sent through the preferred path (W-1, W-2, W-3) → ER-1 → R-1 → ER-2 → W-4, as shown in Figure 7.1.
Similarly, when 1 Mbps of traffic was sent from each node, it followed the same path. One link can accommodate only about 1.5 Mbps, so traffic was dropped because the aggregate exceeded the link capacity. The comparison of graphs 1 and 2 makes it obvious that OSPF cannot handle such congestion efficiently: the second and third paths were available at this time, with free network resources, but due to OSPF's minimum-cost strategy the traffic was not completely delivered. Graph 2 shows that W-1, W-2, and W-3 were all utilizing the same path, so their traffic was not completely received. Initially W-1's traffic was completely received while W-2's and W-3's traffic was dropped; after some time W-2's and W-3's received traffic increased slightly, and at that point the traffic of all three nodes started dropping.
7.1.2 OSPF with Load Balancing
To resolve this problem we added the load-balancing feature to the same OSPF network and then sent the same amount of traffic, 1 Mbps, as shown in Figure 7.4.
As is obvious from the routes shown in Figure 7.4, the 1 Mbps of traffic is load-balanced between two paths, via P-1 and P-2, and this uniform distribution enables the routers to deliver the traffic simultaneously and without any loss, as shown in Figure 7.5.
Now this topology can handle traffic of up to 1 Mbps even when all three nodes W-1, W-2, and W-3 send traffic to W-4 simultaneously. Since the link capacity is 1.544 Mbps, the topology and routing protocol should be able to handle traffic of up to about 1.5 Mbps. With this in mind, we sent 1.5 Mbps of traffic from each of W-1, W-2, and W-3 to W-4 simultaneously.
Figure 7.6 demonstrates the failure of OSPF even with load balancing; the same thing happened when 1 Mbps of traffic was sent with OSPF without load balancing.
Initially, W-4 was successfully receiving the traffic of W-1 and W-2: each 1.5 Mbps flow was divided over the two paths into 750 Kbps shares, so both nodes' traffic was completely received. During this time both links were congested with 1.5 Mbps of traffic, leaving only about 880 Kbps of free bandwidth, which W-3 was utilizing. When W-3's received traffic at W-4 increased to about 1 Mbps, the received traffic of W-1, W-2, and W-3 each fell to approximately 1 Mbps. The traffic was dropped because all three transmitting nodes were sending over only two load-balanced paths.
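A back-of-envelope check, assuming the DS1 capacity of 1.544 Mbps per core link stated earlier, shows why drops are unavoidable here regardless of how the load balancer splits the flows:

```python
DS1 = 1.544  # Mbps, capacity of one core (DS1) link

offered = 3 * 1.5        # W-1, W-2 and W-3 each sending 1.5 Mbps
capacity = 2 * DS1       # two load-balanced DS1 paths
dropped = max(0.0, offered - capacity)

print(f"offered={offered} Mbps, "
      f"capacity={capacity:.3f} Mbps, "
      f"dropped={dropped:.3f} Mbps")
```

The offered load of 4.5 Mbps exceeds the 3.088 Mbps aggregate capacity of the two paths, so roughly 1.4 Mbps must be dropped no matter how OSPF distributes the flows.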
7.2 MPLS Traffic Engineering
In this part, we analyze MPLS Traffic Engineering, first configured manually and then with RSVP, in order to resolve the traffic problems in the network, namely link failure and congestion. TE is defined as large-scale network engineering for IP network performance evaluation and optimization.
In this topology, we used the same edge and core links as in the previous case. We configured MPLS traffic engineering to improve utilization, i.e., if three nodes send traffic simultaneously, how is this situation handled by MPLS?
In an MPLS-based network, there are different virtual paths called LSPs, which deliver different kinds of data tagged with different label values.
7.2.1 MPLS with Manual TE
To meet the traffic engineering objectives, we place the demands over the links in such a way that the traffic distribution is balanced and congestion or hot spots are eliminated in the network.
Basically, we created one FEC to carry traffic destined for edge router node 4 and then created a trunk. We added this FEC on the edge router of the MPLS domain and assigned it the administrative rights to route the incoming traffic onto three different LSPs, such that if traffic comes from node:
§ W-1, the edge router routes the data on LSP1.
§ W-2, the edge router routes the data on LSP2.
§ W-3, the edge router routes the data on LSP3.
As is also clear from Figure 7.8 below, the traffic is completely received and none is dropped due to congestion, since the traffic coming from the three nodes uses dedicated links, i.e., LSPs.
In this way we handle traffic congestion in the network, and the traffic is received efficiently. The problem in this case is that such a network requires an administrator to decide which traffic is sent on which path in order to reduce congestion.
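The manual configuration above amounts to a static lookup table on the ingress edge router. The sketch below is a hypothetical illustration of that idea (the table and function are not OPNET artifacts); it also makes the stated drawback concrete: every entry must be maintained by hand.

```python
# Static FEC-to-LSP table, as an administrator would configure it
# on the ingress edge router (node names taken from the scenario).
FEC_TO_LSP = {
    "W-1": "LSP1",
    "W-2": "LSP2",
    "W-3": "LSP3",
}

def select_lsp(source: str) -> str:
    """Pick the explicitly routed LSP for traffic from a source node."""
    return FEC_TO_LSP[source]

print(select_lsp("W-2"))
```

Traffic from an unknown source, or from a source whose LSP has failed, has no entry here; handling those cases dynamically is exactly what RSVP-based TE, discussed next, provides.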
7.2.2 MPLS TE with RSVP
Resource Reservation Protocol (RSVP) is a control-message protocol used to provide terminals with the bandwidth needed for data transfer or for predefined QoS services, e.g., Video on Demand (VoD), Voice over IP (VoIP), and network meetings; routers are classified as RSVP-supporting and non-RSVP routers.
We have resolved the problem of traffic congestion by configuring MPLS TE manually and creating LSPs. The question now arises: if a node or link failure occurs, due to a fiber cut or similar cause, how can the traffic be recovered?
The answer lies in configuring MPLS Traffic Engineering with the Resource Reservation Protocol (RSVP) and utilizing one of its features, local protection, which includes Fast Reroute, link protection, and node protection.
We configured RSVP in the MPLS-enabled network and then sent voice traffic of 120 Kbps while creating a link-failure situation on the path of LSP2: 500 seconds after the traffic starts, the link between R3 and R4 fails, and after 600 seconds it recovers. A backup LSP is also included, which comes into action when the link failure occurs. When a link or node failure takes place, an interior gateway protocol may take on the order of 10 seconds to converge. Fast Reroute pre-signals the backup path alongside the primary path.
The simulation results are shown in Figures 7.10, 7.11, and 7.12. According to these graphs, the primary LSP traffic (in bits/sec) initially matches the traffic sent, but the backup LSP comes into action after the link failure, which shows that RSVP already knew this path and had included it in the backup LSP list of both primary LSPs. Figure 7.11 shows the RSVP traffic sent and received; the received traffic is almost identical, with a small variation due to the link failure that is negligible. The third graph shows the rerouted traffic and demonstrates the efficiency of Fast Reroute: the reroute takes place in minimal time and the traffic is recovered, so that the user does not even realize a failure has occurred in the network. This is one advantage of using RSVP; it also has other features, one of which is reserving bandwidth on the paths, which again helps to recover traffic and reduce congestion and will be our next concern.
7.3 IPv6 over MPLS backbone
7.3.1 IPv6 Network
In this scenario we use the IPv6 addressing scheme instead of IPv4. We assign IPv6 addresses to all interfaces and set the IPv4 address to “No IP Address”. To configure IPv6 in the network, the Local Address attribute is set to Default EUI-64 and the Global Addresses attribute to EUI-64. The first 64 bits of the address are the network prefix and the remaining 64 bits are the interface ID. In this scenario the RIPng routing protocol is configured for IPv6 traffic.
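The EUI-64 setting mentioned above derives the 64-bit interface ID from the interface's 48-bit MAC address: the universal/local bit is flipped and 0xFFFE is inserted in the middle. A minimal sketch, using a hypothetical MAC address for illustration:

```python
def eui64_interface_id(mac: str) -> str:
    """Derive the Modified EUI-64 interface ID from a 48-bit MAC:
    flip the universal/local bit of the first octet and insert
    0xFFFE between the two 24-bit halves."""
    octets = [int(b, 16) for b in mac.split(":")]
    octets[0] ^= 0x02                       # flip the U/L bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    # Render as four 16-bit IPv6 groups
    return ":".join(f"{(eui[i] << 8) | eui[i + 1]:x}"
                    for i in range(0, 8, 2))

print(eui64_interface_id("00:1a:2b:3c:4d:5e"))  # hypothetical MAC
```

The router then appends this interface ID to the 64-bit network prefix to form the full 128-bit IPv6 address, which is why no manual per-host address assignment is needed.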
W-1 sends traffic of 1.2 Mbps towards W-2 as shown below:
The drawback of this method is that all nodes must support IPv6, and it is not practical for service providers and customers to change their whole infrastructure to IPv6 because they are already using the IPv4 addressing scheme.
7.3.2 Dual Stack Network
In this scenario all nodes are dual-stacked, supporting both the IPv4 and IPv6 addressing schemes. We configure the RIP and RIPng routing protocols for IPv4 and IPv6 simultaneously.
W-1 sends traffic of 1.2 Mbps towards W-2 as shown below:
There are two drawbacks to this method. First, the service provider needs to enable a new protocol for IPv6, because the routers run a dual stack. Second, the other customers are still running IPv4 and will not replace their networks with IPv6 overnight. Therefore, we have to follow a technique in which IPv4 and IPv6 run in parallel.
7.3.3 IPv6 over MPLS using 6to4 Tunnel
Many service providers use tunnels to carry IPv6 traffic over an MPLS backbone. The following are the main tunneling methods for IPv6 implemented in Cisco IOS:
§ IPv6 over IPv4 GRE tunnels
§ Manual IPv6 tunnels
§ 6to4 tunnels
§ IPv4-compatible IPv6 tunnels
§ ISATAP tunnels
In this scenario W-1 and W-3 are IPv6-stack nodes, while W-2 and W-4 are IPv4-stacked. We use a 6to4 tunnel to carry IPv6 traffic over the IPv4 MPLS backbone, while traffic from W-2 to W-4 is carried using IGP routing. The CE-1 and CE-2 routers are dual-stack, i.e., running both IPv6 and IPv4, while the MPLS backbone routers are IPv4-only; there is no need to run IPv6 on the PE and P routers.