
Case Study – OpenStack Networking


Contents

1. Introduction

1.1 Introduction to OpenStack Networking

1.2. Objective of case study

1.3. Infrastructure used for case study

1.4. VXLAN Tunnel

1.5. Software Defined Networking (SDN)

1.6. Open Daylight

2. Design and Implementation

2.1. Configuration Steps

2.2. Installing PackStack Utility

2.3. Generating and editing the answer file

2.4. Installing OpenStack using answer-file

2.5. After Installation Steps

3. Creating Cloud Environment

3.1. Creating TenantA

3.2. Creating TenantB

3.3. Creating External Network:

3.4. Creating Tenant Networks:

3.5. Creating Router:

3.6. Adding OS Image To Glance:

3.7. Creating Volume:

3.8. Creating Security Groups:

3.9. Generating Key Pairs:

4. Analysis of Network Traffic Flow

4.1 First Scenario:

4.2 Second Scenario

4.3 Third Scenario

4.4 Fourth Scenario

References


1. Introduction

Cloud computing is a type of internet-based computing that provides shared computer processing resources and data to computers and other devices on demand.

OpenStack is a set of software tools for building and managing cloud computing platforms for public and private clouds. OpenStack manages large pools of compute, networking and storage resources throughout a datacenter, administered through a dashboard and the OpenStack API.

Figure 1.1 OpenStack Components

Some of the important components of OpenStack are as follows.

Dashboard (Horizon):

Horizon is the web-based dashboard that provides an overview of the OpenStack environment and allows all services to be monitored easily.

Keystone:

Keystone is the identity service responsible for authenticating users and determining which services each user is authorized to use.

Compute NOVA:

Nova is the compute service on which all instances are launched.

Neutron:

Neutron is a networking service focused on delivering Network-as-a-Service (NaaS) in virtual compute environments.

Glance:

Glance stores OS images in the OpenStack environment. These images are then used to provision virtual machine instances.

Swift:

Swift is the object storage service.

Cinder:

Cinder is also a storage service; it provides block storage volumes that can be attached to instances.

1.1 Introduction to OpenStack Networking

OpenStack Networking allows users to create and manage network objects, such as networks, subnets, and ports, which other OpenStack services can use. Plug-ins can be used to support different networking equipment and software, providing flexibility to the OpenStack architecture and deployment.

1.2. Objective of case study

Creating a cloud-based datacenter, or any cloud-based environment, involves creating and running VMs on a virtualization platform. One of the most important services required is the network service, which makes data communication between these VMs possible. In OpenStack, networking is provided by the Neutron component.

The main task in this case study is to create 3 hosts: 2 compute nodes and 1 network/controller node. All of these VMs run in a virtual environment and use CentOS 7 as their operating system.

Figure 1.2.1 Topology for case study

An important feature of cloud computing is that it can provide separate sets of resources to different customers, which are called tenants in cloud terminology. VXLAN tunneling is the technique used by the cloud to separate the traffic of different tenants, and a virtual switch (Open vSwitch) is used in this case study to create connectivity among the components. A virtual router on the network node performs SNAT and assigns floating IP addresses in order to provide external reachability.

This case study is divided into 3 main sections:

  • Installing OpenStack and establishing the cloud components used in this project.
  • Configuring the underlay and overlay networks and Neutron in order to enable communication among the VMs running in the cloud environment.
  • Monitoring and studying the behavior of the network using tcpdump or Wireshark; in this case study we use Wireshark.

Figure 1.2.2 OpenStack Deployment

 

1.3. Infrastructure used for case study

We will implement the project in VMware vCloud by creating 3 virtual machines connected to three different networks. The specifications for each virtual machine are listed in the table below.

Name              OS        Memory  No. of CPU  Disk   Network Interfaces
VM1 (Compute1)    CentOS 7  16 GB   4           16 GB  ens32: 192.168.13.11, ens34: 172.16.10.101, ens35: 192.168.10.101
VM2 (Compute2)    CentOS 7  16 GB   4           16 GB  ens32: 192.168.13.12, ens34: 172.16.10.102, ens35: 192.168.10.102
VM3 (Controller)  CentOS 7  8 GB    4           16 GB  ens32: 192.168.13.13, ens34: 172.16.10.10, ens35: 192.168.10.10

1.4. VXLAN Tunnel

VXLAN tunneling is a method used to transfer encapsulated packets from one node to another. The packets are encapsulated with VXLAN, UDP and IP headers. The VXLAN header contains the VNI (VXLAN Network Identifier). The VNI is similar to a VLAN ID and is used to differentiate the traffic of each segment in the overlay network. The 24-bit VNI expands the range of virtual networks from 4096 (VLAN) up to 16 million (VXLAN). The routers and switches participating in VXLAN have a special interface called a VTEP (Virtual Tunnel End Point). The VTEP interface bridges the overlay VNIs with the underlay layer 3 network. The VXLAN frames (encapsulated packets) are delivered between hosts via a VXLAN tunnel created between the source and destination VTEP interfaces. The underlay network transmits the encapsulated packet using the IP addresses of the VTEP interfaces: the outer source IP address is that of the initiating VTEP interface and the outer destination IP address is that of the terminating VTEP interface. Upon reaching the destination VTEP interface, the VXLAN frame is decapsulated and the original frame is forwarded to the destination host based on the MAC address in the original frame.

Figure 1.4 VXLAN Packet Format

In Figure 1.4, VXLAN Packet Format is shown in detail which includes VXLAN Header, UDP Header, Outer IP Header, Outer MAC Header and FCS.

Original L2 Frame:

This is the Layer 2 frame generated by the VM, with a Layer 2 header that includes the MAC addresses of the communicating VMs. Since both communicating VMs must be in the same VXLAN and share the same VNI, both VMs perceive that they are in the same LAN segment.

VXLAN Header:

The VXLAN header contains a 24-bit VNI (VXLAN Network Identifier) field, which is used to isolate the traffic of one segment from another. There are two reserved fields of 24 and 8 bits for future use, and an 8-bit flags field.

Outer UDP Header:

The original frame with the VXLAN header is encapsulated in an outer UDP header, which VXLAN uses for transport. The source port number is chosen by the initiating VTEP while the destination port is always the IANA standard UDP port 4789.

 Outer IP Header:

The outer IP header contains the IP addresses of the encapsulating VTEP interface and the decapsulating VTEP interface. These IP addresses are mapped to the MAC addresses of the communicating VMs; this is called MAC-to-IP mapping in VXLAN transmission. The encapsulating VTEP interface IP address is the outer source IP address, and the decapsulating VTEP interface IP address is the outer destination IP address.

Outer Ethernet/MAC Header:

The outer Ethernet/MAC header carries the original Ethernet frame after it has been encapsulated with the VXLAN, UDP and outer IP headers. The source MAC address is the MAC address of the encapsulating VTEP interface, and the destination MAC address is the MAC address of the decapsulating VTEP interface or of an intermediate Layer 3 router. The source and destination MAC addresses change at each hop while the traffic is routed.

As shown in Figure 1.4, VXLAN encapsulation increases the header length by 50 bytes. Therefore, to avoid fragmentation we must allow larger (jumbo) frames in the underlay network when transmitting these frames.
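As a hedged illustration of how this overhead can be accommodated, the sketch below assumes the underlay tunnel interface is named ens35 (as in our topology) and that the physical network actually supports the larger frame size:

# raise the MTU of the underlay (tunnel) interface so that 1500-byte tenant frames
# plus the ~50-byte VXLAN/UDP/IP overhead fit without fragmentation
ip link set dev ens35 mtu 1550

# verify the new MTU
ip link show dev ens35

Alternatively, the tenant-side MTU can be lowered (for example to 1450) so that encapsulated frames still fit within a standard 1500-byte underlay MTU.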

1.5. Software Defined Networking (SDN)

Software Defined Networking is a centralized approach to managing and programming individual network devices through a centralized SDN controller, thereby controlling the behavior of the network. SDN decouples the forwarding and control planes of the network to minimize complexity and enable faster innovation at both planes. This decoupling and centralization of intelligence makes programming of the network controls easier. SDN has a centralized programmable controller that programs individual network devices dynamically and thus controls the whole network. This programmable nature makes SDN flexible enough to adapt to changing network requirements. In a traditional network the static architecture is decentralized and complex, and therefore hard to troubleshoot; the distributed switching process depends on each switch making the correct switching decision per packet or flow, which adds undesired latency and complexity to the network. With the centralized intelligence of SDN, troubleshooting, monitoring and programming of the network become much simpler.

1.6. Open Daylight

OpenDaylight (ODL) is an open platform for customizing and automating complex networks. ODL is an open source SDN controller platform for network programmability. It is widely used because of its flexibility and reliability. The following statement from the OpenDaylight project's official website introduces OpenDaylight well:

“OpenDaylight is a highly available, modular, extensible, scalable and multi-protocol controller infrastructure built for SDN deployments on modern heterogeneous multi-vendor networks. OpenDaylight provides a model-driven service abstraction platform that allows users to write apps that easily work across a wide variety of hardware and south-bound protocols.”

2. Design and Implementation

For our case study, we have designed a cloud environment with three virtual machines, i.e. Compute1, Compute2 and a Controller/Network node. On the compute nodes, we created two network segments and named them Tenant A and Tenant B. For each segment, we launched three instances. Two instances of Tenant A are on the Compute1 node while its third instance runs on Compute2. Similarly, two instances of Tenant B run on the Compute2 node while its third one runs on Compute1, as shown in the figure below. The Controller/Network node runs the Neutron services and handles networking. The network node provides external access to both tenants' machines and also provides inter-tenant communication.

Over this underlay network, we used the PackStack utility to install the OpenStack components. The configuration steps for installing OpenStack using PackStack are explained further in this report.

2.1. Configuration Steps

Pre-installation checklist:

  • Made sure that all the interfaces of the hosts are active and that an appropriate IP address is assigned to each interface
  • Configured the /etc/hosts file on all three nodes and added entries for the three hosts, i.e. Controller, Compute1 and Compute2

172.16.10.10 controller.example.com  controller

172.16.10.101 compute1.example.com  compute1

172.16.10.102 compute2.example.com  compute2

  • Configured /etc/resolv.conf and provided the address of a specific DNS server in this configuration file
  • Disabled the NetworkManager service. NetworkManager automates network settings and can disrupt Neutron functionality; therefore, we disabled it with the following commands:

systemctl stop NetworkManager

systemctl disable NetworkManager

  • Disabled the firewall on each host machine

systemctl stop firewalld

systemctl disable firewalld

In our case study we are building the project virtually in a cloud IaaS environment, so it is acceptable to disable the firewall. In a production environment, however, it is not recommended to disable firewalls, as they provide security and control over the network; proper firewall configuration should be done instead.
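As a hedged illustration only (the exact ports depend on which services run on each node), a production-style firewalld configuration for this topology might keep the firewall enabled and open just the overlay and management ports:

# allow VXLAN tunnel traffic between the nodes (IANA standard port)
firewall-cmd --permanent --add-port=4789/udp
# keep SSH open for management access
firewall-cmd --permanent --add-service=ssh
# apply the permanent rules
firewall-cmd --reload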

2.2. Installing PackStack Utility

To install PackStack, we followed these steps:

  • On RHEL, we downloaded and installed the RDO repository RPM to set up the OpenStack repository

sudo yum install -y https://rdoproject.org/repos/rdo-release.rpm

  • For CentOS, we installed Extra RPM to set up the OpenStack repository

sudo yum install -y centos-release-openstack-stein

  • It is important to make sure that the repository is enabled

yum-config-manager --enable openstack-stein

  • Also, it is important to update the current packages

sudo yum update -y

  • We installed PackStack utility

sudo yum install -y openstack-packstack

There are two options to deploy OpenStack through PackStack. The first deployment method is "All-In-One" and the second uses an answer file. PackStack's "allinone" option installs and configures a single, fully-loaded, standalone OpenStack deployment using the default configuration, with all components on a single machine. This is not the topology of our case study, as we have three VMs in our project.

The second method of installing OpenStack through PackStack is to use an answer file. We first generate the answer file in our home directory and then edit it according to our requirements. Using the answer file, we can deploy a customized OpenStack installation as described in that file. The answer file instructs PackStack to install certain components and further provides fields to configure the basic functions of each component.

2.3. Generating and editing the answer file

As discussed in the section above, we are using an answer file to install OpenStack. We first generated an answer file and named it "answer.txt". The answer file is generated with the following command, run as root:

packstack --gen-answer-file=answer.txt

This answer file is stored as "answer.txt" and is then used by PackStack to install OpenStack. We used the Vim editor to edit the answer file and customized all the specifications and components needed to install OpenStack on all nodes. Please note that the answer file used in our implementation is provided in Appendix B of this document.

In the answer file, we customized all the specifications for our project. Some of the important fields required in the answer file are listed below.

CONFIG_DEFAULT_PASSWORD=pass

CONFIG_NTP_SERVERS=pool.ntp.org

CONFIG_CONTROLLER_HOST=172.16.10.10

CONFIG_COMPUTE_HOSTS=172.16.10.101,172.16.10.102

CONFIG_NETWORK_HOSTS=172.16.10.10

CONFIG_KEYSTONE_ADMIN_PW=pass

Essential Neutron Configurations:

CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex

CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan,flat

CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan

CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=1001:2000

CONFIG_NEUTRON_ML2_VNI_RANGES=1001:2000

CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=extnet:br-ex

CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth0

CONFIG_NEUTRON_OVS_EXTERNAL_PHYSNET=extnet

CONFIG_NEUTRON_OVN_BRIDGE_MAPPINGS=extnet:br-ex

CONFIG_NEUTRON_OVN_EXTERNAL_PHYSNET=extnet

CONFIG_NEUTRON_OVS_TUNNEL_IF=eth2

The rest of the parameters in the answer file were left at their default values. We defined these parameters as per our case study's requirements. Any further changes can be made by editing the answer file again.
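Such changes can also be scripted rather than edited by hand. The following is only a small sketch: the parameter name is taken from our answer file, and it assumes the file is named answer.txt in the current directory.

# point the external bridge mapping at the provider network name used in this case study
sed -i 's/^CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=.*/CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=extnet:br-ex/' answer.txt

# confirm that the value was changed
grep '^CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS' answer.txt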

2.4. Installing OpenStack using answer-file

After generating the answer file with all the parameters required for our project, we started installation of OpenStack using the answer.txt file with the following command:

 packstack --answer-file=answer.txt

When we executed this command on the controller node, PackStack initiated SSH sessions with the other hosts, i.e. Compute1 and Compute2, to install OpenStack using the answer file. As we had configured a password for root login, the SSH sessions asked for the root password of each node. Once the SSH sessions were established, PackStack installed OpenStack on all the nodes using the answer file.
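To avoid typing the root password repeatedly during deployment, key-based SSH authentication can be set up from the controller beforehand. A hedged sketch, assuming the hostnames defined earlier in /etc/hosts:

# generate a key pair on the controller (skip if one already exists)
ssh-keygen -t rsa -b 2048 -N "" -f ~/.ssh/id_rsa

# copy the public key to the compute nodes so PackStack can log in without password prompts
ssh-copy-id root@compute1
ssh-copy-id root@compute2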

2.5. After Installation Steps

After installation, we confirmed that all API endpoints had been created using the command below:

 openstack endpoint list

Figure 2.5.1 OpenStack Endpoint List

We made sure the Neutron service is running on both compute nodes and verified that the network agents are up with the following command:

 openstack network agent list

Figure 2.5.2 Neutron Agent List

We also verified that the compute service (Nova) is running with the following command:

 openstack compute service list

Figure 2.5.3 Compute Service List
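Note that the openstack commands above assume admin credentials have been loaded into the shell. PackStack writes a credentials file (typically keystonerc_admin) to the root user's home directory on the controller; a minimal sketch of loading it, assuming that default location:

# load the admin credentials generated by PackStack (path and file name may vary)
source ~/keystonerc_admin

# subsequent openstack commands now authenticate as the admin user
openstack endpoint list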

 

3. Creating Cloud Environment

After the installation, it is time to create the tenants. In our case study scenario we are required to create two tenants. Each tenant has its own internal network over which its instances communicate, and each tenant runs 3 VMs. The two tenants share an external network in order to reach the internet; the internal and external networks are connected by means of a virtual router.

3.1. Creating TenantA

We create Tenant A using the command below:

 openstack project create TenantA

3.2. Creating TenantB

Tenant B is created in our environment with the following command:

openstack project create TenantB

We need to give each tenant's user the admin role; this is done with the following commands:

openstack role add --user tenanta_admin --project TenantA admin

openstack role add --user tenantb_admin --project TenantB admin

 

3.3. Creating External Network:

In our case study it is important that our VMs can communicate with the internet and can be reached from outside; this is done through an external network. Both tenants share the external network in order to communicate with the internet. The parameters required for creating the external network are shown below:

neutron net-create External-Network --shared --provider:physical_network extnet --provider:network_type flat --router:external=True

A subnet corresponding to the provider network must also be created for the external network; this is done with the following command:

neutron subnet-create --name Public_Subnet --enable_dhcp=False --allocation_pool start=192.168.13.200,end=192.168.13.220 --gateway=192.168.13.1 External-Network 192.168.13.0/24

 

3.4. Creating Tenant Networks:

Each tenant will require an internal network which will be used for communication between the instances of the tenant. We also need to assign a subnet to the internal networks and DHCP needs to be enabled on both.

openstack network create --project TenantA --enable --internal --provider-network-type=vxlan TenantA_Network

openstack network create --project TenantB --enable --internal --provider-network-type=vxlan TenantB_Network

 

openstack subnet create --project TenantA --subnet-range 10.1.1.0/24 --allocation-pool start=10.1.1.100,end=10.1.1.200 --dhcp --gateway 10.1.1.1 --network TenantA_Network TenantA_Subnet

openstack subnet create --project TenantB --subnet-range 10.2.2.0/24 --allocation-pool start=10.2.2.100,end=10.2.2.200 --dhcp --gateway 10.2.2.1 --network TenantB_Network TenantB_Subnet

 

3.5. Creating Router:

It is important to create a router in our environment to provide communication between the internal networks and the external network; this is done with the commands below.

openstack router create --project TenantA TenantA_R1

openstack router create --project TenantB TenantB_R1

openstack router add subnet TenantA_R1 TenantA_Subnet

openstack router add subnet TenantB_R1 TenantB_Subnet

openstack router set --external-gateway External-Network TenantA_R1

openstack router set --external-gateway External-Network TenantB_R1

 

3.6. Adding OS Image To Glance:

Now we need to load the OS image we are using, "cirros", into Glance. We download the image file from the link below and then create the Glance image with the following commands:

wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

openstack image create "cirros" \
  --file cirros-0.3.5-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public

openstack image list

 

3.7. Creating Volume:

Our next step is to create volumes from the image we uploaded to the Glance service. Instances will use these volumes for booting; a volume can be created with the command below.

openstack volume create --project TenantA --image cirros --size 1 --availability-zone nova VolA1

 

3.8. Creating Security Groups:

By default, the security group feature blocks all ingress traffic, so in our case we create new security groups in order to allow incoming (ingress) traffic. groupA and groupB are the security groups created for our case; they are created with the following commands.

openstack security group create --project TenantA groupA

openstack security group rule create --project TenantA --remote-ip 0.0.0.0/0 --protocol any --ingress groupA

openstack security group create --project TenantB groupB

openstack security group rule create --project TenantB --remote-ip 0.0.0.0/0 --protocol any --ingress groupB

It is important to create a security group for both tenants. Note that allowing all ingress traffic like this is not appropriate for a production environment; the rules there depend entirely on the security policies in force, and a more restrictive example is sketched below.
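As a hedged illustration only (the group name, the management subnet and the allowed ports here are placeholders, not part of our case study), a production-style rule set might permit only SSH from a management subnet plus ICMP:

openstack security group create --project TenantA groupA-restricted
# allow SSH only from a hypothetical management subnet
openstack security group rule create --project TenantA --protocol tcp --dst-port 22 --remote-ip 192.168.13.0/24 --ingress groupA-restricted
# allow ICMP (ping) for troubleshooting
openstack security group rule create --project TenantA --protocol icmp --remote-ip 0.0.0.0/0 --ingress groupA-restricted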

3.9. Generating Key Pairs:

We also need to generate a key pair for each tenant in order to get SSH access to their instances after launching them. This is done with the following commands.

ssh-keygen -f id_rsa_tenantA -t rsa -b 2048 -N ""

ssh-keygen -f id_rsa_tenantB -t rsa -b 2048 -N ""

openstack keypair create --public-key id_rsa_tenantA.pub keyA

openstack keypair create --public-key id_rsa_tenantB.pub keyB

After generating the key pairs and completing all of the steps above, we were able to launch instances using the OpenStack dashboard (Horizon).
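We launched the instances through Horizon, but an equivalent launch could also be done from the CLI. The sketch below is only an illustration: the flavor name m1.tiny is an assumption (any suitable flavor would do), while the volume, network, security group and key names are the ones created above.

openstack server create --flavor m1.tiny \
  --volume VolA1 \
  --network TenantA_Network \
  --security-group groupA \
  --key-name keyA \
  VM_A1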

Performing all these steps in the same manner, we created 3 instances in each tenant. Instances in the same tenant can use the same key pair, but there should be at least one key pair per tenant. After creating the instances, our next step in the case study is to associate floating IPs with the VMs. Floating IPs allow instances to be reached from the external network. The virtual router is responsible for mapping each floating IP to a VM in the internal network; it uses DNAT to forward the packets to their destination (a hedged CLI sketch of this association follows). The screenshots below show the three instances created for TenantA and TenantB; floating IP 192.168.13.215 was also associated with instance VM_A1.
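We associated the floating IP through Horizon; a hedged sketch of the equivalent CLI steps is shown below. The address 192.168.13.215 is the one the external network's allocation pool handed out in our deployment; a fresh run would allocate a different address.

# allocate a floating IP from the shared external network
openstack floating ip create External-Network

# associate the allocated address with the instance
openstack server add floating ip VM_A1 192.168.13.215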

Figure 3.9.1 TenantA Instances

Figure 3.9.2 TenantB Instances

After creation of instances the final topology of both tenant networks is shown in below screenshots.

Figure 3.9.3 TenantA Network

Figure 3.9.4 TenantB Network

4. Analysis of Network Traffic Flow

In this case study, we have tested four different scenarios. The results of each scenario are analyzed, and the role of each component of the compute and network nodes is observed during the traffic flow. During the analysis, the important components of the compute and network nodes taken into consideration are:

  • Linux Bridge
  • OVS Integration Bridge
  • OVS Tunnel Bridge
  • Router Namespace


 

4.1 First Scenario:

In the first scenario, we have tested and analyzed the flow of traffic between two VMs of the same tenant on the same compute node. Both VMs, i.e. VM_A1 and VM_A2, are in the same subnet and reside on the same compute node, as shown in Figure 4.1.1.

Figure 4.1.1 Scenario 1

For this scenario, we generated a ping to the IP address of VM_A2, i.e. 10.1.1.148, from VM_A1, which has an IP address of 10.1.1.119. We assume that there is initially no flow installed in the flow table.

The steps involved in this scenario, from the ICMP Echo request up to the ICMP Echo reply, are:

  1. The VM_A1 instance forwards an ICMP Echo request packet to the virtual interface (TAP) of the Linux bridge. The role of the Linux bridge is to verify whether incoming packets satisfy the security group rules.
  2. After verification, the ICMP Echo request packet is forwarded to the OVS integration bridge (Br-int). Br-int then looks up whether it has a flow installed for the MAC address associated with the destination IP address. Since in our scenario there is no flow installed initially, Br-int sends an ARP request for the IP of VM_A2. VM_A2 receives the ARP request and answers with an ARP reply that includes its MAC address. Br-int then adds an entry for this destination to its flow table.
  3. After updating the flow table, Br-int determines that the destination instance resides on the same compute node and belongs to the same subnet.
  4. The packet is then forwarded to the Linux bridge that has a TAP interface connected to VM_A2.
  5. The Linux bridge again verifies whether any security rule applies to this packet. If no rule blocks it, the packet is delivered to the destination VM.

Figure 4.1.2 Protocol headers of the ICMP Request packet

Figure 4.1.3 Protocol headers of the ICMP Reply packet

 
Observation:

We have used Wireshark to analyze and observe the flow of traffic. In figures 4.1.2 and 4.1.3, it can be observed that the packet initiated from VM_A1 to VM_A2 was not routed out of the Compute1 node. The highlighted portion in the figures confirms that no extra headers were added: the packet was not encapsulated with outer headers to route it out of this compute node. If it had been routed out of the Compute1 node, the packet would have been encapsulated with VXLAN, UDP and IP headers.
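The same check can be made with tcpdump. The sketch below is hedged: the tap interface name is a placeholder (Neutron derives it from the port UUID) and ens35 is the overlay interface in our topology. Capturing on the compute node should show plain ICMP frames on the tap device and no VXLAN-encapsulated copies on the tunnel interface:

# plain ICMP between VM_A1 and VM_A2 on the local Linux bridge (tap name is hypothetical)
tcpdump -i tapXXXXXXXX -nn icmp

# no UDP/4789 (VXLAN) traffic should appear for this flow on the overlay interface
tcpdump -i ens35 -nn udp port 4789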


4.2 Second Scenario

In the second scenario, we have traced the route of a packet between VMs of the same tenant on different compute nodes. VM_A1 and VM_A3 are two instances of TenantA on the same subnet. VM_A1 resides on the Compute1 node while VM_A3 resides on the Compute2 node, as shown in figure 4.2.2. In this scenario, we analyze the flow of the ICMP Echo request and ICMP Echo reply packets between these instances.

Figure 4.2.2 Scenario 2

The steps involved from the ICMP Echo request up to the ICMP Echo reply are:

  1. VM_A1 forwards an ICMP Echo request packet to the virtual interface (TAP) of the Linux bridge. The Linux bridge verifies whether incoming packets satisfy the security rules.
  2. After verification, the ICMP Echo request packet is forwarded to the OVS integration bridge (Br-int). Br-int then looks up whether it has a flow installed for the MAC address associated with the destination IP address. If there is no flow installed initially, Br-int sends an ARP request for VM_A3. When Br-int does not receive a reply from any instance on Compute1, it adds an internal VLAN tag to the packet. If there are multiple tenants on different VLANs, this VLAN tag is used to differentiate the traffic of each tenant's VMs.
  3. After VLAN tagging, Br-int forwards the packet to the OVS tunnel bridge (Br-tun). Br-tun encapsulates the packet with a VXLAN header. As shown in figure 4.2.3, Br-tun encapsulates the packet using VNI number 1013.
  4. After the VXLAN header, the packet is encapsulated with a UDP header. As shown in figure 4.2.3, the source port is selected randomly; here UDP port 55238 was selected. The destination UDP port is always the IANA standard port for VXLAN, i.e. 4789.
  5. After the UDP header, the last header added to this packet is the outer IP header of the overlay network. In this IP header, the source IP address is that of Compute1 and the destination IP address is that of Compute2.
  6. The packet is then forwarded to the Compute2 node via the overlay network.
  7. At the Compute2 node, the physical interface for the overlay network forwards the packet to the OVS tunnel bridge (Br-tun).
  8. Br-tun decapsulates the packet and forwards it to Br-int.
  9. Br-int removes the VLAN tag and forwards the packet to the Linux bridge.
  10. The Linux bridge verifies the packet against the security rules. If no rule blocks it, the packet is forwarded to the destination instance VM_A3.

The ICMP Echo reply packet repeats the same steps in reverse.

Figure 4.2.3 UDP Header, Outer IP Header, VXLAN Header with VNI 1013

From figure 4.2.3 it can be observed that the original packet is encapsulated first with the VXLAN header, then with the UDP header and finally with the outer IP header of the overlay network.

Observation:

In this scenario, the ICMP Echo request and reply packets were analyzed on Compute1 using Wireshark. We observed that the packets were encapsulated and then routed from one compute node to the other through the VXLAN tunnel.
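The capture itself can be reproduced with tcpdump and opened in Wireshark afterwards. A hedged sketch, assuming ens35 is the overlay/tunnel interface on Compute1 as in our topology:

# capture VXLAN-encapsulated traffic on the overlay interface and save it for Wireshark
tcpdump -i ens35 -nn udp port 4789 -w vxlan_scenario2.pcap

# quick on-screen check; tcpdump decodes the inner packet for traffic on the VXLAN port
tcpdump -i ens35 -nn -v udp port 4789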

4.3 Third Scenario

In the third scenario we analyze and observe the traffic flow from the instance VM_A1 to the internet. We generated a ping to the IP address of the Google DNS server, 8.8.8.8, from VM_A1, which resides on the Compute1 node. In this scenario we observe how the ICMP Echo request packet of VM_A1 is transmitted from the Compute1 node to the internet through the Network node. To forward the packet to the external network, the virtual router on the Network node performs SNAT (Source Network Address Translation). Figure 4.3.1 shows an overview of our topology.

Figure 4.3.1 TenantA Network

Figure 4.3.2 shows the OpenStack nodes connectivity for this scenario.

Figure 4.3.2 Scenario 3

The steps involved in the transmission of the ICMP Echo request from VM_A1 to the Google address are described below. The first six steps are similar to those explained in the second scenario (4.2).

The steps from the ICMP Echo request up to the ICMP Echo reply are:

  1. VM_A1 forwards an ICMP Echo request packet to the virtual interface (TAP) of the Linux bridge. The Linux bridge verifies whether incoming packets satisfy the security rules.
  2. After verification, the ICMP Echo request packet is forwarded to the OVS integration bridge (Br-int). Br-int then looks up whether it has a flow installed for the MAC address associated with the destination IP address. If there is no flow installed initially, Br-int sends an ARP request for 8.8.8.8. When Br-int does not receive a reply from any instance on Compute1, it adds an internal VLAN tag to the packet. If there are multiple tenants on different VLANs, this VLAN tag is used to differentiate the traffic of each tenant's VMs.
  3. After VLAN tagging, Br-int forwards the packet to the OVS tunnel bridge (Br-tun). Br-tun encapsulates the packet with a VXLAN header. As shown in figure 4.3.3, Br-tun encapsulates the packet using VNI number 1013.
  4. After the VXLAN header, the packet is encapsulated with a UDP header. As shown in figure 4.3.3, the source port is selected randomly; here UDP port 37695 was selected. The destination UDP port is always the IANA standard port, i.e. 4789.
  5. After the UDP header, the last header added to this packet is the outer IP header. In this IP header, the source IP address is that of Compute1 and the destination IP address is that of the Network node.
  6. The interface of Compute1 connected to the overlay network then forwards this packet to the Network node through the VXLAN tunnel (1013).
  7. When the packet is received at the interface of the Network node, it is forwarded to Br-tun, where it is decapsulated. Br-tun then forwards it to Br-int, and Br-int forwards it to the router namespace.
  8. The router namespace has two interfaces, "qr" and "qg". The "qr" interface is connected to the internal network (10.1.1.0/24) and has the IP address 10.1.1.1. The "qg" interface is connected to the external network and has the IP address 192.168.13.214.
  9. The router performs SNAT and translates the source IP address of the packet to the external network IP address, i.e. 192.168.13.214. The router updates its NAT table and then forwards the packet to Br-int.
  10. Br-int forwards it to the interface connected to the external network, where it is forwarded to the next hop and ultimately reaches the destination 8.8.8.8.

The ICMP Echo reply packet takes the reverse path:

  1. When the ICMP Echo reply packet arrives at the physical interface of the Network node, it is forwarded to Br-int.
  2. Br-int forwards the packet to the router, which looks up its NAT table and translates the destination address of the packet back to the internal network IP of VM_A1.
  3. The router forwards it back to Br-int, and Br-int forwards it to Br-tun, where the ICMP Echo reply packet is encapsulated with VXLAN, UDP and IP headers. The encapsulated packet is forwarded to the Compute1 node through the VXLAN tunnel.
  4. The interface of the Compute1 node receives the packet and forwards it to Br-tun. Br-tun decapsulates the IP, UDP and VXLAN headers and forwards the packet to Br-int.
  5. Br-int removes the VLAN tag and forwards the packet to the Linux bridge.
  6. The Linux bridge verifies the packet against the security rules. If no rule blocks it, the packet is forwarded to the destination instance VM_A1.

Figure 4.3.3 Packet Encapsulated, UDP Port, VNI Number, ICMP Type 8

Observation:

In this scenario, the instance of TenantA on the Compute1 node generated a ping to the Google DNS server at 8.8.8.8. The ICMP Echo request and reply packets were analyzed on the Compute1 node using Wireshark. We observed that the packets were encapsulated and then routed from the compute node to the network node through the VXLAN tunnel. From figure 4.3.3 it can be observed that the original packet was encapsulated with the VXLAN, UDP and IP headers of the overlay network. The packet was forwarded to the Network node, where the router performed SNAT so that it could reach the destination via the external network.
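The SNAT behaviour can also be confirmed directly on the Network node by looking inside the router namespace. The sketch below is hedged: Neutron names the namespace qrouter-&lt;router-uuid&gt;, and the UUID shown is a placeholder that has to be looked up first.

# list the network namespaces created by Neutron on the network node
ip netns list

# inspect the NAT rules inside the tenant router's namespace (UUID is a placeholder)
ip netns exec qrouter-<router-uuid> iptables -t nat -L -n -v

# show the qr- and qg- interfaces and their addresses
ip netns exec qrouter-<router-uuid> ip addr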

4.4 Fourth Scenario

In the fourth scenario, we analyze the traffic flow when a network service running on an instance of TenantA is accessed from the external network. To observe the results, we started an SSH session to VM_A1 from an external network.

Figure 4.4.1 Scenario 4

The steps involved in establishing the SSH session from the external network to the VM_A1 instance are:

  1. To establish the SSH connection to VM_A1, we used a workstation connected to the same external network. We used the floating IP assigned to VM_A1, i.e. 192.168.13.215, to establish the SSH connection. The floating IP address is held by the virtual router for TenantA on the Network node, so the SSH connection request is received by the Network node, as shown in figure 4.4.2. In this packet the source port is random while the destination port is the IANA standard TCP port 22.
  2. At the Network node, the physical interface connected to the external network (ens32) receives the packet and forwards it to the OVS provider bridge (br-ex).
  3. Br-ex forwards the packet to the OVS integration bridge (Br-int).
  4. Br-int forwards the packet to the router namespace. The router performs DNAT and translates the destination IP address of the packet. The destination IP address of the incoming packet is the floating IP address, i.e. 192.168.13.215, assigned to VM_A1 on the Compute1 node, as shown in figure 4.4.3. After DNAT, the destination of the packet becomes the internal IP address of VM_A1, i.e. 10.1.1.119. The packet is then forwarded back to Br-int.
  5. Br-int inserts the VLAN tag and forwards the packet to Br-tun, where it is encapsulated with VXLAN, UDP and IP headers. The encapsulated packet is forwarded to the Compute1 node through the VXLAN tunnel.
  6. The interface of the Compute1 node receives the packet and forwards it to Br-tun. Br-tun decapsulates the IP, UDP and VXLAN headers and forwards the packet to Br-int.
  7. Br-int removes the VLAN tag and forwards the packet to the Linux bridge.
  8. The Linux bridge verifies the packet against the security rules. If no rule blocks it, the packet is forwarded to the destination instance VM_A1.

Figure 4.4.2 Packet headers for SSH request received at Network Node

Figure 4.4.3 Packet headers for the SSH packet sent from Network node to Compute1 node

Observation:

In this scenario, an SSH session is established from the external network to VM_A1 on the Compute1 node. We observed that when the packet arrives at the physical interface of the Network node, the router performs DNAT. With DNAT, the router translates the floating IP address configured for the VM_A1 instance: the destination address of the packet is replaced with the internal IP address of VM_A1. The translated packet is then routed to the next hop towards the destination. From the Wireshark packet dump we also observed that the packet is encapsulated and then forwarded through the VXLAN tunnel from the Network node to the Compute1 node, as shown in figure 4.4.3.
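For completeness, the connection itself was made with the tenant key pair generated in section 3.9. A hedged sketch from the external workstation, assuming the default user name of the CirrOS image (cirros):

# connect to VM_A1 via its floating IP using TenantA's private key
ssh -i id_rsa_tenantA cirros@192.168.13.215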

References

 
