Conventional Networks Run Routing Algorithms: Computer Science Essay

Conventional networks run routing algorithms on special-purpose devices to provide access control over data, routing protocols, monitoring of traffic flows, and discovery of the network topology, i.e. how the devices are arranged. These rules are embedded in hardware in the form of Application-Specific Integrated Circuits (ASICs). A simple example is packet forwarding. When a packet is received by a switch in a conventional network, the switch uses a set of rules embedded in its firmware to decide where and how to forward that specific packet. Packets with the same destination are normally treated the same way and directed along the same path, irrespective of the type of data they carry. Some expensive switches can differentiate between different types of packets and handle them accordingly, but in general this rigidity severely limits the ability of conventional networks to cope with the ever-growing traffic of modern networks. The increasing demands for scalability, reliability, security and speed can hinder the performance of conventional networks, which, because of their hardware implementation, lack the flexibility to handle different types of data differently.
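
The limitation described above can be made concrete with a small model of destination-based forwarding: the egress port depends only on the packet's destination prefix, so a video stream and a bulk transfer to the same subnet take the same path. The table contents below are hypothetical, purely for illustration.

```python
# Minimal sketch of destination-based forwarding in a conventional switch.
# Real switches hold such rules in ASICs; the table here is hypothetical.
import ipaddress

forwarding_table = {
    "10.0.1.0/24": "port1",
    "10.0.2.0/24": "port2",
}

def forward(dst_ip: str) -> str:
    """Pick the egress port by destination alone: a video stream and a
    backup transfer to the same subnet take the same path."""
    addr = ipaddress.ip_address(dst_ip)
    # Longest-prefix match over the configured prefixes.
    best = max(
        (net for net in forwarding_table
         if addr in ipaddress.ip_network(net)),
        key=lambda net: ipaddress.ip_network(net).prefixlen,
        default=None,
    )
    return forwarding_table.get(best, "drop")

print(forward("10.0.1.7"))  # -> port1, regardless of payload type
```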

A possible solution is to implement data-handling rules in software rather than hardware. This gives administrators more flexibility in controlling how network traffic is handled and can enormously improve the performance of the network. One such concept is called Software-Defined Networking (SDN). In this approach the control over data handling is decoupled from the hardware and implemented in a piece of software called the controller. SDN is an innovative idea that moves control in the networking stack from hardware into software to improve the performance of networks in terms of their data, control and management planes. SDN is gaining popularity for its potential use in data centers for Big Data, cloud computing and workload-optimized systems.

In a software-defined network the administrator has the power to control the flow of data and change the properties of the switching devices from a central location, using a software control application, without having to interact with each device individually. The administrator can change any switching device's rule table when needed, prioritizing, de-prioritizing or even blocking certain types of packets, with different levels of control over the traffic. This allows more efficient control over network traffic and helps in managing traffic loads. The approach is especially helpful in applications like multi-tenant cloud computing architectures, as it gives administrators reliable and efficient control over the traffic load. It also lets administrators use less expensive commodity switches while retaining more control over traffic flow than in a conventional network. SDN has even been called the "Cisco killer", since it frees designers to use multi-vendor hardware and ASICs for the switching fabric. One of the most important and widely used standards for implementing SDN is OpenFlow, which lets network administrators remotely control the routing tables in switches and thereby the routing of traffic in the network.
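
As a concrete illustration of this kind of centralized control, the sketch below shows how a controller application might push a high-priority "block telnet" rule to every switch that connects. It is written against the Ryu OpenFlow controller framework, which is an assumption of this example rather than something the essay specifies; the match values and priority are illustrative.

```python
# Sketch: a Ryu controller app that pushes a high-priority "block telnet"
# rule to each switch at connection time. Rule values are illustrative.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class BlockTelnet(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        # Match IPv4/TCP packets destined to port 23 (telnet).
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, tcp_dst=23)
        # An empty instruction list means matching packets are dropped.
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=100,
                                      match=match, instructions=[]))
```

Because the rule is installed from one central application, the same policy reaches every switch without per-device configuration, which is exactly the operational advantage described above.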

SDNs enable data center operators to control specific data flows so that data moves through the network more efficiently, using paths with more available bandwidth or fewer hops. Some of the main advantages of SDNs are listed below:

Speed and Intelligence: SDNs provide a new way to improve traffic routing and speed by optimizing workload distribution and enabling end devices to be more intelligent.

Patterns that Connect: Remote, central, software-based control gives the administrator the ability to alter network connectivity and services based on workload-optimized patterns, enabling instant configuration and rapid provisioning of application-aware networks.

Multi-tenancy: SDN lets administrators expand the concept of software-defined networking across the data center and the cloud, so multiple groups of users can safely share resources and data center operators can predictably scale network resources.

Virtual Application Networks: A Distributed Overlay Virtual Network (DOVE) enables administrators to implement Virtual Application Networks (VANs) with network services that are transparent for cross-data center orchestration, automation and mobility of virtualized workloads.

OpenFlow:

One of the most important protocols used by SDNs is OpenFlow, the first open standard interface enabling SDN implementations in both hardware- and software-based solutions. The OpenFlow standard lets researchers design experimental protocols for network analysis and is now available as an extra feature in many commercially available routers, access points and switches. This enables researchers to develop new protocols on existing hardware switches without requiring vendors to expose the internal workings of their devices.

In a conventional switch or router the data path (fast packet-forwarding algorithms) and the control path (routing algorithms) reside in the same device, whereas OpenFlow decouples these functionalities. A separate controller, typically running on a standard server, makes the routing decisions, while the packet-forwarding algorithms remain part of the routing device. The OpenFlow switch and controller communicate via the OpenFlow protocol, which defines messages such as packet-received, send-packet-out, modify-forwarding-table, and get-stats.

The data path of an OpenFlow switch presents a clean flow-table abstraction; each flow-table entry contains a set of packet fields to match and an action (such as send-out-port, modify-field, or drop). When an OpenFlow switch receives a packet it has never seen before, for which it has no matching flow entries, it sends the packet to the controller. The controller then decides how to handle this packet: it can drop it, or it can add a flow entry directing the switch on how to forward similar packets in the future.
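
This packet-in cycle is easy to see in code. The sketch below is a minimal learning-switch application for the Ryu controller framework (an assumption of this example; the essay does not prescribe a particular controller): unknown destinations are flooded, and once a destination has been learned a flow entry is installed so future packets bypass the controller entirely.

```python
# Sketch of the packet-in handling cycle (Ryu, OpenFlow 1.3).
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3
from ryu.lib.packet import packet, ethernet

class LearningSwitch(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.mac_to_port = {}  # dpid -> {mac: port}

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def on_packet_in(self, ev):
        msg, dp = ev.msg, ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        in_port = msg.match['in_port']
        eth = packet.Packet(msg.data).get_protocol(ethernet.ethernet)

        # Learn the source MAC, then look up the destination.
        table = self.mac_to_port.setdefault(dp.id, {})
        table[eth.src] = in_port
        out_port = table.get(eth.dst, ofp.OFPP_FLOOD)
        actions = [parser.OFPActionOutput(out_port)]

        if out_port != ofp.OFPP_FLOOD:
            # Install a flow entry so future packets bypass the controller.
            match = parser.OFPMatch(in_port=in_port, eth_dst=eth.dst)
            inst = [parser.OFPInstructionActions(
                ofp.OFPIT_APPLY_ACTIONS, actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=1,
                                          match=match, instructions=inst))

        # Forward (or flood) the packet that triggered the event.
        dp.send_msg(parser.OFPPacketOut(
            datapath=dp, buffer_id=msg.buffer_id, in_port=in_port,
            actions=actions, data=msg.data))
```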

Given below is a summary of the papers I have read so far:

SDN makes it possible to program multiple switches simultaneously, but it is still a distributed system and therefore suffers from the usual complications, such as dropped packets and delayed control messages. Currently used SDN platforms such as NOX and Beacon support this kind of programming, but only at a low level of abstraction, which makes writing distributed control programs hard and error-prone.

Figure: Difference between the network architectures of traditional networks and SDN.

Language Abstractions for Software-Defined Networks:

Software-Defined Networks (SDNs) support an event-driven programming model in which an application reacts to events (e.g. packets for which there are no forwarding rules, or topology changes) using a set of rules implemented in the switch or routing device. This can cause complications. One problem is that control is split into two parts: the controller running the program, and the set of rules installed on the routing devices. Another is that the programmer is forced to worry about low-level details, including the switch hardware itself. NetCore programmers instead write specifications that capture the intended forwarding behavior of the network, rather than programs dealing with low-level details such as events and forwarding rules. A compiler transforms these specifications into code for both the controller and the switches, handling the interaction between them. A prominent feature of NetCore is that it allows network rules and policies to be described as simple specifications that cannot be implemented or realized directly on the switches. Another important fact about NetCore is that it has a clear formal semantics that provides a basis for reasoning about programs.
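
NetCore itself is a dedicated policy language, but its declarative, composable flavor can be imitated in a few lines of plain Python. The model below is a hypothetical sketch (it is not NetCore's actual syntax or semantics): predicates classify packets, policies map packets to actions, and policies compose in parallel.

```python
# Hypothetical mini-model of NetCore-style declarative policies.
# Predicates classify packets; policies map packets to actions and compose.
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    tcp_dst: int

def match(**fields):
    """Predicate: True if every named header field has the given value."""
    return lambda pkt: all(getattr(pkt, k) == v for k, v in fields.items())

def policy(pred, action):
    """Policy: apply `action` to packets satisfying `pred`, else no-op."""
    return lambda pkt: [action] if pred(pkt) else []

def union(*policies):
    """Parallel composition: the union of all the policies' actions."""
    return lambda pkt: [a for p in policies for a in p(pkt)]

# The programmer states *what* should happen, not which switch rules do it;
# a compiler would translate such specifications into switch-level rules.
web = policy(match(tcp_dst=80), "fwd(port=1)")
drop_guest = policy(match(src="10.0.99.1"), "drop")
net = union(web, drop_guest)

print(net(Packet(src="10.0.0.2", dst="10.0.1.5", tcp_dst=80)))  # ['fwd(port=1)']
```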

Network Query Abstractions:

In SDNs each switch stores counters for its forwarding rules, recording the total number of packets and bytes processed using those rules. For traffic monitoring, the controller can read the counters associated with different sets of forwarding rules. This forces programmers to worry about the fine details of the implementation on the switches, which is tedious and complicates programs, so an added level of abstraction helps. To support applications whose correct operation involves a monitoring component, Frenetic includes an embedded query language that provides effective abstractions for reading network state. The language is similar to SQL and includes constructs for selecting, filtering, splitting, merging and aggregating the streams of packets flowing through the network. Another special feature of the language is that queries can be composed with each other and with forwarding policies. The compiler and run-time system make this possible: the compiler produces the control messages needed to query and tabulate the counters on the switches.
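
The exact Frenetic query syntax is not reproduced here; instead, the hypothetical mini-implementation below captures the SQL-like flavor of selecting, filtering, grouping and aggregating a stream of packet records. Field names and values are invented for illustration.

```python
# Hypothetical sketch of a Frenetic-style query over a packet stream:
# filter, group, and aggregate, much as SQL would.
from collections import defaultdict

packets = [  # illustrative records a run-time system might expose
    {"srcip": "10.0.0.1", "dstport": 80, "bytes": 1500},
    {"srcip": "10.0.0.2", "dstport": 22, "bytes": 400},
    {"srcip": "10.0.0.1", "dstport": 80, "bytes": 900},
]

def query(stream, where, group_by, aggregate):
    """Filter the stream, split it into groups, and fold each group."""
    groups = defaultdict(list)
    for pkt in stream:
        if where(pkt):
            groups[pkt[group_by]].append(pkt)
    return {key: aggregate(pkts) for key, pkts in groups.items()}

# "Bytes of web traffic per source", roughly:
# SELECT sum(bytes) WHERE dstport = 80 GROUP BY srcip.
web_bytes = query(packets,
                  where=lambda p: p["dstport"] == 80,
                  group_by="srcip",
                  aggregate=lambda ps: sum(p["bytes"] for p in ps))
print(web_bytes)  # {'10.0.0.1': 2400}
```

In the real system the run-time answers such queries by polling switch counters and tabulating the results, so the programmer never touches the per-rule counters directly.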

Consistent Update Abstractions:

Since SDNs are event-driven networks, programs sometimes need to change the network policy from one state to another, for example because of changes in the network topology or failures in the network. An ideal solution would update all network switches atomically, but in practice this is impossible, so an abstraction is needed for propagating changes from one device to another. One example is per-packet consistency, which ensures that each packet traversing the network is processed by exactly one version of the policy, never a combination of the old and the new. This preserves all properties that can be expressed in terms of individual packets and the paths they take through the network, a class of properties that subsumes important structural invariants such as basic connectivity and loop-freedom, as well as access control policies. Going a step further, per-flow consistency ensures that sets of related packets are processed with the same policy. Frenetic provides an ideal platform for exploring such abstractions, as the compiler and run-time system can perform the tedious bookkeeping involved in implementing network updates.
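
The mechanism usually described for realizing per-packet consistency is version stamping combined with a two-phase update: ingress switches tag each packet with the policy version under which it entered, and internal rules match on that tag, so in-flight packets finish their journey under the old policy. The sketch below models this with toy switch objects; all names are hypothetical.

```python
# Hypothetical sketch of a versioned ("two-phase") update giving
# per-packet consistency: each packet is handled end-to-end by exactly
# one policy version, never a mix of old and new.

class Switch:
    def __init__(self, name, is_ingress=False):
        self.name = name
        self.is_ingress = is_ingress
        self.rules = {}        # version -> list of rules
        self.ingress_tag = 1   # version stamped onto arriving packets

    def add_rules(self, version, rules):
        self.rules[version] = rules

def two_phase_update(switches, new_version, new_rules):
    # Phase 1: install new-version rules everywhere; old rules stay live.
    for sw in switches:
        sw.add_rules(new_version, new_rules[sw.name])
    # Phase 2: flip the ingress stamp. In-flight packets keep their old
    # tag and keep matching old-version rules until they drain.
    for sw in switches:
        if sw.is_ingress:
            sw.ingress_tag = new_version
    # Old-version rules can be garbage-collected once traffic drains.

s1, s2 = Switch("s1", is_ingress=True), Switch("s2")
two_phase_update([s1, s2], new_version=2,
                 new_rules={"s1": ["fwd(2)"], "s2": ["fwd(1)"]})
print(s1.ingress_tag, s2.rules)  # 2 {2: ['fwd(1)']}
```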

Languages for Software-Defined Networks

Current computer networks perform a range of tasks, from routing and traffic monitoring to access control and server load balancing. The heterogeneous nature of a network (a combination of routers, switches, firewalls, middleboxes, etc.) makes its management complicated and cumbersome. SDN offers a simpler path to network management by providing a clean interface between the devices and the controlling software. The OpenFlow standard protocol, widely used to program devices in SDNs, provides a low-level API that mimics the underlying switch hardware, so a higher-level abstraction for creating SDN applications is required. The Frenetic project designs simple and intuitive abstractions for programming the three main stages of network management: (i) monitoring network traffic, (ii) specifying and composing packet-forwarding policies, and (iii) updating policies in a consistent way. Overall, these abstractions make it dramatically easier for programmers to write and reason about SDN applications.

The three main parts of the Frenetic project are given below:

Querying network state: Frenetic offers a high-level query language for subscribing to streams of information about network state, including traffic statistics and topology changes. The run-time system handles the details of polling switch counters, aggregating statistics, and responding to events.

Expressing policies: Frenetic offers a high-level policy language that makes it easy for programs to specify the packet-forwarding behavior of the network. Different modules may be responsible for (say) topology discovery, routing, load balancing, and access control. Individual modules register these policies with the run-time system, which automatically composes, compiles, and optimizes them with programmer-specified queries.

Reconfiguring the network: Frenetic offers abstractions for updating the global configuration of the network. These abstractions allow a programmer to reconfigure the network without having to manually install and uninstall packet-forwarding rules on individual switches, a tedious and error-prone process. The run-time system ensures that during an update, all packets (or flows) are processed with the old policy or the new policy, and never a mixture of the two. This guarantee ensures that important invariants such as loop freedom, connectivity, and access control are never violated during periods of transition between policies.

Final words on the topic: the Frenetic language is a collection of simple and powerful abstractions that let programmers write control applications for the switches in SDNs. These abstractions are implemented by a compiler and run-time system that ensure efficient execution of the code on the network devices. The work focuses on the three stages of managing a network: monitoring network state, computing new policies, and reconfiguring the network.

AutoSlice: Automated and Scalable Slicing for Software-Defined Networks

This paper presents a new virtualization layer that automates the deployment and operation of SDN slices on top of shared network infrastructures. AutoSlice enables substrate providers to resell their SDN infrastructure to multiple tenants while minimizing operator intervention. At the same time, tenants are given the means to lease programmable network slices, enabling the deployment of arbitrary services based on SDN principles.

Network virtualization offers a viable solution for the concurrent deployment and operation of isolated network slices on top of shared network infrastructures. The emerging SDN paradigm facilitates the deployment of network services by combining programmable switching hardware, such as OpenFlow, with centralized control and network-wide visibility. These salient properties of SDNs enable network tenants to take control of their slices, implementing custom forwarding decisions, security policies and access control as needed.

A fundamental building block for SDN virtualization is FlowVisor, which enables slicing of the flow table in OpenFlow switches by partitioning it into so-called flowspaces. As a result, switches can be manipulated concurrently by multiple controllers. Nevertheless, the instantiation of an entire virtual SDN (vSDN) topology is non-trivial, as it involves numerous operations, such as mapping vSDN topologies, installing auxiliary flow entries for tunneling, and enforcing flow-table isolation. Since these operations require considerable planning and management resources, the authors develop a transparent virtualization layer, or SDN hypervisor, which automates the deployment and operation of arbitrary vSDN topologies with minimal intervention by the substrate operator. In contrast to previous SDN virtualization efforts, they focus on the scalability aspects of the hypervisor design. Furthermore, AutoSlice optimizes resource utilization and mitigates flow-table limitations by monitoring flow-level traffic statistics.

A network infrastructure provider is considered which offers vSDN topologies to a number of tenants. Each tenant's vSDN contains a set of nodes and links with network requirements such as link bandwidth, location and switching capacity. It is assumed that each tenant controls OpenFlow switches whose flow tables can be partitioned into a number of logical segments. A distributed hypervisor architecture is proposed, capable of handling the large number of flow tables belonging to different tenants. Its two main components are the management module (MM) and multiple controller proxies (CPXs), which are used to distribute the control load evenly across tenants. When a request is received, the MM maps the vSDN topology onto the resources available in each SDN domain and assigns a subset of logical resources to each CPX. Subsequently, every CPX instantiates the allocated topology segment by installing infrastructure flow entries in its domain, which unambiguously bind traffic to a specific logical context using tagging. Since isolation between tenants is essential, each CPX performs policy control on flow-table accesses and ensures that the resulting flow entries map onto non-overlapping flowspaces. All control communication between a tenant's controller and the forwarding plane is redirected through the CPX responsible for the corresponding switch. Before installing a tenant's flow entry on a switch, the proxy rewrites the control message so that all references to virtual resources are replaced by the corresponding physical entities, and appropriate traffic-tagging actions are appended, as sketched below. The state of each virtual node in a given SDN domain is maintained solely by the corresponding proxy. Consequently, each CPX can independently migrate virtual resources (e.g., nodes, links) within its domain to optimize intra-domain resource utilization, while global optimizations are coordinated by the MM. This transparent translation of control messages lets tenants install arbitrary packet-processing rules within their assigned SDN slice without disturbing concurrent users.
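
The proxy's rewriting step might look like the following hypothetical sketch, in which a tenant's flow entry referring to virtual ports is translated to physical ports and a tenant-identifying VLAN tag is appended for isolation. The maps and message format are invented for illustration; the summary above does not specify AutoSlice's internals at this level of detail.

```python
# Hypothetical sketch of the CPX rewriting step: a tenant's flow entry
# referring to virtual resources is translated to physical ones, and a
# tenant-identifying tag (here a VLAN id) is appended for isolation.

VPORT_TO_PHYS = {("tenantA", 1): 7, ("tenantA", 2): 9}  # illustrative map
TENANT_TAG = {"tenantA": 100, "tenantB": 200}           # VLAN per tenant

def rewrite_flow_mod(tenant, flow_mod):
    phys = dict(flow_mod)
    # Replace virtual in/out ports with the substrate's physical ports.
    phys["in_port"] = VPORT_TO_PHYS[(tenant, flow_mod["in_port"])]
    phys["out_port"] = VPORT_TO_PHYS[(tenant, flow_mod["out_port"])]
    # Scope the match to this tenant's flowspace and keep traffic tagged.
    phys["match_vlan"] = TENANT_TAG[tenant]
    phys["actions"] = ["push_vlan:%d" % TENANT_TAG[tenant],
                       "output:%d" % phys["out_port"]]
    return phys

print(rewrite_flow_mod("tenantA", {"in_port": 1, "out_port": 2}))
```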

FORWARDING PLANE

In a multi-tenant environment a large number of logical flow tables must be mapped onto the memory of a single substrate switch. The CPX ensures the isolation of all virtual flow tables and also guarantees that all packet-processing actions are applied in the correct sequence in case a connected group of virtual nodes is mapped to the same switch (e.g., using a loopback interface). The scalability of the platform would be severely restricted by the limited flow-table size of OpenFlow switches, typically on the order of several thousand entries. To overcome this limitation, so-called auxiliary software datapaths (ASDs) are deployed in the substrate network. Each SDN domain is assigned an ASD consisting of a software switch running on a commodity server. In contrast to an OpenFlow switch, the main memory available in a server is sufficient to store a full copy of all logical flow tables required by the corresponding ASD. However, despite recent advances in software-based datapath architectures and commodity servers, the divide between commodity and specialized hardware remains, with the latter offering at least an order of magnitude larger switching capacity.

To circumvent these limitations, AutoSlice exploits the Zipf property of aggregate traffic, i.e. the fact that a small fraction of flows accounts for most of the traffic volume. The approach uses ASDs to handle low-volume traffic flows (mice) while caching a small number of high-volume flows (elephants) in the dedicated switches. To this end, a set of low-priority infrastructure entries routes traffic from the domain edge to the ASD whenever no high-priority flow entry has been cached at the domain switch. The CPX selects the flow entries to be cached so that most of the traffic is offloaded from the ASD, and it ensures that cached flow entries do not alter the semantic integrity of the flow-table rules by re-encoding flow entries as needed. The varying traffic demands and the diverse forwarding rules deployed by tenants in their vSDNs introduce additional complexity into these caching decisions.
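
The caching decision itself reduces to choosing which flows to promote into the scarce hardware table. A minimal sketch, with invented flow statistics, shows how strongly the Zipf property works in the operator's favor: a couple of cached elephants offload almost all of the bytes.

```python
# Minimal sketch of Zipf-aware flow caching: keep the few heavy "elephant"
# flows in the hardware switch; the long tail of "mice" stays in the ASD.

flow_bytes = {  # illustrative per-flow byte counters collected by the CPX
    "flowA": 9_000_000, "flowB": 7_500_000, "flowC": 120_000,
    "flowD": 80_000, "flowE": 15_000, "flowF": 9_000,
}
HW_TABLE_SIZE = 2  # hardware flow-table slots available for caching

def flows_to_cache(counters, slots):
    """Offload the most traffic by caching the top `slots` flows."""
    ranked = sorted(counters, key=counters.get, reverse=True)
    return ranked[:slots]

cached = flows_to_cache(flow_bytes, HW_TABLE_SIZE)
offloaded = sum(flow_bytes[f] for f in cached) / sum(flow_bytes.values())
print(cached, f"{offloaded:.0%} of bytes offloaded")  # ~99% from 2 flows
```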

Quantitatively Evaluating (and Optimizing) Software-Defined Networks

Building SDNs is becoming more prevalent, but the questions of how to make them more efficient, and how to optimize across all the possible design choices and their tradeoffs, are yet to be answered. A quantitative approach to evaluating the performance of SDNs is required, and is discussed here. In an SDN, the controllers manage the packet-forwarding task. There have been a number of practical applications that use SDNs, and a number of commercial products have been built on them, but the question remains how to quantify their performance and choose the best design; it is not yet known which tradeoffs are the right ones to look for.

The most important concerns about SDNs are latency, scalability and availability, and these issues must be addressed before deciding whether an SDN is the right choice. Moving from a traditional network to an SDN raises worries about cost, performance and reliability; these questions are still unanswered, and answering them is necessary to persuade operators to move towards SDNs. The paper summarized here tries to answer them. The analysis might instead find that the worries are unjustified, and that correct decisions made immediately with consistent state not only kill route flaps but also reduce delays. The ideal outcome of this analysis would be not just a comparison methodology, or even optimization methods, but guidelines for the techniques that will yield the best control network, given a specific topology and specific goals.

Why it is hard to determine which SDN design is a good one

A number of factors make this decision much more complicated. Some of them are listed below.

Topologies vary: Networks differ in their number of nodes, edges, distances between nodes, and connectivity. Simply obtaining a large set of reliable network graphs is itself a research area.

Finding relevant metrics: What metrics are most relevant to operators? For example, is guaranteeing delay bounds more important than minimizing the average across the set of nodes?

Combining metrics: The right solution is likely to be a combination of metrics. How can we specify a combination of metrics, or multiple constraints on a "good enough" solution?

Computational complexity: Optimizing every metric we've considered is an NP-Hard problem, including latency, availability, fairness of state distribution, and control channel congestion.

Design space size: Spreading an application across multiple nodes for scalability and fault tolerance presents many options, including the number of controllers, placement of controllers, number of state replicas, method of distributing processing, and even how many controllers each switch should connect to.

Some of these factors can be addressed by repeating the analysis on a large number of topologies to uncover underlying trends; others by approximation algorithms; and the rest through simplified models of distributed-systems communication.

A MOTIVATING EXAMPLE:

Two important questions are addressed in this paper. These questions are:

How many controllers are needed, and where should they be placed in the network topology?

Every aspect of a controller-based architecture is influenced by the placement of the controllers in the network. One major aspect of a wide-area network that is strongly influenced by controller placement is propagation latency (this matters less in the data center): specifically, placement bounds the control reactions that can be executed at reasonable speed and stability with a remote controller. For simplicity, only partitioned controllers are considered, whose delays equal the node-to-controller lower bounds, ignoring any delays added by controller-to-controller coordination.

The paper uses the example of Internet2, a 34-node nationwide production network. Figure 1 shows controller placements for k = 1 to 5; the higher density of nodes in the northeast relative to the west leads to metric-specific optimal location combinations. For example, to minimize average latency for k = 1, the controller should be placed in Chicago, balancing the high density of east-coast cities against the lower density of cities in the west. To minimize worst-case latency for k = 1, the controller should instead go in Kansas City, which is closest to the geographic center of the United States. CDFs showing the full set of controller placements, for each value of k, are shown in Figure 2. This example demonstrates that even simple variations of a metric can yield different placements, with their own tradeoffs. Initial results show that (1) random placement is a poor strategy: the difference between a random placement and a carefully optimized one is often a factor of 2, and in some cases much larger; (2) surprisingly, one controller is often enough to meet control response deadlines, such as restoring a link in a SONET ring; and (3) most (75%) of the topologies show tradeoffs between metrics, with a long tail in which some metrics are off by more than a factor of 2. A small brute-force sketch of this placement optimization is given after the figure captions below.

Figure: Optimal placements for 1 and 5 controllers in the Internet2 OS3E deployment [1].

Figure: Latency CDFs for all possible controller combinations for k = 1 to 5: average latency (left), worst-case latency (right) [1].
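
The placement optimization in this example is small enough to solve by brute force, which is also what makes the 34-node Internet2 study tractable. The sketch below enumerates every placement of k controllers on a toy latency matrix (the distances are made up, not the Internet2 data) and reports the average-latency-optimal placement (the k-median objective) and the worst-case-optimal one (the k-center objective).

```python
# Brute-force controller placement on a toy graph (distances invented;
# the real study uses the Internet2 OS3E topology). For each k-subset of
# nodes, compute average and worst-case node-to-controller latency.
from itertools import combinations

nodes = ["A", "B", "C", "D", "E"]
dist = {  # symmetric latency matrix (ms), illustrative values only
    ("A", "B"): 10, ("A", "C"): 25, ("A", "D"): 40, ("A", "E"): 55,
    ("B", "C"): 15, ("B", "D"): 30, ("B", "E"): 45,
    ("C", "D"): 15, ("C", "E"): 30, ("D", "E"): 15,
}

def d(u, v):
    return 0 if u == v else dist.get((u, v), dist.get((v, u)))

def evaluate(placement):
    # Each node talks to its nearest controller.
    latencies = [min(d(n, c) for c in placement) for n in nodes]
    return sum(latencies) / len(latencies), max(latencies)

for k in (1, 2):
    best_avg = min(combinations(nodes, k), key=lambda p: evaluate(p)[0])
    best_wc = min(combinations(nodes, k), key=lambda p: evaluate(p)[1])
    print(k, "avg-optimal:", best_avg, "worst-case-optimal:", best_wc)
```

Both objectives are NP-hard in general, which is why exhaustive search only works at this scale; the two optima can differ, mirroring the Chicago versus Kansas City result described above.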

REFERENCES:

[1] Brandon Heller. "Quantitatively Evaluating (and Optimizing) Software-Defined Networks."

[2] Martin Casado, Michael J. Freedman, Justin Pettit, Jianying Luo, Natasha Gude, Nick McKeown, and Scott Shenker. "Rethinking Enterprise Network Control." IEEE/ACM Transactions on Networking, 17(4), August 2009.

[3] Nick McKeown, Tom Anderson, Hari Balakrishnan, Guru Parulkar, Larry Peterson, Jennifer Rexford, Scott Shenker, and Jonathan Turner. "OpenFlow: Enabling Innovation in Campus Networks." SIGCOMM CCR, 38(2):69-74, 2008.

[4] Christopher Monsanto and Alec Story. "Language Abstractions for Software-Defined Networks."

[5] N. Foster, A. Guha, M. Reitblatt, A. Story, M. J. Freedman, N. P. Katta, C. Monsanto, J. Reich, J. Rexford, C. Schlesinger, D. Walker, and R. Harrison. "Languages for Software-Defined Networks." IEEE Communications Magazine, 51(2):128-134, February 2013. doi: 10.1109/MCOM.2013.6461197.

[6] Zdravko Bozakov and Panagiotis Papadimitriou. "AutoSlice: Automated and Scalable Slicing for Software-Defined Networks." ACM CoNEXT Student Workshop Proceedings, 2012.

[7] G. Schaffrath et al. "Network Virtualization Architecture: Proposal and Initial Prototype." Proc. ACM SIGCOMM VISA, 2009.

[8] N. Sarrar et al. "Leveraging Zipf's Law for Traffic Offloading." ACM SIGCOMM CCR, 2012.