Networking development and research have historically focused on increasing network throughput and path resource utilization, which has particularly helped bulk applications such as file transfer and video streaming. Interactive applications, such as web browsing, audio/video conferencing, multi-player online gaming and financial trading, have instead been facilitated by the recent over-provisioning in the core of the Internet. While bulk applications depend on transferring data as fast as the network permits, interactive applications use rather little bandwidth and depend instead on low latency. Because the responsiveness of interactive applications directly influences the quality of experience, networking research has recently shown growing interest in reducing latency. To appreciate the significance of latency-sensitive applications for today's Internet, we need to understand their traffic patterns and quantify their prevalence. Latency is the amount of time it takes a packet to travel from one node to another. Together, latency and bandwidth define the speed and capacity of a network. To keep latency low, packets must be quickly processed, queued, serialized, and propagated across the network.
Low-latency communication benefits most Internet applications in the form of faster setup and response times, as well as the higher effective bandwidth enabled by transport protocol behavior. In networking, latency describes the total time it takes a data packet to travel from source to destination; measured as a round trip, it is the total time for a packet to be transmitted and returned to its source. More generally, it is the interval during which one system component waits for another system component to do something. Any such delay in data communication over a network is known as network latency. Low-latency networks are network connections in which only small delays occur, whereas high-latency networks are connections that suffer from long delays. In any network communication, high latency creates bottlenecks: it prevents the data from taking full advantage of the network pipe and effectively decreases the communication bandwidth. Depending on the source of the delays, the influence of latency on network bandwidth can be temporary or persistent.
Latency plays an even more significant role in many emerging applications: for instance, embedding communications technology in automation or traffic systems, or consumer applications such as augmented reality and virtual reality, where variability in latency can render the service unusable because the human brain's perceptual limits mean that delayed rendering causes motion sickness. Many Internet-of-Things and critical services require low-latency, high-reliability communications.
Latency is generally measured in milliseconds (ms).
In network transmission, the following elements contribute to latency:
Storage delay: When data is written to hard disks and other storage devices, a delay occurs while reading from and writing to different blocks of memory. Processors can take considerable time to locate the exact position for reading or writing data. Intermediate devices such as switches and hubs can also introduce these kinds of delays.
Device processing: Network devices, not only storage devices, can also cause latency. For example, on receiving a data packet a router holds it briefly while it reads the packet's header information and writes some additional information before forwarding it.
Transmission: Transmission media come in many types, each with its own limitations. Every medium, from fiber optics to coaxial cable, takes some time to carry a packet from source to destination. Transmission delay also depends on packet size: smaller packets take less time to put on the wire than larger packets.
Propagation: This is the delay incurred as packets travel from one node to another, even at close to the speed of light. It is purely an effect of the distance between the client and the server.
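The contributors above can be combined into a simple back-of-the-envelope model of one-way latency. All figures below (link speed, path length, processing and queuing times) are illustrative assumptions, not measurements:

```python
# Toy model of per-packet latency from the contributors described above.

def transmission_delay(packet_bytes, link_bps):
    """Time to serialize the packet onto the wire."""
    return packet_bytes * 8 / link_bps

def propagation_delay(distance_m, signal_speed=2e8):
    """Time for the signal to cover the distance (~2/3 c in fiber)."""
    return distance_m / signal_speed

packet = 1500          # bytes: a full Ethernet frame
link = 100e6           # assumed 100 Mbit/s link
distance = 1_000_000   # assumed 1,000 km path
processing = 50e-6     # assumed 50 us of router/device processing
queuing = 200e-6       # assumed 200 us waiting in buffers

total = (transmission_delay(packet, link) + propagation_delay(distance)
         + processing + queuing)
print(f"transmission: {transmission_delay(packet, link) * 1e3:.3f} ms")  # 0.120 ms
print(f"propagation:  {propagation_delay(distance) * 1e3:.3f} ms")       # 5.000 ms
print(f"total one-way latency: {total * 1e3:.3f} ms")                    # 5.370 ms
```

Note that on a long path the propagation term dominates, which is why physical distance matters so much for latency.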
As discussed earlier, network latency is an expression of how much time it takes for a packet of data to get from one point to another. In some environments (for example, AT&T), latency is taken as the round-trip time, measured by sending a packet that is returned to the sender. Ideally, latency should be as close to zero as possible.
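One common way to observe round-trip latency in practice is simply to time a round trip. A minimal Python sketch follows, timing a TCP connection handshake as a stand-in for an ICMP ping (which requires raw-socket privileges); the host name in the commented example is an assumption:

```python
# Estimate round-trip latency by timing a TCP connection handshake.
import socket
import time

def tcp_rtt_ms(host, port=80, timeout=2.0):
    """Return the time (ms) to establish a TCP connection to host:port."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established: roughly one network round trip
    return (time.perf_counter() - start) * 1000

# Example (requires network access; host is a placeholder):
# print(f"RTT: {tcp_rtt_ms('example.com'):.1f} ms")
```

The handshake time slightly overstates the raw round trip because it includes kernel processing on both ends, but it needs no special privileges.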
Latencies become noticeable to individuals, increasing user annoyance and reducing productivity, when they rise above roughly 30 ms.
Having covered the contributors to latency above, we can distinguish several types of latency:
- Operational latency
- WAN latency
- Mechanical latency
- Audio latency
- Internet latency
- Interrupt latency
- Computer and operating system latency
Operational latency: In linear workflows, it can be defined as the sum of the times of the individual operations. In parallel workflows, the latency is determined by the slowest operation performed by a single task worker.
WAN latency: This is often a major factor in determining Internet latency. A WAN that is busy directing other traffic will introduce a delay, whether the requested resource sits on a server on the LAN, on another computer on that network, or elsewhere on the web. LAN users will also experience delay when the WAN is busy. In either case, the delay would persist even if the rest of the hops, including the server where the required data was located, were entirely free of congestion.
Mechanical latency: This is the delay from an input into a mechanical system or device to the desired output. It is set by the Newtonian physics of the mechanism (quantum effects aside). An example is the time between actuating the shift lever of a gearbox or bicycle shifter and the gear actually changing.
Audio latency: This is the delay between sound being created and heard. For sound produced in the physical world, the delay is set by the speed of sound, which varies slightly depending on the medium the sound wave travels through: sound travels faster in denser media, so it is quickest through solids, slower through liquids and slowest through air. The speed of sound is usually quoted as measured in dry air at room temperature, roughly 767 miles per hour (about 343 m/s). In computing, audio latency is the accumulated delay from audio input to audio output; its length depends on the hardware and software used, such as the operating system and drivers of the computer's audio interface. Latencies of thirty milliseconds or more are usually perceived by a listener as a separation between the production of a sound and its arrival at the ear.
Internet latency: This is simply a special case of network latency: the Internet is a very large wide-area network (WAN). Internet latency is determined by the same factors as above, but the distances in the transmission medium, the number of hops over equipment and the number of servers involved are all larger than for smaller networks. Measurement typically begins at the exit of a network and ends on the return of the requested data from an Internet resource. The response time of a "ping" is a good indicator of latency in this situation.
Interrupt latency: This is the amount of time it takes for a system to act on an interrupt, a signal telling the operating system to pause until it can decide what to do in response to some event.
Computer and operating system latency: This is the combined delay between an input or command and the desired output. In a computer system, any delay or waiting that increases real or perceived response time beyond what is desired is referred to as latency. Specific contributors include mismatches in data speed between the microprocessor and input/output devices, insufficient data buffers, and the performance of the hardware involved and its drivers. A heavy processing load on the computer can add further, more significant latency.
Ultra-low latency refers to a subset of low latency. Nowadays, ultra-low latency is measured in hundreds of nanoseconds, with only delays under 1 millisecond qualifying as ultra-low. Low-latency requirements in mission-critical applications, such as tele-surgery, virtual reality over networks, high-frequency trading, cloud computing, autonomous vehicle control and smart grids among others, have truly revolutionised various fields of human endeavour.
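Because ultra-low latency is specified in nanoseconds, measuring it requires a nanosecond-resolution clock. A minimal Python sketch using the standard library's `time.perf_counter_ns()` follows; the "work" being timed is an arbitrary stand-in:

```python
# Nanosecond-granularity timing, the scale at which ultra-low-latency
# systems are specified. The timed workload here is a trivial placeholder.
import time

start = time.perf_counter_ns()
payload = b"x" * 64            # a tiny message, stand-in for real work
checksum = sum(payload)        # trivial processing step
elapsed_ns = time.perf_counter_ns() - start

print(f"elapsed: {elapsed_ns} ns ({elapsed_ns / 1e6:.6f} ms)")
```

In practice, sub-microsecond measurements of real systems are done with hardware timestamping rather than software clocks, since the clock call itself costs tens of nanoseconds.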
In the financial industry, for example, the rapid expansion of automated and algorithmic trading (also known as high-frequency trading) has dramatically increased the critical role of network and server technology in market trading: first in the requirement for low latency, and second in the need for high throughput to process the high volume of transactions. Ultra-low-latency technology enables direct market access to dozens of futures, fixed income and FX exchanges in North America, South America and Europe, with market orders executed well below the one-millisecond barrier. All of this is powered by state-of-the-art technologies such as low-latency switches with built-in FPGAs (field-programmable gate arrays), application acceleration middleware, 10-400GbE server adapters, and single-mode and multimode optical fibre. These ultra-low-latency switches also offer low-latency multicast and unicast routing and in-hardware network address translation (NAT).
Cloud networking is another market segment that will greatly benefit from high throughput, low latency and real-time application performance. The rapidly increasing deployment of both public and private clouds is driving increased use of social networking and Web 2.0 applications. These cloud applications rely on real-time media and video distribution, which requires low latency for both business-to-consumer (B2C) and business-to-business (B2B) needs.
- Low-latency networks & challenges faced
Despite the recent advances made in the race to zero latency, certain sources of delay still pose a challenge to low-latency technologies. These are:
- Fiber delay – The longer the physical route, the greater the time it takes for traffic to get from one end to the other. Eliminating eight inches of optical fiber between two connected locations cuts a nanosecond of transport latency between them. The competitive advantage of reducing fiber delay can be huge for companies conducting transactions in major cities, because fiber routes within cities go up and down manholes, across streets, and so on. Enterprises requiring ultra-low latency must thoroughly evaluate the directness or circuitousness of the routes available for dedicated infrastructure from dark-fiber providers.
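The eight-inches-per-nanosecond figure follows directly from the speed of light in glass. A quick check, assuming a typical single-mode fiber refractive index of about 1.47:

```python
# Verify the "eight inches of fiber per nanosecond" rule of thumb.
C = 299_792_458          # speed of light in vacuum, m/s
N_FIBER = 1.47           # assumed refractive index of silica fiber

speed_in_fiber = C / N_FIBER             # ~2.04e8 m/s
metres_per_ns = speed_in_fiber * 1e-9    # distance covered in 1 ns
inches_per_ns = metres_per_ns / 0.0254   # convert metres to inches

print(f"light travels {inches_per_ns:.1f} inches of fiber per nanosecond")
# roughly 8 inches, matching the rule of thumb
```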
- Proximity delay – The space near a fibre access point is limited, and the firm that can locate its equipment closer to the access point than another will, all else being equal, realize a competitive advantage in the speed with which transactions can be processed. Given that real estate near fibre junction points is in such high demand, some enterprises lease space from co-location providers.
- Equipment delay – This refers to the actual speed of the communication equipment across the end-to-end transport network. Every device introduces some amount of delay, and, therefore, any non-optimized transport equipment across the information path can thwart business models that are based on the most challenging network latency limits. Eliminating these sources of equipment delay is tricky in that different devices carry out key, common network functions in very different ways, yielding very different degrees of latency.
- Key industrial organizations & their case studies
Below are case studies of a few of the most well-known organizations that have gone the extra mile to achieve ultra-low latency in their networks.
- Arista: Arista has tackled the problem of latency with its architectural design for cloud networks, which enables flexible cloud networks transporting media, storage, data and compute traffic. The architecture uses ultra-low-latency, non-blocking 1GbE/10GbE switches running Arista's Extensible Operating System at wire rate.
Advantages of Arista’s 7xxx series switches in cloud networks are:
- Reduced network node latency:
1. Intra-rack latency: reduced from 8 μs to 0.6 μs
2. Inter-rack latency: reduced from 36 μs to 0.24 μs
- Reduced or almost negligible network congestion.
- Implementation of standard Ethernet (since it is widely used & understood).
- Reduced transport protocol latency.
- Fujitsu: Fujitsu's FLASHWAVE 7420 WDM platform provides breakthrough performance for trading applications. Its techniques for inline amplification, signal regeneration and transparent wavelength conversion eliminate causes of optical transport latency. The solution carries 80 transmission channels over 2 fibers, a cost-effective and flexible way to expand capacity. FLASHWAVE also delivers cost-effective solutions for the leading services required in financial networks, such as InfiniBand, Ethernet, Fibre Channel and Fibre Channel over Ethernet.
- Mellanox Technologies: Mellanox's solutions are widely deployed, mostly for large-scale simulations with high-speed interconnects. They provide ultra-low latency and high message and bandwidth rates, along with transport offload that results in very low CPU overhead. Remote direct memory access (RDMA) and advanced communication offloads are also offered. Mellanox's solutions are highly efficient, scalable interconnects for high-performance computing.
- Myricom: Myricom's solution for low-latency networks is Myri-10G, used for high-performance computing while retaining the benefits of Ethernet. Myri-10G delivers MPI data rates of 1.2 GB/s per adapter, rising to about 4.5 GB/s when four network links are bonded, and can also carry TCP or UDP traffic at line rate. Its zero-copy implementation reduces host-CPU utilization, leaving the CPU free to focus on computation.
- Solarflare: This company is at the forefront of the development of ultra-low-latency technologies. A market leader in connecting trading servers to Ethernet networks, Solarflare designs ultra-high-performance, ultra-low-latency networking chips, adapter cards, software and turnkey systems that are deployed in telco, enterprise and cloud data centres for some of the most demanding networking applications. Founded in 2001, it is one of the pioneers of 10GbE standards and technology.
It recently partnered with Arista Networks to address the growing need for low-latency, high-performance 10GbE switch-to-server solutions for high-frequency trading and other demanding applications such as public clouds, virtualization and big data. The majority of major exchanges, commercial banks and trading institutions, in over 82 countries, make use of Solarflare technology.
- Low Latency Networks & Security
Low-latency networks are vulnerable to the following threats:
- Timing attacks: Low-latency networks are prone to timing attacks. The goal of such an attack is not to change or modify the data being transferred but to observe and exploit the traffic's shape, i.e., the packet sizes and the latency between packets. These attacks rely on the fact that the timing characteristics of the network change only slightly each time, while packet sizes change in a predictable way. This allows traffic to be re-identified by matching packets against an already known size and latency sequence.
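The traffic-shape matching described above can be illustrated with a toy sketch (not a real attack): the observer slides a known fingerprint of (packet size, inter-packet gap) pairs over captured traffic, looking for where the known flow reappears. All traces here are made-up example data:

```python
# Toy illustration of traffic fingerprinting by packet size and timing.

def matches(observed, fingerprint, size_tol=0, gap_tol_ms=5):
    """Slide the fingerprint over an observed (size, gap_ms) trace.

    Returns the offset where every packet matches the fingerprint within
    the given tolerances, or -1 if the fingerprint never appears.
    """
    n = len(fingerprint)
    for i in range(len(observed) - n + 1):
        window = observed[i:i + n]
        if all(abs(w[0] - f[0]) <= size_tol and abs(w[1] - f[1]) <= gap_tol_ms
               for w, f in zip(window, fingerprint)):
            return i
    return -1

# Known "shape" of the target flow: (packet size in bytes, gap in ms).
fingerprint = [(512, 10), (1500, 3), (1500, 3), (64, 40)]
# Captured traffic with the same flow embedded, timing slightly jittered.
observed = [(300, 8), (512, 12), (1500, 2), (1500, 4), (64, 42), (700, 9)]

print(matches(observed, fingerprint))  # prints 1: flow found at offset 1
```

Defences against this class of attack generally involve padding packets to uniform sizes and adding deliberate jitter, which is exactly what low-latency networks are reluctant to do.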
- MITM attacks: A "man-in-the-middle" attack attempts to listen in on the conversation between two parties; in its passive form it is also known as eavesdropping. Low-latency networks are prone to this attack as well.
Achieving low latency in networks is a necessity. It allows data rates to increase and benefits several sectors, such as financial trading, tremendously. Companies like Arista and Mellanox are pursuing ultra-low latencies in their networks by building on or modifying existing legacy network architectures.
Although a number of organizations are working towards the goal of eliminating latency, or at least reducing it to a negligible amount, and have shown this to be possible in theory, implementing it in practice remains a difficult task.