Are we ready for QUIC?
Welcome to the Zettabyte Era, an era in which the world revolves not only around gas prices and water quality but also around the rate and quality of information transfer. According to the Cisco Visual Networking Index, by 2021 annual global IP traffic will hit 3.3 ZB, of which 63% will be wireless and mobile traffic. As the number of web and mobile applications grows to build an Internet of Things (IoT) powered by 5G technology, IP traffic is bound to increase exponentially. It is therefore urgent to establish networking protocols capable of handling this rapid growth in IP traffic.
The Internet today has undergone a drastic change since the publication of HTTP/1.1 in RFC 2616 (1999). Though several adjustments to protocols (such as HTTP pipelining and TCP Fast Open), algorithms (data compression techniques), browser changes, and faster link access have helped speed up the web, the core protocol resisted a major upgrade until very recently. To accomplish such an upgrade, Google developed the web transfer protocol SPDY, using multiplexed TCP streams and header compression over HTTP/1.1 with the aim of significantly reducing Page Load Time (PLT). In 2015, the Internet Engineering Task Force (IETF) released the largely SPDY-based HTTP/2 protocol in RFC 7540 as a successor to HTTP/1.1. HTTP/2 is an application layer protocol running on top of the Transmission Control Protocol (TCP) in the transport layer, which has its own limitations. TCP's connection-oriented service with its three-way handshake increases the Round-Trip Time (RTT) for an HTTP request, and the delay worsens twofold if SSL/TLS encryption is also present. The alternative to TCP, the connectionless User Datagram Protocol (UDP), was long given little merit due to its lack of reliability and QoS guarantees. In 2013, Google proposed the Quick UDP Internet Connections (QUIC) protocol over UDP in the transport layer, as an alternative to HTTP/2 over TCP/IP. This protocol is relatively unexplored, especially in the wireless connection domain. As the standardization process of QUIC is open to community input, its potential is truly exciting, as some of the brightest minds of our era have come forward to contribute to its design and development.
Since QUIC was first introduced, considerable attention has been drawn to its various attractive features, such as higher security, efficiency, and reliability. QUIC is built atop UDP, incorporating the prime features of protocols such as TCP, UDP, TLS, and HTTP/2. QUIC strives to overcome connection latency by sending data directly during connection establishment (the "0-RTT" approach). It also provides multiplexing features adopted from HTTP/2 and richer feedback data that facilitates advanced congestion control techniques. Moreover, QUIC can be implemented entirely in userspace, which enables faster deployment as part of application update cycles. Although QUIC is considered a single protocol, its functionalities permeate several layers of the OSI seven-layer reference model: the UDP-based connection management and the congestion control algorithms belong to the transport layer, while the multiplexing mechanism using streams can be placed in the session and presentation layers.
Figure 1 depicts the design of QUIC. Like TCP, QUIC provides loss recovery and congestion control while maintaining superior signaling capabilities. Furthermore, QUIC decreases latency by setting up connections with fewer RTTs. QUIC subsumes the key negotiation features of TLS 1.3, which requires all connections to be encrypted. The primary reason for compulsory encryption is not just to ensure the security and privacy of data, but also to prevent man-in-the-middle manipulation of packets in transit, which could hinder the future evolution of the QUIC protocol. Like HTTP/2, QUIC deploys techniques such as multi-streaming, while avoiding problems such as the head-of-line blocking that occurs with TCP, where all packets (of potentially different HTTP/2 streams) must be delivered in order.
Almost all Internet applications today require a private and secure connection, and TCP+TLS is widely used for this purpose. The problem with TCP+TLS (1.2) is that it requires a minimum of two RTTs to set up a secure connection, resulting in significant overhead. Adhering strictly to TLS 1.3, QUIC tackles this overhead and needs as few as zero RTTs to establish an encrypted connection: if a previous session is being resumed, payload data can be transmitted in the very first packet.
Figure 1. Architecture of QUIC
Once the version is successfully negotiated, QUIC combines the transport and crypto handshake signals to establish a first-time connection in a single RTT, which is two RTTs fewer than the widely used TCP+TLS 1.2 and one RTT fewer than TCP+TLS 1.3 (see Figure 2a). QUIC thus minimizes connection delay by sending both the TLS handshake and the relevant transport setup parameters in the connection's first packet. Initially, the QUIC client attempting to connect to the server sends a Client Hello message for key negotiation, along with some basic QUIC options and parameters such as the connection identifier and the preferred version number. Note that the handshake is encoded based on the version number; the client undergoes an additional version negotiation process if the server cannot handle the requested version. Otherwise, the server responds with a Server Hello message, its certificate, and session information that the client can use the next time it connects to the server. The client can then send its encrypted requests to the server, for a total of one RTT of connection setup. The first connection's parameters are stored as a cookie on the client, which can be used for authentication the next time it connects to the same server.
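The RTT comparison above can be made concrete with a small arithmetic sketch. The RTT counts come from the discussion above; the 50 ms round-trip value and the helper names are illustrative assumptions, not part of any real QUIC implementation.

```python
# Illustrative comparison of connection-setup latency before application
# data can flow, assuming a hypothetical 50 ms round-trip time.
RTT_MS = 50

HANDSHAKE_RTTS = {
    "TCP+TLS1.2 (first connection)": 3,  # 1 RTT TCP + 2 RTTs TLS 1.2
    "TCP+TLS1.3 (first connection)": 2,  # 1 RTT TCP + 1 RTT TLS 1.3
    "QUIC (first connection)":       1,  # combined transport + crypto handshake
    "QUIC (resumed, 0-RTT)":         0,  # payload sent in the first packet
}

def setup_delay_ms(protocol: str) -> int:
    """Return the handshake delay in ms before application data flows."""
    return HANDSHAKE_RTTS[protocol] * RTT_MS

for name, rtts in HANDSHAKE_RTTS.items():
    print(f"{name}: {rtts} RTT(s) = {setup_delay_ms(name)} ms")
```

On a 50 ms path, the first-connection gap between TCP+TLS 1.2 and QUIC is already 100 ms; on high-latency links (satellite, congested mobile) the saving grows proportionally.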
If the server can recognize a client from a previous connection, negotiation latency can be reduced by a large margin. The QUIC client can immediately send data to the server by resuming a previous encrypted session, removing an RTT from the handshake process (Figure 2b). To resume an encrypted session, the client sends its cached cryptographic cookie and Diffie-Hellman value to the server along with the encrypted payload. The client is authenticated based on the cookie's content. Once the server authenticates the client, it calculates the encryption key from the Diffie-Hellman value stored in the cookie and the value sent by the client. The server can then decrypt the data and immediately send its encrypted response back to the client.
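The key-agreement step behind this resumption can be sketched with a toy finite-field Diffie-Hellman exchange. The tiny parameters and all names here are purely for illustration (and are cryptographically insecure); real QUIC derives keys through the TLS 1.3 key schedule.

```python
# Toy sketch of the 0-RTT resumption idea: the client caches the server's
# DH share in a "cookie" and can derive a usable key before any round trip.
P, G = 23, 5  # demo-sized modulus and generator; NOT secure

def dh_public(secret: int) -> int:
    return pow(G, secret, P)

def dh_shared(secret: int, peer_public: int) -> int:
    return pow(peer_public, secret, P)

# First connection: the server hands the client a cookie with its DH share.
server_secret = 15
cookie = {"server_dh_public": dh_public(server_secret)}

# Resumption: the client combines a fresh secret with the cached cookie and
# can encrypt payload data immediately, before any round trip completes.
client_secret = 6
client_key = dh_shared(client_secret, cookie["server_dh_public"])

# The server recomputes the same key from the client's share in the 0-RTT packet.
server_key = dh_shared(server_secret, dh_public(client_secret))
assert client_key == server_key  # both sides derived the same 0-RTT key
```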
Figure 2. Round-trip time (RTT) for handshake in different protocols. (a) First-time connection establishment. (b) Subsequent connections.
Through this process, many streams of data can be sent over a single connection. Since HTTP/1.1 can only request a single resource at a time, web browsers were designed to open several connections per website access (Figure 3a), where each connection carried small data transfers. The inherent flaw in this technique is the complication of maintaining several connections and the associated delays of doing so. Attempts to counter this problem resulted in HTTP/2 utilizing a single TCP connection to carry multiplexed streams when multiple requests are directed to the same server. Though this technique works fine in most scenarios, it still adheres to TCP's ordered delivery criterion, so independent data streams may suffer from head-of-line blocking when data is missing on one stream while data is successfully sent and received on the other streams (Figure 3b).
Figure 3. Comparison of multiplexing by sending multiple streams of data over a single transport connection using (a) HTTP/1.1, (b) HTTP/2, and (c) QUIC.
QUIC overcomes head-of-line blocking by not imposing any cross-stream ordering criterion while multiplexing concurrent data streams over a single QUIC connection (Figure 3c), yet it still maintains reliable packet delivery. One QUIC packet carries multiple frames of the same or different streams. Within any single stream, frames are delivered in order; however, if frames are missing on one stream, the protocol ensures that the delivery of frames of the other streams is not blocked.
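This per-stream delivery rule can be sketched as a small reassembly loop. The frame layout (stream id, byte offset, data) is a simplification of QUIC's STREAM frames; the function and variable names are illustrative.

```python
# Minimal sketch of QUIC-style per-stream delivery: a missing frame stalls
# only its own stream, never the others.
from collections import defaultdict

def deliver(frames):
    """frames: list of (stream_id, offset, data) in arrival order."""
    buffered = defaultdict(dict)    # stream -> offset -> data
    next_offset = defaultdict(int)  # per-stream delivery cursor
    delivered = defaultdict(str)
    for stream, offset, data in frames:
        buffered[stream][offset] = data
        # Flush any contiguous run now available on this stream only.
        while next_offset[stream] in buffered[stream]:
            chunk = buffered[stream].pop(next_offset[stream])
            delivered[stream] += chunk
            next_offset[stream] += len(chunk)
    return dict(delivered)

# Stream 1 is missing its first frame (offset 0); stream 2 arrives complete
# and is delivered in full even though stream 1 is stalled.
out = deliver([(1, 2, "c"), (2, 0, "ok"), (2, 2, "!")])
print(out)  # stream 2 yields "ok!", stream 1 delivers nothing yet
```

Under TCP, the single byte stream would hold back stream 2's data until stream 1's missing bytes arrived; here the two cursors advance independently.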
At the moment, the QUIC design group has been tasked by the IETF to use only standardized congestion control mechanisms, such as NewReno and CUBIC, as its defaults. QUIC strives towards an open congestion control interface, like TCP's, that permits experimentation with different congestion control algorithms to find an optimal fit. However, unlike TCP, QUIC provides a more hospitable environment for congestion control. First, it inherently adopts modern loss-recovery mechanisms such as F-RTO and Early Retransmit. It also offers more precise feedback that aids loss detection. Although QUIC employs a monotonically increasing packet number, it does not retransmit at the packet level; instead, it retransmits on a per-frame basis. This allows QUIC to distinguish retransmissions from originally sent packets, avoiding retransmission ambiguity, similar to the idea of TCP's Recent ACKnowledgment (RACK) algorithm. QUIC has also been designed to include information about the delay between packet reception and ACK transmission. With this data, the sender can paint a better picture of network congestion through the path RTT. Furthermore, QUIC incorporates TCP's selective acknowledgment mechanism and supports up to 255 ACK ranges, making it more robust to packet loss and reordering.
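The ACK-range idea can be sketched by collapsing received packet numbers into contiguous ranges. The encoding is simplified (the wire format uses variable-length gap/length deltas); the 255-range cap comes from the text above, and the function name is illustrative.

```python
# Sketch of representing received packet numbers as ACK ranges, in the
# spirit of QUIC's selective acknowledgments (up to 255 ranges per ACK).
def ack_ranges(received, limit=255):
    """Collapse a set of packet numbers into sorted (start, end) ranges."""
    ranges = []
    for pn in sorted(received):
        if ranges and pn == ranges[-1][1] + 1:
            ranges[-1] = (ranges[-1][0], pn)  # extend the current range
        else:
            ranges.append((pn, pn))           # gap found: start a new range
    return ranges[-limit:]                    # keep at most `limit` ranges

# Packets 4 and 9 were lost; the ACK reports three contiguous ranges,
# telling the sender exactly which packets to repair.
print(ack_ranges({1, 2, 3, 5, 6, 7, 8, 10}))  # [(1, 3), (5, 8), (10, 10)]
```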
QUIC employs several experimental mechanisms, such as the connection identifier (CID), which identifies a connection and enables connection migration without service interruption, and packet pacing, which aims to reduce data loss and network congestion. Packet pacing, enabled by default alongside the traditional congestion-window adjustment technique, is the process of injecting a wait time between consecutive UDP datagrams. The scheme is shown in Figure 4. Packet pacing acts as a preventive measure that ensures network latency does not take a drastic hit in case of congestion.
Figure 4. Packet pacing mechanism.
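The pacing interval can be sketched as a simple function of the congestion window and the path RTT: instead of bursting a full window, packets are spread evenly across one round trip. The formula and names are illustrative; real pacers add refinements such as a pacing gain factor.

```python
# Sketch of the pacing idea: space the congestion window's worth of packets
# evenly over one RTT instead of sending them as a burst.
def pacing_interval_ms(cwnd_bytes: int, rtt_ms: float, mss: int = 1200) -> float:
    """Wait time between consecutive UDP datagrams so that cwnd_bytes
    are transmitted smoothly across one round-trip time."""
    packets_per_rtt = cwnd_bytes / mss
    return rtt_ms / packets_per_rtt

# With a 60 kB window and a 50 ms RTT, 50 packets go out one per millisecond
# rather than all at once, smoothing queue build-up at the bottleneck.
gap = pacing_interval_ms(cwnd_bytes=60_000, rtt_ms=50.0)
print(f"{gap:.2f} ms between packets")
```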
In any network, when packet loss occurs, congestion control techniques are deployed regardless of whether the loss was caused by congestion or by transmission error. The congestion window is reduced and, as a result, throughput takes a hit. Although this method of preventing network overload might seem proactive, it cannot prevent the additional delays that arise from potentially slow recovery mechanisms based on duplicate acknowledgments or retransmission timeouts. Through forward error correction (FEC), codes containing redundant information provide higher resilience to loss and enable faster recovery. However, the time required for the coding and decoding process hampers the low-latency design principle of QUIC. Furthermore, though coding can improve network performance in environments with heavy packet loss (such as wireless networks), it is still viewed as a burden because the resulting energy cost cannot realistically be met by already power-constrained mobile devices. Thus, coding remains a major point of discussion in the QUIC design process.
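The FEC trade-off discussed above can be illustrated with a toy XOR-parity scheme (similar in spirit to Google's early QUIC FEC experiments): one redundant packet lets the receiver rebuild any single lost packet without waiting a retransmission round trip, at the cost of extra computation and bandwidth. This simplified version assumes equal-length payloads and is for illustration only.

```python
# Toy XOR-parity FEC sketch: protect a group of packets with one parity
# packet so any single loss is recoverable locally at the receiver.
def xor_parity(packets):
    """Compute an XOR parity packet over equal-length payloads."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

data = [b"pkt1", b"pkt2", b"pkt3"]
parity = xor_parity(data)  # sent alongside the data packets

# Packet 2 is lost in transit; XOR-ing the survivors with the parity
# recovers it without a retransmission round trip.
recovered = xor_parity([data[0], data[2], parity])
assert recovered == b"pkt2"
```

The recovery saves at least one RTT versus retransmission, but the per-byte XOR work and the extra parity traffic are exactly the coding costs the text flags for power-constrained devices.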
Following in the steps of TLS, QUIC strives to maintain secure connections by mandatorily encrypting all connection data. However, like TLS 1.3, QUIC may also suffer from new security attacks due to its modeling around 0-RTT session resumption. A typical attack would involve the manipulation or replay of packets remaining from a previous crypto session. If the client and server can be forced to initiate a full handshake, consuming computational resources and memory, these attacks could also serve as an additional DoS attack vector. Further analysis and research is therefore valuable in this space. Furthermore, unencrypted information in a connection, such as the connection identifier, is susceptible to pervasive monitoring attacks. To bring an element of security to processes like IP address mobility, QUIC is being designed to use unique identifiers for each encrypted communication session to avoid linkability. This tussle is the subject of an ongoing discussion in the QUIC working group and will become even more relevant when the work is extended to include multipath support for QUIC.
The objects in any web application usually have inter-dependencies, which place a bottleneck on performance. Since QUIC has been designed to multiplex the individual, independent data streams for these objects, it should also be given the capability to prioritize between streams based on these dependencies. Nevertheless, assigning priority levels accurately while taking into consideration dynamic object load times and current network status is an open field for additional research.
There is no doubt that QUIC is a promising protocol that breaks with routine, although it needs more well-rounded consideration before it can be deployed widely. Unlike traditional transport protocols, QUIC can be iterated and deployed quickly; we can also regard it as a platform for testing new ideas, not just a protocol.
QUIC has been shown to perform better in poor environments, so it could inform the design of transport protocols for space networks, as space links are generally prone to high latency, high bit error rates, and connection interruptions. Security, as always, remains an open concern. The key feature of QUIC is its 0-RTT connection establishment, which might be leveraged to handle the low-rate attacks seen in TCP; however, if adversaries succeed in forcing QUIC to fall back to TCP/TLS, or can disrupt the handshake between client and server, then high degrees of latency and inconsistency can be introduced into the network. Thus, QUIC has room to improve and grow before it can be established as a standard transport protocol.
- Kharat, P. K., Rege, A., Goel, A., & Kulkarni, M. (2018, April). QUIC Protocol Performance in Wireless Networks. In 2018 International Conference on Communication and Signal Processing (ICCSP) (pp. 0472-0476). IEEE.
- Cui, Y., Li, T., Liu, C., Wang, X., & Kühlewind, M. (2017). Innovating transport with QUIC: Design approaches and research challenges. IEEE Internet Computing, 21(2), 72-76.
- Elkhatib, Y., Tyson, G., & Welzl, M. (2014, June). Can SPDY really make the web faster? In Networking Conference, 2014 IFIP (pp. 1-9). IEEE.
- "Hypertext Transfer Protocol Version 2 (HTTP/2)", RFC 7540. [Online]. Available: https://tools.ietf.org/html/rfc7540. Retrieved: Jul. 2017.
- Hamilton, R., Iyengar, J., Swett, I., & Wilk, A. "QUIC: A UDP-Based Secure and Reliable Transport for HTTP/2". [Online]. Available: https://tools.ietf.org/html/draft-tsvwg-quic-protocol-02.
- Langley, A., Riddoch, A., Wilk, A., Vicente, A., Krasic, C., Zhang, D., … & Bailey, J. (2017, August). The QUIC transport protocol: Design and Internet-scale deployment. In Proceedings of the Conference of the ACM Special Interest Group on Data Communication (pp. 183-196). ACM.
- Cook, S., Mathieu, B., Truong, P., & Hamchaoui, I. (2017, May). QUIC: Better for what and for whom? In IEEE International Conference on Communications (ICC2017).
- Shade, R., & Warres, M. (2016). HTTP/2 Semantics Using The QUIC Transport Protocol. IETF Draft, IETF, Draft, July.
- Megyesi, P., Krämer, Z., & Molnár, S. (2016, May). How quick is QUIC?. In Communications (ICC), 2016 IEEE International Conference on (pp. 1-6). IEEE.
- Iyengar, J., et al. (2016, October). QUIC Loss Recovery and Congestion Control. IETF Internet Draft, draft-iyengar-quic-loss-recovery-01. [Online]. Available: https://tools.ietf.org/html/draft-iyengar-quic-loss-recovery-01.
- Lychev, R., Jero, S., Boldyreva, A., & Nita-Rotaru, C. (2015, May). How secure and quick is QUIC? Provable security and performance analyses. In Security and Privacy (SP), 2015 IEEE Symposium on (pp. 214-231). IEEE.
- Kuzmanovic, A., & Knightly, E. W. (2003, August). Low-rate TCP-targeted denial of service attacks: the shrew vs. the mice and elephants. In Proceedings of the 2003 conference on Applications, technologies, architectures, and protocols for computer communications (pp. 75-86). ACM.