If you have ever worked on a farm or done construction, you probably got used to carrying things. And you might have heard your supervisor say at some point, “Get it all in one trip if you can.” That’s the idea behind HTTP/2. Earlier versions of HTTP required multiple TCP connections to transfer the growing volume of data that accompanies today’s web pages, including images, formatting instructions, and other resources. HTTP/2 allows many HTTP conversations to be multiplexed over a single TCP connection.
When Tim Berners-Lee designed the first HTTP protocol (HTTP 0.9), he wanted to keep it very simple. After all, he was only looking for a way to retrieve hypertext (HTML) pages from a server and make them appear on the computer that he was using. So HTTP, or Hypertext Transfer Protocol, was created as a one-line protocol. The original documentation was also simple. Using the telnet protocol, an HTTP command might look something like this:
$> telnet google.com 80

GET /about/

(hypertext response)
(connection closed)
Google engineer Ilya Grigorik has written a brief but informative history of the HTTP protocol for O’Reilly Media. He traces the protocol’s development from Berners-Lee’s original proposal to the advent of HTTP/2. It is another example of the innovative ways that technical professionals have worked together as an open collective to benefit the entire community.
One important facet of HTTP history that Grigorik leaves out here, however, is the adoption of Google’s SPDY protocol as the model for HTTP/2. The relationship of SPDY to HTTP/2 is briefly outlined in an FAQ provided by the IETF HTTP Working Group. They write that SPDY/2 was chosen as the basis for the new protocol after a call for proposals. Core SPDY developers Mike Belshe and Roberto Peon have been involved in the project.
Google has announced that it is transitioning from SPDY to HTTP/2, although some web browsers and tools may still use SPDY. A comparison of the two protocols shows many similarities, though HTTP/2 diverges in areas such as header compression. The primary focus of both is making web browsing more efficient.
“The Hypertext Transfer Protocol (HTTP) is a wildly successful protocol.” That’s the pronouncement of the authors of its latest version in the IETF standard for HTTP/2, RFC 7540. And in his introduction to HTTP/2, Grigorik says that “HTTP is one of the most widely adopted application protocols on the Internet.” So what’s the problem? Why change it?
The new RFC actually leaves the previous version, HTTP/1.1, in place. The two protocols can work in parallel or together in the same networking environment. But the new version addresses problems that have arisen out of the exponential growth of the internet and the complexity of its pages. The considerations outlined by the IETF httpbis working group included the issues described below.
Just as the farm hand may have to make several trips if he can’t carry everything in his arms or in whatever container he is using, HTTP/1.1 needs multiple TCP connections to retrieve the data of a complex website. HTTP/2 overcomes that by setting up a sort of session that stays open until the data is retrieved. And data is encapsulated in frames reminiscent of the old Frame Relay protocol.
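The multiplexing idea can be sketched with a toy round-robin scheduler. This is a simplification for illustration only: real HTTP/2 interleaves prioritized, flow-controlled binary frames, not the hypothetical `(stream_id, chunk)` tuples shown here.

```python
from collections import deque

def multiplex(streams):
    """Interleave chunks from several logical streams onto one 'wire'.

    streams: dict mapping a stream id to a list of data chunks.
    Returns the interleaved sequence of (stream_id, chunk) pairs,
    mimicking how HTTP/2 carries many exchanges on one connection.
    """
    queues = {sid: deque(chunks) for sid, chunks in streams.items()}
    wire = []
    while queues:
        # Round-robin: one chunk from each still-active stream per pass.
        for sid in list(queues):
            wire.append((sid, queues[sid].popleft()))
            if not queues[sid]:
                del queues[sid]
    return wire

# Two resources fetched concurrently over a single connection:
print(multiplex({1: ["a", "b"], 3: ["x"]}))
```

The point of the sketch is that neither stream has to wait for the other to finish, which is exactly what HTTP/1.1 could not do on a single connection.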
Head-of-line blocking is a congestion problem that occurs when a packet sent earlier holds up the packets that follow. HTTP/2 removes this blocking at the HTTP layer by letting responses complete out of order on independent streams, though packet loss can still stall the underlying TCP connection.
It’s not necessary to know all the details to have an understanding of what HTTP/2 does. By compressing headers and reducing TCP connections, the new protocol works in ways that its predecessor never could. As Grigorik writes in his history, “The primary focus of HTTP/2 is on improving transport performance and enabling both lower latency and higher throughput.”
HPACK is the format the IETF developers chose to compress headers in HTTP/2. RFC 7541 describes the HPACK standard, which was selected over the DEFLATE format used with SPDY. The specification says that HPACK is “intentionally simple and inflexible”. It treats headers differently from HTTP/1.1, and also addresses security concerns.
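To give a taste of how “intentionally simple” HPACK is, RFC 7541 (section 5.1) defines a prefix-based integer representation used throughout the format. A minimal sketch in Python (the function names are my own, not from any library):

```python
def encode_int(value, prefix_bits):
    """Encode an integer with an N-bit prefix, per RFC 7541 section 5.1."""
    max_prefix = (1 << prefix_bits) - 1
    if value < max_prefix:
        return bytes([value])          # fits entirely in the prefix
    out = [max_prefix]                 # prefix filled with all ones
    value -= max_prefix
    while value >= 128:                # continuation bytes, 7 bits each
        out.append((value % 128) + 128)
        value //= 128
    out.append(value)
    return bytes(out)

def decode_int(data, prefix_bits):
    """Decode an N-bit-prefix integer; returns (value, bytes_consumed)."""
    max_prefix = (1 << prefix_bits) - 1
    value = data[0] & max_prefix
    if value < max_prefix:
        return value, 1
    i, shift = 1, 0
    while True:
        b = data[i]
        value += (b & 0x7F) << shift
        shift += 7
        i += 1
        if not (b & 0x80):             # high bit clear ends the integer
            return value, i

# RFC 7541's own worked example: 1337 with a 5-bit prefix.
print(encode_int(1337, 5).hex())  # 1f9a0a
```

Small values occupy a single byte, which is part of why HPACK headers are so much cheaper on the wire than HTTP/1.1’s repeated plain-text headers.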
The inclusion of framing in HTTP/2 is interesting. Grigorik’s intro gives a good visual representation of the binary framing layer. Anyone familiar with protocol stacks will gain something from this image.
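The framing layer itself is compact: per RFC 7540 (section 4.1), every HTTP/2 frame begins with a fixed 9-octet header carrying a 24-bit payload length, an 8-bit type, an 8-bit flags field, and a 31-bit stream identifier. A rough sketch of packing and parsing that header (helper names are illustrative, not from a real library):

```python
FRAME_TYPES = {0x0: "DATA", 0x1: "HEADERS", 0x2: "PRIORITY",
               0x3: "RST_STREAM", 0x4: "SETTINGS", 0x5: "PUSH_PROMISE",
               0x6: "PING", 0x7: "GOAWAY", 0x8: "WINDOW_UPDATE",
               0x9: "CONTINUATION"}

def pack_frame_header(length, ftype, flags, stream_id):
    """Build the fixed 9-octet HTTP/2 frame header (RFC 7540 sec. 4.1)."""
    return (length.to_bytes(3, "big")              # 24-bit payload length
            + bytes([ftype, flags])                # 8-bit type, 8-bit flags
            + (stream_id & 0x7FFFFFFF).to_bytes(4, "big"))  # R bit zeroed

def parse_frame_header(data):
    """Split a 9-octet header into (length, type name, flags, stream id)."""
    length = int.from_bytes(data[0:3], "big")
    ftype, flags = data[3], data[4]
    stream_id = int.from_bytes(data[5:9], "big") & 0x7FFFFFFF
    return length, FRAME_TYPES.get(ftype, "UNKNOWN"), flags, stream_id

# A HEADERS frame of 16 payload bytes on stream 3, END_HEADERS flag set:
hdr = pack_frame_header(16, 0x1, 0x4, 3)
print(parse_frame_header(hdr))  # (16, 'HEADERS', 4, 3)
```

Every frame carrying a stream identifier is what lets the receiver demultiplex interleaved conversations back into their original streams.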
If you’ve ever complained that it’s taking much too long for a web page to load (of course, you have), the help you’ve been looking for may be found in HTTP/2. While it’s not yet broadly used, wider adoption of the protocol is expected. Look here for a list of HTTP/2 implementations that is periodically updated. The site also tracks a list of tools for HTTP/2. You can be thankful that the “wildly successful” HTTP protocol is still with us, and that continued improvements by the IETF community will enable many more years of happy internet browsing.
P.S. Want to enable HTTP/2 for your web apps via the Total Uptime load balancer? Just edit the public facing port options for the SSL or SSL_TCP protocol!