HTTP/2 Makes the Internet Faster


If you have ever worked on a farm or done construction, you probably got used to carrying things. And you might have heard your supervisor say at some point, “Get it all in one trip if you can.” That’s the idea behind HTTP/2. Earlier versions of HTTP required multiple TCP connections to transfer the burgeoning data that accompany today’s web pages, including images, formatting instructions, and other resources. HTTP/2 allows the multiplexing of HTTP conversations on a single TCP connection.

A Brief History

When Tim Berners-Lee designed the first HTTP protocol (HTTP 0.9), he wanted to keep it very simple. After all, he was only looking for a way to retrieve hypertext (HTML) pages from a server and make them appear on the computer that he was using. So HTTP, or Hypertext Transfer Protocol, was created as a one-line protocol. The original documentation was also simple. Using the telnet protocol, an HTTP command might look something like this:

$> telnet google.com 80
GET /index.html

Google engineer Ilya Grigorik has written a brief but informative history of the HTTP protocol for O’Reilly Media. He traces the protocol’s development from Berners-Lee’s original proposal to the advent of HTTP/2. It is another example of the innovative ways that technical professionals have worked together as an open collective to benefit the entire community.

One important facet of HTTP history that Grigorik left out here, however, is the adoption of Google’s SPDY protocol as the model for HTTP/2. The relationship of SPDY to HTTP/2 is briefly outlined in an FAQ provided by the IETF HTTP Working Group. They write that SPDY/2 was chosen as the basis for the new protocol after a call for proposals. Core SPDY developers Mike Belshe and Roberto Peon have been involved in the project.

Google has announced it is transitioning from SPDY to HTTP/2, although some web browsers and tools may still be using SPDY. The two protocols are closely related, differing mainly in details such as header compression, and they share the same primary focus: making web browsing more efficient.

Overcoming Inefficiencies

“The Hypertext Transfer Protocol (HTTP) is a wildly successful protocol.” That’s the pronouncement of the authors of its latest version in the IETF standard for HTTP/2, RFC 7540. And in an Intro to HTTP/2, Grigorik says that “HTTP is one of the most widely adopted application protocols on the Internet”. So what’s the problem? Why change it?

The new RFC actually leaves the previous version, HTTP/1.1, in place; the two protocols can coexist in the same networking environment. But the new version addresses problems that have arisen out of the exponential growth of the internet and the complexity of its pages. The considerations outlined by the IETF subgroup httpbis included:

  • Header compression
  • High latency
  • Multiplexing of requests
  • Fixing head-of-line blocking
  • Better negotiation
  • Server push

Just as the farm hand may have to make several trips if he can’t carry everything in his arms or in whatever container he is using, HTTP/1.1 needs multiple TCP connections to retrieve the data of a complex website. HTTP/2 overcomes that by multiplexing requests over a single connection that stays open until all the data is retrieved. Data is encapsulated in binary frames, reminiscent of the old Frame Relay protocol.
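The framing layer is simple enough to sketch. Every HTTP/2 frame begins with a fixed 9-byte header (RFC 7540, section 4.1) carrying a 24-bit payload length, a frame type, flags, and the stream identifier that makes multiplexing possible. A minimal Python sketch of parsing that header:

```python
def parse_frame_header(data: bytes):
    """Parse the fixed 9-byte HTTP/2 frame header (RFC 7540, section 4.1)."""
    if len(data) < 9:
        raise ValueError("need at least 9 bytes")
    length = int.from_bytes(data[0:3], "big")      # 24-bit payload length
    frame_type = data[3]                           # e.g. 0x0 DATA, 0x1 HEADERS
    flags = data[4]                                # e.g. 0x4 END_HEADERS
    stream_id = int.from_bytes(data[5:9], "big") & 0x7FFFFFFF  # clear reserved bit
    return length, frame_type, flags, stream_id

# A HEADERS frame (type 0x1) with END_HEADERS (0x4) on stream 1,
# announcing a 16-byte payload:
header = (16).to_bytes(3, "big") + bytes([0x01, 0x04]) + (1).to_bytes(4, "big")
print(parse_frame_header(header))  # (16, 1, 4, 1)
```

Because every frame names its stream, frames from many requests and responses can be interleaved on one connection and reassembled at the other end.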

Head-of-line blocking is a congestion problem that occurs when a packet sent earlier holds up the packets that follow. HTTP/2 solves that at the HTTP level, since responses on separate streams no longer have to wait for one another, along with other issues.

HTTP/2 Highlights

It’s not necessary to know all the details to have an understanding of what HTTP/2 does. By compressing headers and reducing TCP connections, the new protocol works in ways that its predecessor never could. As Grigorik writes in his history, “The primary focus of HTTP/2 is on improving transport performance and enabling both lower latency and higher throughput.”

HPACK is the format the IETF developers chose to compress headers in HTTP/2. RFC 7541 describes the HPACK standard, which was selected over the DEFLATE format used with SPDY. The specification says that HPACK is “intentionally simple and inflexible”. It treats headers differently from HTTP/1.1, and also addresses security concerns.
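One small piece of HPACK that is easy to illustrate is its prefixed integer encoding (RFC 7541, section 5.1), used for header lengths and table indexes. A short Python sketch, reproducing the worked example from the RFC:

```python
def hpack_encode_int(value: int, prefix_bits: int) -> bytes:
    """Encode an integer with an N-bit prefix (RFC 7541, section 5.1)."""
    limit = (1 << prefix_bits) - 1
    if value < limit:
        return bytes([value])          # fits entirely in the prefix
    out = [limit]                      # prefix filled with ones
    value -= limit
    while value >= 128:
        out.append((value % 128) + 128)  # 7 bits of data, continuation bit set
        value //= 128
    out.append(value)                  # final byte, continuation bit clear
    return bytes(out)

# The worked example from the RFC: 1337 with a 5-bit prefix
print(hpack_encode_int(1337, 5).hex())  # 1f9a0a
```

Small values cost a single byte, which is part of how HPACK keeps repeated header fields cheap on the wire.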

The inclusion of framing in HTTP/2 is interesting. Grigorik’s intro gives a good visual representation of the binary framing layer. Anyone familiar with protocol stacks will gain something from this image.

Server push is the way that the HTTP/2 protocol allows a server to send data that it thinks a client might need. It anticipates requests that might be associated with a webpage. Everyone knows that HTML pages are never fully complete until they are populated by attached images, JavaScript, and so much more. Server push gets all that going before even being asked.
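As a rough illustration, this is how push could be enabled in an nginx configuration (assuming nginx 1.13.9 or later, which introduced the http2_push directive; the file names here are hypothetical):

```nginx
server {
    listen 443 ssl http2;

    location = /index.html {
        # When a client requests index.html, proactively push the
        # stylesheet it will need, before the browser asks for it.
        http2_push /styles.css;
    }
}
```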

Conclusion

If you’ve ever complained that it’s taking much too long for a web page to load (of course, you have), the help you’ve been looking for may be found in HTTP/2. While it’s not yet broadly used, wider adoption of the protocol is expected. Look here for a list of HTTP/2 implementations that is periodically updated. The site also tracks a list of tools for HTTP/2. You can be thankful that the “wildly successful” HTTP protocol is still with us, and that continued improvements by the IETF community will enable many more years of happy internet browsing.

P.S. Want to enable HTTP/2 for your web apps via the Total Uptime load balancer? Just edit the public facing port options for the SSL or SSL_TCP protocol!

 
