One of the main reasons HTTP was originally adopted, and why its usage keeps growing, is its simplicity. Nevertheless, the web has changed considerably over the past two decades. Today, almost every device is connected to the Internet and uses HTTP as its main communication protocol.
The devices range from embedded systems, smartphones and tablets to personal computers and servers. Very often, one of the most important requirements is near real-time, or even real-time, responsiveness. Additionally, as applications grow and become more complex, the number of requests per page increases, making HTTP/1.1 even less suitable. Consequently, HTTP/2 tries to solve the problems and disadvantages of HTTP/1.1 and make applications faster, more robust and safer.
A brief history of the HTTP protocol:
- HTTP/0.9 – released in 1991
- HTTP/1.0 – released in 1996
- HTTP/1.1 – released in 1997, the latest update in 1999
- HTTP/2 – based on Google’s experimental protocol SPDY (2009), proposed as a standard in February 2015, specification was published in May 2015 (RFC 7540 and RFC 7541)
The main goal of HTTP/2 is to improve performance: reduce latency, enable higher throughput and more efficient usage of network resources, and, of course, ensure an easy upgrade from HTTP/1.1 to HTTP/2. In HTTP/2, the application semantics, i.e. URIs, methods, status codes and headers, remain the same. What changes is how data is formatted and transported from source to destination. The server and client manage this process, so upgrading to HTTP/2 won’t affect existing applications. That’s good news for web developers.
The binary framing layer defines how messages are framed and exchanged between the server and client:
- only one TCP connection per origin is used and can carry multiple bidirectional streams
- each stream carries bidirectional messages
- message (HTTP request or response) includes one or more frames
- frame is the smallest communication unit; the HTTP/2 standard defines the following types:
  - HEADERS – carries HTTP message headers
  - DATA – carries the HTTP message body
  - PRIORITY – communicates stream priority
  - RST_STREAM – terminates a stream
  - SETTINGS – sets the connection’s configuration (e.g. whether server push is enabled)
  - PUSH_PROMISE – signals that the server wants to push a specified resource
  - PING – measures the minimal round-trip time from the sender and checks whether an idle connection is still functional
  - GOAWAY – initiates connection shutdown or signals a serious error
  - WINDOW_UPDATE – sets the flow-control window size
  - CONTINUATION – continues a sequence of header block fragments
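Whatever its type, every frame begins with the same fixed 9-byte header: a 24-bit payload length, an 8-bit type, an 8-bit flags field, and a reserved bit followed by a 31-bit stream identifier. As a sketch, here is how that header could be decoded in Python (the field layout and type codes come from RFC 7540; the function name is just for illustration):

```python
FRAME_TYPES = {
    0x0: "DATA", 0x1: "HEADERS", 0x2: "PRIORITY", 0x3: "RST_STREAM",
    0x4: "SETTINGS", 0x5: "PUSH_PROMISE", 0x6: "PING", 0x7: "GOAWAY",
    0x8: "WINDOW_UPDATE", 0x9: "CONTINUATION",
}

def parse_frame_header(data: bytes):
    """Decode the fixed 9-byte HTTP/2 frame header (RFC 7540, section 4.1)."""
    if len(data) < 9:
        raise ValueError("need at least 9 bytes")
    length = int.from_bytes(data[0:3], "big")       # 24-bit payload length
    frame_type, flags = data[3], data[4]            # 8-bit type, 8-bit flags
    stream_id = int.from_bytes(data[5:9], "big") & 0x7FFFFFFF  # drop reserved bit
    return length, FRAME_TYPES.get(frame_type, "UNKNOWN"), flags, stream_id

# Example: a SETTINGS frame with an empty payload, ACK flag set, on stream 0
header = b"\x00\x00\x00" + b"\x04" + b"\x01" + b"\x00\x00\x00\x00"
print(parse_frame_header(header))  # (0, 'SETTINGS', 1, 0)
```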
Figure 1. HTTP/2 connection, streams, messages and frames
Figure 2. In HTTP/2, the HEADERS frame carries the message headers and the DATA frame the message body
Multiplexing and concurrency in HTTP/2 allow messages to be broken into frames before sending. Frames from different streams are multiplexed and interleaved within a shared, persistent connection and reassembled at the destination. This allows multiple requests and responses to be processed in parallel, without blocking on one another.
Figure 3. HTTP/2 request and response multiplexing within a shared connection
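The interleaving can be pictured with a small sketch: frames queued on several streams are written onto one connection in round-robin order. This is a simplification for illustration only; real implementations weigh the scheduling by stream priority:

```python
from collections import deque

def interleave(streams):
    """Round-robin interleave frames from several streams onto one connection.

    `streams` maps stream id -> list of frame payloads. Plain round-robin
    is used here for brevity; real servers schedule by stream priority.
    """
    queues = {sid: deque(frames) for sid, frames in streams.items()}
    wire = []  # the sequence of (stream id, frame) pairs on the connection
    while queues:
        for sid in list(queues):
            wire.append((sid, queues[sid].popleft()))
            if not queues[sid]:
                del queues[sid]  # stream exhausted
    return wire

# Two streams share the connection; their frames end up interleaved.
frames = interleave({1: ["H1", "D1a", "D1b"], 3: ["H3", "D3"]})
print(frames)  # [(1, 'H1'), (3, 'H3'), (1, 'D1a'), (3, 'D3'), (1, 'D1b')]
```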
Stream prioritization assigns each stream a weight (a number between 1 and 256, with 256 meaning the highest priority) and a dependency on another stream. Clients use this information to build a “priority tree” describing the order in which they would like to receive responses. On the other side, servers use the same tree to prioritize stream processing and delivery by efficiently allocating critical resources (CPU, memory, bandwidth, etc.). Stream priority is communicated by sending a PRIORITY frame.
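One simplified reading of the weights is that sibling streams split their parent’s share of a resource in proportion to their weights. A hypothetical sketch under that assumption (the `allocate` helper and the tree encoding are illustrative, not part of the specification):

```python
def allocate(tree, weights, node=0, share=1.0, out=None):
    """Split `share` of a resource among a node's children in proportion
    to their weights, then recurse into each child.

    `tree` maps parent stream id -> list of child stream ids; stream 0
    is the implicit root of the priority tree.
    """
    if out is None:
        out = {}
    children = tree.get(node, [])
    total = sum(weights[c] for c in children)
    for c in children:
        out[c] = share * weights[c] / total
        allocate(tree, weights, c, out[c], out)
    return out

# Streams 3 and 5 depend on the root; stream 7 depends on stream 3.
tree = {0: [3, 5], 3: [7]}
weights = {3: 192, 5: 64, 7: 128}
print(allocate(tree, weights))  # {3: 0.75, 7: 0.75, 5: 0.25}
```

Stream 3 gets 192/(192+64) = 75% of the connection, stream 5 the remaining 25%, and stream 7, being stream 3’s only child, inherits stream 3’s full share.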
Flow control is a mechanism that prevents the sender from overwhelming the receiver (e.g. when the receiver does not want, or is not able, to process data). It is based on WINDOW_UPDATE frames, which the receiver uses to advertise how many bytes it is willing or able to receive on an individual stream or on the whole connection. The initial window is 65,535 bytes, and it is reduced by the size of every DATA frame the sender sends. Flow control applies only to DATA frames; control frames can’t be blocked.
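The window accounting itself is simple bookkeeping, which a minimal sketch can show (the class and method names are assumptions for illustration; only the initial value and the shrink/grow rules come from the specification):

```python
class FlowControlWindow:
    """Track the send window for one stream (or for the whole connection).

    The sender may only transmit DATA while the window is large enough;
    the window shrinks by the size of each DATA frame sent and grows only
    when the receiver sends a WINDOW_UPDATE frame.
    """
    DEFAULT_INITIAL = 65535  # initial window size per RFC 7540

    def __init__(self):
        self.window = self.DEFAULT_INITIAL

    def send_data(self, nbytes):
        if nbytes > self.window:
            raise RuntimeError("window exhausted; wait for WINDOW_UPDATE")
        self.window -= nbytes

    def on_window_update(self, increment):
        self.window += increment

w = FlowControlWindow()
w.send_data(16384)          # one full-size DATA frame shrinks the window
print(w.window)             # 49151
w.on_window_update(16384)   # the receiver frees space again
print(w.window)             # 65535
```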
Header compression in HTTP/2 reduces the size of the headers using the HPACK compression format. Header strings are encoded with a static Huffman code. Additionally, the client and server maintain an indexed table of previously transferred headers, which allows already-seen values to be encoded as index values, so only the indices need to be transferred. At the destination, the table is used to reconstruct the original headers correctly. The static table is predefined in the specification and contains commonly used HTTP headers; the dynamic table is initially empty, and whenever a new header is exchanged it is added to the table (if it is not already there).
Figure 4. HTTP/2 header compression
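The indexing idea can be sketched with a toy encoder. This deliberately simplifies real HPACK: it skips Huffman coding, uses only three entries of the real static table, and appends dynamic entries instead of following RFC 7541’s exact index numbering:

```python
class HeaderTable:
    """Toy version of HPACK indexing: a header seen before is sent as a
    single index instead of the full name/value pair.

    Real HPACK also Huffman-codes literal strings, bounds the dynamic
    table size, and numbers dynamic entries after the 61-entry static table.
    """
    # a few entries borrowed from the real static table (RFC 7541, appendix A)
    STATIC = [(":method", "GET"), (":path", "/"), (":status", "200")]

    def __init__(self):
        self.dynamic = []

    def encode(self, name, value):
        table = self.STATIC + self.dynamic
        if (name, value) in table:
            # already known on both sides: transmit only the index
            return ("index", table.index((name, value)) + 1)
        # first sighting: transmit the literal and remember it
        self.dynamic.append((name, value))
        return ("literal", name, value)

enc = HeaderTable()
print(enc.encode(":method", "GET"))          # ('index', 1)
print(enc.encode("user-agent", "demo/1.0"))  # ('literal', 'user-agent', 'demo/1.0')
print(enc.encode("user-agent", "demo/1.0"))  # ('index', 4)
```

The decoder keeps an identical table, so an index received on the wire maps back to the original name/value pair.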
Server push allows the server to send multiple responses for a single request, even for resources the client hasn’t requested yet. The server signals that it wants to push a resource by sending a PUSH_PROMISE frame. The client can then accept the stream or decline it by sending an RST_STREAM frame (e.g. if the resource is already cached). Server push is controlled via SETTINGS frames (e.g. it can be disabled).
The switch from HTTP/1.x to HTTP/2 can’t happen overnight, because a large number of servers and clients must be updated to support it. HTTP/1.x will remain in use for at least the next decade, and during this period servers and clients will have to support both HTTP/1.x and HTTP/2. The good news is that the changes in HTTP/2 are handled by the server and client, so existing web applications won’t need to change.