A more complex mechanism involves the use of a TCP handoff protocol between the front-end and back-end nodes. The handoff protocol allows the front-end to transfer its end of an established client connection to a back-end node. Once the state is transferred, the back-end transmits response data directly to the client, bypassing the front-end. Data from the client (primarily TCP ACK packets) are forwarded efficiently by the front-end to the appropriate back-end node.
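The mechanism can be illustrated with a minimal sketch. The field names and functions below (`HandoffState`, `handoff`, `forward`) are illustrative assumptions, not the actual protocol of [23]; the point is that the front-end ships the connection's TCP state to a back-end and thereafter only relays client packets via a forwarding table.

```python
from dataclasses import dataclass

# Hypothetical sketch of the state a handoff message might carry;
# field names are illustrative, not taken from the protocol in [23].
@dataclass(frozen=True)
class HandoffState:
    client_ip: str
    client_port: int
    snd_nxt: int            # next sequence number to send to the client
    rcv_nxt: int            # next sequence number expected from the client
    window: int             # current advertised receive window
    pending_request: bytes  # request bytes already read by the front-end

# Front-end forwarding table: (client_ip, client_port) -> back-end node.
forward_table: dict = {}

def handoff(state: HandoffState, backend: str) -> None:
    """Transfer the connection: after this, the back-end owns the TCP state
    and the front-end merely relays the client's packets (mostly ACKs)."""
    forward_table[(state.client_ip, state.client_port)] = backend

def forward(client_ip: str, client_port: int):
    """Look up where an incoming client packet should be relayed."""
    return forward_table.get((client_ip, client_port))

# Example: hand an established connection off to a back-end node.
handoff(HandoffState("10.0.0.7", 34512, 1000, 2000, 65535,
                     b"GET / HTTP/1.0\r\n\r\n"),
        "backend-2")
```

Because response data flows directly from the back-end to the client, the front-end's per-packet work is reduced to this table lookup on the (small) client-to-server stream.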
In previous work, we have designed, implemented, and evaluated a handoff protocol for HTTP/1.0 [23]. This single handoff protocol can support persistent connections, but all requests on a given connection must be served by the back-end node to which the connection was originally handed off.
The design of this handoff protocol can be extended to support HTTP/1.1 by allowing the front-end to migrate a connection between back-end nodes. The advantage of this multiple handoff protocol is that it allows content-based request distribution at the granularity of individual requests in the presence of persistent connections. Unlike front-end relaying, the handoff approach is efficient and scalable since response network traffic bypasses the front-end.
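The following sketch shows what request-granularity distribution over a persistent connection looks like from the front-end's perspective. The helper names (`choose_backend`, `serve_persistent_connection`) and the CRC-based content partitioning are assumptions for illustration; the actual distribution policy is orthogonal to the handoff mechanism.

```python
import zlib

def choose_backend(path: str, backends: list) -> str:
    """Content-based choice: partition the content space by hashing the
    request path (a stand-in for any content-aware policy)."""
    return backends[zlib.crc32(path.encode()) % len(backends)]

def serve_persistent_connection(request_paths: list, backends: list) -> list:
    """Dispatch each request on one persistent connection; when the chosen
    back-end differs from the current one, the connection is migrated
    (multiple handoff) rather than the request being relayed."""
    current = None
    assignments = []
    for path in request_paths:
        target = choose_backend(path, backends)
        if target != current:
            # Multiple handoff: migrate the live connection to `target`.
            current = target
        assignments.append((path, current))
    return assignments

# Example: three requests on one persistent connection.
plan = serve_persistent_connection(["/a.html", "/a.html", "/b.html"],
                                   ["b0", "b1", "b2"])
```

Repeated requests for the same content stay on the same back-end (no migration cost), while a request for different content triggers a migration instead of forcing the original back-end to serve it.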
The handoff approach requires the operating systems on front-end and back-end nodes to be customized with a vendor-specific loadable kernel module. The design of such a module is relatively complex, especially if multiple handoff is to be supported. To preserve the advantages of persistent connections (reduced server overhead and reduced client latency), the overhead of migrating connections between back-end nodes must be kept low, and the TCP pipeline must be kept from draining during migration.
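One way to keep the pipe from draining can be sketched as follows: while a migration is in progress, the front-end queues the client's packets and replays them, in order, to the new back-end once the state transfer completes. This is an assumption about one possible design, not the mechanism of [23], and the class and method names are hypothetical.

```python
from collections import deque

class MigratingConnection:
    """Front-end bookkeeping for one client connection that may be
    migrated between back-end nodes (hypothetical sketch)."""

    def __init__(self, backend: str):
        self.backend = backend
        self.migrating = False
        self.pending = deque()   # client packets held during migration
        self.delivered = []      # (backend, packet) pairs actually relayed

    def begin_migration(self) -> None:
        # State transfer to the new back-end is in flight; start queuing.
        self.migrating = True

    def complete_migration(self, new_backend: str) -> None:
        # New back-end owns the TCP state; replay queued packets in order
        # so the client's ACK clock keeps the pipe full.
        self.backend = new_backend
        self.migrating = False
        while self.pending:
            self._relay(self.pending.popleft())

    def on_client_packet(self, pkt: bytes) -> None:
        if self.migrating:
            self.pending.append(pkt)
        else:
            self._relay(pkt)

    def _relay(self, pkt: bytes) -> None:
        self.delivered.append((self.backend, pkt))

# Example: one ACK before migration, one queued during it.
conn = MigratingConnection("b1")
conn.on_client_packet(b"ack1")
conn.begin_migration()
conn.on_client_packet(b"ack2")   # queued, not dropped
conn.complete_migration("b2")    # replayed to the new back-end
```

Queuing rather than dropping client packets bounds the stall to the migration latency itself, which is why that latency must be kept low.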