Relaying front-end


  
Figure 2: Mechanisms for request distribution

A simple client-transparent mechanism is a relaying front-end. Figure 2 depicts this mechanism alongside the other mechanisms discussed in the rest of this section. The front-end maintains persistent connections (back-end connections) with all of the back-end nodes. When a request arrives on a client connection, the front-end assigns the request to a back-end node and forwards the client's HTTP request message on the appropriate back-end connection. When the response arrives from the back-end node, the front-end forwards the data on the client connection, buffering it if necessary.
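The relaying behavior described above can be sketched as follows. This is a minimal illustration, not code from the paper: the `assign_backend` hash rule is a hypothetical stand-in for whatever content-based policy the front-end actually uses, and a real front-end would parse full HTTP messages rather than relying on single `recv` calls.

```python
import socket
import threading
import zlib

def assign_backend(request_line: str, num_backends: int) -> int:
    """Content-based assignment: pick a back-end from the requested URL.
    (Illustrative hash rule only; the actual policy is left open here.)"""
    path = request_line.split()[1]  # e.g. "GET /index.html HTTP/1.1"
    return zlib.crc32(path.encode()) % num_backends

def relay(client_sock: socket.socket, backend_socks: list) -> None:
    """Handle one request on a client connection: read the HTTP request,
    forward it on the chosen persistent back-end connection, then relay
    the response data back to the client."""
    request = client_sock.recv(4096)
    request_line = request.split(b"\r\n", 1)[0].decode()
    backend = backend_socks[assign_backend(request_line, len(backend_socks))]
    backend.sendall(request)        # forward the client's HTTP request message
    response = backend.recv(4096)   # response arrives from the back-end node
    client_sock.sendall(response)   # front-end forwards the data to the client
    client_sock.close()
```

Note that every response byte passes through `relay`, which is exactly the property that makes the front-end a potential bottleneck, as discussed next.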

The principal advantages of this approach are its simplicity, its transparency to both clients and back-end nodes, and its ability to perform content-based distribution at the granularity of individual requests, even in the presence of HTTP/1.1 persistent connections.

A serious disadvantage, however, is that all response data must be forwarded by the front-end. This may make the front-end a bottleneck unless it uses substantially more powerful hardware than the back-ends. It is conceivable that small clusters could be built using a specialized layer-4 switch, capable of relaying transport connections, as the front-end; we are, however, not aware of any actual implementation of this approach. Furthermore, results presented in Section 6.1 indicate that, even when the front-end is not a bottleneck, a relaying front-end offers no significant performance advantage over more scalable mechanisms.

Peter Druschel
1999-04-27