Anypoint offers three advantages relative to the TCP proxy:
- Efficiency. Memory use is bounded independently of traffic
rates, and scales with the number of connections rather than the number
of active servers. The Anypoint switch also avoids the processing
overhead of terminating the transport protocol.
- End-to-end guarantees. Anypoint offers end-to-end reliability.
In contrast, a proxy acks data that it has not yet delivered to the
receiving end node. Note also that the proxy's delivery order is the same
as Anypoint's, which is not the ordering specified by its transport (TCP).
- Layer integration. Anypoint allows a continuum of redirection
policies that consider Layer 4 state, e.g., for load-sensitive
steering during periods of unbalanced load.
TCP splicing is one technique for reducing a proxy's runtime
overheads [19], and it is amenable to switch-based implementations. It
is related to Anypoint's sequence-number translations, which
short-circuit protocol processing. However, the Anypoint transport model
is fundamentally different.
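The sequence-number translation that splicing relies on can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the switch records the four initial sequence numbers (ISNs) at splice time and thereafter rewrites the seq and ack header fields of forwarded segments by fixed offsets, with no buffering or protocol termination. All names are hypothetical.

```python
MOD = 1 << 32  # TCP sequence numbers are 32-bit and wrap around


class Splice:
    """Fixed seq/ack offsets computed once from the four ISNs of the
    two spliced connections (client<->switch and switch<->server)."""

    def __init__(self, isn_client, isn_sw_client_side,
                 isn_sw_server_side, isn_server):
        # Offset applied to the client's data stream when forwarded
        # onto the server-side connection, and vice versa.
        self.d_c2s = (isn_sw_server_side - isn_client) % MOD
        self.d_s2c = (isn_sw_client_side - isn_server) % MOD

    def client_to_server(self, seq, ack):
        # Renumber the client's bytes into server-side sequence space;
        # renumber its acks of server data into the server's space.
        return (seq + self.d_c2s) % MOD, (ack - self.d_s2c) % MOD

    def server_to_client(self, seq, ack):
        return (seq + self.d_s2c) % MOD, (ack - self.d_c2s) % MOD
```

Because the offsets are fixed per splice, forwarding is a pair of modular additions per packet; Anypoint's translations are similar in mechanism but serve a different transport model.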
Figure:
Slite latency and switch CPU utilization as a function of
offered load for varying intermediary configurations.
Interestingly, inbound Anypoint flows in our prototype may slow down
relative to a TCP proxy as the ensemble size grows, due to an interaction
between the transport's congestion control and acknowledgments from the
ensemble. The Anypoint switch merges acks from the ensemble nodes and sends
cumulative acks to the peer. If the servers return acks out of order, the
switch must delay them to avoid inciting a fast-recovery reaction on the
peer, which would cause it to reduce its congestion window (TCP Reno and
later presume that duplicate acknowledgments indicate lost data). Delaying
these acks can disrupt the acknowledgment clocking, lowering throughput.
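The merging behavior described above can be sketched as a small state machine. This is a hypothetical illustration under one simplifying assumption: ensemble acks are modeled as per-frame acknowledgments, and the switch advances the peer-visible cumulative ack only over a contiguous prefix, holding back out-of-order acks rather than emitting duplicates.

```python
class AckMerger:
    """Merge per-frame acks from ensemble nodes into one cumulative
    ack stream for the peer (illustrative sketch, not the prototype)."""

    def __init__(self):
        self.cum = 0          # highest frame contiguously acked to the peer
        self.pending = set()  # frames acked out of order by ensemble nodes

    def on_ensemble_ack(self, frame):
        """Record an ack from an ensemble node. Returns the new
        cumulative ack to send to the peer, or None if the ack must
        be delayed (it would otherwise look like a duplicate ack)."""
        if frame <= self.cum:
            return None                      # stale or duplicate
        self.pending.add(frame)
        advanced = False
        while self.cum + 1 in self.pending:  # fill contiguous gaps
            self.pending.discard(self.cum + 1)
            self.cum += 1
            advanced = True
        return self.cum if advanced else None
```

The delay is implicit: an out-of-order ack produces no output until the gap fills, which is exactly the stall that disturbs the peer's ack clocking.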
These experiments point to several desirable features for
Anypoint-compatible transports:
- Explicit rate control. Outbound Anypoint flows should share
link bottlenecks in a TCP-friendly manner. An outbound flow is an
aggregate of n ensemble sources; in our prototype this flow is likely to
be more aggressive than a competing TCP flow. To ensure fairness, the
switch must coordinate ensemble sources through explicit congestion control
signals.
- Selective acks (SACK). The transport must divorce
congestion behavior from reliability. Triple-duplicate cumulative acks are
common yet meaningless as congestion indicators for Anypoint communication.
- Flexible flow control. The switch can optimistically or
conservatively manage the flow windows as described in
Section 4.3. If the switch conservatively distributes the
peer's advertised receive window across the ensemble sources, it should be
able to revoke unused window allocations and redistribute them to active
sources. Alternatively, ensemble members could bid for the peer's receive
window by advertising to the switch the amount of data they wish to send.
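The conservative flow-control option above can be sketched as a simple window allocator. This is a hypothetical sketch of the policy, not the prototype's code: the switch partitions the peer's advertised receive window among ensemble sources, tracks how much of each grant is consumed, and reclaims unused window from idle sources for redistribution. All names are illustrative.

```python
class WindowAllocator:
    """Conservatively partition the peer's advertised receive window
    across ensemble sources, with revocation of unused grants."""

    def __init__(self, peer_window, sources):
        self.free = peer_window                # unallocated window bytes
        self.grant = {s: 0 for s in sources}   # bytes granted per source
        self.used = {s: 0 for s in sources}    # bytes consumed per source

    def request(self, src, nbytes):
        """Grant up to nbytes of the remaining window to src."""
        g = min(nbytes, self.free)
        self.grant[src] += g
        self.free -= g
        return g

    def consume(self, src, nbytes):
        """Record data forwarded to the peer under src's grant."""
        self.used[src] += nbytes

    def revoke_idle(self, src):
        """Reclaim granted-but-unused window from an idle source so it
        can be redistributed to active sources."""
        unused = self.grant[src] - self.used[src]
        self.grant[src] -= unused
        self.free += unused
        return unused
```

The bidding alternative mentioned above would replace fixed `request` amounts with per-source advertisements of how much data each member wishes to send, with the switch granting window in proportion to the bids.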
Kenneth G. Yocum
2003-01-20