6 Conclusions

In this paper we have presented Fast Sockets, a communications interface that provides low-overhead, low-latency, high-bandwidth communication on local-area networks through the familiar Berkeley Sockets interface. We discussed how current implementations of the TCP/IP suite suffer from a number of problems that lead to poor latency and mediocre bandwidth on modern high-speed networks, and how Fast Sockets was designed to address these shortcomings directly. We showed that this design delivers performance significantly better than TCP/IP for small transfers and at least equivalent to TCP/IP for large transfers, and that these benefits carry over to real applications in everyday use.

An important contributor to Fast Sockets' performance is receive posting, which uses socket-layer information to influence the delivery actions of layers farther down the protocol stack. By moving destination information down into the lower layers of the stack, Fast Sockets bypasses copies that were previously unavoidable.
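As a rough illustration of the idea, the C sketch below (which is not the Fast Sockets source; the names posted_recv, fs_post_receive, and transport_deliver are invented here for exposition) shows a socket layer handing the application's receive buffer down to the transport layer before data arrives, so that an incoming payload can be written directly into application memory rather than staged in an intermediate socket buffer and copied a second time.

    /*
     * Hypothetical sketch of receive posting; types and functions are
     * illustrative only, not the Fast Sockets implementation.
     */
    #include <stddef.h>
    #include <string.h>

    /* A receive posted by the socket layer: where the next message should land. */
    struct posted_recv {
        void   *dest;      /* application buffer supplied to recv()   */
        size_t  capacity;  /* bytes available at dest                 */
        size_t  received;  /* bytes delivered so far                  */
    };

    static struct posted_recv posted = { NULL, 0, 0 };

    /* Socket layer: record the caller's buffer before any data arrives. */
    void fs_post_receive(void *buf, size_t len)
    {
        posted.dest     = buf;
        posted.capacity = len;
        posted.received = 0;
    }

    /*
     * Transport-layer upcall: because the destination is already known,
     * the payload is moved once, straight into the application's buffer,
     * instead of being staged in a socket buffer and copied again later.
     */
    void transport_deliver(const void *payload, size_t len)
    {
        if (posted.dest == NULL)
            return;  /* no receive posted: data would have to be staged */
        size_t room = posted.capacity - posted.received;
        size_t n    = len < room ? len : room;
        memcpy((char *)posted.dest + posted.received, payload, n);
        posted.received += n;
    }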

Receive posting is an effective and useful tool for avoiding copies, but its benefits vary greatly depending on the data transfer mechanism of the underlying transport layer. Sender-based memory management schemes impose high synchronization costs on messaging layers such as Sockets, which can limit realized throughput. A receiver-based system reduces the synchronization costs of receive posting and enables high-throughput communication without significantly affecting round-trip latency.
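To make the synchronization cost concrete, the toy sketch below (purely hypothetical; transmit and wait_for merely stand in for the underlying transport operations) contrasts the message patterns of the two schemes: under sender-based management the sender must learn the remote buffer address before transmitting, paying an extra round trip, whereas under receiver-based management the sender transmits immediately and the receiving side steers the data into the posted buffer on arrival.

    #include <stdio.h>

    /* Stubs standing in for the underlying transport operations. */
    static void transmit(const char *what) { printf("send: %s\n", what); }
    static void wait_for(const char *what) { printf("wait: %s\n", what); }

    /*
     * Sender-based buffer management: the sending side must know the remote
     * destination address before it can move data, so a transfer into a
     * newly posted buffer costs an extra synchronization round trip.
     */
    void send_sender_based(void)
    {
        transmit("request for a destination address");
        wait_for("grant carrying the remote buffer address");
        transmit("payload to the granted address");
    }

    /*
     * Receiver-based buffer management: the sender transmits immediately and
     * the receiver places the data into the posted buffer when it arrives,
     * so no extra round trip is needed.
     */
    void send_receiver_based(void)
    {
        transmit("payload (receiver chooses placement on arrival)");
    }

    int main(void)
    {
        send_sender_based();
        send_receiver_based();
        return 0;
    }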

In addition to receive posting, Fast Sockets collapses multiple protocol layers and simplifies network buffer management. The result of combining these techniques is a system that provides high-performance, low-latency communication for existing applications.