Benchmark Variables

  To understand the performance trade-offs of using IPsec, and how it compares to other approaches, we designed a set of performance benchmarks. Our experiments were designed to explore a multitude of possible setups.


  
Figure 2: Host-to-Host topology.
\begin{figure}
\begin{center}

\epsfig {file=fig/t1.eps,width=1.3in}
\end{center}\end{figure}


  
Figure 3: Host-to-Gateway-to-Gateway-to-Host topology. In this topology, experiments that use IPsec form a tunnel between the gateways.
\begin{figure}
\begin{center}

\epsfig {file=fig/t2.eps,width=2.5in}
\end{center}\end{figure}


  
Figure 4: 3 Hosts-to-Gateway-to-Host topology. We use two IPsec tunnel configurations: end-to-end (where the three hosts form tunnels to the end host) and gateway-to-host (H4).
\begin{figure}
\begin{center}

\epsfig {file=fig/t3.eps,width=2.0in}
\end{center}\end{figure}

Our experiments consider five variables: the utility used to measure performance, the encryption/authentication algorithm used by IPsec (or other applications), the network topology, the use of cryptographic hardware accelerators, and the effect the added security has on overall system performance. For the IPsec experiments, we use manually configured SAs; thus, the performance numbers do not include dynamic SA setup times. For SSL, scp, and sftp, bulk data transfers include the overhead of session setup; however, that overhead is negligible compared to the cost of the actual data transfer.
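The bulk-transfer utilities themselves are conceptually simple. As a rough sketch (our own illustrative code, not the actual ttcp or paper tooling; buffer sizes and byte counts are arbitrary), a ttcp-style TCP throughput test sends a fixed volume of data over a connection and divides by the elapsed time:

```python
import socket
import threading
import time

def sink(server_sock):
    """Accept one connection and discard all received data (receiver side)."""
    conn, _ = server_sock.accept()
    with conn:
        while conn.recv(65536):
            pass

def measure_throughput(total_bytes=8 * 1024 * 1024, chunk=65536):
    """Time a bulk transfer over loopback and return throughput in Mbit/s."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))  # ephemeral port on loopback
    server.listen(1)
    receiver = threading.Thread(target=sink, args=(server,))
    receiver.start()

    buf = b"\x00" * chunk
    start = time.perf_counter()
    with socket.create_connection(server.getsockname()) as s:
        sent = 0
        while sent < total_bytes:
            s.sendall(buf)
            sent += chunk
    receiver.join()          # wait until the sink has drained the connection
    elapsed = time.perf_counter() - start
    server.close()
    return (total_bytes * 8) / (elapsed * 1e6)

if __name__ == "__main__":
    print(f"{measure_throughput():.1f} Mbit/s")
```

Run over an IPsec tunnel versus plain IP, the same transfer exposes the protocol's per-packet cryptographic and encapsulation overhead as a drop in the reported rate.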

Large file-transfer experiments were repeated 5 times; all other experiments were repeated 10 times, and the mean was taken. Error bars in our graphs represent one standard deviation above and below the mean. Graphs presenting ttcp measurements do not show error bars to avoid clutter; however, the standard deviation is small in all cases.
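Concretely, the reported statistics reduce to a mean and a sample standard deviation over the repeated runs (the script and sample values below are ours, for illustration only):

```python
import statistics

def summarize(runs):
    """Return the mean and sample standard deviation of repeated measurements."""
    mean = statistics.mean(runs)
    # One standard deviation, plotted as error bars above and below the mean.
    stdev = statistics.stdev(runs) if len(runs) > 1 else 0.0
    return mean, stdev

# Hypothetical throughput measurements (Mbit/s) from 5 repeated transfers.
runs = [88.1, 87.5, 88.9, 88.3, 87.9]
mean, stdev = summarize(runs)
print(f"mean={mean:.2f}, error bars span [{mean - stdev:.2f}, {mean + stdev:.2f}]")
```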

We will go into more detail about each experiment in the following section.


Stefan Miltchev
4/17/2002