In Figures 5, 6, 7 and 8, we explore different network configurations using the ttcp benchmarking tool, examining how the various encryption algorithms affect performance and how much benefit we get from hardware cryptographic support. The host-to-host topology serves as the base case and gives an upper bound on the performance of any data-transfer mechanism in all scenarios. The other two topologies map to typical VPN and ``road warrior'' access scenarios.
The key insight from our experiments is that although the introduction of IPsec significantly degrades performance, our crypto hardware improves IPsec throughput (relative to pure-software IPsec) by more than 100%, especially for large packets. For the host-to-host experiment, throughput over IPsec ranges from 40% of the unencrypted transfer rate (for small packet sizes) down to 30% (for 8 KB packets). We observe a similar situation in the VPN configuration (host-gateway-gateway-host). In the last two scenarios, the difference in performance between the unencrypted and the hardware-accelerated cases is less marked, since the aggregate throughput of the three hosts on the left is limited to at most 300 Mbps by the topology.
In our experiments, we also noticed some anomalous behavior with 512-byte packet sizes; we believe this is due to buffer misalignment in the kernel, and we plan to investigate further using profiling.
In our previous experiments, we stress-tested IPsec by maximizing network traffic with ttcp. In our next set of experiments, we investigate how IPsec behaves under ``normal'' network load and how it compares with other secure transfer mechanisms such as scp(1) and sftp(1). Our tests measure the elapsed time for a large file transfer in two network configurations, host-to-host and host-to-gateway-to-gateway-to-host. In the first case, IPsec is used end-to-end; in the second, IPsec runs between the two gateways.
Figures 9 and 10 present our results. Since we are transferring large files, the per-protocol initialization cost is easily amortized. Comparing the two figures, we notice that most of the time is actually spent on file-system operations, even after we normalize the file sizes. Another interesting point is that with IPsec, the file transfer completes faster in the gateway topology than over the direct link. At first this may seem counter-intuitive, but it is easily explained: in the gateway case, the IPsec tunnel terminates at the gateways, which offload the cryptographic operations from the end hosts already running the ftp program. This allows CPU and I/O operations to proceed in parallel, and consequently yields better performance. Note that IPsec is not used for the plaintext ftp, scp, and sftp measurements.
Figures 11 and 12 compare IPsec with ssl(3) as used by HTTPS; the network configuration is host-to-host. We used curl(1) to transfer a large file from the server to the client. Once again, IPsec proves to be the more efficient way of ensuring secure communication.
In our final set of experiments, we explore the impact IPsec has on the overall operation of the system. We selected a CPU-intensive job, the Sieve of Eratosthenes, which we ran while constantly using the network, and measured how a number of protocols affect the performance of other jobs (in this case, the sieve) running on the system. In Figure 14, we present the execution times of our CPU-intensive job in the presence of constant background network traffic. To understand these results, one needs to understand how the BSD scheduler works: CPU-intensive jobs that use up all their quanta have their priority lowered by the operating system. When the sieve runs alongside ftp, its priority is lowered and it therefore takes longer to finish. When it runs alongside scp(1) or sftp(1), which are themselves CPU-intensive because of their cryptographic operations, the sieve finishes faster. When the sieve runs alongside IPsec traffic, the cryptographic operations are performed by the kernel, so the sieve receives fewer CPU cycles; with hardware cryptographic support, the kernel consumes less CPU, leaving more cycles for the sieve. In the case of HTTPS background traffic, the CPU cycles spent on cryptographic processing were not enough to affect the priority of the sieve.