In the experiments, two copies of Apache were executed on the same host, each listening on its own IP address via IP aliasing on the Ethernet interface. With two separate Apache instances, each instance can be throttled by adjusting its MaxClients directive, which limits the number of concurrent sessions for that site. This is an effective means of performance insulation if the average work per HTTP operation is known for each site. Since both sites serve their own copy of similar content, a fair division of resources can be achieved by setting the MaxClients directive of both Apache servers to the same value. To test VS-based insulation, the Apache servers were launched as if they were running on their own physical hosts (i.e., with very large process limits). We created two VSs, www1 and www2, for which we specified the fork classification rules:
(fork, www[1|2]) => (www[1|2])
That is, whenever a process already assigned to www1 (or www2) forks, the child process is classified into the same VS.
Each site's initial httpd process was explicitly added to its corresponding VS via a simple command line utility:
$> svcaddprocess <VSID> <PID>
Each site was given a 50% CPU share.
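A minimal sketch of this setup, assuming Linux-style ifconfig aliasing and hypothetical IP addresses, configuration paths, and PID files, and taking the VS names www1/www2 as the VSIDs expected by svcaddprocess. The mechanism for assigning the 50% CPU shares is not named in the text and is therefore omitted; for the Apache-only insulation variant, the two configuration files would instead carry matching, finite MaxClients values.
$> ifconfig eth0:1 10.0.0.1 up                          # IP alias for Site A (addresses are hypothetical)
$> ifconfig eth0:2 10.0.0.2 up                          # IP alias for Site B
$> httpd -f /etc/httpd/www1.conf                        # Site A; Listen 10.0.0.1:80, very large MaxClients
$> httpd -f /etc/httpd/www2.conf                        # Site B; Listen 10.0.0.2:80, very large MaxClients
$> svcaddprocess www1 $(cat /var/run/httpd-www1.pid)    # add Site A's initial httpd to VS www1
$> svcaddprocess www2 $(cat /var/run/httpd-www2.pid)    # add Site B's initial httpd to VS www2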
In the measurements discussed here, Site A was offered a constant load of 40 simultaneous connections while Site B was offered between 10 and 60 simultaneous connections. We chose these parameters because the server saturates, with diminishing gains in HTTP throughput, once it is offered 80 simultaneous connections.
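The text does not name the load generator used; purely as an illustration, ApacheBench (ab) can offer a fixed number of concurrent connections to each aliased address (addresses and URIs are hypothetical):
$> ab -n 100000 -c 40 http://10.0.0.1/index.html        # Site A: constant 40 concurrent connections
$> ab -n 100000 -c 30 http://10.0.0.2/index.html        # Site B: varied between 10 and 60 across runs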
Without insulation between the sites, A's performance degrades significantly once the server is offered a total of 70 simultaneous connections (A=40, B=30) [Figure 11(a)]. From this point on, B begins to steal resources from A, thus contaminating the file cache to A's disadvantage. The lack of insulation can be fixed in Apache itself by restricting the maximum number of concurrent processes. This comes at the expense of some loss of aggregated performance under peak load [Figure 11(b)], because incoming requests must be rejected once the process limit is reached. This blocking phenomenon -- described for M/M/m/c systems by the Erlang loss formulas [23] -- is especially evident for the smaller process limit (20:20). VS CPU shares eliminate this problem.
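For reference, the Erlang loss (Erlang B) formula gives the probability that a request is rejected in a loss system with m servers and no waiting room; this standard form, with offered load a = \lambda/\mu, is quoted only to make the reference to [23] concrete and uses no quantities measured in the experiment:
B(m, a) = \frac{a^{m}/m!}{\sum_{k=0}^{m} a^{k}/k!}
With a fixed offered load, a smaller process limit m (e.g., 20 per site) yields a higher blocking probability, which is consistent with the larger throughput loss observed for the 20:20 configuration.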
Apache's process limits also fail to insulate the sites when background activities, e.g., monitoring, compete for CPU time. To simulate the effects of background load, ten background load generators were invoked. As expected, both aggregated performance and A's performance drop significantly if Apache's process limits are used for site insulation. In contrast, the VS abstraction keeps A's performance stable since only non-dedicated resource slots (beyond A's and B's resource limits) are used to process the background load. Therefore, VS-based insulation performs better than Apache's own support for virtual hosts (VHs).
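The text does not describe the background load generators; a minimal CPU-bound stand-in, launched ten times, might look like the following (purely illustrative):
$> for i in 1 2 3 4 5 6 7 8 9 10; do sh -c 'while :; do :; done' & done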
One may argue that a modified, CPU-share-aware Apache could achieve the same quality of insulation. However, VSs obviate the need to modify applications in order to gain better control over performance management.
Since this experiment involved no access to shared services and work is relayed only from a parent process to its children, Eclipse or RCs could probably be tuned to perform just as well as VSs. Having established that the VS approach is competitive, the next set of experiments focuses on its main contribution.