Next: Performance Conclusions Up: Performance Previous: Revocation Operations

Macrobenchmark

A macrobenchmark evaluation of the Flask prototype is difficult to perform. Because Flask is a research prototype, it has only limited POSIX support, and many of its servers are neither robust nor well tuned; as a result, it is difficult to run non-trivial benchmark applications. Nevertheless, we performed a simple comparison, running make to compile and link an application consisting of 20 .c and 4 .h files for a total of 8060 lines of code (including comments and white space), about 190KB total.

The test environment included three object managers (the kernel, the BSD filesystem server, and the POSIX process manager) along with a shell and all the GNU utilities necessary to build the application (make, gcc, ld, etc.). The Flask configuration of the test added the security server, with the three object managers configured to include the security features described in Section 5.3 and Appendix A. For each configuration, we ran make five times, discarded the first run, and averaged the times of the final four runs (the initial run primed the data and metadata caches in the filesystem). To give a sense of the absolute performance of the base Fluke system, we also ran the test under FreeBSD 2.1.5 on the same machine and filesystem. Table 5 summarizes the experiment.


 
Table: Results of running make to compile and link a simple application in various OS configurations. BSD is FreeBSD 2.1.5, Flask-FFS-PM is the Flask kernel with the unmodified Fluke filesystem server and process manager, and the memfs entries use a memory-based filesystem in place of the disk-based filesystem. Percentages are the slowdowns vs. the appropriate base Fluke configurations.
OS Config       Time (sec)
BSD             18.6
Fluke           39.9
Flask           41.7 (4.5%)
Flask-FFS-PM    40.9 (2.5%)
Fluke-memfs     24.7
Flask-memfs     27.4 (11%)
 

The slowdown for Flask over the base Fluke system is less than 5%. By running the Flask kernel with unmodified Fluke object managers (Flask-FFS-PM), we see that the overhead is roughly evenly divided between the kernel and the other object managers (primarily the filesystem server). However, this modest slowdown is relative to a Fluke system that is over twice as slow on the same test as a competitive Unix system (BSD). The bulk of this slowdown is due to the prototype filesystem server, which performs neither asynchronous nor clustered I/O operations. To factor this out, we reran the tests using a memory-based filesystem that supports the same access checks as the disk-based filesystem. The last two lines of Table 5 show the results of these tests. Note that the Flask overhead increases to 11%, as less of it is masked by disk I/O latency.

Table 6 reports the number of security decisions that were requested by each object manager during testing of the Flask configuration, and how those decisions were resolved. The numbers include all five runs of make as well as the intervening removals of the object files. These results reaffirm the effectiveness of caching security decisions: well over 99% of the requests never reach the security server.


 
Table: Resolution of requested security decisions during the compilation benchmark. Numbers are from the Flask configuration of Table 5 and include all five runs of make and make clean.
                          Resolution
Object     Total     using    using    calling
Manager    queries   hint     cache    SS
Kernel     603735    175585   428121   29
FFS         76708    N/A       76700    8
PM            892    N/A         890    2
 


Stephen D. Smalley
1999-07-13