Compressed virtual memory appears quite attractive on current machines, offering an improvement of tens of percent in virtual memory system performance. This improvement is largely due to increases in CPU speeds relative to disk speeds, but substantial additional gains come from better compression algorithms and successful adaptivity to program behavior.
For all of the programs we examined, on currently available hardware, a virtual memory system that uses compressed caching will incur significantly less paging cost. For memory sizes at which a program incurs a tolerable amount of paging, compressed caching often eliminates 20% to 80% of the paging cost, with an average savings of approximately 40%. As the gap between processor speed and disk speed widens, this benefit will only increase.
The recency-based approach to adaptively resizing the compression cache provides substantial benefit at nearly any memory size, for many kinds of programs. In our tests, adaptive resizing provided benefit over a very wide range of memory sizes, even when a program was paging little. The adaptivity is not perfect, since failed attempts to resize the cache can incur a small cost, but it performs well for the vast majority of programs. Moreover, it is capable of providing benefit for small, medium, and large footprint programs.
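To make the adaptive policy concrete, the following is a minimal sketch (not our implementation; all names and cost constants are hypothetical) of the kind of recency-based cost/benefit decision involved: count recent touches that a larger compressed cache would have caught versus touches to pages that compression would displace, and grow or shrink the compressed region accordingly.

/*
 * Hypothetical sketch of recency-based adaptive sizing of the
 * compressed cache.  The counters and costs are illustrative only.
 */
#include <stddef.h>

struct adapt_stats {
    unsigned long would_hit_compressed;    /* recent touches to pages just beyond RAM          */
    unsigned long would_miss_uncompressed; /* recent touches to pages compression would displace */
};

/* Illustrative per-event costs, in arbitrary time units. */
#define COST_DISK_FAULT  10000UL  /* fault serviced from disk                  */
#define COST_DECOMPRESS     50UL  /* fault serviced from the compressed cache  */

/* Decide whether to grow (+1), shrink (-1), or keep (0) the compressed region. */
int adapt_cache_size(const struct adapt_stats *s)
{
    /* Benefit of growing: disk faults turned into cheap decompressions. */
    unsigned long benefit = s->would_hit_compressed *
                            (COST_DISK_FAULT - COST_DECOMPRESS);

    /* Cost of growing: pages displaced by compression now pay a
     * decompression instead of a plain memory access when touched. */
    unsigned long cost = s->would_miss_uncompressed * COST_DECOMPRESS;

    if (benefit > cost)
        return +1;
    if (cost > benefit)
        return -1;
    return 0;
}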
The WK compression algorithms successfully exploit the regularities of in-memory data, providing reasonable compression at high speed. After decades of development of Ziv-Lempel compression techniques, our WKdm compressor compares favorably with the fastest known LZ compressors. Further research into in-memory data regularities promises tighter compression at comparable speeds, improving the performance and applicability of compressed caching for more programs.
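The core idea can be illustrated with a simplified sketch of the word classification step behind WK-style compression (a simplification for exposition, not the actual WKdm code; the dictionary size, indexing scheme, and bit split are assumptions). Each 32-bit word is compared against a small dictionary of recently seen words and classified as all-zero, an exact match, a partial match (high bits match, low bits differ), or a miss; the first three cases encode in only a few bits each, which is where the compression comes from.

/*
 * Simplified sketch of WK-style word classification (not WKdm itself).
 */
#include <stdint.h>
#include <stddef.h>

#define DICT_SIZE  16   /* small dictionary of recently seen words        */
#define LOW_BITS   10   /* low bits ignored when testing a partial match  */
#define HIGH_MASK  (~((uint32_t)(1u << LOW_BITS) - 1))

struct wk_counts { size_t zero, exact, partial, miss; };

/* Classify every word of a page; returns counts per class. */
struct wk_counts wk_classify_page(const uint32_t *page, size_t nwords)
{
    uint32_t dict[DICT_SIZE] = {0};
    struct wk_counts c = {0, 0, 0, 0};

    for (size_t i = 0; i < nwords; i++) {
        uint32_t w = page[i];
        /* Index the dictionary with a simple function of the word's high bits. */
        unsigned slot = (w >> LOW_BITS) % DICT_SIZE;

        if (w == 0)
            c.zero++;
        else if (w == dict[slot])
            c.exact++;
        else if ((w & HIGH_MASK) == (dict[slot] & HIGH_MASK))
            c.partial++;   /* only the low bits need to be stored */
        else
            c.miss++;      /* the full 32-bit word must be stored */

        dict[slot] = w;    /* dictionary tracks recently seen words */
    }
    return c;
}

In an actual compressor, the per-word class tags plus the stored low bits or full words form the compressed output; the counts above directly determine the achievable compression ratio for a page.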
It appears that compressed caching is an idea whose time has come. Hardware trends favor further improvements in compressed caching performance. Although past experiments failed to produce positive results, we have improved on the components required for compressed caching and have found that it can be applied successfully today.