To perform well, a compressed caching system should adapt to the working set sizes of the programs it caches for. If a program's working set fits comfortably in RAM, few pages (or none) should be kept compressed, so that the overwhelming majority of pages remain in uncompressed form and can be accessed with no penalty. If a program's working set is larger than the available RAM, and compressing pages would allow it to be kept in RAM, more pages should be compressed until the working set is ``captured''. In this case, the savings from avoided disk faults may greatly outweigh the cost of the additional compression cache accesses, because a disk fault is many times more expensive than a compression cache fault.
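To make this tradeoff concrete, the following is a minimal sketch, in C, of a cost-based adaptation rule of the kind this argument suggests. The structure fields, constants, and function name are hypothetical and purely illustrative; the only assumption carried over from the discussion above is that a disk fault costs orders of magnitude more than a compression cache fault.

\begin{verbatim}
/* Hypothetical per-interval paging statistics; names are
 * illustrative only. */
struct vm_stats {
    unsigned long disk_faults;     /* pages fetched from disk     */
    unsigned long compressed_hits; /* faults satisfied by the
                                      compressed cache            */
};

/* Assumed relative costs: a disk fault is taken to be ~1000x
 * the cost of decompressing a page from the compressed cache. */
#define DISK_FAULT_COST 1000
#define COMP_FAULT_COST    1

/* Return the change (in pages) to apply to the compressed-cache
 * target size.  Grow the compressed region while disk faults
 * dominate the total paging cost; shrink it once decompression
 * overhead outweighs the disk savings. */
long adapt_compressed_target(const struct vm_stats *s, long step)
{
    unsigned long disk_cost = s->disk_faults * DISK_FAULT_COST;
    unsigned long comp_cost = s->compressed_hits * COMP_FAULT_COST;

    if (disk_cost > comp_cost)
        return +step;  /* working set not yet captured:
                          compress more pages                  */
    if (comp_cost > disk_cost)
        return -step;  /* paying decompression cost for little
                          benefit: compress fewer pages        */
    return 0;
}
\end{verbatim}

A rule of this form grows the compressed cache only while doing so actually displaces disk traffic, and shrinks it once decompression overhead dominates.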
Douglis observed in his experiments that different programs needed compressed caches of different sizes. He implemented an adaptive cache-sizing scheme that dynamically varied the split of RAM between the uncompressed and compressed caches. Even with this adaptive scheme, however, his results were inconsistent: some programs ran faster, while others ran slower. We believe that Douglis's adaptive caching strategy may have been partly at fault. He used a fairly simple scheme in which the two caches competed for RAM on the basis of how recently their pages had been accessed, rather like a normal global replacement policy arbitrating among the needs of multiple processes by keeping the most recently touched pages in RAM. Because pages enter the compressed cache only when they are evicted from the uncompressed cache, the uncompressed cache always holds the more recently touched pages, and this scheme therefore requires a bias in favor of the compressed cache to ensure that it retains any memory at all. We believe that this biased recency-based caching can be maladaptive, and that a robust adaptive cache-sizing policy cannot be based solely on the LRU ordering of pages within the caches.
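The following sketch, again in C, illustrates the kind of biased recency comparison described above. It is not Douglis's actual mechanism, whose details we do not reproduce here; the bias factor and all names are hypothetical.

\begin{verbatim}
#include <stdbool.h>

/* Hypothetical timestamps: higher means more recently touched. */
typedef unsigned long tick_t;

/* Bias factor favoring the compressed cache in the recency
 * comparison; without it the compressed cache, whose pages are
 * by construction older, would always lose and shrink to
 * nothing.  The value is purely illustrative. */
#define COMPRESSED_BIAS 4

/* Decide which cache gives up memory by comparing the (biased)
 * age of each cache's least recently used page.  Returns true
 * to take a page from the uncompressed cache. */
bool evict_from_uncompressed(tick_t now,
                             tick_t uncomp_lru_touch,
                             tick_t comp_lru_touch)
{
    tick_t uncomp_age = now - uncomp_lru_touch;
    /* Dividing the compressed page's age by the bias makes it
     * look more recently used than it really is. */
    tick_t comp_age = (now - comp_lru_touch) / COMPRESSED_BIAS;

    /* The cache holding the older (biased) LRU page loses. */
    return uncomp_age >= comp_age;
}
\end{verbatim}

The weakness is apparent in the sketch itself: the bias is a fixed constant, so the division of memory tracks raw recency rather than the actual costs and benefits of keeping pages compressed.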