Our adaptive cache-sizing mechanism addresses the issue of adaptation by performing an online cost/benefit analysis based on statistics of recent program behavior. Assuming that behavior in the relatively near future will resemble behavior in the relatively recent past, the mechanism tracks those aspects of program behavior that bear directly on the performance of compressed caching at different cache sizes, and compresses more or fewer pages accordingly to improve performance.
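As a rough illustration of such an analysis, the Python sketch below (not the paper's code; the cost constants, the 2:1 compression ratio, and all function names are illustrative assumptions) estimates what recent accesses would have cost under each candidate division of memory between uncompressed and compressed pages, using per-recency-depth touch counts of the kind described in the following paragraphs, and picks the cheapest split.

```python
# Hedged sketch of an online cost/benefit analysis; all constants are assumed.
DISK_FAULT_COST   = 10_000_000  # cycles to service a fault from disk (assumed)
COMPRESS_COST     = 50_000      # cycles to compress one page (assumed)
DECOMPRESS_COST   = 25_000      # cycles to decompress one page (assumed)
COMPRESSION_RATIO = 2.0         # compressed pages stored per page frame (assumed)

def estimated_cost(touch_histogram, total_frames, uncompressed_frames):
    """Estimate the cost of recent accesses had `uncompressed_frames` frames
    held uncompressed pages and the rest held compressed pages.
    touch_histogram[d] counts recent touches to the page at LRU recency
    depth d (depth 0 = most recently touched)."""
    compressed_capacity = int((total_frames - uncompressed_frames) * COMPRESSION_RATIO)
    cost = 0
    for depth, touches in enumerate(touch_histogram):
        if depth < uncompressed_frames:
            continue                                             # uncompressed hit: free
        elif depth < uncompressed_frames + compressed_capacity:
            cost += touches * (COMPRESS_COST + DECOMPRESS_COST)  # compressed-cache hit
        else:
            cost += touches * DISK_FAULT_COST                    # would have faulted to disk
    return cost

def choose_target_size(touch_histogram, total_frames, candidate_splits):
    """Return the candidate split (number of uncompressed frames) with the
    lowest estimated recent cost; the compressed cache grows or shrinks
    toward that target."""
    return min(candidate_splits,
               key=lambda u: estimated_cost(touch_histogram, total_frames, u))
```

For example, choose_target_size(histogram, 1024, [1024, 896, 768, 512]) would compare devoting 0, 128, 256, or 512 frames to compressed storage and return the split that would have served the recent touches most cheaply.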
This system uses the kind of recency information kept by normal replacement policies; that is, it maintains an approximate ordering of the pages by how recently they have been touched. Our system extends this by retaining the same information for pages that have been recently evicted. Most replacement policies discard this information, but keeping it makes it possible to tell how well a replacement policy is working compared with what a different replacement policy would have done.
We therefore maintain an LRU (or recency) ordering over the pages in memory and over a comparable number of recently evicted pages. This ordering is not used primarily to model what is in the cache, but rather to model what the program is doing.
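A minimal sketch of such a recency tracker follows, assuming a strict LRU list and a per-depth touch histogram; the class and field names are our own, not the paper's, and a real virtual memory system would have only approximate recency information.

```python
from collections import OrderedDict

class RecencyTracker:
    """Hedged sketch (not the paper's implementation): an LRU ordering over
    the pages in memory plus a comparable number of recently evicted pages,
    used to record the recency depth of each touch rather than to decide
    what to cache."""

    def __init__(self, memory_frames):
        self.memory_frames = memory_frames
        self.order = OrderedDict()                 # page id -> None, MRU last
        # touch_histogram[d] counts touches to the page at recency depth d.
        self.touch_histogram = [0] * (2 * memory_frames)

    def touch(self, page):
        if page in self.order:
            # Record how far down the recency ordering the touched page was.
            depth = len(self.order) - 1 - list(self.order).index(page)
            self.touch_histogram[depth] += 1
            self.order.move_to_end(page)           # page is now most recent
        else:
            self.order[page] = None                # first touch: no depth to record
        # Retain entries for the in-memory pages and for a comparable number
        # of recently evicted pages; drop anything older than that.
        while len(self.order) > 2 * self.memory_frames:
            self.order.popitem(last=False)
```

Touches at depths beyond memory_frames correspond to pages that an uncompressed cache of that size would already have evicted; those counts are exactly what the cost/benefit analysis sketched above needs in order to estimate how many disk faults a given compressed cache size would have avoided.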