
Our Contributions

We make several contributions towards the design and implementation of a self-tuning, low-overhead, high-performance cache replacement algorithm for a real-life system that deploys sequential prefetching. For a seemingly much-studied, well-understood, and ``simple'' technique, the design and analysis of a good sequential prefetching algorithm and its associated cache replacement policy turn out to be surprisingly tricky. We summarize our main novel contributions and outline the paper:

  1. (Section II) For a class of state-of-the-art sequential prefetching schemes that use LRU, we point out an anomaly akin to Belady's anomaly that results in more misses when a larger cache is allocated. We propose a simple technique to eliminate the anomaly.

  2. (Section III) It is common to separate sequential data and random data into two LRU lists. We develop elegant analytical and empirical models for dynamically and adaptively trading cache space between the two lists, with the goal of maximizing the overall hit ratio and, consequently, minimizing the average response time. Towards this end, we synthesize a new algorithm, namely, Sequential Prefetching in Adaptive Replacement Cache (SARC), that is self-tuning, has low overhead, and is simple to implement; a simplified sketch of this adaptive partitioning idea follows this list. SARC improves performance for a wide range of workloads that may have a varying mix of sequential and random data streams and may possess varying temporal locality in the random data streams.

  3. (Section IV) Shark is IBM's flagship high-end storage controller [31]. We implemented SARC and two commonly used LRU variants on Shark (Model 800) hardware. On the widely adopted SPC-1 storage benchmark, SARC convincingly outperforms the two LRU variants. As anecdotal evidence, at the same throughput, SARC has an average response time of 5.18 ms as compared to 33.35 ms and 8.92 ms for the two LRU variants.


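To make the adaptive-partitioning idea of contribution 2 concrete, the following is a minimal, illustrative sketch, not the SARC algorithm derived in Section III. It assumes two LRU lists (one for sequentially prefetched tracks, one for random tracks) sharing a fixed cache, and it nudges a target size for the sequential list whenever a hit lands near the LRU end of either list; the class name, parameters, and the bottom-window heuristic are our own illustrative choices.

```python
# Illustrative sketch only: adaptively trading cache space between a
# sequential LRU list and a random LRU list. Not the SARC adaptation
# rule of Section III.
from collections import OrderedDict

class TwoListCache:
    def __init__(self, capacity, bottom_fraction=0.1, step=1):
        self.capacity = capacity                                 # total cache size in tracks
        self.bottom = max(1, int(bottom_fraction * capacity))    # size of the "about to be evicted" window
        self.step = step                                         # adaptation increment
        self.seq = OrderedDict()                                  # sequential list, MRU at the end
        self.rnd = OrderedDict()                                  # random list, MRU at the end
        self.seq_target = capacity // 2                           # desired size of the sequential list

    def _near_lru_end(self, lst, key):
        # True if key sits within the bottom (LRU-most) window of lst.
        for i, k in enumerate(lst):                               # iteration starts at the LRU end
            if k == key:
                return i < self.bottom
        return False

    def access(self, key, is_sequential):
        lst = self.seq if is_sequential else self.rnd
        if key in lst:
            # Hit: if the track was about to be evicted, give its list
            # more (or less) of the shared space, then refresh it to MRU.
            if self._near_lru_end(lst, key):
                if is_sequential:
                    self.seq_target = min(self.capacity, self.seq_target + self.step)
                else:
                    self.seq_target = max(0, self.seq_target - self.step)
            lst.move_to_end(key)
            return True
        # Miss: make room by evicting from whichever list exceeds its target,
        # then insert the new track at the MRU position of the right list.
        if len(self.seq) + len(self.rnd) >= self.capacity:
            victim_list = self.seq if len(self.seq) > self.seq_target else self.rnd
            if not victim_list:                                   # fall back if that list is empty
                victim_list = self.rnd if victim_list is self.seq else self.seq
            victim_list.popitem(last=False)                       # evict from the LRU end
        lst[key] = True
        return False
```

The intuition behind adapting only on hits near the LRU end is that such hits approximate the marginal benefit of granting that list more space: a track saved just before eviction is evidence that the list is currently too small, whereas the actual trading rule and its analysis are given in Section III.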