Next: Endnotes
Up: SARC: Sequential Prefetching in
Previous: Adaptive nature of SARC
Conclusions
We have designed a powerful sequential prefetching strategy that
combines the virtues of synchronous and asynchronous prefetching,
avoids the anomaly that arises when prefetching and caching are
integrated, and is capable of attaining zero misses for sequential
streams.
We have introduced SARC, a novel cache management policy that is
self-tuning, low-overhead, simple to implement, and locally
adaptive; it dynamically partitions the cache space between
sequential streams and random streams so as to reduce read misses.
SARC is doubly adaptive in that it adapts not only the cache space
allocated to each class but also the rate at which cache space is
transferred from one class to the other. It is extremely easy to
convert an existing LRU variant into SARC.
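The doubly adaptive partitioning described above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a simplified model with two LRU lists (SEQ for sequentially prefetched pages, RANDOM for the rest), where a hit near the LRU end of a list is taken as a marginal-utility signal that nudges an adaptive size target for SEQ. The names `SARCSketch`, `bottom_fraction`, and `step` are illustrative choices, not from the original.

```python
from collections import OrderedDict

class SARCSketch:
    """Hedged sketch of doubly adaptive cache partitioning.

    Two LRU lists share a fixed number of cache pages. A hit in the
    bottom (LRU-end) region of a list suggests that list is useful
    at its margin, so the desired SEQ size moves toward the list
    that produced the marginal hit; `step` controls the rate of
    transfer (the second axis of adaptation).
    """

    def __init__(self, capacity, bottom_fraction=0.05, step=1):
        self.capacity = capacity
        self.bottom = max(1, int(bottom_fraction * capacity))
        self.step = step                    # adaptation rate
        self.desired_seq = capacity // 2    # adaptive target for SEQ
        self.seq = OrderedDict()            # LRU order: oldest first
        self.rnd = OrderedDict()

    def _near_bottom(self, lst, page):
        # True if page sits in the bottom (LRU) region of lst.
        return list(lst).index(page) < self.bottom

    def access(self, page, sequential):
        lst = self.seq if sequential else self.rnd
        if page in lst:
            if self._near_bottom(lst, page):
                # Marginal hit: grow the target of the list that
                # produced it and implicitly shrink the other.
                delta = self.step if sequential else -self.step
                self.desired_seq = min(self.capacity,
                                       max(0, self.desired_seq + delta))
            lst.move_to_end(page)           # refresh to MRU position
        else:
            lst[page] = True                # insert at MRU position
            self._evict_if_needed()

    def _evict_if_needed(self):
        while len(self.seq) + len(self.rnd) > self.capacity:
            # Evict from the list exceeding its adaptive share.
            if len(self.seq) > self.desired_seq and self.seq:
                self.seq.popitem(last=False)
            elif self.rnd:
                self.rnd.popitem(last=False)
            else:
                self.seq.popitem(last=False)
```

In this toy model the ease of converting an LRU variant is visible: the two lists remain ordinary LRU lists, and adaptation only adjusts where evictions are taken from.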
We have implemented SARC along with two popular
state-of-the-art LRU variants on Shark hardware. Using
the most widely adopted storage benchmark, we have demonstrated
that SARC consistently outperforms the LRU variants
and shifts the throughput versus average response time curves to
the right, thus fundamentally increasing the capacity of the
system. Furthermore, SARC delivers better performance to a
client without unduly stressing the server.
We believe that the insights, analysis, and algorithm presented in
this paper are widely applicable. Due to its adaptivity, we expect
SARC to work well across (i) a wide range of workloads, with
varying mixes of sequential and random clients, varying temporal
locality among the random clients, and varying numbers of
sequential and random streams with varying think times; (ii)
different back-end storage configurations; and (iii) different
data layouts.
Binny Gill
2005-02-14