Next: Results
Up: Experimental Set-up
Previous: The Competitors
The ultimate goal of any cache management algorithm is to improve the
shape of the throughput-response time curve for the system by lowering the
response times and increasing the throughput across all workloads.
Most caching research has focused on
minimizing miss ratios (or maximizing hit ratios), which is at best a good
heuristic for improving the performance of a system. To be fair, it
is not just the miss ratio but also the average cost of misses that
impacts the aggregate response time. For example, an aggressive
prefetching algorithm can potentially reduce the miss ratio but
suffer a severe increase in the average cost of misses as it overloads the
disks.
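This trade-off can be made concrete with a small sketch. The numbers below are purely hypothetical and chosen only to illustrate the point: a prefetcher that halves the miss ratio can still lose on aggregate response time if it overloads the disks and drives up the average miss cost.

```python
def avg_response_time(miss_ratio, hit_cost_ms, avg_miss_cost_ms):
    """Expected per-read response time as a miss-ratio-weighted average
    of the hit cost and the average miss cost."""
    return (1 - miss_ratio) * hit_cost_ms + miss_ratio * avg_miss_cost_ms

# Baseline cache: 20% misses, 5 ms average miss cost.
baseline = avg_response_time(0.20, 0.1, 5.0)      # ~1.08 ms per read
# Aggressive prefetcher: only 10% misses, but the overloaded
# disks push the average miss cost up to 12 ms.
prefetcher = avg_response_time(0.10, 0.1, 12.0)   # ~1.29 ms per read

# The "better" miss ratio yields the worse aggregate response time.
print(baseline, prefetcher)
```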
In fact, with prefetching, the concept of a read miss itself is nebulous:
a read that arrives after a prefetch request for the page has been issued but
before the prefetch actually completes is somewhere between a hit and a miss, yet technically
neither.
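One hypothetical way to make this three-way distinction explicit is to classify each read against any outstanding prefetch for the same page; the function, parameter names, and the "in-flight" label are illustrative assumptions, not terminology from the original.

```python
def classify_read(read_time, prefetch_issue, prefetch_done):
    """Classify a read against a prefetch for the same page.
    Times are in arbitrary units; None means no prefetch was issued."""
    if prefetch_issue is None or read_time < prefetch_issue:
        return "miss"       # no prefetch in flight yet: full disk access
    if read_time >= prefetch_done:
        return "hit"        # prefetch already brought the page into cache
    # Neither a full hit nor a full miss: the read stalls only for the
    # remaining prefetch time, not for a whole disk access.
    return "in-flight"

print(classify_read(5, 2, 10))  # prints "in-flight"
```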
Even in the absence of prefetching, some disks might be less busy than others, leading to
smaller miss penalties on those disks. Even on a single disk, reading from an area that the
disk head does not visit often tends to be more expensive.
In short, it is prudent to measure performance in terms of aggregate read response times
and throughput whenever possible.
Another useful quantity is the stall time: the
total time for which the application had to wait because the requested data was not present in the cache.
It is closely related to the aggregate throughput, since a lower stall time results in a correspondingly
higher throughput.
We choose, however, to report throughput, as it is more immediately relevant to performance.
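The relationship between stall time and throughput can be sketched as follows; the workload numbers are made up for illustration, under the simple assumption that total run time is compute time plus cache stall time.

```python
def throughput(num_reads, compute_time_s, stall_time_s):
    """Reads completed per second, assuming the run time is the
    application's compute time plus its total cache stall time."""
    return num_reads / (compute_time_s + stall_time_s)

# Same workload under two caches: halving the stall time from
# 30 s to 15 s raises throughput correspondingly.
print(throughput(10_000, 20.0, 30.0))  # prints 200.0 reads/s
print(throughput(10_000, 20.0, 15.0))  # ~285.7 reads/s
```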
root
2006-12-19