
Equalizing Marginal Utilities

Figure 2 depicts the structure of our algorithm. Fix the size of the RANDOM bottom, $\Delta L$, to a small constant fraction of the cache size. The essence of the algorithm is to compare the absolute values of

$\displaystyle \frac{\Delta s}{\Delta L} \left( \approx \frac{2 \cdot s}{L} \right)$    and $\displaystyle \frac{\Delta r}{\Delta L}.$

If $(2s)/L$ is larger than the magnitude of $\Delta r/\Delta L$, then it is more advantageous to transfer cache space from the bottom of RANDOM to the bottom of SEQ, and hence we increase the desired size of SEQ; otherwise, we decrease the desired size of SEQ.
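To make the adaptation rule concrete, the following Python sketch implements the comparison; the function name `adapt_desired_seq_size`, the unit step, and the assumption that the rates $s$ and $\Delta r$ are measured elsewhere over a common interval are hypothetical illustration, not part of the paper.

```python
def adapt_desired_seq_size(desired_seq, s, L, delta_r, delta_L, step=1):
    """Nudge the desired size of SEQ by comparing marginal utilities.

    s        -- observed rate of sequential misses
    L        -- current size of the SEQ list
    delta_r  -- observed rate of hits in the bottom of RANDOM
    delta_L  -- size of the RANDOM bottom (a small, fixed fraction of cache)
    step     -- adaptation increment (hypothetical choice)
    """
    seq_utility = (2.0 * s) / L         # approximates |delta s / delta L|
    random_utility = delta_r / delta_L  # |delta r / delta L|
    if seq_utility > random_utility:
        desired_seq += step             # space is worth more at SEQ's bottom
    else:
        desired_seq -= step             # space is worth more at RANDOM's bottom
    return desired_seq
```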

So far, the time interval for sampling the rates $\Delta r$ and $s$ has not been fixed. The smaller the time interval, the more adaptive the algorithm; the larger the time interval, the smoother and slower the adaptation. Our algorithm implicitly selects a time interval based on cache hits, so the rate of adaptation is derived from, and adapts to, the workload itself. Specifically, we select the time interval to be the time between two successive hits in the bottom of the RANDOM list. In this case, $\Delta r$ is just the constant $1$, and we measure $s$ during the same interval. Thus, we now simply need to compare

$\displaystyle \frac{2 \cdot s}{L}$    and $\displaystyle \frac{1}{\Delta L},$    or, equivalently, $\displaystyle \frac{2 \cdot s \cdot \Delta L}{L}$    and $\displaystyle 1.$ (6)
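Equivalently, the whole adaptation can be driven by events: count sequential misses, and on each hit in the bottom of RANDOM apply inequality (6) and reset the count. The Python sketch below illustrates this; the class and method names are hypothetical, and the unit adaptation step is again an assumption.

```python
class SeqSizeAdapter:
    """Hit-driven adaptation of the desired SEQ size (illustrative sketch).

    The sampling interval is implicitly the time between two successive
    hits in the bottom of RANDOM, so delta_r == 1 and only s, the number
    of sequential misses in that interval, needs to be tracked.
    """

    def __init__(self, delta_L, initial_desired_seq=0, step=1):
        self.delta_L = delta_L  # fixed size of the RANDOM bottom
        self.step = step        # adaptation increment (hypothetical)
        self.s = 0              # sequential misses since the last bottom hit
        self.desired_seq = initial_desired_seq

    def on_sequential_miss(self):
        self.s += 1

    def on_random_bottom_hit(self, L):
        # Inequality (6): compare 2*s*delta_L/L against 1.
        if 2 * self.s * self.delta_L > L:
            self.desired_seq += self.step  # grow SEQ
        else:
            self.desired_seq -= self.step  # shrink SEQ
        self.s = 0  # begin a new sampling interval
```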

