The simplest sequential read-ahead strategy is synchronous
prefetching, which, on a sequential miss on track $i$, simply brings
tracks $i$ through $i + m$ into the cache, where $m$ is known as the
degree of sequential read-ahead and is a tunable parameter
[33]. As mentioned in the introduction, the
additional cost of read-ahead on a seek is becoming progressively
smaller. The number of sequential misses decreases with increasing
$m$, provided that (i) all the read-ahead tracks are accessed and (ii) they are not
discarded before they are accessed. Consider the well-known OBL
(one-block look-ahead) scheme [20], which uses $m = 1$. This
scheme reduces the number of sequential misses by a factor of $2$.
Generalizing, with $m$-track look-ahead, the number of sequential
misses decreases by a factor of $m + 1$. To eliminate misses purely by using
synchronous prefetching, $m$ needs to become infinite. This is
impractical, since prefetching too far in advance causes
cache pollution and ultimately degrades performance. Also,
depending upon the amount of cache space available to sequential
data, not all prefetched tracks may be accessed before they are evicted. Hence,
simply increasing $m$ cannot always drive the
number of sequential misses to zero. The behavior of sequential
misses for synchronous prefetching is illustrated in the left-hand
panel of Figure 1.
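To make the miss-reduction arithmetic concrete, here is a minimal sketch that counts misses for a purely sequential scan under synchronous prefetching of degree $m$; it assumes an unbounded cache (conditions (i) and (ii) above hold), and the function name and parameters are illustrative, not from the paper:

```python
def synchronous_misses(num_tracks, m):
    """Count misses for a purely sequential scan of num_tracks tracks
    under synchronous prefetching of degree m: a miss on track i
    fetches tracks i through i + m, i.e., m + 1 tracks per miss."""
    cache = set()
    misses = 0
    for i in range(num_tracks):
        if i not in cache:
            misses += 1
            # Synchronous prefetch: bring tracks i..i+m into the cache.
            cache.update(range(i, i + m + 1))
    return misses

# One miss per group of m + 1 tracks: misses drop by a factor of m + 1.
print(synchronous_misses(1000, 0))  # 1000 misses: no read-ahead
print(synchronous_misses(1000, 1))  # 500 misses (OBL, m = 1)
print(synchronous_misses(1000, 3))  # 250 misses
```

Even in this idealized setting the misses only shrink by a factor of $m + 1$; they reach zero only in the limit $m \to \infty$, which is exactly why synchronous prefetching alone cannot eliminate them.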
To effectively eradicate misses, asynchronous prefetches can be used [29,32]. An asynchronous prefetch is carried out in the absence of a sequential miss, typically on a hit. The basic idea is to read ahead a first group of tracks synchronously and, thereafter, once a preset fraction of the prefetched group of tracks has been accessed, to asynchronously (that is, in the absence of a sequential miss) read ahead the next group of tracks, and so on. Typically, asynchronous prefetching is done on an asynchronous trigger, namely, a special track in a prefetched group of tracks. When the asynchronous trigger track is accessed, the next group of tracks is asynchronously read ahead and a new asynchronous trigger is set. The intent is to exploit sequential structure to continuously stage tracks ahead of their access without incurring any miss beyond the initial sequential miss. A good analogy is an ongoing relay race in which each group of tracks passes the baton on to the next group.
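The trigger-based relay can be sketched as follows. The group size, the trigger's position within a group, and all helper names are assumptions made for illustration, and the cache is taken to be unbounded:

```python
def async_prefetch_misses(num_tracks, g, trigger_offset):
    """Simulate a purely sequential scan with asynchronous prefetching.
    g: size of each prefetched group of tracks.
    trigger_offset: 0-based index of the asynchronous trigger track
    within a group; accessing it stages the next group of g tracks.
    All parameter choices here are illustrative."""
    cache = set()
    triggers = set()
    misses = 0
    next_to_stage = 0  # first track of the next group to prefetch

    def stage_group():
        nonlocal next_to_stage
        start = next_to_stage
        cache.update(range(start, start + g))
        triggers.add(start + trigger_offset)  # set the new trigger
        next_to_stage = start + g

    for i in range(num_tracks):
        if i not in cache:
            misses += 1          # sequential miss: synchronous prefetch
            next_to_stage = i
            stage_group()
        if i in triggers:        # hit on the trigger: pass the baton
            triggers.discard(i)
            stage_group()        # asynchronously stage the next group
    return misses

# A single bootstrap miss starts the relay; every later group is
# staged before its first track is accessed.
print(async_prefetch_misses(1000, g=8, trigger_offset=4))  # 1
```

Because each trigger lies before the end of its group, the next group is always staged ahead of its first access, so the scan incurs exactly one miss.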
To summarize, in state-of-the-art sequential prefetching, synchronous prefetching is used initially, when the sequence is first detected. After this bootstrapping stage, asynchronous prefetching can sustain itself as long as all the tracks within the current prefetched group are accessed before they are evicted. As a corollary, the asynchronous trigger track will also be accessed and, in turn, will prefetch the next group of tracks, among which is the next asynchronous trigger.
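The sustain condition can be illustrated by bounding the cache and evicting in LRU order: with ample space the relay runs on the single bootstrap miss, but when prefetched tracks (including the trigger) are evicted before use, the chain breaks and sequential misses reappear. All parameters below are illustrative:

```python
from collections import OrderedDict

def misses_with_lru(num_tracks, g, trigger_offset, cache_size):
    """Asynchronous prefetching over a bounded LRU cache, to show the
    relay breaking when tracks are evicted before they are accessed.
    Parameters are illustrative, not from the paper."""
    cache = OrderedDict()          # track -> None, in LRU order
    triggers = set()
    misses = 0
    next_to_stage = 0

    def insert(t):
        cache[t] = None
        cache.move_to_end(t)
        if len(cache) > cache_size:
            evicted, _ = cache.popitem(last=False)  # evict LRU track
            triggers.discard(evicted)  # an evicted trigger is lost

    def stage_group():
        nonlocal next_to_stage
        start = next_to_stage
        for t in range(start, start + g):
            insert(t)
        triggers.add(start + trigger_offset)
        next_to_stage = start + g

    for i in range(num_tracks):
        if i not in cache:
            misses += 1
            next_to_stage = i
            stage_group()
        else:
            cache.move_to_end(i)   # mark the hit as recently used
        if i in triggers:
            triggers.discard(i)
            stage_group()
    return misses

# Ample cache: the relay sustains itself on one bootstrap miss.
print(misses_with_lru(1000, g=8, trigger_offset=4, cache_size=64))  # 1
# Cache smaller than a group: every prefetched track is evicted
# before use, and every access misses.
print(misses_with_lru(1000, g=8, trigger_offset=4, cache_size=4))   # 1000
```

The second call shows the corollary failing in reverse: once the trigger track can be evicted before it is accessed, no new group is staged asynchronously and the workload degenerates to a miss per track.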