Prefetching and caching have long been implemented in modern file systems and servers. However, until recently, prefetching was restricted to sequential lookahead, and caching was mostly based on the LRU replacement policy. Unfortunately, sequential access is not necessarily the dominant access pattern for certain important data-intensive applications, such as volume visualization [27] and video playback involving fast forwards and rewinds. This observation has spawned research in three different directions.
Predictive Prefetching: Early file prefetching systems [6,9,10,26] attempted to deduce future access patterns from past reference history and performed file prefetching based on these inferred patterns. This approach is completely transparent to application programmers. However, incorrect predictions may result in unnecessary disk accesses and poor cache performance due to interference between the current working set and the predicted future working set. Other prediction policies are also possible. For example, Duchamp et al. [7], in the context of Web page browsing, proposed prefetching data based on the popularity of a Web page.
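To make the history-based approach concrete, the following is a minimal sketch (not the actual algorithm of any cited system) of a first-order "last successor" predictor: for each file, it remembers the file most recently observed to follow it, and prefetches that successor on the next access.

```python
# Illustrative sketch of history-based file prefetching: a first-order
# "last successor" model. All names here are hypothetical.
class LastSuccessorPredictor:
    def __init__(self):
        self.successor = {}   # file -> file last observed to follow it
        self.prev = None      # most recently accessed file

    def record(self, path):
        """Observe one file access and update the model."""
        if self.prev is not None:
            self.successor[self.prev] = path
        self.prev = path

    def predict(self):
        """Return the file worth prefetching next, if any is known."""
        return self.successor.get(self.prev)

# A compile-like access sequence: after seeing a.h -> a.c -> a.o once,
# the model predicts a.o when a.c is accessed again.
p = LastSuccessorPredictor()
for f in ["a.h", "a.c", "a.o", "a.h", "a.c"]:
    p.record(f)
print(p.predict())  # prints "a.o"
```

A mispredicted successor here costs one wasted disk read and one polluted cache page, which is exactly the downside noted above.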
Application-Controlled Caching and Prefetching: Patterson et al. [15,17] modified two UNIX applications, make and grep, to provide hints to the file system regarding the files the applications are going to touch. The hints are given to the file system by forking off another process that accesses, ahead of time, all the files the original process will touch. This study showed a 13% to 30% reduction in execution time. In a later paper, they modified a volume visualization application to provide hints and obtained a reduction in execution time by a factor of 1.9 to 3.7 [18].
While Patterson et al. were performing application-controlled prefetching, Cao et al. [2,3] were investigating the effects of application hints on file system caching. Traditionally, the file system cache implements the LRU replacement policy, which is not adequate for all applications. Cao et al. proposed a two-level cache management scheme in which the kernel allocates physical pages to individual applications (allocation), and each application is responsible for deciding how to use its physical pages (replacement). This two-level strategy, along with a kernel allocation policy of LRU with Swapping and Placeholders (LRU-SP), reduced disk block I/O by up to 80% and execution time by up to 45% for different applications.
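The division of labor in the two-level scheme can be sketched as follows. This is an illustrative toy, not the LRU-SP algorithm itself: the "kernel" only decides which application must give up a page, and the chosen application applies its own replacement policy to pick the victim page.

```python
# Toy sketch of two-level cache management. Class and policy names are
# hypothetical stand-ins, not the actual LRU-SP implementation.
from collections import OrderedDict

class App:
    def __init__(self, name):
        self.name = name
        self.pages = OrderedDict()   # block -> None, kept in LRU order

    def access(self, block):
        self.pages[block] = None
        self.pages.move_to_end(block)

    def evict(self):
        # Level 2 (replacement): the application picks its own victim;
        # plain LRU serves as the default policy here.
        victim, _ = self.pages.popitem(last=False)
        return victim

class Kernel:
    def __init__(self, capacity, apps):
        self.capacity = capacity     # total physical pages available
        self.apps = apps

    def fault(self, app, block):
        if block in app.pages:       # hit: just refresh recency
            app.access(block)
            return
        if sum(len(a.pages) for a in self.apps) >= self.capacity:
            # Level 1 (allocation): reclaim from the application holding
            # the most pages -- a deliberately simple stand-in policy.
            max(self.apps, key=lambda a: len(a.pages)).evict()
        app.access(block)
```

An application with special knowledge (say, a looping access pattern) could override evict() with MRU replacement while the kernel's allocation logic stays unchanged, which is the point of the split.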
Prefetching and caching are complementary to each other. Cao et al. [4] first proposed an integrated prefetching and caching scheme, which a trace-driven simulation study showed to reduce execution time by up to 50%. Patterson et al. [19] proposed another prefetching and caching scheme, based on user-supplied hints, that classifies buffers into three types: prefetching hinted blocks, caching hinted blocks for reuse, and caching used blocks for non-hinted access. Later, they extended their scheme to support multiple processes, with the kernel allocating resources among multiple competing processes [25]. Joint work by the two groups later arrived at an adaptive approach that incorporates the best of both schemes [8,24].
Tait et al. [23] adopted the ideas of file prefetching, hoarding, and caching in the context of mobile computing. Rochberg et al. [21] implemented a network file system extension of the informed prefetching scheme [16] that uses modified NFS clients to disclose an application's future disk accesses. The results showed a 17% to 69% reduction in execution time and demonstrated that the scheme extends quite easily to network file systems. Pai et al. [14] applied the prefetching idea in manually constructing a decoupled Web server architecture in which the computation performed by the server never blocks on I/O. A similar structure was also adopted by the UNIX backup program (/sbin/dump, originally from Caltech), whose main dump process should never block on I/O.
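The decoupled structure can be illustrated with a minimal sketch, assuming a thread-pool design (the cited server's actual architecture uses its own helper processes, and the names below are hypothetical): the main loop serves only data that is already resident, and hands every blocking read to a helper.

```python
# Toy sketch of a "decoupled" server: the main loop never performs
# blocking file I/O itself. Names are illustrative, not from [14].
from concurrent.futures import ThreadPoolExecutor

class DecoupledServer:
    def __init__(self, workers=2):
        self.cache = {}                          # path -> bytes, resident data
        self.pool = ThreadPoolExecutor(workers)  # helpers do the blocking I/O

    def _load(self, path):
        # Runs in a helper thread; this is the only blocking read.
        with open(path, "rb") as f:
            self.cache[path] = f.read()

    def handle(self, path):
        if path in self.cache:
            return self.cache[path]   # fast path: no blocking I/O
        # Miss: schedule the read on a helper and defer the request;
        # the main loop stays free to serve other clients meanwhile.
        self.pool.submit(self._load, path)
        return None
```

The first request for a file returns nothing and triggers the helper; once the helper has faulted the file in, subsequent requests are served from memory without the main loop ever touching the disk.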
Compiler-Directed I/O: Instead of requiring application programmers to supply disk access hints, Mowry et al. [13] described a compiler technique that analyzes a given program and issues prefetch requests to an operating system that supports prefetching and caching. This fully automatic scheme significantly reduces the burden on the application programmer and thus enhances the feasibility of application-specific prefetching and caching. However, the approach is restricted to loop-like constructs, where disk access addresses usually follow a regular pattern. Recently, Chang and Gibson [5] developed a binary modification tool called SpecHint that transforms Digital UNIX application binaries to perform speculative execution and issue disk access hints. They showed that this technique can perform quite close to informed prefetching without any manual programming effort. Chang's work is most similar to AASFP but differs in two ways: