The speed gap between processors and disks continues to widen as VLSI technologies advance at an enormous rate. This gap has made disk I/O a serious performance bottleneck for many computer systems [1,2]. Hence, the role of the buffer cache, located in main memory and managed by the operating system, is becoming increasingly important. Judicious use of the buffer cache can improve the response time of individual applications as well as the throughput of the system by reducing the number of disk I/Os. To this end, the study of effective block replacement policies has been the focus of much research in both the systems and database areas [3,4,5,6,7,8].
There have also been a number of studies on predicting future accesses for prefetching purposes [9,10,11]. These studies use past access history to predict files or blocks that are likely to be referenced in the near future, and this prediction information is used to issue prefetch requests that hide disk I/O latency.
Recently, buffer management schemes based on user-level hints, such as application-controlled file caching and informed prefetching and caching, have been proposed. User-level hints in these schemes provide information about which blocks are good candidates for replacement, allowing different replacement policies to be applied to different applications.
However, to obtain user-level hints, users need to accurately understand the characteristics of the block reference patterns of their applications. This requires considerable effort from users, limiting the applicability of these schemes. For a sequential reference pattern, a simple heuristic method can be used to detect the pattern, and the MRU replacement policy can then be applied to improve buffer cache performance. Such hints can also be obtained by the compiler for the implicit I/Os that manage paged virtual memory. As we will see later, our proposed approach is complementary to compiler-derived hints since it targets explicit I/Os.
In this paper, we propose a new buffer management scheme that we call DEAR (DEtection based Adaptive Replacement). Without any user intervention, the DEAR scheme detects the reference pattern of each application and classifies the pattern as sequential, looping, temporally-clustered, or probabilistic. After the detection, the scheme applies an appropriate replacement policy to the application. As the reference pattern of an application may change during its execution, the DEAR scheme periodically detects the reference pattern and applies a different replacement policy, if necessary.
We implemented the DEAR scheme in FreeBSD 2.2.5 and evaluated its performance with several real applications. The scheme is implemented at the kernel level without any modification to the system call interface, so applications run as-is. Performance measurements show that for single-application executions, the DEAR scheme reduces the number of disk I/Os by up to 51% (with an average of 23%) and the response time by up to 35% (with an average of 12%) compared with the LRU buffer management scheme in FreeBSD. For multiple applications, it reduces the number of disk I/Os by up to 20% (with an average of 12%) and the overall response time by up to 18% (with an average of 8%).
We also compared the performance of the DEAR scheme with that of application-controlled file caching through trace-driven simulations with the same set of application traces used in that study. The results show that the DEAR scheme, without any user-level hints, performs comparably to application-controlled file caching for the traces considered.
The rest of the paper is organized as follows. In Section 2, we explain the DEAR scheme in detail. Then, we describe the implementation of the DEAR scheme in FreeBSD in Section 3. In Section 4, we evaluate the performance of the DEAR scheme. Finally, we conclude with a summary and a discussion of future work in Section 5.