
Related Work

Many studies have tried to improve LFS performance since the publication of Sprite LFS [15]. Seltzer [16] presented an implementation of LFS for BSD, and several new cleaning policies have been proposed [2,20,9]. In the traditional cleaning policies [15], including greedy cleaning and benefit-to-cost cleaning, the live blocks of several partially empty segments are combined to produce a new full segment, freeing the old partially empty segments for reuse. These policies perform well when disk space utilization is low. Wilkes et al. [20] proposed the hole-plugging policy, in which partially empty segments are freed by writing their live blocks into the holes found in other segments. Despite the higher cost per block, hole-plugging outperforms traditional cleaning at high disk utilizations because it avoids processing so many segments. Recently, Matthews et al. [9] showed how adaptive algorithms can enable LFS to provide high performance across a wider range of workloads. Their algorithms, which combine the two policies above, improve write performance by adapting the LFS cleaning policy to changes in disk utilization: the system switches between the two methods based on cost-benefit estimates. They also used cached data to lower cleaning costs. Blackwell et al. [2] presented a heuristic cleaner that runs without interfering with normal file access; they found that 97% of cleaning on the most heavily loaded system was done in the background. We previously proposed a scheme called PROFS, which incorporates knowledge of Zone-Bit Recording into LFS to improve both read and write performance. PROFS reorganizes data on the disk during LFS garbage collection and system idle periods; by putting active data in the faster zones and inactive data in the slower zones, it achieves much better performance for both reads and writes [19]. Lumb et al. applied a new technique called freeblock scheduling to the LFS cleaning process, and claimed that an LFS file system could maintain ideal write performance even when cleaning overheads would otherwise reduce performance by up to a factor of three [13].
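
To make the distinction between the two traditional policies concrete, the following sketch contrasts their segment-selection rules. It is only an illustration with hypothetical structure and field names, not code from any of the systems above: greedy cleaning simply picks the emptiest segment, while Sprite LFS's benefit-to-cost rule picks the segment maximizing (1 - u) * age / (1 + u), where u is the fraction of the segment's bytes that are still live.

/* Illustrative sketch of the two classic LFS cleaning policies; the
 * structure and field names are hypothetical. */
#include <stddef.h>

struct segment {
    double u;    /* utilization: fraction of bytes still live (0.0 .. 1.0) */
    double age;  /* age of the youngest live block in the segment */
};

/* Greedy cleaning: always clean the segment with the fewest live bytes. */
static size_t pick_greedy(const struct segment *seg, size_t n)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (seg[i].u < seg[best].u)
            best = i;
    return best;
}

/* Benefit-to-cost cleaning: clean the segment maximizing
 * (1 - u) * age / (1 + u), i.e. free space generated times data age,
 * divided by the cost of reading the segment and rewriting its live bytes. */
static size_t pick_benefit_to_cost(const struct segment *seg, size_t n)
{
    size_t best = 0;
    double best_ratio = (1.0 - seg[0].u) * seg[0].age / (1.0 + seg[0].u);
    for (size_t i = 1; i < n; i++) {
        double ratio = (1.0 - seg[i].u) * seg[i].age / (1.0 + seg[i].u);
        if (ratio > best_ratio) {
            best_ratio = ratio;
            best = i;
        }
    }
    return best;
}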

In this paper, our strategy differs distinctly from the above methods: WOLF acts on the initial writes in its reordering write buffers, reducing the cleaning overhead before writes go to disk. This scheme finds a new "free" time in which to solve the same garbage collection problem for LFS. WOLF can easily be combined with other strategies to improve LFS performance. More importantly, it helps LFS provide high performance even under heavy loads and on full disks.
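
As a rough illustration of this idea (not WOLF's actual implementation, which Section 2 describes), incoming blocks can be routed into separate in-memory segment buffers according to how actively they are being written, so that blocks with similar lifetimes end up in the same on-disk segment and later cleaning work shrinks. The buffer layout, sizes, and hotness test below are hypothetical.

/* Minimal sketch of a reordering write buffer, assuming two in-memory
 * segment buffers and a simple hotness test based on recent write counts;
 * all names and parameters here are illustrative only. */
#include <string.h>

#define SEG_SIZE       (512 * 1024)
#define BLOCK_SIZE     4096
#define BLOCKS_PER_SEG (SEG_SIZE / BLOCK_SIZE)

struct seg_buffer {
    char data[SEG_SIZE];
    int  nblocks;
};

static struct seg_buffer hot_buf, cold_buf;   /* filled before data reach disk */

extern int  recent_write_count(int inode);               /* hypothetical hotness metric */
extern void write_segment_to_log(struct seg_buffer *b);  /* hypothetical flush to the disk log */

/* Route each incoming block to the hot or cold buffer so that blocks with
 * similar lifetimes are grouped into the same on-disk segment. */
void buffer_write(int inode, const char *block)
{
    struct seg_buffer *b =
        (recent_write_count(inode) > 4) ? &hot_buf : &cold_buf;

    memcpy(b->data + (size_t)b->nblocks * BLOCK_SIZE, block, BLOCK_SIZE);
    if (++b->nblocks == BLOCKS_PER_SEG) {   /* segment full: one large sequential write */
        write_segment_to_log(b);
        b->nblocks = 0;
    }
}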

Several researchers have tried to improve file system performance without using LFS. Ganger and Patt [4] proposed a method called "Soft Updates" that eliminates the need for 95% of synchronous writes. File system performance can be significantly improved because most writes become asynchronous and can be cached in RAM. Hu et al. proposed the Disk Caching Disk [8,11], which improves the performance of both synchronous and asynchronous writes. WOLF and Soft Updates are complementary approaches: the latter improves disk scheduling in traditional file systems through aggressive caching, while WOLF addresses how to organize write-cached data before they go to the media.

The remainder of the paper is organized as follows. Section 2 describes our design of WOLF. Section 3 describes our experimental methodology. Section 4 shows the simulation results and analysis. Section 5 summarizes our new strategy.

