
Related Work

The Loge [#!bib:loge!#] and Mime [#!bib:mime!#] disk controllers use a level of indirection to adaptively alter the physical location of data and thereby improve performance. [#!bib:akyurek!#] presents a device driver implementation of a related idea.

HP AutoRAID [#!bib:autoraid!#] is a firmware implementation of this idea. It operates at the controller level, communicating with the host over the SCSI bus, and has a separate on-board processor for operations such as parity computation, maintaining various data structures, tracking access patterns, and effecting migrations between the RAID1 tier and the RAID5 tier. It uses on-board NVRAM to speed up writes and performs log-structured updates to RAID5. It maintains a logical-to-physical translation table as well as two further tables for RAID1 and RAID5; the logical-to-physical translation makes migrations of data between RAID1 and RAID5 transparent to the user.
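The role of the translation table can be illustrated with a minimal sketch (an assumption-laden model, not HP's firmware): the host addresses only logical blocks, and a migration between tiers merely rewrites one table entry.

```python
# Hypothetical sketch of AutoRAID-style transparent migration.
# A logical-to-physical table hides which tier currently holds each block;
# tier names and the flat allocation scheme here are illustrative only.

class TwoTierMap:
    def __init__(self):
        self.table = {}                        # logical block -> (tier, physical block)
        self.next_phys = {"RAID1": 0, "RAID5": 0}

    def allocate(self, logical, tier):
        phys = self.next_phys[tier]
        self.next_phys[tier] += 1
        self.table[logical] = (tier, phys)

    def lookup(self, logical):
        return self.table[logical]

    def migrate(self, logical, new_tier):
        # Only the translation entry changes; the logical address the host
        # uses stays the same, so the migration is invisible to the user.
        phys = self.next_phys[new_tier]
        self.next_phys[new_tier] += 1
        self.table[logical] = (new_tier, phys)
```

In this model, promoting a hot block from the RAID5 tier to the RAID1 tier is a single table update followed by a data copy; all subsequent lookups for the same logical block transparently resolve to the new tier.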

[#!bib:hotmirroring!#] presents an orthogonal placement of data that improves the performance of RAID5 in both normal and degraded modes.

[#!bib:chained-decl!#] discusses chained declustered RAID1. Declustered RAID1 differs from RAID1 in that the data is striped across the disks, with two physical stripes constituting one RAID1 logical stripe. Our driver's RAID1 tier is implemented as a more flexible version of declustered RAID1. Since our experimental setup consists of a single controller, we observed poorer performance for declustered RAID1 than for RAID1; on a multi-controller configuration it might outperform RAID1.
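The placement rule behind chained declustering can be sketched as follows (a simplified illustration under assumed parameters, not our driver's actual code): each stripe unit's primary copy lives on disk `i mod n` and its mirror on the next disk, so read load spreads over all disks and any single disk failure leaves every unit with one surviving copy.

```python
# Illustrative chained-declustered RAID1 placement (a sketch, assuming
# stripe-unit granularity and a single round-robin chain).

def chained_declustered_copies(unit, n_disks):
    """Return (primary disk, mirror disk) for a stripe unit."""
    primary = unit % n_disks
    mirror = (unit + 1) % n_disks          # mirror sits on the next disk in the chain
    return primary, mirror

def surviving_copy(unit, n_disks, failed_disk):
    """Disk still holding the unit after one disk fails, or None."""
    primary, mirror = chained_declustered_copies(unit, n_disks)
    if primary != failed_disk:
        return primary
    if mirror != failed_disk:
        return mirror
    return None  # unreachable with a single failure, since primary != mirror
```

Because primary and mirror always land on adjacent, distinct disks, the loop over all units in the test below confirms that one disk failure never loses data in this model.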

[#!bib:predict!#] explains how compression algorithms, applied to observed access patterns, can predict future accesses with high probability and thereby drive prefetching. However, the required data structures have a sizable memory footprint, and an in-kernel implementation would pin down most of main memory. We show how the technique can instead be implemented in a user process that uses the application interface provided by our driver to track accesses and effect migrations.
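A user-level predictor of this kind can be sketched in a few lines (a minimal order-1 model in the spirit of compression-based prediction, not the cited algorithm): count observed successor blocks and prefetch the most frequent one. A full implementation would bound the table size, which is exactly the memory concern noted above.

```python
# Hypothetical user-space access predictor (order-1 successor counts).
# A real compression-based predictor would use higher-order contexts.

from collections import defaultdict

class AccessPredictor:
    def __init__(self):
        # for each block, a histogram of the blocks seen immediately after it
        self.successors = defaultdict(lambda: defaultdict(int))
        self.last = None

    def observe(self, block):
        """Record one block access from the driver's trace interface."""
        if self.last is not None:
            self.successors[self.last][block] += 1
        self.last = block

    def predict(self, block):
        """Most frequently observed successor of `block`, or None."""
        nxt = self.successors.get(block)
        if not nxt:
            return None
        return max(nxt, key=nxt.get)
```

Such a process would feed `observe()` from the access stream exported by the driver's application interface and issue prefetch or migration requests for each `predict()` result.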

Linux provides an implementation of several RAID personalities, including RAID0, RAID1, RAID4, and RAID5, as md (the multiple-device driver). Although md occupies a major number and can thus be called a device driver, it never actually services any I/O request: every request is remapped to the respective underlying device driver even before it reaches md's strategy routine. The implementation is something of a hack in the kernel (requiring kernel-code changes made solely for md) and does not follow the framework of a standard Linux device driver. As explained below, this was necessary up to the 2.1 kernels because it was not possible to exploit concurrency among multiple devices managed by a single device driver.
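The remapping md performs can be illustrated schematically (a sketch of RAID0 chunk arithmetic under assumed parameters, not the kernel source): a request at a logical sector is redirected to an underlying disk and sector before any md strategy routine would run.

```python
# Schematic RAID0 remapping as done by md-style request redirection.
# chunk_sectors and n_disks are illustrative parameters.

def raid0_remap(logical_sector, chunk_sectors, n_disks):
    """Map a logical sector to (underlying disk, sector on that disk)."""
    chunk = logical_sector // chunk_sectors      # which stripe chunk
    disk = chunk % n_disks                       # chunks rotate round-robin
    disk_chunk = chunk // n_disks                # chunk index on that disk
    disk_sector = disk_chunk * chunk_sectors + logical_sector % chunk_sectors
    return disk, disk_sector
```

With a chunk size of 8 sectors over 2 disks, consecutive 8-sector chunks alternate between the disks, so the composite device never sees the I/O itself; each request arrives at the underlying driver already rewritten.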


Dr K Gopinath
2000-04-25