
Experimental Results

We present experimental results on the Solaris platform with two layers (RAID1 and RAID5). We expect to provide results for Linux with three layers (RAID1, RAID5, cRAID5) by the end of this year.

Setup. The experimental setup consists of a Sun SPARC5 workstation with 32MB of RAM. Five 1.2GB disks are connected to the machine over a 10Mbps SCSI bus. The specifications of the disks are given in Table 1.


 
Table 1: Specifications of the disks used

Make                              Quantum Fireball 1280S
Formatted size                    1,281,982,464 bytes
Drive configuration
  Disks                           2
  Heads                           4
  Tracks per surface              4,142
  Sectors per track               95-177
  Bytes per sector                512
Performance specifications
  Average seek time (ms)          <11
  Rotational speed (RPM)          5,400
  Avg. rotational latency (ms)    5.56
  Internal data rate (MB/s)       5.8-10.4
  Cache buffer size (KB)          128


  
Figure 10: Total Delay vs. Number of Accesses for the /usr1 disk
[figure: fig/usr1.ps]


  
Figure 11: Total Delay vs. Number of Accesses for the /usr2 disk
[figure: fig/usr2.ps]


 
Table 2: Average device access times (ms)

Disk    RAID0   INT0    INT30   INT60   INT90   RAID5
/usr1    7.70    9.25   10.32   12.87   12.73   24.37
/usr2    8.67    9.36   11.42   13.77   14.81   21.87

Results and Analysis. We used HP disk traces [unix-diskpatterns] to evaluate the performance of our driver against RAID5 and RAID0. The trace was collected on a departmental server (snake) and covers accesses to two filesystems, /usr1 and /usr2.

Both the integrated driver and RAID5 use five disks. The RAID0 configuration uses only four disks, since it maintains no parity. The integrated driver is configured with 25 percent of the physical storage under RAID1, and the stripe length for all configurations is 64 sectors.
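For concreteness, a minimal sketch of this configuration as a small C record follows. The structure, field names, and the main() driver are illustrative assumptions, not the integrated driver's actual interface.

/* Illustrative sketch only: the experimental configuration expressed as a
 * small C record.  Structure and field names are assumptions, not the
 * driver's real interface. */
#include <stdio.h>

struct exp_config {
    int          ndisks;         /* physical disks used by the driver      */
    unsigned int stripe_sectors; /* stripe length in 512-byte sectors      */
    int          raid1_percent;  /* share of physical storage under RAID1  */
};

int main(void)
{
    struct exp_config integrated = {
        .ndisks         = 5,   /* integrated driver and RAID5 use 5 disks */
        .stripe_sectors = 64,  /* 64 sectors = 32KB per stripe unit       */
        .raid1_percent  = 25,  /* 25% of physical space kept under RAID1  */
    };

    printf("disks=%d stripe=%u sectors raid1=%d%%\n",
           integrated.ndisks, integrated.stripe_sectors,
           integrated.raid1_percent);
    return 0;
}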

Before replaying the traces against our driver, we mounted a filesystem on it and populated it to various degrees. The first 50,000 accesses were used as warmup, since the way we laid out and populated the filesystem need not reflect the state of the traced system; the results reported are for the remaining accesses. In all figures and tables, the integrated driver is denoted INTx, where x is the percentage to which the device was populated.
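The replay methodology can be sketched as follows. The three-field record layout and the issue_access() stub are assumptions for illustration only, not the HP trace format or the driver's entry point; the point is simply that every access is issued but only those after the 50,000-access warmup are counted.

/* Sketch of the trace-replay methodology: every access is issued, but the
 * first WARMUP accesses are excluded from the reported statistics.  The
 * record format and issue_access() are assumptions, not the actual HP
 * trace format or driver entry point. */
#include <stdio.h>

#define WARMUP 50000            /* accesses used only to warm up the device */

/* Hypothetical stand-in for issuing one traced request to the device and
 * returning its service time in microseconds. */
static long issue_access(unsigned long lba, unsigned int nsect, int is_write)
{
    (void)lba; (void)nsect; (void)is_write;
    return 0;                   /* replace with a real read/write call */
}

int main(void)
{
    unsigned long lba, n = 0, measured = 0;
    unsigned int nsect;
    int is_write;
    long total_us = 0;

    /* One record per line: <lba> <sectors> <0|1 for read|write> */
    while (scanf("%lu %u %d", &lba, &nsect, &is_write) == 3) {
        long us = issue_access(lba, nsect, is_write);
        if (++n <= WARMUP)
            continue;           /* warmup accesses are not counted */
        total_us += us;
        measured++;
    }
    if (measured)
        printf("average access time: %ld us over %lu accesses\n",
               total_us / (long)measured, measured);
    return 0;
}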

Figures 10 and 11 show the total delay against the number of accesses for the two traces. The integrated driver performed 30-50 percent better than RAID5, even when populated to 90 percent.

Table 2 shows the average access times for the integrated device populated to various degrees, along with RAID0 and RAID5. The fresh device (immediately after mkfs) performed far better than the populated ones. This was expected: initially all stripes are of invalid type, so writes result in the allocation of contiguous physical stripes, which reduces seeking.
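To make this reasoning concrete, the sketch below models the first write to an invalid stripe as a bump allocation of the next contiguous physical stripe, so scattered logical writes land sequentially on disk. The names and data structures are hypothetical and greatly simplified relative to the actual driver.

/* Sketch of why a fresh device seeks less: while logical stripes are still
 * invalid, each first write is given the next physically contiguous stripe.
 * All names and structures here are hypothetical. */
#include <stdio.h>

enum stripe_type { ST_INVALID, ST_RAID1, ST_RAID5 };

#define NSTRIPES 1024

static enum stripe_type map[NSTRIPES];  /* logical stripe -> current type    */
static int phys_of[NSTRIPES];           /* logical stripe -> physical stripe */
static int next_free_phys;              /* bump allocator over physical space */

/* First write to an invalid logical stripe: allocate the next contiguous
 * physical stripe, so early writes stay sequential on disk. */
static int write_stripe(int lstripe)
{
    if (map[lstripe] == ST_INVALID) {
        phys_of[lstripe] = next_free_phys++;
        map[lstripe] = ST_RAID1;        /* new data starts life mirrored */
    }
    return phys_of[lstripe];
}

int main(void)
{
    /* Three scattered logical writes land on contiguous physical stripes. */
    printf("%d %d %d\n", write_stripe(700), write_stripe(3), write_stripe(512));
    return 0;
}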


  
Figure 12: Number of Migrations vs. Number of Accesses for the /usr1 disk
[figure: fig/usr1-mig.ps]


  
Figure 13: Number of Migrations vs. Number of Accesses for the /usr2 disk
[figure: fig/usr2-mig.ps]

The traces exhibit a high degree of locality. Figures 12 and 13 show the number of migrations against the number of accesses for the traces on an integrated device populated to 90 percent. The number of migrations remained very small compared to the number of accesses (under 1 percent). Despite such high hit rates (over 99 percent), the integrated driver's access times increased with population, probably for the following reasons:

Suboptimal data placement. The present implementation has a very simple data placement policy: when a RAID5 stripe or an invalid stripe needs to be migrated, we simply select the next RAID1 stripe on the victim list and steal one of its physical stripes. A better policy would take seek distances into consideration (see the sketch after these items).

High migration cost. The current implementation does not maintain a pool of free physical stripes. Once the entire physical space is used up, any further migration must demote an existing RAID1 stripe, which involves updating parity and writing metadata synchronously. This severe penalty on migrations can be reduced by ensuring that enough free physical stripes exist at all times, as sketched below.
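The following sketch combines the two points above: migrations draw physical stripes from a free pool that is refilled by demoting victims in FIFO order, and fall back to on-demand demotion only when the pool is empty. The names, the FIFO victim list, and the low-water mark are assumptions for illustration, not the driver's actual data structures; a seek-aware placement policy would replace the simple FIFO choice in demote_victim().

/* Combined sketch of the two points above: prefer a pool of free physical
 * stripes and only steal from the RAID1 victim list when the pool is empty.
 * All names, the FIFO victim list, and the low-water mark are assumptions,
 * not the driver's actual data structures. */
#include <stdio.h>

#define POOL_MAX   64
#define POOL_LOW    8          /* refill trigger: keep migrations cheap */
#define NVICTIMS  256

static int free_pool[POOL_MAX];
static int pool_count;

static int victim_list[NVICTIMS];   /* RAID1 stripes in FIFO order (simplest policy) */
static int victim_head, victim_count;

/* Demote one RAID1 victim to RAID5 (parity update plus synchronous metadata
 * write in the real driver) and return its freed physical stripe. */
static int demote_victim(void)
{
    int phys = victim_list[victim_head];
    victim_head = (victim_head + 1) % NVICTIMS;
    victim_count--;
    return phys;
}

/* Background refill so that foreground migrations rarely pay demotion cost. */
static void refill_pool(void)
{
    while (pool_count < POOL_LOW && victim_count > 0)
        free_pool[pool_count++] = demote_victim();
}

/* Allocate a physical stripe for a RAID5 or invalid stripe being promoted. */
static int alloc_phys_for_migration(void)
{
    if (pool_count == 0)
        refill_pool();                    /* slow path: demote on demand */
    if (pool_count > 0)
        return free_pool[--pool_count];
    return -1;                            /* nothing left to steal */
}

int main(void)
{
    /* Seed a toy victim list with a few RAID1 physical stripes. */
    for (victim_count = 0; victim_count < 5; victim_count++)
        victim_list[victim_count] = 100 + victim_count;

    printf("migrated to physical stripe %d\n", alloc_phys_for_migration());
    printf("pool now holds %d free stripes\n", pool_count);
    return 0;
}

In such a scheme the synchronous parity and metadata updates are paid during background refills rather than on the migration path itself.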

