
Performance

 One drawback of the layered file system design model is that it adds a certain amount of overhead to each file operation, even when the operation is merely bypassed to the underlying layer. We have performed only simple performance testing, but we have found little to no noticeable performance degradation due to the DMFS layer.
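
To illustrate where this per-operation cost comes from, the following is a minimal sketch of a pass-through operation in the spirit of the 4.4BSD null layer. It is not the actual DMFS code; in particular, the macro DMFS_VP_TO_LOWER is a hypothetical name for the mapping from a DMFS vnode to the vnode it stacks on.

    /*
     * Illustrative sketch only: a pass-through read operation for a
     * layered file system, in the spirit of the 4.4BSD null layer.
     * DMFS_VP_TO_LOWER is a hypothetical macro, not the real DMFS code.
     */
    static int
    dmfs_read(void *v)
    {
            struct vop_read_args *ap = v;   /* vp, uio, ioflag, cred */
            struct vnode *lowervp;

            /* Map the DMFS vnode to the underlying (e.g. FFS) vnode... */
            lowervp = DMFS_VP_TO_LOWER(ap->a_vp);

            /*
             * ...and reissue the operation one layer down.  The extra
             * function call and vnode mapping are the per-operation cost
             * of the layer.
             */
            return (VOP_READ(lowervp, ap->a_uio, ap->a_ioflag, ap->a_cred));
    }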

One operation that will potentially be slower with the DMFS layer is in-kernel vnode allocation, such as results from a lookup operation. Allocating the in-kernel DMFS node requires reading the metadata for that node from the metadata file.
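
As a rough sketch of why this allocation is more expensive, the code below shows one way a fixed-size metadata record could be read from the metadata file when a DMFS node is set up. The structure layout and function names here are assumptions made for illustration, not the real DMFS implementation.

    /*
     * Hypothetical sketch: read the metadata record for a file when its
     * in-kernel DMFS node is allocated (e.g. during lookup).  The record
     * is assumed to be fixed-size and indexed by the inode number of the
     * underlying file; structure and names are illustrative only.
     */
    struct dmfs_metadata {
            u_int32_t dm_flags;             /* residency / archive state */
            u_int64_t dm_resident_len;      /* bytes resident on disk */
            /* ... tape location, etc. ... */
    };

    static int
    dmfs_read_metadata(struct vnode *metavp, ino_t ino,
        struct dmfs_metadata *dmp, struct ucred *cred, struct proc *p)
    {
            off_t off = (off_t)ino * sizeof(*dmp);

            /*
             * One synchronous read per node allocation; this is the extra
             * work a lookup through the DMFS layer can incur.  (Exact
             * vn_rdwr() arguments vary across NetBSD versions.)
             */
            return (vn_rdwr(UIO_READ, metavp, (caddr_t)dmp, sizeof(*dmp),
                off, UIO_SYSSPACE, IO_NODELOCKED, cred, NULL, p));
    }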

We have performed extensive usage testing of our DMFS layer. We have deployed two internal beta-test systems using two different metadata storage formats (the one described here and a predecessor that stored metadata in a large-inode FFS). In this testing, which included generating over two million files on a 155 GB file system, none of the users complained that the DMFS layer felt slow. Obviously, an attempt to access a non-resident file caused a delay, but that was due to the mechanical delay of the robotics retrieving a tape.

We have performed a limited amount of quantitative testing to compare the performance of reading and writing a fully resident file with and without the DMFS layer mounted. I performed three tests, all using dd to transfer a file either from /dev/zero or to /dev/null; the tests differed only in the block size used for the operations.
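
For concreteness, the following userland program approximates what one of these dd-based write tests does (roughly the equivalent of dd if=/dev/zero of=testfile bs=64k count=10000). The file name, write size, and count are illustrative; the actual tests simply timed dd itself.

    /*
     * Rough userland equivalent of one of the dd-based write tests.
     * File name, write size, and count below are illustrative only.
     */
    #include <sys/time.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int
    main(void)
    {
            const size_t bs = 64 * 1024;    /* write size: 64k */
            const long count = 10000;       /* 10,000 writes = 640 MB */
            char *buf = calloc(1, bs);      /* zero-filled, like /dev/zero */
            struct timeval start, end;
            long i;
            int fd;

            fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0 || buf == NULL) {
                    perror("setup");
                    return 1;
            }

            gettimeofday(&start, NULL);
            for (i = 0; i < count; i++) {
                    if (write(fd, buf, bs) != (ssize_t)bs) {
                            perror("write");
                            return 1;
                    }
            }
            close(fd);
            gettimeofday(&end, NULL);

            printf("%.1f seconds elapsed\n",
                (end.tv_sec - start.tv_sec) +
                (end.tv_usec - start.tv_usec) / 1e6);
            free(buf);
            return 0;
    }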

All tests were performed on an IDE disk in an x86-based computer running a modified version of NetBSD 1.4. The file system was built using the default parameters of 8k blocks and 1k fragments. One test was to write a 640 MB file using a write size of 64k. I observed a marked disparity in new-file creation performance over time: initially file creation took 72 seconds, while later creations took 82 seconds. As a comparable slowdown was observed for creation both with and without the DMFS layer, I attribute this degradation to the disk and the file system rather than to the DMFS layer. The exact origin is not important, but this observation motivated performing all further tests by overwriting an existing file.

I timed three creations without the DMFS layer, and two with it. The average times are shown on the first line of Table 1. I do not believe that the presence of the DMFS layer actually improved FFS performance; rather, the variability of the times reflects the simplicity of the tests. However, the tests indicate that with 64k writes the extra overhead of the DMFS layer was not noticeable, and that I/O scheduling and device performance add a certain amount of variability to the measurements.

As shown on line two of Table 1, overwriting a 1000 MB file with 1024k writes took the same amount of time both with and without the DMFS layer. This example is similar to the previous one in that the extra overhead of the DMFS layer was not noticeable.

Both of the above tests used large write sizes to maximize performance. As such, they minimized the number of times the DMFS layer was called and thus the impact of its additional computations. To better measure the per-call overhead, I also tried writing with a smaller block size. Line three of Table 1 shows the average times I observed when using 8k writes. Here too, no statistically significant difference was observed.

As I am using layered file system technology, I expect a certain amount of overhead when accessing fully resident files. The rule-of-thumb estimate I am familiar with is that this overhead should be on the order of one to two percent. My simple tests measured less, and I believe that the rule-of-thumb figure of one to two percent is a good upper bound.

 
Table 1: Average operation times with standard deviations (seconds)

                                          Without DMFS    With DMFS
    Creating with 10,000 64k writes       72 ± 1          71 ± 1.4
    Overwriting with 1000 1024k writes    116             116
    Overwriting with 100,000 8k writes    88.4 ± 1.7      88.75 ± 0.5


