Next: 10. Acknowledgments Up: Fast Indexing: Support for Previous: 8. Conclusions

   
9. Future Work

We are investigating methods of improving the performance of writes in the middle of files by decoupling the order of the bytes in the encoded file from their order in the original file. With the orders decoupled, a write in the middle of a file could be redirected elsewhere--say to the end of the file (similar to a journal) or to an auxiliary file. Another alternative is to structure the encoded file differently: instead of a sequential set of blocks, it could be organized as a B-tree or hash table, in which insertions in the middle cost sub-linear time. These methods would let us avoid shifting bytes outward to make space for larger encoded units. However, if we begin storing many encoded chunks out of order, large files could become fragmented; we would then need a way to compact, or coalesce, these chunks back into a single sequential order.
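The journal-style scheme above can be illustrated with a small sketch. This is not the paper's implementation (which is a kernel-level stackable file system written in C); it is a hypothetical user-level model in which an index records each chunk's logical position, a write in the middle appends the new encoded chunk instead of shifting the tail, and a compaction pass coalesces the live chunks back into sequential order. All names here (ChunkedFile, rewrite_chunk, compact) are invented for illustration.

```python
# Hypothetical sketch of out-of-order encoded chunks with an index.
# A rewrite in the middle appends the new bytes (journal-style) and
# repoints the index entry; compact() restores sequential layout.

class ChunkedFile:
    def __init__(self):
        self.store = bytearray()   # physical encoded data, possibly out of order
        self.index = []            # logical chunk order -> (offset, length)

    def append_chunk(self, data: bytes) -> None:
        self.index.append((len(self.store), len(data)))
        self.store += data

    def rewrite_chunk(self, i: int, data: bytes) -> None:
        # Out-of-place update: append the new encoded bytes instead of
        # shifting the rest of the file outward, then repoint the index.
        self.index[i] = (len(self.store), len(data))
        self.store += data

    def read_all(self) -> bytes:
        # Logical contents are reconstructed by following the index.
        return b"".join(self.store[off:off + n] for off, n in self.index)

    def compact(self) -> None:
        # Coalesce live chunks back into a single sequential order,
        # reclaiming the space left by superseded chunks.
        new_store = bytearray()
        new_index = []
        for off, n in self.index:
            new_index.append((len(new_store), n))
            new_store += self.store[off:off + n]
        self.store, self.index = new_store, new_index
```

The trade-off the text describes is visible here: rewrite_chunk is O(new chunk size) rather than O(file size), but superseded chunks leave dead space until compact() runs.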

An important optimization we plan to implement is to avoid extra copying of data into temporary buffers. Such copying is needed only when an encoded buffer is written in the middle of a file and its encoded length is greater than its decoded length; in that case we must shift some data outward in the encoded data file to make room for the new encoded data. We can optimize this code path to avoid the temporary copies when files are appended to, or are newly created and written sequentially.
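The distinction between the two paths can be sketched as follows. This is an illustrative model, not the paper's kernel code: the hypothetical write_encoded replaces an old encoded chunk with a new one of possibly different length, taking a no-copy fast path when the write lands at end-of-file (an append or a sequential write to a new file) and falling back to a tail-shifting slow path that needs a temporary buffer otherwise.

```python
# Hypothetical sketch: replace an encoded chunk of old_len bytes at
# offset off with new data, inside an in-memory encoded file `buf`.

def write_encoded(buf: bytearray, off: int, old_len: int,
                  data: bytes) -> None:
    if off == len(buf):
        # Fast path: append or sequential write at end-of-file.
        # No existing data follows the write, so nothing is copied.
        buf += data
        return
    # Slow path: a middle-of-file write whose length differs from the
    # old chunk's. The tail must be saved in a temporary buffer and
    # shifted outward (or inward) to fit the new encoded length.
    tail = bytes(buf[off + old_len:])   # temporary copy of shifted data
    del buf[off:]
    buf += data
    buf += tail
```

The optimization the text proposes is precisely to detect the first case and skip the temporary-buffer machinery of the second.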


Erez Zadok
2001-05-03