
2.1 Caching

It has already been shown that caching is a good way to increase the performance of disk operations [13]. In our scenario, a cache for swapped pages should also increase swapping performance, provided a few problems can be solved. Such a cache would decrease the number of disk reads, since some of the requested pages might be found in the cache. Swapping out pages could also take advantage of the cache, as a swapped-out page might be freed before it ever reaches the disk. Furthermore, if the pages do have to go to the disk, the system can write many of them together in a single request. If all of them can be written sequentially on the disk, the seek latency is paid only once per write request instead of once per page.

Before we continue, it is a good time to go through some terminology that will be helpful throughout the rest of the paper.

Page:
The virtual memory of applications is divided into portions of 4Kbytes. Each of these portions is known as a page.

Buffer:
A buffer, or cache buffer, is a portion of 4Kbytes of memory where pages are stored before they are sent to the disk.

Disk block:
This term refers to the disk portion where the information of a buffer is stored. This means that disk blocks are also 4Kbytes in size. We should keep in mind that this term refers neither to sectors nor to file-system blocks.



Toni Cortes
Tue Apr 27 17:43:22 MET DST 1999