
1 Introduction

 

There are many applications that use large amounts of memory. These large applications rely on the swapping mechanism to run because the available physical memory is not enough for them [12, 10]. The same problem appears when we try to run, on a laptop, the applications we normally run on a desktop computer: these applications will rely on the swapping mechanism as laptop computers usually have less physical memory than desktop ones. Finally, multi-user environments tend to be heavily loaded, and their applications have to swap out part of their memory so that all of them can run concurrently [16]. In all these cases, the performance of the applications is much lower than what they would achieve if no swapping were needed. This happens because the swapping mechanism has to access the disk to store the pages that do not fit in memory. It is clear that these applications, and the whole system, would benefit from a faster swapping mechanism.

If we examine the same problem from a different point of view, we observe that increasing the number of pages that fit in the swap space, without increasing the number of blocks in the swap partition, would also be quite beneficial. We could run the same applications on a laptop as on a desktop system; remember that laptops also have smaller disks than desktop machines. This increase in swap space would also help multi-user systems avoid running out of memory. Finally, out-of-core applications could be programmed more easily as the global-memory restriction would not be so important.

Nowadays it is quite common to continue the office work at home. This usually means using large applications on a Linux box. These large applications fit well in the office machines but are too large to run efficiently on a smaller Linux box. In these cases, a fast swapping mechanism would be very beneficial as those applications would run faster and working at home would be less "painful". Furthermore, increasing the swap space at no cost would allow these kinds of users to run applications that would normally not fit in their home machines.

These performance and space problems have motivated this work and its objectives. The first, and most important, objective is to speed up the swapping mechanism. This will increase the performance of the applications that, for whatever reason, have to keep part of their memory in the swap space. It is also an objective of this paper to increase the size of the memory offered to the applications without increasing the number of disk blocks in the swap partition. It is important to notice that should these two objectives be in conflict, we will favor performance over capacity. Finally, we want to achieve both improvements with the minimum number of changes to the original Linux kernel.

The main idea used to accomplish both objectives consists of compressing the pages that have to be swapped out. This will increase the number of pages that can be placed in the swap partition. Furthermore, it will also allow us to build a cache of compressed pages that will decrease the number of times the system has to access the swap device. It is important to notice that previous studies show that good compression ratios can be achieved when compressing memory pages [7]. The idea we present in this paper is similar, in essence, to the one proposed by Douglis [4], but some improvements and modifications have been made (see Section 5). We believe that now is a good time to reevaluate the results obtained in that previous work, as the technology has improved significantly, which means that compressing and decompressing pages can be done much more efficiently.
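
To make the idea more concrete, the following user-space sketch shows what compressing a page on swap-out and caching the compressed copy might look like. It is an illustration only: the structure and function names (swap_cache_entry, cache_store_page, cache_load_page) are hypothetical and do not correspond to the kernel implementation described in this paper, and zlib's compress2/uncompress are used simply as a stand-in for whichever compression algorithm the mechanism actually employs.

/*
 * Illustrative sketch (not the paper's kernel code): compress a page
 * on swap-out, keep the compressed copy in an in-memory cache, and
 * decompress it again on swap-in. All names are hypothetical.
 */
#include <stdlib.h>
#include <zlib.h>

#define PAGE_SIZE 4096

struct swap_cache_entry {
    unsigned long  page_id;  /* which virtual page this entry holds  */
    uLongf         clen;     /* size of the compressed data in bytes */
    unsigned char *cdata;    /* compressed page contents             */
};

/* Compress one page and store it in the given cache slot.
 * Returns 0 on success, -1 on failure (the caller would then fall
 * back to writing the uncompressed page to the swap device). */
static int cache_store_page(struct swap_cache_entry *e,
                            unsigned long page_id,
                            const unsigned char page[PAGE_SIZE])
{
    uLongf clen = compressBound(PAGE_SIZE);
    unsigned char *buf = malloc(clen);
    if (!buf)
        return -1;

    if (compress2(buf, &clen, page, PAGE_SIZE, Z_BEST_SPEED) != Z_OK) {
        free(buf);
        return -1;
    }

    e->page_id = page_id;
    e->clen    = clen;
    e->cdata   = realloc(buf, clen);   /* keep only the bytes we need */
    if (!e->cdata)
        e->cdata = buf;
    return 0;
}

/* Decompress a cached page back into memory on swap-in.
 * Returns 0 on success, -1 on error. */
static int cache_load_page(const struct swap_cache_entry *e,
                           unsigned char page[PAGE_SIZE])
{
    uLongf dlen = PAGE_SIZE;

    if (uncompress(page, &dlen, e->cdata, e->clen) != Z_OK || dlen != PAGE_SIZE)
        return -1;
    return 0;
}

In the kernel, such a cache sits between the virtual-memory system and the swap device, so a swapped-out page only reaches the disk when it cannot be kept compressed in memory.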

This paper is divided into 6 sections. In Section 2, we describe the concepts and ideas on which this work is based. In this section, we also present some preliminary results that guide the final design. Section 3 gives a detailed overview of the way the mechanism works. Section 4 presents the benchmarks used and the results obtained while running them on our system. In Section 5, we present the most significant work already done in the area. Finally, Section 6 presents the main conclusions that can be extracted from this paper.


