Next: Combining Policing and Priority Up: Experimental Results Previous: Impact of Burst Size


Prioritized Listen Queue: Simple Priority

With TCP SYN policing, one must limit the greedy non-preferred clients to a meaningful rate during overload. In many cases it is simpler to instead give the preferred clients a higher absolute priority. We demonstrate next that the prioritized listen queue provides service differentiation, especially when the listen queue is long.
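The idea can be sketched at user level as a pending-connection queue ordered by priority rather than arrival time. The following is a minimal Python model; the class and method names are our own, and the paper's actual mechanism reorders the TCP listen queue inside the kernel rather than in application code:

```python
import heapq
from itertools import count

class PrioritizedListenQueue:
    """Hands out pending connections in strict priority order
    instead of FIFO (user-level sketch of the kernel mechanism)."""

    def __init__(self, maxlen):
        self.maxlen = maxlen
        self._heap = []
        self._seq = count()          # preserves FIFO order within a class

    def enqueue(self, conn, priority):
        """Queue a connection; lower number means higher priority."""
        if len(self._heap) >= self.maxlen:
            return False             # queue full: connection is dropped
        heapq.heappush(self._heap, (priority, next(self._seq), conn))
        return True

    def accept(self):
        """Return the highest-priority pending connection, if any."""
        if not self._heap:
            return None
        _, _, conn = heapq.heappop(self._heap)
        return conn
```

Within a priority class the tie-breaking sequence number keeps arrivals in FIFO order, so reordering only happens across classes.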

In our experiments we classify clients into three priority levels. The clients at each priority level are generated by a separate Webstone benchmark instance that requests an 8 KB file. We measure the client throughput of each priority level while varying the total number of clients, keeping the number of clients in each class equal.

In the first experiment, the Apache server is configured to spawn a maximum of 50 server processes. The results in Figure 7 show that when the total number of clients is small, all priority levels achieve similar throughput. With few clients, server processes are always free to handle incoming requests; the listen queue therefore remains short and almost no reordering occurs. As the number of clients increases, the listen queue builds up, since there are fewer Apache processes than concurrent client requests. With reordering, the throughput received by the high-priority clients then increases, while that of the two lower-priority classes decreases. Figure 7 shows that with more than 30 Webstone clients per class, only the high-priority clients are served, while the lower-priority clients receive almost no service.
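The qualitative trend can be reproduced with a toy closed-loop model. Everything about the model below is a simplifying assumption, not the paper's measurement setup: each client keeps exactly one request outstanding, and 50 server slots drain a strict-priority queue once per round.

```python
import heapq
from itertools import count

def throughput_share(clients_per_class, servers=50, rounds=1000):
    """Toy closed-loop model: each client keeps one request
    outstanding; 'servers' requests are taken from the head of a
    strict-priority listen queue per round, and served clients
    resubmit in the next round. Returns per-class throughput shares."""
    seq = count()
    queue = []          # heap of (priority, arrival-order) entries
    served = [0, 0, 0]  # requests completed per priority class
    for prio in range(3):
        for _ in range(clients_per_class):
            heapq.heappush(queue, (prio, next(seq)))
    for _ in range(rounds):
        in_service = []
        for _ in range(servers):
            if not queue:
                break   # all pending requests already picked up
            prio, _ = heapq.heappop(queue)
            served[prio] += 1
            in_service.append(prio)
        for prio in in_service:
            heapq.heappush(queue, (prio, next(seq)))  # resubmit
    total = sum(served)
    return [s / total for s in served]
```

With 10 clients per class the queue drains every round and the shares are equal; with 30 per class the high-priority class takes the largest share while the lowest class gets nothing. The toy model overstates the middle class's share, but it captures the starvation of the lowest class seen in Figure 7.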

Figure 8 illustrates the effect on the response times observed by clients of the three priority classes. As the number of clients increases across all priority classes, the response time of the lower-priority classes increases exponentially, whereas that of the high-priority class increases only sub-linearly. When the number of high-priority requests increases, the lower-priority requests are pushed back in the listen queue, which increases their response times. The response times of the high-priority requests also rise somewhat, because more of them are serviced by server processes running in parallel and competing for the CPU.

We also observed that when the number of high priority requests was fixed and the lower priority request rate was steadily increased, the response time of the high priority requests remained unaffected.

[Figure 7: Throughput of the three priority classes with 50 Apache processes. The number of clients in each class remains equal. (figures/prio_50.eps)]

The priority-based approach enables us to give preferred clients low delay and high throughput, independent of the requests or request patterns of other clients. However, one may need many priority classes to support different levels of service. The main drawback of a simple priority ordering is that it provides no protection against starvation of low-priority requests.
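The starvation risk is easy to see in a small sketch. The arrival pattern below is hypothetical, chosen purely for illustration: as long as high-priority connections arrive at least as fast as they are accepted, a queued low-priority connection is never reached.

```python
import heapq
from itertools import count

# Hypothetical arrival pattern: one low-priority request waits behind
# a stream of high-priority arrivals that come in at least as fast as
# they are served, so it is never dequeued.
seq = count()
queue = [(1, next(seq), "low-0")]            # (priority, arrival, name)

served = []
for step in range(100):
    # One high-priority arrival per service slot keeps the head occupied.
    heapq.heappush(queue, (0, next(seq), f"high-{step}"))
    served.append(heapq.heappop(queue)[2])

assert "low-0" not in served                 # the low-priority request starves
```

Any scheme that bounds this waiting time, for example by aging queued entries or reserving a fraction of accepts for lower classes, trades some of the strict-priority guarantee for freedom from starvation.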

[Figure 8: Response times of the three priority classes with 50 Apache processes. The number of clients in each class remains equal. (figures/prio_resp50.eps)]


Renu Tewari
2001-05-01