
Heuristics

We begin by using simulation to evaluate a naive heuristic, called Random, that serves as a basis for comparison. It is not a greedy heuristic and does not reference the advertised configurations. Instead, $h$ simply chooses a random subset of $\cal H$ of a given size that contains $h$.
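As a concrete illustration, the following is a minimal Python sketch of Random; the `hosts` list and `core_size` parameter are names introduced only for this example, not part of the system.

    import random

    def random_core(h, hosts, core_size=5):
        """Random heuristic (sketch): pick a random subset of the host
        population of the given size that contains h, ignoring the
        advertised configurations entirely."""
        others = [x for x in hosts if x is not h]
        return [h] + random.sample(others, core_size - 1)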

The first row of Table 3 shows the results of Random using one run of our simulator. We set the size of the cores to 5, i.e., Random chose 5 random hosts to form a core. The coverage of 0.977 may seem high, but many cores still have uncovered attributes, and choosing a core size smaller than five results in even lower coverage. The load is 12, which is significantly higher than the lower bound of 5.

Table 3: A typical run of the heuristics.

    Heuristic    Core size   Coverage   Load
    Random         5          0.977      12
    Uniform        2.56       0.9997    284
    Weighted       2.64       0.9995     84
    DWeighted      2.58       0.9997     91


Our first greedy heuristic, Uniform (``uniform'' selection among operating systems), operates as follows. First, it chooses a host with an operating system different from $h.os$ to cover this attribute. Then, for each attribute $a \in h.\mbox{\emph{apps}}$, it chooses at random both a container $c \in \mathcal{C} \setminus \{m_c(h.os)\}$ and a sub-container $sc \in m_s(c) \setminus \{m_h(c, a)\}$. Finally, it chooses a host $h^\prime$ at random from $sc$. If $a \not\in h^\prime.apps$, then it includes $h^\prime$ in Core$(h)$. Otherwise, it tries again by choosing a new container $c$, sub-container $sc$, and host $h^\prime$ at random. Uniform repeats this procedure diff_OS times in an attempt to cover $a$ with Core$(h)$. If it fails to cover $a$, then the heuristic tries up to same_OS times to cover $a$ by choosing a sub-container $sc \in m_s(m_c(h.os))$ at random and a host $h^\prime$ at random from $sc$.

The goal of having two steps, one with diff_OS tries and another with same_OS tries, is to first exploit diversity across operating systems, and then to exploit diversity among hosts within the same operating system group. Referring back to Figure 1, the set of prevalent services among hosts running the same operating system varies across the different operating systems. In the case that an attribute cannot be covered with hosts running other operating systems, the diversity within an operating system group may still be sufficient to find a host $h^\prime$ without attribute $a$.

In all of our simulations, we set diff_OS to 7 and same_OS to 4. After experimentation, we found that these values provide a good trade-off between the number of wasted tries and the coverage obtained. However, we have yet to study how to choose good values of diff_OS and same_OS in general.

Pseudo-code for Uniform is as follows.


Algorithm Uniform on input $h$:
    $core \leftarrow \{h\} \cup \{$a host chosen at random with an operating system different from $h.os\}$
    for each attribute $a \in h.apps$ not yet covered by $core$:
        $i \leftarrow 0$
        while $a$ is uncovered and $i <$ diff_OS:
            choose $c \in \mathcal{C} \setminus \{m_c(h.os)\}$, $sc \in m_s(c) \setminus \{m_h(c, a)\}$, and $h^\prime \in sc$ at random
            if $a \not\in h^\prime.apps$ then add $h^\prime$ to $core$
            $i \leftarrow i + 1$
        while $a$ is uncovered and $i <$ diff_OS $+$ same_OS:
            choose $sc \in m_s(m_c(h.os))$ and $h^\prime \in sc$ at random
            if $a \not\in h^\prime.apps$ then add $h^\prime$ to $core$
            $i \leftarrow i + 1$
    return $core$
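To make the procedure concrete, the following Python sketch implements Uniform as described above. It is only illustrative: the `containers` structure (a dictionary mapping each operating system to a dictionary of attribute-to-host-set sub-containers) and host objects with .os and .apps fields are assumptions about the representation, not the simulator's actual data structures.

    import random

    DIFF_OS, SAME_OS = 7, 4  # retry limits used in our simulations

    def uniform_core(h, containers):
        """Sketch of the Uniform heuristic.  `containers` maps each operating
        system to a dict {attribute: set of hosts advertising that attribute};
        each host has fields .os and .apps (its set of attributes)."""
        other_os = [os for os in containers if os != h.os]  # assumes >= 2 OS groups

        def pick(os_name, skip_attr):
            # Choose a sub-container (attribute bucket) other than skip_attr,
            # then a host from it, both uniformly at random.
            subs = [a for a in containers[os_name]
                    if a != skip_attr and containers[os_name][a]]
            if not subs:
                return None
            return random.choice(list(containers[os_name][random.choice(subs)]))

        # Cover the operating-system attribute with a host running another OS.
        core = {h}
        first = pick(random.choice(other_os), None)
        if first is not None:
            core.add(first)

        for a in h.apps:
            if any(a not in x.apps for x in core if x is not h):
                continue                      # already covered by an earlier pick
            covered = False
            for _ in range(DIFF_OS):          # first try other operating systems
                cand = pick(random.choice(other_os), a)
                if cand is not None and a not in cand.apps:
                    core.add(cand)
                    covered = True
                    break
            for _ in range(SAME_OS):          # then fall back to the same OS group
                if covered:
                    break
                cand = pick(h.os, a)
                if cand is not None and a not in cand.apps:
                    core.add(cand)
                    covered = True
        return core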

The second row of Table 3 shows the performance of Uniform for a representative run of our simulator. The core size is close to the minimum size of two, and the coverage is very close to the ideal value of one. This means that using Uniform results in significantly better capacity and resilience than Random. On the other hand, the load is very high: at least one host participates in 284 cores. The load is so high because $h$ chooses containers and sub-containers uniformly. When constructing the cores for hosts of a given operating system, the other containers are each referenced roughly the same number of times. Thus, Uniform considers hosts running less prevalent operating systems for inclusion in cores a disproportionately large number of times. A similar argument holds for hosts running less popular applications.

This behavior suggests refining the heuristic to choose containers and sub-containers weighted by the popularity of the corresponding operating systems and applications. Given a container $c$, let $N_c(c)$ be the number of distinct hosts in the sub-containers of $c$, and given a set of containers $C$, let $N_c(C)$ be the sum of $N_c(c)$ for all $c \in C$. The heuristic Weighted (``weighted'' OS selection) is the same as Uniform except that, for the first diff_OS attempts, $h$ chooses a container $c$ with probability $N_c(c)/N_c({\cal C} \setminus \{m_c(h.os)\})$. Heuristic DWeighted (``doubly-weighted'' selection) takes this a step further. Let $N_s(c, a)$ be $\vert m_h(c, a)\vert$ and $N_s(c, A)$ be the size of the union of $m_h(c, a)$ for all $a \in A$. Heuristic DWeighted is the same as Weighted except that, when considering attribute $a \in h.apps$, $h$ chooses a host from sub-container $m_h(c, a')$ with probability $N_s(c, a')/N_s(c, {\cal A} \setminus \{a\})$.
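To illustrate, the weighted choices could be implemented as in the sketch below, reusing the hypothetical `containers` structure from the earlier sketch. It simply draws proportionally to the weights $N_c(c)$ and $N_s(c, a')$ rather than computing the normalizations explicitly, which is a simplification of the probabilities stated above.

    import random

    def weighted_container_choice(containers, h_os):
        """Weighted: pick a container c (an OS group other than h's own) with
        probability proportional to N_c(c), the number of distinct hosts in c."""
        candidates = [os for os in containers if os != h_os]
        weights = [len(set().union(*containers[os].values())) for os in candidates]
        return random.choices(candidates, weights=weights, k=1)[0]

    def weighted_subcontainer_choice(container, a):
        """DWeighted (simplified): pick a sub-container m_h(c, a') with weight
        N_s(c, a') = |m_h(c, a')|, excluding the sub-container of attribute a."""
        subs = [attr for attr in container if attr != a]
        weights = [len(container[attr]) for attr in subs]
        return random.choices(subs, weights=weights, k=1)[0]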

In the third and fourth rows of Table 3, we show a representative run of our simulator for both of these variations. The two variations result in core sizes and coverage comparable to Uniform, but significantly reduce the load. The load is still very high, though: at least one host ends up being assigned to over 80 cores.

Another approach to avoiding a high load is simply to disallow it, at the risk of decreasing coverage. That is, for some value of $L$, once a host $h^\prime$ is included in $L$ cores, $h^\prime$ is removed from the structure of advertised configurations. Thus, the load of any host is constrained to be no larger than $L$.
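A minimal sketch of this bookkeeping, again over the hypothetical `containers` structure used in the sketches above:

    def record_membership(host, load, containers, L):
        """Enforce the load limit: count how many cores `host` belongs to and,
        once it reaches L, withdraw it from the advertised-configuration
        structure so that no later core can select it."""
        load[host] = load.get(host, 0) + 1
        if load[host] >= L:
            for subcontainers in containers.values():
                for members in subcontainers.values():
                    members.discard(host)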

What is an effective value of $L$ that reduces load while still providing good coverage? We answer this question by first establishing a lower bound on the value of $L$. Suppose that $a$ is the most prevalent attribute (either service or operating system) among all attributes, and that it is present in a fraction $x$ of the host population. Each of the $\vert\mathcal{H}\vert \cdot x$ hosts with attribute $a$ needs at least one core member that does not have $a$, and only $\vert\mathcal{H}\vert \cdot (1-x)$ such hosts exist. As a simple application of the pigeonhole principle, some host must therefore be in at least $l$ cores, where $l$ is defined as:

$\displaystyle l = \left\lceil \frac{\vert\mathcal{H}\vert\cdot x}{\vert\mathcal{H}\vert\cdot (1-x)}\right\rceil = \left\lceil \frac{x}{1-x}\right\rceil \qquad (1)$

Thus, the value of $L$ cannot be smaller than $l$. Using Table 2, we have that the most prevalent attribute (port 139) is present in 55.3% of the hosts. In this case, $l = 2$.
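For concreteness, the bound can be checked numerically with the port 139 prevalence from Table 2 (a small Python check, not part of the simulator):

    import math

    x = 0.553                   # fraction of hosts with the most prevalent attribute (port 139)
    l = math.ceil(x / (1 - x))  # Equation (1)
    print(l)                    # prints 2: the load limit L cannot be smaller than 2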

Using simulation, we now evaluate our heuristics in terms of core size, coverage, and load as a function of the load limit $L$. Figures 4-7 present the results of our simulations. In these figures, we vary $L$ from the minimum of 2 up to a high load of 10. All the points shown in these graphs are averages of eight simulated runs with error bars (although the bars are too narrow to be seen in some cases). For Figures 4-6, we use the standard error to determine the limits of the error bars, whereas for Figure 7 we use the maximum and minimum observed among our samples. When the load limit is used as a threshold, the order in which hosts request cores from $\cal H$ affects the results. In our experiments, we therefore randomly choose eight different orders of enumerating $\cal H$ for constructing cores; for each heuristic, each run of the simulator uses a different order. Finally, we set the core size of Random equal to the load limit $L$ to illustrate its effectiveness across a range of core sizes.

Figure 4: Average core size.

Figure 4 shows the average core size for the four algorithms for different values of $L$. According to this graph, Uniform, Weighted, and DWeighted do not differ much in terms of core size. The average core size of Random increases linearly with $L$ by design.

Figure 5: Average coverage.

In Figure 5, we show results for coverage. Coverage is slightly below $1.0$ for Uniform, Weighted, and DWeighted when $L$ is greater than or equal to three. For $L=2$, Weighted and DWeighted still have coverage slightly below $1.0$, but Uniform does significantly worse; weighted selection is thus most useful when $L$ is small. Random improves coverage with increasing $L$ because the size of the cores increases. Note that, to reach the same coverage obtained by the other heuristics, Random requires a large core size of $9$.

There are two other important observations to make about this graph. First, coverage is roughly the same for Uniform, Weighted, and DWeighted when $L > 2$. Second, as $L$ continues to increase, there is a small decrease in coverage. This is due to the nature of our traces and to the random choices made by our algorithms. Ports such as 111 (portmapper, rpcbind) and 22 (sshd) are open on several of the hosts with operating systems other than Windows. For small values of $L$, these hosts rapidly reach their threshold and are withdrawn. Consequently, when hosts that do have these services as attributes request a core, fewer hosts with these same attributes remain available, and the attributes are easier to cover. For larger values of $L$, on the other hand, these hosts remain available longer, slightly increasing the probability that not all attributes are covered for hosts running an operating system other than Windows. We observed this phenomenon exactly with ports 22 and 111 in our traces.

This same phenomenon can be observed in Figure 6, where we plot the average fraction of hosts that are not fully covered, which is an alternative way of visualizing coverage. A share of the host population is not fully covered, but this share is very small for Uniform and its variations. Such a set is likely to exist because of the non-deterministic choices our heuristics make when forming cores. These uncovered hosts, however, are not fully unprotected: from our simulation traces, we note that the average number of uncovered attributes is very small for Uniform and its variations. In all runs, only a few hosts do not have all their attributes covered, and in the majority of these instances there is just a single uncovered attribute.

Figure 6: Average fraction of uncovered hosts.

Finally, Figure 7 shows the resulting variance in load. Since the heuristics limit each host to be in no more than $L$ cores, the maximum load equals $L$. The variance indicates how fairly the load is spread among the hosts. As expected, Random does well, having the lowest variance among all the algorithms for all values of $L$. Ordering the greedy heuristics by their variance in load, we have Uniform $\succ$ Weighted $\succ$ DWeighted. This is not surprising, since we introduced weighted selection exactly to better balance the load. It is interesting to observe that, for every value of $L$, the load variance obtained for Uniform is close to $L$. This suggests that several hosts participate in no core at all while several others participate in $L$ cores.

Figure 7: Average load variance.

A larger variance in load may not be objectionable in practice as long as a maximum load is enforced. Given the extra work of maintaining the functions $N_s$ and $N_c$, the heuristic Uniform with a small $L$ ($L > 2$) is the best choice for our application. However, should load variance be an issue, one of the other heuristics can be used.

