
Diversity and Core Sizes

If one knew the probability of attack for each vulnerability, then given a target system resilience one could enumerate cores achieving that resilience. In our case, it is not clear how one would determine such probabilities. Instead, we define a core $c$ for a host $p$ to be a minimal set of hosts with the following properties: 1) $p \in c$; 2) for every attribute $a \in A$, either there is a host in $c$ that differs from $p$ in the value of $a$, or there is no host in the system that differs from $p$ in the value of $a$. Such a subset of hosts is a core for a host $p$ under the assumption that, in any Internet catastrophe, an attack targets a single attribute value. Although it is not hard to generalize this definition to allow for attacks targeted against multiple attribute values, in the rest of this paper we focus on attacks against a single attribute value.
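To make the definition concrete, the following Python sketch (our illustration, not code from the system; the dictionary representation of attribute configurations is an assumption) checks whether a set of hosts $c$ is a core for a host $p$:

def covers(p, c, hosts, attributes):
    # Property 2: for every attribute a, either some host in c differs
    # from p on a, or no host in the whole system does.
    for a in attributes:
        differs_in_c = any(q[a] != p[a] for q in c)
        differs_anywhere = any(q[a] != p[a] for q in hosts)
        if differs_anywhere and not differs_in_c:
            return False
    return True

def is_core(p, c, hosts, attributes):
    # c is a core for p if it contains p (property 1), satisfies
    # property 2, and is minimal: no proper subset that still
    # contains p satisfies property 2.
    if p not in c or not covers(p, c, hosts, attributes):
        return False
    return all(not covers(p, [q for q in c if q is not r], hosts, attributes)
               for r in c if r is not p)

Here each host is a dictionary mapping attribute names to values, and hosts is the list of all hosts in the system.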

Smaller cores mean less replication, which is desirable for reducing storage overhead. A core contains between 2 and $\vert A\vert+1$ hosts. If the hosts' attributes are well distributed, then cores will be small on average: for any host $p$, it is likely that there is a host $q$ that differs from $p$ in the value of every attribute, so that $p$ and $q$ constitute an orthogonal core. That is, a fair number of orthogonal cores are likely to exist. If there is less diversity, though, then the smallest cores may not be orthogonal for many hosts, thus increasing storage overhead.
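For instance, reusing the is_core sketch above with two hypothetical hosts whose attribute values are entirely our invention:

attributes = ["os", "web_server", "mail_server"]
p = {"os": "Linux",   "web_server": "Apache", "mail_server": "sendmail"}
q = {"os": "Windows", "web_server": "IIS",    "mail_server": "Exchange"}
hosts = [p, q]

# q differs from p on every attribute, so {p, q} is an orthogonal core.
print(is_core(p, [p, q], hosts, attributes))   # True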

A lack of diversity, especially when trying to keep core sizes small, can lead to a more severe problem. Suppose there are $k$ hosts $\{p_1, p_2, \ldots, p_k\}$ that all have the same value for some attribute $a$, and only one host $p$ that differs in the value of $a$. A core for each host $p_i$ hence contains $p$, meaning that $p$ will maintain copies for all of the $p_i$. Since the amount of disk space $p$ donates for storing backup data is fixed, each $p_i$ can use only $1/k$ of this space. In other words, if $p$ donates $F$ bytes of common storage to the system, then each $p_i$ can back up only $F/k$ bytes. Note that $k$ can be as large as the number of hosts, and so $F/k$ can be minuscule. In Example 3.1, host $H_1$ is the only one to have a different value for attribute ``Operating System'', and hence has to store copies for all the other hosts.
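To see how quickly the per-host quota degrades, a small back-of-the-envelope computation (the donation size $F$ here is an arbitrary choice for illustration):

F = 100 * 2**30                     # hypothetical donation: 100 GB
for k in (1, 10, 1000):
    quota_mb = F / k / 2**20        # F/k, expressed in MB
    print(f"k = {k:4d}: each p_i can back up {quota_mb:10.1f} MB")
# k =    1: each p_i can back up   102400.0 MB
# k =   10: each p_i can back up    10240.0 MB
# k = 1000: each p_i can back up      102.4 MB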

Characterizing the diversity of a set of hosts is a challenging task. In particular, considering all possible distributions of attribute configurations is not feasible. Instead, we define a measure $f$ that condenses the diversity of a system into a single number. According to our definition, a system with diversity $f$ is one in which a fraction $f$ of the hosts is characterized by a fraction $(1-f)$ of the combinations of attributes. Although this metric is coarse and does not capture all possible scenarios, it is expressive enough to enable one to observe how the behavior of a backup system is affected by skewed diversity. Note that $f$ lies in the interval $[0.5, 1)$: the value $f=0.5$ corresponds to a uniform distribution, and a value of $f$ close to 1 indicates a highly skewed diversity.
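One simple way to generate host populations with a given diversity $f$ is to draw a fraction $f$ of the hosts from a pool containing a fraction $(1-f)$ of all attribute combinations, and the remaining hosts from the rest. The metric does not prescribe a particular sampling procedure, so the Python sketch below is only one plausible realization:

import itertools, random

def skewed_population(n_hosts, attr_values, f, seed=0):
    # attr_values maps each attribute name to its possible values.
    rng = random.Random(seed)
    combos = list(itertools.product(*attr_values.values()))
    rng.shuffle(combos)
    # A fraction (1 - f) of the combinations...
    split = max(1, int((1 - f) * len(combos)))
    popular, rare = combos[:split], combos[split:]
    hosts = []
    for i in range(n_hosts):
        # ...is shared by a fraction f of the hosts.
        pool = popular if (i < f * n_hosts or not rare) else rare
        hosts.append(dict(zip(attr_values, rng.choice(pool))))
    return hosts

With $f=0.5$ the two pools are equal halves and the population is drawn uniformly; as $f$ approaches 1, most hosts crowd into a few configurations.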

We use this metric to study how storage overhead, storage load, and resilience vary with skew in diversity. We define the storage overhead of a host $p$ as the size of the core that $p$ uses to back up its data. Host $p$ in turn maintains copies for some number $k$ of other hosts; we define the storage load of $p$ to be this value $k$. The storage load may vary among hosts, so we define the storage load of the system as the maximum value of $k$ across all hosts; in the remainder of this paper, we refer to the storage load of the system as just storage load. Finally, resilience depends on the number of attributes covered in a core, and it decreases as the number of non-covered attributes increases. We therefore define the resilience of the system for a host $p$ as the percentage of attributes covered by the core $c$ that $p$ uses to back up its data.
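The following sketch computes these quantities, assuming cores have already been chosen; the mapping from host indices to cores is our own representation:

from collections import Counter

def resilience(p_idx, cores, hosts, attributes):
    # Percentage of attributes on which some member of p's core
    # differs from p.
    p = hosts[p_idx]
    covered = sum(1 for a in attributes
                  if any(hosts[q][a] != p[a] for q in cores[p_idx]))
    return 100.0 * covered / len(attributes)

def storage_load(cores):
    # Maximum, over all hosts, of the number of other hosts a host
    # stores copies for.
    load = Counter()
    for p_idx, core in cores.items():
        for q in core:
            if q != p_idx:
                load[q] += 1
    return max(load.values(), default=0)

# The storage overhead of host p is simply len(cores[p]).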

The problem of finding a smallest core given a set of hosts and an attribute configuration for each host, however, is NP-hard (by reduction from SET-COVER). For this reason, we use a randomized heuristic to find cores. This heuristic finds a core for a host $p$ as follows (a sketch in code appears after the list):


  1. It tries to find hosts that differ from $p$ in the value of every attribute, which would yield an orthogonal core. If there is more than one such host, it picks one at random;

  2. If it finds no such host in the previous step, then it randomly chooses hosts that differ from $p$ in at least one uncovered attribute, until a core is constructed or there are no hosts left to choose.
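The Python sketch below implements one possible reading of this heuristic, with the same dictionary representation of hosts used earlier; it is not the authors' implementation:

import random

def find_core(p_idx, hosts, attributes, rng=random):
    p = hosts[p_idx]
    others = [i for i in range(len(hosts)) if i != p_idx]
    # Step 1: look for a host that differs from p on every attribute,
    # which yields an orthogonal core of size 2.
    orthogonal = [i for i in others
                  if all(hosts[i][a] != p[a] for a in attributes)]
    if orthogonal:
        return [p_idx, rng.choice(orthogonal)]
    # Step 2: otherwise, add random hosts that cover at least one
    # still-uncovered attribute, stopping once every coverable
    # attribute is covered or no candidates remain.
    core, uncovered = [p_idx], set(attributes)
    rng.shuffle(others)
    for i in others:
        newly = {a for a in uncovered if hosts[i][a] != p[a]}
        if newly:
            core.append(i)
            uncovered -= newly
        if not uncovered:
            break
    return core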

This heuristic may not be the best; we have not yet done a thorough study of heuristics for finding cores. The results we present below, however, indicate that it performs well in terms of storage overhead and resilience.


