The CLVL abstraction provides the bundle an available bandwidth, $b$, which varies with time, and guarantees the entire bundle a target loss rate, $q$. If the traffic arrival rate of the bundle exceeds $b$, the excess traffic is dropped at the entry overlay node. The overlay node can employ any QoS scheduling discipline to distribute $b$ and the losses across the flows in the bundle. In particular, in a Diffserv-like model, if every packet is associated with a priority, the overlay node can use these priorities to preferentially drop packets and allocate bandwidth to different flows.
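To make the entry-node behavior concrete, the sketch below shows one way an entry overlay node could shed excess traffic while respecting packet priorities; the function name, the per-interval budget formulation, and the strict highest-priority-first policy are illustrative assumptions rather than the OverQoS implementation.

```python
def shed_excess(packets, budget_bits):
    """Illustrative sketch of priority-based dropping at the entry overlay node.

    packets:     list of (priority, size_bits, payload) tuples arriving in one
                 scheduling interval; higher priority = more important.
    budget_bits: b * interval, i.e. the bits the CLVL can carry this interval.

    Returns (forwarded, dropped). Names and interface are hypothetical.
    """
    forwarded, dropped, used = [], [], 0
    # Serve higher-priority packets first; drop whatever exceeds the budget.
    for prio, size, payload in sorted(packets, key=lambda p: -p[0]):
        if used + size <= budget_bits:
            forwarded.append(payload)
            used += size
        else:
            dropped.append(payload)
    return forwarded, dropped
```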
While the available bandwidth, $b$, of a CLVL bundle generally varies with time, it may be possible to statistically bound the minimum bandwidth of the bundle and thereby offer bandwidth guarantees to a fraction of OverQoS traffic. Given a small probability value, $\epsilon$, one can capture the variations of the available bandwidth on a CLVL using a distribution and determine a value $b_{min}$ such that $\Pr(b < b_{min}) \le \epsilon$, where $\epsilon$ represents the probability of not meeting the bandwidth guarantee $b_{min}$. If the resulting $b_{min}$ is a significant fraction of $b$, then OverQoS can provide statistical bandwidth guarantees by allocating bandwidth to flows within a CLVL as long as the total allocated bandwidth is less than $b_{min}$. Table 1 tabulates the variables we use in expressing the properties of a CLVL.
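As a concrete illustration of the definition of $b_{min}$, the snippet below estimates it as the $\epsilon$-quantile of measured bandwidth samples, so that at most a fraction $\epsilon$ of samples fall below the returned value. This is only a sketch of the statistical bound described above, not the estimation procedure used by OverQoS.

```python
def estimate_b_min(bandwidth_samples, epsilon):
    """Estimate b_min such that Pr(b < b_min) <= epsilon, using the empirical
    distribution of measured CLVL bandwidth samples (in bps).
    Hypothetical helper, shown only to illustrate the definition.
    """
    samples = sorted(bandwidth_samples)
    # Largest value that still leaves at most an epsilon fraction of the
    # samples strictly below it.
    k = int(epsilon * len(samples))
    return samples[k]

# Example: with epsilon = 0.01, b_min is roughly the 1st-percentile bandwidth,
# so the bandwidth guarantee is expected to hold about 99% of the time.
# b_min = estimate_b_min(measured_bw, 0.01)
```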
In practice, we observe that the value of $b_{min}$ across overlay links can be reasonably high, implying that OverQoS can indeed be used to provide meaningful statistical bandwidth guarantees to applications. Figure 2 shows the distribution of $b$ for three different overlay links traversing international links and broadband networks: Lulea (Sweden)-Korea, Mazu (Boston)-Cable Modem (SF), and Netherlands-Intel (SF). The values of $b_{min}$ across these links needed to provide such a guarantee are 160 Kbps, 420 Kbps, and 269 Kbps, respectively. Statistical bandwidth guarantees can be provided only to a subset of the OverQoS flows, potentially at the expense of other flows. Flows requiring guarantees should be given higher priority over other flows at an OverQoS node, and the remaining bandwidth is distributed among the other flows.
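One simple way to realize this division of bandwidth, sketched below under assumed interfaces, is to admit guaranteed flows only while their total reservation stays within $b_{min}$, and to share whatever bandwidth remains among the flows without guarantees; the equal-share rule for best-effort flows is an assumption made for illustration.

```python
def allocate(guaranteed_requests, best_effort_flows, b_min, b_current):
    """Illustrative admission and sharing policy (not the OverQoS algorithm).

    guaranteed_requests: list of (flow_id, requested_bps) for flows that
                         need a statistical bandwidth guarantee.
    best_effort_flows:   list of flow_ids without guarantees.
    b_min:               statistically guaranteed bandwidth of the CLVL (bps).
    b_current:           currently available CLVL bandwidth b (bps).
    """
    admitted, reserved = {}, 0
    for flow_id, req in guaranteed_requests:
        # Admit a guaranteed flow only while the total reservation fits
        # within b_min, so the guarantee holds with probability >= 1 - epsilon.
        if reserved + req <= b_min:
            admitted[flow_id] = req
            reserved += req
    # Distribute whatever remains of the current bandwidth equally among
    # the flows that do not require guarantees.
    leftover = max(b_current - reserved, 0)
    share = leftover / len(best_effort_flows) if best_effort_flows else 0
    best_effort = {flow_id: share for flow_id in best_effort_flows}
    return admitted, best_effort
```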