Figure 3 captures the interactions between the various components in the entry and exit overlay nodes. The entry node consists of two modules: one that implements the CLVL abstraction, and another that performs per-aggregate or per-flow traffic management. The first module communicates with the exit OverQoS node to estimate the link loss rate and delay. It uses this information to adapt the data traffic to conform to the CLVL abstraction. The second module allocates the capacity of the CLVL among competing traffic aggregates or flows. The exit OverQoS node is responsible for measuring the loss and delay characteristics and reconstructing lost packets if necessary. If the CLVL abstraction uses ARQ for loss recovery, the exit node propagates individual packet loss information to the entry node.
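To make the exit node's role concrete, here is a minimal sketch (names and structure are ours, not from the paper) of how an exit node might detect losses from sequence-number gaps, accumulate per-packet loss information for ARQ feedback to the entry node, and estimate the link loss rate:

```python
class ExitNode:
    """Illustrative exit-node bookkeeping: tracks sequence gaps to measure
    loss and to report individual lost packets back to the entry node."""

    def __init__(self) -> None:
        self.expected = 0        # next in-order sequence number expected
        self.received = 0        # packets actually delivered over the link
        self.losses = []         # sequence numbers to NACK to the entry node

    def on_packet(self, seq: int) -> None:
        # Any gap before `seq` means those packets were lost on the virtual link.
        if seq > self.expected:
            self.losses.extend(range(self.expected, seq))
        self.expected = max(self.expected, seq + 1)
        self.received += 1

    def loss_rate(self) -> float:
        # Estimated link loss rate: lost / (lost + delivered).
        sent = self.received + len(self.losses)
        return len(self.losses) / sent if sent else 0.0

# Usage: packet 2 never arrives, so it is recorded for ARQ recovery.
exit_node = ExitNode()
for s in (0, 1, 3, 4):
    exit_node.on_packet(s)
# exit_node.losses is now [2]; loss_rate() is 1/5.
```

In a real deployment the loss report would be aggregated and sent periodically rather than computed over the whole connection lifetime; this sketch only shows the bookkeeping.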
The entry node exerts control on the traffic in the bundle at two
levels of granularity: on the bundle as a whole, and on a per-flow
basis within the bundle. At both levels, the entry node can
control either the sending rate or the loss rate. The CLVL management
module at the entry node first determines the sending rate of the
bundle, c, using MulTCP [29] to emulate the aggregate behavior of
N virtual TCPs.
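The MulTCP window update can be sketched as follows. This is an illustrative simplification of MulTCP's AIMD rule, not the paper's implementation: per RTT without loss the window grows by N packets (one additive increase per virtual flow), and on a loss it is cut by 1/(2N) rather than halved, so the aggregate backs off like one of N competing TCPs:

```python
def multcp_update(cwnd: float, n_virtual: int, loss: bool) -> float:
    """Return the updated congestion window (in packets) for a bundle
    emulating n_virtual TCP flows, per a simplified MulTCP AIMD rule."""
    if loss:
        # Multiplicative decrease: give back only one virtual flow's share.
        return max(1.0, cwnd * (1.0 - 1.0 / (2.0 * n_virtual)))
    # Additive increase: N packets per RTT, one per virtual flow.
    return cwnd + n_virtual

def bundle_rate(cwnd: float, rtt_s: float, pkt_bytes: int = 1500) -> float:
    """Bundle sending rate c (bytes/sec) implied by the current window."""
    return cwnd * pkt_bytes / rtt_s

# Example: with N = 10, a 100-packet window shrinks to 95 packets on a
# loss (a 5% backoff) instead of the 50% cut a single TCP would take.
w_after_loss = multcp_update(100.0, 10, loss=True)
```

The gentler backoff is what lets the bundle claim roughly N TCP-fair shares of the underlying path.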
Next, it determines the level of redundancy r required to achieve a
certain target loss rate q, based on the loss characteristics
measured over the current window. The resulting available bandwidth
is estimated to be b = c(1 - r). The traffic management module at the entry
node then distributes the available bandwidth b among the individual
flows. If the net input traffic is larger than b, the entry node drops the
extra traffic and exercises control in distributing the losses
amongst the flows.
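The two computations above can be sketched as follows. This is a sketch under stated assumptions, not the paper's algorithm: losses are taken to be independent with rate p, redundancy comes from an (n, k) erasure code (a block is unrecoverable only if more than n - k of its n packets are lost), and the drop policy shown distributes losses in proportion to demand, which is only one possible choice:

```python
from math import comb

def residual_loss(n: int, k: int, p: float) -> float:
    """Post-FEC loss of an (n, k) erasure code under independent packet
    losses of rate p: decoding fails if more than n - k packets are lost."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(n - k + 1, n + 1))

def min_redundancy(n: int, p: float, q: float) -> float:
    """Smallest redundancy r = (n - k)/n whose residual loss meets the
    target q; scanning k downward tries the least redundant code first."""
    for k in range(n, 0, -1):
        if residual_loss(n, k, p) <= q:
            return (n - k) / n
    return 1.0  # even maximal redundancy cannot meet the target

def available_bandwidth(c: float, r: float) -> float:
    """Bandwidth left for data after redundancy: b = c(1 - r)."""
    return c * (1.0 - r)

def allocate(b: float, demands: dict) -> dict:
    """Split b among flows; when demand exceeds b, scale every flow down
    proportionally, so excess traffic is dropped in proportion to demand."""
    total = sum(demands.values())
    if total <= b:
        return dict(demands)   # everything fits, no drops needed
    scale = b / total
    return {flow: d * scale for flow, d in demands.items()}
```

A weighted or priority-based policy could replace `allocate` without changing the rest; the point is that the entry node, not the network, decides which flows absorb the losses.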