We set up experiments to evaluate the correctness and performance of TBR. We used a PIII-700MHz Linux laptop equipped with a D-Link DWL-650 card running the Hostap driver as the AP, and iPAQs equipped with Cisco-350 cards as competing nodes.
We ran each type of experiment under two AP configurations: one with TBR (Exp-TBR) and one without (Exp-Normal). Each data point is an average of 5 to 10 runs, and in each run each contending node sends about 2000 1500-byte packets. All throughputs reported are achieved TCP throughputs.
When the AP is run under the normal configuration, no queue is set up in the driver. Instead, the kernel interface queue (with a maximum size of 110 packets) is used to store packets. When the AP is run with TBR, n queues, each with a maximum size of 100/n packets, are set up inside the driver, and the kernel interface queue is set to 10. Thus, the total buffer space available under each scheme is the same.
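This buffer accounting can be sketched as follows; the even per-node split is our assumption, chosen so that both configurations total 110 packets of buffer space:

```python
def tbr_buffer_config(n_nodes, total_buf=110, kernel_q=10):
    # TBR configuration: a small kernel interface queue plus n per-node
    # driver queues that evenly split the remaining buffer space.
    per_node_q = (total_buf - kernel_q) // n_nodes
    return {"kernel_ifq": kernel_q,
            "per_node_q": per_node_q,
            "total": kernel_q + n_nodes * per_node_q}
```

For two competing nodes this yields a 10-packet kernel queue plus two 50-packet driver queues, matching the 110-packet interface queue of the normal configuration.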
Figure 8 compares the throughputs achieved by two competing nodes when the AP is configured with or without TBR. When competing nodes use the same data rate, Exp-TBR and Exp-Normal yield almost identical results, showing that TBR incurs little overhead.
When nodes use different data rates, the throughput achieved by each competing node, as well as the total throughput, differs significantly depending on whether TBR is used. As shown in Figure 9(a), when TBR is used, the total achieved throughput in the down-link direction increases markedly in the 5.5vs11, 2vs11, and 1vs11 cases.
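A back-of-the-envelope model shows why equalizing channel occupancy time raises total throughput when rates differ. The sketch below ignores MAC/PHY overheads and TCP dynamics, so the absolute numbers are illustrative only, not the values predicted by Eq6 or Eq12:

```python
def packet_fair_total(r1, r2):
    # DCF-style packet fairness: both nodes deliver equal packet counts,
    # so each pair of packets occupies 1/r1 + 1/r2 units of airtime and
    # the slow node drags down the aggregate throughput.
    return 2.0 / (1.0 / r1 + 1.0 / r2)

def time_fair_total(r1, r2):
    # Time fairness (TBR's objective): each node holds the channel half
    # the time and delivers at its own rate during its share.
    return (r1 + r2) / 2.0

# The relative gain of time fairness grows with the rate disparity:
for low in (5.5, 2.0, 1.0):
    gain = time_fair_total(low, 11.0) / packet_fair_total(low, 11.0)
    print(f"{low}vs11: gain = {gain:.2f}x")
```

Under packet fairness a 1 Mbps node and an 11 Mbps node each get the throughput of a sub-1-Mbps channel, whereas under time fairness the fast node recovers half of its native rate, which is why the improvement is largest in the 1vs11 case.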
Analytical (Eq6) and experimental (Exp-Normal) values agree in all cases when the AP is configured without TBR. Similarly, Exp-TBR and Eq12 show very similar results, affirming that our regulator achieves the objective of providing long-term equal channel occupancy time to competing nodes. The slight difference in performance between Exp-TBR and Eq12 is due to the fact that TBR must estimate channel occupancy time without retransmission information: whenever a node experiences a packet loss, the channel occupancy time charged to that node needs to be adjusted accordingly. Lacking this information, TBR slightly penalizes the node sending at the lower data rate, decreasing the total throughput by a small amount compared to Eq12. In the future, we plan to extract retransmission information from the card firmware, or estimate it as suggested in Section 4.
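The regulator's long-term objective can be illustrated with a toy least-airtime-first scheduler. This is our own simplified sketch, not the driver's actual implementation, and it assumes perfect airtime accounting with no retransmissions:

```python
def run_scheduler(rates_bps, n_tx, pkt_bits=12000):
    # Always serve the node with the least accumulated channel occupancy
    # time, charging it one packet's airtime (pkt_bits / its PHY rate).
    # pkt_bits=12000 corresponds to the 1500-byte packets used above.
    airtime = {n: 0.0 for n in rates_bps}
    sent = {n: 0 for n in rates_bps}
    for _ in range(n_tx):
        node = min(airtime, key=airtime.get)
        airtime[node] += pkt_bits / rates_bps[node]
        sent[node] += 1
    return airtime, sent
```

For an 11 Mbps node competing with a 1 Mbps node, accumulated occupancy times converge while the faster node delivers roughly 11 times as many packets. A retransmission consumes extra airtime that this sketch, like TBR without firmware retry counts, cannot observe, which is the source of the small bias discussed above.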
Figure 9(b) shows similar improvements achieved by TBR in the up-link direction. We also ran experiments involving mixed up-link and down-link TCP flows and found similar results (not shown here).
To understand how well TBR works when traffic contains flows with various demands, we set up a scenario involving two nodes, n1 and n2, both sending TCP packets at the same data rate of 11 Mbps but experiencing different bottleneck link capacities: n2's traffic was limited to 2.1 Mbps, while the wireless link itself was n1's bottleneck. We achieved this by limiting the sending rate of the application generating TCP packets at n2. The expected DCF behavior is to give n2 2.1 Mbps of channel bandwidth and n1 the remainder. Table 4 shows the throughputs achieved under Exp-TBR and Exp-Normal. There is no significant difference between the two sets of results, showing that the rate adjustment algorithm described in Section 4.3 works.
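The expected allocation in this scenario can be sketched as a standard max-min (water-filling) computation. The 2.1 Mbps demand comes from the scenario above; the 5.0 Mbps total capacity used here is a hypothetical placeholder, not a measured value, and this is an illustration of the expected behavior rather than the Section 4.3 algorithm itself:

```python
def waterfill(capacity, demands):
    # Max-min fair allocation: satisfy the smallest demands first and
    # redistribute leftover capacity among still-backlogged nodes.
    alloc = {n: 0.0 for n in demands}
    active = sorted(demands, key=demands.get)  # ascending demand
    remaining = capacity
    while active:
        share = remaining / len(active)
        node = active[0]
        if demands[node] <= share:
            alloc[node] = demands[node]   # demand fully satisfied
            remaining -= demands[node]
            active.pop(0)
        else:
            for n in active:              # split the remainder equally
                alloc[n] = share
            break
    return alloc

# n2 is application-limited to 2.1 Mbps; n1 is backlogged (unbounded demand).
alloc = waterfill(5.0, {"n1": float("inf"), "n2": 2.1})
```

With these inputs n2 receives its full 2.1 Mbps and n1 absorbs the remaining capacity, matching the expected DCF behavior that Table 4 confirms TBR preserves.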