The testbed consists of four Ethernet segments connected by three switches. Two of the segments are 100-Mbps Ethernet, while the other two are 10-Mbps Ethernet.
The switch that connects the two 10-Mbps segments and the non-switch machines on the 10-Mbps segments are 66-MHz 486 machines, while those on the 100-Mbps segments are 90-MHz and 100-MHz Pentiums.
Although the wiring and the NIC hardware at the
hosts need not be changed,
RETHER has to be implemented in the intermediate switches
to provide bandwidth/delay guarantees.
Because modifications to commercial Ethernet switches were not possible for us,
we implemented the RETHER switch using a general-purpose machine that
is equipped with multiple Ethernet interfaces, much like a network-layer
software router. With the advent of faster microprocessors and system architectures, we believe that implementing LAN switches based on general-purpose machines is both feasible and cost-effective.
Our implementation experience shows that it is
indeed possible to build a RETHER switch completely in software.
Since all the experiments are conducted locally in our lab,
propagation delays are negligible in these measurements.
For all the following measurements, the token cycle time is set to 33 msec.
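Since the entire switch runs inside the network device driver, a minimal sketch of its per-frame receive path may help make the structure concrete. All names and types below (rether_frame, forward_frame, and so on) are illustrative assumptions, not symbols from the actual RETHER implementation.

```c
/* A minimal sketch, assuming a driver-level switch structure: every
 * frame received on any of the machine's Ethernet interfaces is
 * classified as token or data and forwarded in software.  All names
 * here are illustrative, not taken from the RETHER sources. */
#include <stdio.h>

enum frame_type { FRAME_TOKEN, FRAME_RT_DATA, FRAME_NRT_DATA };

struct rether_frame {
    enum frame_type type;
    int in_port;    /* interface the frame arrived on      */
    int out_port;   /* interface it should be forwarded to */
};

/* Stub: hand the frame to the chosen outgoing interface. */
static void forward_frame(const struct rether_frame *f)
{
    printf("port %d -> port %d (type %d)\n",
           f->in_port, f->out_port, (int)f->type);
}

/* Stub: drain any real-time frames queued for this segment while the
 * token is held on it. */
static void send_reserved_rt_traffic(int port)
{
    printf("port %d: sending reserved real-time frames\n", port);
}

/* Called from the NIC interrupt handler for every received frame,
 * which is why each arrival, token or data, costs one interrupt. */
void rether_rx(struct rether_frame *f)
{
    if (f->type == FRAME_TOKEN) {
        send_reserved_rt_traffic(f->in_port);
        forward_frame(f);               /* pass the token along        */
    } else {
        forward_frame(f);               /* store-and-forward switching */
    }
}

int main(void)
{
    struct rether_frame tok = { FRAME_TOKEN, 0, 1 };
    rether_rx(&tok);
    return 0;
}
```

The relevant point for the measurements that follow is that the token travels through this same interrupt-driven path as ordinary data.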
Extensive tests on the prototype demonstrate that bandwidth
reservations made by RETHER connections are indeed satisfied in all cases.
Since RETHER is implemented directly inside the device driver,
each packet arrival,
be it token or data, entails an interrupt processing overhead.
When the network is lightly loaded and there are
few nodes in the network,
the token simply circulates around
the network and the CPU processing overhead for token-circulation
interrupts is significant. This is indicated in Figure .
The graph plots the time taken by a user-level process to execute the same computation-intensive program without RETHER and with RETHER in the presence of a minimal real-time bandwidth
reservation. The measurements were made on a 100-Mbps network in
which the token processing time is only 70 µsec per node. As can be seen,
the token-induced interrupt overhead becomes acceptable
only when the 100-Mbps network has five or more nodes.
On 10-Mbps networks, the relative interrupt-processing overhead was less pronounced because the token processing time is around 450 µsec.
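A rough back-of-the-envelope model, assuming the idle token is forwarded back-to-back with per-node handling time t_tok and negligible propagation delay, makes the trend visible: each of the N stations then fields a token interrupt about once every N·t_tok seconds, i.e.,

\[
  \text{interrupts per second per node} \;\approx\; \frac{1}{N \, t_{\mathrm{tok}}} .
\]

With t_tok ≈ 70 µsec on the 100-Mbps segments, two nodes drive each host at roughly 7,000 token interrupts per second, falling to about 2,900 per second with five nodes; with t_tok ≈ 450 µsec on the 10-Mbps segments the interrupt rate, and hence the relative overhead, is correspondingly lower.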
Table indicates the time to set up connections crossing 0 to 3 switches.
Column 2 shows the connection setup time
when all the Ethernet segments are in the CSMA mode.
The main delay component in this case is the time to switch
each segment from the CSMA to the RETHER mode. Column 3 indicates the time
taken to set up a connection when the corresponding network segments are already running RETHER. In this case, the main component of the
connection establishment time is the time to forward the connection
establishment message in the non-real-time mode. The connection establishment
time increases with the amount of bandwidth already reserved for
real-time connections because it takes longer for the connection request message,
which is transmitted as non-real-time traffic,
to reach its destination.
The protocol processing associated
with connection setup itself at each intermediate switch
is relatively minor compared to the above times.
A significant component of the connection setup delay is due to
scheduling and executing user processes at either end-point to complete the
connection establishment. However, these are not under the control
of RETHER and thus
are not included here.
The times reported in Table
include the time to set up the
connection at the receiver and sender ends in the kernel, but do not
include any user-level processing.
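As a summary of the two cases above, the following sketch shows the kind of per-hop processing a setup message receives at each switch it crosses. Every identifier here is an assumption made for illustration (including the admission check on reserved bandwidth); it is not the actual RETHER protocol code.

```c
/* A sketch, under assumed names, of the per-hop handling of a RETHER
 * connection-setup message at a switch.  switch_to_rether() stands in
 * for the CSMA-to-RETHER mode switch that dominates the column-2
 * times, and forward_as_nrt() for the non-real-time forwarding that
 * dominates the column-3 times. */
#include <stdbool.h>
#include <stdio.h>

struct segment {
    bool in_rether_mode;   /* false: segment still running plain CSMA */
    double reserved_mbps;  /* bandwidth already reserved              */
    double capacity_mbps;  /* real-time capacity of the segment       */
};

/* Stub: drive the CSMA -> RETHER mode switch on the outgoing segment. */
static void switch_to_rether(struct segment *seg)
{
    seg->in_rether_mode = true;
}

/* Stub: queue the setup message on the outgoing segment as
 * non-real-time traffic; heavier existing reservations mean a longer
 * wait before it reaches the next hop. */
static void forward_as_nrt(const struct segment *seg, int conn_id)
{
    (void)seg;
    printf("forwarding setup for connection %d\n", conn_id);
}

/* Returns true if the requested bandwidth can be reserved on this hop. */
bool handle_setup(struct segment *out_seg, int conn_id, double mbps)
{
    if (!out_seg->in_rether_mode)
        switch_to_rether(out_seg);                 /* column-2 case        */

    if (out_seg->reserved_mbps + mbps > out_seg->capacity_mbps)
        return false;                              /* reservation rejected */

    out_seg->reserved_mbps += mbps;
    forward_as_nrt(out_seg, conn_id);              /* column-3 case        */
    return true;
}

int main(void)
{
    struct segment seg = { .in_rether_mode = false,
                           .reserved_mbps = 2.0, .capacity_mbps = 6.0 };
    printf("accepted: %d\n", handle_setup(&seg, 1, 1.0));
    return 0;
}
```

In the measured prototype, the cost corresponding to switch_to_rether() dominates the column-2 times and the non-real-time forwarding dominates the column-3 times, while the bookkeeping in handle_setup() itself corresponds to the relatively minor per-switch protocol processing noted above.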