We now turn to describing several problems that can arise when a system with a conventional network architecture faces high volumes of network traffic. These problems stem from four aspects of the network subsystem:
Eager receiver processing has significant disadvantages when used in a network server. It gives the highest priority to the processing of incoming network packets, regardless of the state or the scheduling priority of the receiving application. A packet arrival always interrupts the currently executing application, even if any of the following conditions holds: (1) the currently executing application is not the receiver of the packet; (2) the receiving application is not blocked waiting for the packet; or (3) the receiving application has a priority lower than or equal to that of the currently executing process. As a result, the overhead of dispatching and handling interrupts, together with the increased rate of context switches, can limit the throughput of a server under load.
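To make the eager policy concrete, the following minimal sketch (ours; the names net_arrival, struct proc, and the field layout are invented for illustration and do not reproduce any particular kernel's code) shows a dispatch routine that starts protocol processing on every arrival without ever examining the receiver's state or priority:

    /*
     * Hypothetical sketch of an eager dispatch policy.  The arrival handler
     * runs protocol processing immediately; nothing in it consults the state
     * or scheduling priority of the receiving process before preempting the
     * currently running one.
     */
    #include <stdio.h>

    struct proc   { const char *name; int priority; int blocked_on_recv; };
    struct packet { struct proc *receiver; };

    /* Runs at interrupt priority on every packet arrival. */
    static void net_arrival(struct packet *pkt, struct proc *current)
    {
        /* Conditions (1)-(3) from the text would each justify deferring
         * this work, but the eager policy never tests them:
         *   (1) current != pkt->receiver
         *   (2) !pkt->receiver->blocked_on_recv
         *   (3) pkt->receiver->priority <= current->priority
         */
        printf("packet for %s: protocol processing preempts %s\n",
               pkt->receiver->name, current->name);
    }

    int main(void)
    {
        struct proc editor = { "editor", 50, 0 };  /* currently running     */
        struct proc server = { "server", 10, 0 };  /* receiver: lower prio,
                                                      not blocked on a read */
        struct packet pkt  = { &server };

        net_arrival(&pkt, &editor);  /* all three conditions hold, yet the
                                        arrival still preempts the editor  */
        return 0;
    }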
Under high load from the network, the system can enter a state known
as receiver livelock [20]. In this
state, the system spends all of its resources processing incoming
network packets, only to discard them later because no CPU time is
left to service the receiving application programs. For instance,
consider the behavior of the system under increasing load from
incoming UDP packets. Since hardware interface interrupts and software interrupts have higher priority than
user processes, the socket queues will eventually fill because the
receiving application no longer gets enough CPU time to consume the
packets. At that point, packets are discarded when they reach the
socket queue. As the load increases further, the software interrupts
will eventually no longer keep up with the protocol processing,
causing the IP queue to fill. The problem is that early stages of
receiver processing have strictly higher priority than later
stages. Under overload, this causes packets to be dropped only after
resources have been invested in them. As a result, the throughput of
the system drops as the offered load increases until the system
finally spends all its time processing packets only to discard them.
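The livelock behavior can be reproduced qualitatively with a small user-level simulation; the sketch below is ours, and the tick-based scheduling model, per-packet costs, and queue limits are all invented for illustration. Within each tick it runs the three stages in strict priority order and reports the delivered throughput as the offered load grows:

    /*
     * Hypothetical tick-based simulation of receiver livelock.  Within each
     * tick, stages run in strict priority order: hardware interrupts
     * (enqueue on the IP queue), software interrupts (protocol processing,
     * IP queue -> socket queue), then the receiving application (drain the
     * socket queue).  All costs and queue limits are made up.
     */
    #include <stdio.h>

    #define TICKS       100
    #define TICK_CPU    100   /* CPU units per tick                        */
    #define HW_COST     1     /* hardware interrupt, per packet            */
    #define SW_COST     2     /* protocol (software interrupt), per packet */
    #define APP_COST    4     /* application receive path, per packet      */
    #define IPQ_LIMIT   64
    #define SOCKQ_LIMIT 64

    int main(void)
    {
        int load;                            /* packets arriving per tick */

        for (load = 5; load <= 60; load += 5) {
            int ipq = 0, sockq = 0, delivered = 0, dropped = 0;
            int t;

            for (t = 0; t < TICKS; t++) {
                int cpu = TICK_CPU, i;

                /* 1. Hardware interrupts: always serviced first. */
                for (i = 0; i < load && cpu >= HW_COST; i++) {
                    cpu -= HW_COST;
                    if (ipq < IPQ_LIMIT) ipq++;
                    else dropped++;          /* IP queue overflow */
                }
                /* 2. Software interrupts: run only after all hardware
                 *    interrupts have been serviced.                   */
                while (ipq > 0 && cpu >= SW_COST) {
                    cpu -= SW_COST; ipq--;
                    if (sockq < SOCKQ_LIMIT) sockq++;
                    else dropped++;          /* socket queue overflow */
                }
                /* 3. The application runs only with whatever CPU is left. */
                while (sockq > 0 && cpu >= APP_COST) {
                    cpu -= APP_COST; sockq--; delivered++;
                }
            }
            printf("offered %3d pkt/tick  delivered %3d pkt/tick  dropped %5d\n",
                   load, delivered / TICKS, dropped);
        }
        return 0;
    }

Under this model, delivered throughput rises with the offered load until a packet's full processing cost no longer fits within a tick, and then falls toward zero: all remaining CPU is spent at interrupt level on packets that are later discarded, mirroring the livelock behavior described above.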
Bursts of packets arriving from the network can cause scheduling anomalies. In particular, the delivery of an incoming message to the receiving application can be delayed by a burst of subsequently arriving packets. This is because the network processing of the entire burst of packets must complete before any application process can regain control of the CPU. Also, since all incoming IP traffic is placed in the shared IP queue, aggregate traffic bursts can exceed the IP queue limit and/or exhaust the mbuf pool. Thus, traffic bursts destined for one server process can lead to the delay and/or loss of packets destined for other sockets. This type of traffic interference is generally unfair and undesirable.
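The interference caused by the shared IP queue can be illustrated with another small sketch of ours; the queue size, packet counts, and the function name ip_input_enqueue are invented. A burst destined for one socket fills the single, fixed-size queue, and a subsequent packet for an unrelated socket is dropped before its receiver is ever consulted:

    /*
     * Hypothetical sketch of traffic interference through a shared IP queue.
     * Every incoming packet, whatever its destination socket, competes for
     * the same fixed-size queue at interrupt level.
     */
    #include <stdio.h>

    #define IPQ_LIMIT 8

    struct packet { int dst_socket; };

    static struct packet ipq[IPQ_LIMIT];
    static int ipq_len = 0;

    /* Called at interrupt level for every incoming packet, regardless of
     * which socket it is destined for.  Returns 0 if the packet is dropped. */
    static int ip_input_enqueue(struct packet p)
    {
        if (ipq_len == IPQ_LIMIT)
            return 0;                 /* shared queue full: drop */
        ipq[ipq_len++] = p;
        return 1;
    }

    int main(void)
    {
        struct packet burst = { 1 };  /* burst destined for socket 1 */
        struct packet other = { 2 };  /* single packet for socket 2  */
        int i, dropped_other = 0;

        /* A burst for socket 1 arrives before the software interrupt has a
         * chance to drain the shared queue...                              */
        for (i = 0; i < 12; i++)
            (void)ip_input_enqueue(burst);

        /* ...so an unrelated packet for socket 2 is lost as well. */
        if (!ip_input_enqueue(other))
            dropped_other = 1;

        printf("queue length %d/%d, packet for socket %d %s\n",
               ipq_len, IPQ_LIMIT, other.dst_socket,
               dropped_other ? "dropped" : "queued");
        return 0;
    }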