

Scaling to high speeds

Figure 1: Architecture

The flow slicing probability $p$ controls memory usage, but since we do a lookup in the flow memory for every packet, flow slicing does not control the processing load. When processing power is limited, we add a random packet sampling stage in front of the flow slicing stage (see Figure 1). A simple solution is to set the packet sampling probability $q$ statically to a value that ensures the processor performing the flow measurement can keep up even with worst-case traffic mixes. Based on Cisco recommendations [17] for turning on NetFlow sampling at speeds above OC-3, we set $q$ to $1/4$ for OC-12 links, $1/16$ for OC-48, and so on. With these packet sampling rates, even for the worst-case traffic of a link entirely filled with 40-byte packets, the flow measurement module has around $2\mu s$ per sampled packet, enough time to perform around $35$ (wide) DRAM accesses on average.
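To make the two-stage pipeline concrete, the following is a minimal sketch in C of the per-packet processing it implies. It is an illustration only, not the implementation evaluated here: the names flow_key, flow_entry, the toy linear-scan flow memory, and the constants Q and P are hypothetical placeholders.

/*
 * Minimal sketch of the two-stage per-packet processing described above:
 * random packet sampling with probability q in front of flow slicing with
 * probability p.  Placeholder data structures; not the paper's implementation.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

#define Q 0.0625          /* packet sampling probability, e.g. 1/16 for OC-48 */
#define P 0.01            /* flow slicing probability; controls memory usage  */
#define FLOW_MEM_SIZE 4096

struct flow_key   { uint32_t src_ip, dst_ip; uint16_t src_port, dst_port; uint8_t proto; };
struct flow_entry { struct flow_key key; uint64_t packets, bytes; bool in_use; };

static struct flow_entry flow_mem[FLOW_MEM_SIZE];   /* toy flow memory */

static bool coin(double prob)                       /* biased coin flip */
{
    return rand() < prob * ((double)RAND_MAX + 1.0);
}

static bool same_key(const struct flow_key *a, const struct flow_key *b)
{
    return a->src_ip == b->src_ip && a->dst_ip == b->dst_ip &&
           a->src_port == b->src_port && a->dst_port == b->dst_port &&
           a->proto == b->proto;
}

void process_packet(const struct flow_key *key, unsigned len)
{
    /* Stage 1: random packet sampling keeps the processing load bounded. */
    if (!coin(Q))
        return;

    /* Stage 2: every sampled packet does a flow memory lookup ... */
    struct flow_entry *e = NULL, *free_slot = NULL;
    for (int i = 0; i < FLOW_MEM_SIZE; i++) {
        if (flow_mem[i].in_use && same_key(&flow_mem[i].key, key)) { e = &flow_mem[i]; break; }
        if (!flow_mem[i].in_use && free_slot == NULL) free_slot = &flow_mem[i];
    }

    /* ... but a new entry (a new flow slice) is created only with
       probability p, which is what controls the memory usage. */
    if (e == NULL) {
        if (!coin(P) || free_slot == NULL)
            return;
        e = free_slot;
        e->in_use = true;
        e->key = *key;
        e->packets = e->bytes = 0;
    }
    e->packets += 1;
    e->bytes   += len;
}

The quoted time budget can also be checked with a back-of-the-envelope calculation. Assuming OC-48 at $2.488$ Gb/s with $q = 1/16$ (the OC-12 case with $q = 1/4$ yields the same budget, since $q$ scales with the link speed) and a wide DRAM access latency on the order of $60$ ns, which is an assumption not stated in the text:

\[
\frac{2.488\;\mathrm{Gb/s}}{40 \times 8\;\mathrm{bits}} \approx 7.8\;\mathrm{Mpps},
\qquad
7.8\;\mathrm{Mpps} \times \frac{1}{16} \approx 0.49\;\mathrm{Mpps}
\;\Rightarrow\;
\frac{1}{0.49\;\mathrm{Mpps}} \approx 2.1\;\mu\mathrm{s},
\qquad
\frac{2.1\;\mu\mathrm{s}}{60\;\mathrm{ns}} \approx 35 .
\]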

