
Evaluation methodology

In evaluating a normalizer, we care about completeness, correctness, and performance. The evaluation presents a challenging problem because by definition most of the functionality of a normalizer applies only to unusual or ``impossible'' traffic, and the results of a normalizer in general are invisible to connection endpoints (depending on the degree to which the normalizations preserve end-to-end semantics). We primarily use a trace-driven approach, in which we present the normalizer with an input trace of packets to process as though it had received them from a network interface, and inspect an output trace of the transformed packets it in turn would have forwarded to the other interface.
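In outline, such a harness can be built directly on libpcap. The following is a minimal sketch of the idea; normalize_packet() is a hypothetical placeholder for the normalizer's per-packet processing, not norm's actual interface:

  /* Trace-driven harness sketch: read packets from an input trace,
   * run each through a (hypothetical) normalization routine, and
   * write the packets that would have been forwarded to an output trace. */
  #include <pcap.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Hypothetical stand-in for the normalizer's per-packet processing.
   * Returns nonzero to forward the (possibly rewritten) packet,
   * zero to drop it. */
  extern int normalize_packet(struct pcap_pkthdr *hdr, u_char *pkt);

  int main(int argc, char **argv)
  {
      char errbuf[PCAP_ERRBUF_SIZE];
      pcap_t *in;
      pcap_dumper_t *out;
      struct pcap_pkthdr *hdr;
      const u_char *pkt;

      if (argc != 3) {
          fprintf(stderr, "usage: %s input.pcap output.pcap\n", argv[0]);
          return 1;
      }
      if ((in = pcap_open_offline(argv[1], errbuf)) == NULL) {
          fprintf(stderr, "pcap_open_offline: %s\n", errbuf);
          return 1;
      }
      if ((out = pcap_dump_open(in, argv[2])) == NULL) {
          fprintf(stderr, "pcap_dump_open: %s\n", pcap_geterr(in));
          return 1;
      }
      while (pcap_next_ex(in, &hdr, &pkt) == 1) {
          /* Copy the packet so the normalizer may rewrite it in place. */
          struct pcap_pkthdr h = *hdr;
          u_char *copy = malloc(h.caplen);
          if (copy == NULL)
              break;
          memcpy(copy, pkt, h.caplen);
          if (normalize_packet(&h, copy))
              pcap_dump((u_char *)out, &h, copy);
          free(copy);
      }
      pcap_dump_close(out);
      pcap_close(in);
      return 0;
  }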

Each individual normalization needs to be tested in isolation to ensure that it behaves as we intend. The first problem here is to obtain test traffic that exhibits the behavior we wish to normalize; once this is done, we need to ensure that norm correctly normalizes it.

With some anomalous behavior, we can capture packet traces of traffic that our NIDS identifies as being ambiguous. Primarily this is ``crud'' and not real attack traffic [12]. We can also use tools such as nmap [3] and fragrouter [2] to generate traffic similar to that an attacker might generate. However, for most of the normalizations we identified, no real trace traffic is available, and so we must generate our own.


  
[Figure 5: Using NetDuDE to create test traffic]

To this end, we developed NetDuDE (Figure 5), the Network Dump Displayer and Editor. NetDuDE takes a libpcap packet tracefile, displays the packets graphically, and allows us to examine IP, TCP, UDP, and ICMP header fields. In addition, it allows us to edit the tracefile: setting the values of fields, adding and removing options, recalculating checksums, changing the packet ordering, and duplicating, fragmenting, reassembling, or deleting packets.
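One editing step worth illustrating is checksum recalculation: after any header field is modified, the IP checksum (and the TCP or UDP checksum, computed over the pseudo-header) must be recomputed with the standard Internet checksum algorithm (RFC 1071), or the edited packet would be rejected by real endpoints. The following minimal sketch shows that computation in isolation; it is illustrative, not NetDuDE's actual code:

  /* Standard Internet checksum (RFC 1071): one's-complement sum of
   * 16-bit words, with the checksum field itself zeroed beforehand. */
  #include <stdint.h>
  #include <stddef.h>

  uint16_t in_cksum(const uint8_t *buf, size_t len)
  {
      uint32_t sum = 0;

      while (len > 1) {
          sum += (uint16_t)((buf[0] << 8) | buf[1]);
          buf += 2;
          len -= 2;
      }
      if (len == 1)                 /* pad an odd trailing byte */
          sum += (uint16_t)(buf[0] << 8);
      while (sum >> 16)             /* fold carries back into the low 16 bits */
          sum = (sum & 0xffff) + (sum >> 16);
      return (uint16_t)~sum;        /* one's complement of the sum */
  }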

To test a particular normalization, we edit an existing trace to create the appropriate anomalies. We then feed the tracefile through norm to create a new, normalized trace. We reexamine this trace in NetDuDE to check manually that the intended normalization actually occurred, and also feed it back into norm to ensure that a second pass does not modify the trace further. Finally, we store the input and output tracefiles in our library of anomalous traces so that we can run automated validation tests whenever we change norm, ensuring that changing one normalization does not adversely affect any others.
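The second-pass (idempotency) check reduces to comparing two tracefiles packet by packet. A minimal sketch of such a comparison, again assuming libpcap and independent of any particular test harness:

  /* Compare two libpcap traces packet-by-packet; returns nonzero only
   * if both traces contain identical packets in identical order. */
  #include <pcap.h>
  #include <string.h>

  int traces_identical(const char *file_a, const char *file_b)
  {
      char errbuf[PCAP_ERRBUF_SIZE];
      pcap_t *a = pcap_open_offline(file_a, errbuf);
      pcap_t *b = pcap_open_offline(file_b, errbuf);
      struct pcap_pkthdr *ha, *hb;
      const u_char *pa, *pb;
      int ra, rb, same = 1;

      if (a == NULL || b == NULL) {
          if (a) pcap_close(a);
          if (b) pcap_close(b);
          return 0;
      }
      for (;;) {
          ra = pcap_next_ex(a, &ha, &pa);
          rb = pcap_next_ex(b, &hb, &pb);
          if (ra != 1 || rb != 1) {
              same = (ra == rb);   /* both traces must end together */
              break;
          }
          if (ha->caplen != hb->caplen ||
              memcmp(pa, pb, ha->caplen) != 0) {
              same = 0;            /* packet contents differ */
              break;
          }
      }
      pcap_close(a);
      pcap_close(b);
      return same;
  }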

