First Conference on Network Administration
Santa Clara, California,
April 7-10, 1999
Keynote
Session: Monitoring and Video
Session: Configuration Management and Security
Invited Talks
Home, Road, and Work Have Merged Via the Internet
Norm Schryer, AT&T Research
Summary by David Parter
"If the code and the comments disagree, both are probably wrong."
Norm Schryer
Norm Schryer has an OC-3 to his house. And a cable modem. Why? He works
for AT&T Research, where his group has been investigating broadband
network services. In fact, they have spent most of the last five years
on the infrastructure required to do research on broadband services.
Schryer discussed many aspects of widespread high-speed multilocation
networking and their implications for network managers:
- Services are widely distributed.
- Responsibility is distributed among PCs, laptops, palmtops, etc.
- Central fragility: The center of the network is subject to
failure.
- Security issues are important and cause problems too.
- "The bigger and better the network, the easier it is to destroy
itself."
WAH/WOOH: Work at Home / Work out of Home
Most members of Schryer's group have cable modems at home. All of them
work all the time (or close to it) "fixing it, making it, or doing
something with it." They are "on net" all the time. This makes for more
loyal employees, some of whom move in order to get broadband services.
Those who live in areas where cable modems or xDSL are not available
are "geographically challenged."
VPNs
For both the home user and the road warrior, Virtual Private Networks
(VPNs) are required for security. The VPN is tunnelled over the
broadband service (cable modem, xDSL, etc.) using IPSEC. According to
Schryer, VPN software clients are very fragile, and VPN hardware is
great but expensive. VPN software is like "rolling ball bearings down a
razor blade: if any fall off, IPSEC breaks." Several vendors
provide VPN software, but it is often usable only on specific versions
of the operating systems and, especially with Windows, often breaks
when other software or patches are installed.
Their solution is a dedicated computer running Linux, which provides
the VPN and IPSEC software. All the other computers at home do not need
to be reconfigured in order to use the VPN.
For performance reasons, Schryer strongly recommended against creating
a VPN over the commodity Internet. Instead, you should contract with
your broadband service provider for a high-speed dedicated link to your
corporate network.
At this point, about half of AT&T Research is using its VPN system
from home.
Road Warriors
When travelling, most users have a laptop running a Windows operating
system and have to use a VPN software client (instead of the "moat,"
the Linux tunneling system). Their experience is that the Windows IP
stack is very fragile, and IPSEC shuts down and locks out the user when
it detects what it thinks is an attack. Frequent reinstalls of the
laptops result.
Schryer also discussed other challenges for the network administrator
when dealing with high-speed connections to the home and road:
high-bandwidth applications such as music; universal messaging; PDAs;
and wireless links and network telephony, among others.
In summary, he identified the following principles, with the note that
if you are horrified at the scope of the problem, that is good.
- Very strong vendor management is needed: insist on standards,
interoperability, and reliability; SNMP management of IPSEC is a
disaster; "pin your vendor to the wall on reliability"; if they lie,
you pay for it.
- 7x24 customer care: with a global network, customers are
always on; 7x24 coverage is "dirt cheap" for keeping customers.
- "Do not step in the Microsoft": Microsoft network software is
extremely fragile; it can break at any installation of software,
even nonnetwork software; Windows is a nonmanaged platform; segment
what you are doing by keeping critical stuff on a separate platform
that is managed.
Session: Monitoring and Video
Summary by Ryan Donnelly
Driving by the Rear-View Mirror: Managing a Network with Cricket
Jeff Allen, WebTV Networks, Inc.
"Cricket is not the same as MRTG" was the theme upon which Jeff
Allen began his discussion of the new enterprise network-management
tool. Cricket was born out of a need to forecast network growth and
plan for expansion, not merely react to significant changes in
usage patterns. In addition, Cricket, like its predecessor, provides
for instantaneous "is this link up" management features. Also, like
MRTG, Cricket is a CGI-based management tool, featuring graphical
displays of traffic trends over configurable time periods with
comparison to other dates/times. As such, it provides information for
long-term analyses of traffic trends on a specific link or group of
links.
The system, while appearing MRTG-like, has evolved in numerous ways. An
increase in configurability, via a hierarchical configuration tree,
allows for data gathering from scripts, SNMP, Perl procedures, etc. The
data is obtained by means of a collector, which runs out of cron and
stores data in RRD files, later to be interpreted by a graphing
program. The RRD/grapher program implements "graphing on demand," which
generates results similar to those displayed by MRTG.
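Cricket itself is written in Perl, and its internals were not covered in detail; the following Python sketch only illustrates the collector/grapher split described above, assuming the rrdtool command-line utility is available (the RRD filename, data-source name, and fetch_counter() stub are placeholders, not Cricket's own):

    #!/usr/bin/env python3
    # Rough illustration of a cron-driven collector plus graph-on-demand,
    # in the style described above; not Cricket's actual code.
    import os
    import subprocess
    import time

    RRD = "link-octets.rrd"   # placeholder filename

    def fetch_counter():
        # Placeholder: a real collector would fetch a counter such as
        # ifInOctets via SNMP, or run a site-specific script.
        return int(time.time()) % 2**31

    def ensure_rrd():
        if not os.path.exists(RRD):
            subprocess.run(["rrdtool", "create", RRD, "--step", "300",
                            "DS:octets:COUNTER:600:0:U",
                            "RRA:AVERAGE:0.5:1:2016"], check=True)

    def collect():
        # Run out of cron every five minutes, as the collector is.
        ensure_rrd()
        subprocess.run(["rrdtool", "update", RRD,
                        "N:%d" % fetch_counter()], check=True)

    def graph_on_demand(png="link-octets.png", start="-86400"):
        # The grapher renders an image only when a page is requested.
        subprocess.run(["rrdtool", "graph", png, "--start", start,
                        "DEF:o=%s:octets:AVERAGE" % RRD,
                        "LINE1:o#0000ff:octets"], check=True)

    if __name__ == "__main__":
        collect()
        graph_on_demand()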
In addition to graphing data, Cricket also gathers application and
host-based statistics to monitor such events as PVC states, cable modem
signal strength, and router CPU load.
Cricket has taken its place as the successor to MRTG. While it can
display much of the same information, its hierarchical configuration
tree and improved code allow it to perform many more tasks in a more
efficient manner. The Cricket home page is
<https://www.munitions.com/~jra/cricket>.
Don't Just Talk About the Weather, Manage It! A System for Measuring,
Monitoring, and Managing Internet Performance and Connectivity
Cindy Bickerstaff, Ken True, Charles Smothers, Tod Oace, and Jeff
Sedayao, Intel Corporation; Clinton Wong, @Home Networks
The Internet Measurement and Control System (IMCS) was developed to
provide quantitative measures of Internet performance. IMCS provides
such statistics as HTTP GET and ICMP echo times to flagship
Internet sites. It then uses such statistics to delineate
process limits for a given data set.
The underlying measurements are gathered by two Perl procedures, TimeIt
and Imeter. TimeIt measures the total time to look up a
URL, connect, transfer data, and disconnect. Throughout the transfer
period it also logs both transfer and error rates. Imeter
completes the statistics-gathering engine by adding an ICMP echo
component. Pings are issued to such strategic locales as root name
servers and large Web sites.
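TimeIt and Imeter themselves are Perl and were not shown; a minimal Python analogue of the same two measurements (the URL and ping target below are placeholders) might look like this:

    # Minimal analogue of the TimeIt/Imeter measurements described above.
    import subprocess
    import time
    import urllib.request

    def time_url(url="https://www.example.com/"):
        # Total time to look up, connect, transfer, and disconnect,
        # plus bytes moved; errors are recorded rather than raised.
        t0 = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                body = resp.read()
            return {"seconds": time.monotonic() - t0,
                    "bytes": len(body), "error": None}
        except Exception as exc:
            return {"seconds": time.monotonic() - t0,
                    "bytes": 0, "error": str(exc)}

    def ping_once(host="a.root-servers.net"):
        # Use the system ping; "-c 1" sends one probe on most UNIX pings.
        result = subprocess.run(["ping", "-c", "1", host],
                                capture_output=True, text=True)
        return result.returncode == 0

    if __name__ == "__main__":
        print(time_url())
        print(ping_once())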
The monitoring and alert component of IMCS is Web-driven. As soon as
IMCS determines that the process limits have been violated for a
specific measure, an "all-in-one" Web page can alert network operations
center staff to potential problems on a link and thus allows for a
degree of preemptive network management.
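The talk did not prescribe a statistical model for the limits themselves; one common control-chart convention, shown here only as an illustration, is to flag samples that fall outside the mean plus or minus three standard deviations:

    # Illustrative "process limit" check; the 3-sigma rule is an
    # assumption, not IMCS's documented model.
    import statistics

    def process_limits(samples):
        mean = statistics.fmean(samples)
        sd = statistics.pstdev(samples)
        return mean - 3 * sd, mean + 3 * sd

    def violations(samples, lower, upper):
        return [s for s in samples if s < lower or s > upper]

    history = [0.8, 0.9, 1.1, 0.95, 1.0, 0.85, 1.05]  # e.g., GET times (s)
    low, high = process_limits(history)
    print(violations(history + [4.2], low, high))     # 4.2 would raise an alert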
To judge by Intel's experience, IMCS can provide a network
administrator with rich performance statistics about specific network
traffic. But setting the process limits remains difficult, because the
measurements lack a consistent statistical model.
In the future, the authors plan to integrate an expanded set of
services to be monitored, such as SMTP and DNS, and also plan to
integrate flow data.
Supporting H.323 Video and Voice in an Enterprise Network
Randal Abler and Gail Wells, Georgia Institute of Technology
Randal Abler and Gail Wells have been exploring the possibilities of
implementing H.323, a voice- and video-over-IP transport. The H.323
standard allows for the proliferation of two-way videoconferencing over
the Internet and other applications such as distance education.
Developed as a LAN protocol, H.323 is UDP-based. As such, extensive
H.323 traffic can overwhelm TCP traffic on a congested link. To help
check such traffic levels, most H.323 applications can limit client
bandwidth usage. In addition, a network-based bandwidth-limiting
solution, such as MPLS, could be warranted. As evidenced by testing,
bandwidth limitation is
crucial not only for network-congestion reasons but also for
client-performance reasons. Programs such as Microsoft NetMeeting
experienced degraded performance when used on a link with a speed
inconsistent with the program's bandwidth setting. Testing also showed
that conventional modem links do not supply sufficient bandwidth for
acceptable realtime video usage. However, with the advent of digital
connectivity to the home, the future of H.323 appears bright.
Session: Configuration Management and Security
Summary by David Parter
Network Documentation: A Web-Based Relational Database Approach
Wade Warner and Rajshekhar Sunderraman, Georgia State University
Wade Warner described Georgia State University's use of a relational
database to track and document configuration information for its
network. An online Web-accessible database was preferable to paper
records because the Web interface allows for access from all platforms,
and because paper records don't scale and are hard to use.
The implementation that was described is specific to the GSU network,
but it is easy to add other types of devices. It makes extensive use of
Query by Example (QBE), so that changes in the device types do not
require redesign or rewriting of the user interfaces. Currently they
are using Oracle, but they have an MSQL implementation freely
available.
Items included in the database are installed software, printers, and
specific information about the computers (such as amount of memory,
serial numbers, and MAC addresses). They track computers by hostname
and IP address, which has caused some issues for multi-homed hosts.
Several commercial tools are available for similar tasks, including
populating the initial database. All suffer from the same problem:
keeping the database current. The GSU group populates the database by
extracting information from a variety of existing sources (such as the
DNS system), manually entering the data, and using a Web form.
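The GSU schema itself was not published; as a rough sketch of the same population step, assuming a hypothetical hosts table and using a short hostname list in place of a real DNS export:

    # Hypothetical inventory table populated from existing data sources.
    import socket
    import sqlite3

    conn = sqlite3.connect("netdoc.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS hosts (
                        hostname  TEXT PRIMARY KEY,
                        ip_addr   TEXT,
                        mac_addr  TEXT,
                        memory_mb INTEGER)""")

    def add_host(hostname, mac_addr=None, memory_mb=None):
        try:
            ip = socket.gethostbyname(hostname)   # pull the address from DNS
        except socket.gaierror:
            ip = None                             # leave a gap for manual entry
        conn.execute("INSERT OR REPLACE INTO hosts VALUES (?, ?, ?, ?)",
                     (hostname, ip, mac_addr, memory_mb))
        conn.commit()

    for name in ["www.gsu.edu"]:                  # normally read from a zone dump
        add_host(name)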
There are three access levels for viewing the data: network admins have
all access; the middle layer can query specific network devices and
generate preformed reports; and the users can see the availability of
certain resources such as printers and software. All access is
controlled with a user ID and password.
There were some interesting questions and comments from the audience
about using port-scanning and SNMP to gather some of the initial data.
(Wade said that they are working on that.) Also, USC had (or has?) a
similar system, requiring that the MAC address be entered into the
database before an IP address will be assigned.
Just Type Make! Managing Internet Firewalls Using Make and Other
Publicly Available Utilities
Sally Hambridge, Charles Smothers, Tod Oace, and Jeff Sedayao, Intel
Corp.
Jeff Sedayao gave a very interesting presentation on the management of
Intel's distributed firewall environment, which relies heavily on make
and other common publicly available UNIX utilities.
The Intel firewall architecture has the following characteristics:
- a typical screened subnet architecture.
- inner/outer firewall routers with a "DMZ" containing bastion
hosts.
- ISP routers on a common network segment outside the "outer"
router (not in the DMZ).
- geographically dispersed routers and firewalls, with failover
from one firewall complex (DMZ site) to another as needed. (Currently
there are eight firewall complexes.)
The bastion hosts need consistent configurations from one firewall
complex to another, and this system has to be managed by limited staff.
Their solution is to view all the firewalls as a single distributed
system rather than a collection of independent firewalls. A single set
of source information is used to construct customized configuration
files for each firewall (using make). Each DMZ network segment is
described by a set of firewall access-list segments that can be
combined in any order and still function correctly. Each set of rules is
stored in a separate file. For a given firewall, the constructed ACL is
ordered for the best performance (most common rules first) for that
particular firewall complex, preserving the required order of individual
rules to enforce the desired access policy.
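Intel drives this assembly with make; the Python sketch below only illustrates the same idea, with hypothetical segment and site names. Each rules/*.acl file holds one self-contained block of access-list entries, and each firewall complex concatenates the blocks in its own most-frequently-hit-first order:

    # Illustration only: assemble a per-site ACL from order-independent
    # rule segments, preserving the order of rules inside each segment.
    from pathlib import Path

    SITE_ORDER = {
        # hypothetical firewall complexes and their preferred block ordering
        "us-west": ["web", "dns", "smtp", "deny-all"],
        "europe":  ["smtp", "web", "dns", "deny-all"],
    }

    def build_acl(site, rules_dir="rules"):
        parts = []
        for block in SITE_ORDER[site]:
            text = Path(rules_dir, block + ".acl").read_text()
            parts.append("! --- %s ---\n%s" % (block, text))
        return "".join(parts)

    if __name__ == "__main__":
        Path("acl-us-west.conf").write_text(build_acl("us-west"))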
The bastion hosts are handled in a similar manner. Each is an exact
copy of its counterparts at other firewall complexes. There is only one
configuration for a given function (DNS, Web proxy, etc.). make is used
to manage the configurations, and rdist (with ssh) is used to
synchronize the bastion hosts with the master.
Their experience with this system has been positive, with the following
lessons learned:
- The system is scalable and robust for both support staff and
users.
- There is a fundamental need for discipline.
- It is easy to ensure that a correct configuration is installed
at all the firewall complexes. (For example, they had the "Melissa
virus" check pushed out before their manager asked about it.)
- It is also easy to propagate an error to all sites.
- Admins must be trained to change the master, not the deployed
configuration.
- Version control is necessary.
- Using RCS-like tools, one can easily look at changes from the
previous configuration to try to identify the cause of a problem.
- rdist can be used to compare the master configuration tree and
the installed bastion host.
- The router configurations are reloaded daily to ensure that the
most current version is in use.
Future work is planned on automating testing of configurations before
pushing to the bastion hosts, and on automating the firewall ACL
optimization.
Tricks You Can Do If Your Firewall Is a Bridge
Thomas A. Limoncelli, Lucent Technologies, Bell Labs
Typical firewalls act as routers, even if the routing function is not
otherwise necessary. The network on one side of the firewall is one IP
network, and the network on the other side is another network. To
applications on either end, the firewall looks like a router. Tom
Limoncelli described his experiences with a firewall that acts as a
bridge (transparent to the applications) instead.
The major advantage of the bridge firewall is that it can be inserted
into the network without requiring a configuration change in any other
devices. This also reduces the need for extra transit networks between
the traditional (nonfirewall) router and the firewall, and allows for
quicker installation (without downtime). In addition, Limoncelli argued
that the software is simplified by not having to implement routing
functions. (Not everyone agreed with this claim.)
According to Limoncelli, his is not a new idea: it was proposed
(and perhaps built?) a long time ago, and then the concept died as
people focused on router-like firewalls. He did not mention who had
done the initial work.
He worked through several scenarios, including replacing a router-like
firewall with a bridge-like firewall without downtime:
Step 1: Program the new firewall with the same rules.
Step 2: Insert into the network (which should take a few seconds
and one extra network cable) on the local-network side of the existing
firewall.
Step 3: Run both firewalls in series. Check the log files on the
inner firewall; it should not filter (remove) any traffic at all.
(The outer firewall should have eliminated all filtered traffic.)
Step 4: Remove the router-like firewall (removing the transit
network). This will require short downtime, as the external router will
have to be reconfigured from the transit network to the local network.
Additional rule testing can be added by deploying the bridge firewall
first on the transit-network side. The existing firewall should show no
packets filtered, as the outer (bridge) firewall should have removed
them.
In addition to replacing existing router-like firewalls (or augmenting
existing firewalls to provide multiple layers of defense), the
bridge-like firewall can be used to add firewalls to existing networks
without reconfiguration. For example, it is easy to add a firewall to
protect a manager's office systems without disrupting the rest of the
network. Unfortunately, it is also easy for someone to add a
bridge-like firewall without consulting the network staff, since the
firewall is transparent to the IP layer.
Many of the questions focused on details of the Lucent product,
although Tom's experience is with the research prototype.
New Challenges and Dangers for the DNS
Jim Reid, Origin TIS-INS
Summary by Ryan Donnelly
According to Jim Reid, the Domain Name System is definitely at an
evolutionary crossroads. The introductions of IPv6, dynamic DNS, secure
DNS, and Windows 2000 have sent DNS administrators around the globe
scrambling to design more effective methods for managing the DNS
infrastructure. In doing so they have encountered innumerable issues
associated with redesigning DNS.
One of the primary concerns with DNS evolution is its hasty marriage to
WINS, the Microsoft DNS-like system for Windows machines. In trying to
integrate the two systems, IETF engineers have had to add multiple new
records and features to the DNS system in order to make the migration
as painless as possible. Two of the primary facilitators of the change
are dynamic DNS and secure DNS.
Dynamic DNS is a process by which hosts are authorized to modify a
given DNS zone file. This is necessary to continue to support the
"plug-and-play" philosophy expected by Windows/WINS users. In
implementing this system, engineers have encountered the problems of
excessive zone transfers that are due to constant zone updates, and the
problem of host authorization. While the excessive zone update problem
can be solved by implementing a periodic zone-transfer schedule, the
issue of authorization requires a more complicated solution: secure
DNS.
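As a concrete illustration of what such an authorized update looks like (using the modern dnspython library, which postdates this talk; the zone, key material, and server address are placeholders):

    # TSIG-signed dynamic DNS update, for illustration only.
    import dns.query
    import dns.tsigkeyring
    import dns.update

    keyring = dns.tsigkeyring.from_text({
        "ddns-key.": "c2VjcmV0LXNlY3JldC1zZWNyZXQ=",  # base64 shared secret
    })

    update = dns.update.Update("example.net", keyring=keyring)
    update.replace("laptop42", 300, "A", "192.0.2.17")  # add/refresh the A record

    response = dns.query.tcp(update, "192.0.2.53", timeout=10)
    print(response.rcode())  # 0 (NOERROR) means the zone accepted the change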
Secure DNS is a method by which keys are assigned to hosts that are
authorized to modify DNS data. While partially solving the problem of
host authentication, the overhead involved in key management and
encryption/decryption is substantial. In addition, storing such
1,024-bit keys in zone files can increase the size of current zone files
by as much as a factor of 10.
Reid concluded his talk by emphasizing the fact that much development
work still has to be done, and that nothing is cast in stone. The
presentation is available at
<https://www.usenix.org/publications/library/proceedings/neta99/invitedtalks.html>.
Problems with World-Wide Networking
Holly Brackett Pease, Digital Isle
Summary by David Parter
Holly Brackett Pease described many of the technical, political, and
logistical challenges of deploying a worldwide private IP backbone.
Digital Isle provides a worldwide IP backbone for its customers,
bypassing the more common but often congested network access points and
Internet exchanges. Instead, they contract for peering with the
appropriate Internet service providers in a particular country as
needed by their customers.
Technical challenges include understanding BGP and the way it is
configured by the various ISPs and backbone providers to assure that
traffic does in fact flow over the correct carrier and that routes are
accepted (as appropriate) by peers and their providers. Another
technical challenge is the differing telecommunications standards and
terminology: X.21 is not a movie rating (but you still don't want to
use it; G.703 is your friend). Actually, X.21 is an International
Telegraph and Telephone Consultative Committee (CCITT) standard
defining the interface between packet-type data terminal equipment and
a digital-interface unit. G.703 is a physical/electrical interface
specification, used for leased lines (E1s, similar to T1s but all
digital) in certain European countries and Australia, and supported by
router manufacturers such as Cisco.
Logistical challenges include getting the network equipment to the site
with the correct interfaces, cables, and everything else that is
needed. Once the equipment has been shipped, it is often necessary to
pay a hefty value-added tax to get it past customs. Pease pointed out
that Fry's does not exist in most parts of the world, and if you can
find a supplier for that missing cable, they won't take your credit
card; they want a purchase order. (Anixter is a good supplier;
they do have worldwide coverage.)
Political issues include local laws covering IP telephony and content.
In one case, they had to register their networks as being from that
country in order to be allowed to peer with networks in that country
despite being a partner of the largest ISP.
There were several questions:
Q: Are the peering agreements with the major players in each country
peering of two equals, or on a payment basis?
A: They pay all partners; this is a technical peering and interchange
agreement, not one ISP trading access to another. Digital Isle is
providing a premium service for their customers; there is a cost. They
don't peer at the local exchanges at all.
Q: Language issues?
A: Most of the ISPs that they have had to deal with have at least one
or two staff with excellent English. If not, they use the AT&T
international translation service.
Q: Content filtering?
A: Mostly not a problem, since Digital Isle already has content rules,
and their customers are Fortune 1000 companies, not ISPs, and have no
downstream customers. IP telephony rules vary from country to country,
and it is necessary to make sure that the network enforces those rules.
The Little NIC That Could
Christopher J. Wargaski, RMS Business Systems
Summary by Ryan Donnelly
In an era of high-growth enterprise networks, NICs (network information
centers) are becoming more and more expensive to run. Chris Wargaski's
presentation on a cost-effective NIC encompassed the idea of a "project
staffing model." In this model, the NIC is the first point of contact
for all IT staff. The NIC is owned solely by the networking department,
and overall responsibility for it is held by a middle-level manager.
Below
such a manager are the on-call manager, NIC analyst, and on-call
analyst.
It is the responsibility of the on-call manager to track problem
occurrence and resolution. Additionally, it is the responsibility of
the on-call manager to report relevant problem-resolution statistics to
upper management.
The NIC analyst is the primary point of contact within the NIC itself.
The analyst is primarily responsible for answering phones and issuing
trouble tickets. The analyst is also responsible for monitoring the
overall health of the network in order to detect network problems
preemptively. When a problem is detected, it is the responsibility of
the NIC analyst to assign the ticket to the on-call analyst.
The on-call analyst is the technologist responsible for fixing the
problem and updating the NIC regarding its status.
In the project staffing model, both the NIC analyst and the on-call
analyst are obtained by rotating engineering/technical staff through
the NIC, as determined by the NIC manager. The manager must consider
extensive input from staff on scheduling conflicts and hold staff
meetings on a regular basis to ensure a constant information flow. It
should be emphasized to staff that training, while important and highly
encouraged, is optional.
The presentation ended with several comments on the topic of NICs as
hindrances as opposed to facilitators.
Splitting IP Networks: The 1999 Update
Thomas A. Limoncelli, Lucent Technologies, Bell Labs
Summary by David Parter
After a rap intro, Tom Limoncelli introduced his talk by commenting
that when he and his co-authors first wrote the Bell Labs network
splitting paper (1997), they were unsure if network splitting was
interesting to anyone and if it would ever be done again. He then asked
if anyone from Microsoft or Hewlett-Packard was in the audience, since
they will (or may) be facing the same issues soon. In fact, the
sysadmin group at Bell Labs has since been called upon to renumber more
networks, involving more total computers, than in the original work.
The original problem was splitting the Bell Labs research networks (on
a computer-by-computer basis) between AT&T and Lucent/Bell Labs,
because of the AT&T "tri-vestiture" (splitting into AT&T,
Lucent, and NCR). In addition, they took the opportunity to upgrade the
networks, reorganize the file servers, and bring the workstations into
a consistent configuration.
The project was a major success, mainly as a result of the amount of
advance planning and effort spent "getting it right" before widespread
implementation.
Prior to actually renumbering the networks, it was necessary to
identify which workstations and file servers were destined for which
networks. In some cases, this was not known for several months, because
of uncertainty about which members of the staff were going to which
company. In addition, data stored on the file servers had to be sorted
and moved to a file server destined for the correct network. An
important tool in this phase was a Web page that allowed users to check
on the status of the project and read information about specific
resources. Additions and corrections were supplied by the users. The
users had less concern about potential disruptions since they were
involved in the planning process.
The information about which workstations and file servers were destined
for which network was also used to populate new NIS netgroups named for
the new networks. The new netgroups were used to control filesystem
exports on the newly reconfigured file servers. By generating the
netgroups from the master list of which workstations were destined for
which network, they eliminated a potential inconsistency.
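A minimal sketch of that generation step (the host, netgroup, and network names here are invented, not Bell Labs'):

    # Derive NIS netgroup entries from the master host-to-network list.
    ASSIGNMENTS = {
        "ws-alice": "att-research",
        "ws-bob":   "lucent-research",
        "fs-home1": "lucent-research",
    }

    def netgroup_lines(assignments):
        groups = {}
        for host, network in assignments.items():
            groups.setdefault(network, []).append(host)
        # Netgroup members are (host,user,domain) triples; only the host
        # field matters when the netgroup controls filesystem exports.
        return ["%s %s" % (net, " ".join("(%s,,)" % h for h in sorted(hosts)))
                for net, hosts in sorted(groups.items())]

    print("\n".join(netgroup_lines(ASSIGNMENTS)))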
The key technology used in the actual renumbering project was
multi-netting all the Ethernets, so that all the original IP networks
and the new networks were on the same physical networks. This allowed
them to renumber workstations one at a time without disrupting all
other workstations on the network.
Repatching data jacks from their current network hub or switch to a
network switch designated for the correct future network was done in
parallel with the renumbering. As all the switches and hubs were
connected together, this was transparent to the users.
When the workstation and fileserver renumbering was thought to be
complete, the connections between the two new (physical) networks were
reduced to a single bridge. Traffic was monitored at the bridge in
order to detect workstations and file servers that ended up on the
wrong physical network. After all cross-traffic (other than broadcast)
was eliminated, the bridge was removed, the two networks were
independent, and the project was completed.
Network Management on the Cheap
Rob Wargaski, RMS Business Solutions
Summary by Ryan Donnelly
Rob Wargaski's talk on cheap network management included a fairly
extensive summary of UNIX network-management power tools and their most
effective uses. He began his talk by discussing two different types of
network management: tactical and strategic. Tactical network management
is essentially day-to-day reactive monitoring and firefighting.
Strategic monitoring is more focused on data analysis and trend
forecasting.
Some of the tactical tools mentioned included ping, traceroute, and arp,
used to assess baseline network connectivity. An additional tool is
K-Arp-Ski, a comprehensive X-based sniffer and network analyzer. It can
monitor an IP/MAC address combination, multiprotocol traffic, and NIC
vendor information. He also mentioned several SNMP-based tools, such as
SNMPSniff and Sniffit, along with the standard UNIX tcpdump.
Discussing strategic network-management tools, Rob first brought up the
topic of enterprise network-management packages such as HP OpenView. He
also mentioned some CMU/UCD character-based SNMP tools with API
interfaces. These included snmpget, snmpgetnext, and snmpwalk. The
combination of these three tools allows a network administrator to
retrieve either a select amount of data from a network device or the
entire tree of device information. Because such data is atomic in
nature, strategic management requires postprocessing of the information
in order to determine trends and typical network behaviors.
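As a small example of the strategic style, assuming the command-line SNMP tools are installed (flag syntax here follows current Net-SNMP releases; the host, community string, and OID are placeholders), one can poll a single counter on a schedule and keep timestamped samples for later trend analysis:

    # Poll one SNMP counter and append a timestamped sample to a log.
    import subprocess
    import time

    def snmp_get(host, community, oid):
        out = subprocess.run(
            ["snmpget", "-v2c", "-c", community, "-Oqv", host, oid],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    def poll_once(logfile="ifinoctets.log"):
        value = snmp_get("router1.example.com", "public",
                         "IF-MIB::ifInOctets.1")
        with open(logfile, "a") as log:
            # Samples are atomic; trends come from postprocessing the log.
            log.write("%d %s\n" % (int(time.time()), value))

    if __name__ == "__main__":
        poll_once()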
The Evolution of VLAN/ELAN Architecture at Vanderbilt University
John J. Brassil, Vanderbilt University
Summary by David Parter
John Brassil described the Vanderbilt University network and its
evolution from a "flat" bridged network to one making use of many
virtual LANs (VLANs), mostly grouped by common services or
administrative structure. Vanderbilt's network includes almost 10,000
computers and will grow soon with the addition of student dorms.
The talk included a large amount of background information on VLANs,
emulated LANs (ELANs), and ATM LAN Emulation (LANE).
The network is continuing to evolve:
- While the IP network is currently not routed, they will switch
to a routed IP network soon. This is difficult because, in the past, IP
address allocation was not well planned.
- They are encouraging users to use local Internet service
providers for off-campus connectivity using cable modems and xDSL, and
are phasing out their campuswide modem pool.
- They have recently added a firewall at their Internet
connection. It is currently being used only for traffic logging, not as
a firewall.
Interoperable Virtual Private Networks (VPNs), Directory Services,
and Security
Eric Greenberg, Seine Dynamics
Summary by Ryan Donnelly
Eric Greenberg presented several different uses and implementations of
Virtual Private Networks (VPNs). Some potential uses include remote
dial-in applications, private networks, and business-to-business secure
networks.
There are several methods for implementing such VPNs. The first is the
PPTP protocol, which provides for multiprotocol tunneling over PPP.
While beneficial for moving such protocols as AppleTalk and IPX over an
IP link, PPTP does not introduce any new security features. L2TP,
however, does.
L2TP was described as the successor to PPTP. It is
transport-independent, making it a much more viable option for networks
that implement various transports, and does not necessarily require IP.
It is also UDP-based and has the added feature of enabling digital
certificates.
X.509 certificates are yet another method of implementing VPN security.
These allow machines within a network to authenticate one another by
means of a unique certificate that is stored on a central directory
server. An offshoot of this method is public key cryptography, which
can be implemented by sending a copy of a public key with the
certificate itself. Such certificates can also be issued by numerous
emerging certificate authorities, such as Verisign.
IPSEC is an additional method of designing secure virtual private
networks. It provides for IP-only tunnelling and is integrated into
IPv6 via the "next-header method."
Greenberg also discussed some possible security-policy and key
management issues. He suggested that certificate and key management be
performed on a central directory server that would hold the
certificates and their associated security policies.
In a VPN, however, security is not the only concern. As traffic levels
continue to rise, quality-of-service (QoS) issues also arise.
Multiprotocol Label Switching (MPLS) allows VPNs to influence QoS
issues by assigning a priority to a certain type of traffic.
Greenberg concluded his talk by analyzing the features of PPTP, L2TP,
and IPSEC, and the environments in which each is most useful.
Internet Measurements
k. claffy, Cooperative Association for Internet Data Analysis at the
San Diego Supercomputer Center, UCSD
Summary by David Parter
k.c. claffy gave an entertaining and blunt talk about Internet
measurements, tools, and interesting findings from CAIDA. She also
provided a lot of answers to what she called "cocktail party" questions:
interesting (but not really useful) facts about the Internet.
According to k.c., the current state of measuring the Internet is
"abysmal science." Because of the growth of the commercial Internet,
researchers can't get access to the network infrastructure and
operational data as well as they could when the primary network was
NSFnet. They do have access to the vBNS, which helps. (vBNS is
very-high-performance Backbone Network Service, a product of a
five-year cooperative agreement between MCI and the National Science
Foundation. See <www.vbns.net>.)
She pointed out that this doesn't stop researchers from building "junk"
and doesn't stop users from doing "random junk." There is a lot of
measurement activity now (at least 11 organized groups), but the groups
don't talk to one another, and at this time there is no correlation of
the data sets (e.g., workload versus traffic).
She identified and described four areas of measurement:
- topology
- workload characterization (passive measurements)
- performance evaluation (active measurements)
- routing (dynamics)
One of the more interesting aspects of her talk was the visualization
tools that CAIDA is using to explore the data. Unfortunately, they are
hard to describe. Readers are advised to visit <www.caida.org>
and look at the samples. Slides from the talk are online at
<https://www.caida.org/Presentations/>.
Some of the "cocktail party" facts:
- Half the packets on the Internet are about 40 bytes long.
- Half the packets on the Internet have a full 1500-byte payload.