Seventh USENIX Security Symposium
SAN ANTONIO, TEXAS
January 26-29, 1998
These reports were first published in the ;login: special issue on security, May 1998. Our thanks to the Summarizers:
Kevin Fu
<fubob@mit.edu>
Gus Shaffer
<shafferz@umich.edu>
Jim Simpson
<freyjs@umich.edu>
Dug Song
<dugsong@monkey.org>
Our thanks to the Photographer:
Darren Reed
<darrenr@cyber.com.au>
Abstracts of the refereed papers
presented at the symposium are at
<https://www.usenix.org/publications/library/proceedings/sec98/technical.html>.
USENIX members may also access the full text.
KEYNOTE ADDRESS
Security Lessons From All Over
Bill Cheswick, Lucent Technologies, Bell Labs
Summary by Dug Song
In a freewheeling, "experimental" talk delivered to an audience plenty
eager to be his guinea pigs, Cheswick offered several keen and often
humorous insights on the topic of security from a variety of
disciplines, including architecture, biology, chemistry, and military
history.
In case study after case study, he illustrated how security paradigms,
exploits, and strategies cross every imaginable boundary; how the
concepts of perimeter defense, backdoors, trojan horses, and bluffing
recur throughout history. He also highlighted security strategies we
can learn from nature, such as multilayered, multifaceted defenses and
the advantages of diversity.
This fascinating, multidisciplinary historical perspective on security
from one of the "seven avatars of the Internet" included 155 slides in
1.5 hours. You can find more information at
<https://cm.bell-labs.com/who/ches/ppt/universal.ppt> and
<https://cm.bell-labs.com/who/ches/>.
REFEREED PAPERS TRACK
Session: Architecture
Summaries by Jim Simpson
Steve Bellovin of AT&T Labs Research chaired this session,
where two papers involving the design and implementation of responses
to security issues were presented. The first paper concerned adaptive
security policies for systems that separate the policy in a Security
server that the kernel must then enforce. The second talked about a new
wide-area authentication and access control system for distributed
applications, CRISIS. CRISIS is currently used as the security
subsystem of WebOS, a system that lends itself to wide-area
implementation of distributed applications through extensions of
operating system functions like security, resource management, and
remote process execution.
A Comparison of Methods for Implementing Adaptive Security Policies
Michael Carney and Brian Loe, Secure Computing Corporation
Loe started off his talk by explaining that this paper stemmed from a
project called Experience with Adaptive Security Policies, funded by
Bell Labs, which was a followup to an earlier program called
Experimentation with Adaptive Security Policies. Some of the methods
discussed in this paper were developed in those original projects. Among
the goals of the program was examining how to change policies in
relatively large systems.
Two types of adaptive security include:
- discretionary access control
- mandatory access control
There are small things you might do with discretionary access that can
be done in relatively simple ways, but the sponsors of this particular
program were more interested in things like globally changing the
security lattice in a system with multilevel security and, in a
nonmilitary situation, defining
ways that rules change when subjects act on objects.
The motivation behind this comes from the idea that you have a large
organization with some sort of security policy and a computer system
with another security policy. You want to make sure that there's a
correspondence between what the organization wants and what the actual
computer does. Most organizations have security policies that change
over time, and you want to make sure the computer system can change as
well, hence the word adaptive.
Examples of when you might want adaptive security policies include the
following:
- Upon intruder detection, you could harden the policy.
- When important information becomes less critical, you might relax the
policy.
- As a banking example, using both: after business hours you could
harden the policy, but during business hours you could relax it so work
could get done.
- A timed release of information could be based on whom it's being
distributed to.
- In a certain task- or process-based policy where a data item is going
through a pipeline, and processes A, B, and C have to do something
specific to that item, but in a certain order, you might have a
permission set that changes each time the data item is manipulated.
- In a Chinese Wall policy, a process has access to a number of data
items belonging to one class, and you want to restrain the flow of
information from one data item to another.
- In role-based policies, something might operate in different roles.
What do you do when those roles overlap?
Implementations were done on the DTOS system; if you want more
information about DTOS, there was a paper presented about it in 1995,
and there was a pointer to it in this particular paper. This project
originally started with a basic security architecture they were working
with that was then abstracted through various changes. The basic
security architecture involves a set of processes requesting services
from a kernel that does not know what the security policy is. The
kernel then forwards a request to a Security server that has the policy
loaded at boot-time from some database. The server then reports the
permissions to the kernel, which enforces them. Essentially, the kernel
queries the server with a "Mother, may I?" and is totally ignorant of
what's going on.
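To make the division of labor concrete, here is a minimal sketch of that flow; the class names and the policy-table representation are illustrative, not DTOS's actual interfaces.

```python
# Minimal sketch of the "Mother, may I?" architecture: the kernel is a
# pure enforcer, and every decision comes from the Security server.

class SecurityServer:
    """Holds the policy, loaded once at boot time from a database."""
    def __init__(self, policy_db):
        # policy_db maps (subject, object, operation) -> allowed?
        self.policy = dict(policy_db)

    def check(self, subject, obj, operation):
        return self.policy.get((subject, obj, operation), False)

class Kernel:
    """Knows nothing about the policy; forwards every request."""
    def __init__(self, server):
        self.server = server

    def request(self, subject, obj, operation):
        if self.server.check(subject, obj, operation):
            return f"{operation} on {obj} performed for {subject}"
        raise PermissionError(f"{subject} may not {operation} {obj}")

server = SecurityServer({("proc1", "file_a", "read"): True})
kernel = Kernel(server)
print(kernel.request("proc1", "file_a", "read"))
```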
This was slightly abstracted by adding a policy engine to the back-end.
This policy engine makes all the security decisions for the kernel. In
some cases, the enforcers of these decisions may not even be the
kernel; they might be fileservers. Still, the basic architecture is the
same: like the kernel in the previous example, they know nothing about
the policies because that information is located in the engine. They
just enforce them. The engine consists of three pieces:
- the server/daemon itself, which the enforcer sends queries to
- the database that the server uses upon boot-up, containing the policies
- the interface to the enforcer
A common question involves who controls the interface to the enforcer.
A few variables have to be taken into account before that question can
be answered. In the first example, there was simply one Security
server. However, there might be multiple servers, and these servers
might have to deal with databases of different complexities.
There are four methods for dealing with these variables when
implementing adaptive security policies; the first three methods deal
solely with one Security server answering at a time, and the fourth
deals with having multiple Security servers answering queries:
- Reload policy: a Security server might have more than one database and
simply load the new policy from a different database.
- Expand database: make the database a little more complex and add logic
to the Security server.
- Hand off control: separate Security servers hand off control of the
interface, changing which server answers the kernels' queries.
- Hierarchy of servers: multiple servers divide up the responsibility for
answering policy questions.
The details of implementation are covered in the paper, and Loe skipped
most of them to save time. However, he did go over the implementation of
the last method, the hierarchy of servers. In this implementation,
various processes out in user space ask the kernel questions, and the
kernel has a mapping from these processes into the hierarchy. That is,
processes 1 through 10 may all refer to server 0, so when they ask the
kernel, the kernel knows which server to talk to. This is more or less a
process-oriented security policy, making it
localized, as opposed to being a global policy for all processes.
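A sketch of that mapping, continuing the illustrative style above (process and server IDs are invented):

```python
# Hierarchy-of-servers sketch: the kernel maps each process to the
# Security server responsible for it, so policy can be per-process
# rather than global.

class PolicyServer:
    def __init__(self, policy):
        self.policy = policy          # (proc, obj, op) -> allowed?
    def check(self, proc, obj, op):
        return self.policy.get((proc, obj, op), False)

class Kernel:
    def __init__(self):
        self.servers = {}             # server id -> PolicyServer
        self.assignment = {}          # process id -> server id
    def check(self, proc, obj, op):
        # The kernel consults whichever server this process maps to.
        return self.servers[self.assignment[proc]].check(proc, obj, op)

kernel = Kernel()
kernel.servers[0] = PolicyServer({("proc1", "file_a", "read"): True})
for proc in ("proc1", "proc2"):       # e.g., processes 1-10 -> server 0
    kernel.assignment[proc] = 0
print(kernel.check("proc1", "file_a", "read"))   # True
```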
Because there are several different methods, they decided to evaluate
them in a trade-off study using five different comparison criteria:
- Policy flexibility: how much can the policy change over a transition?
- Functional flexibility: are transitions transparent? Do things break on
users?
- Security: are there any flaws or holes because of an adaptive policy?
- Reliability: is this a reliable method of transition?
- Performance: not much of an issue for them, so it wasn't covered in
detail.
A hierarchy of servers allows for a great amount of policy flexibility;
the other three methods don't fare as well. The same holds true for
functional flexibility. When you reload new policies in existing
servers, things are going to break on users. There are ample concerns
when it comes to the question of security: how are the mechanisms for
transition controlled? Is there some way for an agent to trigger one of
the mechanisms? Do you know what the current policy is? A hierarchy of
servers doesn't fare as well in this category because with a single
point of control, you can get a better sense of what the security
issues are. With many servers, you have to worry about many more
problems. Finally, reliability seemed good across all cases except
hand-off control, which was prone to deadlock.
In summary, if you are trying to implement a system where small policy
changes will be made, use a more lightweight solution. If you need more
functional flexibility, you should look at a more robust solution, like
a hierarchy of servers.
There were only a few questions, one being: If you load a new policy,
as in option 1, how do you know you can handle it?
Most changes have to be predefined, and if you're using an
implementation where you just reload the policy, changes will be
minimal.
The CRISIS Wide Area Security Architecture
Eshwar Belani and Amin Vahdat, University of California, Berkeley,
Thomas Anderson, University of Washington, Seattle, Michael Dahlin,
University of Texas, Austin
People often ask what CRISIS stands for. Vahdat said it was indeed an
acronym at one point, but it was so forced and contrived that it's since
been dropped, and they now use a diagram to explain CRISIS. Because people
in Washington and California worked on this project, it is only natural
they might be concerned with wide-area security.
Wide-area applications are part of the WebOS project, but WebOS didn't
have a security system, so they couldn't really deploy it. They decided
to implement one, but found that current security implementations
aren't tailored very well to wide-area applications.
Why is it important to have security in wide-area applications?
- the recent introduction of remote code, like Java and agents
- poor connectivity on the net
- lack of trust among those in a wide area
A very useful example that implements this sort of wide-area trust
mechanism is something called Rent-A-Server. Rent-A-Server is a system
that allows overloaded Web servers to replicate themselves over a wide
area in response to excessive client requests. An overloaded Web server
would need to contact another server and have it execute a copy of the
Web server and thus offload its excess.
There are a few security issues involving Rent-A-Server:
- Remote process execution: When a surrogate agrees to run code, it would
still like some guarantee it gets decent code.
- Authorization: How do you prove who's making a request? How do you then
determine whether that person can run a Web server on a surrogate?
- Secure access to sensitive data: You want to be sure an attacker has to
subvert more than one node to get access to sensitive data.
In a wide area, it's also good to have a fine-grained transfer of
rights. Short-term local rights are ideal for dealing with this; you
want to avoid modifying access control lists to avoid errors caused by
things like slow links in conjunction with invalidating central
authorities.
Here are a few other applications they targeted:
- SchoolNet is a mechanism allowing students in various schools to
communicate with each other. Issues include how they authenticate to
each other and how they talk to one another without publishing a list
of members that can be talked to.
- Wide-area collaboration includes development of joint source code. How
do you securely access the code in one tree from several different
places?
More applications are included in the paper.
Given these issues and their requirements, they looked at a number of
alternatives before setting out to build their own system:
- using a secure login system giving everyone accounts (but this has no
fine-grained control over access rights and has the overhead of having
to be done everywhere)
- Kerberos (but this requires synchronous communication whenever you wish
to set up secure communication and doesn't scale well)
- SPKI (but this doesn't take into account the availability of remote
code)
Major components of this new system are:
- transfer certificates: lightweight, revocable capabilities that allow
for delegation and application of rules and support fine-grained rights
transfer
- flexible support for trade-offs among security, performance, and
availability through caching
- revocation as a first-class operation, in both common and exceptional
cases
- requests that carry all the terms specifying whether or not they
should be authorized, which simplifies accountability
On each node in CRISIS, there is a security manager responsible for
mapping process IDs running on that node to a set of privileges that
describe all the rights associated with that running process. The
security manager also periodically contacts a Certification Authority
and an On-Line Agent (OLA) to describe these privileges. They used the
X.509 certificate format along with SSL for communication. The
certificates are public-key based and have a dual-endorsement mechanism:
CAs sign identity certificates with a long time-out and are kept
offline, because offline entities are harder to attack, while the OLA
signs certificates with shorter time-outs.
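A minimal sketch of the dual-endorsement check, with illustrative time-out values and real signature verification elided:

```python
import time

# Sketch of CRISIS-style dual endorsement: a certificate counts as valid
# only while both the CA's long-lived signature and the on-line agent's
# short-lived endorsement are unexpired. A real system would verify
# X.509 signatures; the time-outs below are invented for illustration.

LONG_TIMEOUT = 30 * 24 * 3600    # CA endorsement: e.g., a month
SHORT_TIMEOUT = 3600             # OLA endorsement: e.g., an hour

def is_valid(cert, now=None):
    now = now if now is not None else time.time()
    return (now < cert["ca_issued"] + LONG_TIMEOUT and
            now < cert["ola_endorsed"] + SHORT_TIMEOUT)

cert = {"subject": "texas", "ca_issued": time.time(),
        "ola_endorsed": time.time()}
print(is_valid(cert))                        # True
print(is_valid(cert, time.time() + 7200))    # False: OLA endorsement lapsed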
Three problems exist in building this:
- Fine-grained rights transfer. This is achieved in CRISIS with transfer
certificates. This simplified implementation of delegation is necessary
because there will be different levels of trust for different nodes.
- Mutual authentication. A hybrid ACL/capability approach to
authentication is used in CRISIS. ACLs maintain a list of authorized
principals, and transfer certificates grant certain capabilities to
certain principals. This allows you to avoid modifying ACLs. The
reference monitor is responsible for validating time, ensuring that
every principal along the chain of transfer is trusted and that all
organizations endorsing principals are trusted.
- Revoking rights. A currently trusted server should not be able to come
back in two weeks and access data. Revocation is a first-class
operation in CRISIS, so all you need to do is inform the online agent
responsible for endorsing a particular certificate that the certificate
should no longer be endorsed. Because a transfer certificate may be
cached for the time-out of the online agent, you might revoke something
that someone has cached and that is still valid. The solution to this is
setting the time-out to be shorter.
A problem that illustrates these solutions is as follows: say there's
an overloaded Web server in Berkeley that wishes to enlist the help of
a server in Texas. The Berkeley server signs a transfer certificate
stating that a certain privilege for accessing the transfer database is
granted to Texas. Texas transmits two certificates, one being an
identity certificate saying that some CA says that the key of that
certificate speaks for Texas, and a transfer certificate that says
Berkeley says that Texas can access the database. The reference monitor
is then able to validate this, and the ACL contains only the entry for
Berkeley. When the session is finished, Berkeley informs the OLA that
it should no longer endorse the transfer certificate for Texas.
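The chain validation in this example might look roughly like the following sketch; the field names and the `authorize` helper are hypothetical, and signature and time-out checks are elided:

```python
# Sketch of validating a transfer-certificate chain against an ACL that
# names only the original grantor (here, "berkeley"). Each link states
# "grantor says grantee may exercise this privilege"; the reference
# monitor accepts the request if the chain starts at an ACL entry, stays
# connected, and every grantor is trusted.

def authorize(acl, chain, requester, privilege, trusted):
    if not chain or chain[-1]["grantee"] != requester:
        return False
    if chain[0]["grantor"] not in acl.get(privilege, set()):
        return False                      # chain must start at the ACL
    for a, b in zip(chain, chain[1:]):
        if a["grantee"] != b["grantor"]:
            return False                  # links must form a chain
    return all(link["grantor"] in trusted and
               link["privilege"] == privilege for link in chain)

acl = {"db-access": {"berkeley"}}         # only Berkeley is on the ACL
chain = [{"grantor": "berkeley", "grantee": "texas",
          "privilege": "db-access"}]
print(authorize(acl, chain, "texas", "db-access",
                trusted={"berkeley"}))    # True: Texas holds a valid chain
```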
They wanted to build CRISIS as something that would be compatible with
existing applications, so it's built as a user-level security manager
and as a library that can be linked into applications. Actual
implementations of these systems are gone over further in the paper;
these include accessing remote files and running remote jobs.
WebOS can be found at: <https://now.cs.berkeley.edu/WebOS>.
There were a few questions, including: What is performance like?
The paper includes numbers. They found that performance is limited by
the encryption/decryption, but in the common case, this is quite fast
because they can cache certificates. You're thus limited by the
performance of the network.
What happens if the OLA is killed off?
If the OLA is killed off, endorsements can no longer happen after the
time-out, and revocation happens by default.
Session: Intrusion Detection
Summaries by Kevin Fu
Bro: A System for Detecting Network Intruders in Real-Time
Vern Paxson, Lawrence Berkeley National Laboratory
Named for its Orwellian potential for privacy violations, Bro detects
intrusions by passively monitoring network traffic. Particular traffic
patterns cause a monitor to make intrusion announcements in realtime.
Vern is a member of the LBNL Network Research Group and the author of
flex, a program for generating lexical analyzers for compiler front
ends. Vern has been developing Bro since 1995, and he easily earned the
best paper award.
Bro's design considers seven goals: high-speed, large-volume
monitoring; resistance to dropping packets; realtime notification;
separation of mechanism from policy; extensibility; avoidance of simple
policy mistakes; and tolerance for attacks on the monitor. Vern gave
the example of the large LBNL network, in which security need not be
airtight, but intrusions must be detectable. Offline intrusion
detection has its own applications, but it is nowhere near as good as
realtime notification. Vern separates mechanism from policy to easily
allow policy changes in the future. Moreover, an explicit policy
language helps avoid simple mistakes.
Bro consists of several layers. At the bottom, the network layer feeds
a packet stream to the libpcap library (tcpdump uses this library).
After some initial filtering, packets arrive at the event engine,
which is controlled by the policy script interpreter. The event engine
filters out unwanted traffic. For instance, you may care only about FTP
or portmapper packets. Finally, the interpreter makes intrusion
announcements in realtime. A figure in the paper better describes the
interface between each layer.
Key to the design of Bro is a policy language. This strongly typed
language aims to catch policy errors at compile time. One notable
feature protects against malicious strings. Rather than terminating a
string with a NUL character, Bro represents a string by a vector of
bytes and a byte count. If Bro were to use NUL-terminated strings, an
intruder could trick a monitor with a string such as USER
nice\0USER root. No explicit looping mechanism (besides recursive
calls) exists because Bro cannot afford the execution time, and loops
open up denial of service attacks. The single-threaded implementation
offers timer management (10,000 easily), interpretation instead of
compilation, checkpointing, and the ability for offline analysis. The
paper gives an overview of the language and offers several policy code
snippets.
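A small illustration of the string-representation point, contrasting a NUL-terminated reading of the example payload with Bro's byte-vector-plus-count view:

```python
# Why Bro stores strings as (bytes, length) rather than NUL-terminated:
# a C-style scanner stops at the first NUL, so an embedded NUL hides the
# second command from a naive matcher.

payload = b"USER nice\x00USER root"

# NUL-terminated view: everything after the first NUL is invisible.
c_style = payload.split(b"\x00", 1)[0]
print(c_style)                    # b'USER nice' -- looks harmless

# Byte-vector-plus-count view: the full payload is inspected.
print(len(payload), payload)      # 19 bytes; both USER commands visible
print(b"USER root" in payload)    # True -- the trick is detected
```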
The mere existence of a monitor invites attacks. Bro defends itself
against three categories of attacks. An overload attack forces the
monitor to drop packets so that an intruder can sneak in unnoticed. Bro
defends against this attack by doubting the monitor's capabilities; the
monitor will shed load if necessary. A second attack causes the monitor
to crash. An intruder could cause fatal errors by exploiting bugs or
making the monitor run out of resources. In defense, a monitor uses a
watchdog timer and records a traffic audit trail. Vern notes that
crashes are hard to prevent because a single bug in any program can
open up attack avenues. A third attack involves subterfuge. A nasty
intruder could mislead the monitor. For example, an intruder may
fragment IP datagrams in hopes that the monitor will fail to reassemble
the fragments. The paper devotes an entire page to this difficult
problem. Vern also explained that security through obscurity alone is
not sufficient, but you should not advertise a policy script because
doing so could reveal local bottlenecks to an intruder.
Wrapping up his talk, Vern gave the current status of Bro. Currently,
Bro can analyze four specific applications: finger, ftp, portmapper,
and Telnet. As for performance, Bro easily handles a 25Mbps FDDI ring
during busy hours (the statistic in the paper is incorrect; the paper
claims 50Mbps). Generally, the system drops no packets. However, the
event engine processes about 200 packets/second after the filter
discards unwanted packets. At LBNL, the monitor generates about 40MB of
connection logs and 20 realtime notifications each day. This resulted
in several CERT reports and many account deactivations. The source is
available upon request. See
<https://www-nrg.ee.lbl.gov/nrg-papers.html> for more information.
A participant asked whether Bro could handle the capacity of ATM and
switched technology. Vern answered that, in such a case, you can deploy
multiple coordinated boxes. He does not expect any problems arising
from ATM packet assembly. Another audience member asked whether Bro
would overload if placed on a gateway. Vern explained that such a
system could be a reactive firewall or could talk to a firewall or
router. You want to filter out the majority of traffic.
Cryptographic Support for Secure Logs on Untrusted Machines
Bruce Schneier and John Kelsey, Counterpane Systems
Bruce Schneier, president of Counterpane Systems, presented a paper on
secure
logs. Essentially, the secure logging system detects modifications to
logs on an untrusted machine. Drawing analogies to the legal system,
Schneier explained that "we don't prevent crime in this country; we
detect it." The same idea is put forth in secure logging. Schneier's
work includes the authorship of "Applied Cryptography" and creation of
the Blowfish cipher. Instead of presenting slides in ascending order,
he counted downward because he claims the audience "really wants to
know when they can get out." Incidentally, I saw Schneier writing his
slides during the previous session. In any case, his talk went well,
and he made the audience roll with laughter.
In the general model, an untrusted machine, U, talks to a trusted
machine, T. An ATM machine or smart card qualifies as an untrusted
machine. U maintains audit logs, occasionally commits logs to T, and may
be compromised at any time. If an intruder attacks U at
time t, the intruder cannot undetectably alter log entries before time
t.
There are limits to possible solutions. First, no security measure can
protect U after time t. An intruder can do anything after time t.
Second, if T and U communicate online continuously, the problem no
longer exists. Third, cryptography alone cannot prevent deletion.
Write-only hardware, such as printouts or WORM drives, can prevent
deletion.
Schneier identified a few wrong ways to protect logs. The logger could
encrypt log entries, but then the key is stored on U (ignoring
performance-heavy public key crypto). MACs (Message Authentication
Codes) suffer from the same problem. A logger could use hash chains,
but an intruder can fake them.
The approach in this paper combines MACs and encryption keys for an
irreversible process. A complicated diagram depicts the addition of
entries to a log file. Several variants of the logger exist (offline
versions, phone transmission medium). See the paper for an outline of
the protocol.
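The core forward-security idea can be sketched briefly: MAC each entry under the current key, then replace the key with a one-way hash of itself. This is only the skeleton; the paper's actual protocol also encrypts entries and links them in a hash chain.

```python
import hashlib
import hmac

# Forward-security sketch: an intruder who captures U at time t holds
# only the current key and cannot invert the hash to recover earlier
# keys, so entries written before t cannot be undetectably altered.

def evolve(key):
    return hashlib.sha256(b"key-evolve" + key).digest()

key = hashlib.sha256(b"initial secret shared with T").digest()
log = []
for msg in [b"login alice", b"read /etc/passwd", b"logout alice"]:
    tag = hmac.new(key, msg, hashlib.sha256).hexdigest()
    log.append((msg, tag))
    key = evolve(key)   # old key is destroyed; the step is one-way

# T, knowing the initial key, can rederive every per-entry key and
# verify the whole log; U can no longer forge tags for past entries.
print(log[0])
```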
An audience member questioned whether this system can detect intrusions
at the time of attack. Schneier explained that the secure logger does
not exactly detect attacks; that's a "hardware problem." If an intruder
can get into a system without generating log entries, then this system
cannot help. Another participant pointed out that an electronic wallet
can be attacked at time t=0. In Schneier's scheme, he assumes the owner
is an attacker. It is a vulnerability. Questioned about the difference
between irreversible hashing and hash chains, Schneier responded that
an attack could delete log entries in reversible hash chains. Chains
allow easy forward hashing. Schneier's logger uses hash chains, but
prepends extra information. Finally, an audience member asked why the
system does not use public-key encryption. If you have small logs
infrequently accessed, would not PK encryption help? Schneier replied
that PK encryption is too slow and annoying for the goals of this
secure logger.
In the future, Schneier hopes to provide better multiparty logging,
distributed trust, and anonymization. His talk explained the problem of
secure logging well, but the logger itself could not be fully explained
in 25 minutes. Schneier has online information at
<https://www.counterpane.com/secure-logs.html>.
StackGuard: Automatic Adaptive Detection and Prevention of
Buffer-Overflow Attacks
Crispan Cowan, Calton Pu, Dave Maier, Jonathan Walpole, Peat Bakke,
Steve Beattie, Aaron Grier, Perry Wagle, and Qiang Zhang, Oregon
Graduate Institute of Science & Technology; Heather Hinton, Ryerson
Polytechnic University.
Leader of the IMMUNIX system survivability group and one of the few
tie-wearing USENIX attendees, Crispan Cowan talked about a general
method to prevent buffer-overflow attacks. System administrators
typically patch vulnerable programs one at a time. Taking a
proactive approach, StackGuard modifies the compilation process to
prevent most overflow attacks.
A buffer overflow can allow local accounts (or compromised accounts) to
obtain root privileges. In the standard buffer-overflow attack, kids
use cookbook methods to feed a long string into a function that does
not check array bounds. In doing so, an attacker can inject code into
the stack, overwriting the return address. When the function returns,
the system jumps to the memory location of choice, typically code to
obtain a root shell.
StackGuard uses a compilation technique to prevent overflows. The
method is so simple you will cry: detect changes to the return address
by pushing one more word on the stack. StackGuard pushes a "canary"
value (a direct descendant of the Welsh miner's canary) after the
return address. When returning, a function checks whether the canary
value has changed. The canary must be secret, randomly generated at
execution time, and decidable. The last two conditions sound
contradictory to me, but I haven't looked under the hood.
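A toy simulation of the canary check (the real StackGuard is a GCC patch that emits this test in the function epilogue; the list-based "stack" here is purely illustrative):

```python
import os

# The canary sits next to the return address, so an overflow that runs
# far enough to overwrite the return address also clobbers the canary.

CANARY = os.urandom(4)            # random word chosen at execution time

def make_frame(return_addr, buf_size):
    # [buffer ...][canary][return address]
    return [b"\x00"] * buf_size + [CANARY, return_addr]

def function_return(frame):
    if frame[-2] != CANARY:
        raise RuntimeError("stack smashing detected: canary changed")
    return frame[-1]

frame = make_frame(return_addr="legit", buf_size=4)
# Attacker writes 6 words into a 4-word buffer, reaching the return slot:
for i, word in enumerate([b"A"] * 5 + ["shellcode"]):
    frame[i] = word
try:
    function_return(frame)
except RuntimeError as e:
    print(e)          # halts with a warning instead of jumping to shellcode
```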
MemGuard, a variant of StackGuard, has granularity to protect a single
word in memory. Unfortunately, MemGuard has disappointing results.
Cowan admitted that the "5,400% to 8,800% overhead probably is not
worth it." But StackGuard requires a simple patch to GCC, which emits a
little more in the function prolog and epilog. StackGuard function
calls are more expensive, but GCC compilation time does not appear
affected by StackGuard. GCC does not spend much time on function calls.
Experimental evaluation shows that StackGuard defends against future
attacks. Cowan's group tried exploits from bugtraq, Linux security
announcements, and <comp.unix.security>. In most cases,
StackGuard detects an overflow, halts, then warns the user. For
instance, the Samba overflow came out after the implementation of
StackGuard, yet StackGuard detected the attack. However, StackGuard
fails to detect attacks where the return address is not attacked. For
instance, the SuperProbe overflow involves the rewrite of a function
pointer. Cowan gave a wonderfully simple demonstration of StackGuard.
Typing just a few commands, he coaxed a root shell out of an unaltered
dip binary. When compiled with StackGuard, dip dumps core and warns of
the overflow.
Cowan identified two remaining problems: restarting daemons and
responding to an attack. Halting is not too important for daemons
started by inetd. But persistent daemons such as sendmail or inetd need
a restart mechanism. A watchdog program could restart such services.
The second problem involves what to do after detecting an attack. A
more restrictive program could be started (e.g., using MemGuard). This
could result in denial of service. StackGuard is not perfect;
buffer-overflow problems can get through. But StackGuard is effective
against overflows you do not know about. This buys you time to apply
patches, which you should still do anyway.
An audience member asked whether an attacker could guess every canary.
Cowan explained that two versions of the pseudorandom number generator
exist. One uses seeding from the time-of-day (cringe!) and another uses
/dev/random. Luckily, StackGuard will not give an attacker a
chance to guess a canary twice. Given that the canary is a 32-bit word,
attackers will have to try to get around the canary rather than attack
it directly.
Overflow attacks usually overwrite data pushed earlier on the stack. To
protect against a downward attack, return addresses could live on a
separate stack; Cowan's group considered this potentially useful but did
not do it.
Another audience member asked why detect overflows instead of
preventing overflows in the first place. Cowan replied that you would
have to check on every write. You could check array bounds for every
write, but StackGuard checks just once per function call. You may find
lower overhead if reads occur more often than writes.
I note that buffer-overflow attacks appear benign at first. But when
combined with local account compromises, a single buffer-overflow
(hundreds are known to exist) can turn your system into Swiss cheese. The
common attack works as follows: an intruder will sniff a password (we
all use encrypted Telnet, right?), exploit a cookbook overflow, install
a packet sniffer, then repeat the first step.
Related work includes Snarskii's FreeBSD libc fix and Solar Designer's
nonexecutable stack. Cowan notes that StackGuard and other solutions
can combine for better protection. In the future, StackGuard hopes to
protect nonstack data structures, integrate with intrusion detection
software, and secure a Linux distribution. StackGuard is GPL'ed and
currently works on the x86 architecture. A whole range of IMMUNIX
projects can be found at
<https://www.cse.ogi.edu/DISC/projects/immunix/>.
Data Mining Approaches for Intrusion Detection
Wenke Lee and Salvatore J. Stolfo, Columbia University
Wenke Lee, a PhD student of Professor Stolfo, presented the paper on
detecting intrusions by data mining. When one applies data mining to
intrusion detection, a systematic framework allows the construction of
adaptive detection models. Unfortunately, it was difficult to hear
Lee, but his paper explains most topics in the talk.
Lee aims to avoid manual or ad hoc intrusion detection mechanisms. Such
mechanisms fall into two categories: misuse detection and anomaly
detection. The problem with misuse detection is that someone must
manually code known intrusion detection patterns. Moreover, new
patterns are not detectable. In anomaly detection, someone must select
the right set of system features for measurement. No solid guidelines
exist, just experience and intuition.
Why use data mining? Intrusions usually leave trails of audit data. For
example, cellular phone and credit card companies use data mining
approaches to detect fraud. In data mining, you select appropriate
features to audit. Lee chose data from sendmail function call traces
and tcpdump output. The rest of the system consists of a low-level
learning agent, base detection agent, and meta detection agent. An
agent will extract connection-level features such as the number of
connection attempts, the method of
termination, etc.
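A sketch of what such connection-level feature extraction might look like; the record fields are invented for illustration:

```python
# Per-source counts computed from connection records, over which a
# learning agent could induce classification rules.

records = [
    {"src": "10.0.0.5", "dst_port": 23, "flag": "rejected"},
    {"src": "10.0.0.5", "dst_port": 25, "flag": "rejected"},
    {"src": "10.0.0.7", "dst_port": 80, "flag": "normal"},
]

def features(records, src):
    mine = [r for r in records if r["src"] == src]
    return {
        "attempts": len(mine),
        "rejected": sum(r["flag"] == "rejected" for r in mine),
        "distinct_ports": len({r["dst_port"] for r in mine}),
    }

print(features(records, "10.0.0.5"))
# {'attempts': 2, 'rejected': 2, 'distinct_ports': 2}
```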
Experimental results show single-digit misclassification rates for
local traffic, but misclassification rates as high as 22% for inter-LAN
communication. However, the addition of temporal statistical features
reduces misclassification. Specifically, small window sizes for longer
sampling periods produce better classification. One problem involves
the selection of an optimal window size. A classifier should learn
trends and patterns. For instance, it should predict whether you will
visit Web site C after visiting Web sites A and B.
Lee is just at the beginning of the project and hopes to pursue further
work in data mining approaches to intrusion detection. Currently, he is
testing the system's effectiveness against known intrusions.
After the session, Vern Paxson and Lee discussed ideas as USENIX
attendees flocked around them like ants. Maybe we will see a hybrid of
data mining and network monitoring in the future. You can find more
information at
<https://www.cs.columbia.edu/~sal/hpapers/USENIX/usenix.html>.
Session: Network Security
Summaries by Gus Shaffer
Each of these three presenters introduced an unanswered issue in
network security protocols and presented a full implementation of a
solution.
Securing 'Classical IP Over ATM Networks'
Carsten Benecke and Uwe Ellermann, Universität Hamburg
With the increasing trend toward ATM networks and the desire for
IP-based services on these networks, many new security issues have
arisen. New services, such as ATMARP (ATM Address Resolution Protocol),
have been required. One solution that many try is the use of a
tried-and-true IP security device, the firewall. But if used only in
conventional ways, the firewall can have certain disadvantages in ATM
networks. If placed between switches, it gives good security, but does
not allow "native ATM" services between subnets. If coupled with one
switch, "native ATM" services are available, but maintaining security
requires a much deeper integration of firewall and switching services.
This paper discusses the risks and possible solutions for this
approach, including the following:
- A more secure ATMARP service. One suggestion for securing the service
lies in the configuration of the ATMARP server. It is preferable to
install an ATMARP server on a host in each LIS (logical IP subnet), as
opposed to on the switch, for multiple reasons. One can control access
to the host much more easily than to the switch. The security features
of a host-based server can be easily increased with software, whereas
one must rely on firmware on a switch. Giving each LIS its own ATMARP
server lets it restrict access according to its own private security
policy.
- Integration of the switch into a firewall concept. By generating a
direct ATM connection, an external host may totally circumvent a
firewall. The authors suggest using access control lists to prevent
unwanted connections from external hosts using this technique.
Benecke and Ellermann discussed more weaknesses and other possible
solutions. They also pointed out that address filtering in the switch
is not enough to enforce typical security policies and that switches
should be used in combination with firewalls. You can find more information
at: <https://www.cert.dfn.de/fwl>.
A Java Beans Component Architecture for Cryptographic Protocols
Pekka Nikander and Arto Karila, Helsinki University of
Technology
Nikander and Karila presented a new practical architecture and an
implementation framework based on the Beans architecture and security
API of JDK 1.1 and the Conduits+ protocol framework for implementing
application-specific protocols with relative ease. These systems can be
securely executed in a "protocol sandbox" and can be securely downloaded
over the Internet and used without any prior arrangements or software
installations. This new Bean-based system allows you to combine many
small atomic units to create sophisticated protocols, making switches
between encryption or authentication methods relatively easy (see the
sketch below).
Java Beans support so that a graphical Beans editor could be used to
compose protocols. The current implementation is the third in a series,
a Java Bean-enabled version of the second prototype, which was a total
rewrite of the first. It consists of 43 public classes, about 3,800
lines of Java source code, 3,040 lines of which were generated using a
UML-based tool. You can find more information at:
<https://www.tcm.hut.fi/~pnr/jacob/>.
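The composition idea can be sketched as a pipeline of small interchangeable stages; the stage names and the toy XOR "cipher" below are illustrative only, not the Conduits+ API:

```python
# A protocol assembled from small stages, so swapping the encryption or
# authentication method means replacing one stage.

class Stage:
    def down(self, data):        # outgoing direction
        return data
    def up(self, data):          # incoming direction
        return data

class XorCipher(Stage):
    def __init__(self, key):
        self.key = key
    def down(self, data):
        return bytes(b ^ self.key for b in data)
    up = down                    # XOR is its own inverse

class Protocol:
    def __init__(self, stages):
        self.stages = stages
    def send(self, data):
        for stage in self.stages:
            data = stage.down(data)
        return data
    def receive(self, data):
        for stage in reversed(self.stages):
            data = stage.up(data)
        return data

proto = Protocol([XorCipher(0x5A)])    # swap in a stronger stage later
wire = proto.send(b"hello")
print(proto.receive(wire))             # b'hello'
```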
Secure Videoconferencing
Peter Honeyman, Andy Adamson, Kevin Coffman, Janani Janakiraman,
Rob Jerdonek, and Jim Rees, Center for Information Technology
Integration, University of Michigan, Ann Arbor
Researchers at CITI have worked to develop a secure videoconferencing
system that is not only strongly encrypted and fast enough to give full
motion video, but does all this in a provably secure way. In order to
achieve this goal, they used VRA, a stream cipher invented by Bellcore
cryptographers (Venkatesan, Rajagopalan, and Aiello). To store and
distribute keys securely, they used the Shoup-Rubin key distribution
protocol, which is a smart card-based key distribution protocol between
two communicating parties and a trusted third party. Shoup-Rubin was
proved not to disclose session keys, even to an extremely powerful
adversary, by its creators using Bellare and Rogaway's complexity
theory techniques. The encryption and key exchange algorithms were then
added to VIC, a freely available MBONE videoconferencing tool, using
GSS (Generic Security Service) API.
All of this was achieved on commodity hardware, Pentium 166s running
Windows 95, where the bottlenecks turned out to be MPEG video encoding
and decoding, and Wintel data movement; the encryption made no
difference in throughput. But in measurements of time spent in
encryption, VRA more than doubled RC4's throughput.
audio to VIC and possibly add a key revocation mechanism to
Shoup-Rubin. You can find more information at:
<https://www.citi.umich.edu>.
Session: Distributed Systems
Summaries by Jim Simpson
Hilarie Orman of DARPA/ITO chaired this session concerning the
implementation of security policies in various parts of a system,
involving the interaction of various software agents. The first paper
discussed a proposed solution to the inherent problem with security
policies in distributed systems caused by heterogeneous subsystems that
have little or no knowledge about each other. The second involved an
operating system security model used to control programs such as
downloaded executables. It compares this model with language-based
security models. The last paper in this session discussed methods to
alleviate loopholes in the current implementation of Java.
Unified Support of Heterogeneous Security Policies in Distributed
Systems
Naftaly H. Minsky and Victoria Ungureanu, Rutgers University
The general idea behind this talk concerned the fact that modern
distributed systems are made up of many different subsystems which may
have been designed separately, without knowledge of each other. They
also might be
governed by different security policies. This may cause a problem when
a single software agent within
the system finds that it must interact with, or belongs to, many of
these subsystems and must accordingly be subject to disparate
policies.
To remedy the potential problems of this situation, Minsky stated that
their solution proposes a security scheme under which policies are
defined explicitly and are enforced by a unified mechanism. Each policy
under this scheme specifies what it deals with and how it deals with
it. It is currently implemented in an experimental toolkit called
Moses, which supports a number of policies such as:
- discretionary models that use capabilities or ACLs
- mandatory lattice-based control models
- sophisticated models required for commercial and clinical applications
The motivation for this unified mechanism is best illustrated with the
attempted implementation of two policies designed for commercial
application in a distributed system:
- the Chinese Wall policy, which, implemented in MLS, does not lend
itself to distributed systems because data may belong to different
administrative domains (an ACL or capability-based implementation does
not work within distributed systems because it does not scale well)
- the Sealed-Bid Auction policy, which does not lend itself to
traditional security schemes
A new sort of security mechanism can implement both of these policies.
This system views a security policy as made up of a triple that
includes:
- the set of messages regulated by the policy
- a distributed group of agents that can send and receive these messages
- a set of rules governing the exchange of the messages between the
agents
To implement a policy, several controllers are set up that interpret
the rules for the policy, which is stored on one or more servers,
called a secretary. This server also maintains the current control
state of each distributed agent. When a process needs to do something
in the system using a particular policy, it must first connect to the
server that stores the policy and ask to be associated with some
distributed agent. Once the process authenticates and the server okays
the connection, the server assigns the process to a controller and
sends it the current control state and public key of the agent. There
are several ways the process can authenticate:
- a simple password
- an X.509 certificate
- an SDSI certificate
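A rough sketch of the registration flow just described, with invented names and a trivial password check standing in for the real authentication options:

```python
# A process authenticates to the secretary holding the policy, is
# assigned a controller, and receives the agent's current control state.

class Controller:
    """Interprets the policy rules locally for one agent."""
    def __init__(self, rules):
        self.rules = rules
    def may_send(self, message, control_state):
        return all(rule(message, control_state) for rule in self.rules)

class Secretary:
    def __init__(self, policy_rules):
        self.rules = policy_rules
        self.state = {}                      # agent -> control state
        self.passwords = {"alice": "s3cret"} # illustrative credential store

    def register(self, user, password, agent):
        if self.passwords.get(user) != password:
            raise PermissionError("authentication failed")
        return Controller(self.rules), self.state.setdefault(agent, {})

secretary = Secretary([lambda msg, st: msg != "forbidden"])
controller, state = secretary.register("alice", "s3cret", "agent-1")
print(controller.may_send("hello", state))   # True
```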
If a secretary or a controller fails, a replica may serve in its
place. This system scales well because the policy is
enforced locally by each agent, so the number of agents in the group
has no direct effect on how they work with each other. This form of
distributed trust has many benefits:
- The creation of policies is simplified, and some nontraditional
schemes become possible.
- A single agent can work with many different policies.
- Enforcing policies is more efficient.
- The system scales well.
There were a few questions, most involving how the different parts of
the system worked, which are gone over in the paper. Minsky politely
thanked each person asking a question for the question, and then began
his explanations. One of the more interesting answers, one I
surprisingly often heard, was in response to a question about how you
guarantee that the authenticating client goes through a controller to
get to the server. The answer: there are no guarantees.
Operating System Protection for Fine-Grained Programs
Trent Jaeger, Jochen Liedtke, and Nayeem Islam, IBM T.J. Watson
Research Center
Jaeger explained that his talk would be about phase 1 of a project
called the Lava Operating System. Lava has a small kernel, and the goal
of the operating system is to build a fine-grained access control
system into it.
They first took a look at implementing fine-grained access control
through a system of monitors that store the permissions of their
contents, implement transformation of those permissions, intercept IPCs
sent to their contents, determine the authorization requirements for the
operation in the IPC, and authorize the operation using the content's
permissions.
This operating system has support mechanisms for control, but because
they didn't want to deal with the complexities of language-based
mechanisms, they used a hardware-based mechanism. They do mediation
over IPCs, which allows the operating system control over the mediation
because it can intercept IPCs.
Concepts that they are building on include:
- programs that want to perform actions upon objects
- permissions based on principals that allow operations to be performed
Say there is a requester that wants to use a component in an
application. The components are downloaded and authenticated, and the
principals are assigned to the components. They are then loaded via
some path, and these paths are intercepted by a monitor that controls
the access rights the process has. When a monitor intercepts an IPC and the
operation authorization semantics have determined that an operation
should be authorized, the monitor uses the permissions from the process
to do so. Authorization capabilities cannot be forged because monitors
accept them only from process load servers or other trusted monitors.
Monitors can be assigned to multiple processes.
To put these monitors to use in implementing security policies, Lava
utilizes IPC redirection. Any IPC to a process is always handled by the
monitor to which that process is assigned. This aids in things like
mandatory access policies and suspicious process isolation. Redirection
is enforced by the kernel for security reasons. Monitors will drop any
request that does not conform to the security policy; they also will
not accept unauthenticated messages for processes they control,
thus protecting the system from outside attacks. This architecture
allows for the building of hierarchical domains of control.
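A minimal sketch of the redirection idea: the kernel always hands an IPC to the destination's monitor, which forwards or drops it. All names are illustrative, not Lava's actual interfaces:

```python
# The monitor forwards conforming, authenticated requests and drops
# everything else; redirection is enforced by the kernel.

class Monitor:
    def __init__(self, permissions):
        self.permissions = permissions         # set of (principal, operation)

    def handle(self, ipc):
        if not ipc.get("authenticated"):
            return None                        # drop unauthenticated IPC
        if (ipc["principal"], ipc["operation"]) not in self.permissions:
            return None                        # drop nonconforming IPC
        return f"{ipc['operation']} delivered to {ipc['dest']}"

class Kernel:
    def __init__(self):
        self.monitor_of = {}                   # process -> Monitor

    def send(self, ipc):
        # The destination's monitor always sees the IPC first.
        return self.monitor_of[ipc["dest"]].handle(ipc)

kernel = Kernel()
kernel.monitor_of["proc7"] = Monitor({("alice", "read")})
print(kernel.send({"dest": "proc7", "principal": "alice",
                   "operation": "read", "authenticated": True}))
```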
Jaeger then went into performance/response times of various actions a
monitor might take, such as
intercepting IPCs and working with authorization mechanisms. These are
detailed further in the paper; however, the conclusion is that the
overhead might seem to be high, but fast IPC and efficient
authorization mechanisms help bring the overhead down to a reasonable
amount. There is not a whole lot of data right now, but the ideal is a
12% overhead for 30,000 IPCs with a current measurement around 30-40%.
Expanding and Extending the Security Features of Java
Nimisha V. Mehta, The Open Group, and Karen R. Sollins, MIT
Laboratory for Computer Science
Mehta offered a quick change of pace by talking about the ubiquitous
Java and how to expand its current sandbox. This included:
- simpler policy configuration
- a more easily extensible access control structure
- extension of security checks to include all levels of Java programs
Several different methods were used in trying to achieve these goals.
The first covered was the use of logging facilities to determine what
an applet might be up to. This comes in handy for tracking data
transfers when trying to keep data confidential. If an applet has
access to confidential information, you might not want it to access the
network. However, another applet that does have access to the network
may be running on the system. You do not want the second applet to get
data from the first and then connect to the network, so you log all
transactions of each applet.
Another method is the concept of file ownership; this divides the file
system into applet domains. Each file written by an applet is then
owned by that applet. If so configured, a file written by one applet
may not be accessible to another applet.
To implement their design, they overrode the Security Manager that
comes with the current JDK. This Security Manager is queried whenever
an applet needs access to a system resource. The new security manager
implements the proposed logging system, so whenever an applet goes to
read or write a file, an entry is created in a log. The applet then
becomes the exclusive owner of that file; a file cannot be owned by
multiple applets. The entry is removed only when the file no longer
exists.
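A sketch of the ownership-and-logging logic (in Python for brevity; the real system overrides the JDK SecurityManager, and the method names here are invented):

```python
# The first applet to write a file becomes its exclusive owner, every
# access is logged, and the policy can deny reads by other applets.

class AppletFileMonitor:
    def __init__(self, share_files=False):
        self.owner = {}            # path -> applet id
        self.log = []              # (applet, action, path)
        self.share_files = share_files

    def check_write(self, applet, path):
        self.owner.setdefault(path, applet)   # first writer owns the file
        self.log.append((applet, "write", path))

    def check_read(self, applet, path):
        self.log.append((applet, "read", path))
        owner = self.owner.get(path, applet)
        if owner != applet and not self.share_files:
            raise PermissionError(f"{path} is owned by {owner}")

mon = AppletFileMonitor()
mon.check_write("applet-A", "/tmp/data")
try:
    mon.check_read("applet-B", "/tmp/data")
except PermissionError as e:
    print("denied:", e)
```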
Mehta then presented a prototype of
the implemented Security Manager and went over a new variable called
Applet.Category. This label can be set to "suspicious" when an applet
accesses over a certain number of protected files. Applets with the
"suspicious" label are then subject to different, stricter policies.
Another variable, PublicDirs, when used in conjunction with
Applet.Category, allows for control over reading files. Another
Applet.Category label, Contaminated, is set when a file is opened for
reading; Contaminated applets are not allowed to connect to the
network.
Security issues that arose in implementation involved:
- Errors. When an error occurs, should it affect all applets or only the
problematic applet? It should affect only the one applet; if a problem
affects all applets, it's probably a system problem.
- CheckX methods. These are provided to check access to a certain
resource; some Web browsers give access to the Security Manager and
allow applets to query what their permissions are. The current
implementation logs these queries as accesses, which might block future
accesses if a policy limits an applet to a certain number of accesses.
- Identifying applets. They are currently identified by their DNS
hostname. Because servers sometimes use multiple machines to offload
work, the same applet may come from several different machines and thus
not be able to access its files at a later point.
Finally, Mehta went over performance. Their results showed that their
modified Security Manager takes 1.67 times as long as the regular
AppletViewer's, but that performance can be improved with more work on
providing native support and by using a Just-In-Time Compiler (JITC).
Questions followed. Most seemed to be about further definition of
terms. What is meant by an applet that is no longer allowed access to
the network? It can no longer make any more connections. Does this not
debilitate the applet if it's got a socket connection? No.
Session: World Wide Web Security
Summaries by Jim Simpson
Chaired by Diane Coe of Concept5 Technologies, this session provided
insight on many of the current security problems that plague the Web.
Annette Krannig's paper about PLASMA discussed a security
implementation that can specifically address different types of content
with different security methods, as opposed to something like SSL,
which does complete encryption of all content on a lower level.
JavaScript and some of its flaws appeared in the second paper, where a
few people from Bell Labs proposed some methods to circumvent these
problems. Finally, three people from Stanford University presented a
finite-state analysis of SSL 3.0 using a program called Murphi.
Towards Web Security Using PLASMA
Annette Krannig, Fraunhofer-Institute for Computer Graphics
IGD
Krannig's talk addressed the issue of a lack of security that takes
into account the multimedia or structural elements of Web documents.
This is where PLASMA comes into play. PLASMA is a PLAtform for Secure
Multimedia Applications. Because multimedia and hypertext systems are
an upcoming platform in the transfer of information, their security is
a top priority. PLASMA was designed to be functional in situations
other than the WWW, however. To demonstrate its functionality,
the Web was chosen because it is indeed an information transfer system
that encapsulates data.
PLASMA is useful in that it can specify different cryptographical
algorithms, depending on the data. If the content is something more
complex than simple text, perhaps a faster crypto algorithm might be
used, even though it may be less secure. The selection of these
services is determined by the end user, because that is the only person
who knows how important a given content may be. PLASMA offers the
concept of high-level security, which previously has not really been
explored.
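A sketch of content-dependent cipher selection; the policy table and cipher names are invented for illustration, and encryption itself is stubbed out:

```python
# A user-chosen policy maps each component type to a cryptographic
# service, trading strength for speed on bulky multimedia data.

POLICY = {
    "text":  "strong-slow-cipher",   # small and sensitive
    "image": "fast-weaker-cipher",   # bulky multimedia
    "video": "fast-weaker-cipher",
}

def protect(components):
    script = []                  # records how each component was treated
    for kind, data in components:
        cipher = POLICY.get(kind, "strong-slow-cipher")
        script.append((kind, cipher))
        # encrypt(data, cipher) would run here
    return script

print(protect([("text", b"contract"), ("video", b"frames...")]))
# [('text', 'strong-slow-cipher'), ('video', 'fast-weaker-cipher')]
```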
The architecture behind PLASMA consists of several different modules
that work in conjunction with each other:
- the filter, the most important module (it breaks the data down into
its components and notes this in a Script; the Script is later used
when encrypting these components)
- three security services: nonrepudiation, confidentiality, and
reliability
When users go to access data, they are queried for a PIN. If it is
correct, they are granted access to the data storage area. After this,
the users and the participating parties must mutually authenticate with
each other to gain certainty about identities; this is done with the
X.509 authentication protocol. During authentication, security policies
are also transmitted in order for a Coordinated Security Policy to be
agreed upon so all participants can securely transfer their data.
PLASMA on the Web uses a proxy architecture. These PLASMA proxies
perform cryptographic operations on the data; proxies must be used to
ensure browser-independent transfer of data. These proxies look for
different tokens denoting PLASMA services:
- Authentication allows PLASMA clients to authenticate with each other.
- The container denotes that the packet contains PLASMA data and should
be decrypted.
- The Final token is used when you wish to close the session.
When a user requests data, PLASMA gets them, encrypts them based on the
prespecified policy, and then sends them out as data over HTTP. The
proxy on the other end receives the data, decrypts them, and then
displays them. When the session is done, a Final token is sent to the
server to end it.
Security of Web Browser Scripting Languages: Vulnerabilities,
Attacks, and Remedies
Vinod Anupam and Alain Mayer, Bell Laboratories, Lucent
Technologies
Mayer quickly went over what his talk was going to be like, explaining
that he would point out some problems with the current implementations
of scripting languages on the Web. There are many different kinds of
attacks to be mounted using scripts; the one covered in this talk was
the Bell Labs attack.
Several clients out there have security issues: Netscape 2.x through
4.x and Microsoft's IE 3.x through 4.x. In addition, Microsoft uses
VBScript in conjunction with ActiveX and JScript. These scripting
languages are integrated with and embedded in HTML. Their problems stem
from not having a decent security framework to start with. In analyzing
the security implementations of the various scripting languages, the
authors came across a serious flaw in JavaScript that would allow
browser windows to be tricked into trusting scripts from rogue sites.
To work around these problems, they introduced the idea of a safe
interpreter called SecureJS that assures data security and user privacy.
A safe interpreter must implement:
- Access control. This is implemented through a partitioning of
namespaces, to which objects existing in a context are assigned. After
partitioning namespaces, you have to figure out how to specify the
partitions for read objects and write objects. Finally, there has to be
a mechanism by which access to external interfaces can be specified, so
that the script can work with the network or files if it needs to.
- Independence of contexts. Resources for different contexts must be
kept disjoint.
- Management of trust. This controls the trust among different contexts,
in case they need to work with each other.
The safe interpreter would prevent a Bell Labs attack; the Bell Labs
attack required that the attacking script persist across multiple
document fetches, access data from other loaded documents, and be able
to transmit captured data to a desired location. With SecureJS, though
the rogue site might originally set up two contexts that can
communicate with each other, the trust relationship would be removed if
one of the contexts closed down, and the contexts would be prevented
from communicating with each other.
Using an object-instance hierarchy in an access control system yields a
framework within which security policies can be implemented. However,
had thought been given to security in the original design of these
scripting languages, the ability to prevent flaws would have been
greatly increased.
Finite-State Analysis of SSL 3.0
John C. Mitchell, Vitaly Shmatikov, and Ulrich Stern, Stanford
University
Shmatikov started his talk by going over a few things about
finite-state analysis in general and about the tool they used, Murphi.
Murphi is normally used in hardware verification and analysis.
Murphi takes a specification and checks it through explicit state
enumeration. If it can go through all the states, the model satisfies
the specification. What can we learn from this finite-state analysis?
First, this is fairly high-level analysis, so the attacks include
pretty much everything but cryptography, which is a low-level property
of the protocol. They assume perfect cryptography, so the only way an
attacker is going to be able to decrypt something is to have the key.
Murphi is useful for finding authentication bugs and attacks involving
version rollback.
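Explicit state enumeration itself is simple to sketch: breadth-first search over reachable states, checking an invariant in each. The toy model below is purely illustrative, not SSL:

```python
from collections import deque

def transitions(state):
    # Each rule yields a successor state.
    yield (state + 1) % 5
    yield (state + 2) % 5

def check(initial, invariant):
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return f"invariant violated in state {state}"
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return "all reachable states satisfy the invariant"

print(check(0, invariant=lambda s: s != 3))   # finds the bad state
```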
The first thing they looked at was the SSL handshake. It establishes
secret keys between a client and a server. Thus they mutually
authenticate to each other.
In order to do that, they must do the following:
- C (the client) must include its identity in its Hello message to the
server.
- S (the server) must send its public key in a certificate signed by a
public authority.
- C also must have the CA sign its verification key.
- C and S must re-exchange information after the crypto handshake is
complete, to make sure there was no attack on the plaintext Hello
messages, and verify all the following messages.
- Nonces must be added to protect against replay attacks.
One of the features of the SSL 3.0 server and client is that they can
talk to SSL 2.0 clients and servers. They took the simplest version of
the protocol and gave an
attack found by Murphi. They then added the part of the protocol that
prevents the attack and ran Murphi again. They continued this until,
optimally, no attacks were found. After going through these
requirements, Murphi found another attack involving the finish of the
handshake, easily prevented by treating the protocol as incomplete
until the final verification messages are sent.
In the end, they learned that finite-state analysis can be applied to
complex protocols like SSL and actually find new problems with the
protocol in given instances.
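To make the method concrete, here is a minimal Python sketch of
explicit state enumeration in the spirit of Murphi: a breadth-first
search over all reachable states of a toy two-message handshake,
checking an invariant in each state. The protocol and invariant are
hypothetical stand-ins; the authors' actual Murphi model of SSL is far
richer and includes intruder rules for intercepting and replaying
messages.

    from collections import deque

    # State: (client_done, server_done, net), where net is the set of
    # messages visible on the wire. Toy handshake, not the SSL model.
    def successors(state):
        client_done, server_done, net = state
        out = []
        if not client_done:                            # client sends hello
            out.append((True, server_done, net | {"hello"}))
        if "hello" in net and not server_done:         # server finishes
            out.append((client_done, True, net | {"finished"}))
        return out

    def invariant(state):
        client_done, server_done, _ = state
        return not (server_done and not client_done)   # no premature finish

    start = (False, False, frozenset())
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            print("invariant violated in", state)
            break
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    else:
        print(f"explored {len(seen)} states; invariant holds")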
There were a few questions. One person asked why they used Murphi
instead of other modeling tools; the answer came down to the authors'
experience with Murphi in other work and unfamiliarity with how other
tools would model SSL.
Session: Cryptography
Summaries by Kevin Fu
Certificate Revocation and Certificate Update
Moni Naor and Kobbi Nissim, Weizmann Institute of Science
Kobbi Nissim presented a paper on efficient certificate revocation,
analyzing existing certificate revocation schemes and proposing a new
scheme with better incremental efficiency. The Naor-Nissim (NN)
certificate revocation scheme uses Certificate Revocation Lists in the
form of authenticated search structures. Nissim won the best student
paper award.
Certification involves three parties: a trusted certification authority
(CA), an untrusted directory, and untrusted users. A CA issues and
revokes certificates offline and periodically updates a directory.
Users query a directory for "proofs" of
validity and receive personal certificates from a CA. This paper
addresses the problem of revoking users' certificates while keeping
communication costs and message sizes to a minimum.
Nissim summarized three existing certificate revocation schemes:
Certificate Revocation Lists (CRLs), Micali's Certificate Revocation
System (CRS), and Kocher's Certificate Revocation Trees (CRTs). The CRL
offers the simplest approach. A CA periodically sends a digitally
signed message to a directory. This message lists all revoked
certificates. An obvious drawback is that the message can grow
overwhelmingly large. In CRS, a CA periodically signs a message
denoting the validity of each certificate (revoked or not revoked). CRS
has excellent query communication costs but suffers from an increase in
CA-to-directory communication. CRTs use binary hash trees to
authenticate statements about certificate validity. CRT has the
advantage that the entire CRT is not necessary to verify a given
certificate. Moreover, users can easily prove certificate validity to
other users. However, updates cause the recomputation of the entire
CRT.
In the NN certificate revocation scheme, authenticated directories
allow efficient directory queries and certificate "proof" updates. NN
actually consists of a family of solutions, but Nissim demonstrated the
2-3 tree version. Leaves represent revoked certificates' serial numbers
(in ascending order), and values of internal nodes result from
collision-intractable hashes of their children. The paper describes
other data structure variations.
To vouch for the validity of the authenticated data structure, a CA
signs a message containing the root node and tree height. Checking for
a revoked certificate involves recomputing the path (of hashes) from
the root node to the leaf representing the revoked certificate. This
computation requires knowing all nodes on the path from the root to the
leaf, along with all the children of the nodes on this path. Then to
prove the validity of a certificate, a user must demonstrate two paths
from the root node to two neighboring leaves such that the serial
number of the unrevoked certificate is sandwiched between the values of
the neighboring leaves. Because a proof of validity is small and hashes
can be easily verified by any user, the new scheme allows users to
efficiently prove validity to each other, reducing the
user-to-directory communication costs.
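The flavor of such proofs is easy to capture in code. The sketch below
substitutes a plain binary hash tree for the paper's 2-3 trees (a
simplification, not the NN construction): the CA signs the root, and a
proof that a serial number is unrevoked consists of authenticated paths
to the two adjacent revoked serials that sandwich it.

    # Toy authenticated structure; the paper uses 2-3 trees, this
    # sketch uses a binary hash tree over sorted revoked serial numbers.
    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def build_tree(serials):
        """All levels bottom-up; a parent is H(left || right)."""
        levels = [[h(str(s).encode()) for s in serials]]
        while len(levels[-1]) > 1:
            cur = levels[-1]
            if len(cur) % 2:            # pair an odd last node with itself
                cur = cur + [cur[-1]]
            levels.append([h(cur[i] + cur[i + 1])
                           for i in range(0, len(cur), 2)])
        return levels

    def auth_path(levels, i):
        """Sibling hashes needed to recompute the root from leaf i."""
        path = []
        for level in levels[:-1]:
            sib = i ^ 1
            path.append(level[sib] if sib < len(level) else level[i])
            i //= 2
        return path

    def verify(leaf, i, path, root):
        for sib in path:
            leaf = h(leaf + sib) if i % 2 == 0 else h(sib + leaf)
            i //= 2
        return leaf == root

    revoked = [17, 102, 344, 901, 4242, 9001]   # sorted serial numbers
    levels = build_tree(revoked)
    root = levels[-1][0]                        # the CA signs this value

    # Prove serial 500 is unrevoked: adjacent revoked leaves 344 and
    # 901 sandwich it, and both hash up to the signed root.
    i = revoked.index(344)
    valid = (revoked[i] < 500 < revoked[i + 1]
             and verify(h(b"344"), i, auth_path(levels, i), root)
             and verify(h(b"901"), i + 1, auth_path(levels, i + 1), root))
    print("certificate 500 valid:", valid)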
Nissim exhibited a table that denotes the presence or absence of
desirable qualities in CRLs, CRS, CRTs, and the NN certificate.
Unfortunately, the paper does not include the same table, but you can
deduce the information by reading the itemized lists within the paper.
Qualities include low CA-to-directory communication, low
directory-to-user communication, high directory update rate, whether a
user may hold a proof of validity, scalability, and the existence of an
update mechanism. The NN scheme works well except that the amount of
directory-to-user communication (and thus the overall amount of
communication) increases.
The new scheme can also update certificates incrementally. In
traditional certification schemes, CAs may issue short-lifetime
certificates, then reissue new certificates for each user. But
the CA becomes a bottleneck. In the NN scheme, Nissim suggests that the
CA broadcast an update message to all users. Taking advantage of the
tree structure, users can update their certificate "proofs"
appropriately. An audience member asked what would happen if a
directory or user were to miss an update. Nissim replied that proxies
can collect update messages. Furthermore, one can authenticate proxies
by checking the time of an update, because the reissuing period is
known.
The appropriate revocation method depends on requirements of
response-time (propagation of revocation) and efficiency (the amount of
communication). I recognize two main reasons for certificate
revocation. Casual revocations result from noncritical reasons such as
job changes or college graduation. If risk is low and most revocations
are casual, decreasing the amount of communication at the cost of more
propagation delay may be acceptable. However, revocation due to a
compromise must take effect immediately. In a high-risk system,
immediate revocations may justify the cost of extra communication.
Nissim's clever overlays helped introduce certificate revocation, but
too many complicated ideas uprooted the end of his talk. Nevertheless,
the NN certificate revocation scheme is elegant and worthy of the best
student paper award.
Attack-Resistant Trust Metrics for Public Key Certification
Raph Levien and Alexander Aiken, University of California,
Berkeley
Raph Levien, a graduate student of Alexander Aiken, presented a paper
on the role of trust metrics for attack-resistant public key
certification. The authors characterize existing trust metrics,
establish a theoretical best case in their model, and offer a metric
that meets the theoretical best case.
Trust metrics address the problem of binding public keys to opaque
names. For instance, an email address may bind to a trusted public key.
Trust metrics aim to
accept most good bindings but reject forgeries. By requiring multiple
certifications, bindings become more attack resistant. In the general
case, a certification authority asserts that a key k belongs to a name
n. This model naturally leads to a certification graph where nodes are
keys or name/key associations and edges represent certification. A
trust
metric evaluates such a graph and outputs either accept (trusted) or
reject (untrusted).
Levien admitted that the analysis makes two bold assumptions: the
metric will accept most of the good name/key bindings, and the name
space is opaque (names reveal no information). Without these
assumptions, the analysis becomes intractable. He suggests that the
metric is good for email but maybe not for high-security applications.
The proposed model considers two types of attacks. In a node attack, an
adversary steals keying material from a CA. The adversary could reuse
the keying material over and over. In an edge attack, an adversary can
convince a CA to falsely certify a node. The adversary does not
actually have the keying material. Why distinguish between node and
edge attacks? It is harder to protect against edge attacks.
Within the paper you can find a table of successful attack thresholds.
The table summarizes the necessary number of compromised nodes and
edges to invalidate a trust metric. Resistance to attack ranged from
single points of failure (just a single node) to quadratic numbers of
nodes.
The maxflow-edge metric achieves the theoretical best case. Each key
is certified by exactly d other keys (for concreteness, d could be 10).
This method is consistent with both the Web-of-trust and the CA model.
For a successful node attack, an adversary must compromise d nodes.
Approximately d^2 nodes or edges are necessary for a
successful edge attack. Other attacks include chosen node (breaking a
certain key) and random node (breaking any key). Both need d
nodes for a successful node attack or about d^2 nodes for a
successful edge attack.
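As a toy illustration of the flow-based idea (the graph, threshold, and
acceptance rule here are hypothetical simplifications of the paper's
maxflow-edge metric), one can accept a binding only if the maximum flow
from a trusted root to the target key is at least d:

    # Toy flow-based metric: accept a name/key binding only if the max
    # flow from a trusted root to the target is at least d. Each edge
    # is one certification with capacity 1; the graph is hypothetical.
    from collections import defaultdict, deque

    def max_flow(cap, src, dst):
        """Edmonds-Karp: augment along shortest residual paths."""
        flow = 0
        while True:
            parent = {src: None}
            q = deque([src])
            while q and dst not in parent:
                u = q.popleft()
                for v, c in cap[u].items():
                    if c > 0 and v not in parent:
                        parent[v] = u
                        q.append(v)
            if dst not in parent:
                return flow
            path, v = [], dst
            while parent[v] is not None:
                path.append((parent[v], v))
                v = parent[v]
            bottleneck = min(cap[u][v] for u, v in path)
            for u, v in path:              # update residual capacities
                cap[u][v] -= bottleneck
                cap[v][u] += bottleneck
            flow += bottleneck

    cap = defaultdict(lambda: defaultdict(int))
    for u, v in [("root", "a"), ("root", "b"), ("root", "c"),
                 ("a", "target"), ("b", "target"), ("c", "target")]:
        cap[u][v] += 1

    d = 3   # attacker must now compromise at least d certifications
    print("binding accepted:", max_flow(cap, "root", "target") >= d)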
Levien also mentioned problems with metric evaluation: trust metrics
are limited, the cost of an attack grows only slowly with the cost of
certification, and the approach will not scale up to the Internet.
Levien calls the graph model somewhat simplistic; for instance,
revocation is not part of the model. The paper could use more detailed
captions so the casual reader can see the main ideas.
Breaking what must be a USENIX record, Levien delivered his talk with
several minutes to spare. This paper offers good advice for designers
of authentication metrics. You can find the paper and related
information at <https://www.cs.berkeley.edu/~raph/>.
Software Generation of Practically Strong Random Numbers
Peter Gutmann, University of Auckland
Peter Gutmann reviewed poor pseudorandom number generator (PRNG)
implementations and presented a practically (not pseudo) strong random
number generator. Gutmann's accomplishments include the Secure File
System and CryptLib. This self-proclaimed eternal graduate student
warned the audience of his tendency to speak fast. To thwart this
habit, an associate routinely shot him with a Texan rubber-band gun
(three times, to be exact).
Gutmann first highlighted significant holes found in PRNG
implementations, including Netscape, Kerberos V4, MIT_MAGIC_COOKIE, and
SESAME. Because these implementations used predictable and/or public
seeding, one could easily determine the seed. Once an adversary has the
seed, most PRNG implementations become deterministic. In fact, Gutmann
looked at the SSH source code just before giving his talk. Apparently
the SSH generator does not use a one-way function (OWF); rather, it
uses a linear feedback shift register (LFSR) and exposes the internal
state of the pool.
In designing a practically strong random number generator (PSRNG),
Gutmann accumulated a list of general requirements. A PSRNG should
protect against input analysis, manipulation of input, output analysis,
and disclosure of internal state. Moreover, a PSRNG should use good
coding practices. For example, Gutmann complained of the difficulty of
understanding the spaghetti source code in PGP 2.x.
A PSRNG should tap several sources for randomness. Random sources
include purely physical devices (lava lamps, radioactive decay);
physical and postprocessing (SG100); multisource; singlesource (PGP 2.x
or /dev/random); secret nonce and PRNG (Applied Cryptography); fixed
secret value and PRNG; or a known value plus a PRNG (Netscape, Kerberos
V4, SESAME).
To gauge the effectiveness of entropy polling, Gutmann tried to
compress successive samples of random data. He analyzed entropy polling
on several platforms. DOS and OS/2 do not offer much entropy. In
Windows 95 and Windows NT, you have some relatively nondeterministic
system statistics to exploit. Surprisingly, Gutmann found that Win16/32
produces lots of entropy upon reboot (about 2.5 times as much as a
long-running machine). This results from the startup of 16-bit Windows
drivers in somewhat nondeterministic order. However, network statistics
in Windows NT contain almost no entropy, and reboots cause little
change in entropy. In evaluating UNIX randomness polling, Gutmann had
to "handwave." BSD has more sources of randomness, but Gutmann did not
test rebooted machines because 150 people use his only BSD machine.
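Gutmann's compression test is simple to reproduce in spirit. This
sketch polls a source repeatedly and compresses the concatenated
samples; data that compresses well carries little entropy. The source
shown is an illustrative stand-in, not necessarily one the paper
polled:

    # Poll a source, then see how well the samples compress.
    import subprocess, zlib

    samples = b""
    for _ in range(10):
        samples += subprocess.run(["netstat", "-s"],
                                  capture_output=True).stdout

    ratio = len(zlib.compress(samples, 9)) / max(len(samples), 1)
    print(f"compression ratio {ratio:.2f} "
          "(near 1.0 suggests entropy; near 0.0 suggests very little)")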
Protecting the pool state is as important as protecting cryptographic
keys. In UNIX, root can use the mlock() call, and Macs can use the
HoldMemory() call. In Windows NT, locking does not work as documented,
and Windows 95 locking does not work at all (the function simply
returns "true"). Because Windows has "methods" to read the pool, one
can obfuscate things by spreading the pool all over the place. Gutmann
found this method "pretty good."
Gutmann also presented a PSRNG implementation that is better described
by diagrams in the paper. In his PSRNG, he used hard-coded paths to
utilities such as netstat to acquire randomness. He recommends running
the randomization process with the UID nobody (so as not to expose
root-privileged information) and timing the harvest of random numbers
to kill slow entropy sources. CryptLib contains Gutmann's practical
random number generator.
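A minimal sketch of the pool principle the talk argues for, assuming a
hash-based design: inputs are mixed into a fixed-size pool with a
one-way function, and output is always a one-way image of the pool, so
the internal state is never exposed. This illustrates the idea only; it
is not CryptLib's actual generator:

    import hashlib, os, subprocess, time

    class Pool:
        def __init__(self):
            self.state = bytes(32)

        def mix(self, data: bytes):
            # new state = H(old state || input): inputs cannot be unmixed
            self.state = hashlib.sha256(self.state + data).digest()

        def extract(self, n: int) -> bytes:
            out = b""
            while len(out) < n:
                # only a one-way image of the state is ever handed out
                out += hashlib.sha256(self.state + b"output").digest()
                self.mix(b"stir-between-outputs")
            return out[:n]

    pool = Pool()
    pool.mix(str(time.time_ns()).encode())       # timing
    pool.mix(str(os.getpid()).encode())          # process statistics
    pool.mix(subprocess.run(["netstat", "-s"],   # system statistics,
                            capture_output=True).stdout)
    print(pool.extract(16).hex())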
A participant questioned Gutmann about blocking issues in the Linux
implementation of /dev/random. If the /dev/random is
"drained" of its randomness, it may block to obtain more entropy.
Gutmann explained that one cannot easily speed up /dev/random
because it tries to estimate how long to stir the pool for the amount
of randomness you request. If you empty the pool, /dev/random
will block until the pool is refreshed. Trading some security for
performance, programs may find /dev/urandom sufficient for some
applications.
Throughout the talk Gutmann gave random bits of advice. For instance,
generating a public key and then the corresponding private key from the
same pool can expose the pool state used for the private key. You have
to keep them separated, or intruders will come out to play.
Gutmann hesitated to call his scheme pseudorandom in the pure sense.
Pseudorandomly generated numbers usually refer to bit strings
indistinguishable from truly random bit strings, given a polynomial
amount of computation time. However, most such PRNGs suffer from
performance drawbacks. Instead, Gutmann calls his scheme practically
random: a balance between trial-by-fire security and performance.
I note that even the experts can overlook PRNG problems. Jeff Schiller,
a well-established security expert, co-designed Kerberos and
co-authored a 1994 Internet RFC titled "Randomness Recommendations for
Security" (RFC 1750); yet a flaw was found in the Kerberos V4 random
number generation in 1996. PSRNG designers should pay careful
attention.
This entertaining talk addressed issues of practical security and good
implementation practices. For more information, see
<https://www.cs.auckland.ac.nz/~pgut001/>.
INVITED TALKS TRACK
The Security Product Market: Trends and Influences
Marcus J. Ranum, Network Flight Recorder Inc.
Summary by Kevin Fu
Clad in western-style boots and wearing just-out-of-bed hair, Marcus
Ranum spoke about money and its effect on the security industry. Known
for his work on firewalls, Ranum now serves as president and CEO of
Network Flight Recorder Inc. and has been chief scientist at V-ONE
Corporation. Ranum gave a disclaimer that his opinions could be right
or wrong: use them at your own risk.
One morning people just "woke up" as security experts. But apparently
they never went back to sleep; Ranum calculates that the US security
market has a 70% compound annual growth rate! Such enormous changes come
from natural growth, IPOs, and the injection of VC
money (or energy, as he describes it). Several security companies
announced IPOs between 1995 and 1997. IPOs pressure companies into
short-term development. Investors expect something each quarter.
Investors salivate over Internet-related IPOs coincidentally scheduled
with interviews, articles, and industry initiatives. However, IPOs and
VC often produce artificial growth.
As an aside, Ranum offered an exaggerated get-rich-quick scheme: go to
security companies and read the guest register. Maybe you will see the
names of a lawyer, some investment bankers, and a prominent CEO all on
the same day. Then bet with your kids' college tuition. I could almost
hear the audience thinking aloud.
With new capital, a company can either compete like mad or perish. It
can outsell a competitor, claim to own most of the market share, buy
the pieces it does not own, or watch its stock value sink. Investors
don't care about technical details of security. They care about the
right press. Of course, this produces lots of short-term fluff. Most
public companies just want to get something out the door because they
cannot develop long-term strategies while pleasing investors each
quarter.
Ranum then gave his opinion on future industry trends. Instead of
licensing technology, companies will acquire or merge with others. For
instance, Security Dynamics (authentication) purchased RSADSI
(encryption). Likely merger paths include the marriage of
authentication and firewall technology. However, Ranum does not expect
monster IPOs from encryption companies. And everyone knows the
importance of controlling a public key infrastructure (PKI); that's the
problem! Ranum vividly described wannabe CAs as "greedy pigs" who
"knocked over (the PKI) while charging the trough."
The next part of Ranum's talk concerned new growth areas. Intrusion
detection will become popular. "Network grep" technology has a big
market because it's easy to explain. You passively look for intrusion
patterns. (See the summary on "Bro.") However, virtual private networks
have their lines drawn; it's too late for startups.
By 2000, analysts expect a $14 billion/year market for complete,
managed network systems. Customers want one box (count 'em, one) for
security, firewalls, and pizza. The one-trick pony company will give
way to one-stop shopping. As a side effect, bogus products and bogus
consultants will proliferate, tarring the real security experts with
bad press. Name recognition will become the metric of security quality.
Customers will choose one of the five big-name companies. Expect
serious commoditization.
What is the effect on security? The industry gobbled up everyone who
understands security. Moreover, security experts can charge top dollar,
attracting wanna-bes and diluting the market. However, Ranum
offers some assurance that the joyride will soon slow down;
astronomical fees will not last forever.
Throughout his talk, Ranum advised people to learn more about Windows
NT. He explained that "Microsoft takes things the customer wants, then
rolls it into an operating system." People want to use NT because they
wrongly believe it is easier than UNIX. The UNIX community can
partially blame itself for this perception. We bicker over flavors of
window managers and GUIs; we "scare people" into using NT.
If you design a plugin security module for NT, you will get money.
Microsoft will either buy you or steal you, but either way you will get
a red Ferrari. Otherwise Microsoft will drive you into the ground. "You
have to be a total schmuck to fail," said Ranum. In the end, everyone
will use NT. "Cry, but don't laugh," he warns.
An audience member asked for justification that token-based
authentication systems and smart cards will soon fade away. Ranum
replied, "Why carry a token when you can carry a Pilot?" He further
explained that tokens alone do not complete transactions and that smart
cards need backing from computer giants for widespread acceptance.
I note that large growth reduces the accuracy of market predictions. If
the security market has a compound growth of 70% a year, no one can
accurately predict the future. For more information about Ranum's work,
see <https://www.clark.net/pub/mjr/>.
Computer Security and Legal Liability
Moderator: Steve Bellovin, AT&T Labs Research
Panelists: Tim Lagankamp, Interliant Corporation; John Courtney,
Andrews & Kurth, LLP; and Peter D. Kennedy, George Donaldson &
Ford, LLP
Summary by Gus Shaffer
Tim Lagankamp opened the session with a presentation on the liability
of a system administrator in a break-in situation. The primary issue in
such a case is neglect of duties, both obvious and not so obvious: what
exactly is your obligation to your customer's security, what could or
should you have done to ensure that security, and was your lack of
action clearly a direct cause of the opening?
John Courtney followed Lagankamp with an overview of the liability of a
vendor whose product allows a break-in. The
vendor's legal status in this situation
is similar to that of the sysadmin. Did they knowingly leave security
holes, and exactly how secure must they make their software in a
buyer-beware market?
Peter Kennedy finished off the initial presentations with a look at
hacker liability and the legal pursuit of a hacker from the victim's
point of view. Issues included financial obligations of hackers for
damage due to their intrusion and pursuit of hackers through
international borders and over a chain of accounts.
The question-and-answer session also touched on topics such as software
licensing and the legal aspects of e-commerce.
Factoring: Facts and Fables
Arjen K. Lenstra, Citibank, N.A.
Summary by Kevin Fu
This talk offered a historical perspective of factoring and security
assumptions. The world's foremost expert on factorization, Arjen
Lenstra became widely known when his team cracked the RSA-129 challenge
at Bellcore. Lenstra currently works for Citibank and has fun as
president of Digicrime. He emphasized that this talk has no relation
whatsoever to his employer. He also noted that this is his first time
at a USENIX Security Symposium and that "you can't expect much else
from a banker."
Factoring is a simply stated problem: given an odd nonprime n, find a
nontrivial factor of n. Testing primality, by contrast, is easy: in
theory it runs in random polynomial time, and in practice it runs in
deterministic polynomial time. However, people believe that
factoring large numbers (with no small factors) is difficult. This is a
religion, and there is no evidence to show it is true. Everyone seems
to agree, but, says Lenstra, "I have no idea why."
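The asymmetry Lenstra describes is easy to demonstrate. In the Python
sketch below (a toy, not the QS or NFS used for record factorizations),
Miller-Rabin quickly shows a 95-bit number is composite, and Pollard's
rho finds a factor only because one prime factor happens to be small:

    import math, random

    def is_probable_prime(n, rounds=20):
        """Miller-Rabin: random polynomial-time primality test."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        d, s = n - 1, 0
        while d % 2 == 0:
            d, s = d // 2, s + 1
        for _ in range(rounds):
            x = pow(random.randrange(2, n - 1), d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False
        return True

    def pollard_rho(n):
        """Find a nontrivial factor of an odd composite n."""
        while True:
            x = y = random.randrange(2, n)
            c, d = random.randrange(1, n), 1
            while d == 1:
                x = (x * x + c) % n
                y = (y * y + c) % n
                y = (y * y + c) % n
                d = math.gcd(abs(x - y), n)
            if d != n:
                return d

    n = 9_223_372_036_854_775_783 * 4_294_967_311    # two primes
    print("prime?", is_probable_prime(n))            # False, and fast
    print("factor:", pollard_rho(n))                 # fast only because
                                                     # one factor is small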
According to the recently deceased James Ellis
(<https://www.cesg.gov.uk/ellisint.htm>), the British government
had trouble maintaining lots
of keys for the armed forces. Therefore, they sought a system that
required no prior key. This account of Non-Secret Encryption (NSE)
exists in a 1970 CESG report by J. H. Ellis: "The Possibility of Secure
Non-Secret Digital Encryption." Then in November of 1973, C. C. Cocks
wrote "A Note on Non-Secret Encryption" in a CESG report. The algorithm
proposed by Cocks (CCC) looks very similar to RSA. Interestingly
enough, CCC did not consider signature use. Of course, Diffie-Hellman
appeared in 1976, and RSA appeared in 1977. Another scheme from CESG is
based on the DLP in a ring rather than in the multiplicative group of a
finite field; this report came months before Diffie-Hellman. Lenstra
asked: "Were PKCS, RSA, and DH first discovered in the UK?" For
dramatic effect, he left the rest of the slide empty.
Security of "accepted" PKCS is based on the supposed difficulty of
factoring or taking discrete logs. Do we have nontrivial lower bounds
for difficulty of factoring or DLP? No, except for some special cases
of no practical use. There do exist nonpolynomial upper bounds from
algorithms such as the Quadratic Sieve (QS) in 1982 and the Number
Field Sieve (NFS) in 1989. Conclusion: perceived security of PKCS is
based on "our credulity and mathematical incompetence."
Lenstra explained that plenty of people who have never factored are
talking about factoring. For instance, Richard K. Guy doubts "anyone
can factor 80 digits without special form in the present century" in
his 1976 paper, "How to Factor a Number." Martin Gardner wrote of "A
new kind of cipher that would take millions of years to break." In
Lenstra's opinion, a laptop computer will soon be able to factor an
80-digit number in a single day.
To argue his point, Lenstra extrapolated current factoring
capabilities. In 1994, a QS factored an RSA-129 modulus. This required
5,000 MIPS years for stage 1 (sieving) and two days on a 16K MasPar for
stage 2 (matrix). Then in 1996, an NFS factored a 130-digit number in
less than 700 MIPS years for stage 1 (68 hours and 700MB). However,
stage 2 required much more computation time, even on a Cray C-90.
Extrapolating these figures, Lenstra believes factoring a 512-bit
number with a QS would require 500,000 MIPS years for sieving and four
days (and 1GB of space) on a Cray C-90 for the matrix. Substituting
NFS, sieving would take 20,000 MIPS years, and matrix computations
would take three months (and 4GB of space). Therefore, 512-bit moduli
are not long enough for current technology. But factoring 1,024-bit
moduli seems hopeless. Just to sieve, the QS would require
10^15 MIPS years, and the NFS would take 10^11 MIPS
years. Lenstra concludes that 512-bit QS factorization is feasible,
512-bit NFS factorization is hardly feasible, and 1,024-bit
factorization is hopeless.
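For the curious, a back-of-envelope version of such an extrapolation
can be made from the standard heuristic NFS running time
L(n) = exp((64/9)^(1/3) (ln n)^(1/3) (ln ln n)^(2/3)); the sketch below
uses the textbook constant, which is not necessarily the basis of
Lenstra's figures:

    import math

    def nfs_cost(bits):
        """Heuristic L-notation cost of NFS on a modulus of given size."""
        ln_n = bits * math.log(2)
        return math.exp((64 / 9) ** (1 / 3) * ln_n ** (1 / 3)
                        * math.log(ln_n) ** (2 / 3))

    ratio = nfs_cost(1024) / nfs_cost(512)
    print(f"1,024-bit vs. 512-bit NFS effort: ~10^{math.log10(ratio):.0f}x")

The computed ratio, roughly 10^7, is consistent with the jump from
20,000 to 10^11 MIPS years quoted above.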
Looking to the future, Lenstra addressed how faster processors and new
theory affect factoring. Speeding up a processor will not significantly
improve current sieving techniques. Unless memory speeds increase,
faster processors will not help. As for new theoretical insights,
Lenstra says, "Anything may happen."
Does difficulty imply security? It does if the entire production
process is closely watched by people who understand all issues
involved. But many people are either "laid off or promoted to security
jobs." Then how can you avoid "creative" products? You could use
trusted software (but does it exist?). Reading and understanding lots
of source code is impractical. Trusting employees to write your own
code is unrealistic. You could select the right vendor, but how?
Combining products from different vendors is expensive, slow, and may
not work. These problems apply to everything (operating systems,
compilers, etc.). Some experts say 160-bit elliptic curve cryptosystems
(ECC) offer security comparable to 1,024-bit RSA. Lenstra believes
160-bit ECC is merely very difficult, not impossible. (See the next
talk.)
Lenstra summarized that proving the difficulty of IFP or DLP would not
change anything, because the hardness of IFP or DLP does not imply the
hardness of breaking systems such as RSA. However, fast algorithms to
solve IFP
and DLP would topple most of the foundations of modern public key
cryptography. Lenstra concluded by asking, "Why won't business people
leave the Internet alone?!"
An audience member asked what to do if you were to find a fast
factoring algorithm. Lenstra offered two suggestions. First, the
discoverer should make the finding extremely public and publish the
work as soon as possible in order to save his or her own skin.
As an alternative, Lenstra told the audience, "Just tell me about it."
Some confusion erupted over key sizes. Do not confuse bits with digits.
For instance, RSA-129 (digits) = 429 bits.
Elliptic Curves Ready for Prime Time
Alfred Menezes, University of Waterloo
Summary by Kevin Fu
In this invited talk, Alfred Menezes gave a quick introduction to
elliptic curve cryptosystems and their advantages over other
cryptosystems. Menezes charged into the
most mathematical talk of the symposium. His work includes
co-authorship of the Handbook of Applied Cryptography and
Elliptic Curve Public Key Cryptosystems. Formerly at Auburn
University, he now works for the Advanced Cryptographic Research Center
at the University of Waterloo.
Menezes began by defining multiplicative groups in a finite field. For
instance, discrete log problem (DLP) cryptosystems usually operate in
Z_p^*, the multiplicative group modulo an integer p. This is the set of
positive integers less than p that are relatively prime to p. When p is
prime, the set is {1, 2, ..., p-1}, with cardinality p-1. For instance,
Z_13^* = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}.
Why use fancier groups than Z_p^*? The arithmetic may be more efficient
and the security of protocols better; for instance, you could use
smaller numbers yet retain the same security.
Elliptic curve cryptosystems (ECC) are based on two variations of the
DLP. An elliptic curve over Z_p comes from the equation
y^2 = x^3 + ax + b, where a and b are elements of Z_p such that
4a^3 + 27b^2 does not equal zero (mod p). The set E(Z_p) consists of
all points (x, y), with x and y in Z_p, that satisfy the equation,
together with a special "point at infinity." Given a fixed prime p,
there are many curves to choose from. This means everyone could share
the same prime, allowing construction of cheap, special-purpose
hardware. Because ECC appears to provide good security with small
secrets and little energy, companies like Motorola will want to use ECC
in small devices such as pagers.
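To make the definition concrete, this small Python sketch enumerates
E(Z_13) for an arbitrarily chosen (hypothetical) curve; real systems
use fields of roughly 160 bits or more:

    # Enumerate E(Z_13) for the toy curve y^2 = x^3 + x + 6.
    p, a, b = 13, 1, 6
    assert (4 * a**3 + 27 * b**2) % p != 0     # nonsingularity condition

    points = [(x, y) for x in range(p) for y in range(p)
              if (y * y - (x**3 + a * x + b)) % p == 0]
    points.append("O")                          # the point at infinity
    print(f"E(Z_{p}) has {len(points)} points:", points)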
Menezes took a defensive stance on ECC. Has the Integer Factorization
Problem (IFP) been intensely studied? Yes, but Gauss could not even
perform modular exponentiation, which prevented him from further
success. Moreover, the Number Field Sieve (NFS) would have been useless
to him because computers did not exist. The IFP has really been studied
only in the last 20 years. IFP is "sexier" than DLP, but in Menezes's
opinion, IFP is
not a better basis than DLP. None of these systems is provably secure;
each makes heavy-duty assumptions. ECC assumes an elliptic curve analog
of DLP. There exist good breaks on specific instances of ECC, but no
subexponential algorithm has been found for the general case.
Although the ECDLP was only proposed in 1985, DLP and ECDLP abstractly
concern the same problem, just over different arithmetic structures.
Many DLP results carry over to elliptic curves; therefore, elliptic
curves have been studied about as much as DLP. IFP appears more
amenable to software attacks; for instance, the RSA-129 effort involved
several hosts over the Internet. However, DLP-based cryptosystems admit
better hardware attacks (according to Wiener).
An audience member asked about patent issues with ECC. With RSA, patent
issues come into play. But for general ECC, no patents exist (just
certain implementations). With ECC, you can avoid patents. However,
government standards offer liability protection, and some companies
like patented technology for liability and licensing reasons.
Currently, no subexponential attack on ECC exists. However,
cryptographers disagree on whether to assume no subexponential attacks
exist simply because no one has found one. Many cryptographers find the
security comparisons between ECC and DLP hard to justify.
Because describing Menezes's diagrams in words hardly does them justice, I
suggest reading his book, Elliptic Curve Public Key
Cryptosystems, or the paper, "Elliptic Curve DSA (ECDSA): An
Enhanced DSA."
See <https://www.dms.auburn.edu/~menezal/> for more information.
Securing Electronic Commerce: Applied Computer Security or Just
Common Sense
Clifford Neuman, University of Southern California
Summary by Dug Song
Neuman's talk focused on the particular security requirements of
electronic commerce, which need to take into consideration unique
business needs (such as authorization of payment, controlled access to
internal systems by customers, etc.). These include the following:
-
Identifying the risks. This means not just relying on SSL for a secure
transport; you need to understand the dependencies and what's going on
behind the scenes. There may be attacks not only against the server,
but against the client and the commerce protocols themselves to
consider.
-
Matching security policy with mechanisms. Authentication may not be
necessary for all transactions. Compartmentalization of data, users,
and systems needs to closely match policy. Different transactions may
require different protection.
-
Careful consideration of firewall and host security issues. These
include the pros and cons, for example, of having a server behind,
outside, replicated on both sides of, or between firewalls.
-
Intrusion detection and audit. Identifying anomalous transactions may
be difficult, but you can enlist the customer's help by sending
confirmations over an alternate channel, etc.
Neuman also mentioned other concerns of electronic commerce, including
reliability, denial-of-service attacks, and insider attacks.
Real-World Security Practices
JoAnn Perry, Independent Consultant, and Shabbir Safdar, Goldman,
Sachs & Co.
Summary by Dug Song
Being a successful security professional in the business world requires
as much politicking and strategizing as technical work. Perry and
Safdar offered keen insights into the problem of making security a real
concern of your organization and of establishing the political clout
needed to do your job effectively.
The unfortunate reality is that most companies view information
security as an annoying inconvenience. As a security professional, your
challenge is to shape attitudes and opinions in your favor and to make
security a crucial concern of business.
To that end, Perry and Safdar offer the following hints:
-
"Don't be the naysayer." People will be less likely to solicit your
help if all you ever do is deny them. Be a solution provider vs. a
gatekeeper, and demonstrate your end-user focus.
-
"Talk to them in business terms." Management needs to understand risk
assessment in terms they can understand. Learn to talk the talk;
understand the business side of the house as well as the technical.
-
Motivate your peers. Do your own PR: increase the organization's
security awareness so your work isn't a constant uphill battle. Join
industry groups, and share information with other security
professionals. Security teams can test one another's security. Even
status reports, if
made understandable to everyone from a business perspective, can be an
opportunity for increasing awareness. The goal is to educate end-users
and change the usually security-hostile corporate culture in your
favor. Make allies.
In terms of technical security best practices, they recommend these:
-
Piggyback on standardization efforts. Make the most efficient use of
your time!
-
Find ways to make security cost-effective (e.g., good Web
authentication can enable low-cost customer service).
-
Perform regular security audits, which can also be played politically
to keep you evenhanded in functionality vs. security debates.
-
Prepare your PR person for an
incident.
-
Build a supportive staff (e.g., a help desk staffed with interns to
provide front-line support).
-
Stay on the cutting edge. Pre-empt the hype of new products by testing
and evaluating them before your end-users do.
WORKS IN PROGRESS
Qun Zhong: Enforcing Consistent Mobile Code Security Policy Across an
Enterprise Network
Peter Honeyman: Workstation Authentication
<https://www.citi.umich.edu>
Stefan Axelsson: A One-System Call Audit Trail and Its Effect on UNIX
Security
Randy Witlicki: Protocol Tunnels and Computer Network Security
Mike Reiter: Crowds: Anonymous Web Transactions
Crowds is a fully implemented system that uses encryption and request
relaying to make HTTP requests anonymously.
<https://www.research.att.com/projects/crowds>
Alain Mayer: LPWA
The Lucent Personalized Web Assistant is a free proxy service that adds
privacy by creating unique personal identities at Web sites without
revealing personal information, such as your email address.
<https://www.lpwa.com/>
Peter Gutmann: The Cryptlib Crypto/Security Architecture
The cryptlib library provides an easy-to-use interface that allows any
programmer to easily add strong encryption and authentication services
to software.
<https://www.cs.auckland.ac.nz/~pgut001/cryptlib.html>
Greg Rose: Opportunistic Encryption in Sendmail
Rose has released patches for sendmail, incorporating encryption using
his new stream cipher SOBER.
<ftp://mcs.une.edu.au/pub/ssmail>
Tadayoshi Kohno: A free SSH library
The SSH library is not yet available due to export restrictions.
Greg Rose: SOBER Fast Stream Cipher
SOBER is a new stream cipher, designed to be small, fast, and secure.
It is ideal for use in mobile phones as well as for a wide range of
other encryption needs.
<ftp://mcs.une.edu.au/pub/ssmail/sober.tar.gz>
Raph Levien: Scaling trusted keys to the Internet
Umesh Maheshwari: A Flexible Commerce Architecture
Maheshwari presented a theoretical protocol architecture for electronic
commerce.
Wietse Venema: Vmailer
Vmailer is a more secure and more easily configurable alternative to
sendmail.
<https://www.vmailer.org/>
Peter Honeyman: Smart Cards: Threat or Menace?
Honeyman presented CITI's progress in its research on smart card
implementation and use.
<https://www.citi.umich.edu>
Ian Goldberg: Pilot as Certificate
Goldberg presented his plans for developing the Palm Pilot as a
replacement for smart cards in public key certification.
<https://www.isaac.cs.berkeley.edu/pilot/>
Mudge: Pilot as Wardialer
Mudge presented the L0pht's progress with their Pilot development
efforts.
<http://www.l0pht.com>
BOFS
OpenBSD BOF
Theo de Raadt, OpenBSD Project
Theo de Raadt presented a history of the OpenBSD project, its current
status, and its future directions, at one of the night's most popular
BOFs.
OpenBSD formed as a spinoff of the NetBSD project, whose program was to
port a freely available 4.4BSD UNIX to as many hardware platforms as
possible. OpenBSD's major differences from its progenitor include an
emphasis on security and correctness, incorporation of strong
cryptographic security systems (such as IPSEC, Kerberos, etc.), open
access to source (via anonymous CVS), and incorporation of features
from the competition (including FreeBSD, Linux, etc.).
Over the years, de Raadt and company have performed at least two full
security audits of the OS from the ground up. This included bugfixes,
general code-cleanup, and proactive security changes that have had
beneficial results all around (e.g., randomization of PIDs, protocol
fixes, etc.). As a result, OpenBSD hasn't had a single remote root
vulnerability and has had only three reported localhost exploits in the
past year, a record he challenges any major UNIX vendor to match.
<https://www.openbsd.org/>
Firewalls BOF
While de Raadt's OpenBSD BOF filled a small room to overflowing, the
firewalls BOF almost filled a ballroom. As usual, Brent Chapman led the
discussion, but with a surprise announcement. Brent had started the
firewalls mailing list after the third USENIX Security Symposium (in
Baltimore), but is now passing the list to a new home. Brent now works
with an ADSL vendor and is no longer focused on firewalls, so the move
makes sense.
After the brief introduction, the BOF became a question-and-answer
session, with the audience supplying both questions and answers.
Questions ranged from
screening Java packets with firewalls (just look for the magic number
0xCAFEBABE) to interoperability of IPSEC encryption between different
vendors' implementations. The BOF lasted over two hours (still short of
a record 3.5 hours set a couple of years before).
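The 0xCAFEBABE trick mentioned above is literally a four-byte check:
Java class files begin with that magic number. A toy sketch follows
(real filters must reassemble streams and handle archives, which this
does not):

    JAVA_MAGIC = bytes.fromhex("CAFEBABE")

    def looks_like_java_class(payload: bytes) -> bool:
        """Flag payloads that start with the Java class-file magic."""
        return payload.startswith(JAVA_MAGIC)

    print(looks_like_java_class(bytes.fromhex("CAFEBABE00000034")))  # True
    print(looks_like_java_class(b"GET / HTTP/1.0\r\n"))              # False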