8th USENIX Security Symposium
August 23-26, 1999
Washington, D.C., USA

These reports were originally published in the Special Security Issue (November 1999) of ;login:.

Keynote Address
Session: PDAs
Session: Cages
Session: Keys
Session: Potpourri
Session: Security Practicum
Session: Access Control
Invited Talks
Works in Progress
  Our thanks to the summarizers:
Michael J. Covington
Rik Farrow
Kevin Fu
Matt Heavner
Ping Liu
Patrick McDaniel
Jim Simpson

KEYNOTE SPEECH: EXPERIENCE IS THE BEST TEACHER

Peter G. Neumann, SRI International


Summary by Kevin Fu

Peter Neumann of SRI International substituted for Taher Elgamal as the keynote speaker. Most of the talk dealt with issues such as security, survivability, reliability, and predictable behavior. Neumann has many claims to fame, including the moderation of <comp.risks> and a 1965 publication on filesystem access control in Multics. Neumann used stories, quotations, and clever puns to discuss principles of good software engineering.

Past efforts fundamental to software engineering include Multics, the T.H.E. system, domains-of-protection principles, confined execution, PSOS (a provably secure operating system by PGN), and isolated kernels. But most existing commercial systems are fundamentally inadequate with respect to reliability and security. Survivability is also not well understood. Unfortunately, not much good research is finding its way into products. Developers ignore the risks, and the same old flaws appear over and over. For instance, eight of thirteen CERT reports this past year resulted from buffer overflows. Programming languages do little to prevent security problems. Few systems are easy to evolve.

Neumann challenged the old adage of KISS (Keep It Simple, Stupid). He argued that such advice does not work for extremely complex systems. He also disagreed with the "build one to throw away" principle of Brooks, because this encourages companies to perform beta testing with normal users.

There remains much to be learned from past mistakes; these problems have multiple dimensions. However, Neumann reasoned that we do not learn much from failures because the builders tend to move on to something else or build another system with the same problems.

The insider problem is largely ignored, and it is difficult to solve because the insider is already authorized. Neumann cited the Los Alamos National Laboratory incident as an example, speaking in defense of Wen Ho Lee.

Neumann recommended that developers specify general requirements, create stronger protocols, use good cryptographic infrastructures, design for interoperability, force commercial developers to do the right thing (maybe with open source), perform network monitoring and analysis, and find a good way to integrate systems.

The Q/A session consisted mainly of additional stories from the audience. Matt Blaze asked the rhetorical question, "Why in poorly designed systems does everything become part of the trusted computing base? Why limit [this characterization] to only poorly designed systems?"

John Ioannidis explained that Neumann is not "preaching to the choir" but to "lots of soloists." People do not listen to the soloists. Ioannidis argued that education of users is just as important as proper engineering practice. Otherwise, uneducated users will simply bring in the Trojan horse. The discussion led to an insane number of analogies about singing and security.

Causing the crowd to erupt with laughter, Greg Rose said it's amazing that we can build great flight simulators, but we can't build good air traffic control systems.

Dan Geer quizzed Neumann on the difference between brittle and fragile. Neumann responded that brittle means an object breaks when tapped a bit; fragile means the object will probably fall off the shelf. In Multics, a failure in ring 0 would cause the whole system to collapse. A failure in ring 1 would only cause the process to crash. In ring 2, a failure might only generate an error message. The rings contained the error.

What is Neumann's final prognosis? All reports conclude that the state of practice is not good. Risks abound. For related information, including a pointer to the Risks archive, see <https://www.csl.sri.com/neumann/>.

Good quotation from the talk: "Too many people think security is a cookbook thing. Especially the government."

REFEREED PAPERS

Session: PDAs

Summaries by Kevin Fu

The Design and Analysis of Graphical Passwords

Ian Jermyn, New York University; Alain Mayer, Fabian Monrose, Michael K. Reiter, Bell Labs, Lucent Technologies; Aviel Rubin, AT&T Labs — Research


Fabian Monrose analyzed the security of graphical passwords and proposed two graphical-password schemes that could achieve better security than textual passwords. This paper won both the best student paper award and the best overall paper award at the symposium.

Even though textual passwords are vulnerable to dictionary attacks, they still serve as the dominant authentication mechanism today. There is convincing evidence that graphical passwords, by contrast, offer better memorability and reveal less information about their distribution.

Results from cognitive science show that people can remember pictures much better than words. Combining this with the commonly found Personal Digital Assistant (PDA) allows new graphical input capabilities. Graphical schemes can decouple position from the temporal order in which a password is entered.

Monrose listed three desired properties of graphical passwords. First, they must be at least as difficult to search exhaustively as traditional passwords. Second, keys must not be stored as cleartext. Third, graphical passwords need to be repeatable and memorable.

After building a strawman scheme by which graphical passwords can emulate textual passwords, Monrose described a second scheme, dubbed Draw-a-Secret. It takes as input a picture and the order of its drawing. This scheme keeps track of boundary crossings and pen lifts from the screen. This information then passes through a hash function to produce a raw bit string as a password.
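To make the encoding concrete, here is a minimal sketch of a Draw-a-Secret-style scheme (my illustration, not the authors' code): it assumes a 5x5 grid and uses SHA-256 purely as an example hash, which the paper does not specify.

```python
import hashlib

GRID = 5  # the 5x5 grid discussed in the talk

def encode_stroke(points):
    """Map raw (x, y) pen samples in [0, 1) to the sequence of grid
    cells the pen crosses, dropping consecutive duplicates."""
    cells = []
    for x, y in points:
        cell = (int(x * GRID), int(y * GRID))
        if not cells or cells[-1] != cell:
            cells.append(cell)
    return cells

def das_password(strokes):
    """Serialize the drawing as cell crossings plus pen-up markers,
    then hash it into a raw bit string usable as a password/key."""
    tokens = []
    for stroke in strokes:
        tokens.extend(encode_stroke(stroke))
        tokens.append("PEN_UP")  # pen lifts are part of the secret
    serialized = ";".join(str(t) for t in tokens)
    return hashlib.sha256(serialized.encode()).digest()

# Repeating the same drawing reproduces the key; changing the cell
# sequence or the number of pen lifts produces a different one.
key = das_password([[(0.1, 0.1), (0.3, 0.1), (0.3, 0.5)]])
```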

Monrose pointed out that there is as yet no characterization of the distribution of graphical passwords. This means one picture is not known to be more likely than another as a password. On the other hand, textual passwords have well-known distributions related to dictionaries.

Rather than focus on the unrealistic upper bound of graphical-password choices, Monrose analyzed a lower bound by creating a grammar (similar to LOGO) that describes common ways to enter a graphical password. Evidence indicates that, by further assigning complexities to the terminals, graphical passwords requiring only a few pen strokes can surpass the strength of textual passwords.

Peter Honeyman went into an evil thesis-committee mode, shaking the earth with many difficult questions. Assuming at least 60 bits of entropy are necessary to protect confidential information, Honeyman questioned whether five pen strokes in a 5x5 grid is enough. Monrose did not answer the question directly.

Another audience member asked what would be the typical graphical password (a smiley face, picture of a dog, etc.). Suggesting that attacks similar to dictionary attacks may exist, the audience member asked how to classify what is memorable. Monrose explained that his team did not have enough experience to characterize distributions.

Another audience member asked for anecdotal evidence about memorability. Monrose explained that in practice, a 6x6 grid results in poor memorability. It's simply too fine-grained for the average user to repeatedly enter a graphical password correctly. The 5x5 grid creates a good balance between security and memorability.

Peter Neumann pointed out that written Chinese has a known keystroke order that may impose a distribution. Graphical passwords may pose a challenge for writers of Chinese. While a native writer of Chinese confirmed Neumann's point, she believes graphical passwords have merit.

Asked about incorporating timing and pressure for entropy, Monrose replied that it has not been considered. Neumann added that his group found pen pressure useless as a source of entropy.

While there is no concrete proof that graphical passwords are stronger than textual passwords, Monrose gave convincing evidence that graphical passwords stand a good chance and at least deserve further analysis. In the future, Monrose hopes to better classify memorability of graphical passwords. For source code and further information, see <https://cs.nyu.edu/fabian/pilot/>.

Hand-Held Computers Can Be Better Smart Cards

Dirk Balfanz and Edward W. Felten, Princeton University


Dirk Balfanz proposed that a PDA could serve as a better smartcard. By implementing a PKCS#11 plug-in for Netscape Communicator and the 3Com PalmPilot, Balfanz reduced the number of components in the Trusted Computing Base (TCB); the trusted part of the system remains only on the Pilot. Smartcards usually do not have a trusted authentication path to the user: the user often communicates a secret PIN to the smartcard via a PC and smartcard reader, which places the PC and smartcard reader in the TCB! Any malicious software on the PC could thus easily obtain digital signatures on bogus data. A user interface or warning light on the smartcard could prevent such misuse, but unfortunately, most smartcards have no user interface.

The implementation of PilotKey currently works for Linux, Win9X, Solaris, and Windows NT in concert with Netscape Communicator and the Pilot. Preliminary results show that a 512-bit RSA signature takes about five seconds, and key generation about two to three minutes. However, a 1024-bit RSA signature takes 25 seconds and a whopping 30 minutes for key generation. Balfanz pointed out that the Pilot uses something like a 16MHz processor. He expects the processor to speed up in the future.

The PilotKey program implements the PKCS#11 functions for the Pilot. The program enables a user to provide randomness via the user interface. Because PilotKey works directly with the user, the PC does not see the PIN. Finally, the program lets the user choose whether or not to send decrypted messages back to the PC. Incidentally, PilotKey cannot securely handle email attachments (unless you can perform base64 decoding in your head).
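The trust split is easy to see in schematic form. In this sketch (illustrative names, with an HMAC standing in for the RSA signature; PilotKey's real code differs), the PC only ever handles a digest and the resulting signature, while the PIN check and the private-key operation stay on the device:

```python
import hashlib, hmac

class Pilot:
    """Stand-in for the handheld: the signing key and the PIN check
    live here and never reach the PC."""
    def __init__(self, key, pin):
        self._key, self._pin = key, pin

    def sign(self, digest, pin_entered_on_device):
        # The PIN is entered on the Pilot's own screen, so the PC
        # (and any malware on it) never sees it.
        if pin_entered_on_device != self._pin:
            raise PermissionError("bad PIN")
        # HMAC stands in for the RSA signature PilotKey computes.
        return hmac.new(self._key, digest, hashlib.sha1).digest()

# The PC's role is reduced to shipping a digest and receiving a signature.
pilot = Pilot(b"device-secret", "1234")
digest = hashlib.sha1(b"message composed on the PC").digest()
signature = pilot.sign(digest, pin_entered_on_device="1234")
```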

Alas, the Pilot does exhibit some problems acting as a smartcard. First, it is not tamper-resistant, and the operating system does not provide memory isolation among programs. Hence, one should not use untrusted software on the Pilot in conjunction with PilotKey. Second, it is important not to lose the Pilot. While the secret information is password-protected, this offers only limited protection. Third, the PilotKey is not appropriate for "owner-is-enemy" applications. For instance, a program keeping track of a cash balance is inappropriate. The Pilot owner could easily change the balance. Finally, HotSyncing could reveal secrets to the host computer. Balfanz claimed these problems are not show stoppers. Inciting a few chuckles, he explained that you "just need a secure OS on the Pilot." In the paper, Balfanz makes some good suggestions on how to secure the Pilot OS.

Peter Honeyman began the inquisition with a hail of questions. Another audience participant reminded the rest of the crowd to turn off the Pilot's IR when not in use. One questioner asked why not just fix the OS on the PC if we can fix the OS on Pilot as suggested. Balfanz admitted this is the same problem, but fixing the OS on the PC is not any easier.

A participant suggested that splitting trust seems simply to push the problem down the line. People will want more features (e.g., signed Excel spreadsheets). Balfanz explained that shifting trust to the PDA comes at the expense of usability. Another participant argued that removing all the software from a Pilot results in a relatively expensive smartcard.

Questioned about Europe's desire for autonomous smartcard readers and smartcards with displays, Balfanz responded that he would not trust the ATM or the reader. Bruce Schneier wrote a similar paper on splitting trust (<https://www.usenix.org/publications/library/proceedings/smartcard99/schneier.html>) for the first USENIX Workshop on Smartcard Technology, held last May.

For more information on PilotKey, follow up with <balfanz@cs.princeton.edu>.

Offline Delegation

Arne Helme and Tage Stabell-Kulø, University of Tromsø, Norway


Arne Helme explained mechanisms for offline delegation of access rights to files in the context of a distributed File Repository (FR). In this model, each user has her own file repository but would like to share files. Offline delegation refers to delegating authority without network connectivity. For instance, one could use verbal delegation.

PDAs challenge centralized security models, provide users with personal TCBs, and support the construction of "delegation certificates." This meshes well with the design criteria for offline delegation: delegation should not allow impersonation; credentials must form valid and meaningful access rights; and the authority granted in a certificate should not be transferable or valid for multiple use.

The implementation consists of an access-request protocol and a prototype for the 3Com PalmPilot. The software contains a parser/generator for SDSI-like certificates and a short digital-signature scheme using elliptic-curve cryptography.

The access-request protocol (analyzed with BAN logic in the paper) describes the process of creating, delegating, and using offline certificates. The delegator must read 16 four-digit hexadecimal numbers to convey an offline certificate (e.g., by phone). Helme's Pilot software allows the delegatee to quickly receive these numbers via a convenient user interface.
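The verbal channel is narrow, which is why the spoken material fits in 16 groups. A sketch of the encoding (assuming the 16 four-digit groups carry a 256-bit value; the certificate's exact field layout is in the paper):

```python
def to_verbal_groups(blob):
    """Render a 32-byte (256-bit) value as the 16 four-digit
    hexadecimal numbers the delegator reads aloud."""
    assert len(blob) == 32
    hex_str = blob.hex()
    return [hex_str[i:i + 4] for i in range(0, 64, 4)]

def from_verbal_groups(groups):
    """Reassemble the value on the delegatee's Pilot."""
    return bytes.fromhex("".join(groups))

groups = to_verbal_groups(bytes(range(32)))
assert len(groups) == 16                 # e.g., ["0001", "0203", ...]
assert from_verbal_groups(groups) == bytes(range(32))
```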

Helme clarified that certificates in this scheme are valid for only one use. Servers keep a replay cache. One problem with the current signature scheme is that the FR cannot distinguish between two different files that had the same filename at different times.

Helme concluded that offline delegation is a natural extension of services to the File Repository, that PDAs support verbal delegation, and that performance is satisfactory. For more information, contact <arne@acm.org>.

Session: Cages

Summaries by Michael J. Covington

As was mentioned numerous times throughout the symposium, a critical component of building survivable systems is the ability to integrate untrusted information "pieces"—from hardware to mobile code and downloaded programs—into a trusted computing base. This session focused on protection mechanisms that let users maintain a secure environment while still benefiting from untrusted components. The three papers presented in this session discussed the design and implementation of prototype systems that provide a "caged" environment in which users are protected yet able to proceed with their work safely.

Vaulted VPN: Compartmented Virtual Private Networks on Trusted Operating Systems

Tse-Huong Choo, Hewlett-Packard Laboratories


Tse-Huong Choo described the design and architecture of a software-based IPSec product named Vaulted VPN. The motivation behind the development of Vaulted VPN was to incorporate IPSec support into a trusted operating system. The move to a trusted OS was based on experience with conventional VPNs that have repeatedly failed because of security "hotspots" inherent in the operating system itself. By starting with a trusted foundation, Choo claims that security, performance, and overall robustness in the system are improved. More specifically, the trusted OS offers features such as sensitivity labeling and mandatory access control, while adhering to the principles of least privilege—all of which support the building of a compartmentalized IPSec implementation.

The system discussed in this talk is actually implemented as an IPSec VPN that consists of a series of compartmented, concurrently executing IPSec stacks. After briefly discussing some design alternatives, Choo presented the Vaulted VPN architecture, providing detailed descriptions of how packets move through the redesigned stack. In addition, he touched briefly on the topic of key management in the Vaulted VPN system.

One critical design feature, according to Choo, is the ability to have various components in the system run as a non-root user. By keeping the IPSec stack(s) in a separate compartment and running them in an environment that is stripped of most privileges, a single security failure will most often not lead to another. In addition, message channels are protected by a combination of both MAC (Mandatory Access Control) and DAC (Discretionary Access Control)—a feature that is not present on many standard operating systems today. Choo concluded by revisiting the benefits of a trusted OS and by commenting on the security gains achieved through compartmentalized designs.

Enforcing Well-Formed and Partially Formed Transactions for UNIX

Dean Povey, Queensland University of Technology


Dean Povey opened his presentation with a simple quote: "Sometimes it sucks to be a 'user.'" He proceeded to explain that although security is a critical component of information systems, users are often frustrated because they are not given sufficient rights to accomplish tasks assigned to them. With this problem as his motivation, Dean described an optimistic access-control system. By operating in a relatively trusted environment where the user population consisted of researchers and system administrators, Dean's system makes a base assumption that accesses are legitimate, but allows audit and recovery of the system when they are not.

There are three critical components to maintaining the security and stability of the environment in an optimistic system. First, there must be audit mechanisms in place to track who performs what functions on the system. There also must be accountability built into the system so that users who abuse their "extended rights" are held accountable. Finally, there must be a recovery mechanism that is capable of returning the system to a valid state should something go wrong.
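The flavor of an optimistic control is easy to convey in a few lines. This toy sketch (my illustration; tudo's real mechanism uses system-call tracing, described below) shows the three components: the access is granted up front, logged for accountability, and a snapshot supports recovery:

```python
import os, shutil, time

AUDIT_LOG = "audit.log"  # hypothetical log location

def optimistic_write(user, path, new_contents):
    """Grant the access optimistically, but keep enough state to audit
    the action and to restore the prior version if it was abuse."""
    backup = None
    if os.path.exists(path):
        backup = f"{path}.{int(time.time())}.bak"
        shutil.copy2(path, backup)              # recovery: snapshot
    with open(AUDIT_LOG, "a") as log:           # audit + accountability
        log.write(f"{time.ctime()} {user} wrote {path} backup={backup}\n")
    with open(path, "w") as f:
        f.write(new_contents)
    return backup

def rollback(path, backup):
    """Return the system to a valid prior state."""
    shutil.move(backup, path)
```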

Dean detailed tudo (Trusted User Do), an application designed to enforce both well-formed and partially formed transactions in a UNIX operating system. Based on sudo, tudo supports fine-grained access control of files and directories and also provides the logging and recovery features necessary to support well-formed and partially formed transactions. tudo incorporates its own access-control mechanisms, as well as recovery features that are being enhanced to provide greater stability and increased functionality.

The tudo prototype is available from <https://security.dstc.edu.au/projects/tudo>. This proof-of-concept implementation demonstrates how a reference monitor can be constructed using system call tracing facilities that enforce both well-formed and partially formed transactions.

 

Synthesizing Fast Intrusion Prevention/Detection Systems from High-Level Specifications

R. Sekar and P. Uppuluri, State University of New York at Stony Brook


The work described in this presentation was motivated by the building of survivable information systems and the construction of intrusion-detection systems that are able to isolate intrusions before they impact system performance or functionality. Sekar discussed the design and implementation of a system that allows users to specify acceptable patterns of system calls. Essentially, the runtime environment intercepts system calls and checks them against a set of specifications, disallowing or otherwise modifying those calls that deviate from the specifications. By intercepting calls before they reach the kernel, the system is capable of reacting before any damage-causing system call is executed.

Sekar presented samples of the specification language and walked through an example of specifications for an FTP server. A critical component of his implementation was a new, low-overhead algorithm for matching runtime behavior against specifications. Surprisingly, the algorithm uses, in most cases, constant time per intercepted system call, and it uses a constant amount of storage. Overall, experiments have demonstrated that analyzing each system call adds less than 5% overhead to the system.
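A minimal sketch of the interception idea (the paper's specification language is far richer; this hypothetical table-driven version only shows the shape):

```python
# Hypothetical specification for an FTP server: which system calls are
# acceptable, and with what arguments.
SPEC = {
    "open":  lambda path, mode: path.startswith("/home/ftp/"),
    "read":  lambda *args: True,
    "write": lambda *args: True,
}

def intercept(syscall, *args):
    """Check a system call against the specification before it reaches
    the kernel, so a damaging call is refused rather than undone."""
    check = SPEC.get(syscall)
    if check is None or not check(*args):
        raise PermissionError(f"{syscall}{args} deviates from the spec")
    # In the real system, the call would now be allowed to proceed.

intercept("open", "/home/ftp/pub/file.txt", "r")   # conforms: allowed
# intercept("open", "/etc/passwd", "r")            # would be refused
```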

Session: Keys

Summaries by Kevin Fu

Building Intrusion-Tolerant Applications

Thomas Wu, Michael Malkin, and Dan Boneh, Stanford University


Tom Wu discussed how applications can store and use private keys in an intrusion-tolerant manner. As a side benefit, Wu's methods also provide high availability of private keys. Existing public-key architectures rely on unshared keys or on split-key storage in which the private key must be reconstructed for use, creating a single point of failure. Methods in Intrusion Tolerance via Threshold Cryptography (ITTC) create no single point of attack because private keys are never reconstructed.

In Shamir's secret sharing scheme, a dealer generates a key and splits it into several shares for trustees. The trustees can later reconstruct the original key at a central location. In this scheme a dealer sees the original key and all the shares, creating a single point of failure. Compromise to the dealer's memory results in total disclosure of the key. Wu uses an intrusion-tolerant scheme that is not vulnerable to such a single point of failure. He employs methods, based on Boneh and Franklin, to generate a private key already in a shared form. To create shares, Wu uses an idea of Frankel's. Share servers apply the share to an operation (e.g., sign, decrypt, encrypt) rather than give the share to the application as if it were a dealer. Without a threshold number of results from the share servers, an adversary cannot reconstruct the results of a private-key operation. SSL protects the communication between the application and share servers. In order to break the ITTC scheme, an attacker must compromise multiple systems in a short period of time. One can also identify malicious share servers.
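The essential point, that a signature can be assembled from partial results without the key ever existing in one place, can be shown with a deliberately simplified example. The sketch below uses additive n-of-n sharing and toy RSA numbers; ITTC's actual scheme (Boneh-Franklin generation, Frankel-style shares) is threshold t-of-n and avoids a dealer entirely:

```python
import random

def share_key(d, phi, servers=3):
    """Split d into additive shares: d = sum(shares) mod phi."""
    shares = [random.randrange(phi) for _ in range(servers - 1)]
    shares.append((d - sum(shares)) % phi)
    return shares

def partial_sign(m, share, n):
    """Each share server applies only its own share to the message."""
    return pow(m, share, n)

def combine(partials, n):
    """Multiplying the partial results yields m^d mod n."""
    result = 1
    for p in partials:
        result = (result * p) % n
    return result

# Toy RSA parameters (never use in practice): n = 61*53, d = 2753.
n, phi, d = 3233, 3120, 2753
partials = [partial_sign(65, s, n) for s in share_key(d, phi)]
assert combine(partials, n) == pow(65, d, n)  # same signature, d never rebuilt
```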

Wu's implementation adds intrusion tolerance to a Certificate Authority and to the Apache Web server. Integration of ITTC was trivial with the OpenSSL code. By relying on load balancing and multiple threads for parallelism, ITTC costs only a 17% drop in throughput and a 24% increase in latency compared to non-ITTC RSA. Wu pointed out that this slowdown is insignificant considering the overhead of SSL session establishment.

Wu concluded that ITTC improves security and reliability for servers and CAs. Meanwhile, the performance impact is minimal. An audience member asked about the security consequences of an intruder obtaining a core file from such an intrusion-tolerant Web server. Wu happily replied that the intruder could retrieve only the result of a single decryption, not the private key of the Web server. At no single point is the key entirely reconstructed.

For more information, see <https://www.stanford.edu/~dabo/ITTC/>.

Brute Force Attack on UNIX Passwords with SIMD Computer

Gershon Kedem and Yuriko Ishihara, Duke University


Gershon Kedem gave an overview of brute-force cryptanalysis. He listed the primary barriers to brute force: expertise, nonrecurring time and cost, access to tools and technology, replication costs, reusability, and performance. Based on experience with an existing SIMD computer, he proposed the design of an SIMD machine dedicated to brute-force cryptanalysis.

Kedem spent a long time presenting a table, which is in the paper, comparing the experience, cost, and time necessary for brute force when using software, FPGAs, ASICs, and custom chips. Because the talk spent so much time summarizing past research, the audience mostly lost touch with the contributions from this paper.

A Single Instruction Multiple Data (SIMD) machine is made of a large array of small processors. This configuration makes it possible to get close to the theoretical limits of the processor. PixelFlow is an SIMD machine made of many flow units. Each flow unit includes an array of 8,192 processing elements. Using 147,456 PixelFlow SIMD processors, Kedem was able to perform brute-force cryptanalysis of 40-bit RC4 (38,804,210 key checks/second) and the UNIX crypt scheme (24,576,000 UNIX password checks/second). PixelFlow could try all 40-bit key combinations for a particular RC4 ciphertext in about 7.87 hours.
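The 7.87-hour figure follows directly from the quoted key-check rate (a quick check of my own):

```python
keys = 2 ** 40                 # 40-bit RC4 keyspace
rate = 38_804_210              # key checks per second on PixelFlow
print(f"{keys / rate / 3600:.2f} hours")   # -> 7.87
```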

Because UNC Chapel Hill created the PixelFlow machine for image generation, it does have some limitations when used for cryptanalysis. It has few registers, no dedicated shift unit, no pipelining, and no memory indexing. The lack of memory indexing prevents fast table lookups. Had PixelFlow used memory indexing, Kedem explained, there would have been a 64X speedup for RC4 and a 32X speedup for DES (but the speedups from Kedem's talk are twice that of the figures in the paper). These limitations are specific to PixelFlow, not SIMD machines in general. Kedem then proposed an SIMD design for brute-force cryptanalysis and compared this to FPGA-based machines.

Adi Shamir's TWINKLE project was one buzzword mentioned in this talk. However, an audience participant pointed out that TWINKLE does not take into account that LEDs fade over time.

For information related to brute force cryptanalysis, see <https://theory.lcs.mit.edu/~rivest/bsa-final-report.ps> or contact Kedem at <kedem@cs.duke.edu>.

Antigone: A Flexible Framework for Secure Group Communication

Patrick McDaniel, Atul Prakash, and Peter Honeyman, University of Michigan


Patrick McDaniel introduced Antigone, a flexible framework for defining and implementing secure group policies. To demonstrate the usefulness of Antigone, McDaniel integrated it into the vic secure video conferencing system.

Antigone is unique in that it focuses on the following goals:

  • Applications can flexibly use a wide range of security policies.

  • The system supports a range of threat models.

  • It is independent of specific security infrastructures.

  • It does not depend on the availability of a specific transport protocol.

  • The performance overheads are low.

Taking into account that security policies vary from application to application, Antigone provides a basic set of mechanisms to implement a range of security policies. First, a session-rekeying policy allows sensitivity to certain membership changes including JOIN, LEAVE, PROCESS_FAILURE, and MEMBER_EJECT. Second, an application message policy guarantees types of security such as confidentiality, integrity, group authenticity, and sender authenticity. A third policy specifies what kind of membership information other members can obtain. The fourth policy determines under what circumstances Antigone can recover from failures.
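A sketch of how a session-rekeying policy might be expressed (hypothetical names; Antigone's actual API differs):

```python
import os

SENSITIVE_EVENTS = {"JOIN", "LEAVE", "PROCESS_FAILURE", "MEMBER_EJECT"}

class Session:
    def __init__(self):
        self.key = os.urandom(16)
    def distribute_new_key(self):
        self.key = os.urandom(16)   # a fresh key excludes departed members

class RekeyPolicy:
    """Which membership changes force a rekey is itself the policy."""
    def __init__(self, rekey_on):
        self.rekey_on = set(rekey_on)
    def on_membership_change(self, event, session):
        if event in self.rekey_on:
            session.distribute_new_key()

paranoid = RekeyPolicy(SENSITIVE_EVENTS)    # rekey on every change
lenient = RekeyPolicy({"MEMBER_EJECT"})     # rekey only on ejection
```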

One participant asked how to demonstrate that all policies are complete. McDaniel explained that his current work does not have a complete answer. Another participant asked for comparisons between Antigone and other software to implement security policies. McDaniel responded that Antigone currently does not take into account peer groups, voting, negotiating protocols, or ciphers. However, he sees no fundamental reason preventing Antigone from addressing these issues.

In the future, McDaniel hopes to investigate adaptive and role-based policies, implement new mechanisms, benchmark Antigone, and integrate the software with real applications. For more information, see <https://antigone.citi.umich.edu/> or email <pdmcdan@eecs.umich.edu>.

Session: Potpourri

Summaries by Patrick McDaniel

A Secure Station for Network Monitoring and Control

Vassilis Prevelakis, University of Piraeus


As many system administrators have discovered, managing the network infrastructure for multiple independent communities is a difficult task. Vassilis Prevelakis presented the requirements, design, and experiences with the deployment of a secure network station. The Network Monitoring Station consists of a collection of off-the-shelf hardware and software used to securely manage and monitor remote sites within the Greek University Network (GUnet). The latter portion of the presentation discussed the integration and operation of the stations within the target networks.

The target environment presented by Prevelakis consists of independent campus networks that are managed and monitored by a central GUnet network control center. The goal of the architecture is to provide secure, universal access to the network entities within GUnet. This goal is achieved through the deployment of network stations within each remote site.

The initial requirements of the network-monitoring station limited the number of potential solutions. The target networks contained hardware from multiple vendors. Software running on these network components has security services of variable availability and quality. Because of the heterogeneity of networking hardware, no single interface is available. Thus, the design must flexibly support a number of management interfaces, some of which may be unknown at design time.

The hardware constraints were equally daunting. Each station is required to be built using decommissioned personal computers. Moreover, because of reliability problems, the use of hard disks was deemed undesirable.

Having presented the target environment and architectural constraints, Vassilis outlined the station architecture and operation. The primary design decision was the identification of the station operating system. Hardware constraints immediately disqualified Windows-like platforms as potential solutions. Next, a number of UNIX platforms were analyzed. Because of the out-of-the-box availability of IPSec and the history of good security-service design, the OpenBSD 2.3 UNIX variant was selected. Because of the lack of a hard drive, Vassilis determined that each station must be booted from an OS image contained on a single floppy disk. Using PICOBSD configurations and crunchgen utilities, the OS, system utilities, and station-specific configuration data are compressed onto a single floppy disk. The floppy disk is used to boot a station and may be removed thereafter.

Inter-station communication is primarily based on IPSec tunneling. Where IPSec is unavailable, as in administrative workstations running Windows, ssh is also supported. A problem encountered during deployment was the large number of security associations to be configured. This problem is addressed by the automation of the association-generation process from a single station configuration database. However, it is acknowledged that the distribution of new SA information after database modification creates significant administrative overhead.
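The automation is conceptually simple; the cost lies in the quadratic growth of pairwise associations. A hypothetical sketch of generating SAs from the single configuration database (illustrative names, not the actual GUnet tooling):

```python
from itertools import combinations
from zlib import crc32

stations = ["noc", "uoa", "auth", "upatras"]   # hypothetical station names

def generate_sas(stations):
    """Emit one security association per station pair from the single
    configuration database, rather than configuring each by hand."""
    for a, b in combinations(sorted(stations), 2):
        yield {"peers": (a, b), "spi": crc32(f"{a}|{b}".encode())}

sas = list(generate_sas(stations))
# n stations need n*(n-1)/2 pairwise SAs, which is why redistributing
# them after every database change carries real administrative cost.
```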

The network monitoring stations were required to provide facilities for both network monitoring and management. Monitoring facilities allow the tcpdump collection of network traffic to be delivered to a logging station over an IPSec tunnel. In monitoring the station itself, the syslog data can be delivered similarly.

Management of the remote site is achieved by accessing the station directly or through SNMP interfaces. In those instances where a network object (host, router, bridge) does not support appropriate security services, a station may be used as a proxy to the object's serial interface. In these cases, an object's management interface may be accessed only via the network-management station.

A question from the audience identified a limitation of the existing architecture: the lack of key management services. A change in the IPSec SA database requires the re-creation of boot disks for all stations. This requires the physical involvement of administrators at each site and for each station. Vassilis stated that this problem was to be addressed in the near future, but in practice can be avoided in a large number of cases. It was stated that new stations can be added without affecting the entire network. Thus, at the cost of connectivity, additional stations can be added without requiring all other stations be notified of the change.

Another limitation identified by the audience was the boot disk itself. Because the boot disk has limited capacity, the number of utilities that can be made available is small. The speaker noted that not only are the disks small, but they are an outdated technology. He is currently investigating, among other possibilities, booting over the network and from a CD-ROM.

The Flask Security Architecture: Systems Support for Diverse Security Policies

Ray Spencer, Secure Computing Corporation; Stephen Smalley and Peter Loscocco, National Security Agency; Mike Hibler, Dave Andersen, and Jay Lepreau, University of Utah


Stephen Smalley began his talk by indicating that previous operating-system-level security architectures were deficient in at least one of the following areas: control of propagation of access rights, enforcement of fine-grained access rights, or revocation of existing access rights. The result of a DARPA- and Air Force-funded project, the Flask architecture is intended to address all three of these requirements simultaneously. In addition to meeting these design objectives, Flask is required to have low impact on the performance of user applications and system services.

At its most basic level, the Flask architecture provides support for the flexible definition and enforcement of (potentially dynamic) operating-system-level security policy. Based on the Fluke microkernel operating system, a prototype of the Flask architecture has been developed and benchmarked. This work represents extensions to the authors' previous work on the DTOS architecture.

The Flask architecture defines two subsystems to be integrated into a target operating system. The security server uses local policy definitions, operation context, and policy-specific code to make security-related decisions. As directed by those decisions, the object-manager subsystem enforces policy over the set of objects within the operating system.

A key difficulty of dynamic policy support addressed by the Flask architecture is in providing atomic revocation of previously granted access rights. Without atomic support for revocation, the propagation of the dynamic policies within the system may not be deterministic. This may result in inconsistent policy enforcement.

Flask addresses revocation by requiring that the invalidation of a granted right be a lightweight operation at each object manager. Thus, the security server may quickly invalidate the right at each object manager affected by a particular policy change. Using this approach, Flask can ensure the atomicity of the change with respect to policy decisions.

The number of policy decisions resulting from even simple user actions may be large. Thus, the performance of the Flask architecture may be limited by the cost of the interactions between the security and object managers. An observation made by the authors is that subsets of these policy decisions are often closely related.

The Flask access vector cache (AVC) is used to limit the amount of communication between the object managers and the security server. In responding to a policy-decision request, the security server provides a vector of policy decisions related to, and containing a response to, the original request. This vector is cached in the AVC, from which the needed policy decision is obtained. Subsequent policy requests that can be serviced by the cached vector are completed without involving the security server.
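A sketch of the caching idea (my illustration, not Flask's code):

```python
class ToySecurityServer:
    def compute_vector(self, subject, obj):
        """One trip to the server yields a whole vector of related
        decisions, not just the permission that was asked about."""
        return {"read": True, "write": subject == "admin", "execute": False}

class AccessVectorCache:
    def __init__(self, server):
        self.server = server
        self.cache = {}

    def check(self, subject, obj, permission):
        key = (subject, obj)
        if key not in self.cache:
            self.cache[key] = self.server.compute_vector(subject, obj)
        return self.cache[key][permission]   # later hits skip the server

    def revoke(self, subject, obj):
        """Invalidation is lightweight, which is what makes atomic
        policy change practical."""
        self.cache.pop((subject, obj), None)

avc = AccessVectorCache(ToySecurityServer())
assert avc.check("alice", "/tmp/f", "read")        # miss: asks the server
assert not avc.check("alice", "/tmp/f", "write")   # served from the cache
```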

Historically, the performance of operating systems supporting fine-grained security policies has been poor. In the interest of determining the cost of the Flask mechanisms, a prototype was developed and compared with two existing operating systems. Fluke, a capability-based OS from which Flask was derived, outperformed Flask by 5% for a simple make task. FreeBSD outperformed Flask by 100% on the same task. It was found that the caching of related policy decisions significantly reduced the IPC costs between managers. The authors were encouraged by these results and indicated that additional optimizations were being considered.

A member of the audience asked for a clarification of the meaning and operation of polyinstantiation within Flask. Used typically as an isolation mechanism, polyinstantiation is the duplication of an object to be used by two or more processes. A cited example is the /tmp directory, where a security domain may wish to protect temporary files from other domains. Smalley continued by describing the mechanisms Flask uses to map operation context to instances of polyinstantiated objects.

A Study in Using Neural Networks for Anomaly and Misuse Detection

Anup K. Ghosh and Aaron Schwartzbard, Reliable Software Technologies


Aaron Schwartzbard enthusiastically presented the results of a study that applies the learning capabilities of neural networks to intrusion detection. He began with a description of the limitations found in existing intrusion-detection approaches. Because existing systems lack mechanisms that generalize knowledge of known attacks, detecting new attacks is difficult. Although not entirely addressed in this work, Schwartzbard also identified the misidentification of normal behavior as attacks (false positives) as a challenge.

An important aspect of any intrusion-detection system is the fundamental detection approach. In anomaly-detection systems, departures from normal behavior are identified from models of expected activity. Conversely, misuse-detection systems scan activity logs for instances of known aberrant behavior. Thus, depending on the type of approach taken, intrusion-detection systems develop profiles of normal or aberrant behavior on the basis of previously collected event data. The logs used for analysis are typically obtained from traces of user commands, network traffic, or system calls. It has been found that the type of log used has an effect on the performance of the intrusion-detection algorithm.

A limitation of anomaly-detection systems lies in the specification of normal behavior. As user activities change over time, the static nature of the profile may lead to false positives. Moreover, if an attack is used in the generation of the profile, it will thereafter be deemed normal behavior. However, this approach has the advantage that novel attacks may be detected.

Schwartzbard noted that misuse-detection systems typically identify signatures of known attacks. The system logs are scanned for occurrences of these signatures, and matches are flagged as attacks. Unfortunately, this approach will only detect attacks that fit signatures obtained from training data.

In attempting to address the limitations of existing approaches, the authors apply a machine-learning approach to intrusion detection. Using training data to develop weights and activations, a back-propagation neural network is developed for each program to be monitored. During subsequent analysis, event data is encoded and fed into the network. The numerical output of the network is then used to identify potential intrusions.

Because detecting attacks on the basis of entire sessions is difficult, logs are typically analyzed using n-grams of contiguous events. However, identifying an attack from a single n-gram can be similarly difficult. To combat these problems, trends within collections of n-grams are used to identify intrusions. The size of an n-gram collection and the weights applied to individual n-grams are parameters of the detection algorithm.

Based on the weights and activations derived from training data, the output of a network over some n-gram is a value in the interval (0, 1). Thus, the approach does not identify specific behavior, but indicates the amount of anomalous (or normal) behavior within a particular n-gram. When the sum of the weighted output values of n-grams within a collection exceeds a sensitivity threshold, a potential intrusion is flagged.

The authors analyzed the effectiveness of their approach using the 1998 DARPA Intrusion Detection Evaluation program corpus of data. The corpus training data identified both normal and anomalous behavior within a number of system logs. The training data was used to create a network specific to each program to be analyzed.

The experimental results presented the effectiveness of intrusion detection as a function of the sensitivity of the system. As an algorithm becomes more sensitive, more intrusions are detected (defined as the percentage of real intrusions). Similarly, the rate of false positives (defined as the percentage of misclassified nonintrusions) increases with sensitivity. An ideal system would have, at some sensitivity, perfect detection (100% detection) with zero false positives (0%).

Using test data, anomaly detection using neural networks was able to achieve a high detection rate (77.3%) with very few false positives (2.2%). However, as the sensitivity was further increased, the false-positive rate increased dramatically without significantly affecting the number of intrusions detected. Schwartzbard noted that these results were comparable to existing approaches.

The use of neural networks for misuse detection did not fare as well. High false-positive rates (5%) were observed in tests resulting in even modest detection rates. The authors state that these results are due in large part to the limited amount of intrusion-training data within the DARPA corpus. Future work will attempt to better classify this approach using more substantial intrusion-training data. However, the authors were encouraged by high detection rates found (90%) at sensitivities resulting in relatively low false positives (18.7%).

The long stream of questions indicated the audience's interest in the work. A member of the audience asked for clarification of the "leaky bucket" approach used in analyzing the collections of n-grams. In attempting to detect an intrusion, the value of the previous analysis is multiplied by some value less than or equal to 1 and combined with the value of the current network output. Thus, the weight applied to the value of a single n-gram output decreases (leaks) over subsequent analyses. Schwartzbard noted that the performance of the algorithm may be greatly affected by the multiplier (leakage).
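A sketch of the leaky-bucket combination (the leak factor and threshold here are tuning parameters I chose for illustration):

```python
def leaky_bucket(outputs, leak=0.7, threshold=2.0):
    """Combine per-n-gram network outputs, each in (0, 1): the running
    score decays by `leak` each step and accumulates the new output, so
    sustained anomalies trip the threshold while isolated blips drain."""
    score = 0.0
    for out in outputs:
        score = score * leak + out
        if score > threshold:
            return True   # flag a potential intrusion
    return False

assert leaky_bucket([0.9, 0.95, 0.9, 0.99])     # sustained anomaly
assert not leaky_bucket([0.9, 0.0, 0.0, 0.1])   # isolated blip leaks away
```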

Another question was directed at the fundamental intrusion-detection mechanism, not at the authors' application of neural networks. It was noted that because only n-grams of contiguous events are analyzed, it may be possible for an adversary to shape attacks to be consistent with normal behavior. The speaker noted that this was a known problem, and that current research is investigating approaches that address this limitation.

Session: Security Practicum

Summaries by Kevin Fu

The Design of a Cryptographic Security Architecture

Peter Gutmann, University of Auckland


Peter Gutmann gave a fast-paced talk on how to design a versatile, multiplatform, cryptographic architecture. The implementation works on many platforms ranging from 16-bit microcontrollers to supercomputers and ATMs.

Most security toolkits specify an API, not an architecture. In contrast to the traditional outside-in approach, Gutmann's architecture takes a general cryptographic architecture, then wraps an interface around it.

Gutmann built his architecture based on two concepts: objects encapsulate the architecture functionality, while a security kernel enforces a consistent security policy. Each object has tightly controlled I/O for security reasons. Objects are either action objects (e.g., encrypt, decrypt, sign) or container objects. The containers further decompose into three object types: data containers, key and certificate containers, and security attributes.

Gutmann found that C did not work well for implementing this architecture. The implementation comes in several languages, ranging from C/C++ to Perl and Visual Basic. Gutmann also wrote a formal specification and used the Assertion Definition Language (ADL) to verify his code.

An object can be in one of two states, low or high. In the low state, one can perform all allowed operations on the object. In the high state, one can perform only a limited, safe subset of those operations. An audience member asked whether Gutmann's design prevented an object from selecting from more than two states (i.e., whether it was implemented as something like a single-bit flag). In Gutmann's experience, two states are sufficient. The security kernel supports an infinite number of states, but expecting the user to manage them all is very complex (they would have to track a complex FSM [Finite State Machine] as an object moves from one state to another), and so far there has not been any real need to use more than two.
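A sketch of the two-state discipline (hypothetical names and operations, not cryptlib's API):

```python
class KernelMediatedObject:
    LOW_OPS = {"load_key", "set_attribute", "encrypt"}
    HIGH_OPS = {"encrypt"}          # the limited, safe subset

    def __init__(self):
        self.state = "low"

    def invoke(self, op):
        """Every access passes through this check, standing in for the
        security kernel's mediation of all object I/O."""
        allowed = self.LOW_OPS if self.state == "low" else self.HIGH_OPS
        if op not in allowed:
            raise PermissionError(f"{op} not permitted in {self.state} state")
        if op == "load_key":
            self.state = "high"     # one-way transition to the high state
        return f"{op} ok"

obj = KernelMediatedObject()
obj.invoke("load_key")
obj.invoke("encrypt")
# obj.invoke("set_attribute")  # would raise: forbidden in the high state
```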

Everything is implemented in Gutmann's cryptlib library at <https://www.cs.auckland.ac.nz/~pgut001/cryptlib/>.

Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0

Alma Whitten, Carnegie Mellon University; J. D. Tygar, University of California at Berkeley


Alma Whitten raised serious issues about deficiencies in cryptographic user interfaces. After explaining user interfaces (UI), she spoke about the critical results of a usability evaluation of PGP 5.0.

Recognizing that security is fundamentally different from traditional software, Whitten defined usability for security as:

  • A user can tell what needs to be done.

  • A user can figure out how to do it.

  • A user does not make dangerous errors.

  • A user does not get annoyed and give up.

Whitten noted that for applications such as word processors, usability work focuses on the second point. In security, one cannot stop there: the undo function, for instance, cannot reverse the accidental mailing of plaintext.

Whitten explained that "security is not a word processor" because of:

  • unmotivated users (security is a secondary goal)

  • the barn door problem (the undo problem)

  • the abstraction problem

  • the software being only as strong as the weakest link

  • lack of feedback

Whitten chose PGP 5.0 with the Eudora plug-in on a Macintosh for a case study because, by general consumer-software standards, the interface was reasonably well designed. Her team used two methods to evaluate PGP: laboratory testing (objective, limited, and expensive) and cognitive walkthroughs (subjective, comprehensive, and cheap).

The cognitive walkthrough showed that visual metaphors needed more thought. For instance, the quill head can confuse users. One user mixed up a .sig file with signatures. There is also information overload with too much visual data, such as displaying the whole key ring and metadata. With respect to barn door and feedback problems, we need more protection against irreversible errors such as sending plaintext.

In the laboratory test, users had 90 minutes to perform a series of actions, working in the scenario of a political campaign. The user had to securely coordinate five virtual campaign members. The 12 participants were of a variety of ages between 20 and 55, with educational backgrounds ranging from some college to a doctoral degree, and backgrounds from fine arts to computer technology. Half of the participants were male.

Nearly everyone could generate key pairs (though a few users misunderstood and generated keys for each campaign member). Twenty-five percent of the users managed to send plaintext instead of ciphertext; two of these three people realized the mistake. Many users encrypted the message with their own public keys instead of the recipient's key; nearly everyone fell into this trap. Eventually, after receiving error messages from the virtual campaign members, the participants were able to send encrypted mail correctly. Half of the participants eventually encrypted a message; a fourth of them did it without much help.

By the time the first tests were done, only five people had gotten far enough to decrypt a message; most of those could. However, some mixed up ASCII-armored keys with PGP messages, since the two look the same. The study concluded that although PGP 5.0 with Eudora has a nice UI, competent people in the target group could not use it reliably. Whitten suggests that to fix these deficiencies one should simplify the UI, minimize what is less important, and add the right kind of help.

Someone suggested that maybe the problem is the PGP model. Can we get a reasonable interface with PGP? Whitten responded that her group looked at where the PGP model did not match users' expectations.

Avi Rubin asked how much documentation and training the test subjects received. Whitten gave each subject printed, bound copies of the Eudora and PGP manuals in addition to a quick tutorial on how to send email with Eudora. In other words, the test subjects had more resources than most users. Moreover, they read the manuals.

Another audience member asked how many users did an out-of-band check to see if the encrypted messages worked. These details are in the paper, but Whitten noted that one technical person noticed the keys were signed and understood that trust in the key had to do with signatures on the key. The majority of users did not understand the trust metric.

Derek Atkins humorously defended himself by saying he designed much of the core API, but not the UI, for PGP. He asked about problems relating to confusion among various key types. Whitten said that one virtual campaign member had an RSA key and would complain to the test subjects about problems decrypting email. Only one subject figured out that the email had to be encrypted once with RSA and once with DSA.

Greg Rose added an anecdote: USENIX used to run a heavily scripted keysigning. It worked well, but about two-thirds of messages required human interaction.

Another participant asked about the importance of interruptions (e.g., dialog box warnings) to the user about security. Whitten explained that there was no time to look at user tolerance levels. Ideally, one would make the UI sufficiently obvious to prevent such dangers in the first place.

Questioned about how many of the problems resulted from poor interaction with the Eudora plug-in versus the core PGP interface, Whitten explained that it is hard to distinguish. The UI needs a display to make obvious what is encrypted and what is not. For further information, send email to <alma@cs.cmu.edu>.

Jonah: Experience Implementing PKIX Reference Freeware

Mary Ellen Zurko, John Wray, Iris Associates; Ian Morrison, IBM; Mike Shanzer, Iris Associates; Mike Crane, IBM; Pat Booth, Lotus; Ellen McDermott, IBM; Warren Macek, Iris Associates; Ann Graham, Jim Wade, and Tom Sandlin, IBM


John Wray described the reference implementation of the Internet Engineering Task Force's (IETF) Public Key Infrastructure (PKIX). Wray explained the motivation behind the project. IBM needed a pervasive, open PKI for business; major vendors pledged support for PKIX; and there was a need for reference code to exercise the PKIX specifications.

The team implemented the PKIX specifications as a C++ class library covering several RFCs: X.509, Certificate Management Protocols (CMP), Certificate Request Message Format (CRMF), and LDAP v2.

Wray highlighted what the team learned from this experience. What did they do right? They used:

  • a single data structure for protocols and persistent data

  • C++ destructors to minimize memory leaks

  • a single back end, good for promoting code reuse

  • an unfamiliar platform (NT) for development

However, Wray also noted what they did wrong:

  • no proper error architecture (but Wray has never seen a good one)

  • interoperability testing too late

  • sloppiness with respect to case sensitivity (NT is case-insensitive, but UNIX is case-sensitive)

  • STL (Standard Template Library) problems

Asked how easy it is to use the CDSA architecture, Wray replied that CDSA 1.2 did indeed present difficulties. Questioned about certificate revocation, he replied that the implementation publishes certificate revocation lists.

For more information, see <https://web.mit.edu/pfl/> or read the archives of <imc-pfl@imc.org> on <https://www.imc.org/imc-pfl/>.

Session: Access Control

Summaries by Matt Heavner and Ping Liu

Scalable Access Control for Distributed Object Systems

Daniel F. Sterne, Gregg W. Tally, C. Durward McDonnell, David L. Sherman, David L. Sames, and Pierre X. Pasturel, Network Associates, Inc.; E. John Sebes, Kroll-O'Gara Information Security Group


Gregg Tally presented ongoing work to extend CORBA with fine-grained access control (per object, per operation) in order to facilitate the widespread use of distributed object-oriented systems. DTE (Domain and Type Enforcement) is a set of access-control mechanisms for UNIX kernels; it was extended to distributed-object systems and applied to CORBA as OO-DTE, an ORB plug-in that uses SSL for domain checking. OO-DTE uses a distributed policy scheme in which a master policy server distributes policy over CORBA connections to local policy servers. The talk featured an example application for managing books (check-in/check-out and query) in a library; the example is presented in the proceedings paper but was extended a bit in the talk.

Tally briefly compared OO-DTE with CORBASec (a spec released in 1996): OO-DTE permits the use of wild-card rules to facilitate the assignment of types to methods. CORBASec requires the enumeration of rights for each interface, without inheritance, making specifying new interfaces more tedious.

One feature of the DTE work was DTEL, a high-level compilable language for implementing security policy. OO-DTE includes DTEL++, which provides constructs for assigning types to methods.

Tally summed up the results of the work as follows: OO-DTE is currently implemented as a plug-in to Orbix and VisiBroker; it is scalable and flexible (allowing both coarse and fine granularity); and with DTEL++ administration is easy (policy creation through a set of general rules and exceptions, using inheritance from base interfaces to derived interfaces), so large numbers of objects can be "labeled" with just a few DTEL++ policy statements. Benchmarking reveals that SSL and interception add more overhead than the OO-DTE checks themselves.
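The wildcard-versus-enumeration contrast can be shown with a toy rule table (illustrative only; actual DTEL++ syntax differs):

```python
import fnmatch

# One general rule plus one exception labels every method of the
# library example's interface; CORBASec would enumerate each right.
RULES = [
    ("Library.*",      "library_query_t"),    # default for all methods
    ("Library.check*", "library_update_t"),   # check_in/check_out override
]

def type_of(method):
    label = None
    for pattern, dte_type in RULES:
        if fnmatch.fnmatch(method, pattern):
            label = dte_type                  # later, more specific rules win
    return label

assert type_of("Library.query") == "library_query_t"
assert type_of("Library.check_out") == "library_update_t"
```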

The OO-DTE and DTEL++ implementation code is available from Tally at <gtally@nai.com>.

Several people questioned the interoperability between different ORBs, since the solution depends on the object keys, which unfortunately are different from vendor to vendor; the answer is that there are no good solutions yet. Peter Honeyman suggested using references such as LDAP to solve the problem, but the feasibility needs to be investigated. Another audience member asked if OMG plans to standardize the object key. The answer is that it's fundamentally not supported (big laugh).

Certificate-based Access Control for Widely Distributed Resources

Mary Thompson, William Johnston, Srilekha Mudumbai, Gary Hoo, Keith Jackson, and Abdelilah Essiari, Lawrence Berkeley National Laboratory


Mary Thompson presented the current state of the Akenti system for certificate-based access control. The motivation for Akenti is a distributed computer and collaborative-use system, shared between organizationally and geographically distributed facilities and users. The goals of the project are to implement policy-statement-based access, to provide multiple-stakeholder control of a single resource structure, and to use a public key system. The emphasis for the system is on usability. Currently, the PKI used consists of X.509 certificates, SSL, and digital signatures. (PKI deployment is more of a future issue, not so important on the small scale for which Akenti is currently being designed.) Akenti is implemented as several library routines used in conjunction with Apache with minimal local policy files (whom to trust, and certificate location).

A major feature of Akenti is the GUI for designing access policy, including the ability for users to check access lists in order to see what portion of the infrastructure may be blocking access (e.g., if a stakeholder has written a "bad" rule); this may be changed so only stakeholders can view access lists in larger implementations.

Some vulnerabilities in the Akenti system stem from its distributed-certificate nature: a certain level of trust in possibly insecure remote machines is required, and network outages can be disastrous. A problem in the rule-set implementation is that independent stakeholders can create mutually exclusive access policies and unintentionally lock out users. Performance-wise, Akenti offers fairly fine granularity; 80% of Akenti's time is spent fetching certificates; and a failed search takes more time than a successful lookup. The current status of the project is that Akenti/Apache is in use at LBNL and Sandia for control of Akenti code distribution and access control for notebooks.

Currently Akenti runs on Linux and Solaris. Future modifications include the use of XML certificates, a standalone implementation (unlinked from Apache), expanded use conditions, and possibly bandwidth-control policies.

There were four questions at the end of the presentation. Q: Is there a problem with certificate storage for use by both Netscape & IE users? A: There used to be, but it seems to be working now. Q: Use of Netscape certificate generation ("click hell"). (The question was a plug for the "PK no I" WIP by Honeyman.) Q: Is there a problem with a "spoof" attack to get around "not" rules? (Meaning: if there is a rule that "Joan from Sun can't use the Coke machine," then Joan can subvert this by logging in to request access to the Coke machine through a non-Sun account.) A: This is a possible problem: the full range of access-control implementation is still a work in progress. Q: Is there a problem with revocation of rights? A: The revocation of rights is currently to be implemented at "host institutions" and propagated through the Akenti system.

Digital-Ticket-Controlled Digital Ticket Circulation

Ko Fujimura, Hiroshi Kuno, Masayuki Terada, Kazuo Matsuyama, Yasunao Mizuno, and Jun Sekine, NTT Information Sharing Platform Laboratories


The last paper in this session described a digital-ticket-circulation scheme and a corresponding trust-management scheme. The requirements the authors are trying to address are:

  • flexibility to satisfy different business purposes

  • flexible and automatic management

  • simple verification

The authors proposed an "Onion Ticket Accumulation Model," with the user's identification information as the onion core and the user's rights forming the outer layers. A related trust-management scheme was also proposed, aiming to provide some means of ticket verification regardless of the circulation route. The approach is quite ad hoc: a digital signature and a hash value, combined with ticket-type identifier information.
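
The flavor of the layering idea can be sketched in a few lines of Python. This is illustration only: the field names, the JSON encoding, and the use of an HMAC as a stand-in for a real digital signature are all assumptions here, not the authors' actual format.

    import hashlib
    import hmac
    import json

    def wrap_layer(inner, right, issuer_key):
        # Wrap an inner ticket layer in a new right, binding the layers
        # together with a hash; a real scheme would use a digital
        # signature rather than an HMAC with an issuer key.
        layer = {
            "right": right,                                   # ticket-type information
            "inner_hash": hashlib.sha256(inner).hexdigest(),  # binds the inner layers
            "inner": inner.decode(),
            "sig": hmac.new(issuer_key, inner, hashlib.sha256).hexdigest(),
        }
        return json.dumps(layer).encode()

    # The onion core: the user's identification information.
    core = json.dumps({"user": "alice@example.org"}).encode()

    # Issuers along the circulation route add layers of rights.
    t1 = wrap_layer(core, "admission:concert", b"promoter-key")
    t2 = wrap_layer(t1, "transfer:once", b"reseller-key")

    # A verifier peels layers from the outside in, checking each layer,
    # regardless of the route the ticket took to get here.
    outer = json.loads(t2)
    assert outer["inner_hash"] == hashlib.sha256(outer["inner"].encode()).hexdigest()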

The authors hope to bring the work to the IETF and to publish soon. Information will be at <https://info.isl.ntt.co.jp/flexticket/>.

INVITED TALKS

Cryptography and the Internet: Networks and Security and Why the Two Don't Get Along

Steven M. Bellovin


Summary by Matt Heavner

Steven Bellovin, filling in for Peter Neumann (who gave the keynote address), gave a talk originally presented at CRYPTO '98. The talk fit in well with the Security Symposium.

The rhetorical question that started the talk was: "Why is cryptography coming to the Internet now?" The answer is that the driving force of money has been applied in the form of e-commerce. As businesses attempt to extract money from consumers via the Internet, there is a public perception of insecurity and also a public perception that cryptography is an answer. Therefore the "solution" of cryptography is getting stapled onto the Internet. Current cryptography use on the Internet is primarily for email (PGP and S/MIME) and the browser use of SSL. Up-and-coming cryptography includes IPSec and SET.

The current use of cryptography in email (PGP, S/MIME) suffers from a scaling problem and the lack of the true PKI necessary for widespread (pervasive) use. Although SSL is not limited to HTTP, that is its overwhelming use. Also, although SSL is meant to provide a PKI solution, such a thing does not actually exist, because users don't know what certificates are or what to do with them!

IPSec is a more general cryptographic solution than either of the two above (since it is implemented at the IP layer and therefore does not require modification of each application). The initial use of IPSec will be for VPNs (where no general PKI solution is required). With the possible implementation of IPSec in upcoming Microsoft and Freenix OSes, the widespread deployment should be a reality within three years (but does this mean that it will be widely used?). One of the biggest problems with IPSec is reality (the use of nonsecure computers for secure work, e.g., a former CIA director's recently revealed work habits).

SET is an alternative to the widespread use of credit card (CC) numbers on the Internet. SET may replace CC numbers on servers. It is a multiparty protocol, in that it involves the consumer, the bank, and the vendor. The SET system will not be widespread without a monetary incentive from the CC companies (such as lower rates for merchants). However, the CC industry may be interested enough in the implementation of SET that it could happen relatively soon and become widespread.

The next phase of the discussion concerned what is still missing for widespread use of cryptography on the Internet. First, the speed of public-key operations creates a bottleneck at the server—modern CPUs can handle large public keys, but servers get bogged down with multiple simultaneous requests.

Second, secure routing on the Internet is a difficult problem, for two reasons: the topology of the network (generally many hops between hosts) and the lack of well-defined secure and/or synchronized time. The time problem is difficult because without a trustable time standard, time-stamping of validity periods is impossible. One possible solution to the topology problem is a "chain of digital signatures" between trusted pairwise connections along the entire path between hosts. However, the backbone routers are already pushed to capacity, and this would require many digital-signature verifications. Determining the currently correct route for traffic is a further problem: the time for which a given route is correct is small, so a route that was correct moments ago may no longer be. A similar problem is secure multicasting.
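
A rough sketch of the pairwise-chain idea, in Python. The shared keys below stand in for the trusted pairwise relationships, and an HMAC stands in for each digital signature; every router name and key is invented for illustration.

    import hashlib
    import hmac

    # Pairwise keys shared by adjacent routers (placeholders).
    PAIR_KEYS = {("A", "B"): b"k-ab", ("B", "C"): b"k-bc", ("C", "D"): b"k-cd"}

    def hop_tag(prev_tag, src, dst):
        # Each hop authenticates the chain so far plus its own link.
        key = PAIR_KEYS[(src, dst)]
        data = prev_tag + ("%s->%s" % (src, dst)).encode()
        return hmac.new(key, data, hashlib.sha256).digest()

    def build_chain(path, announcement=b"route-announcement"):
        tags, tag = [], announcement
        for src, dst in zip(path, path[1:]):
            tag = hop_tag(tag, src, dst)
            tags.append(tag)
        return tags

    def verify_chain(path, tags, announcement=b"route-announcement"):
        tag = announcement
        for (src, dst), claimed in zip(zip(path, path[1:]), tags):
            tag = hop_tag(tag, src, dst)
            if not hmac.compare_digest(tag, claimed):
                return False        # a hop was forged or replayed
        return True

    chain = build_chain(["A", "B", "C", "D"])
    assert verify_chain(["A", "B", "C", "D"], chain)

Every backbone router would have to perform a verification like verify_chain for each announcement, which is exactly the load the talk warns about.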

One problem found in trust management is the conflict between machine and human understandings of trust. One example is a certificate presented by interactive.wsj.com that was issued for www.wsj.com. A human can see that this mismatch is nothing like the difference between nasa.gov and nasa.com, yet a Web browser treats both name mismatches the same way.

Another problem in getting cryptography integrated into the Internet is one of "cryptography versus cryptographic engineering." A cryptographic paper may propose a system; setting up a "standard" implementation (key lengths, etc.) is the next step; and the software implementation is a completely different problem again. There are several requirements for Internet cryptography that exacerbate the theory-versus-engineering problem. Fine granularity is desired, for example, but it also makes some attacks on the cryptosystem easier to mount.

Returning to the DNS/routing issues, Bellovin brought up the specific issue that the DNS TTL field changes constantly, so the record as served cannot simply be signed as a whole! Also, secure DNS needs to provide dynamic updates, as well as negative answers to queries, within a reasonable period of time.
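
The TTL problem is easy to demonstrate. In this toy sketch (illustrative Python, with an HMAC standing in for a zone's public-key signature), the record is naively signed with its TTL included; once a caching server decrements the TTL, the served record no longer verifies.

    import hashlib
    import hmac

    KEY = b"zone-signing-key"    # stand-in for a real zone key

    def sign_record(name, rdata, ttl):
        # Naive approach: sign the record including its mutable TTL.
        wire = ("%s %d %s" % (name, ttl, rdata)).encode()
        return hmac.new(KEY, wire, hashlib.sha256).digest()

    sig = sign_record("www.example.com.", "A 192.0.2.1", ttl=3600)

    # A caching resolver serves the record with a decremented TTL...
    check = sign_record("www.example.com.", "A 192.0.2.1", ttl=3542)

    # ...and verification downstream necessarily fails.
    assert not hmac.compare_digest(sig, check)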

Several examples of the conflict between cryptographic requirements and "real world" network requirements can be found in IPSec. IPSec hides everything, so traffic analysis by network engineers is no longer possible. Network-address translators cannot deal with IPSec (if they can't get at the IP addresses and port numbers, they cannot translate). Adjusting the windowing of network traffic to cope with satellite-link latency, as well as "tinkering" with transmission over wireless nets, cannot be done with IPSec in use.

One final difficulty with cryptography and the Internet is protocol verification. Designing a cryptographic protocol is hard enough, but reality can be even worse! The common attitude of "shoot the engineers and ship the product" to get products to market, and the problem of "late science" (a flaw in a cryptographic implementation found years after product distribution), are additional headaches associated with the widespread use of cryptography on the Internet.

In conclusion, Bellovin pointed out that no more than 15% of CERT advisories in 1998 to mid-August could have been solved or avoided with widespread deployment of cryptographic solutions. (For example, many of the problems are still simple buffer overflows.) One last problem is that a lot of bad crypto is out there (e.g., a self-extracting crypto message system sold as a means of eliminating the need for secure exchange of secret keys!).

The viewgraphs for Bellovin's talks are online at <https://www.research.att.com/~smb/talks/inet-crypto.ps>.

The main subject discussed in the question period was public-key infrastructure. The points made were that PKI is a top-down system, whereas the real world is not top-down; real-world trust is largely anonymous. One major problem is the lack of user knowledge; as an example, Netscape ships browsers signed with expired keys.

Rik Farrow asked about the problems of interoperability among current IPSec implementations. Bellovin answered that it works well at the IP level; it is only so-so at the certificate level; and it is bad if different certificate authorities are used. Hopefully this will improve on a six- to 12-month time scale.

US Crypto Policy: Explaining the Inexplicable

Susan Landau, Sun Microsystems Laboratories


Summary by Jim Simpson

A very well-informed and eloquent person, Susan Landau is a senior staff engineer at Sun Microsystems Laboratories. She also co-wrote the multiple award-winning book Privacy on the Line: The Politics of Wiretapping and Encryption (which Avi Rubin, who introduced Landau, highly recommends to anyone in the field of computer security and to those looking to better their comprehension of the field). Her informative talk to a crowded room, punctuated with witty comments, explained the context of the current US crypto policy.

In the period between 1764 and the early 1800s, the founding fathers of the US used cryptography when communicating about love, life, and, most important, politics; they understood the importance of protecting their communications. In 1999, the government takes a contrasting stance and claims that the same idea of protecting communication will nullify its ability to investigate acts of terrorism and protect citizens. In 1992 the FBI indicated that by 1995 strong crypto would make 40% of wiretaps ineffective; in 1995, out of all wiretaps, only one or two were not understandable. Clearly, the FBI's concerns were unjustified.

Landau presented the Fourth Amendment to frame the issue and explained it thus: even though the government can obtain the right to search, it does not have the right to find. She indicated that this difference is very important, and further explained that the government is actually more concerned with wiretaps than with email. In a case in which the Supreme Court did not agree that wiretapping someone was a violation of the Fourth Amendment, a dissenting judge suggested that wiretapping is really no different from the writs of assistance the colonists rejected. Finally, Landau mentioned the moment in the Watergate hearings where Senator Talmadge reiterated the importance of the concept of privacy, in response to another senator's deprecating comment about how circumstances had changed since the Fourth Amendment was written.

Many Americans do not realize privacy is not necessarily guaranteed by the Constitution, even though there are amendments that allude to privacy. Most rulings on privacy come from case law. Some cases have ruled in favor of privacy while others have not, indicating that privacy is not always clearly defined. This is one of the reasons why the US crypto policy seems to be inexplicable: cryptography helps guarantee privacy in communications.

In 1934, Congress passed the Federal Communications Act, which in effect said the government may not tap and divulge wired communication. A court case prior to the law ruled in favor of those doing the wiretapping, while a case afterward ruled in favor of those being wiretapped. (There had actually been a similar law at the time of the earlier case, in 1928, but the lawyer did not think to use it.) When Germany invaded Poland in 1939, J. Edgar Hoover got permission for the FBI to intercept—but not divulge—communications by wiretap, and this activity continued throughout the war. By the time Truman was in office, the last sentence of the executive order allowing these wiretaps, which had limited the targets to foreigners, had mysteriously disappeared. The FBI worked very hard to keep these wiretaps undiscovered over the next 30 years, marking files as confidential and citing information from "confidential sources" in court. Whenever Congress tried to investigate the possibility of wiretaps, it was silenced.

In 1967, the case of Katz v. US led to the Supreme Court ruling that no bugs may be installed without search warrants, effectively throwing wiretaps and bugs out of court cases. However, because of mounting social tensions involving organized crime, law-enforcement officials pushed for the use of wiretaps, leading to Title III, which did indeed allow them. Finally, in the late '70s, another law approved the use of wiretaps in foreign-intelligence surveillance.

Title III specifically covers the use of wiretapping. It states that wiretaps should be used only as a close-to-final resort; a wiretap requires a search warrant and can be used only to investigate certain crimes. The cost of a wiretap is up to $58,000 a year, so only around a thousand of them are installed each year. The second law, FISA, made provisions for the use of wiretaps for foreign intelligence only; Americans are not to be targeted. Around 500 are installed annually, and information about them is hard to come by. Organized-crime and drug-trafficking cases make up about 75% of all wiretaps, which conflicts with the FBI's argument that it uses wiretaps to go after terrorists and kidnappers. Only recently have Title III wiretaps covered terrorist activities. Kidnapping is the strongest argument against crypto, as crypto is often presented as the obstacle that prevents an authority from reuniting parent and child; it turns out that only two or three cases out of several hundred kidnappings actually make use of wiretapping.

Landau feels it is a poor idea to base public policy on two or three cases a year. She pointed out how ineffective wiretapping can be in the case of kidnapping: if you do not know who kidnapped the person, how do you wiretap them? Certainly, if you have the approval of the victim's family to listen in on any incoming ransom call, crypto is not going to stand in the way; if the family can understand the call, law officials can, too. At this point, the room exploded in laughter.

It turns out the US government has no problem with the use of crypto for authenticity and integrity. There are economic arguments for the use of crypto as well. The government's response has been to give control of export to the Department of Commerce instead of the Department of State, with DES limited to 56 bits. Current policy includes Clipper in 1993, CALEA in 1994, and the AES competition. CALEA essentially states that switching networks have to be built so that they are wiretap-accessible; otherwise the telco faces a $10,000-per-day fine for every wiretap it is not able to perform. Between 1994 and 1998, standards were to be drawn up for how many simultaneous wiretaps were to be possible, how much carrier capacity was required, and so on. The Department of Justice decided the FBI would be the best agency to do this. The number the FBI came up with in 1995 called for capacity for 30,000 simultaneous wiretaps in the US; between Title III and FISA, only about 1,500 wiretaps a year are done, perhaps 1,000 of them simultaneous. The telcos objected, so in 1996 the FBI came back with a different number: 60,000.

The policy arm of the US government is pushing key escrow. Six years of effort have yet to lead to any sort of key-sharing agreement, and key escrow has meanwhile caused a tremendous delay in the deployment of other cryptography. Devices using key escrow have not done very well in the public sector; of 15,000 secure phones manufactured by AT&T, 9,000 went to the FBI. Finally, an internal memo written by Bill Reich, who works for the Secretary of Export Administration in the Department of Commerce, summed up the sentiments of those who pushed key escrow: they themselves did not like to use it, since it took longer to initialize.

Landau discussed how the SAFE bill, at one point good because it would relax export controls, was gutted by the Armed Services and Intelligence Committees to the point where it would make it a crime not to decrypt when ordered to by a court. Following that, she touched on FIDNET, the Federal Intrusion Detection Network, intended to detect problems in the national information infrastructure. Landau wholeheartedly agrees that the nation's information infrastructure should be secured, but wonders why the government is making it so difficult to do so. Once again, laughter and clapping echoed throughout the room.

Landau is often asked for her opinion of what the future holds. She feels a race is going on between what the NSA and the FBI are trying to get away with and what happens in Europe. If enough European competition starts taking away US business, Congress may finally pass laws relaxing export controls. She also feels the FBI has a much more polarized view than the NSA: the NSA knows the game is lost as far as crypto goes, and it has a vested interest in the security of US industry, since a weak US computer industry makes the NSA's job that much more difficult. Export controls that hamper the US computer industry are problematic precisely for that reason. She also feels that the currently admired practice of open source may change or influence the policy: if source code becomes public information, export control will be harder to enforce, and the reason for it will become less relevant.

She ended her talk with a poignant story from shortly after CALEA passed. The FBI invited law-enforcement agencies from around the world to learn how to install and use the wiretapping capabilities of digital switching technology. One of the police forces invited was Hong Kong's, now under Chinese jurisdiction; every year the State Department lists human-rights abuses from around the globe, and China is consistently at the top of the list. She ended with twin questions: "What are the technologies we're exporting, and what are the values we're exporting?"

The Burglar Alarm Builder's Toolbox

Marcus Ranum, Network Flight Recorder


Summary by Rik Farrow

Marcus Ranum shared some of his experience in this presentation, while revealing yet another bit of job history. Ranum could not share code examples, because most of his stuff turned out not to be portable to Linux. But there were more than enough ideas to keep you busy for a while, and perhaps even trip up an almost-successful attacker.

He targeted misuse detection, that is, looking for activities that should not occur. To do this you need to know how your network is actually built. For example, his cats are not authorized to portscan his servers, and if they do, something must be amiss; likewise, at home, the three large cats are not authorized to open the doors, and when that happens, an alarm goes off (note that Marcus's cats are all MCSEs, so you never know what will happen). You can apply the same idea by setting up misuse detection to detect violations of site policy. He remarked that this is almost like a security assertion.

Another approach is to watch for second-order effects. Second-order effects are things that might happen after a successful, but not yet detected, break-in. These things include adding a new user account, setting the execute or setuid bit, adding a new service, or making changes to your Web pages. You do this by leveraging local knowledge and knowledge of commonly used attacker tricks. Ranum remarked that he had worked for a while installing burglar alarms for his father's company, and that the second-order alarms were what caught the slicker thieves: switches set under a fake jewelry box or on the door of the gun safe.
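
As one illustration of the idea (a sketch, not Ranum's code, which was not shown), a cron job might snapshot the set of setuid files and local accounts and raise an alarm when either set grows; the paths and directory choices here are examples only.

    import os
    import pwd      # UNIX-only account database
    import stat

    def setuid_files(root="/usr/bin"):
        # Collect the setuid binaries under a directory tree.
        found = set()
        for dirpath, _, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                try:
                    st = os.lstat(path)
                except OSError:
                    continue
                if st.st_mode & stat.S_ISUID:
                    found.add(path)
        return found

    def account_names():
        # Snapshot of the local user accounts.
        return {entry.pw_name for entry in pwd.getpwall()}

    baseline_suid = setuid_files()
    baseline_users = account_names()

    # ...later, from cron: anything new is a second-order alarm.
    new_suid = setuid_files() - baseline_suid
    new_users = account_names() - baseline_users
    if new_suid or new_users:
        print("ALARM: new setuid files or accounts:", new_suid, new_users)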

If someone avoids all your burglar alarms, you know it's an inside job.

Ranum listed the advantages of watching for second-order effects: it detects previously unknown attacks, it is cheap to do, and it rarely sets off false alarms if done correctly. The disadvantages include having to understand your network and how things should work, and the fact that this approach is policy-directed (you may not have a policy).

Marcus had a long list of suggestions. For example, instead of just patching that buffer-overflow hole, have a failed attack generate a syslog message. Use packet filtering on your firewall-protected Web server; then if anything unexpected reaches the packet filter, you will know that something is wrong. You can also do this with a sniffer, or even tcpdump with a simple set of rules that write to a log file: if the file gets written to, something has happened (a minimal version of this tripwire is sketched below). Another example: block traffic from the inside to the outside (IRC from your Web server, for instance; there is a whole list of things your Web server shouldn't be doing). There are many tools to help with this: hardware sniffers, in-kernel packet filters, applications (Argus, NFR), and tcpwrapper logs.
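
A minimal version of the tcpdump tripwire might look like the following sketch; the filter expression, log path, and polling interval are placeholders to be replaced by local policy, and tcpdump must run with enough privilege to sniff.

    import os
    import subprocess
    import time

    LOG = "/var/log/never.log"

    # Traffic that should never occur on this segment; adapt to local policy.
    FILTER = "src host 10.1.2.3 and not dst port 80"

    # tcpdump is the sensor: -n (no name lookups), -l (line-buffered),
    # appending one line to the log for every matching packet.
    sniffer = subprocess.Popen(["tcpdump", "-n", "-l", FILTER],
                               stdout=open(LOG, "a"),
                               stderr=subprocess.DEVNULL)

    size = os.path.getsize(LOG)
    while True:
        time.sleep(60)
        new_size = os.path.getsize(LOG)
        if new_size > size:      # the log grew: the "impossible" happened
            print("ALARM: unexpected traffic logged in", LOG)
            size = new_size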

Use safe_finger to reverse-finger someone who touches a port (there is the lame example of this used with tcpd). He also suggested touch files for slow-scan detection.

Ranum suggested trapping certain actions by replacing the top half of system calls such as exec, connect, accept, and chmod. This suggestion sparked some audience participation, as someone suggested that MEMCO (seller of SEOS) had patented the idea of replacing the system-call jump table. No one knew for sure, but even MEMCO's own PR mentions that this technique was taken from IBM's OS/360. A quick search uncovered a Linux module for installing trojans in the system-call jump table (<https://www.rootshell.com/search.fcg>, look for <heroin.c>). Gaspar Carson mentioned debugging tools, and Solaris includes an interface for intercepting system calls.

Other suggestions included whacking your shell when it gets called with -c, preventing a second chroot(), disabling fchroot() (never used, but there for completeness only), trojaning commonly used commands and training yourself not to use them (ls, chown, chmod, etc.), and replacing the NIT or BPF driver with something that triggers an alarm. A more humorous idea was to create a program called watchdog that does nothing (but creates paranoia).

Along the lines of strange things to leave lying around, Ranum suggested installing things that look like trojans, such as BackOfficer Friendly, redirectors that reflect scans back to their source, and the phat_warez.zip file (a couple of gigabytes of zeroes compressed into something quite small).
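
The bait file is trivial to make, since zeroes compress absurdly well. A sketch (using gzip rather than zip for brevity; the phat_warez name comes from the talk):

    import gzip

    CHUNK = b"\0" * (1 << 20)            # 1 MiB of zeroes
    with gzip.open("phat_warez.gz", "wb") as bait:
        for _ in range(2 * 1024):        # 2 GiB uncompressed
            bait.write(CHUNK)

The result is a file of a few megabytes that expands to a couple of gigabytes of nothing on the thief's machine.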

There was lots more; you can find the complete presentation on Ranum's Web site: <https://www.clark.net/mjr/pubs/index.html>.

ActiveX Insecurities

Richard Smith, Phar Lap Software


Summary by Rik Farrow

Richard Smith is not that well known for his day job as president of Phar Lap Software, a maker of embedded and realtime development tools for the x86. But his hobby has helped make him famous; as Smith puts it, if Microsoft can build it, he can break it. You might also have heard of Smith as the person who discovered David L. Smith's name in the Word document that carried the Melissa virus.

ActiveX controls appeared as VB (Visual Basic) controls on the Web about three years ago. Controls are largely what the X toolkit world calls widgets (push buttons, dialog boxes, sliders, etc.), but they can be much more than that. Microsoft, as a predominant vendor, wants to move as many of its technologies as possible onto the Web, for more control. From the users' point of view, it is less clear why they might want ActiveX on their systems.

ActiveX is DLL (dynamic link library) binary code, and it can be called up and scripted from a Web page. Right from the start, this is a pretty scary thing. HTML is a formatting language and Java and JavaScript are programming languages, but ActiveX is native binary code, and Smith points out that Microsoft is taking a real security risk with it. Even with digital signatures, ActiveX is a lot scarier than people think. Smith knows of five or six people, including Georgi, a Bulgarian security researcher, who have time on their hands to look at these problems.

What is even more interesting is that you can send HTML via email and expect to have Internet Explorer (IE) interpret it for you. Outlook Express, the mail client shipped with Windows 98 and also as part of Microsoft Office, will automatically invoke IE, which can in turn invoke ActiveX controls: HTML <object> tags embed the controls, and JavaScript on the page invokes them. In his first example, just by reading an email message (from Georgi), there was suddenly a file in his startup folder that invokes command.com (and could have invoked deltree, the Microsoft equivalent of rm -rf).

At least five to ten million people run IE5. By default under Outlook, the viewer for messages is IE5. So when you hear about browser exploits, think email also. This means you can direct an attack at particular persons, as long as they use IE5. As far as Smith knows, there is no way to disable HTML interpretation, but you can improve your security by disabling scripting.

Actually, Microsoft did include a method for assuring that only certain controls, marked as "safe," could be invoked from scripts. What's happening is that ActiveX controls are being marked safe for scripting when they are not. If they are marked safe for scripting, they can be used in Web pages (or email). Windows 98 comes with over 900 controls, and thousands more may be added by the vendor and by installing software packages.

Smith and his friends have uncovered about ten controls that are marked safe but are not. His favorite example is the Launch ActiveX control that has been shipped with five million HP Pavilion PCs. With this control, email or Web pages can be used to start any application on the targeted system. Compaq installed a slightly less dangerous control, one that can overwrite any file (autoexec.bat, for example). There is also an ActiveX control in IE5 that can steal your PGP secret key.

Internet Explorer does permit you to disable scripting, which will foil these attacks. Of course, you could just use Netscape and not have this problem. But there is another technique that can be used even if you disable scripting. Smith called this the bullying technique. He used the Compaq control as an example. A dialog box pops up, saying that the control will be used only if the user trusts Compaq (based on the authenticode signature). Smith enters "No," and immediately the dialog box pops up again with the same question. Eventually the user is likely to give up and say yes.

Smith mentioned that the problems do not appear as bad with IE4 and Windows 95.

You can view the interfaces to ActiveX controls with OLEview. This is not a standard part of a Windows or NT distribution, but it does come with Visual Studio and other developer tools. Nicer would be a tool that prints the IDL (Interface Definition Language) as text, so that UNIX tools could be used to look for interesting interfaces (those named Launch, Write, IObjectSafety, etc.).

Smith provided a URL for pages that will check your system for dangerous controls: <https://www.tiac.net/users/smiths/acctroj/index.htm>. I visited this page (using Netscape and UNIX), but you are really supposed to use it with IE5 (and I think we can trust Mr. Smith not to do anything bad).

There were some questions at the end of the talk. Someone asked what policy Smith would suggest in regard to various scripting languages. Smith said that he feels pretty good about JavaScript and the controls on Java, but doesn't really see much use of ActiveX on the Internet and suggests disabling it. Another person asked about blocking ActiveX at firewalls. Smith responded that this does not work well.

Finally, a person asked plaintively, "Do I tell people it's safe to read email?" Smith responded, "In theory, no, ever since HTML email appeared. Never open up attachments, and get your security settings right in Outlook Express." Sigh.

Apples, Oranges, and the Public Key Infrastructure (PKI)

Paul C. Van Oorschot, Entrust Technologies


Summary by Michael J. Covington

Paul Van Oorschot is a vice president and chief scientist with Entrust Technologies, a spin-off from the Secure Networks division at Nortel. With an extensive background in mathematics, cryptography, and work in the industrial sector, Van Oorschot has been exposed to many of the forces that are motivating the drive for secure computing in today's business marketplace. He discussed the challenges involved in building a flexible, secure, and standardized public key infrastructure (PKI). From certificate creation to contract signing, he discussed the details of this complicated, and still unstandardized, transaction-based process.

Van Oorschot opened his talk by promoting PKIs and by providing a brief history detailing the evolution of certificate-based technologies. As he discussed the details of certificates and the many popular protocols that rely on them, it became apparent that certificate technology is indeed becoming ubiquitous. Unfortunately, the current state of certificate deployment seems to be on a course for disaster. Statistics show that over 150 million browsers have been distributed with the capability to "understand certificates," yet the standards by which these objects are created, destroyed, and maintained have yet to be established.

In addition to the lack of a supportive infrastructure for public-key technologies, there remain widespread misunderstandings about PKIs in general. Van Oorschot detailed a number of these, from differing deployment markets to complex theoretical and implementation details. His point was clear: there are different markets and different solutions available. There are also various service approaches that affect design. The key (no pun intended) to proper PKI design, however, lies in a thorough understanding of the requirements and in well-defined standards.

The most interesting segment of Van Oorschot's talk was his detailed discussion of the fundamental components that make up a PKI and the features that should be incorporated into each. A common misunderstanding is that a certification authority (CA) alone can serve as a PKI. Van Oorschot insists that this is not the case—the CA is but one small piece in a large puzzle. In addition to certificate issuance, there need to be mechanisms in place to provide services such as key generation, key updating, key expiry, and even certificate validation.

Building upon his concept of the all-in-one PKI, Van Oorschot proceeded to present an argument for a single security infrastructure, one in which multiple applications, running on a diverse set of operating systems, could operate seamlessly. Such a system, when placed in the appropriate computing environments, could be leveraged across all enterprise applications and would avoid a great deal of the interoperability complexity and duplication of effort that arise from multiple systems.

Van Oorschot argued that a "Managed PKI" involves a complete integration of several components into a single, trusted system. With a PKI sitting at the heart of secure e-transactions, it is important that the infrastructure address cross-application security and also provide the key and certificate-management abilities that make the system easy to understand and operate.

Audience discussion after the presentation addressed an issue related to actual PKI implementations. Various users in the group were interested in Van Oorschot's comments on PKIs and "mapping services" that map between common names or IDs and internal system identifiers. Although the discussion focused on a particular implementation, concerns regarding widespread usage of public-key technologies were expressed. Clearly, as these technologies are deployed, the ability to look up users and their associated keys accurately will be yet another component of this already rich public-key infrastructure. We can only hope that the standards defining these technologies are introduced soon.

WORKS IN PROGRESS

Summaries by Jim Simpson

USENIX Student Benefits
Peter Honeyman, University of Michigan

As USENIX's elected secretary, Peter Honeyman focuses on academic relationships and scholastic support. USENIX has 7,000 members, grows by about 12% a year, and requires about $5 million a year to run. $1 million goes to good works, which comprise support for directed projects, and—most important to Honeyman—direct support for students in the form of student stipends to attend conferences. Another program, the USENIX Scholars program, spends a third of a million dollars a year supporting graduate-student research in distributed systems. It is easy to write proposals to fund students and their research. Check out <https://www.usenix.org/students>.

An Update on AES Selection
Elaine Barker, NIST

DES has been in use since 1977; AES will replace it in the near future. Twenty-one algorithms were originally submitted over a two-year period, and 15 were analyzed. Of those 15, five were chosen as Round 2 candidates. AES3 will take place in April 2000, Round 2 will end in May, and the winners will be announced in summer 2000. Promulgation should happen in 2001. More information can be found at <https://www.nist.gov/aes>.

Distributed Firewalls
Steve Bellovin, AT&T Labs

Today's firewalls do not meet today's needs. However, firewalls are still very necessary; they are single points of control that block harmful protocols and act as shields for buggy implementations. Bellovin has come up with the concept of a distributed firewall, where control is centralized but doesn't rely on topology, so there is no single point of failure. A system manager uses a high-level language to describe endpoints and specify the security policy. A compiler translates the policy into filter rules that are distributed to all servers. Filtering is done at the IPSec layer, so identity matters, not topology. [Editor's note: See Bellovin's article, "Distributed Firewalls," in the special November Security issue of ;login:.]
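
A toy rendering of the compile step, to make the idea concrete; the policy format, group names, and rule shape are all invented here, not Bellovin's language.

    # Central policy, written over identities rather than addresses.
    POLICY = [
        ("payroll-clients", "payroll-server", 443, "ALLOW"),
        ("*",               "payroll-server", "*", "DENY"),
    ]
    GROUPS = {"payroll-clients": {"alice", "bob"},
              "payroll-server": {"payroll"}}

    def compile_rules(host):
        # The filter rules a given endpoint should enforce locally; the
        # peer field would key on an IPSec credential, not an IP address.
        rules = []
        for src, dst, port, action in POLICY:
            if host in GROUPS.get(dst, {dst}):
                rules.append((GROUPS.get(src, {src}), port, action))
        return rules

    # Ship compile_rules("payroll") to the payroll server: allow the
    # payroll clients on port 443, deny everyone else on every port.
    print(compile_rules("payroll"))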

T-Class SOBER Stream Cipher
Greg Rose, QUALCOMM

This talk turned out a bit different from what was originally planned. Greg announced a new stream cipher called SOBER during the last Security Symposium, and since then no fundamental weaknesses have been found. At this WIP Greg had intended to announce an enhanced, more efficient version. However, it turns out that in the planned "new SOBER" the stuttering used to protect the nonlinear function is broken; it throws away the strength of the cipher. The people who know more about the cipher believe the original stuttering and cipher are strong. They plan on using the original stuttering with their new optimizations.

Security Risk Management
Andrew Kotulic, York College of Pennsylvania

There are no theoretically based process models in this field, so this project is developing a program and policy for organizing security risk management. The conceptual model analyzes which factors security practitioners have control over and which they do not. Industry has not been very responsive to this research, but the project is pressing forward by routing around chief security officers and speaking with company executives directly.

PK No I
Peter Honeyman, University of Michigan

Spurred by time constraints, Honeyman amusingly went through his talk backwards. The project is trying to provide access-controlled Web space, but it cannot wait for Kerberos to become integrated into browsers. They are looking at solutions using ApacheSSL, but issuing client certificates is a problem: where do they go? Thankfully, IE uses CAPI, and there is a PKCS#11 plug-in for Netscape to take care of it with that browser. There is currently an issue with IE on the Mac: no CAPI. They modified an MIT solution so it uses one-day-lifetime certificates (junk keys), not useful for anything but authentication. It is working, leveraging their Kerberos infrastructure. Slides from the talk are at <https://www.citi.umich.edu/u/honey/talks/pknoiwip/sld001.htm>.

Intermediate Protocol Enforcement
Craig Metz, University of Virginia

There are a lot of broken systems, and everyone wants a "drop box" that will make problems go away. In theory we could build a box to do that, since protocols are well specified. Metz is trying to do this right and expects to find more problems than anticipated. The code currently works on IPv4; TCP segmentation and reassembly are in progress. In its current form it seems to work as proposed: things that do not follow protocol do not get through.

The History and Future of Computer Attack Taxonomies
Daniel Lough, Virginia Tech

This is an attempt to classify attacks into different categories. Security failures occur in three areas: protocol design, protocol implementation, and configuration by the user. Several different kinds of attacks span these areas. A rise in router attacks is anticipated. Lough is taking the past attack-taxonomy work of Howard and Bishop and looking for similarities with present-day attacks. Wireless networks may constitute an even larger problem in the future.

Trust Management and Network Layer Security
Matt Blaze, AT&T Labs

KeyNote is a systematic way to answer questions about whether dangerous actions in distributed systems should be allowed. This is done by representing and specifying policies, credentials, and relationships among principals. Current work involves an IPSec trust-management architecture, which allows control over which packet filter is installed when a security association is created. KeyNote is freely available. More information can be found at <https://www.cis.upenn.edu/~keynote>.

Sun Enterprise Network Security Service (Bruce)
Alec Muffett, Sun Microsystems UK

One of the shortcomings of vulnerability scanners is that they do not scale to WAN size. Bruce is a system of distributed daemons that are linked into a hierarchy that distributes and executes security-checking code. Information is centrally retrieved, collated, and then viewed by a Web browser. They are working on bug fixes, functionality enhancement, and security refinements. [Editor's note: See Muffett's article, "SENSS Bruce: Developing a Tool to Aid Intranet Security," in the special November Security issue of ;login:.]

Internet Mapping
Bill Cheswick, Bell Labs

For the last year, Bill Cheswick has been collecting Internet maps. He does this by running traceroutes from a machine at Bell Labs to 90,000 destinations and saving the data. Hal Burch at CMU helped with the layout algorithm that generates maps from the data. An interesting use is mapping an intranet to find holes in its perimeter and report back; this program can do so in an hour or so, whereas a high-caliber commercial application takes a month. The code is not available, but they are happy to scan your network. They may turn this project into a consulting tool for Lucent. More information can be found at <https://www.cs.bell-labs.com/who/ches/map>.
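
The collection step is simple in outline. A toy version in Python might look like the following; the target list is a placeholder, and the real system obviously handles 90,000 destinations, retries, and the layout stage separately.

    import re
    import subprocess

    TARGETS = ["192.0.2.1", "198.51.100.7"]   # placeholder destinations

    paths = {}
    for dest in TARGETS:
        out = subprocess.run(["traceroute", "-n", "-w", "1", dest],
                             capture_output=True, text=True).stdout
        hops = []
        for line in out.splitlines()[1:]:          # skip the header line
            m = re.search(r"\d+\.\d+\.\d+\.\d+", line)
            hops.append(m.group(0) if m else "*")  # '*' for silent hops
        paths[dest] = hops

    # Each destination's hop list is one path; the union of the paths
    # forms the edges that the layout algorithm draws as a map.
    print(paths)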

Secure Remote Access to an Internal Webserver (Absent)
Avi Rubin, AT&T Labs

Absent is a solution allowing an absent individual to access a secure site. This works with a Web browser using a one-time password over an SSL connection that encrypts the channel and allows secure access to Web servers behind the firewall by rewriting URLs. It is fully functional, and the paper is available from <https://www.research.att.com/projects/absent>.

Cool Smartcard Hacks
Peter Honeyman, University of Michigan

Some of this was previously published in the Proceedings of the USENIX Workshop on Smartcard Technology last May. Peter went over a Kerberos client involving a smartcard, a file system that integrates with a smartcard, secure booting, Javacard Web servers, and PalmPilot hacks. More information on CITI's work with smartcards is available at <https://www.citi.umich.edu/projects/sinciti/smartcard>.

GSM A5/2 Algorithms Revealed
Nikita Borisov, UC Berkeley

GSM phones use a number of algorithms, A3, A8, and A5—A5 being the voice-encryption algorithm. All were designed in secret except for A5/0, although they are widely deployed (~100M units). A5/2 was reverse-engineered in August and a few hours later was broken. Borisov discussed how the algorithm works and how it was broken. The code to break the algorithm is written and functional, and it runs in record time. At the end of the talk, he made the source code for A5/1 and A5/2 available.

 
