CRISIS is the security subsystem for WebOS [Vahdat et al. 1997]. The goal
of WebOS is to provide wide area applications with operating system
primitives that are currently available only on a single machine or a
local area network. These abstractions include authentication,
authorization, a global file system, naming, resource allocation,
and an architecture for remote process execution. To date, wide area
network applications have been forced to re-implement these services on
a case-by-case basis. WebOS aims to ease and support network application
development by providing a substrate of common OS services.
The focus of this paper is the architecture of the WebOS security
subsystem, which cuts across all other aspects of the system. Below,
we briefly describe a number of scenarios we have used to drive the
CRISIS design:
- SchoolNet: One motivating example is providing Internet
services such as email, Web page hosting, and chat rooms to very
large numbers of school children. One desirable feature of such a
system is allowing geographically distributed children to
interact with one another while keeping both the interactions and the
identities of those involved private. Further, to be useful to school
children, the security system must work with only limited direction
from end users (e.g., a fifth grader cannot be trusted to correctly set
up access control lists).
- Wide Area Collaboration: Users in separate administrative
domains should be able to collaborate on a common project. For
example, a project's source code repository should be globally
accessible to authorized principals for check in/check out; in
addition, unique hardware (such as supercomputers) should be
seamlessly accessible independent of geographic location.
- Geographically Distributed Internet Services: If it were easy to
geographically replicate and migrate Internet services, end users
would see improved availability, reduced network congestion, and better
performance. Today, only the most popular sites can afford to be
geographically distributed; for example, Alta Vista [Dig 1995]
has mirror sites on every major continent, but these mirrors are
physically administered by DEC, manually kept up to date, and visible
to the end user. One of our goals is to make such distribution
transparent, so that third-party system administrators can offer
computational resources strategically located on the Internet for rent
to content providers; in the limit, content providers could become
completely virtual, with the degree and location of replicas
scaled dynamically based on access patterns.
- Mobile Login: Users should be able to log in and access
resources from any machine that they trust. Secure login requires
mutual authentication; thus, users will only log into machines
certified to have been booted properly by a trusted system
administrator. Likewise, local system administrators enforce which
users are allowed login access (e.g., logins to Berkeley by Stanford
users would be disallowed outright). Finally, users should be allowed
to adopt restricted roles reflecting the amount of trust they place
in the machine being logged into.
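The mutual authentication required above can be illustrated with a small sketch. This example assumes a shared secret established out of band and uses HMAC challenge-response; it is an illustration of the mutual-proof idea only, not the actual CRISIS mechanism, and all names here (`respond`, `mutual_auth`) are hypothetical.

```python
# Sketch: mutual authentication via challenge-response over a shared
# secret. Assumption: both parties hold SHARED_KEY, obtained out of band.
import hashlib
import hmac
import secrets

SHARED_KEY = secrets.token_bytes(32)  # hypothetical pre-shared secret

def respond(key: bytes, challenge: bytes) -> bytes:
    """Prove knowledge of the key by MACing the peer's challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def mutual_auth(user_key: bytes, machine_key: bytes) -> bool:
    """Each side challenges the other; login proceeds only if both verify."""
    user_nonce = secrets.token_bytes(16)     # user challenges the machine
    machine_nonce = secrets.token_bytes(16)  # machine challenges the user

    machine_proof = respond(machine_key, user_nonce)
    user_proof = respond(user_key, machine_nonce)

    machine_ok = hmac.compare_digest(machine_proof,
                                     respond(user_key, user_nonce))
    user_ok = hmac.compare_digest(user_proof,
                                  respond(machine_key, machine_nonce))
    return machine_ok and user_ok

# Authentication succeeds only when both sides hold the same key:
assert mutual_auth(SHARED_KEY, SHARED_KEY)
assert not mutual_auth(SHARED_KEY, secrets.token_bytes(32))
```

Because each party must answer the other's fresh nonce, neither a rogue machine nor an impostor user can pass by replaying an old response.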
- Encrypted Intermediate Caches: To improve application
performance, untrusted third-party servers may be used to cache
encrypted private data. A special key would be created to encrypt the
data, rather than using the key of a particular principal; this key
would then be distributed only to authorized users. One path to
implementing such an application would be the use of Active
Networks [Tennenhouse & Wetherall 1996], where intelligent routers
could perform the caching.
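The special-key scheme above can be sketched as follows: a fresh per-object key encrypts the data before it reaches the untrusted cache, and only that key, never the owner's long-term key, is handed to authorized readers. The toy cipher below (a SHA-256 counter-mode keystream) is for illustration only and is not production cryptography.

```python
# Sketch: encrypting cached data under a per-object "content key".
# The keystream cipher here is a toy, chosen only to keep the example
# self-contained; a real system would use a vetted symmetric cipher.
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a key-derived keystream (symmetric: also decrypts)."""
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ p for b, p in zip(chunk, pad))
    return bytes(out)

content_key = secrets.token_bytes(32)  # per-object key, not any user's key
plaintext = b"private report"
cached_blob = keystream_xor(content_key, plaintext)  # what the cache stores

# The untrusted cache holds only ciphertext; authorized users who are
# given content_key recover the plaintext locally:
assert cached_blob != plaintext
assert keystream_xor(content_key, cached_blob) == plaintext
```

Decoupling the encryption key from any particular principal is what lets the owner revoke or grant access by controlling key distribution alone, without re-encrypting for each new reader.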
- Large Scale Remote Execution: Principals should be able to
exploit global resources to run large scale computations. For
example, NASA is placing petabytes of satellite image data on-line for
use by earth scientists in predicting global warming. It is
impractical to access this information using the current Internet
``pull'' model; scientists need to be able to run filters remotely at
the data storage site to determine which data is useful for download.
These filters should have access to necessary input (e.g., the filter
executables) and output files (e.g., files into which the results are
to be stored) on the scientist's machine, but to no other potentially
sensitive data. Further, the remote computation environment should be
protected from any bugs in the filters written by the scientists.
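The confinement described in the last scenario can be sketched as a capability-style accessor: the filter is handed an object naming exactly its input and output files, rather than raw filesystem access, and any crash in the filter is contained to the job. The names below (`AllowedFiles`, `run_filter`) are illustrative, not part of CRISIS.

```python
# Sketch: confining an untrusted filter to an explicit file allowlist.
# Assumption: the filter only touches files through the accessor it is
# given; a real system would enforce this at the OS or sandbox level.
import os

class AllowedFiles:
    """Grants access only to paths named when the job was submitted."""
    def __init__(self, allowed):
        self._allowed = {os.path.abspath(p) for p in allowed}

    def open(self, path, mode="r"):
        if os.path.abspath(path) not in self._allowed:
            raise PermissionError(f"filter may not access {path}")
        return open(path, mode)

def run_filter(filter_fn, files):
    """Run an untrusted filter; its bugs must not take down the host."""
    try:
        return filter_fn(files)
    except PermissionError:
        raise  # confinement violations surface to the job owner
    except Exception as exc:  # any other filter bug is contained
        return f"filter failed: {exc}"
```

A scientist's filter thus reads its executables and writes its result files through `files.open`, while an attempt to touch any other path, or an internal crash, affects only that job.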
Amin Vahdat
12/10/1997