Integrity is not a binary property; rather, it is a relative property that depends on the verifier's view of the ability of a program to protect itself. Biba defines integrity to be compromised when a program depends on (i.e., reads or executes) low integrity data [3]. In practice, many programs process low integrity data without being compromised (though not all programs, all the time), so this definition is too restrictive. Clark and Wilson define a model in which integrity verification procedures verify integrity at system startup, and high integrity data is modified only by transformation procedures that are certified to maintain integrity even when their inputs include low integrity data [4]. Unfortunately, such certification of applications is too expensive to be practical.
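Biba's rule above can be stated compactly: a subject may depend on (read or execute) an object only if the object's integrity level is at least as high as the subject's. The following is a minimal sketch of that check; the level names and function are hypothetical illustrations, not part of any cited system.

```python
# Hypothetical sketch of Biba's strict integrity rule ("no read down").
from enum import IntEnum

class Level(IntEnum):
    LOW = 0
    HIGH = 1

def may_depend_on(subject_level: Level, object_level: Level) -> bool:
    """A subject may read/execute an object only if the object's
    integrity level is at least the subject's level."""
    return object_level >= subject_level

# A high integrity program reading low integrity data is a compromise:
assert may_depend_on(Level.HIGH, Level.LOW) is False
# Reading data at the same or a higher level is permitted:
assert may_depend_on(Level.HIGH, Level.HIGH) is True
assert may_depend_on(Level.LOW, Level.HIGH) is True
```

The objection in the text is precisely that real programs routinely violate this rule (e.g., a web server reading untrusted requests) without necessarily being compromised.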
More recent efforts focus on measuring code and associating integrity semantics with the code. The IBM 4758 explicitly defines that the integrity of a program is determined by the code of the program and its ancestors [5]. This assumption holds in practice because the program and its configuration are installed in a trusted manner, the program is isolated from files that other programs can modify, and it is assumed capable of handling low integrity requests from the external system. To make this guarantee plausible, the IBM 4758 environment is restricted to a single program with a well-defined input state, and integrity is enforced with secure boot. However, even these assumptions have not been sufficient to prevent compromise of applications running on the 4758 that cannot handle low integrity inputs properly [6]. Thus, further measurement of low integrity inputs and their impact appears to be necessary.
The key differences in this paper are that: (1) we endeavor to define practical integrity for a flexible, traditional systems environment under the control of a potentially untrusted party and (2) the only special hardware that we leverage is the root of trust provided by the Trusted Computing Group's Trusted Platform Module (TCG/TPM). In the first case, we cannot assume that a program is loaded correctly simply by examining the hash of its code, because the untrusted party may change the input data that the program uses. For example, many programs allow a configuration file to be specified on the command line. Ultimately, applications define the semantics of the inputs that they use, so it is difficult for an operating system to detect whether an application has used all of its inputs in an appropriate manner when its environment is controlled by an untrusted party. However, a number of vulnerabilities can be found by the operating system alone, and it is fundamental that the operating system collect and protect measurements.
Second, the specialized hardware environment of the IBM 4758 enables secure boot and memory lockdown, but such features are either not available or not practical for current PC systems. Secure boot is not practical because integrity requirements are not fixed, but are defined by the remote challengers. If remote parties could determine the secure boot properties of a system, systems would be vulnerable to a significant denial-of-service threat. Instead, the TCG/TPM supports trusted boot, where the attesting system is measured and the measurements are used by the challengers to verify their own integrity requirements. Since trusted boot does not terminate the boot when a low integrity process is loaded, all data may be subject to attack during an ``untrusted'' boot. Since multiple applications can run concurrently in a discretionary access control environment, it is difficult to determine whether the dynamic data of a system (e.g., a database) is still acceptable. Discretionary integrity mechanisms, such as sealed storage [7], do not solve this problem in general.
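The trusted boot mechanism described above can be sketched as follows: each loaded component is hashed, the hash is appended to a measurement list, and the hash is also folded into a TPM Platform Configuration Register via the extend operation, PCR_new = SHA1(PCR_old || measurement). A challenger later replays the measurement list and compares the result against the attested PCR value. This is a simplified illustration, assuming TPM 1.2's SHA-1-based PCRs; the component names are hypothetical.

```python
# Simplified sketch of TCG trusted boot measurement (TPM 1.2 style,
# SHA-1 PCRs). This models the extend operation only, not a real TPM.
import hashlib

PCR_SIZE = 20  # SHA-1 digest size; TPM 1.2 PCRs start at all zeros

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """Fold a new measurement into the PCR: SHA1(PCR_old || measurement)."""
    return hashlib.sha1(pcr + measurement).digest()

def measure(component: bytes) -> bytes:
    """A component's measurement is the hash of its code."""
    return hashlib.sha1(component).digest()

# Trusted boot: measure and extend every component, but never halt
# the boot when a low integrity component is loaded.
pcr = bytes(PCR_SIZE)
measurement_list = []
for component in (b"bootloader", b"kernel", b"init"):
    m = measure(component)
    measurement_list.append(m)
    pcr = extend(pcr, m)

# A challenger verifies by replaying the measurement list and
# checking the result against the PCR value reported by the TPM.
replayed = bytes(PCR_SIZE)
for m in measurement_list:
    replayed = extend(replayed, m)
assert replayed == pcr
```

Because the PCR can only be extended, never reset by software, tampering with the measurement list after the fact is detectable: the replayed value no longer matches the hardware-protected PCR. The decision of whether the measured components are acceptable remains with the challenger.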