USENIX WIOV '11: Workshop on I/O Virtualization

SESSION ABSTRACTS

Tuesday, June 14, 2011
9:00 a.m.–10:30 a.m.

SplitX: Split Guest/Hypervisor Execution on Multi-Core
Current virtualization solutions often bear an unacceptable performance cost, limiting their use in many situations, and in particular when running I/O intensive workloads. We argue that this overhead is inherent in Popek and Goldberg's trap-and-emulate model for machine virtualization, and propose an alternative virtualization model for multi-core systems, where unmodified guests and hypervisors run on dedicated CPU cores. We propose hardware extensions to facilitate the realization of this split execution (SplitX) model and provide a limited approximation on current hardware. We demonstrate the feasibility and potential of a SplitX hypervisor running I/O intensive workloads with zero overhead.
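To make the split-execution idea concrete in a toy form (this is an editorial sketch, not the authors' code; names such as ExitRequest and the queue standing in for a shared-memory ring are our own simplifications), the following Python fragment models a guest core that posts exit requests to a dedicated hypervisor core instead of trapping on its own core:

    import threading
    import queue

    class ExitRequest:
        def __init__(self, reason, payload):
            self.reason = reason            # e.g. "pio_write", "mmio_read"
            self.payload = payload          # arguments the hypervisor needs to emulate
            self.done = threading.Event()   # completion signal back to the guest

    def guest_core(ring):
        # Guest side: instead of a synchronous trap-and-emulate exit, post the
        # request to the hypervisor core and keep running; wait only when the
        # result is actually needed.
        req = ExitRequest("pio_write", {"port": 0x3F8, "value": 0x41})
        ring.put(req)
        # ... guest continues executing unrelated work here ...
        req.done.wait()
        print("guest: exit handled on the remote core")

    def hypervisor_core(ring):
        # Hypervisor side: a dedicated core polls the ring and emulates, so no
        # context switch ever happens on the guest's core.
        req = ring.get()
        print(f"hypervisor: emulating {req.reason} {req.payload}")
        req.done.set()

    ring = queue.Queue()                    # stands in for a shared-memory ring
    threads = [threading.Thread(target=guest_core, args=(ring,)),
               threading.Thread(target=hypervisor_core, args=(ring,))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

The point of the model is that the guest core never context-switches into the hypervisor; it synchronizes only when it actually needs the result of the emulated operation, which is where the proposed hardware extensions for cheap cross-core notification would come in.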

Flash Memory Performance on a Highly Scalable IOV System
We present an enterprise-class I/O virtualization (IOV) system, discuss its architecture, and share performance characterization results using extremely high-performance flash memory as a load generator. The system is built on the PCI-Express over Ethernet (PCIeOE) protocol, which combines these two ubiquitous, standardized technologies in a novel fashion. By preserving the PCI-Express (PCIe) device and software model, computers interface with the system without modifications to hardware or software. By utilizing 10G Ethernet as a transport, the system integrates with enterprise environments, achieves very high scalability, and benefits from the favorable economics of the Ethernet ecosystem. Further, we present a thorough characterization of the latency and performance of the system using very high-performance flash memory as an endpoint device. Flash memory serves as a high-intensity traffic generator and also represents a compelling application of the PCIeOE technology.
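As a rough illustration of what PCI-Express over Ethernet implies at the framing level (our own sketch; the EtherType value, which is an IEEE value reserved for local experiments, and the simplified TLP layout are assumptions, not the protocol's actual encoding), a PCIe transaction-layer packet can be carried as the payload of an ordinary Ethernet frame:

    import struct

    def ethernet_frame(dst_mac: bytes, src_mac: bytes, ethertype: int,
                       payload: bytes) -> bytes:
        # 14-byte Ethernet header followed by the payload, padded to the
        # 60-byte minimum frame size (before FCS) if needed.
        header = dst_mac + src_mac + struct.pack("!H", ethertype)
        return (header + payload).ljust(60, b"\x00")

    def memory_write_tlp(address: int, data: bytes) -> bytes:
        # Very simplified 32-bit memory-write TLP: a 3-DW header plus data.
        # Real TLPs carry more fields (requester ID, tag, byte enables, ...).
        length_dw = (len(data) + 3) // 4
        dw0 = (0x2 << 29) | (0x00 << 24) | length_dw   # fmt/type + length
        dw1 = 0x0000_00FF                              # requester ID / byte enables
        dw2 = address & 0xFFFF_FFFC                    # DW-aligned address
        return struct.pack("!III", dw0, dw1, dw2) + data

    tlp = memory_write_tlp(0x1000_0000, b"\xde\xad\xbe\xef")
    frame = ethernet_frame(b"\x02\x00\x00\x00\x00\x01",
                           b"\x02\x00\x00\x00\x00\x02",
                           0x88B5, tlp)
    print(frame.hex())

Because the encapsulation happens below the PCIe device and software model, the host's drivers continue to see an ordinary PCIe endpoint while the frames themselves ride the 10G Ethernet fabric.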

VAMOS: Virtualization Aware Middleware
Machine virtualization is undoubtedly useful, but does not come cheap. The performance cost of virtualization, for I/O intensive workloads in particular, can be heavy. Common approaches to reducing I/O virtualization overhead focus on the I/O stack alone, thereby missing optimization opportunities in the rest of the software stack. We propose VAMOS, a novel software architecture for middleware, which runs middleware modules at the hypervisor level. VAMOS reduces I/O virtualization overhead by cutting down on the overall number of guest/hypervisor switches for I/O intensive workloads. Middleware code can be adapted to VAMOS at only a modest cost, by exploiting existing modular design and abstraction layers. Applying VAMOS to a database workload improved its performance by up to 32%.
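The switch-count argument can be made concrete with a toy accounting model (ours, not the paper's): if a guest-side middleware module issues several device I/Os per request, every one of them crosses the guest/hypervisor boundary, whereas a hypervisor-side module crosses it only once per middleware-level call.

    class Boundary:
        def __init__(self):
            self.switches = 0
        def cross(self):
            self.switches += 1

    def storage_module_io(ios_per_request: int, boundary=None):
        # The middleware storage module issues several device I/Os per request.
        for _ in range(ios_per_request):
            if boundary is not None:
                boundary.cross()    # guest-side module: every I/O exits to the hypervisor

    def guest_side(requests, ios_per_request):
        b = Boundary()
        for _ in range(requests):
            storage_module_io(ios_per_request, boundary=b)
        return b.switches

    def vamos_side(requests, ios_per_request):
        b = Boundary()
        for _ in range(requests):
            b.cross()                              # one middleware-level call crosses
            storage_module_io(ios_per_request)     # its I/O stays below the boundary
        return b.switches

    print("conventional:", guest_side(1000, 8))    # 8000 switches
    print("VAMOS-style: ", vamos_side(1000, 8))    # 1000 switches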

1:30 p.m.–3:00 p.m.

Revisiting the Storage Stack in Virtualized NAS Environments
Cloud architectures are moving away from a traditional data center design with SAN- and NAS-attached storage to a more flexible solution based on virtual machines with NAS-attached storage. While VM storage based on NAS is ideal for meeting the high-scale, low-cost, and manageability requirements of the cloud, it significantly alters the I/O profile for which NAS storage is designed. In this paper, we explore the storage stack in a virtualized NAS environment and highlight the corresponding performance implications.
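To make the altered I/O profile concrete (a simplified editorial sketch; the paths and the identity block mapping are illustrative, not taken from the paper), consider how a single guest file write travels down the nested stack when the virtual disk is an image file on a NAS server:

    BLOCK_SIZE = 4096

    def guest_fs_write(file_block: int) -> int:
        # The guest filesystem maps a file block to a virtual-disk block
        # (identity mapping here for simplicity).
        return file_block

    def virtual_disk_to_image_offset(disk_block: int, image_base: int = 0) -> int:
        # The hypervisor maps the virtual-disk block to a byte offset
        # inside the disk-image file stored on the NAS.
        return image_base + disk_block * BLOCK_SIZE

    def nfs_write(image_path: str, offset: int, length: int) -> str:
        # The host NFS client turns the image-file write into an NFS WRITE RPC.
        return f"NFS WRITE {image_path} offset={offset} len={length}"

    disk_block = guest_fs_write(file_block=42)
    offset = virtual_disk_to_image_offset(disk_block)
    print(nfs_write("/export/vms/guest1.img", offset, BLOCK_SIZE))

In this setup the NAS server sees byte-range writes inside one large image file rather than the per-file access pattern it was originally designed around, which is the shift in I/O profile the paper examines.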

Nested QoS: Providing Flexible Performance in Shared IO Environment
The increasing popularity of storage and server consolidation introduces new challenges for resource management. In this paper we propose a Nested QoS service model that offers multiple response-time guarantees for a workload based on its burstiness. The client workload is filtered into classes based on the Service Level Objective (SLO) and scheduled so that requests in each class receive a stipulated response-time guarantee. The Nested QoS model provides an intuitive, enforceable, and verifiable SLO between provider and client. The server capacity required by the nested model is significantly lower than that required by a traditional SLO, while performance is only marginally affected.
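One way to picture the class filtering (a minimal sketch under our own assumptions; the token-bucket mechanism and the rate, burst, and latency parameters are illustrative rather than the paper's algorithm) is a chain of per-class token buckets: a request is tagged with the innermost class that still admits it, and spillover from a burst lands in classes with looser response-time guarantees.

    from collections import Counter

    class TokenBucket:
        def __init__(self, rate, burst):
            self.rate, self.burst = rate, burst
            self.tokens, self.last = float(burst), 0.0
        def admit(self, now):
            # Refill proportionally to elapsed time, then spend one token if possible.
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1.0:
                self.tokens -= 1.0
                return True
            return False

    # The innermost class has the tightest traffic envelope and latency target;
    # outer classes are progressively looser (values are illustrative).
    classes = [("class 1 (5 ms)", TokenBucket(rate=100, burst=10)),
               ("class 2 (50 ms)", TokenBucket(rate=200, burst=40)),
               ("class 3 (best effort)", TokenBucket(rate=1e9, burst=1e9))]

    def classify(arrival_time):
        for name, bucket in classes:
            if bucket.admit(arrival_time):
                return name
        return "rejected"

    # A burst of 30 requests at t=0 followed by a steady trickle: the burst
    # spills into the looser class, while steady traffic stays in the tight one.
    arrivals = [0.0] * 30 + [0.5 + 0.01 * i for i in range(20)]
    print(Counter(classify(t) for t in arrivals))

Provisioning the server for the innermost class alone, with outer classes absorbing bursts at looser targets, is the intuition behind the capacity reduction the abstract reports.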

Gatekeeper: Supporting Bandwidth Guarantees for Multi-tenant Datacenter Networks
Cloud environments should provide network performance isolation for co-located untrusted tenants in a virtualized datacenter. We identify key properties that a performance isolation solution should satisfy and describe our progress on Gatekeeper, a system designed to meet these requirements. Experiments with our Xen-based implementation of Gatekeeper in a datacenter cluster demonstrate effective and flexible control of ingress/egress link bandwidth for tenant virtual machines under both TCP and greedy, unresponsive UDP traffic.
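A simplified allocation model (our sketch; Gatekeeper's actual mechanism and parameters are not reproduced here) shows the kind of behavior such a system aims for: every tenant is protected up to its bandwidth guarantee, and leftover link capacity is handed to backlogged tenants so the link stays work-conserving even under a greedy UDP sender.

    def allocate(link_capacity, demands, guarantees):
        # First satisfy each tenant up to min(demand, guarantee).
        alloc = {t: min(demands[t], guarantees[t]) for t in demands}
        spare = link_capacity - sum(alloc.values())
        # Then hand out spare capacity in proportion to remaining demand.
        while spare > 1e-9:
            hungry = {t: demands[t] - alloc[t] for t in demands if demands[t] > alloc[t]}
            if not hungry:
                break
            share = spare / len(hungry)
            for t, gap in hungry.items():
                grant = min(gap, share)
                alloc[t] += grant
                spare -= grant
        return alloc

    # A 10 Gb/s link; a greedy UDP tenant demands far more than its guarantee.
    demands    = {"tenantA_tcp": 3.0, "tenantB_udp": 20.0, "tenantC_idle": 0.5}
    guarantees = {"tenantA_tcp": 4.0, "tenantB_udp": 4.0,  "tenantC_idle": 2.0}
    print(allocate(10.0, demands, guarantees))

In this example the greedy tenant receives its guarantee plus the 2.5 Gb/s nobody else wants, while the TCP and idle tenants keep everything they ask for; the greedy sender can no longer push other tenants below their guarantees.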
