FAST '11


Tutorial Descriptions

  Half-Day Morning Tutorials   (9:00 a.m.–12:30 p.m.)

T1 Storage in Virtual Environments NEW!
Mostafa Khalil, VMware

With the growth of virtualization platforms, the demand for shared storage solutions and storage virtualization has grown significantly. This tutorial will discuss the following:

  • Evolution of Virtualization Platforms: The history begins as early as the 1960s, with the IBM mainframe, and moves through the development of the virtual machine monitor (VMM), the birth of the x86-based virtual platform in the late 1990s (VMware Workstation), and its evolution into an enterprise platform. This opened the field for other implementations such as Microsoft Hyper-V, Citrix XenServer, and Oracle Virtual Iron.
  • Dedicated and Shared Storage in Virtual Environments: Storage is an integral element of any virtual platform, used for virtual disks, configuration files, logs, etc. The storage design morphed from simple local storage into a complex mix of media, protocols, and file systems comprising an enterprise shared storage environment.
  • VMware File System (VMFS): VMware Virtual Infrastructure and, recently, vSphere 4.0 and 4.1 introduced advanced features that require and depend on a high-performing clustered file system, VMFS. We will cover the VMFS versions associated with each maturity stage of the VMware virtual platform.
  • The Birth of Storage Virtual Appliances: The need for making shared storage available to small and medium-sized businesses resulted in the creation of storage virtual appliances that turn inexpensive unused space on local storage into shared storage with performance that meets the needs of that market segment. These appliances are provided in the Open Virtualization Format (OVF) standard, which makes them usable on any virtual platform that complies with it. We will explain the OVF standard, the architecture and uses of these virtual appliances, and the I/O requirements for which they are suitable.
  • The Role of Storage in Business Continuity/Disaster Recovery: We will share some high-level information about the role storage plays in the ever-crucial BC/DR design and the requirements for its use in the vSphere 4.x virtual environment.

Mostafa Khalil is a senior staff engineer with VMware Global Support Services. He has worked for VMware for over 12 years and has supported all VMware virtualization products since Workstation for Linux 1.0 beta. His focus has been on storage integration in the virtual environment. He has a Bachelor's degree in medicine from Cairo, Egypt, as well as numerous professional certifications and training: VCDX, VCP, VCAP4 DCA, MCSE, Master CNE, HP ASE, IBM CSE, and Lotus CLP. He has worked with most major storage vendors' solutions and received engineering-level training from most of the enterprise storage vendors. Mostafa has presented at all VMworld events since its inception, as well as at VMware Partner Exchange (formerly Technical Solution Exchange) and in the VMware User Group. He is currently writing a comprehensive illustrated book on storage design and integration into vSphere 4.x Virtual Environment.

T2 Clustered and Parallel Storage System Technologies NEW!
Brent Welch, Panasas

Cluster-based parallel storage technologies are now capable of delivering performance scaling from 10s to 100s of GB/sec. This tutorial will examine current state-of-the-art high-performance file systems and the underlying technologies employed to deliver scalable performance across a range of scientific and industrial applications.

The tutorial has two main sections. The first section will describe the architecture of clustered, parallel storage systems, including the Parallel NFS (pNFS) and Object Storage Device (OSD) standards. We will compare several open-source and commercial parallel file systems, including Panasas, Lustre, GPFS, and PVFS2. We will also discuss the impact of solid-state disk technology on large-scale storage systems. The second half of the tutorial will cover performance, including what benchmarking tools are available, how to use them to evaluate a storage system correctly, and how to optimize application I/O patterns to exploit the strengths and work around the weaknesses of clustered, parallel storage systems.
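The core scaling idea behind these systems is striping: a file's blocks are spread round-robin across many storage nodes, so a large sequential read draws bandwidth from all of them at once. A minimal sketch of that layout mapping (hypothetical code, not Panasas, Lustre, or pNFS internals; the 64 KiB stripe unit is an illustrative choice):

```python
# Hypothetical sketch of round-robin striping, the layout idea used by
# clustered, parallel file systems. Not taken from any real implementation.
STRIPE_UNIT = 64 * 1024  # 64 KiB stripe unit (illustrative value)

def block_location(offset, num_nodes, stripe_unit=STRIPE_UNIT):
    """Map a file byte offset to (node_index, byte_offset_on_that_node)."""
    stripe_index = offset // stripe_unit       # which stripe unit overall
    node = stripe_index % num_nodes            # round-robin over nodes
    local_stripe = stripe_index // num_nodes   # units already on this node
    return node, local_stripe * stripe_unit + offset % stripe_unit

# A client reading sequentially touches every node in turn, so aggregate
# bandwidth scales with the node count until the network or the client
# itself becomes the bottleneck.
for off in (0, 64 * 1024, 128 * 1024, 256 * 1024):
    print(off, block_location(off, num_nodes=4))
```

Understanding this mapping is also what makes application I/O tuning possible: aligning requests to stripe-unit boundaries avoids splitting one request across two nodes.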

Brent Welch is Director of Software Architecture at Panasas. Panasas has developed a scalable, high-performance, object-based distributed file system that is used in a variety of HPC environments, including many of the Top500 supercomputers. He has previously worked at Xerox PARC and Sun Microsystems laboratories. Brent has experience building software systems from the device-driver level up through network servers, user applications, and graphical user interfaces. While getting his PhD at UC Berkeley, he designed and built the Sprite distributed file system. Brent participates in the IETF NFSv4 working group and is co-author of the pNFS Internet drafts that specify parallel I/O extensions for NFSv4.1.

  Half-Day Afternoon Tutorials (1:30 p.m.–5:00 p.m.)

T3 Cloud Storage Systems NEW!
Benjamin Reed, Yahoo! Research; Prasenjit Sarkar, IBM Research

Cloud computing has given architects new ways of using distributed systems. At the same time, the scale and elastic nature of the cloud have caused us to rethink how we design and use these systems. This tutorial explores the storage aspect of cloud computing to show how cloud has changed the ways we look at and use storage. It is intended for architects and researchers interested in using cloud storage or pursuing research in this area.

Although cloud storage solutions provide the same storage functionality as classical distributed systems at a high level, there are important differences that greatly influence both the design of the applications that use them and the implementation of the storage itself. We will compare the storage APIs of distributed file systems with cloud storage APIs such as S3, EBS, and HDFS. We will also compare the structured data access APIs of the cloud, such as NoSQL stores and distributed databases.
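One difference worth previewing: object stores like S3 expose whole-object put/get on (bucket, key) pairs, while a POSIX file system offers byte-addressable open/seek/write. A toy in-memory model (a hypothetical class, not the real S3 SDK) makes the contrast concrete:

```python
# Illustrative contrast only: a hypothetical in-memory model of an
# S3-style object API. Objects are written and read whole; there is no
# seek or partial in-place update, unlike a POSIX file.
class ObjectStore:
    def __init__(self):
        self._buckets = {}

    def put_object(self, bucket, key, data: bytes):
        # The whole object is replaced atomically on every put.
        self._buckets.setdefault(bucket, {})[key] = data

    def get_object(self, bucket, key) -> bytes:
        return self._buckets[bucket][key]

store = ObjectStore()
store.put_object("logs", "2011/02/15/app.log", b"hello")
print(store.get_object("logs", "2011/02/15/app.log"))  # prints b'hello'
```

That whole-object, immutable-write model is a large part of why applications written against cloud storage look different from applications written against a distributed file system.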

Cloud storage and storage solutions from high performance computing (HPC) have similar scale and performance requirements. We will examine the differences and commonalities of the deployment environment and the storage requirements of cloud storage, HPC storage, and SAN-based storage in general.

Finally, we will do a survey of current research topics in cloud storage, including RAID strategies, green computing, failure handling, and processing models. We will also point out some disruptive technologies on the horizon, including solid state storage, new network technologies, system balance, and new storage models.
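As a taste of the RAID-strategies topic, the idea underlying single-parity schemes is simply XOR: any one lost block can be rebuilt from the survivors. A hedged sketch (not code from any storage product):

```python
# Sketch of RAID-5-style single parity: XOR all data blocks to form the
# parity block; XOR the survivors with the parity to rebuild a lost block.
from functools import reduce

def parity(blocks):
    """XOR same-length blocks together byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(data)

# Lose data[1]; rebuild it from the remaining blocks plus parity.
rebuilt = parity([data[0], data[2], p])
print(rebuilt)  # prints b'BBBB'
```

Much of the current research the survey covers (wider erasure codes, faster rebuilds, flash-aware redundancy) generalizes exactly this trade-off between redundancy overhead and recoverability.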

Benjamin Reed is a Research Scientist at Yahoo! Research, where he works on some of the largest distributed systems. His work on workflow languages (Pig) and tools for building distributed systems (ZooKeeper, BookKeeper, and Hedwig) has been open sourced and used both inside and outside Yahoo! He has two decades of industry experience, ranging from developing shipping and receiving applications in OS/2, AIX, and CICS (Sears), to operations (Motorola), to system administration research and Java frameworks (he is an OSGi Fellow) at IBM Almaden Research (11 years), and, finally, large-scale systems at Yahoo! Research (4 years). He finished his PhD work in distributed storage systems at the University of California, Santa Cruz, in 2000. His main interests now are large-scale processing environments and highly available and scalable systems.

Prasenjit Sarkar is a computer science researcher and Master Inventor with IBM Research who focuses on the storage cloud, autonomic data storage resource management, and storage networking protocols. Dr. Sarkar received his Bachelor's degree in computer science and engineering from the Indian Institute of Technology, Kharagpur, in 1992 and his Master's and PhD in computer science from the University of Arizona in 1994 and 1998, respectively. After graduation, he joined the research staff at IBM Almaden.

T4 System Design Impacts of Storage Technology Trends NEW!
Steven R. Hetzler, IBM Almaden Research Center

Designing storage systems without a solid understanding of the future behavior of the underlying storage technologies can be problematic. For example, there has been much excitement recently surrounding the effort to bring solid state storage into the IT storage hierarchy. NAND flash has largely displaced hard disk drives (HDD) in mobile consumer applications, effectively eliminating the use of HDDs in sub-1.8" form factors. Based on these events, it is commonly assumed that solid state storage is poised to make significant inroads in the IT storage space. However, this revolution has been slow in coming. In this tutorial we will learn why. We will also examine how system reliability is impacted by technology shifts.

This tutorial will introduce tools for identifying the market potential for storage technologies, which leads to an understanding of how to exploit them in the design of storage systems. We will examine the economic foundations of storage technologies, including an analysis of the capital costs required to produce storage. We will demonstrate how to forecast the evolution of hard disk and flash storage in the IT space. The primary focus will be on solid state storage in IT systems, but broader application will be shown as well.
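The flavor of such a forecast can be sketched with a back-of-envelope price-parity projection. All numbers below are invented for illustration, not figures from the tutorial:

```python
# Back-of-envelope sketch with invented numbers (NOT the tutorial's data):
# given a $/GB for each technology and an annual price decline rate,
# project how many years until flash reaches price parity with disk.
def years_to_parity(flash_cost, hdd_cost, flash_decline, hdd_decline):
    """Return years until flash $/GB falls to HDD $/GB, or None if never."""
    years = 0
    while flash_cost > hdd_cost:
        flash_cost *= (1 - flash_decline)  # flash price after one year
        hdd_cost *= (1 - hdd_decline)      # disk price after one year
        years += 1
        if years > 100:
            return None  # declines too similar; parity effectively never
    return years

# Purely illustrative inputs: flash at $2/GB declining 40%/yr,
# HDD at $0.10/GB declining 30%/yr.
print(years_to_parity(2.00, 0.10, 0.40, 0.30))  # prints 20
```

The real analysis in the tutorial is more careful (it considers capital costs of production, not just street prices), but the structure of the argument, compounding cost curves compared over time, is the same.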

We will examine how the changes in storage technologies affect system reliability. For example, the behavior trends of flash and hard disk technologies result in different requirements for system designs. We will discuss how to determine which parameters affect reliability and how to model their impact. Finally, we will look at the future density growth prospects for various storage technologies.
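A standard first-order reliability model (a textbook approximation, not the tutorial's own method) shows how one such parameter, rebuild time, feeds into system reliability: for a two-disk mirror with independent exponential failures, mean time to data loss is roughly MTTF^2 / (2 * MTTR).

```python
# First-order MTTDL model for a two-disk mirror, assuming independent
# exponential failures: data is lost when the second disk fails during
# the repair window of the first.
def mttdl_mirror(mttf_hours, mttr_hours):
    return mttf_hours ** 2 / (2 * mttr_hours)

# Illustrative numbers: 1,000,000-hour MTTF drives, 24-hour rebuild.
# Halving MTTR doubles MTTDL, which is one concrete way a technology
# shift (e.g., faster rebuild from flash) changes system reliability.
print(mttdl_mirror(1_000_000, 24))  # hours
print(mttdl_mirror(1_000_000, 6))   # 4x faster rebuild -> 4x MTTDL
```

Real drives do not fail exponentially or independently, which is exactly why the tutorial digs into which parameters actually matter and how to model them.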

Steven R. Hetzler is an IBM Fellow at IBM's Almaden Research Center (San Jose, CA), where he manages the Storage Architecture Research Group. He was educated at the California Institute of Technology, where he received his PhD and Master's degree in applied physics in 1986 and 1982, respectively. He joined IBM Research in November 1985 and was named an IBM Fellow in 1998. He is currently focusing on novel architectures for storage systems and on applications for non-volatile storage. Most recently, he developed Chasm Analysis, a methodology for analyzing market potential for storage technologies using economic data. It has proven successful in forecasting the limits of solid state storage adoption. Previously, he initiated work on the IP storage protocol that is now known as iSCSI, which he later named. Steven has worked in a number of different fields, including data storage systems and architecture, error correction coding, power management, hard disk storage, and solid state physics.


Last changed: 9 Dec. 2010 jp