ACTIVITIES
HECURA File Systems and
I/O Status Reports
Tuesday, February 13, 7:00 p.m.–9:30 p.m.
Wednesday, February 14, 5:00 p.m.–6:55 p.m.
Thursday, February 15, 7:30 p.m.–9:30 p.m.
Tuesday, February 13
7:00 p.m.–7:25 p.m.
Introduction of HECURA
Gary Grider, Los Alamos National Lab
The High End Computing Interagency Working Group (HECIWG) on File Systems, I/O, and Storage will be introduced,
along with the progress this working group has made toward increasing and better coordinating
government-funded R&D in the file systems, I/O, and storage area. The HECURA file systems and I/O
call was one of the vehicles used to increase government-funded support of this area. Future plans
of the HECIWG will be covered as well.
7:30 p.m.–8:25 p.m.
End-to-End Performance Management for Large Distributed Storage
Scott A. Brandt, University of California, Santa Cruz
Storage systems for large and distributed clusters of compute servers are themselves large and distributed. Their complexity and scale make it hard to manage these systems and, in particular, to ensure that applications using them get good, predictable performance. At the same time, shared access to the system from multiple applications, users, and internal system activities leads to a need for predictable performance.
This project investigates mechanisms for improving performance in large distributed storage systems by integrating the performance aspects of the path that I/O operations take through the system, from the application interface on the compute server, through the network, to the storage servers. We focus on five parts of the I/O path in a distributed storage system: I/O scheduling at the storage server, storage server cache management, client-to-server network flow control, client-to-server connection management, and client cache management.
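To make the storage-server scheduling piece of this path concrete, here is a minimal, hypothetical Python sketch of a weighted proportional-share request scheduler of the kind a storage server might use to divide I/O service among clients; the class, client names, and weights are illustrative assumptions, not the project's actual mechanism.

import heapq

class WFQScheduler:
    """Minimal weighted fair-queuing sketch for a storage server.

    Each client gets a weight; requests are stamped with a virtual
    finish time so that, over time, each client receives service
    roughly proportional to its weight.  Illustrative only.
    """

    def __init__(self, weights):
        self.weights = weights                      # client_id -> relative weight
        self.virtual_finish = {c: 0.0 for c in weights}
        self.queue = []                             # heap of (finish_tag, seq, client, request)
        self.seq = 0

    def submit(self, client, request, cost):
        # Heavier-weighted clients advance their virtual finish time
        # more slowly, so their requests sort earlier more often.
        finish = self.virtual_finish[client] + cost / self.weights[client]
        self.virtual_finish[client] = finish
        heapq.heappush(self.queue, (finish, self.seq, client, request))
        self.seq += 1

    def dispatch(self):
        # Issue the request with the smallest virtual finish tag next.
        if not self.queue:
            return None
        _, _, client, request = heapq.heappop(self.queue)
        return client, request


# Usage: two hypothetical applications share a server 2:1.
sched = WFQScheduler({"app_a": 2, "app_b": 1})
for i in range(4):
    sched.submit("app_a", f"read a{i}", cost=1.0)
    sched.submit("app_b", f"read b{i}", cost=1.0)
while (nxt := sched.dispatch()) is not None:
    print(nxt)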
8:30 p.m.–9:25 p.m.
SAM² Toolkit: Scalable and Adaptive Metadata Management for High-End Computing
Hong Jiang, University of Nebraska
The increasing demand for exabyte-scale storage capacity by high-end computing applications requires a higher level of scalability and dependability than current file and storage systems provide. This project addresses file systems research for metadata management of scalable cluster-based parallel and distributed file storage systems in the HEC environment. It aims to develop a scalable and adaptive metadata management (SAM²) toolkit that extends the features of, and fully leverages the peak performance promised by, state-of-the-art cluster-based parallel and distributed file storage systems used by the high-performance computing community.
The project involves the following components:
1. Develop multi-variable forecasting models to analyze and predict file metadata access patterns.
2. Develop scalable and adaptive file name mapping schemes using the duplicative Bloom filter array technique to enforce load balance and increase scalability (see the sketch after this list).
3. Develop decentralized, locality-aware metadata grouping schemes to facilitate bulk metadata operations such as prefetching.
4. Develop an adaptive cache coherence protocol using a distributed shared object model for client-side and server-side metadata caching.
5. Prototype the SAM² components in the state-of-the-art parallel virtual file system PVFS2 and a distributed storage data caching system, set up an experimental framework for a DOE CMS Tier 2 site at the University of Nebraska-Lincoln, and conduct benchmark, evaluation, and validation studies.
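As a rough illustration of the Bloom-filter-array idea in item 2, the following Python sketch maps a pathname to candidate metadata servers by testing it against one Bloom filter per server, falling back to broadcast when no server (or more than one) matches; the classes, filter sizes, and fallback behavior are assumptions for illustration, not the SAM² duplicative Bloom filter array design itself.

import hashlib

class BloomFilter:
    """Tiny Bloom filter: k hash probes into an m-bit array."""
    def __init__(self, m=8192, k=4):
        self.m, self.k, self.bits = m, k, bytearray(m)

    def _probes(self, key):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, key):
        for p in self._probes(key):
            self.bits[p] = 1

    def maybe_contains(self, key):
        return all(self.bits[p] for p in self._probes(key))


class BloomFilterArray:
    """Client-side array of per-server filters used to guess which
    metadata server holds a given pathname (hypothetical sketch)."""
    def __init__(self, num_servers):
        self.filters = [BloomFilter() for _ in range(num_servers)]

    def record(self, server_id, path):
        self.filters[server_id].add(path)

    def lookup(self, path):
        # Candidate servers; an empty or multi-hit result would fall
        # back to broadcasting the metadata request to all servers.
        return [i for i, f in enumerate(self.filters) if f.maybe_contains(path)]


# Usage: metadata for /proj/a lives on server 0, /proj/b on server 2.
bfa = BloomFilterArray(num_servers=4)
bfa.record(0, "/proj/a/input.dat")
bfa.record(2, "/proj/b/output.dat")
print(bfa.lookup("/proj/a/input.dat"))    # likely [0]
print(bfa.lookup("/proj/c/missing.dat"))  # likely [] -> broadcast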
Wednesday, February 14
5:00 p.m.–5:55 p.m.
Active Storage Networks
John A. Chandy, University of Connecticut
Recent developments in object-based storage systems and other parallel I/O systems with separate data and control paths have demonstrated an ability to scale aggregate throughput very well for large data transfers. However, there are I/O patterns that do not exhibit strictly parallel characteristics. For example, HPC applications typically use reduction operations that funnel multiple data streams from many storage nodes to a single compute node. In addition, many applications, particularly non-scientific applications, use small data transfers that cannot take advantage of existing parallel I/O systems. In this project, we suggest a new approach called active storage networks (ASN): placing intelligence in the network, along with smart storage devices, to enhance storage network performance. These active storage networks can potentially improve not only storage capabilities but also computational performance for certain classes of operations. The main goals of this project include investigating ASN topologies and architectures, building an ASN switch from reconfigurable components, studying HEC applications suited to ASNs, developing protocols for programmable active storage network functions, and optimizing storage systems for ASNs.
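As a purely illustrative example of the kind of reduction an ASN switch might offload, the Python sketch below combines chunked partial results arriving from several storage nodes element-wise, so that a single reduced stream, rather than one stream per node, would be forwarded to the compute node; the function name and data layout are assumptions, not the project's design.

from functools import reduce
import operator

def asn_reduce(node_streams, op=operator.add):
    """Illustrative in-network reduction: each storage node produces a
    stream of chunks (lists of numbers); an ASN switch could combine
    corresponding chunks element-wise so the compute node receives one
    reduced stream instead of one stream per storage node."""
    for chunks in zip(*node_streams):
        # Element-wise reduction across the chunks from each node.
        yield [reduce(op, values) for values in zip(*chunks)]


# Usage: three storage nodes each send two chunks of partial sums.
node0 = [[1, 2, 3], [4, 5, 6]]
node1 = [[10, 20, 30], [40, 50, 60]]
node2 = [[100, 200, 300], [400, 500, 600]]
for reduced_chunk in asn_reduce([node0, node1, node2]):
    print(reduced_chunk)   # [111, 222, 333] then [444, 555, 666]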
6:00 p.m.–6:55 p.m.
Concurrent I/O Management for Cluster-based Parallel Storages
Kai Shen, University of Rochester
High-end parallel applications that store and analyze large scientific datasets demand scalable I/O capacity. One recent trend is to support high-performance parallel I/O using clusters of commodity servers, storage devices, and communication networks. When many processes in a parallel program initiate I/O operations simultaneously, the resulting concurrent I/O workloads present challenges to the storage system. At each individual storage server, concurrent I/O may induce frequent disk seeks and rotational delays and thus degrade I/O efficiency. Across the whole storage cluster, concurrent I/O may incur synchronization delay across the multiple server-level actions that belong to one parallel I/O operation.
This project investigates system-level techniques to efficiently support concurrent I/O workloads on cluster-based parallel storage systems. Our research will study the effectiveness of I/O prefetching and scheduling techniques at the server operating system level. We will also investigate storage-cluster-level techniques (particularly co-scheduling) to better synchronize parallel I/O operations. In parallel with developing new techniques, we plan to develop an understanding of the performance behavior of complex parallel I/O systems and explore automatic ways to help identify the causes of performance anomalies in these systems.
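The following Python sketch illustrates one generic server-side scheduling idea relevant here: grouping pending requests by originating process and serving each group in ascending offset order for a small quantum, so that the seeks caused by interleaving many processes' requests are reduced; the function, quantum size, and request format are hypothetical, not the project's scheduler.

from collections import defaultdict

def batch_by_process(pending, quantum=4):
    """Sketch of a server-side scheduling idea for concurrent I/O:
    instead of interleaving requests from many processes (which causes
    extra seeks), serve up to `quantum` requests from one process in
    ascending offset order before moving to the next process."""
    by_proc = defaultdict(list)
    for pid, offset in pending:
        by_proc[pid].append(offset)

    schedule = []
    for pid, offsets in by_proc.items():
        offsets.sort()                       # sequential within a process
        for i in range(0, len(offsets), quantum):
            schedule.append((pid, offsets[i:i + quantum]))
    return schedule


# Usage: interleaved arrivals from two processes.
pending = [(1, 800), (2, 100), (1, 808), (2, 108), (1, 816), (2, 116)]
for pid, batch in batch_by_process(pending):
    print(f"process {pid}: offsets {batch}")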
Thursday, February 15
7:30 p.m.–8:25 p.m.
Toward Automated Problem Analysis of Large Scale Storage Systems
Greg Ganger for Priya Narasimhan, Carnegie Mellon University
This research explores methodologies and algorithms for automating the analysis of failures and performance degradations in large-scale storage systems. Problem analysis includes such crucial tasks as identifying which component(s) misbehaved, likely root causes, and supporting evidence for any conclusions. Automating problem analysis is crucial to achieving cost-effective storage at the scales needed for tomorrow's high-end computing systems, whose scale will make problems common rather than anomalous. Moreover, the distributed software complexity of such systems makes by-hand analysis increasingly untenable.
Combining statistical tools with appropriate instrumentation, the investigators hope to significantly reduce the difficulty of analyzing performance and reliability problems in deployed storage systems. Such tools, integrated with automated reaction logic, also provide an essential building block for the longer-term goal of self-healing. The research involves understanding which statistical tools work in this context, and how well, for problem detection and prediction, identifying which components need attention, finding root causes, and diagnosing performance problems. It will also involve quantifying the impact of instrumentation detail on the effectiveness of those tools, so as to guide justification of the associated instrumentation costs. Explorations will be done primarily in the context of the Ursa Minor/Major cluster-based storage systems, via fault injection and analysis of case studies observed in their deployment.
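As one small example of the statistical flavor of such analysis, the Python sketch below flags storage nodes whose request latency deviates sharply from their peers using a median/MAD comparison; the function, threshold, and metric are illustrative assumptions rather than the diagnosis algorithms under study in this project.

import statistics

def flag_anomalous_nodes(latencies, threshold=3.0):
    """Illustrative peer-comparison check: flag nodes whose latency
    deviates from the cluster median by more than `threshold` robust
    standard deviations (median absolute deviation, scaled)."""
    values = list(latencies.values())
    median = statistics.median(values)
    mad = statistics.median(abs(v - median) for v in values) * 1.4826
    if mad == 0:
        return []
    return [node for node, v in latencies.items()
            if abs(v - median) / mad > threshold]


# Usage: hypothetical per-node average request latency in milliseconds.
latencies = {"node01": 4.1, "node02": 3.9, "node03": 4.3,
             "node04": 27.5, "node05": 4.0}
print(flag_anomalous_nodes(latencies))   # likely ['node04']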
8:30 p.m.–9:25 p.m.
Performance Insulation and Predictability for Shared Cluster Storage
Greg Ganger, Carnegie Mellon University
This research explores design and implementation strategies for insulating
the performance of high-end computing applications sharing a cluster storage
system. In particular, such sharing should not cause unexpected inefficiency.
While each application may see lower performance, due to only getting a
fraction of the total attention of the I/O system, none should see less
work accomplished than the fraction it receives. Ideally, no I/O resources
should be wasted due to interference between applications, and the I/O
performance achieved by a set of applications should be predictable fractions
of their non-sharing performance. Unfortunately, neither is true of most
storage systems, complicating administration and penalizing those that
share storage infrastructures.
Accomplishing the desired insulation and predictability requires cache
management, disk layout, disk scheduling, and storage-node selection
policies that explicitly avoid interference. This research combines
and builds on techniques from database systems (e.g., access pattern
shaping and query-specific cache management) and storage/file systems
(e.g., disk scheduling and storage-node selection). Two specific
techniques are: (1) prefetching and write-back policies that are aware
of the application associated with each piece of data and each
request, so that efficiency-reducing interleaving can be avoided; and
(2) partitioning of the cache space based on per-workload benefit,
determined by recognizing each workload's access pattern, so that no
application's data can occupy an unbounded footprint in the storage
server cache.
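To illustrate technique (2), here is a minimal, hypothetical Python sketch of benefit-based cache partitioning that greedily gives each cache block to the workload whose (assumed) hit-rate curve gains the most from it; the curves, workload names, and greedy policy are illustrative assumptions, not the actual cache-management policy of this project.

def partition_cache(total_blocks, hit_curves):
    """Greedy sketch of benefit-based cache partitioning: repeatedly
    give the next cache block to whichever workload's hit-rate curve
    predicts the largest marginal gain, so no single workload grows an
    unbounded footprint.  The curves would come from the workloads'
    observed access patterns."""
    allocation = {w: 0 for w in hit_curves}
    for _ in range(total_blocks):
        # Marginal benefit of one more block for each workload.
        best = max(hit_curves,
                   key=lambda w: hit_curves[w](allocation[w] + 1)
                               - hit_curves[w](allocation[w]))
        allocation[best] += 1
    return allocation


# Usage: workload A reuses a small working set, B scans with little reuse.
curves = {
    "A": lambda blocks: min(blocks, 100) / 100.0,   # saturates at 100 blocks
    "B": lambda blocks: blocks / 10000.0,           # scan: little reuse
}
print(partition_cache(total_blocks=200, hit_curves=curves))
# roughly {'A': 100, 'B': 100} -- A stops gaining after 100 blocks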