WORKSHOP PROGRAM ABSTRACTS

Sunday, October 3, 2010
9:30 a.m.–10:30 a.m.
Dynamic Voltage and Frequency Scaling: The Laws of Diminishing Returns
Dynamic voltage and frequency scaling (DVFS) is a commonly used power-management technique in which the clock frequency of a processor is decreased to allow a corresponding reduction in the supply voltage. This reduces power consumption, which can lead to a significant reduction in the energy required for a computation, particularly for memory-bound workloads.
However, recent developments in processor and memory technology have resulted in the saturation of processor clock frequencies, larger static power consumption, a smaller dynamic power range, and better idle/sleep modes. Each of these developments limits the potential energy savings from DVFS. We analyse this trend by examining the potential of DVFS across three platforms with recent generations of AMD processors. We find that while DVFS is effective on the older platforms, it actually increases energy usage on the most recent platform, even for highly memory-bound workloads.
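The diminishing-returns argument can be made concrete with a toy energy model (all constants below are illustrative assumptions, not measurements from the paper): dynamic power scales roughly as CV²f, so slowing down only pays off while the voltage reduction outweighs the longer exposure to static power.

```python
# Toy model of the DVFS energy tradeoff: energy = (dynamic + static
# power) * runtime. All constants are illustrative, not measured values.

def energy_joules(freq_ghz, volts, p_static_w, cycles=1e9, c_eff=1e-9):
    """Energy to retire `cycles` cycles at a given frequency/voltage."""
    runtime_s = cycles / (freq_ghz * 1e9)
    p_dynamic_w = c_eff * (freq_ghz * 1e9) * volts ** 2   # ~ C * V^2 * f
    return (p_dynamic_w + p_static_w) * runtime_s

# Older platform: wide voltage range, little static power.
e_old_fast = energy_joules(2.0, 1.2, p_static_w=0.5)
e_old_slow = energy_joules(1.0, 0.9, p_static_w=0.5)

# Recent platform: nearly flat voltage range, large static power.
e_new_fast = energy_joules(2.0, 1.1, p_static_w=5.0)
e_new_slow = energy_joules(1.0, 1.0, p_static_w=5.0)

print(e_old_slow < e_old_fast)  # True: scaling down saves energy
print(e_new_slow > e_new_fast)  # True: scaling down now wastes energy
```

With a flat voltage range, the V² term barely shrinks, so the longer runtime simply accumulates more static energy.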
A Case for Opportunistic Embedded Sensing in Presence of Hardware Power Variability
The system lifetime gains provided by the various power-management techniques in embedded sensing systems are a strong function of the active- and sleep-mode power consumption of the underlying hardware platform. However, the power consumption characteristics of hardware platforms exhibit high variability across different instances of the platform, across diverse ambient conditions, and over the passage of time. The factors underlying this variability include increased manufacturing variations and aging effects due to shrinking transistor geometries, and the deployment of embedded devices in extreme environments. Our experimental measurements show that large variability in sleep-mode power is already present in commonly used embedded processors, and technology trends suggest that this variability will grow over time and affect active-mode power as well. Such variability results in suboptimal lifetime and service quality. We therefore argue for energy-management approaches that learn and model the power characteristics of the specific instance of the hardware platform, and adapt accordingly.
11:00 a.m.–Noon
Analyzing Performance Asymmetric Multicore Processors for Latency Sensitive Datacenter Applications
The semiconductor industry is continuing to harness performance gains through Moore's Law by developing multicore chips. While thus far these architectures have incorporated symmetric computational components, asymmetric multicore processors (AMPs) have been proposed as a possible alternative to improve power efficiency. To quantify the tradeoffs and benefits of these designs, in this paper we perform an opportunity analysis of performance asymmetric multicore processors in the context of datacenter environments where applications have associated latency SLAs. Specifically, we define two use cases for asymmetric multicore chips, and adopt an analytical approach to quantify gains in power consumption over area equivalent symmetric multicore designs. Based upon our findings, we discuss the practical merits of performance asymmetric chips in datacenters, including the issues that must be addressed in order to realize the theoretical benefits.
Energy Conservation in Multi-Tenant Networks through Power Virtualization
In the service-centric Internet, multiple virtual services (tenants) are overlaid on top of the same infrastructure (both in wide-area networks and in datacenter networks). We propose conserving energy in this setting by virtualizing the network power consumed by each tenant, feeding that information back to the tenant, and incentivizing the tenant to conserve energy by making its bill proportional to this virtual power. However, virtualizing power in these multi-tenant networks is tricky because the network is not energy-proportional, i.e., energy consumption and its monetary cost do not fall with a decrease in load per component. We overcome this limitation with a simple billing heuristic that further motivates tenants to align their workloads in a manner conducive to optimization by the infrastructure provider.
1:30 p.m.–3:00 p.m.
Energy Savings in Privacy-Preserving Computation Offloading with Protection by Homomorphic Encryption
This paper investigates energy savings on mobile systems through privacy-preserving computation offloading. Offloading computation-intensive programs to servers can save energy, but the data must be protected to address privacy concerns. The protection scheme must guarantee that operations performed on the protected data remain meaningful and that the results are still acceptable, and it must not itself impose excessive energy overhead. We propose adopting homomorphic encryption to protect data in image retrieval before sending the data to servers. We implement our method on a PDA and evaluate the retrieval performance and energy savings.
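The homomorphic property that makes such offloading possible can be illustrated with a textbook additively homomorphic scheme (Paillier), shown here with toy primes for readability; the paper's actual scheme and parameters are not reproduced, and a real deployment would need large random primes and a vetted library.

```python
from math import gcd

# Textbook Paillier cryptosystem with toy primes (insecure key size,
# for illustration only): the server can add two values it cannot read.
p, q = 17, 19
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)
g = n + 1                                      # standard generator choice

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # decryption constant

def enc(m, r):
    # r must be coprime to n; fixed here for reproducibility
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 41, 100
c_sum = (enc(a, 5) * enc(b, 7)) % n2   # multiply ciphertexts...
print(dec(c_sum))                      # ...decrypts to a + b = 141
```

Multiplying ciphertexts adds plaintexts, so a server can accumulate encrypted feature distances without ever seeing the underlying image data.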
Green Server Design: Beyond Operational Energy to Sustainability Back to Program
"Green" server and datacenter design requires a focus on environmental sustainability. Prior studies have focused on operational energy consumption as a proxy for sustainability, but this metric only captures part of the environmental impact. In this paper, we argue that to understand the total impact, we need to examine the entire lifecycle of the system, beyond operational energy to also include material use and manufacturing. We make two main contributions. We present a methodology that allows such a lifecycle analysis, specifically providing attribution of sustainability bottlenecks to individual system architecture components. Using this methodology, we compare the sustainability tradeoffs between popular energy-efficiency optimizations and discuss sustainability bottlenecks and optimizations for future system designs.
GreenHDFS: Towards an Energy-Conserving, Storage-Efficient, Hybrid Hadoop Compute Cluster
The Hadoop Distributed File System (HDFS) presents unique challenges to existing energy-conservation techniques and makes it hard to scale down servers. We propose GreenHDFS, an energy-conserving, hybrid, logically multi-zoned variant of HDFS for managing a data-processing-intensive commodity Hadoop cluster. GreenHDFS's data-classification-driven data placement enables scale-down by guaranteeing substantially long periods (several days) of idleness in a subset of servers in the datacenter designated as the Cold Zone. These servers are then transitioned into high-energy-saving, inactive power modes. This is done without impacting the performance of the Hot Zone: studies have shown that servers in data-intensive compute clusters are under-utilized, so opportunities exist for better consolidation of the workload on the Hot Zone. Analysis of traces from a Yahoo! Hadoop cluster shows significant heterogeneity in the data's access patterns, which can be used to guide energy-aware data placement policies. Trace-driven simulation with three-month-long real-life HDFS traces from a Hadoop cluster at Yahoo! shows a 26% reduction in energy consumption from Cold Zone power management alone. An analytical cost model projects savings of $14.6 million in three-year total cost of ownership (TCO), and the simulation results extrapolate to savings of $2.4 million annually when the GreenHDFS technique is applied across all Hadoop clusters at Yahoo! (some 38,000 servers).
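The data-classification idea can be sketched as a simple age-based placement policy (the dormancy threshold and the file records below are illustrative assumptions, not GreenHDFS's actual classifier):

```python
# Age-based hot/cold classification: files untouched for longer than a
# threshold migrate to the Cold Zone, whose servers can then idle in
# low-power modes. Threshold and records are illustrative assumptions.

COLD_AFTER_DAYS = 30   # days of dormancy before migration

def classify(files, today):
    """Map file name -> zone based on days since last access."""
    return {name: ("cold" if today - last_access > COLD_AFTER_DAYS else "hot")
            for name, last_access in files.items()}

files = {"clicks-2010-06.log": 10,   # last touched on day 10
         "clicks-2010-09.log": 95,
         "model-current.dat": 99}
zones = classify(files, today=100)
print(zones["clicks-2010-06.log"])  # -> cold (90 days dormant)
```

A real policy would also weigh file size, replication, and re-access cost, but even this crude rule captures why long-dormant data lets whole servers sleep.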
3:30 p.m.–5:00 p.m.
Demystifying 802.11n Power Consumption
We report what we believe to be the first measurements of the power consumption of an 802.11n NIC across a broad set of operating states (channel width, transmit power, rates, antennas, MIMO streams, sleep, and active modes). We find the popular practice of racing to sleep (by sending data at the highest possible rate) to be a useful heuristic to save energy, but that it does not always hold. We contribute three other useful heuristics: wide channels are an energy-efficient way to increase rates; multiple RF chains are more energy-efficient only when the channel is good enough to support the highest MIMO rates; and single antenna operation is always most energy-efficient for short packets.
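The race-to-sleep tradeoff can be sketched with a back-of-the-envelope model (the power and rate figures below are illustrative assumptions, not the paper's measurements): send a fixed payload, then sleep for the remainder of a fixed interval.

```python
# Back-of-the-envelope race-to-sleep comparison. Power and rate numbers
# are illustrative assumptions, not measured 802.11n values.

def energy_mj(bits, rate_mbps, p_tx_mw, p_sleep_mw, interval_ms=200):
    tx_ms = bits / (rate_mbps * 1e3)     # transmit time in ms
    sleep_ms = interval_ms - tx_ms       # remainder spent asleep
    return (p_tx_mw * tx_ms + p_sleep_mw * sleep_ms) / 1e3

bits = 8e6  # 1 MB payload
e_slow = energy_mj(bits, rate_mbps=65, p_tx_mw=900, p_sleep_mw=30)
e_fast = energy_mj(bits, rate_mbps=300, p_tx_mw=1500, p_sleep_mw=30)

# Racing to sleep wins here: the costlier MIMO rate finishes early and
# the radio idles in a low-power state for most of the interval.
print(e_fast < e_slow)  # True
```

For short packets the fixed per-chain power dominates both cases, which is why the heuristic does not always hold.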
Chaotic Attractor Prediction for Server Run-time Energy Consumption
This paper proposes a chaotic time-series model of server system-wide energy consumption that captures the dynamics present in observed sensor readings of the underlying physical systems. Based on the chaotic model, we have developed a real-time predictor that estimates actual server energy consumption according to its overall thermal envelope. The chaotic time-series regression model relates processor power, bus activity, and system ambient temperatures for real-time prediction of power consumption during job execution, enabling run-time control of its thermal impact. An experimental case study compares our Chaotic Attractor Predictor (CAP) against prediction models constructed with other statistical methods. Running a set of SPEC CPU2006 benchmarks, CAP is accurate to within an average error of 2% and a worst-case error of 7% on the AMD Opteron processor (7% and 20%, respectively, on the Intel Nehalem processor).
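The general technique behind attractor-based forecasting can be sketched with delay-coordinate embedding and a nearest-neighbor lookup; the paper's actual regression over processor power, bus activity, and temperature is not reproduced here, and a synthetic logistic-map series stands in for sensor readings.

```python
# Generic delay-embedding nearest-neighbor forecaster, a stand-in for
# chaotic time-series prediction. The synthetic logistic-map data
# substitutes for real power-sensor readings.

def logistic_series(x0=0.4, n=2000, r=3.9):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def predict_next(series, dim=3):
    """Return the value that followed the past delay vector closest to
    the current one (nearest neighbor on the reconstructed attractor)."""
    cur = series[-dim:]
    best_val, best_dist = None, float("inf")
    for i in range(len(series) - dim):   # windows with a known successor
        dist = sum((series[i + j] - cur[j]) ** 2 for j in range(dim))
        if dist < best_dist:
            best_dist, best_val = dist, series[i + dim]
    return best_val

xs = logistic_series()
pred = predict_next(xs[:-1])   # forecast the held-out final point
print(round(abs(pred - xs[-1]), 3))  # small on this densely sampled attractor
```

The idea is that a chaotic system revisits neighborhoods of its attractor, so the recorded successor of a similar past state is a good short-horizon forecast.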
Automatic Server to Circuit Mapping with the Red Pills
As fine-grained power monitoring becomes crucial in data centers, a challenge arises: how to correctly map server identities to power circuits. Power provisioning, power capping, and power tracking all depend on accurately accounting for which server consumes power from which circuit. Manual surveys are cumbersome and error-prone. We describe a solution, called Red Pill, that can systematically and automatically identify the mapping. The idea is to generate a power consumption pattern (a signature) by controlling CPU utilization, and to reliably detect it from circuit-level power measurements. We describe our implementation of the Red Pill system and evaluate it with real data traces.
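The signature-detection idea can be sketched as follows (the traces, amplitudes, and noise level are synthetic assumptions; the actual Red Pill signal design and detector are described in the paper):

```python
import random

# Sketch of the Red Pill idea: each server modulates its CPU load in a
# distinct high/low pattern, and the circuit whose measured power trace
# correlates best with that pattern hosts the server.

random.seed(1)

def correlate(signature, trace):
    """Dot product of a +/-1 signature with a mean-removed power trace."""
    mean = sum(trace) / len(trace)
    return sum(s * (t - mean) for s, t in zip(signature, trace))

# One pseudo-random load signature per server.
signatures = {f"server-{i}": [random.choice((1, -1)) for _ in range(64)]
              for i in range(3)}

# Circuit B: 200 W baseline + server-1's 15 W load swing + meter noise.
trace_b = [200.0 + 15.0 * s + random.gauss(0.0, 2.0)
           for s in signatures["server-1"]]

matched = max(signatures, key=lambda name: correlate(signatures[name], trace_b))
print(matched)  # -> server-1
```

Because pseudo-random signatures are nearly uncorrelated with each other, the matching server's score dwarfs the others even through baseline load and meter noise.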