Persistent Link:
http://hdl.handle.net/10150/193653
Title:
Adaptive Power and Performance Management of Computing Systems
Author:
Khargharia, Bithika
Issue Date:
2008
Publisher:
The University of Arizona.
Rights:
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract:
With the rapid growth of servers and applications spurred by the Internet economy, power consumption in today's data centers is reaching unsustainable limits. This has led to an imminent financial, technical and environmental crisis that affects society at large. Hence, it has become critically important to manage power consumption efficiently in these modern computing powerhouses. In this work, we revisit adaptive power and performance management of data center server platforms. Traditional data center servers are statically configured and always over-provisioned to handle peak load. We transform these statically configured servers into clairvoyant entities that sense changes in the workload and dynamically scale their capacity to match its requirements. Over-provisioned server capacity is transitioned to low-power states and remains in those states for as long as performance stays within given acceptable thresholds. The platform power expenditure is minimized subject to performance constraints; this is formulated as a performance-per-watt optimization problem and solved using analytical power and performance models. Coarse-grained optimizations at the platform level are refined by local optimizations at the device level, namely the processor and memory subsystems. Our adaptive interleaving technique for memory power management yielded about 48.8% (26.7 kJ) energy savings, compared to the 4.5% measured for traditional techniques. Our adaptive platform power and performance management technique demonstrated 56.25% energy savings for a memory-intensive workload, 63.75% for a processor-intensive workload and 47.5% for a mixed workload, while maintaining platform performance within given acceptable thresholds.
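As an illustration of the kind of formulation the abstract refers to, here is a minimal sketch of a performance-per-watt optimization. All symbols below (s_d, P_d, Perf, T, T_max) are illustrative assumptions, not the dissertation's own notation:

% Hypothetical sketch of a performance-per-watt formulation; notation assumed, not taken from the dissertation.
\begin{align*}
\max_{s=(s_1,\dots,s_D)} \quad & \frac{\mathrm{Perf}(s)}{\sum_{d=1}^{D} P_d(s_d)} \\
\text{subject to} \quad & T(s) \le T_{\max}
\end{align*}

Here s_d denotes the power state assigned to platform device d (for example a processor or memory subsystem), P_d(s_d) and Perf(s) would be supplied by the analytical power and performance models, and T_max is the acceptable performance (response-time) threshold.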
Type:
text; Electronic Dissertation
Keywords:
Green Data center; Power Management; Autonomic Computing; Optimization; Platform Power Management; Adaptive Interleaving; Memory
Degree Name:
PhD
Degree Level:
doctoral
Degree Program:
Electrical & Computer Engineering; Graduate College
Degree Grantor:
University of Arizona
Advisor:
Hariri, Salim
Committee Chair:
Hariri, Salim

Full metadata record

DC Field	Value	Language
dc.language.iso	EN	en_US
dc.title	Adaptive Power and Performance Management of Computing Systems	en_US
dc.creator	Khargharia, Bithika	en_US
dc.contributor.author	Khargharia, Bithika	en_US
dc.date.issued	2008	en_US
dc.publisher	The University of Arizona.	en_US
dc.rights	Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.	en_US
dc.description.abstract	With the rapid growth of servers and applications spurred by the Internet economy, power consumption in today's data centers is reaching unsustainable limits. This has led to an imminent financial, technical and environmental crisis that affects society at large. Hence, it has become critically important to manage power consumption efficiently in these modern computing powerhouses. In this work, we revisit adaptive power and performance management of data center server platforms. Traditional data center servers are statically configured and always over-provisioned to handle peak load. We transform these statically configured servers into clairvoyant entities that sense changes in the workload and dynamically scale their capacity to match its requirements. Over-provisioned server capacity is transitioned to low-power states and remains in those states for as long as performance stays within given acceptable thresholds. The platform power expenditure is minimized subject to performance constraints; this is formulated as a performance-per-watt optimization problem and solved using analytical power and performance models. Coarse-grained optimizations at the platform level are refined by local optimizations at the device level, namely the processor and memory subsystems. Our adaptive interleaving technique for memory power management yielded about 48.8% (26.7 kJ) energy savings, compared to the 4.5% measured for traditional techniques. Our adaptive platform power and performance management technique demonstrated 56.25% energy savings for a memory-intensive workload, 63.75% for a processor-intensive workload and 47.5% for a mixed workload, while maintaining platform performance within given acceptable thresholds.	en_US
dc.type	text	en_US
dc.type	Electronic Dissertation	en_US
dc.subject	Green Data center	en_US
dc.subject	Power Management	en_US
dc.subject	Autonomic Computing	en_US
dc.subject	Optimization	en_US
dc.subject	Platform Power Management	en_US
dc.subject	Adaptive Interleaving	en_US
dc.subject	Memory	en_US
thesis.degree.name	PhD	en_US
thesis.degree.level	doctoral	en_US
thesis.degree.discipline	Electrical & Computer Engineering	en_US
thesis.degree.discipline	Graduate College	en_US
thesis.degree.grantor	University of Arizona	en_US
dc.contributor.advisor	Hariri, Salim	en_US
dc.contributor.chair	Hariri, Salim	en_US
dc.contributor.committeemember	Zeigler, Bernard	en_US
dc.contributor.committeemember	Rozenblit, Jerzy	en_US
dc.identifier.proquest	2708	en_US
dc.identifier.oclc	659749716	en_US