Thermal Management and Data Archiving in Data Centers
Type of Degree: Dissertation
This dissertation focuses on thermal and resource management in data centers. Recognizing that there is a lack of comprehensive benchmarks for thermal management in the context of cluster computing, we propose a thermal efficiency benchmark, ThermoBench, for clusters. ThermoBench evaluates the thermal efficiency of computing and storage clusters deployed in data centers. We shed light on the criteria, metrics, and challenges of developing a thermal efficiency benchmark, paying particular attention to clusters running scalable client-server enterprise applications in data centers. We apply ThermoBench to evaluate the thermal efficiency of a real-world cluster by running the TPC-W benchmark with varying transaction arrival rates and mix percentages. ThermoBench provides a simple yet powerful benchmark solution for assessing the thermal behaviour of computing clusters in data centers.

In the second part of this dissertation research, we build a self-adjusting model called TERN to predict the thermal behaviour of hardware resources serving client sessions. TERN contains two major components: (1) a resource utilization model responsible for estimating hardware usage from the number of running client transactions, and (2) a thermal model that discovers correlations between resource utilization and temperature. TERN is conducive to predicting thermal trends under diverse workload conditions with a changing transaction mix; it judiciously adjusts its models to maintain prediction accuracy as request patterns change. The experimental results show that TERN provides a simple yet powerful solution for resource provisioning in thermal-aware data centers with rapidly changing workload conditions.
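TERN's two-stage structure can be illustrated with a minimal sketch: a utilization model maps transaction counts to hardware usage, and a thermal model maps usage to temperature. The linear forms, function names, and all coefficients below are illustrative assumptions, not the models from the dissertation.

```python
# Hypothetical sketch of TERN's two-stage prediction. The linear
# models and every coefficient here are illustrative assumptions.

def predict_utilization(transaction_counts, usage_per_txn):
    """Stage 1: estimate hardware (e.g., CPU) utilization from the
    number of running client transactions of each type, assuming a
    linear per-transaction cost."""
    return sum(n * c for n, c in zip(transaction_counts, usage_per_txn))

def predict_temperature(utilization, base_temp, temp_per_util):
    """Stage 2: map resource utilization to temperature via an
    assumed linear correlation learned from measurements."""
    return base_temp + temp_per_util * utilization

# Example: two transaction types (e.g., browse vs. order in TPC-W),
# with made-up per-transaction costs and thermal coefficients.
util = predict_utilization([120, 30], [0.2, 0.5])  # roughly 39 (% CPU)
temp = predict_temperature(util, 35.0, 0.4)        # roughly 50.6 (deg C)
```

Self-adjustment would amount to re-fitting the coefficients whenever the observed transaction mix drifts from the one the models were trained on.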
In the last part of this dissertation, we propose an erasure-coded data archival system called aHDFS for Hadoop clusters, in which RS(k+r, k) codes are employed to archive rarely accessed replicas in the Hadoop distributed file system (HDFS) to achieve storage efficiency in data centers. We develop two archival strategies in aHDFS, aHDFS-Grouping and aHDFS-Pipeline, to speed up the data archival process. aHDFS-Grouping keeps each mapper's intermediate output key-value pairs in a local key-value store. With the local store in place, aHDFS-Grouping merges all intermediate key-value pairs sharing the same key into a single key-value pair, then shuffles that merged pair to reducers to generate the final parity blocks. aHDFS-Pipeline forms a data archival pipeline across multiple data nodes in a Hadoop cluster. Unlike aHDFS-Grouping's shuffle and reduce phases, aHDFS-Pipeline delivers the merged key-value pair to the subsequent node's local key-value store; the last node in the pipeline is responsible for outputting parity blocks. The experimental results show that aHDFS significantly improves overall archival performance over the baseline system.
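The two archival strategies can be sketched as follows: Grouping merges same-key intermediate pairs in a node-local store before the shuffle, and Pipeline chains such merges across data nodes so only the last node emits output. The function names, the concatenation used as the "merge", and the toy key-value contents are all illustrative assumptions, not aHDFS's actual implementation.

```python
from collections import defaultdict

# Hypothetical sketch of aHDFS-Grouping: intermediate key-value
# pairs sharing a key are merged in a local store into one pair
# per key before the shuffle phase. Concatenation stands in for
# the real merge; keys and values are illustrative.
def merge_intermediate_pairs(pairs):
    """Merge all intermediate key-value pairs with the same key
    into a single pair, cutting shuffle traffic."""
    store = defaultdict(list)
    for key, value in pairs:
        store[key].append(value)
    # Emit one merged pair per key to the next phase.
    return {k: b"".join(v) for k, v in store.items()}

# Hypothetical sketch of aHDFS-Pipeline: each node merges its own
# pairs with those forwarded from the previous node's local store;
# the last node in the pipeline holds the final merged pairs.
def pipeline_archive(nodes_pairs):
    carried = {}
    for pairs in nodes_pairs:  # one entry per data node in the pipeline
        carried = merge_intermediate_pairs(list(carried.items()) + pairs)
    return carried  # the last node outputs these as parity blocks

merged = merge_intermediate_pairs(
    [("parity-0", b"A"), ("parity-1", b"B"), ("parity-0", b"C")]
)
# merged == {"parity-0": b"AC", "parity-1": b"B"}

piped = pipeline_archive(
    [[("parity-0", b"A")], [("parity-0", b"B"), ("parity-1", b"C")]]
)
# piped == {"parity-0": b"AB", "parity-1": b"C"}
```

The contrast the sketch makes visible is that Grouping still pays one shuffle-and-reduce round, while Pipeline replaces it with node-to-node forwarding of already-merged pairs.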