|dc.description.abstract||Energy efficiency and carbon footprint control have attracted significant attention worldwide. Today, the world has become intensely data-intensive. Cloud computing infrastructures and data centers provide massive storage and uninterrupted computational power to support these data-intensive applications. However, this technological development comes at the cost of exponentially growing power consumption and roughly two percent of the global carbon footprint. The total power consumption of US-based data centers reached $250$ TWh and is expected to rise to $270$ TWh by 2022; at the same time, energy prices and the carbon footprint have risen sharply. Online social media platforms such as Instagram and Facebook consume enormous power for viewing posts, scrolling through feeds, and uploading stories, releasing massive amounts of carbon dioxide and other gases into the air. Cloud providers such as Amazon, Adobe, and Google are taking significant measures, including virtualization, VM migration, workload consolidation, and geographical load balancing, to cap the power consumption and carbon footprint of their data centers while maintaining Quality of Service (QoS) and Service Level Agreement (SLA) guarantees. Most of these energy-efficient techniques, such as virtualization, virtual machine migration, and geographical load balancing, require resource usage prediction: based on these predictions, network and power engineers apply energy-efficient techniques in a cloud computing infrastructure. Predicting resources in a cloud computing infrastructure, however, is a grand challenge due to its heterogeneous workloads. In the first part of this dissertation research, we perform several experiments on virtual machines running on a cloud computing infrastructure to measure resource utilization for several of the most commonly used benchmark applications.
Based on these data, we train several resource utilization models to predict the resource usage of virtual machines in a cloud computing infrastructure.
The second part of this research addresses two important domains of sustainable computing and energy efficiency: traditional data centers and green data centers. Even in the 21st century, US-based data centers still rely on massive amounts of non-renewable energy resources such as coal, petroleum, and fossil gas. These data centers burn a hefty amount of brown energy to meet the never-ending demand of IoT. This fossil fuel dependence of traditional data centers puts sustainability in danger: research claims that, if left unchecked, coal energy resources will be exhausted by 2030, while energy prices skyrocket alongside a steep curve of greenhouse gas emissions. We construct a power consumption model of a traditional data center that incorporates more clean energy resources and cuts back brown energy resources while still meeting the data center's power demand. By deploying this optimized power model, which relies mostly on clean energy resources, traditional data centers are expected to meet their power demand while protecting the environment.
As a remedy to the problems of traditional data centers, leading-edge technologies have been introduced in green data centers, which use the maximum possible amount of green energy resources to satisfy the data center's power hunger. Our research reveals, however, that even green data centers produce a carbon footprint due to their mixed green and brown energy usage. One crucial factor is peak time, during which green data centers increase their brown energy usage to meet power demand, thereby causing hikes in energy price and carbon footprint. In the last part of the dissertation, we propose a carbon footprint model that optimizes the energy utilization of these green data centers, thereby minimizing their carbon footprint. Our findings point out the flaws of green data centers in terms of clean energy usage as well as opportunities to optimize the energy usage of data centers.||en_US