|dc.description.abstract||The smart grid (SG) has emerged as an important form of the Internet of Things (IoT). Despite the high promise of renewable energy in the SG, it poses great challenges to the existing power grid due to the intermittent and uncontrollable nature of its generation. In order to fully harvest the high potential of the SG, accurate forecasting of renewable power generation is indispensable for effective power management. In this dissertation, we propose a least absolute shrinkage and selection operator (LASSO) based forecasting model and algorithm for solar power generation forecasting. We compare the proposed scheme with two representative schemes using three real-world datasets. We find that the LASSO-based algorithm achieves considerably higher accuracy than the existing methods, while requiring less training data and remaining robust to anomalous data points in the training set. LASSO's variable selection capability also offers a convenient trade-off between computational complexity and accuracy. These advantages make the proposed LASSO-based approach a highly competitive solution for forecasting solar power generation.
With the development of the photovoltaic industry, solar power forecasting using weather data has become increasingly important. Due to the random and massive nature of weather data, many machine learning (ML) algorithms have been proposed; among these, deep neural networks (DNNs) are some of the most widely used. However, recent studies show that certain algorithms are extremely vulnerable to adversarial examples, which are maliciously generated by cyber attackers. Such tampered examples can fool a DNN into producing a completely different result. In practice, an attacker can manipulate the weather data stored at, or in transit to, the forecast model. The adversarial examples can greatly distort the original forecast values, which could cause power outages or even a severe power grid disaster. In this dissertation, the results show that certain attacks are effective both as black-box attacks on DNN-based models and as white-box attacks on other algorithms. Through simulations, we show that small perturbations introduced by adversarial examples can lead to distinctly different outcomes, allowing the attacker to inflict a maximized loss while staying undetected. Moreover, we use two types of adversarial attacks to show that their effect can be amplified by iterative methods. Finally, we apply adversarial examples to our LASSO-based algorithm to demonstrate the effect of white-box attacks.||en_US