Applications of Stochastic Gradient Descent for Optimization of Complex Systems with Uncertainty
Type of Degree: PhD Dissertation
Mathematics and Statistics
The Stochastic Gradient Descent (SGD) method in all its variations has gained popularity with the recent progress in machine learning. In this dissertation we implement and analyze two modifications of the SGD method.

In the first half of the dissertation we investigate the adaptive gradient descent (AdaGrad) method in the context of the optimal distributed control of parabolic partial differential equations with uncertain parameters. This stochastic optimization method achieves an improved convergence rate through adaptive scaling of the gradient stepsize. We prove convergence of the algorithm for this infinite-dimensional problem under suitable regularity, convexity, and finite-variance conditions, and relate these conditions to verifiable properties of the underlying system parameters. Finally, we apply our algorithm to the optimal thermal regulation of lithium battery systems under uncertain loads.

In the second half of the dissertation we present the design, implementation, and analysis of Stochastic Alternating Least Squares (SALS), a method that approximates the canonical decomposition of averages of sampled random tensors. Its simplicity and efficient memory usage make SALS an ideal tool for decomposing tensors in an online setting. We show, under mild regularization and readily verifiable assumptions on the boundedness of the data, that the SALS algorithm is globally convergent. Numerical experiments validate our theoretical findings and demonstrate the algorithm’s performance and complexity.
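The adaptive stepsize scaling described in the first half can be illustrated with a minimal sketch of the standard (finite-dimensional) AdaGrad update on a toy quadratic objective. This is not the dissertation's infinite-dimensional control algorithm; all names, dimensions, and parameter values below are illustrative assumptions.

```python
import numpy as np

def adagrad(grad, x0, lr=1.0, eps=1e-8, steps=2000):
    """AdaGrad: each coordinate's stepsize is scaled by the inverse
    square root of that coordinate's accumulated squared gradients."""
    x = x0.astype(float).copy()
    g2 = np.zeros_like(x)              # running sum of squared gradients
    for _ in range(steps):
        g = grad(x)
        g2 += g * g                    # per-coordinate accumulation
        x -= lr * g / (np.sqrt(g2) + eps)
    return x

# Toy convex problem: minimize 0.5 * ||A x - b||^2 (illustrative data).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
grad = lambda x: A.T @ (A @ x - b)
x_star = adagrad(grad, np.zeros(5))
```

Note how no stepsize schedule is tuned by hand: coordinates that have seen large gradients are automatically damped, which is the mechanism behind the improved convergence rate discussed above.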
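The alternating structure at the core of SALS can be sketched with a plain, deterministic ALS solver for a rank-R canonical (CP) decomposition of a 3-way tensor. The dissertation's SALS method replaces the fixed tensor below with averages of sampled random tensors and carries the mild regularization through its convergence analysis; the alternating least-squares subproblems are the same shape. All names and sizes here are illustrative.

```python
import numpy as np

def khatri_rao(B, C):
    """Column-wise Kronecker product: (J x R), (K x R) -> (J*K x R)."""
    J, R = B.shape
    K, _ = C.shape
    return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

def unfold(T, mode):
    """Mode-n matricization of a 3-way tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def cp_als(T, R, sweeps=200, lam=1e-6, seed=0):
    """Each sweep solves three regularized linear least-squares problems,
    updating one factor while the other two are held fixed (lam is a
    small Tikhonov term, analogous to the mild regularization in SALS)."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    for _ in range(sweeps):
        KR = khatri_rao(B, C)
        G = (B.T @ B) * (C.T @ C) + lam * np.eye(R)
        A = np.linalg.solve(G, KR.T @ unfold(T, 0).T).T
        KR = khatri_rao(A, C)
        G = (A.T @ A) * (C.T @ C) + lam * np.eye(R)
        B = np.linalg.solve(G, KR.T @ unfold(T, 1).T).T
        KR = khatri_rao(A, B)
        G = (A.T @ A) * (B.T @ B) + lam * np.eye(R)
        C = np.linalg.solve(G, KR.T @ unfold(T, 2).T).T
    return A, B, C

# Synthetic exact rank-3 tensor to recover (illustrative).
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 3)) for n in (6, 5, 4))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, R=3)
```

Because each factor update touches only small Gram matrices and one unfolding, memory use stays modest, which is what makes the stochastic, online variant attractive for streams of sampled tensors.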