Deep Learning Methods Using Levenberg-Marquardt with Weight Compression and Discrete Cosine Transform Spectral Pooling
Type of Degree: Master's Thesis
Department: Electrical and Computer Engineering
State-of-the-art machine learning methods achieve powerful results on practical problems across all data-driven industries. As these methods have improved, however, the hardware and training time required to implement them have grown expensive. Machine learning methods are therefore needed that achieve near-optimal results in practical time on affordable hardware. This thesis addresses two separate artificial neural network issues, yielding practical regression, classification, and image compression algorithms. First, this thesis proposes a second-order weight compression algorithm, implemented as Levenberg-Marquardt with Weight Compression (LM-WC), which combats the flat-spot problem by compressing neuron weights to push neuron activations out of the saturated region and close to the linear region. The presented algorithm requires no additional learned parameters and contains an adaptable compression parameter, which is adjusted to avoid training failure and increase the probability of neural network convergence. Second, this thesis implements spectral pooling with the discrete cosine transform (DCT) for image classification and compression. These pooling methods retain more information than spatial pooling while achieving the same reduction in network parameters. The resulting convolutional neural networks are found to converge faster and achieve high performance on standard benchmarks.
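As a rough illustration of the spectral-pooling idea described above (not the thesis's actual implementation), the sketch below pools a 2-D feature map by transforming it with an orthonormal DCT-II, keeping only the low-frequency block of coefficients, and inverse-transforming at the smaller size. The function names and the DC-preserving scale factor are assumptions made for this example:

```python
import numpy as np

def dct2_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0, :] /= np.sqrt(2.0)  # first row scaled so that D @ D.T = I
    return D

def dct_spectral_pool(x, out_size):
    """Pool an n x n map to out_size x out_size by truncating DCT coefficients."""
    n = x.shape[0]
    Dn = dct2_matrix(n)
    X = Dn @ x @ Dn.T               # forward 2-D DCT
    Xc = X[:out_size, :out_size]    # keep only the low-frequency block
    Dm = dct2_matrix(out_size)
    # scale keeps the DC (mean) level consistent across sizes -- an
    # assumption of this sketch, not necessarily the thesis's convention
    return (out_size / n) * (Dm.T @ Xc @ Dm)
```

Because the DCT matrix is orthonormal, pooling to the same size recovers the input exactly, and a constant map pools to the same constant; discarding high-frequency coefficients removes fine spatial detail while retaining the dominant low-frequency structure, which is the information-retention advantage claimed over spatial (e.g., max) pooling.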