Algorithms for Optimal Construction and Training of Radial Basis Function Neural Networks
Date: 2015-04-30
Type of Degree: Dissertation
Department: Electrical Engineering
Abstract
Machine Learning and Computational Intelligence are rapidly growing fields of research in both academia and industry, and artificial neural networks (ANNs) are at the heart of much of this work. Efficiently constructing and training ANNs is therefore of utmost importance to advancing the field. It has been shown that compact architectures generalize better than networks containing many computational nodes. Furthermore, special neurons built around a radial basis function (RBF) can be used to improve the local performance of ANNs. Many algorithms, such as Support Vector Regression, Error Backpropagation, and Extreme Learning Machines, can train networks once an architecture is chosen; others, such as RAN, MRAN, and GGAP, train networks as they are constructed. However, many of these algorithms have limitations that lead to excessive network size. Two new RBF network construction algorithms are introduced with the aim of increasing error convergence rates while using fewer computational nodes. The first method, introduced in Chapter 3, expands on the popular Incremental Extreme Learning Machine algorithms by adding Nelder-Mead simplex optimization to the process. The second algorithm, described in Chapter 4, uses the Levenberg-Marquardt algorithm to optimize the positions and heights of RBF units as they are added to the network. These algorithms are compared with many state-of-the-art algorithms on difficult benchmarks and real-world problems. The results demonstrate that the proposed methods create more compact networks with superior error performance.
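To make the construction idea concrete, below is a minimal Python sketch of one way an incremental Gaussian-RBF network can be grown, with each new unit's center and width refined by Nelder-Mead simplex search before being fixed. The growth loop, the least-squares fit of each output weight, and the residual-based seeding heuristic are illustrative assumptions in the spirit of the Chapter 3 method, not the dissertation's exact procedure.

import numpy as np
from scipy.optimize import minimize

def rbf(x, center, width):
    # Gaussian radial basis unit evaluated at each row of x.
    return np.exp(-np.sum((x - center) ** 2, axis=-1) / (2.0 * width ** 2))

def grow_rbf_network(X, y, max_units=20, tol=1e-3):
    # Incrementally add RBF units until the residual RMSE falls below tol.
    centers, widths, weights = [], [], []
    residual = y.astype(float).copy()
    for _ in range(max_units):
        # Seed the new unit at the sample with the largest residual error
        # (a common heuristic; an assumption here, not the author's rule).
        seed = X[np.argmax(np.abs(residual))]
        params0 = np.concatenate([seed, [1.0]])  # [center..., width]

        def unit_error(params):
            # Error remaining after this unit's output weight is fit
            # by one-dimensional least squares against the residual.
            phi = rbf(X, params[:-1], abs(params[-1]) + 1e-8)
            w = phi @ residual / (phi @ phi + 1e-12)
            return np.sum((residual - w * phi) ** 2)

        # Nelder-Mead refinement of the new unit's center and width.
        res = minimize(unit_error, params0, method="Nelder-Mead")
        center, width = res.x[:-1], abs(res.x[-1]) + 1e-8
        phi = rbf(X, center, width)
        w = phi @ residual / (phi @ phi + 1e-12)

        centers.append(center); widths.append(width); weights.append(w)
        residual = residual - w * phi
        if np.sqrt(np.mean(residual ** 2)) < tol:
            break
    return np.array(centers), np.array(widths), np.array(weights)

# Example usage: fit y = sin(x) on one-dimensional data.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel()
centers, widths, weights = grow_rbf_network(X, y)

The same skeleton could, in principle, replace the Nelder-Mead step with a Levenberg-Marquardt update of unit positions and heights, mirroring the Chapter 4 variant described in the abstract.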