Advanced Learning Algorithms of Neural Networks
Type of Degree: Dissertation
The concept of “learning to behave” vividly describes the functionality of neural networks. A set of observations, each consisting of inputs and desired outputs, is applied to the network, and the network parameters (called “weights”) are adjusted iteratively according to the differences (called “errors”) between the desired and actual network outputs. This parameter-adjustment process is called “learning” or “training”. Once the errors converge to the expected accuracy, the trained network can be used to analyze input data in the same range as the observations, for classification, recognition, and prediction.

In the neural network realm, network architectures and learning algorithms are the major research topics, and both are essential in designing well-behaved neural networks. This dissertation focuses on the computational efficiency of learning algorithms, especially second-order algorithms. Two algorithms are proposed to solve the memory-limitation and computational-redundancy problems in second-order computations, which affect the well-known Hagan-Menhaj Levenberg-Marquardt algorithm and the recently developed neuron-by-neuron algorithm.

The dissertation consists of seven chapters. The first chapter demonstrates the attractive properties of neural networks with two examples, by comparison with several other computational-intelligence methods and with human performance. The second chapter introduces the background of neural networks, including their history, basic concepts, network architectures, learning algorithms, generalization ability, and the recently developed neuron-by-neuron algorithm. The third chapter discusses the current problems in second-order algorithms. The fourth chapter describes an alternative way of computing the gradient vector and quasi-Hessian matrix when implementing the Levenberg-Marquardt algorithm.
With similar computational complexity, the improved second-order computation removes the memory limitation of second-order algorithms. The fifth chapter presents the forward-only algorithm. By replacing the backpropagation process with extra calculations in the forward pass, the forward-only algorithm improves training efficiency, especially for networks with multiple outputs; it can also handle networks consisting of arbitrarily connected neurons. The sixth chapter introduces the computer software implementation of neural networks, written in C++ on the Visual C++ 6.0 platform; all the algorithms introduced in the dissertation are implemented in this software. The seventh chapter concludes the dissertation and introduces our recent work.