This Is Auburn: Electronic Theses and Dissertations

Hybrid Learning of Feedforward Neural Networks for Regression Problems




Wu, Xing

Type of Degree: Dissertation

Department: Electrical Engineering


Inspired by the structure and functional aspects of biological neural networks, the Artificial Neural Network (ANN) is a very popular model in machine learning for capturing complex relationships in data. Feedforward Neural Networks (FNN) are the most basic and common type of ANN used in supervised learning. Research on FNNs involves two major issues: architecture selection and learning. FNN architectures mainly include the Multilayer Perceptron (MLP) and the Bridged Multilayer Perceptron (BMLP). As the simplest MLP, with only one hidden layer, the Single Layer Feedforward Neural Network (SLFN) has attracted much attention among shallow models. When a BMLP has all possible bridge connections, it becomes a special deep, narrow architecture called the Fully Connected Cascade Network (FCCN), which has been widely applied in different fields since it was proposed. This dissertation explores the learning of these two types of FNN in detail for regression problems. When applied to regression problems, the output neuron of an FNN is usually given a linear activation function. With this property, the SLFN and FCCN architectures have much in common, and most of their learning algorithms can be shared between them: the FCCN can be viewed as an SLFN with nested connections within its single hidden layer. Because the output neuron is linear, the network output depends linearly on all the output weights. Taking advantage of this relationship, a new hybrid learning algorithm is proposed for these two types of FNN by combining the efficient Levenberg-Marquardt (LM) algorithm with the Least Squares (LS) method. To search for the optimal network size, the hybrid algorithm is extended to a constructive scheme, and two hybrid constructive algorithms, named HC1 and HC2, are proposed for SLFN and FCCN learning.
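The linearity exploited by the hybrid algorithm can be illustrated with a minimal sketch: for an SLFN with fixed hidden-layer parameters, the optimal output weights of a linear output neuron are the solution of an ordinary least-squares problem. This is only the LS half of the LM+LS hybrid (the LM step that tunes the hidden-layer weights is omitted), and the toy target, network size, and random initialization below are illustrative assumptions, not the dissertation's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem (illustrative, not from the dissertation)
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X).ravel()

def hidden_outputs(X, W, b):
    """tanh hidden layer of an SLFN; W: (inputs, hidden), b: (hidden,)."""
    return np.tanh(X @ W + b)

n_hidden = 20
W = rng.normal(size=(1, n_hidden))   # hidden-layer weights (would be tuned by LM)
b = rng.normal(size=n_hidden)

# LS step: because the output neuron is linear, the network output is linear
# in the output weights, so the optimal output weights solve a linear system.
H = hidden_outputs(X, W, b)
H1 = np.column_stack([H, np.ones(len(X))])   # append output-bias column
beta, *_ = np.linalg.lstsq(H1, y, rcond=None)

pred = H1 @ beta
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(rmse)
```

In the full hybrid algorithm, this closed-form solve replaces gradient updates for the output weights, so the LM iteration only has to optimize the nonlinear hidden-layer parameters.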
The HC1 algorithm constructs the SLFN or FCCN by adding randomly initialized hidden neurons one at a time; each time a neuron is added, the preceding hybrid algorithm is carried out to train the entire network. The HC2 algorithm can be considered an enhanced version of HC1: each time a new neuron is added, its initial parameters are chosen in a more sophisticated way. Similar to the Orthogonal Least Squares (OLS) algorithm, a contribution objective function for the new neuron is derived, and Particle Swarm Optimization (PSO) is incorporated to search for the parameter set that yields the largest contribution. Both the HC1 and HC2 algorithms are evaluated on several classical function approximation benchmarks for SLFN and FCCN construction. The experimental results show that the proposed hybrid constructive strategies obtain more compact networks with good generalization ability compared with other popular learning algorithms.
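The constructive strategy can be sketched as follows, under loudly stated assumptions: the full hybrid LM+LS retraining is replaced by a plain least-squares refit of the output weights, and the PSO contribution search of HC2 is stood in for by a small pool of random candidate neurons from which the one giving the largest error reduction is kept. The target function, pool size, and stopping tolerance are all illustrative choices, not the dissertation's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D regression problem (illustrative only)
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X).ravel()

def ls_fit(H, y):
    """Linear output neuron: solve the output weights by least squares."""
    H1 = np.column_stack([H, np.ones(len(H))])
    beta, *_ = np.linalg.lstsq(H1, y, rcond=None)
    return beta, H1 @ beta

def rmse(pred, y):
    return np.sqrt(np.mean((pred - y) ** 2))

# Constructive loop in the spirit of HC1/HC2: grow the hidden layer one
# neuron at a time.  A random candidate pool stands in for the PSO search
# over the contribution objective (a simplifying assumption).
Ws, bs = [], []
H = np.empty((len(X), 0))
for k in range(30):                           # cap on network size
    best = None
    for _ in range(20):                       # candidate pool
        w, b = rng.normal(size=1), rng.normal()
        h = np.tanh(X @ w + b)
        _, pred = ls_fit(np.column_stack([H, h]), y)
        err = rmse(pred, y)
        if best is None or err < best[0]:
            best = (err, w, b, h)
    err, w, b, h = best
    Ws.append(w); bs.append(b)                # keep the winning neuron
    H = np.column_stack([H, h])
    if err < 0.01:                            # stopping tolerance
        break

print(len(Ws), err)
```

The same loop applies to FCCN construction if each new neuron also receives connections from all previously added neurons, which is the nested-connection structure described above.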