Advancements and Implementation of Polynomial-Based Learning Machines for Data Processing and System Modeling
Pukish, Michael, III
Type of Degree: dissertation
The need across all disciplines for efficient and powerful algorithms to handle large, complex datasets has never been greater. Extremely large, multi-dimensional datasets are commonplace in archival climatology and weather prediction, image processing, biology, genetics, industrial electronics, financial analysis and forecasting, telecommunications, cyber security, and throughout the social sciences. Beyond sheer size and high dimensionality, agile real-time systems are needed to process such data for interpolation and extrapolation in control systems, data streaming and filtering, and simulation and modeling. For the analysis and manipulation of the “big data” associated with such disciplines and tasks, certain techniques have come and gone over time, leaving a current subset of prevalent Computational Intelligence (CI) techniques. Throughout computer science and electrical engineering, these techniques have risen to their present popularity largely on the strength of their familiarity and positive track record among researchers and engineers. They include fuzzy systems, Artificial Neural Networks (ANN), Radial Basis Function (RBF) networks, Support Vector Machines (SVM), Gaussian Processes (GP), and Evolutionary Computation (EC), of which Genetic Algorithms (GA) are a predominant subset. Specific variants of some of these methods include Support Vector Regression (SVR) and a currently popular family of RBF-based neural networks known as Extreme Learning Machines (ELM). Both of these variants, along with several of the more general techniques, are examined further in this work. Historically, Polynomial-Based Learning Machines (PLM) were used for the same classes of problems mentioned thus far.
However, unwieldy kernel functions (large, high-order polynomials) and the relatively limited speed and capacity of earlier computers confined PLMs to comparatively small problems with low dimensionality and simple functional relationships between inputs and outputs. As a result, polynomial-based solutions within CI have largely fallen out of vogue for at least two decades. This work aims to reinvigorate interest in, and the viability of, PLMs across all applications of CI by introducing enhancements to their implementation. It is shown that, once certain algorithms are applied to the generation, “training”, and functional operation of PLMs, they compete on par with the predominant methods currently in use, and in many cases offer superior efficiency, compute time, and accuracy. These functional enhancements are explained, and seven variants of a new generation of PLMs are compared against the predominant CI techniques through experiments on a variety of problem types, ranging from real-time industrial applications to approximation of benchmark “big data” sets.
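To make the scaling problem concrete, the sketch below is not from the dissertation; it is a minimal illustration of why high-order PLMs were historically unwieldy. A full polynomial of order d over n inputs has C(n + d, d) monomial terms, which grows combinatorially, and even the simplest one-dimensional polynomial “learning machine” (monomial weights fitted by linear least squares) already requires solving a dense linear system. The function name `num_monomial_terms` and all parameters here are illustrative assumptions, not the author’s notation.

```python
from math import comb
import numpy as np

def num_monomial_terms(n_inputs, order):
    """Number of monomial terms in a full polynomial of the given order
    over n_inputs variables: C(n_inputs + order, order)."""
    return comb(n_inputs + order, order)

# Term count grows combinatorially with dimensionality and order:
# 2 inputs at order 3 -> 10 terms, but 10 inputs at order 5 -> 3003 terms.
assert num_monomial_terms(2, 3) == 10
assert num_monomial_terms(10, 5) == 3003

# Minimal 1-D polynomial "learning machine": fit monomial weights to
# noisy samples of a target function by linear least squares.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = np.sin(np.pi * x) + 0.05 * rng.standard_normal(200)

order = 7
X = np.vander(x, order + 1)            # design matrix of monomials of x
w, *_ = np.linalg.lstsq(X, y, rcond=None)

rmse = float(np.sqrt(np.mean((y - X @ w) ** 2)))
print(f"RMSE on training samples: {rmse:.3f}")
```

The same least-squares formulation extends to multiple inputs, but the design matrix then carries one column per monomial term, which is where the combinatorial growth above becomes the practical bottleneck.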