Use this URL to cite or link to this record in EThOS: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.603304
Title: Advanced methods for neural modelling and applications
Author: Zhang, Long
Awarding Body: Queen's University Belfast
Current Institution: Queen's University Belfast
Date of Award: 2013
Availability of Full Text:
Full text unavailable from EThOS. Thesis embargoed until 01 Dec 2018
Abstract:
Owing to their simple structure and global approximation ability, single-hidden-layer neural networks have been widely used in many areas. These models have a standard structure consisting of one hidden layer and one output layer with linear output weights. Subset selection and gradient-based methods are the most widely used modelling approaches; however, the former is not optimal and the latter may converge slowly. This thesis focuses on addressing these two problems. Least squares methods play a fundamental role in both subset selection and gradient-based training, for parameter estimation and matrix inversion. It is shown that five least squares methods are closely related, in that a small modification to each method leads to the formula for another. To improve model compactness, a two-stage algorithm based on orthogonal least squares is proposed: the first stage is equivalent to forward subset selection, and the second stage employs a refinement procedure that replaces insignificant terms, leading to a more compact model. The two-stage idea is further extended to leave-one-out cross-validation and regularised approaches to prevent over-fitting when the training data are noisy. To speed up convergence, a discrete-continuous Levenberg-Marquardt algorithm is proposed that accounts for the correlation between the hidden nodes and the output weights by treating the output weights as dependent parameters, so that all parameters are optimised simultaneously. A computational complexity analysis confirms that the new method is more computationally efficient than the continuous fast algorithm. The discrete-continuous scheme is also extended to the alternative conjugate gradient and Newton methods. The advantages of all the proposed algorithms are demonstrated by comparative results on a number of benchmark examples and a practical application.
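For orientation, the sketch below illustrates the model class the abstract refers to: a single-hidden-layer network with Gaussian hidden nodes and linear output weights, fitted by ordinary least squares. The data, centres, and width are arbitrary placeholders for illustration, not values or code from the thesis.

```python
# Minimal sketch (not the thesis implementation): a single-hidden-layer
# network with Gaussian hidden nodes and linear output weights, fitted
# by ordinary least squares.
import numpy as np

def hidden_matrix(X, centres, width):
    # Phi[i, j] = exp(-||x_i - c_j||^2 / (2 * width^2))
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.standard_normal(200)

centres = np.linspace(-1.0, 1.0, 10).reshape(-1, 1)   # assumed hidden-node centres
Phi = hidden_matrix(X, centres, width=0.3)

# Linear output weights: solve min_w ||Phi w - y||^2 by least squares.
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ w
print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```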
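The two-stage selection idea can be illustrated schematically. The sketch below uses plain least squares over the selected subset rather than the orthogonal decomposition used in the thesis; the candidate matrix P, the swap criterion, and the stopping rule are illustrative assumptions, not the proposed algorithm.

```python
# Illustrative two-stage subset selection: stage 1 is greedy forward
# selection of candidate regressors; stage 2 revisits each selected term
# and replaces it with a better unselected candidate if that lowers the
# sum of squared errors, repeating until no swap helps.
import numpy as np

def sse(P, y, subset):
    Phi = P[:, subset]
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    r = y - Phi @ w
    return float(r @ r)

def two_stage_select(P, y, n_terms):
    n_cand = P.shape[1]
    selected = []
    # Stage 1: greedy forward selection of n_terms candidates.
    for _ in range(n_terms):
        rest = [j for j in range(n_cand) if j not in selected]
        best = min(rest, key=lambda j: sse(P, y, selected + [j]))
        selected.append(best)
    # Stage 2: refinement by replacement of insignificant terms.
    improved = True
    while improved:
        improved = False
        for i in range(n_terms):
            rest = [j for j in range(n_cand) if j not in selected]
            current = sse(P, y, selected)
            best = min(rest, key=lambda j: sse(P, y, selected[:i] + [j] + selected[i + 1:]))
            trial = selected[:i] + [best] + selected[i + 1:]
            if sse(P, y, trial) < current - 1e-12:
                selected = trial
                improved = True
    return selected

# Example usage with a synthetic candidate pool (illustrative only).
rng = np.random.default_rng(0)
P = rng.standard_normal((100, 30))
y = P[:, [3, 7, 19]] @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.standard_normal(100)
print(sorted(two_stage_select(P, y, n_terms=3)))
```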
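The notion of treating the linear output weights as dependent parameters can be sketched in a variable-projection style: for each trial setting of the nonlinear hidden-node parameters, the output weights are recomputed by least squares inside the residual before a Levenberg-Marquardt step is taken on the nonlinear parameters alone. This is only an analogous formulation using SciPy's generic solver, not the discrete-continuous Levenberg-Marquardt algorithm proposed in the thesis.

```python
# Variable-projection-style sketch: the output weights are dependent
# parameters, recomputed by least squares inside the residual, so the
# Levenberg-Marquardt solver only searches over the nonlinear
# hidden-node parameters (centres and a shared width).
import numpy as np
from scipy.optimize import least_squares

def residuals(theta, X, y, n_nodes):
    centres = theta[:n_nodes].reshape(1, -1)        # 1-D inputs assumed
    width = abs(theta[n_nodes]) + 1e-6
    d2 = (X - centres) ** 2                          # shape (N, n_nodes)
    Phi = np.exp(-d2 / (2.0 * width ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)      # dependent output weights
    return y - Phi @ w

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.standard_normal(200)

n_nodes = 6
theta0 = np.concatenate([np.linspace(-1.0, 1.0, n_nodes), [0.4]])
sol = least_squares(residuals, theta0, args=(X, y, n_nodes), method="lm")
print("final cost:", sol.cost)
```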
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.603304
DOI: Not available