Title: Higher-order central-moment neural architectures
Author: Elliott, Philip
ISNI: 0000 0004 2741 1272
Awarding Body: University of Reading
Current Institution: University of Reading
Date of Award: 2012
Abstract: The encapsulation of weighted statistical processing directly into the adaptive neural architecture enables explicit representation of a significantly greater variety of functions, describing a more diverse set of relationships between the inputs. Here a novel method is developed that allows weighted statistical measures to be applied to subsets of the inputs in an arbitrary layer of neural processing. Incorporating significance-weighted statistical measures enables neural processing to consider not just the net activations but also distribution information, while maintaining the ability to train the networks through backpropagation.

The incorporation of weighted statistical processing gives rise to two new forms of neural element: one is a superset of the standard MLP framework, and the other is an extension of the RBF framework. Among the novel representational capacities introduced is the ability to recognise shapes in subsets of the input vector with bias/scale invariance and adjustable sensitivity or invariance to bias and scale. A statistical neuron is developed that uses the novel weighted statistics to learn functions not only of the cumulative/mean input but also of distribution information, in the form of the standard deviation of the significance-weighted input. This can be used, for example, to learn either "optimistic" or "pessimistic" estimators, in which diversity or uncertainty in the weighted inputs can either increase or decrease the normal output.

The statistical neuron is shown to have significantly greater representational capacity than the first-order perceptron, enabling the representation of non-linearly separable problems while still using only a single parameter per input.
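The record does not give the neuron's exact formulation, but the idea of combining a significance-weighted mean with a significance-weighted standard deviation can be sketched as follows. The function names, the normalisation of the weights, and the single spread coefficient `k` are assumptions for illustration, not the thesis's own definitions.

```python
import numpy as np

def weighted_stats(x, w):
    """Significance-weighted mean and standard deviation of the inputs.

    Illustrative sketch only: the weight normalisation used here is an
    assumption, as the record does not specify the exact formulation.
    """
    w = np.abs(w) / np.abs(w).sum()            # normalise significance weights
    mu = np.dot(w, x)                          # weighted mean
    sigma = np.sqrt(np.dot(w, (x - mu) ** 2))  # weighted standard deviation
    return mu, sigma

def statistical_neuron(x, w, k=0.0):
    """A unit whose output depends on both the weighted mean and the spread.

    With k > 0 the unit behaves 'optimistically' (diversity in the weighted
    inputs raises the output); with k < 0 'pessimistically' (diversity
    lowers it); with k = 0 it reduces to a mean-based unit.
    """
    mu, sigma = weighted_stats(np.asarray(x, float), np.asarray(w, float))
    return np.tanh(mu + k * sigma)
```

A single extra scalar per neuron (here `k`) is enough to let distribution information modulate the response, which is consistent with the abstract's claim of one parameter per input plus distribution sensitivity.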
Results show that the statistical neuron is broadly applicable, more often than not producing statistically significant improvements in classification performance without any statistically significant degradation. The shape neuron is designed for a more specialised purpose, bias- and scale-invariant pattern recognition, yet its final suggested form is a superset of the WRBF and therefore also of the RBF and the perceptron. The prototype form of the shape neuron tested gives significant improvements in performance, but on fewer datasets than the statistical neuron. However, the final suggested form of the shape neuron, being capable of both absolute pattern recognition and relative shape-based recognition, significantly extends the representational capacity and applicability of the neural model.

To choose the network size that gives the best generalisation accuracy, without relying on potentially false assumptions about the data, generalisation accuracy is tested on an unseen validation subset of the data. For this test to give meaningful results, however, the validation subset must be representative of the whole set, so a novel set-splitting method is developed to form the representative subsets required by the network structural-optimisation algorithm. For optimal accuracy, or in the absence of representative subsets, large numbers of samples must be taken, as in leave-one-out cross-validation, to ensure that the results are representative. This is a very computationally expensive process, and for the work presented here it would require nested cross-validations, leading to an exponential increase in processing time. The use of carefully chosen representative subsets, as opposed to random set splitting, allows more robust accuracy measurements with fewer repetitions.
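The record does not describe the thesis's set-splitting algorithm itself, but the goal it states, subsets that mirror the whole set, can be illustrated with a generic class-stratified split. This sketch is only an illustration of that goal; the function name and the per-class sampling scheme are assumptions, not the thesis's method.

```python
import numpy as np

def stratified_split(labels, frac=0.2, seed=0):
    """Hold out a fraction of indices so that the held-out subset mirrors
    the class proportions of the whole set.

    Generic stratified split for illustration only; the thesis develops
    its own, more general representative-subset method not detailed here.
    """
    rng = np.random.default_rng(seed)
    holdout = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)   # indices of this class
        rng.shuffle(idx)
        n = max(1, round(frac * idx.size))  # proportional sample per class
        holdout.extend(idx[:n])
    holdout = np.sort(np.array(holdout))
    train = np.setdiff1d(np.arange(labels.size), holdout)
    return train, holdout
```

Because each class contributes in proportion to its size, accuracy measured on the held-out subset is less noisy than under purely random splitting, which is the motivation the abstract gives for representative subsets.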
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID:
DOI: Not available