Title: Analogue VLSI neural networks for phoneme recognition
Author: Gatt, Edward
ISNI:       0000 0001 3492 3717
Awarding Body: University of Surrey
Current Institution: University of Surrey
Date of Award: 2004
Abstract: This thesis presents the implementation of three VLSI neural network systems: a chip implementing Self-Organising Maps, a Radial Basis Function chip, and a Back-Propagation learning chip. The first chip was implemented in mixed-mode technology, while the other two used analogue technology. All three chips were designed and applied successfully to the task of phoneme recognition. Cascadability was the most important feature of the designs, the main intention being to allow as much flexibility as possible in testing the functionality of different topologies and architectures. A strategy of reusing components wherever possible was also employed, reducing the likelihood of design errors as well as design time and silicon area.

The main goal of the study was to design a VLSI neural network that exploits the strengths of different algorithms: the low training times of radial basis function learning, the high recognition rates normally associated with the back-propagation algorithm, and the benefits of competitive learning. To achieve this, the three chips were combined to implement a time-delay radial basis function neural network. Its recognition rates compare well with those obtained for a time-delay neural network on the same phoneme set, while its training efficiency is very close to that of the radial basis function network.

The fabricated chips consume very little power, with synapses and neurons requiring only tens and hundreds of microwatts respectively to operate. Most of the individual building blocks can operate from supply rails of +1 V and -1 V, but to allow the implementation of complex neural network architectures, supply rails of +2 V and -2 V were adopted when network topologies were created.
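For readers unfamiliar with radial basis function networks, the following is a minimal sketch of the forward pass of a Gaussian RBF network of the general kind the abstract describes. It is purely illustrative: the function name, parameter shapes, and toy values are assumptions for this sketch, not the thesis's hardware implementation.

```python
import numpy as np

def rbf_forward(x, centres, widths, weights):
    """Forward pass of a Gaussian RBF network (illustrative sketch).

    x:       (d,) input vector
    centres: (h, d) hidden-unit centre vectors
    widths:  (h,) Gaussian width (sigma) per hidden unit
    weights: (h, k) linear output-layer weights
    """
    # Hidden layer: Gaussian activation of the distance to each centre
    dists = np.linalg.norm(centres - x, axis=1)          # (h,)
    hidden = np.exp(-(dists ** 2) / (2 * widths ** 2))   # (h,)
    # Output layer: linear combination of hidden activations
    return hidden @ weights                              # (k,)

# Toy usage: two hidden units in 2-D, one output
centres = np.array([[0.0, 0.0], [1.0, 1.0]])
widths = np.array([0.5, 0.5])
weights = np.array([[1.0], [1.0]])
y = rbf_forward(np.array([0.0, 0.0]), centres, widths, weights)
```

Because only the output-layer weights need gradient (or even closed-form least-squares) fitting once the centres are set, training such a network is typically much faster than full back-propagation, which is the property the abstract highlights.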
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID:
DOI: Not available