Use this URL to cite or link to this record in EThOS: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.645308
Title: Uniform convergence and learnability
Author: Anthony, Martin Henry George
Awarding Body: London School of Economics and Political Science (University of London)
Current Institution: London School of Economics and Political Science (University of London)
Date of Award: 1991
Availability of Full Text: Full text unavailable from EThOS; access through the awarding institution.
Abstract:
This thesis analyses some of the more mathematical aspects of the Probably Approximately Correct (PAC) model of computational learning theory. The main concern is the sample size required for valid learning in the PAC model. A sufficient sample size involving the Vapnik-Chervonenkis (VC) dimension of the hypothesis space is derived; this improves on the best previously known bound of this nature. Learnability results and sufficient sample sizes can in many cases be derived from results of Vapnik on the uniform convergence (in probability) of relative frequencies of events to their probabilities when the collection of events has finite VC dimension. Simple new combinatorial proofs of two of Vapnik's results are given here, and the results are then applied to the theory of learning stochastic concepts, where again improved sample-size bounds are obtained. The PAC model of learning is distribution-free: the resulting sample sizes are not permitted to depend on the fixed but unknown probability distribution on the input space. Results of Ben-David, Benedek and Mansour are described, presenting a theory of distribution-dependent learnability. The conditions under which a feasible upper bound on sample size can be obtained are investigated, introducing the concept of polynomial Xo-finite dimension. The theory developed thus far is then applied to the learnability of the formal concepts defined by Wille, and a learning algorithm for this problem is presented. Finally, the theory of learnability is extended to functions taking values in an arbitrary set; learnability results and sample-size bounds depending on a generalization of the VC dimension are obtained, and these are applied to the theory of artificial neural networks. Specifically, a sufficient sample size for valid generalization in multiple-output feedforward linear threshold networks is found.
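For readers unfamiliar with the terminology, the uniform convergence property and the sample-size bounds the abstract refers to have the standard forms sketched below. The notation is illustrative rather than taken from the thesis: epsilon is the accuracy parameter, delta the confidence parameter, m the sample size, d the VC dimension, P the underlying distribution, and the hat denotes the empirical (relative-frequency) measure on an i.i.d. m-sample. The asymptotic sample-size bound shown is the classical one of Blumer, Ehrenfeucht, Haussler and Warmuth (1989); the thesis derives an improved bound of this general form.

% Uniform convergence of relative frequencies (Vapnik): for a class
% \mathcal{A} of events of finite VC dimension,
\[
  \Pr\Bigl( \sup_{A \in \mathcal{A}} \bigl| \hat{P}_m(A) - P(A) \bigr| > \epsilon \Bigr) \longrightarrow 0
  \qquad (m \to \infty).
\]
% A classical sufficient sample size for PAC learning a hypothesis
% space of VC dimension d with accuracy \epsilon and confidence
% 1 - \delta (Blumer et al., 1989):
\[
  m = O\!\Bigl( \frac{1}{\epsilon} \Bigl( d \log\frac{1}{\epsilon} + \log\frac{1}{\delta} \Bigr) \Bigr).
\]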
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.645308
DOI: Not available