Use this URL to cite or link to this record in EThOS: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.561728
Title: Exploration and inference in learning from reinforcement
Author: Wyatt, Jeremy
Awarding Body: University of Edinburgh
Current Institution: University of Edinburgh
Date of Award: 1998
Availability of Full Text: Full text unavailable from EThOS; access via the awarding institution.
Abstract:
Recently there has been a good deal of interest in using techniques developed for learning from reinforcement to guide learning in robots. Motivated by the desire to find better robot learning methods, this thesis presents a number of novel extensions to existing techniques for controlling exploration and inference in reinforcement learning. First I distinguish between the well-known exploration-exploitation trade-off and what I term exploration for future exploitation. It is argued that there are many tasks where it is more appropriate to maximise this latter measure; in particular, it is appropriate when we want to employ learning algorithms as part of the process of designing a controller. Informed by this insight I develop a number of novel measures of the probability of a particular course of action being the optimal course of action. Estimators are developed for this measure for boolean and non-boolean processes. These are used in turn to develop probability matching techniques for guiding the exploration-exploitation trade-off. A proof is presented that one such method will converge in the limit to the optimal policy. Following this I develop an entropic measure of task knowledge, based on the previous measure.
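The abstract's probability matching idea, selecting each action with the probability that it is the optimal action, can be illustrated for the boolean-outcome case. The sketch below is not the thesis's own estimator, which is not reproduced in this record; it assumes a standard Beta posterior over each action's success rate (the Thompson-sampling form of probability matching), and the arm probabilities and function name are illustrative only.

import random

def probability_matching_bandit(arms, pulls=1000):
    """Select each arm with (approximately) the probability that it is
    optimal, by sampling a success rate from each arm's Beta posterior
    and pulling the arm whose sample is largest."""
    successes = [1] * len(arms)  # Beta(1, 1) uniform priors
    failures = [1] * len(arms)
    for _ in range(pulls):
        # One plausible success rate per arm, drawn from its posterior;
        # taking the argmax of the draws implements probability matching.
        samples = [random.betavariate(successes[i], failures[i])
                   for i in range(len(arms))]
        choice = max(range(len(arms)), key=lambda i: samples[i])
        reward = 1 if random.random() < arms[choice] else 0  # boolean outcome
        successes[choice] += reward
        failures[choice] += 1 - reward
    return successes, failures

if __name__ == "__main__":
    # Three arms with hidden success probabilities; the learner should
    # come to favour the 0.8 arm while still exploring the others.
    s, f = probability_matching_bandit([0.2, 0.5, 0.8])
    print([si / (si + fi) for si, fi in zip(s, f)])

As exploration proceeds, the posteriors concentrate and the sampling rule automatically shifts from exploring uncertain arms toward exploiting the apparent best one, which is the behaviour the abstract's convergence result concerns.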
Supervisors: Hayes, Gillian; Hallam, John. Sponsor: Not available
Qualification Name: Thesis (Ph.D.) Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.561728  DOI: Not available