Use this URL to cite or link to this record in EThOS: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.788944
Title: Online multimodal autonomous learning of robots' internal models
Author: Zambelli, Martina
ISNI: 0000 0004 8499 3889
Awarding Body: Imperial College London
Current Institution: Imperial College London
Date of Award: 2018
Abstract:
Robots can learn new skills by autonomously acquiring internal models that can be used for action planning and control. The ability to learn internal models with no prior information allows robots to be fully autonomous, not only in acquiring such models and motor skills, but also in adapting to new environments and working set-ups. This is particularly important for robots interacting with humans in unconstrained environments. Autonomous learning eases the engineering work of pre-programming each robotic system for each particular task, while endowing robots with flexibility, adaptability and versatility. This thesis investigates how the use of multiple sources of information can influence this autonomous learning process. In particular, multiple prediction hypotheses provided by different prediction models, as well as information available to a robot from multiple sensory modalities (such as vision, touch and proprioception), are leveraged to enhance the learning process. Through autonomous exploration, a robot can bootstrap internal models of its own sensorimotor system that enable it to predict the consequences of its actions (forward models) or to generate actions that reach target states (inverse models). This thesis studies how multiple sources of information can enhance the bootstrapping of these models, and their use in environments and tasks that involve the integration of different types of data. It is shown that using multiple sources of information benefits the learning process. Combining multiple predictors improves the accuracy of forward models. Multiple sensory modalities are fundamental for tasks that are inherently multimodal, such as playing a piano keyboard, and multimodal integration makes the learned model applicable to a wide range of tasks. Furthermore, the learned multimodal model can be deployed in learning and control frameworks to predict the motion of the robot and of other agents, and to plan the robot's actions.
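The combination of prediction hypotheses mentioned in the abstract can be illustrated with a minimal sketch of an online forward-model ensemble: several predictors each learn to map the current state and action to the next sensory state, and their outputs are blended with weights that favour the hypotheses with the lowest recent prediction error. This is an illustrative sketch only, not the method developed in the thesis; the class names (OnlineLinearPredictor, ForwardModelEnsemble), the recursive-least-squares learners and the softmin weighting are assumptions made for the example.

import numpy as np

class OnlineLinearPredictor:
    # One forward-model hypothesis (illustrative): predicts the next
    # sensory state from the current state and action via recursive
    # least squares (RLS).
    def __init__(self, dim_in, dim_out, lam=0.99):
        self.W = np.zeros((dim_out, dim_in))   # weight matrix
        self.P = np.eye(dim_in) * 1e3          # inverse input covariance
        self.lam = lam                         # forgetting factor

    def predict(self, x):
        return self.W @ x

    def update(self, x, y):
        # Standard RLS update of the weight matrix.
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)           # gain vector
        self.W += np.outer(y - self.W @ x, k)
        self.P = (self.P - np.outer(k, Px)) / self.lam

class ForwardModelEnsemble:
    # Blends several predictors, weighting each by a softmin over its
    # running prediction error, so better hypotheses dominate.
    def __init__(self, predictors, beta=5.0):
        self.predictors = predictors
        self.err = np.ones(len(predictors))    # running error per predictor
        self.beta = beta

    def predict(self, x):
        preds = np.stack([p.predict(x) for p in self.predictors])
        w = np.exp(-self.beta * self.err)
        w /= w.sum()
        return w @ preds                       # weighted combination

    def update(self, x, y):
        for i, p in enumerate(self.predictors):
            e = np.linalg.norm(y - p.predict(x))   # error before learning
            self.err[i] = 0.9 * self.err[i] + 0.1 * e
            p.update(x, y)

# Toy sensorimotor loop with unknown linear dynamics (hypothetical data):
# x stacks state and action, y is the next sensory state.
rng = np.random.default_rng(0)
A_true = rng.normal(size=(3, 5))
ens = ForwardModelEnsemble(
    [OnlineLinearPredictor(5, 3, lam=l) for l in (0.95, 0.99, 0.999)])
for t in range(500):
    x = rng.normal(size=5)
    y = A_true @ x + 0.01 * rng.normal(size=3)
    ens.update(x, y)
x = rng.normal(size=5)
print(np.linalg.norm(ens.predict(x) - A_true @ x))  # small after training

Blending by recent error, rather than switching to a single best predictor, keeps the combined forward model smooth when hypotheses trade places during exploration; an inverse model could be sketched analogously by mapping a target state to an action.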
Supervisor: Demiris, Yiannis
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.788944