Use this URL to cite or link to this record in EThOS: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.615387
Title: Perception and processing of self-motion cues
Author: Smith, Michael Thomas
Awarding Body: University of Edinburgh
Current Institution: University of Edinburgh
Date of Award: 2013
Availability of Full Text:
Access from EThOS: Full text unavailable from EThOS. Please try the link below.
Access from Institution:
Abstract:
The capacity of animals to navigate through familiar or novel environments depends crucially on the integration of a disparate set of self-motion cues. The study begins with one of the simplest of these, planar visual motion, and investigates the cortical organisation of motion-sensitive areas, finding evidence of columnar organisation in hMT+ and a large-scale map in V1. Chapter 3 extends this by using stimuli designed to emulate visual and auditory forward motion; participants were able to determine their direction of motion with a precision close to that predicted by Bayesian integration. A modified divisive normalisation model was used to predict the underlying neural processing and to fit the behavioural adaptation results. The integration of different modalities requires the visual and auditory streams to combine at some stage within the sensory processing hierarchy, and previous research suggests the ventral intraparietal region (VIP) may be the seat of such integration. Chapter 4 tests whether VIP does combine these cues and whether the correlation between VIP and the unimodal regions changes with the coherence of the unimodal stimuli; such modulation is predicted by some models, including the divisive normalisation model. The processing of these egocentric self-motion cues leads to the updating of allocentric representations, which are believed to be encoded by head-direction cells and place cells. The experiment in Chapter 5 uses a virtual-reality stimulus during fMRI scanning to give participants the sense of moving and navigating. Their location in the virtual environment was decoded above chance from voxels in the hippocampus, but no head-direction signal was classified above chance in any of the three cortical regions investigated. We tentatively conclude that head direction is considerably more difficult to classify from the BOLD signal, possibly because of the homogeneous organisation of head-direction cells.
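For reference, the Bayesian integration benchmark and the divisive normalisation model mentioned in the abstract are usually written in the following standard forms. This is a minimal sketch with illustrative symbols (the estimates \hat{s}_v, \hat{s}_a, noise terms \sigma_v, \sigma_a, and normalisation parameters \gamma, \sigma, n are not taken from the thesis), assuming independent Gaussian noise on the visual and auditory heading estimates:

\[
\hat{s}_{va} = w_v\,\hat{s}_v + w_a\,\hat{s}_a,
\qquad
w_v = \frac{1/\sigma_v^2}{1/\sigma_v^2 + 1/\sigma_a^2},
\qquad
w_a = 1 - w_v,
\]
\[
\sigma_{va}^2 = \frac{\sigma_v^2\,\sigma_a^2}{\sigma_v^2 + \sigma_a^2} \;\le\; \min(\sigma_v^2,\,\sigma_a^2),
\]
\[
R_i = \frac{\gamma\, I_i^{\,n}}{\sigma^{\,n} + \sum_j I_j^{\,n}}
\quad \text{(canonical divisive normalisation; the thesis uses a modified variant).}
\]

Bimodal heading thresholds close to \sigma_{va} correspond to near-optimal integration, and in the normalisation equation the pooled denominator is what can make responses in one stream depend on the other stream's stimulus, i.e. the kind of cross-modal modulation tested in Chapter 4.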
Supervisor: Van Rossum, Mark; Wolbers, Thomas
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.615387
DOI: Not available
Keywords: Self-motion; fMRI; hMT+; V1