Use this URL to cite or link to this record in EThOS: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.560302
Title: Modality based perception for selective rendering
Author: Harvey, Carlo
Awarding Body: University of Warwick
Current Institution: University of Warwick
Date of Award: 2011
Availability of Full Text:
Access from EThOS: Full text unavailable from EThOS. Restricted access.
Abstract:
A major challenge in generating high-fidelity virtual environments for use in Virtual Reality (VR) is providing realism at interactive rates. High-fidelity simulation of light and sound wave propagation is still unachievable in real time, because physically accurate simulation is very computationally demanding. Only recently has visual perception been exploited in high-fidelity rendering to improve performance through a series of novel techniques: parts of the scene that are not currently being attended to by the viewer are rendered at a much lower quality, without the difference being perceived. This thesis investigates the effect that spatialised directional sounds, both discrete and converged, and smells have on the visual attention of the user towards rendered scene images. These perceptual artefacts are utilised in selective rendering pipelines via the use of multi-modal maps.

This work first verifies the worth of investigating subliminal saccade shifts (fast movements of the eyes) induced by directional audio impulses, via a pilot study in which participants were eye tracked while freely viewing a scene with and without an audio impulse, and with and without a congruent object for that impulse. The experiment showed that, even without an acoustic identifier in the scene, directional sound provides an impulse that guides subliminal saccade shifts. A novel technique for generating interactive discrete acoustic samples from arbitrary geometry is also presented.

This work is extended by investigating whether temporal auditory sound wave saliencies can be used as a feature vector in the image rendering process. The method produces image maps of the sound wave flux and attenuates these maps using the auditory saliency feature vectors. During selective rendering, the method encodes spatial auditory distracters into the standard visual saliency map.

Furthermore, this work investigates the effect various smells have on the visual attention of a user freely viewing a set of images while being eye tracked, exploring saccade shifts towards an object congruent with the smell. By analysing the gaze points, the time spent attending to a particular area of a scene is considered. The work presents a technique, derived from measured data, that modulates traditional saliency maps of image features to account for the observed smell congruences, and shows that smell provides an impulse on visual attention.

Finally, the observed data are used to apply modulated image saliency maps that address the additional effects cross-modal stimuli have on human perception in a selective renderer. These multi-modal maps, derived from measured data for smells and from sound spatialisation techniques, attempt to exploit the extra stimuli presented in multi-modal VR environments and help to re-quantify the saliency map to account for observed cross-modal perceptual features of the human visual system. The multi-modal maps are tested through rigorous psychophysical experiments to examine their applicability to selective rendering algorithms with a series of fixed-cost rendering functions, and are found to perform better than image saliency maps naively applied to multi-modal virtual environments.
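The abstract describes attenuating an image-space map of sound wave flux by auditory saliency and modulating a visual saliency map by measured smell congruence. A minimal sketch of how such a multi-modal map might be assembled is given below; the function name, the blending weight, and the per-pixel smell modulation term are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

def multimodal_map(visual_saliency, acoustic_flux, smell_modulation=None,
                   w_audio=0.5):
    """Blend a visual saliency map with cross-modal cues (illustrative).

    visual_saliency  : HxW array in [0, 1], e.g. an image saliency map.
    acoustic_flux    : HxW array of sound-wave flux projected into image
                       space, already attenuated by auditory saliency and
                       normalised to [0, 1].
    smell_modulation : optional HxW per-pixel gain around objects congruent
                       with a delivered smell (assumed form, derived from
                       measured gaze data in the thesis).
    w_audio          : blending weight for the audio term (assumed value).
    """
    combined = visual_saliency.copy()
    if smell_modulation is not None:
        # Boost (or suppress) regions congruent with the delivered smell.
        combined *= smell_modulation
    # Fold the spatial auditory distracters into the visual map.
    combined += w_audio * acoustic_flux
    # Renormalise so the map can drive a rendering budget.
    return combined / max(float(combined.max()), 1e-8)
```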
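The maps are evaluated with fixed-cost rendering functions, i.e. a constant total rendering budget redistributed across the image according to the map. One plausible allocation scheme is sketched below under the assumption that the budget is at least one sample per pixel; the proportional split and the one-sample floor are assumptions for illustration, not the thesis's specific cost functions.

```python
import numpy as np

def allocate_samples(importance, total_samples):
    """Split a fixed sample budget across pixels by importance.

    Assumes total_samples >= number of pixels, so frame cost stays
    roughly constant while quality follows the multi-modal map.
    """
    weights = importance.ravel() / importance.sum()
    budget = np.maximum(1, np.floor(weights * total_samples)).astype(np.int64)
    # Hand any leftover samples to the most important pixels so the
    # final count approaches the fixed budget.
    leftover = int(total_samples - budget.sum())
    if leftover > 0:
        budget[np.argsort(weights)[::-1][:leftover]] += 1
    return budget.reshape(importance.shape)
```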
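The smell experiments analyse gaze points to measure the time spent attending a particular area of a scene. As a simple illustration, dwell time over a rectangular area of interest could be computed as below, assuming a fixed eye-tracker sampling rate; the helper and its parameters are hypothetical.

```python
def dwell_time(gaze_points, aoi, sample_rate_hz=60.0):
    """Seconds of gaze spent inside a rectangular area of interest.

    gaze_points : iterable of (x, y) samples from the eye tracker.
    aoi         : (x0, y0, x1, y1) box around e.g. the smell-congruent
                  object; the fixed sampling rate is an assumption.
    """
    x0, y0, x1, y1 = aoi
    hits = sum(1 for x, y in gaze_points if x0 <= x <= x1 and y0 <= y <= y1)
    return hits / sample_rate_hz
```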
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.560302
DOI: Not available
Keywords: QA76 Electronic computers. Computer science. Computer software ; TA Engineering (General). Civil engineering (General)