A toolkit of resource-sensitive, multimodal widgets
This thesis describes an architecture for a toolkit of user interface components that allows the presentation of the widgets to use multiple output modalities, typically audio and visual. Previously there was no toolkit of widgets that would use the most appropriate presentational resources according to their availability and suitability. Typically, the use of different forms of presentation was limited to graphical feedback, with other forms, such as sound, added in an ad hoc fashion and with only limited scope for managing the use of the different resources.

A review of existing auditory interfaces provided some of the requirements that the toolkit would need to fulfil for it to be effective. This review also showed that one strand of research in the area required further investigation to ensure that a full set of requirements was captured: no formal evaluation of audio being used to provide background information had been undertaken. A sonically enhanced progress indicator was therefore designed and evaluated, showing that audio feedback could be used as a replacement for visual feedback rather than simply as an enhancement. The experiment also completed the requirements capture for the design of the toolkit of multimodal widgets.

A review of existing user interface architectures and systems, with particular attention paid to the way they manage multiple output modalities, provided some design guidelines for the architecture of the toolkit. Building on these guidelines, a design for the toolkit that fulfils all the previously captured requirements is presented. An implementation of this design is described, together with an evaluation showing that the implementation fulfils all the requirements of the design.