Title:
|
Hybrid human-computer interfaces for effective communication and independent living
|
Human-computer interface (HCI) and brain-computer interface (BCI) based assistive technologies (ATs) can provide novel communication media that help remove many of the barriers faced by people with disabilities. Specifically, eye-tracking-based HCIs and non-invasive BCIs open new pathways of interaction for people with speech, motor, and cognitive impairments. Eye-tracking-based ATs can be designed using a dedicated eye-tracking device that acquires and processes eye-gaze data. Similarly, BCI-based ATs can be designed by decoding electroencephalography (EEG) signals recorded over the sensorimotor cortex while the user performs motor imagery (MI) tasks. However, several challenges must be overcome before eye-tracking-based HCIs and MI-based BCIs become suitable for wider practical use. The usability of eye-tracking-based HCIs is limited by factors such as low accuracy in detecting eye-gaze coordinates, difficulty in accurately quantifying user intentions, and involuntary eye movements. Likewise, the main challenges for current BCI systems are the limited number of available commands, the selection of the most appropriate brain activities, environmental noise, and usability issues in real-world scenarios. These challenges can be better addressed by designing a hybrid-multimodal system that combines complementary neurophysiological and other physiological signals, which is the main aim of this thesis. The thesis makes four major contributions towards the design of robust hybrid-multimodal HCI systems with applications in ATs for people with speech and motor impairments. First, a feasibility study of combining BCI and eye-tracking technologies is undertaken by designing a hybrid system that increases the number of available commands through a combination of eye-gaze and MI. Second, a novel adaptive augmentative and alternative communication (AAC) system, applied to eye-gaze-based virtual keyboards, is designed and optimised for a combination of portable, non-invasive, and low-cost input devices. Third, a new approach for optimising the graphical user interface (GUI) of multimodal eye-gaze virtual keyboards is proposed and evaluated empirically with a Hindi-alphabet virtual keyboard. Fourth, the virtual keyboard GUI is adapted for multimodal eye-gaze control of a wheelchair in independent-living applications. Overall, the research in this thesis makes significant contributions that advance the state of the art in HCI-based ATs.
|