Multisensory processing in the human brain
Perception has traditionally been studied as a modular function in which the different sensory systems operate as separate and independent modules. However, multisensory integration is essential for the coherent and unified representation of the external world that we experience phenomenologically. Mounting evidence suggests that the senses do not operate in isolation but that the brain processes and integrates information across modalities. An ongoing debate concerns the level in the processing hierarchy at which the sensory streams converge: for example, whether multisensory speech information converges first in higher-order polysensory areas such as the superior temporal sulcus (STS) and is then fed back to sensory areas, or whether information is already integrated in primary and secondary sensory areas at the early stages of sensory processing. The studies in this thesis aim to investigate this question by focussing on the spatio-temporal aspects of multisensory processing, as well as on phonetic and non-phonetic integration in the human brain during auditory-visual speech perception.