Use this URL to cite or link to this record in EThOS: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.556477
Title: Anticipation, event plausibility and scene constraints : evidence from eye movements
Author: Joergensen, Gitte Henssel
Awarding Body: University of York
Current Institution: University of York
Date of Award: 2011
Abstract:
We often use language to refer to items in our immediate proximity, whereby the constraints of the visual context serve to restrict the number of possible referents, making it easier to anticipate which item will most likely be referred to next. However, we also use language to refer to past, future, or even imagined events. In such cases, anticipation is no longer restricted by the visual context and may instead be influenced by real-world knowledge. In a set of eye-tracking experiments we explored the mapping of language onto internal representations of visually available scenes, as well as of previously viewed scenes. Firstly, we were interested in how event plausibility influences our internal representations of described events and, secondly, in how these representations might be modulated by the nature of the visual context (present or absent). Our findings showed that when events were described in the context of a concurrent scene, eye movement patterns during the unfolding language indicated that participants anticipated both plausible and implausible items. However, when the visual scene was removed immediately before the onset of spoken language, participants anticipated plausible items but not implausible items; only by providing a more constraining linguistic context did we find anticipatory looks to the implausible items. This suggests that in the absence of a visual context, a more constraining linguistic context is required to achieve the same degree of constraint provided by a concurrent visual scene. We conclude that the conceptual representations activated during language processing in a concurrent visual context are quantitatively different from those activated when the visual context to which that language applies is absent.
Supervisor: Altmann, Gerry Sponsor: Not available
Qualification Name: Thesis (Ph.D.) Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.556477  DOI: Not available