Use this URL to cite or link to this record in EThOS: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.784204
Title: A multimodal textual analysis of non-literary texts : a critical stylistic approach
Author: Khuzaee, Shatha
ISNI:       0000 0004 7969 768X
Awarding Body: University of Huddersfield
Current Institution: University of Huddersfield
Date of Award: 2019
Abstract:
This thesis proposes a version of the Critical Stylistics model that accounts for meaning-making in multimodal online news articles, as non-literary texts, each composed of a linguistic text and still images. A framework integrating the Critical Stylistics and Visual Grammar models suggests three multimodal textual conceptual functions developed from Jeffries (2010a): Naming and Describing; Representing Events/Actions/States; and Prioritising. These functions are tested by analysing the images of the news articles as texts. Applying Jeffries' (2014) concept of textual meaning, the analysis shows that the linguistic text and the images are two independent texts contributing differently but collaboratively to the meanings made and projected in the multimodal texts. The search for patterns in the data yields three findings:
1. Images reinforce meanings made by the linguistic text.
2. Images extend meanings made by the linguistic text.
3. Images add to or suppress meanings made by the linguistic text.
I argue that a critical stylistic approach is applicable to images, but that it requires an equivalent visual model in order to propose a toolkit that can analyse meaning-making in non-literary multimodal texts. I adopt Jeffries' (2010a) critical stylistic approach and adapt it for images, making use of Kress and van Leeuwen's (2006) model of visual grammar and drawing on their notion that images are texts, to create a model for the analysis of multimodal news texts. The model can show how the linguistic text and the accompanying images, while using resources specific to their underlying structures, construct textual meanings that result in a coherent portrayal of the world of events reported. The multimodal textual conceptual functions use the notion of co-text to reduce the number of possible interpretations an image might suggest, producing a more systematic and replicable analysis.
Supervisor: Jeffries, Lesley ; McIntyre, Dan Sponsor: Not available
Qualification Name: Thesis (Ph.D.) Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.784204  DOI: Not available
Keywords: PN Literature (General)