Title: Teacher rating of class essays written by students of English as a Second Language : a qualitative study of criteria and process
Author: Alghannam, Manal Saleh Mohammad
ISNI:       0000 0004 7427 2404
Awarding Body: University of Essex
Current Institution: University of Essex
Date of Award: 2018
Availability of Full Text: Full text unavailable from EThOS; access may be available from the institution.
This study is concerned with a neglected aspect of the study of L2 English writing: the processes which teachers engage in when rating essays written by their own students for class practice, not exams, with no imposed rating/assessment scheme. It draws on the writing assessment process research literature, although, apart from Huot (1993) and Wolfe et al. (1998), most work has been done on scoring writing in exam conditions using a set scoring rubric, where all raters rate the same essays. Eight research questions were answered using data gathered from six teachers with a wide range of relevant training, all teaching university pre-sessional or equivalent classes. The instruments used were general interviews, think-aloud reports produced while rating their own students' essays, and follow-up immediate retrospective interviews. Extensive qualitative coding was undertaken using NVivo.
It was found that the teachers did not vary much in the core features that they claimed to recognise in general as typical of ‘good writing’, but varied more in which criteria they highlighted in practice when rating essays, though all used a form of analytic rating. Two thirds of the separate criteria coded were used by all the teachers, but there were differences in preference for higher- versus lower-level criteria. Teachers also differed a great deal in the scales they used to sum up their evaluations, ranging from IELTS scores to purely evaluative adjectives, and most claimed to use personal criteria, being more concerned with the consequential pedagogical value of their rating for the students than with achieving a test-like reliable score. A wide range of information sources beyond the essay text was used to support and justify the rating decisions made, including background information about the writer and classmates and the teacher's prior instruction. Teacher comments also evidenced concern with issues arguably not central to rating itself, but rather with exploring implications for the teacher and writer.
Similar to Cumming et al. (2002), three broad stages of the rating process were identified: reading and exploiting information such as the writer’s name and the task prompt, as well as perhaps skimming the text; reading and rereading parts of the essay, associated with interpretation and judgment; and arriving at a summary judgment. In detail, however, each teacher had their own individual style of reading and their own choice and use of criteria.
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID:
DOI: Not available
Keywords: L Education (General) ; P Philology. Linguistics