Use this URL to cite or link to this record in EThOS: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.653134
Title: An item bank for testing English language proficiency : using the Rasch model to construct an objective measure
Author: Jones, Neil
Awarding Body: University of Edinburgh
Current Institution: University of Edinburgh
Date of Award: 1992
Availability of Full Text:
Access through EThOS: Full text unavailable from EThOS; please try the institution link.
Access through Institution:
Abstract:
This study describes the construction of an instrument for testing English language proficiency: a bank of about a thousand quite heterogeneous items, covering a range from beginner to advanced level. Specially written software enables teachers to construct tests easily, choosing level and content areas; it also supports computer-adaptive testing, with more task variety than has been usual. The Rasch item response model is used to locate the items on a single difficulty scale. Rasch analysis makes possible the objective measurement of psychological traits, which means essentially that constructs having no physical counterpart, such as language proficiency, can be treated analogously to physical objects, quantities of which can be measured in conventional fixed units. The question is asked whether language proficiency can be conceived of in simple enough terms to make objective measurement feasible. A review of the fields of second-language acquisition studies, language testing and language teaching concludes that language proficiency (in some aspect) is a reasonable candidate for the construction of a unidimensional trait. Analysis of the items in the bank confirms that they fit a unidimensional trait, and that the Rasch model performs satisfactorily, although calibrations of badly-targeted items are distorted. A multiple regression analysis is used to investigate item difficulty, and thus what it is that the bank really measures. A causal model in which an item's content (the language problem tested) is placed first finds method facets (e.g. the form of response) to be weak predictors of difficulty. What makes language test items difficult, it is concluded, is mostly the difficulty of the language problems tested. Qualitative analysis of items grouped by content is also informative. It appears that item difficulty is largely (though not entirely) explicable in terms of factors that should be included in a theory of language learning.
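For readers unfamiliar with the model named in the abstract: the dichotomous Rasch model expresses candidate ability and item difficulty on a single logit scale, which is what allows a heterogeneous item bank to be calibrated onto one difficulty continuum. The short Python sketch below is an illustration of that standard formula only, not the thesis software; the function and parameter names are hypothetical.

import math

def rasch_probability(ability, difficulty):
    """Probability of a correct response under the dichotomous Rasch model.

    Both ability and difficulty are in logits on the same scale, so items
    from the bank and candidates can be compared directly.
    """
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# A candidate one logit above an item's difficulty succeeds about 73% of
# the time; a candidate exactly at the item's difficulty succeeds 50%.
print(rasch_probability(ability=1.0, difficulty=0.0))  # ~0.731
print(rasch_probability(ability=0.0, difficulty=0.0))  # 0.5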
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.653134
DOI: Not available