Title: Target monitoring and evaluation : measuring the impact of educational psychology interventions
Author: Connor, Tom
ISNI:       0000 0004 2706 256X
Awarding Body: Institute of Education, University of London
Current Institution: University College London (University of London)
Date of Award: 2010
The aim of this research is to evaluate the effectiveness of a recently developed tool for measuring perceptions of the effectiveness of Educational Psychology (EP) interventions. The project arose from an adaptation of Goal Attainment Scaling into a revised format known as Target Monitoring and Evaluation (TME). Evidence was sought as to its utility within an EP service by investigating the reliability and validity of TME and whether this system could be used to evaluate the efficacy of EP-led interventions in schools. Effective service delivery was considered by investigating the usability of TME, and evidence was gathered from EPs and school-based colleagues with experience of using TME in order to examine the practical, operational and commitment issues involved. Within a mixed-methods design, the aim was to compare the quantitative objective utility of TME (in which outcomes for children derived from TME were set against measurements of change from more "conventional" assessment tools) with the qualitative perceived utility of TME (including EP and teacher opinions of its efficacy). The intention was to investigate the reliability and validity, and therefore the credibility, of the TME approach by using an external point of reference: perceptions of change measured by TME were compared with a more conventional quantitative measure of change. The research focused on clearly defined and related interventions so that TME measures of change could be compared with existing conventional measurement tools. The sample comprised a total of 24 TME cases completed for children within Key Stage 2 in mainstream primary schools. Quantitative "objective" data relating to both baseline and outcome measures were collected using either a standardised literacy assessment or an observation schedule.
These were contrasted with teacher-based perceptions of baseline and change at outcome as measured by the TME process. The quantitative outcomes were then set against qualitative perceptions of the utility of TME, gathered via individual interviews with 10 EPs and 8 Special Educational Needs Coordinators (SENCOs) from schools with experience of using TME. Each interview was transcribed and analysed using thematic analysis. Where positive progress was noted using TME, it was also usually observed using the more conventional forms of evaluation; however, there were inconsistencies in the level of change recorded in each case. The outcomes suggest TME was well regarded as a tool for assisting the process of setting up interventions and as a framework for discussion at review. TME appeared less well regarded as an evaluative tool for measuring outcomes for the EP service, and there were implications for increased support and training. From an evidence-based perspective, it may seem logical that the key element of EP evaluation ought to be based upon successful outcomes for children. However, the qualitative analysis raised larger questions about such evaluations, for example defining the nature of the EP's role, the difficulty of separating elements of influence, and the delivery of services through others.
Supervisor: Not available Sponsor: Not available
Qualification Name: Thesis (Ph.D.) Qualification Level: Doctoral
EThOS ID:  DOI: Not available