Title: Development and psychometric validation of a framework for medication-related consultations
This research set out to develop a framework to evaluate the consultation skills of healthcare
practitioners undertaking medication-related consultations. A medication-related consultation
framework would facilitate the teaching and evaluation of consultation skills and provide a
structured format for feedback. Furthermore, it would enable practitioners' learning needs to be identified so that areas for improvement could be targeted. The aims of this research were (i) to develop a standardised framework outlining the key competencies that should be demonstrated in a medication-related consultation, (ii) to assess the framework's psychometric properties (validity, reliability),
and (iii) to produce guidelines to accompany the framework to facilitate its use and educational
impact.
To achieve these aims the research was divided into three parts. The first was concerned with
the generation of the framework competencies. A critical review of relevant healthcare
consultation literature identified key components of patient-centred consultations. The second
part involved the testing of the framework's psychometric properties. Face and content validity
were explored using a systematic approach to gain the views of experts in the field of practitioner-patient consultations, student pharmacists, 'expert patients' and a framework development panel. Discriminant validity, inter- and intra-assessor reliability and internal consistency were
investigated using data obtained from 150 assessments following the application of the
framework by ten assessors to fifteen video-taped simulated consultations of varying quality
(good, satisfactory, poor). Any issues which arose as a result of the assessors' use of the
framework were collated and addressed in the guidelines developed in part three of the study.
The final consultation framework consisted of forty-six key competencies divided into five main
sections. These were (A) Introduction (6 items), (B) Data Collection & Problem Identification (15
items), (C) Actions & Solutions (8 items), (D) Closing (3 items), and (E) Consultation Behaviours
items). Appropriate adjustments were made following the initial systematic review to improve the framework's face and content validity. Use of the framework resulted in the assessment of the quality of a consultation on three levels: a rating for each individual competency (1=not at all to 4=very good), a global rating for each section (a 5-point scale with the middle and extreme points anchored by explicit descriptors) and an overall rating for the entire consultation (5-point scale, 1=poor to 5=very good). Additional space for qualitative comments was provided.
The framework was found to discriminate between ratings of consultations at the overall level, i.e. between good, satisfactory and poor (Kruskal-Wallis Chi-square=12.5; df=2; p<0.01) and to
have moderate to high inter-assessor reliability at this level (rho=0.49 to 0.76). Inter-assessor
reliability was low to moderate on the global assessment level (rho=0.26 to 0.68) and
consistently low on the individual competency level (rho≤0.39). Intra-assessor reliability was found to be generally higher than inter-assessor reliability, with high agreement on the overall level (rho=0.59 to 0.95) and moderate to high agreement on the global level (rho=0.42 to 0.94). Agreement on the individual competency level was inconsistent, ranging from low to high (rho≤0.39 to ≥0.70). The framework's internal consistency was found to be acceptable for each section, as indicated by moderate to high positive correlations between individual competencies and the corresponding global rating (rho=0.40 to 0.94) and by satisfactory Cronbach's alpha coefficients (ranging from α=0.58 to 0.97).
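As a rough illustration of the statistics reported above (this is not the analysis code or data used in the thesis), the sketch below shows how Spearman's rho between two assessors' overall ratings, a Kruskal-Wallis comparison across the three intended quality groups, and Cronbach's alpha for one section's competency ratings could be computed with numpy and scipy; all arrays are invented example data.

    import numpy as np
    from scipy.stats import spearmanr, kruskal

    # Invented example data: overall ratings (1=poor ... 5=very good) given by two
    # assessors to the same fifteen simulated consultations.
    assessor_1 = np.array([5, 4, 5, 4, 4, 3, 3, 2, 3, 3, 1, 2, 1, 2, 1])
    assessor_2 = np.array([4, 4, 5, 5, 4, 3, 2, 3, 3, 2, 2, 1, 1, 2, 2])

    # Inter-assessor reliability at the overall level: Spearman's rank correlation.
    rho, rho_p = spearmanr(assessor_1, assessor_2)

    # Discriminant validity at the overall level: Kruskal-Wallis test comparing
    # ratings across the intended quality groups (here assumed to be consultations
    # 1-5 = good, 6-10 = satisfactory, 11-15 = poor).
    h_stat, kw_p = kruskal(assessor_1[:5], assessor_1[5:10], assessor_1[10:])

    def cronbach_alpha(item_scores: np.ndarray) -> float:
        """Cronbach's alpha for an (n_consultations x n_items) matrix of ratings."""
        item_vars = item_scores.var(axis=0, ddof=1)
        total_var = item_scores.sum(axis=1).var(ddof=1)
        k = item_scores.shape[1]
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    # Invented example: individual competency ratings (1-4) for one section;
    # rows are consultations, columns are competencies within that section.
    section_items = np.array([
        [4, 3, 4, 4],
        [3, 3, 2, 3],
        [2, 1, 2, 2],
        [4, 4, 3, 4],
        [1, 2, 1, 1],
    ])
    alpha = cronbach_alpha(section_items)

    print(f"rho={rho:.2f}, Kruskal-Wallis H={h_stat:.2f}, alpha={alpha:.2f}")

The same approach would extend to the global and individual competency levels by correlating the corresponding ratings from different assessors, or from the same assessor on repeated occasions.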
This framework meets key criteria necessary for a formative assessment instrument in that it
possesses good face, content and discriminant validity. Whilst the framework demonstrated
acceptable inter-assessor and intra-assessor (test-retest) reliability on the overall assessment
level and moderate agreement on the global assessment level, this was not the case on the
individual competency level. This is acceptable for instruments used for formative assessments
where the emphasis is placed on the identification of a practitioner's relative strengths and
weaknesses and where specific strategies for improvement are to be fed back to the
practitioner. However, in summative assessments where 'pass' or 'fail' decisions about a
candidate's performance are made, the possession of high validity and reliability at all
assessment levels is important. Further work is needed to test whether use of the specific guidelines developed to support the framework, together with additional assessor training, improves the
framework's reliability when used by multiple assessors. Additionally, further validation studies
need to be undertaken.