Using a differential test battery to illustrate a multi-dimensional theory of intelligence
This study sets out to examine the premises of differential validity: the use of score differences and patterns as predictors. This presupposes a view of ability or intellect as multidimensional, and therefore regards multidimensional patterns or profiles of scores, in addition to or irrespective of the levels of single test scores or weighted composites, as having predictive or classificatory uses. The advantages of taking a multidimensional view of intellect, as assessed by differential testing, are contrasted with those of a unitary approach to intelligent performance, which assumes that a test or weighted test composite must be used to create a single index of performance. The study also considers the possibility that psychometric testing, as commonly used in selection and development, over-stresses levels of performance and under-utilises the information that can be gained from studying patterns of test scores. The differential test battery examined, the Morrisby Profile, was standardised and validated by the author as part of this study while working for the Morrisby Organisation, except where the assistance of others is specifically acknowledged. Methods for validating it both as a traditional and as a differential battery are examined, and various possible indices of differential efficiency are discussed, using multiple regression, discriminant function analysis and MANOVA. A further method for displaying the differential performance of a battery is devised and presented, using deviation scores. Because deviation scores make the measure at least partly ipsative, some issues of ipsativity are addressed, and arguments are presented to justify the use of statistical techniques with partially ipsative data. Data are presented to show the relative effectiveness of these indices of validity, together with the deviation-score method.
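The deviation-score idea referred to above can be illustrated with a minimal sketch. This is not the author's actual procedure, and the function name and scores are hypothetical; it shows only the general principle that expressing each subtest score relative to the person's own mean yields a profile of relative strengths and weaknesses, and that such scores are partially ipsative because they sum to zero for each person:

```python
def deviation_profile(scores):
    """Return each subtest score minus the person's own mean score.

    The result describes the pattern (differential) of performance
    rather than its overall level; the deviations sum to zero for
    each person, which is the ipsative property discussed in the text.
    """
    mean = sum(scores) / len(scores)
    return [s - mean for s in scores]

# Hypothetical standardised subtest scores for one candidate.
battery = [55, 48, 62, 51]
profile = deviation_profile(battery)
print(profile)  # relative strengths and weaknesses, summing to zero
```

Because the personal mean is removed, any information about overall score level is discarded, which is why the study combines deviation-based (differential) indices with level-based (scalar) ones rather than relying on either alone.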
Although coefficients based on differentials alone rarely equal those based on score levels, combined coefficients are more effective than either, and their use is advocated. It is also argued that there may be real, if less easily quantifiable, advantages to the differential manner of presentation, with particular reference to groups commonly disadvantaged by traditional tests, especially in the field of development and guidance. The data sets examined include occupational groups (engineers, technicians, managers, careers guidance officers and teachers), school students with academic criteria, applicants for engineering technician posts, insurance salespeople, and managers whose promotion ratings had been assessed. Both categorical and non-categorical criteria are used. The battery is found to be an effective measure in both a traditional and a differential sense, although, against the criteria available, it is not possible to establish the absolute superiority of the differential approach in terms of predictive validity when all information relating to score levels is excluded. It is shown that, when scalar and differential methods are combined, more of the variance is explained than when either method is used alone. In view of the possible disadvantages of traditional validation methods, it is suggested that there would be social advantages in utilising a differential method of testing. The implications of differential testing in the context of current perceptions of human abilities are discussed, and possible developments for a differential approach are indicated.