An analysis of score distributions and design pattern interaction for object-oriented metrics
One method suggested for improving software quality has been to collect metric scores for a given design and to refactor in response to what are deemed unsatisfactory metric values. In the case of object orientation, a considerable number of metrics have been proposed in the literature, with the intention of highlighting the possible 'misuse' of concepts such as inheritance and polymorphism. The aim is to produce systems which are more easily maintainable than might otherwise be the case (in terms of characteristics such as reduced modification times or increased class reusability). Subsequent to this, a major requirement for promoting the wide-scale adoption of such design metrics has been to establish their validity beyond mere intuitive appeal. The theoretical approach to validation has been limited, relying on measurement axioms as an initial filter to rule out inconsistent measures. Given the profusion of possible systems, empirical studies must be seen to represent limited sampling, and taken as a whole they seem to produce an excess of (sometimes conflicting) correlations. The aim has therefore been to establish a complementary approach to the attempts at validation undertaken so far. One aspect of this activity is to examine the theoretical nature of inter-metric dependencies, and a technique for evaluating levels of metric interaction is presented. This addresses a significant issue: highly correlated metrics may imply redundancy, while conflicting measures can cause problems if they are applied simultaneously as part of a suite. To this end, a matrix-based technique for metric score generation and comparison is introduced. In addition, the interaction between metrics and a sample of design patterns is considered, to provide an assessment of whether these two approaches to improving software quality are in fact compatible.
Methods of analysis are presented which gauge the effect of applying various patterns on particular metric scores, highlighting cases where the viewpoint underlying the metric, the pattern, or indeed both could be anomalous. The results generated suggest that only minor levels of redundancy and conflict exist amongst commonly quoted metrics, although the application of certain design patterns can actually work in opposition to the viewpoints of particular metrics. On this basis, overall recommendations regarding the selection and application of measures for designs are made.