Title: Effectiveness of explanation facilities for intelligent systems
Author: Darlington, Keith
ISNI: 0000 0004 5357 784X
Awarding Body: London South Bank University
Current Institution: London South Bank University
Date of Award: 2014
This report has been prepared as the cover paper for submission for the award of a "PhD by Publication". The dissertation investigates the effectiveness of explanation facilities for intelligent systems, and is based on a series of my publications. Thirteen publications were chosen for this portfolio (see Appendices 2-14), two of which are chapters taken from two of my published books; a brief summary and description of these publications is given in Appendix 1 (PhD by Publication: Registration Paper). The purpose of this report is to provide an account of the themes that give the publications in this portfolio their coherence with regard to the effectiveness of explanation facilities for intelligent systems. The papers are not necessarily presented chronologically: I have ordered them in the conceptual sequence in which, I believe, their contents most naturally dovetail. My main contributions to knowledge fall into three areas: general explanation design methods for symbolic expert systems, applications of explanation facilities in the healthcare domain, and the general applicability of explanation facilities in other intelligent system technologies. My main findings show that a strong case can be made for including explanation facilities in expert systems, particularly justification-type explanations. Furthermore, it is recommended that designers of explanations for healthcare expert systems give careful consideration both to the stakeholders and to the nature of the clinical tasks undertaken; this recommendation could apply to other application domains. Finally, symbolic AI methods, such as heuristic rule-based expert systems and case-based reasoning techniques, are better suited to explanation than non-symbolic paradigms, such as neural networks.
Rule extraction techniques offer an effective way for opaque technologies to deliver explanation facilities, by mapping output data to rules that are amenable to natural explanation. Applications of XML, such as RuleML, can be used to transform and disseminate these rule bases on the World Wide Web.
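The rule-extraction idea described above can be illustrated with a minimal sketch: treat a trained model as a black box, query it on sample inputs, and induce a human-readable IF-THEN rule from its outputs. The `black_box` function and the one-threshold ("decision stump") search below are my own illustrative assumptions, not a method taken from the publications in this portfolio.

```python
def black_box(temp_c):
    """Stand-in for an opaque model (e.g. a trained neural network)
    that classifies a patient temperature reading."""
    return "fever" if temp_c >= 38.0 else "normal"


def extract_stump_rule(model, samples, positive_label):
    """Pedagogical rule extraction: query the model on sample inputs,
    find the threshold that best reproduces its outputs, and express
    that threshold as a natural-language rule."""
    labelled = sorted((x, model(x)) for x in samples)
    best = None
    for i in range(1, len(labelled)):
        # Candidate threshold halfway between adjacent sample points.
        threshold = (labelled[i - 1][0] + labelled[i][0]) / 2
        correct = sum(
            (x >= threshold) == (y == positive_label) for x, y in labelled
        )
        if best is None or correct > best[0]:
            best = (correct, threshold)
    return f"IF input >= {best[1]:.2f} THEN {positive_label}"


samples = [36.0 + 0.5 * i for i in range(9)]  # readings 36.0 .. 40.0
rule = extract_stump_rule(black_box, samples, "fever")
print(rule)  # IF input >= 37.75 THEN fever
```

The extracted rule approximates the opaque model's behaviour and, unlike the model itself, can be presented directly to a user as a justification; such a rule could equally be serialised in RuleML for dissemination on the Web.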
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID:
DOI: Not available