Use this URL to cite or link to this record in EThOS: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.816086
Title: Neural networks and search landscapes for software testing
Author: Joffe, Leonid
ISNI: 0000 0004 9353 3868
Awarding Body: UCL (University College London)
Current Institution: University College London (University of London)
Date of Award: 2020
Availability of Full Text:
Access from EThOS: Full text unavailable from EThOS. Please try the link below.
Access from Institution:
Abstract:
Search-Based Software Testing (SBST) methods are popular for improving the reliability of software. They do, however, suffer from challenges: poor representations, ineffective fitness functions and search landscapes, and restricted testing strategies. Neural Networks (NNs) are a machine learning technique that can process complex data; they are continuous by design and can be used in a generative capacity. This thesis explores a number of approaches that leverage these properties to tackle the challenges of SBST. The first use case is defining fitness functions that target specific properties of interest. This is showcased by first training an NN to classify an execution trace as crashing or non-crashing. The classifier's estimate is then used to prioritise previously unseen executions that the NN deems more likely to crash. This fitness function yields more efficient crash discovery than a baseline. The second proposition is to use NNs to define a search space for a diversity-driven testing strategy. The space is constructed by encoding execution traces into an n-dimensional space in which distance represents the degree of feature similarity. This thesis argues that this notion of similarity can drive a diversity-driven testing strategy. Finally, an application of a generative model for SBST is presented. Initially, random inputs are fed to the program, and execution traces are collected and encoded. Redundant executions are culled, distinct ones are kept, and the loop is repeated. Over time, this mechanism discovers new program behaviours, which are added to an ever more diverse training dataset. Although this approach does not yet compete with existing tools, experiments show that the notion of similarity is meaningful, the generated program inputs are sensible, and faults are found. Much of the work presented in this thesis is exploratory and is meant to serve as a basis for future research.
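To illustrate the first contribution described in the abstract (an NN-based fitness function that prioritises executions predicted to crash), the following is a minimal sketch only; the thesis does not publish its implementation here, so all names (CrashClassifier, fitness, candidate_traces) and the PyTorch framing are assumptions for illustration, not the author's actual code.

```python
# Illustrative sketch, not the thesis implementation: score encoded execution
# traces by a classifier's predicted crash probability and use that score as a
# fitness value for prioritisation.
import torch
import torch.nn as nn

class CrashClassifier(nn.Module):
    """Hypothetical binary classifier over fixed-length encoded execution traces."""
    def __init__(self, trace_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(trace_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output interpreted as P(crash) in [0, 1].
        return torch.sigmoid(self.net(x))

def fitness(model: CrashClassifier, encoded_trace: torch.Tensor) -> float:
    """Fitness = predicted crash probability; higher scores are explored first."""
    with torch.no_grad():
        return model(encoded_trace.unsqueeze(0)).item()

# Prioritise previously unseen executions by predicted crash likelihood.
model = CrashClassifier(trace_dim=128)          # would be trained on labelled traces
candidate_traces = [torch.rand(128) for _ in range(10)]  # placeholder encodings
ranked = sorted(candidate_traces, key=lambda t: fitness(model, t), reverse=True)
```

Under these assumptions, the search then spends its budget on the highest-ranked candidates, which is the prioritisation behaviour the abstract describes.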
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.816086
DOI: Not available