Use this URL to cite or link to this record in EThOS: http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.698528
Title: Generalising plans to influence landscapes for robust agent execution in virtual worlds
Author: Dicken, Luke
ISNI:       0000 0004 5991 5942
Awarding Body: University of Strathclyde
Current Institution: University of Strathclyde
Date of Award: 2016
Abstract:
Artificial Intelligence is one of the most promising areas of modern technology, with great potential to change the face of the modern world. However, almost universally we use one of two paradigms in AI: techniques that are very efficient but lack good long-term reasoning, or techniques that are exceptionally good at providing long-term solutions but are slow to execute and relatively inflexible. This is a problem because the natural world does not decompose so neatly into one box or the other; many situations require fast problem-solving that is also cognisant of long-term objectives and motivations. A great example of this kind of problem is frequently encountered in the video game industry, where the computational load is predominantly tied up in simulating an environment and rendering it graphically for the player. As a consequence, very limited processing power is available for the AI systems that drive the activity within that environment. However, the actions taken by agents in the game must at least appear intelligent, and for many games that intelligence needs to be exhibited in a dynamic, rapidly changing world. This is a clear example of an area where efficient long-term reasoning is required and cannot adequately be provided by either of the two existing families of algorithms. The core hypothesis of the work is derived from the need to bridge the gap between the two paradigms and states that an architecture operating in this way "will be demonstrably more robust and efficient than those that are either purely Reactive or purely Deliberative." Additionally, as a technique intrinsically bound to an industrial need, it is essential that any proposed architecture be viable in an industry setting.
The work first presents an extensive literature review covering techniques for both Reactive and Deliberative reasoning, with particular emphasis on those in use in the video game industry. By understanding contemporary techniques, along with their strengths and weaknesses, a context for the problem is provided from which a new framework is created that draws strengths from both paradigms and mitigates their potential weaknesses. The result is the Integrated Influence Architecture (I2A), which combines techniques from both types of reasoning and leverages the nature of video games to exploit the large amount of resources available during their development, rather than relying on the comparatively small amount available at runtime. The I2A functions primarily by creating a "Common Representation" (CR), generated from a symbolic description of the game world and formatted so that it lends itself to mathematical manipulation (as opposed to more traditional symbolic representations, which are better suited to search). The premise is that with such a representation, information generated by either Reactive or Deliberative reasoning can be combined to provide a holistic view of the situation, informed by both paradigms. The bulk of the work sets out the overall methodology for the I2A, specifically the manner in which a problem can be described using a symbolic description language (PDDL) and the Common Representation calculated from it. During execution, the CR is used much like an Influence Map, with sources of influence coming from the different reasoning systems. Time is also spent evaluating the proposed I2A against the criteria established.
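As a rough illustration of the influence-map idea described above (a generic sketch, not the thesis's actual I2A implementation; the grid layout, falloff function, and all names and parameters are assumptions for demonstration), influence maps typically score each cell by summing contributions from several sources, with an attractive source such as a plan goal carrying positive strength and a reactive threat carrying negative strength:

```python
# Illustrative influence-map sketch: a Deliberative source (a plan's goal)
# attracts, a Reactive source (a nearby threat) repels, and the two are
# combined additively on a grid so a greedy agent can pick its next step.

def influence(dist, strength, decay=0.5):
    """Influence falls off with distance from its source."""
    return strength / (1.0 + decay * dist)

def build_map(width, height, sources):
    """sources: list of ((x, y), strength) pairs; negative strength repels."""
    grid = [[0.0] * width for _ in range(height)]
    for (sx, sy), strength in sources:
        for y in range(height):
            for x in range(width):
                d = abs(x - sx) + abs(y - sy)  # Manhattan distance
                grid[y][x] += influence(d, strength)
    return grid

def best_neighbour(grid, x, y):
    """Greedy step: move to the adjacent cell with the highest influence."""
    h, w = len(grid), len(grid[0])
    moves = [(x + dx, y + dy)
             for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
             if 0 <= x + dx < w and 0 <= y + dy < h]
    return max(moves, key=lambda p: grid[p[1]][p[0]])

# A goal at (9, 5) attracts; a threat at (4, 5) repels.
imap = build_map(10, 6, [((9, 5), 10.0), ((4, 5), -8.0)])
```

Because both sources write into the same representation, the agent's local choice reflects the long-term goal and the immediate threat at once: standing next to the goal it steps onto it, while standing next to the threat it steps away rather than through it.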
This is done by demonstrating that the processes underpinning it are sound, that the technique can be used in an industry setting, that it is capable of reacting to threats within the context of long-term reasoning, and that the I2A as a whole is an efficient process. Potential future directions for the work, aimed at wider applicability and better results, are also discussed.
Supervisor: Not available Sponsor: Not available
Qualification Name: Thesis (Ph.D.) Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.698528  DOI: Not available