Title: A learning based approach to modelling bilateral adaptive agent negotiations
Author: Narayanan, Vidya
ISNI:       0000 0001 3439 5651
Awarding Body: University of Southampton
Current Institution: University of Southampton
Date of Award: 2008
Availability of Full Text:
Full text unavailable from EThOS.
Please contact the current institution’s library for further details.
In large multi-agent systems, individual agents often have conflicting goals, but are dependent on each other for the achievement of these objectives. In such situations, negotiation between the agents is a key means of resolving conflicts and reaching a compromise. Hence it is imperative to develop good automated negotiation techniques to enable effective interactions. However, this problem is made harder by the fact that such environments are invariably dynamic (e.g. the bandwidth available for communications can fluctuate, the availability of computational resources can change, and the time available for negotiations can change). Moreover, these changes can have a direct effect on the negotiation process. Thus an agent has to adapt its negotiation behaviour in response to changes in the environment and its opponent's behaviour if it is to be effective. Given this, this research has developed negotiation mechanisms that enable an agent to perform effectively in a particular class of negotiation encounters; namely, bilateral negotiation in which a service provider and a service consumer interact to fix the price of the service. In more detail, we use both reinforcement and Bayesian learning methods to derive an optimal agent strategy for bilateral negotiations in dynamic environments with incomplete information. Specifically, an agent models the change in its opponent's behaviour using Markov chains and determines an optimal policy to use in response to changes in the environment. Also using the Markov chain framework, the agent updates its prior knowledge of the opponent by observing successive offers using Bayesian inference and hence strategically responds to its opponent. This framework for adaptive negotiation in non-stationary environments incorporates two novel learning algorithms that use reinforcement and Bayesian learning techniques to respond to the various forms of dynamism.
Having devised the algorithms, we analytically show that the former learns an optimal policy for negotiating in a non-stationary environment and that the latter converges over repeated encounters to the opponent's true strategic model. The empirical results show that the reinforcement learning algorithm successfully concludes 83% of the negotiations in dynamic scenarios and that, when using the Bayesian algorithm, the agent learns the true model of an adaptive opponent's behaviour in 95% of the encounters. Both of these results compare very favourably with the previous state of the art. We have also compared these two algorithms. The empirical results show that using reinforcement learning a very high percentage (90%) of dynamic negotiation encounters end in agreement, whereas using Bayesian learning techniques the agent earns a large share of the profits (89%) in the negotiation process.
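The Bayesian opponent-modelling step described in the abstract can be illustrated with a minimal sketch: the agent keeps a belief over a small set of candidate opponent concession strategies and updates it after each observed offer, so the posterior concentrates on the opponent's true model over successive offers. The strategy names, the time-dependent offer model, the price range and the noise level below are all illustrative assumptions, not the thesis's exact formulation.

```python
import math

# Candidate opponent strategies: offer(t) = p_min + (p_max - p_min) * t**beta,
# where beta < 1 concedes early, beta = 1 is linear, beta > 1 concedes late.
# These names and parameter values are assumptions for illustration.
STRATEGIES = {"conceder": 0.5, "linear": 1.0, "boulware": 3.0}
P_MIN, P_MAX, SIGMA = 10.0, 20.0, 0.5  # assumed price range and offer noise

def predicted_offer(beta, t):
    """Offer the strategy would make at normalised time t in [0, 1]."""
    return P_MIN + (P_MAX - P_MIN) * (t ** beta)

def likelihood(offer, beta, t):
    """Gaussian likelihood of an observed offer under a candidate strategy."""
    d = offer - predicted_offer(beta, t)
    return math.exp(-d * d / (2 * SIGMA * SIGMA))

def update_belief(belief, offer, t):
    """One Bayesian update: posterior proportional to likelihood * prior."""
    post = {s: likelihood(offer, b, t) * belief[s]
            for s, b in STRATEGIES.items()}
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

# Start from a uniform prior and observe offers actually produced by a
# late-conceding ("boulware") opponent at four points in the negotiation.
belief = {s: 1 / len(STRATEGIES) for s in STRATEGIES}
for t in (0.2, 0.4, 0.6, 0.8):
    belief = update_belief(belief, predicted_offer(3.0, t), t)

print(max(belief, key=belief.get))  # belief concentrates on "boulware"
```

In the thesis's setting this posterior would then feed the agent's strategic response; here it simply shows how successive offers shift the belief toward the opponent's true strategic model.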
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID:
DOI: Not available