Title: Multi-agent negotiation using trust and persuasion
Author: Ramchurn, Sarvapali Dyanand
ISNI: 0000 0001 3506 1813
Awarding Body: University of Southampton
Current Institution: University of Southampton
Date of Award: 2004
In this thesis, we propose a panoply of tools and techniques to manage inter-agent dependencies in open, distributed multi-agent systems that exhibit significant degrees of uncertainty. In particular, we focus on situations in which agents are involved in repeated interactions where they need to negotiate to resolve conflicts that may arise between them. To this end, we endow agents with decision-making models that exploit the notion of trust and use persuasive techniques during the negotiation process to reduce the level of uncertainty and achieve better deals in the long run.

Firstly, we develop and evaluate a new trust model (called CREDIT) that allows agents to measure the degree of trust they should place in their opponents. This model reduces the uncertainty that agents have about their opponents' reliability. Thus, over repeated interactions, CREDIT enables agents to model their opponents' reliability using probabilistic techniques and a fuzzy reasoning mechanism that allows the combination of measures based on reputation (indirect interactions) and confidence (direct interactions). In so doing, CREDIT takes a wider range of behaviour-influencing factors into account than existing models, including the norms of the agents and the institution within which transactions occur. We then explore a novel application of trust models by showing how the measures developed in CREDIT can be applied to negotiations over multiple encounters. Specifically, we show that agents that use CREDIT are able to avoid unreliable agents, both during the selection of interaction partners and during the negotiation process itself, by using trust to adjust their negotiation stance. We also empirically show that agents are able to reach good deals with agents that are unreliable to some degree (rather than completely unreliable) and with those that try to strategically exploit their opponents.
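The following is an illustrative sketch only, not the thesis's actual CREDIT model: one simple way to blend a confidence score from direct interactions with a reputation score from third parties is a weighted average in which direct evidence gains weight as it accumulates. The function name and the `saturation` parameter are hypothetical.

```python
def combined_trust(direct_score, reputation_score, n_direct, saturation=10):
    """Blend direct confidence and third-party reputation into one trust value.

    direct_score, reputation_score: values in [0, 1].
    n_direct: number of direct interactions observed so far.
    saturation: hypothetical constant controlling how quickly direct
        evidence comes to dominate reputation.
    """
    # Weight on direct evidence grows from 0 towards 1 with experience.
    w = n_direct / (n_direct + saturation)
    return w * direct_score + (1 - w) * reputation_score

# With no direct experience, trust falls back to reputation alone.
print(combined_trust(0.9, 0.4, n_direct=0))   # 0.4
# After many direct interactions, direct evidence dominates.
print(combined_trust(0.9, 0.4, n_direct=90))  # 0.85
```

CREDIT itself combines these measures with fuzzy reasoning rather than a fixed weighted average; the sketch only conveys the underlying intuition that direct evidence should progressively outweigh reputation.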
Secondly, having applied CREDIT to negotiations, we further extend the application of trust to reduce uncertainty about the reliability of agents in mechanism design (where the honesty of agents is elicited by the protocol). Thus, we develop a trust-based mechanism design (TBMD) approach that allows agents using a trust model (such as CREDIT) to reach efficient agreements that choose the most reliable agents in the long run. In particular, we show that our mechanism enforces truth-telling from the agents (i.e. it is incentive compatible), both about the perceived reliability of their opponents and about their valuations for the goods to be traded. In proving these properties, our trust-based mechanism is shown to be the first reputation mechanism that implements individual rationality, incentive compatibility, and efficiency. Our trust-based mechanism is also empirically evaluated and shown to be better than other comparable models at reaching the outcome that maximises all the negotiating agents' utilities and at choosing the most reliable agents in the long run.

Thirdly, having explored ways to reduce uncertainties about reliability and honesty, we use persuasive negotiation techniques to tackle issues associated with uncertainties that agents have about preferences and the space of possible agreements. To this end, we propose a novel protocol and reasoning mechanism that agents can use to generate and evaluate persuasive elements, such as promises of future rewards, to support the offers they make during negotiation. These persuasive elements aim to make offers more attractive over multiple encounters given the absence of information about an opponent's discount factors or exact payoffs. Specifically, we empirically demonstrate that agents are able to achieve a larger number of agreements and a higher expected utility over repeated encounters when they are given the capability to give or ask for rewards.
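As a minimal sketch of the intuition behind reward promises (the function and parameters are hypothetical, not the thesis's protocol): an offer bundled with a reward promised for the next encounter is worth its immediate utility plus the reward discounted by a factor delta, so the same concession can be made more attractive without changing the current deal.

```python
def offer_value(immediate_utility, promised_reward=0.0, delta=0.9):
    """Present value of an offer plus a reward promised for the next
    encounter, under a hypothetical discount factor delta in (0, 1)."""
    return immediate_utility + delta * promised_reward

# The same immediate offer, with and without a promised future reward.
plain = offer_value(0.5)
with_reward = offer_value(0.5, promised_reward=0.3, delta=0.9)
print(plain, with_reward)  # the bundled offer is strictly more attractive
```

The interesting case in the thesis is precisely when delta is unknown to the proposer; this sketch only shows why, for any positive discount factor, attaching a credible reward enlarges the set of acceptable offers.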
Moreover, we develop a novel strategy using this protocol and show that it outperforms existing state-of-the-art heuristic negotiation models. Finally, the applicability of persuasive negotiation and CREDIT is exemplified through a practical implementation in a pervasive computing environment. In this context, the negotiation mechanism is implemented in an instant messaging platform (JABBER) and used to resolve conflicts between group and individual preferences that arise in a meeting room scenario. In particular, we show how persuasive negotiation and trust permit flexible management of interruptions by allowing intrusions to happen at appropriate times during the meeting while still satisfying the preferences of all parties present.
Supervisor: Not available
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID:
DOI: Not available