Title:
Exploiting vagueness for multi-agent consensus

Abstract:
The ultimate objective of artificial intelligence is to develop intelligent agents that can think and act rationally. Such agents rarely exist in isolation; instead, they typically form part of a larger population of agents sharing the same (or similar) goals. A population of agents therefore needs to be able to reach agreement about the state of the world efficiently, accurately, and in a distributed manner, so that it can make collective decisions. In this thesis we exploit vagueness in natural language to allow agents to form consensus more effectively. In classical logic a proposition is either true or false, which inevitably leads to situations in which agents that disagree about the truth of a proposition cannot resolve their inconsistency in an intuitive way. By adopting an intermediate truth state in cases of direct conflict between the beliefs of two agents (i.e., where one believes a proposition to be true and the other believes it to be false), we can combine their beliefs to form a consensus. Repeating this pairwise process across the population in an iterative manner drives the population towards a single, shared belief. This forms the basis of our initial model. We then extend this model of consensus for vague beliefs to take account of epistemic uncertainty. After demonstrating strong convergence properties for both models, we apply our work to a swarm of 400 Kilobot robots and study the resulting convergence in that setting. Finally, we propose a model of consensus in which agents attempt to reach agreement about a set of compound sentences, rather than just a set of propositional variables.
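The pairwise consensus operation described in the abstract can be sketched as follows. This is a minimal illustration, not code from the thesis: it assumes a three-valued truth assignment per propositional variable (0 = false, 1/2 = borderline, 1 = true), a consensus operator that softens a direct true/false conflict to the borderline value and lets a borderline value defer to a definite one, and a population in which randomly chosen pairs of agents repeatedly combine and adopt their merged beliefs until a single shared belief remains. All names and parameters are illustrative.

import random

# Truth values: 0.0 = false, 0.5 = intermediate (borderline), 1.0 = true.

def consensus(u, v):
    """Pairwise consensus for a single proposition: direct conflict yields the
    intermediate value, an intermediate value defers to a definite one, and
    agreement is preserved."""
    if {u, v} == {0.0, 1.0}:       # direct conflict between true and false
        return 0.5
    if u == 0.5:
        return v
    return u                        # v is borderline, or u == v

def run(population_size=100, n_props=5, max_rounds=100_000, seed=0):
    """Repeat pairwise consensus between randomly chosen agents until the
    whole population holds a single, shared belief (or the round limit is hit)."""
    rng = random.Random(seed)
    beliefs = [tuple(rng.choice((0.0, 0.5, 1.0)) for _ in range(n_props))
               for _ in range(population_size)]
    for round_ in range(max_rounds):
        if len(set(beliefs)) == 1:                      # converged to one belief
            return round_, beliefs[0]
        i, j = rng.sample(range(population_size), 2)    # pick a random pair
        merged = tuple(consensus(a, b) for a, b in zip(beliefs[i], beliefs[j]))
        beliefs[i] = beliefs[j] = merged                # both adopt the result
    return max_rounds, None

if __name__ == "__main__":
    rounds, shared = run()
    print(f"converged after {rounds} pairwise interactions to {shared}")

Because direct conflicts are mapped to the borderline value rather than blocking agreement, the iterated pairwise process can always make progress, which is what allows the population to settle on a single shared belief.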