Use this URL to cite or link to this record in EThOS: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.780859
Title: When do we cooperate with robots?
Author: Zanatto, Debora
ISNI:       0000 0004 7966 4984
Awarding Body: University of Plymouth
Current Institution: University of Plymouth
Date of Award: 2019
Abstract:
Robotics is entering the world in many diverse ways, from advanced surgical applications to assistive technologies for people with disabilities. Robots are increasingly designed and developed to assist humans with everyday tasks. However, they are still perceived as tools to be manipulated and controlled by humans, rather than as complete and autonomous helpers. One of the main reasons lies in the development of their capacity to appear credible and trustworthy. This dissertation explores the challenge of interacting with social robots, investigating which specific situations and environments lead to increased trust and cooperation between humans and robots. After discussing the multifaceted concept of anthropomorphism and its key role in cooperation through a review of the literature, three open issues are addressed: the lack of a clear definition of the contribution of anthropomorphism to robot acceptance; the lack of defined anthropomorphic boundaries that should not be crossed if a satisfying interaction in HRI is to be maintained; and the absence of a genuinely cooperative interaction with a robotic peer. Chapter 2 addresses the first issue, demonstrating that a robot's credibility can be affected by experience and by the activation of anthropomorphic stereotypes. Chapters 3, 4, 5 and 6 focus on resolving the remaining two issues in parallel. Using the Economic Investment Game in four different studies, the emergence of human cooperative attitudes towards robots is demonstrated. Finally, the limits of anthropomorphism are investigated by comparing social, human-like behaviours with a machine-like static nature. Results show that the type of payoff can selectively affect trust and cooperation in HRI: with a low payoff, participants increase their tendency to look for the robot's anthropomorphic cues, while a high-payoff condition is more suitable for machine-like agents.
Supervisor: Not available Sponsor: THRIVE, Air Force Office of Scientific Research
Qualification Name: Thesis (Ph.D.) Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.780859  DOI: Not available
Keywords: Human-Robot Interaction ; Trust ; Cooperation ; Anthropomorphism ; Joint Attention ; Decision Making ; Imitation