A Value-based Trust Assessment Model for Multi-agent Systems

Kinzang Chhogyal, Abhaya Nayak, Aditya Ghose, Hoa K. Dam

Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 194-200. https://doi.org/10.24963/ijcai.2019/28

An agent's assessment of its trust in another agent is commonly taken to be a measure of the reliability/predictability of the latter's actions. It is based on the trustor's past observations of the trustee's behaviour and requires no knowledge of the trustee's inner workings. However, in situations that are new or unfamiliar, past observations are of little help in assessing trust. In such cases, knowledge about the trustee can help. A particular type of knowledge is that of values: things that are important to the trustor and the trustee. In this paper, based on the premise that the more values two agents share, the more they should trust one another, we propose a simple value-based approach to trust assessment between agents, taking into account whether agents trust cautiously or boldly, and whether they depend on others to carry out a task.
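As an illustration only (the paper's formal model is not reproduced here), the shared-values premise can be read as scoring trust by the overlap between the two agents' value sets, with a cautious trustor using a stricter normalisation than a bold one. The Python sketch below is a minimal, hypothetical rendering of that reading; the example value sets and the cautious/bold scoring rules are assumptions for illustration.

# Hypothetical sketch: trust from shared values, not the authors' actual model.
def value_based_trust(trustor_values: set, trustee_values: set, bold: bool = False) -> float:
    """Return a trust score in [0, 1] based on how many values the agents share."""
    shared = trustor_values & trustee_values
    if not shared:
        return 0.0
    if bold:
        # Bold trustor: generous normalisation by the smaller value set.
        return len(shared) / min(len(trustor_values), len(trustee_values))
    # Cautious trustor: strict normalisation by the union of both value sets.
    return len(shared) / len(trustor_values | trustee_values)

# Example: two of three values shared.
print(value_based_trust({"honesty", "fairness", "privacy"},
                        {"honesty", "fairness", "profit"}))          # cautious: 0.5
print(value_based_trust({"honesty", "fairness", "privacy"},
                        {"honesty", "fairness", "profit"}, bold=True))  # bold: ~0.67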
Keywords:
Agent-based and Multi-agent Systems: Trust and Reputation
Agent-based and Multi-agent Systems: Agent Theories and Models
Humans and AI: Ethical Issues in AI
Knowledge Representation and Reasoning: Reasoning about Knowledge and Belief