Abstract

Proceedings Abstracts of the Twenty-Fourth International Joint Conference on Artificial Intelligence

Quantifying and Improving the Robustness of Trust Systems
Dongxia Wang

Trust systems are widely used to facilitate interactions among agents based on trust evaluation. These systems may have robustness issues; that is, they can be undermined by various attacks. Designers of trust systems propose methods to defend against such attacks, but they typically verify the robustness of their defense mechanisms (or trust models) only under specific attacks. This raises two problems. First, the robustness of the models is not guaranteed, since not all attacks are considered. Second, any comparison between two trust models depends on the choice of specific attacks, which introduces bias. We propose to quantify the strength of attacks, and to quantify the robustness of a trust system by the strength of the attacks it can resist. Our quantification is based on information theory and provides designers of trust systems with a fair measurement of robustness.
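
The abstract does not spell out the information-theoretic quantification, so the following is only a minimal illustrative sketch, not the paper's actual measure. It assumes, purely for illustration, that an attack's strength can be read as the KL divergence between the rating distribution an attacker induces and the distribution honest raters would produce; the distributions, labels, and the idea of a tolerance threshold below are all hypothetical.

```python
# Hypothetical sketch (not the paper's formulation): treating "attack strength"
# as the divergence between attacker ratings and honest ratings.
import math

def kl_divergence(p, q):
    """KL divergence D(p || q) between two discrete distributions, in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Illustrative rating distributions over {bad, neutral, good} reports
# about a genuinely good service provider.
honest = [0.05, 0.15, 0.80]        # honest raters mostly report "good"
slandering = [0.70, 0.20, 0.10]    # slandering attackers mostly report "bad"
mild_noise = [0.10, 0.20, 0.70]    # slightly noisy raters stay close to honest

for name, attack in [("slandering", slandering), ("mild_noise", mild_noise)]:
    strength = kl_divergence(attack, honest)
    print(f"{name}: strength ~ {strength:.3f} bits")

# Under this reading, a trust model's robustness could be summarized by the
# largest attack strength at which its trust estimates stay within tolerance.
```

On this (assumed) reading, the slandering attack comes out much stronger than mild noise, and comparing two trust models by the strongest attack each withstands avoids tying the comparison to one hand-picked attack.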