Peer-Prediction in the Presence of Outcome Dependent Lying Incentives

Naman Goel, Aris Filos-Ratsikas, Boi Faltings

Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence
Main track. Pages 124-131. https://doi.org/10.24963/ijcai.2020/18

We derive conditions under which a peer-consistency mechanism can be used to elicit truthful data from non-trusted rational agents when an aggregate statistic of the collected data affects the magnitude of their incentive to lie. Furthermore, we discuss the relative saving that the mechanism can achieve compared to the rational outcome that would arise if no such mechanism were implemented. Our work is motivated by distributed platforms, where decentralized data oracles collect information about real-world events based on aggregate information provided by often self-interested participants. We compare our theoretical observations with numerical simulations on two publicly available real-world datasets.
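As a rough illustration of the family of mechanisms the abstract refers to, the sketch below scores each agent by output agreement with a randomly chosen peer. The reward scale `alpha`, the uniform peer pairing, and the binary reports are illustrative assumptions, not the mechanism analyzed in the paper; the paper's contribution is characterizing when such rewards can outweigh an outcome-dependent gain from lying.

```python
import random

def peer_consistency_payments(reports, alpha=1.0, seed=0):
    """Toy output-agreement peer-consistency scoring (illustrative only).

    Each agent is paired with a uniformly random peer and receives a
    reward of `alpha` if their reports agree, and 0 otherwise. In the
    setting described above, `alpha` would have to be large enough to
    dominate an agent's outcome-dependent benefit from misreporting;
    deriving that condition is what the paper does, not this sketch.
    """
    rng = random.Random(seed)
    n = len(reports)
    payments = []
    for i, report in enumerate(reports):
        peer = rng.choice([j for j in range(n) if j != i])
        payments.append(alpha if report == reports[peer] else 0.0)
    return payments

# Example: three agents reporting a binary observation.
print(peer_consistency_payments([1, 1, 0], alpha=2.0))
```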
Keywords:
Agent-based and Multi-agent Systems: Economic Paradigms, Auctions and Market-Based Systems
Agent-based and Multi-agent Systems: Trust and Reputation
Humans and AI: Human Computation and Crowdsourcing
Trust, Fairness, Bias: General