On the Efficiency of Data Collection for Crowdsourced Classification

Edoardo Manino, Long Tran-Thanh, Nicholas R. Jennings

Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 1568-1575. https://doi.org/10.24963/ijcai.2018/217

The quality of crowdsourced data is often highly variable. For this reason, it is common to collect redundant data and aggregate it with statistical methods. Empirical studies show that the policies used to collect such data have a strong impact on the accuracy of the system, yet there is little theoretical understanding of this phenomenon. In this paper we provide the first theoretical explanation of the accuracy gap between the most popular collection policies: the non-adaptive uniform allocation, and the adaptive uncertainty sampling and information gain maximisation. To do so, we propose a novel representation of the collection process in terms of random walks. We then use this tool to derive lower and upper bounds on the accuracy of the policies. With these bounds, we are able to quantify for the first time the advantage that the two adaptive policies have over the non-adaptive one.
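To make the three collection policies concrete, the following is a minimal sketch, not the authors' implementation, of how each one could choose the next item to label in a toy binary-labelling task. It assumes a uniform prior over each item's true class and a single known worker accuracy P_CORRECT; both the function names and the parameter values are illustrative assumptions.

```python
import math

P_CORRECT = 0.7  # assumed (hypothetical) accuracy of every worker


def posterior(pos, neg, p=P_CORRECT):
    """P(true label = 1 | pos votes for 1, neg votes for 0), uniform prior."""
    like1 = (p ** pos) * ((1 - p) ** neg)
    like0 = ((1 - p) ** pos) * (p ** neg)
    return like1 / (like1 + like0)


def entropy(q):
    """Binary entropy in bits."""
    if q in (0.0, 1.0):
        return 0.0
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)


def uniform_allocation(counts):
    # Non-adaptive: pick the item with the fewest labels collected so far.
    return min(range(len(counts)), key=lambda i: sum(counts[i]))


def uncertainty_sampling(counts):
    # Adaptive: pick the item whose posterior is closest to 0.5.
    return min(range(len(counts)),
               key=lambda i: abs(posterior(*counts[i]) - 0.5))


def information_gain(counts):
    # Adaptive: pick the item with the largest expected entropy reduction
    # after observing one more (noisy) label.
    def expected_gain(i):
        pos, neg = counts[i]
        q = posterior(pos, neg)
        p_vote1 = q * P_CORRECT + (1 - q) * (1 - P_CORRECT)
        h_after = (p_vote1 * entropy(posterior(pos + 1, neg))
                   + (1 - p_vote1) * entropy(posterior(pos, neg + 1)))
        return entropy(q) - h_after
    return max(range(len(counts)), key=expected_gain)


if __name__ == "__main__":
    # counts[i] = (votes for class 1, votes for class 0) on item i
    counts = [(2, 1), (0, 0), (3, 3), (5, 0)]
    for policy in (uniform_allocation, uncertainty_sampling, information_gain):
        print(policy.__name__, "-> next item:", policy(counts))
```

Under these simplifying assumptions the adaptive policies concentrate labels on items whose current posterior is still uncertain, whereas uniform allocation spreads the labelling budget evenly; the paper's bounds quantify the accuracy gap that results from this difference.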
Keywords:
Machine Learning: Active Learning
Machine Learning: Classification
Machine Learning: Probabilistic Machine Learning
Humans and AI: Human Computation and Crowdsourcing