Experimental Design under the Bradley-Terry Model
Yuan Guo, Peng Tian, Jayashree Kalpathy-Cramer, Susan Ostmo, J. Peter Campbell, Michael F. Chiang, Deniz Erdogmus, Jennifer Dy, Stratis Ioannidis
Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence
Main track. Pages 2198-2204.
https://doi.org/10.24963/ijcai.2018/304
Labels generated by human experts via comparisons exhibit smaller variance than traditional sample labels. However, collecting comparison labels is challenging over large datasets, as the number of possible comparisons grows quadratically with the dataset size. We study the following experimental design problem: given a budget of expert comparisons and a set of existing sample labels, determine which comparison labels to collect so as to yield the greatest improvement in classification. We study several experimental design objectives motivated by the Bradley-Terry model. The resulting optimization problems amount to maximizing submodular functions. We experimentally evaluate the performance of these methods over synthetic and real-life datasets.
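For context, the Bradley-Terry model posits that item $i$ is preferred to item $j$ with probability $P(i \succ j) = e^{s_i}/(e^{s_i} + e^{s_j})$, where $s_i$ is the latent score of item $i$. Since the paper's objectives reduce to maximizing submodular functions under a comparison budget, a standard approach is the greedy algorithm, which enjoys a $(1 - 1/e)$ approximation guarantee for monotone submodular objectives. The sketch below is purely illustrative and is not the paper's actual objective or method: the toy coverage objective and all names in it are hypothetical placeholders.

```python
from itertools import combinations

def greedy_select(candidates, objective, budget):
    """Greedily pick up to `budget` comparison pairs maximizing a
    monotone submodular `objective` (classic (1 - 1/e) greedy)."""
    selected = set()
    for _ in range(budget):
        # Choose the candidate pair with the largest marginal gain.
        best = max(
            (p for p in candidates if p not in selected),
            key=lambda p: objective(selected | {p}) - objective(selected),
            default=None,
        )
        if best is None:
            break
        selected.add(best)
    return selected

# Hypothetical usage: all pairs over 10 samples, with a toy coverage
# objective (number of distinct items touched), which is submodular.
items = range(10)
pairs = list(combinations(items, 2))
coverage = lambda S: len({i for p in S for i in p})
print(greedy_select(pairs, coverage, budget=5))
```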
Keywords:
Machine Learning: Active Learning
Machine Learning: Classification