Scaling Active Search using Linear Similarity Functions

Sibi Venkatesan, James K. Miller, Jeff Schneider, Artur Dubrawski

Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence
Main track. Pages 2878-2884. https://doi.org/10.24963/ijcai.2017/401

Active Search has become an increasingly useful tool in information retrieval problems where the goal is to discover as many target elements as possible using only limited label queries. With the advent of big data, there is a growing emphasis on the scalability of such techniques to very large and complex datasets. In this paper, we consider the problem of Active Search where we are given a similarity function between data points. We build on the Active Search on Graphs algorithm of Wang et al. [2013] and propose crucial modifications which allow it to scale significantly. Their approach selects points by minimizing an energy function over the graph induced by the similarity function on the data. Our modifications require the similarity function to be a dot-product between feature vectors of data points, equivalent to having a linear kernel for the adjacency matrix. This yields a dramatic speedup: for n data points, the original algorithm runs in O(n^2) time per iteration, while ours runs in only O(nr + r^2) given r-dimensional features. We also describe a simple alternate approach using a weighted-neighbor predictor which also scales well. In our experiments, we show that our method is competitive with existing semi-supervised approaches. We also briefly discuss conditions under which our algorithm performs well.
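To see where the speedup comes from, consider the following illustrative sketch (not the authors' full algorithm): with a linear kernel, the adjacency matrix factors as A = X X^T for an n-by-r feature matrix X, so any matrix-vector product A v can be evaluated as X (X^T v) in O(nr) time without ever materializing the n-by-n matrix. The variable names and sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 2000, 16                      # n data points, r-dimensional features
X = rng.standard_normal((n, r))      # feature matrix; A = X @ X.T implicitly
v = rng.standard_normal(n)           # some vector arising in the iteration

# Naive approach: form the dense n x n adjacency matrix, O(n^2) time/memory.
A = X @ X.T
slow = A @ v

# Low-rank approach: same product via two thin multiplications, O(nr) time.
fast = X @ (X.T @ v)

assert np.allclose(slow, fast)
```

Iterating only with such factored products is what drops the per-iteration cost from O(n^2) toward O(nr + r^2), since the remaining r-by-r computations cost at most O(r^2).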
Keywords:
Machine Learning: Active Learning
Machine Learning: Semi-Supervised Learning