Abstract

Learning a Distance Metric by Empirical Loss Minimization
Wei Bian, Dacheng Tao
In this paper, we study the problem of learning a metric and propose a loss function based metric learning framework, in which the metric is estimated by minimizing an empirical risk over a training set. Under mild conditions on the instance distribution and the loss function used, we prove that the empirical risk converges to its expected counterpart at rate O(1/\sqrt{n}), where n is the cardinality of the training set. In addition, under the assumption that the best metric minimizing the expected risk is bounded, we prove that the learned metric is consistent. Two example algorithms are derived from the proposed loss function based metric learning framework, one using a log loss function and the other a smoothed hinge loss function. Experimental results on data sets from the UCI machine learning repository suggest the effectiveness of the proposed algorithms.
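To make the framework concrete, the following is a minimal sketch of loss-based metric learning with a log loss: a Mahalanobis matrix M is estimated by minimizing an empirical risk over labeled training pairs, with a projection onto the PSD cone to keep the metric valid. The pair construction, the exact form of the loss, and the threshold parameter b are illustrative assumptions, not the paper's precise formulation.

```python
# Hypothetical sketch: learn a Mahalanobis metric by empirical loss
# minimization with a logistic (log) loss on labeled pairs.
# Assumed setup (not from the paper): y = +1 for similar pairs, -1 for
# dissimilar pairs; loss(d) = log(1 + exp(y * (d - b))) with threshold b.
import numpy as np

def pair_distances(X_i, X_j, M):
    # Squared Mahalanobis distance d_M(x_i, x_j) = (x_i - x_j)^T M (x_i - x_j)
    diff = X_i - X_j
    return np.einsum('nd,de,ne->n', diff, M, diff)

def empirical_risk(M, X_i, X_j, y, b=1.0):
    # Empirical risk: average log loss over the n training pairs.
    d = pair_distances(X_i, X_j, M)
    return np.mean(np.log1p(np.exp(y * (d - b))))

def learn_metric(X_i, X_j, y, lr=0.05, iters=200, b=1.0):
    n, dim = X_i.shape
    M = np.eye(dim)
    for _ in range(iters):
        d = pair_distances(X_i, X_j, M)
        # Per-pair derivative of the log loss w.r.t. the distance d.
        s = y / (1.0 + np.exp(-y * (d - b)))
        diff = X_i - X_j
        # Gradient w.r.t. M: average of s_n * (x_i - x_j)(x_i - x_j)^T.
        grad = np.einsum('n,nd,ne->de', s, diff, diff) / n
        M -= lr * grad
        # Project back onto the PSD cone so M defines a valid (pseudo)metric.
        w, V = np.linalg.eigh(M)
        M = (V * np.clip(w, 0.0, None)) @ V.T
    return M
```

The smoothed hinge loss variant mentioned in the abstract would follow the same pattern, swapping only the loss and its derivative in the two marked lines.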