LIFT: Multi-Label Learning with Label-Specific Features

Min-Ling Zhang

Abstract
Multi-label learning deals with the problem where each training example is represented by a single instance while being associated with a set of class labels. For an unseen example, existing approaches determine the relevance of each possible class label based on an identical feature set, i.e., the same instance representation of the unseen example is employed in the discrimination processes of all labels. However, this commonly used strategy may be suboptimal, as different class labels usually carry specific characteristics of their own, and it could be beneficial to exploit different feature sets for the discrimination of different labels. Based on this observation, we propose a new strategy for multi-label learning that leverages label-specific features, realized by a simple yet effective algorithm named LIFT. Briefly, LIFT constructs features specific to each label by conducting clustering analysis on that label's positive and negative instances, and then performs training and testing by querying the clustering results. Extensive experiments across sixteen diverse data sets clearly validate the superiority of LIFT over other well-established multi-label learning algorithms.
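
The label-specific feature construction sketched above can be illustrated with a small, hedged reimplementation of the idea rather than the paper's actual code: for each label, cluster its positive and negative instances separately, represent every instance by its distances to the resulting cluster centers, and train one binary classifier per label in that label-specific space. The cluster-count ratio, the use of scikit-learn's KMeans, and the choice of LinearSVC as the per-label learner are assumptions made here for illustration, not settings prescribed by the abstract.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def lift_features(X, y_label, ratio=0.1, random_state=0):
    # Sketch of label-specific feature construction (illustrative, not the paper's code):
    # cluster the positive and negative instances of one label separately,
    # then map every instance to its distances to all 2*m cluster centers.
    pos, neg = X[y_label == 1], X[y_label == 0]
    m = max(1, int(np.ceil(ratio * min(len(pos), len(neg)))))  # hypothetical cluster count
    km_pos = KMeans(n_clusters=m, n_init=10, random_state=random_state).fit(pos)
    km_neg = KMeans(n_clusters=m, n_init=10, random_state=random_state).fit(neg)
    centers = np.vstack([km_pos.cluster_centers_, km_neg.cluster_centers_])

    def transform(Z):
        # Euclidean distances from each instance in Z to every cluster center.
        return np.linalg.norm(Z[:, None, :] - centers[None, :, :], axis=2)

    return transform

# Toy usage: one binary classifier per label, each trained in its own feature space.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
Y = (rng.random((200, 3)) < 0.3).astype(int)  # synthetic multi-label targets (3 labels)

models = []
for k in range(Y.shape[1]):
    transform = lift_features(X, Y[:, k])
    clf = LinearSVC().fit(transform(X), Y[:, k])
    models.append((transform, clf))

X_test = rng.normal(size=(5, 10))
preds = np.column_stack([clf.predict(t(X_test)) for t, clf in models])
print(preds)  # one column of 0/1 predictions per label for each test instance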