Minimally Supervised Contextual Inference from Human Mobility: An Iterative Collaborative Distillation Framework

Jiayun Zhang, Xinyang Zhang, Dezhi Hong, Rajesh K. Gupta, Jingbo Shang

Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence
Main Track. Pages 2450-2458. https://doi.org/10.24963/ijcai.2023/272

Contextual information about trips and users in mobility data is valuable for mobile service providers seeking to understand their customers and improve their services. Existing inference methods require a large number of labels for training, which is hard to obtain in practice. In this paper, we study a more practical yet challenging setting: contextual inference from mobility data with minimal supervision (i.e., a few labels per class and massive unlabeled data). A typical solution is to apply semi-supervised methods that follow a self-training framework to bootstrap a model based on all features. However, with a limited labeled set, self-training carries a high risk of overfitting, leading to unsatisfactory performance. We propose STCOLAB, a novel collaborative distillation framework. At each iteration, it sequentially trains spatial and temporal modules under the supervision of ground-truth labels. In addition, it distills knowledge into the module being trained using the logits produced by the most recently trained module of the other modality, thereby mutually calibrating the two modules and combining knowledge from both modalities. Extensive experiments on two real-world datasets show that STCOLAB achieves significantly more accurate contextual inference than various baselines.
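The cross-modal distillation described above can be sketched as a combined training objective: a cross-entropy term on the few ground-truth labels plus a KL term that pulls the logits of the module being trained toward those of the other modality's latest module. This is a minimal illustrative sketch, not the paper's exact formulation; the weighting `alpha` and temperature `T` are hypothetical parameters introduced for illustration.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def colab_loss(student_logits, teacher_logits, labels, alpha=0.5, T=2.0):
    """Objective for the module currently being trained (e.g., spatial):
    (1 - alpha) * cross-entropy on ground-truth labels
    + alpha * T^2  * KL(teacher || student) on temperature-softened logits,
    where the "teacher" is the latest trained module of the other modality.
    alpha and T are illustrative, not values from the paper."""
    # Supervised term on the few labeled examples.
    p = softmax(student_logits)
    ce = -np.log(p[np.arange(len(labels)), labels]).mean()
    # Distillation term toward the other modality's predictions.
    ps = softmax(student_logits, T)
    pt = softmax(teacher_logits, T)
    kl = (pt * (np.log(pt) - np.log(ps))).sum(axis=-1).mean()
    return (1 - alpha) * ce + alpha * (T ** 2) * kl
```

In an iterative loop, the spatial module would minimize this loss using the temporal module's logits as the teacher, and vice versa in the next step, mutually calibrating the two modules.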
Keywords:
Data Mining: DM: Mining spatial and/or temporal data
Machine Learning: ML: Semi-supervised learning