Safe Weakly Supervised Learning
Yu-Feng Li
Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Early Career. Pages 4951-4955.
https://doi.org/10.24963/ijcai.2021/701
Weakly supervised learning (WSL) refers to learning from a large amount of weak supervision data. This includes i) incomplete supervision (e.g., semi-supervised learning); ii) inexact supervision (e.g., multi-instance learning); and iii) inaccurate supervision (e.g., label noise learning). Unlike supervised learning, which typically improves performance with more labeled data, WSL may sometimes even degrade performance with more weak supervision data. It is therefore desirable to study safe WSL, which robustly improves performance with weak supervision data. In this article, we share our understanding of the problem, from in-distribution data to out-of-distribution data, and discuss possible ways to alleviate it from the aspects of worst-case analysis, ensemble learning, and bi-level optimization. We also share some open problems to inspire future research.
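To make the worst-case-analysis idea concrete, the following is a minimal illustrative sketch, not the algorithm from the paper: given the predictions of several semi-supervised base learners and a supervised baseline, it searches for simplex weights whose combined prediction maximizes the worst-case accuracy gain over the baseline, under the simplifying assumption that the unknown ground truth coincides with one of the base learners' labelings. All function and variable names (safe_combine, base_preds, baseline) are hypothetical.

# Illustrative sketch of a worst-case-style safe combination (not the paper's method).
import itertools
import numpy as np


def accuracy_gain(pred, truth, baseline):
    """Gain of `pred` over `baseline` if `truth` were the real labels."""
    return np.mean(pred == truth) - np.mean(baseline == truth)


def safe_combine(base_preds, baseline, grid=21):
    """Search simplex weights maximizing the worst-case gain over the
    supervised baseline, assuming the ground truth is one of the base
    learners' labelings (a simplifying assumption for illustration)."""
    m = len(base_preds)
    best_w, best_worst = None, -np.inf
    ticks = np.linspace(0, 1, grid)           # coarse grid over the simplex
    for w in itertools.product(ticks, repeat=m):
        if not np.isclose(sum(w), 1.0):
            continue
        combined = np.sign(sum(wi * p for wi, p in zip(w, base_preds)))
        combined[combined == 0] = baseline[combined == 0]  # break ties with the baseline
        worst = min(accuracy_gain(combined, t, baseline) for t in base_preds)
        if worst > best_worst:
            best_worst, best_w = worst, np.array(w)
    return best_w, best_worst


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.choice([-1, 1], size=200)                        # unknown in practice
    baseline = np.where(rng.random(200) < 0.8, truth, -truth)    # supervised baseline, ~80% acc
    good = np.where(rng.random(200) < 0.9, truth, -truth)        # helpful semi-supervised learner
    bad = np.where(rng.random(200) < 0.6, truth, -truth)         # misleading semi-supervised learner
    w, worst_gain = safe_combine([good, bad], baseline)
    print("weights:", w, "worst-case gain:", round(worst_gain, 3))

Under the stated assumption, the optimized weights can never yield a combined prediction worse than the baseline in the worst case, which is the kind of safeness guarantee the abstract refers to; the actual approaches in the paper are more general.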
Keywords:
Machine Learning: Semi-Supervised Learning
Machine Learning: Weakly Supervised Learning