Non-Cheating Teaching Revisited: A New Probabilistic Machine Teaching Model
Cèsar Ferri, José Hernández-Orallo, Jan Arne Telle
Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence
Main Track. Pages 2973-2979.
https://doi.org/10.24963/ijcai.2022/412
Over the past decades in the field of machine teaching, several restrictions have been introduced to avoid ‘cheating’, such as collusion-free or non-clashing teaching. However, these restrictions forbid several teaching situations that we intuitively consider natural and fair, especially the learner’s ‘changes of mind’ as more evidence is given, which affect the likelihood of concepts and ultimately their posteriors. Under a new generalised probabilistic teaching, not only do these non-cheating constraints look too narrow, but we also show that the most relevant machine teaching models are particular cases of this framework: the consistency graph between concepts and elements simply becomes a joint probability distribution. We show a simple procedure that builds the witness joint distribution from the ground joint distribution. We prove a chain of relations, along with a theoretical lower bound, on the teaching dimension of the old and new models. Overall, this new setting is more general than the traditional machine teaching models, yet it captures a less abrupt and more intuitive notion of non-cheating teaching.
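To make the key idea concrete, here is a minimal toy sketch (not the paper’s actual construction) of how a 0/1 consistency graph between concepts and elements can be read as a joint probability distribution, and how a learner’s posterior over concepts then follows from conditioning; the matrix values and the `posterior` helper are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Hypothetical 0/1 consistency matrix: rows are concepts, columns are
# examples. Entry (i, j) = 1 means example j is consistent with concept i.
C = np.array([
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 1],
])

# Normalising the matrix turns it into a joint probability distribution
# over (concept, example) pairs, uniform over the consistent pairs.
P = C / C.sum()

def posterior(P, example):
    # Posterior over concepts given one observed example: the example's
    # column of the joint distribution, renormalised. Further evidence
    # would reshape this posterior, allowing the 'changes of mind'
    # that the abstract refers to.
    col = P[:, example]
    return col / col.sum()

print(posterior(P, 1))  # only concepts 0 and 1 are consistent with example 1
```

Under this reading, the classical consistency graph is just the special case of a joint distribution with uniform mass on consistent pairs; a general joint distribution can weight pairs unevenly.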
Keywords:
Machine Learning: Learning Theory
Machine Learning: Probabilistic Machine Learning