Learning Temporal Plan Preferences from Examples: An Empirical Study

Valentin Seimetz, Rebecca Eifler, Jörg Hoffmann

Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence
Main Track. Pages 4160-4166. https://doi.org/10.24963/ijcai.2021/572

Temporal plan preferences are natural and important in a variety of applications. Yet users often find it difficult to formalize their preferences. Here we explore the possibility of learning preferences from example plans. Focusing on one preference at a time, the user is asked to annotate examples as good/bad. We leverage prior work on LTL formula learning to extract a preference from these examples. We conduct an empirical study of this approach in an oversubscription planning context, using hidden target formulas to emulate the user preferences. We explore four different methods for generating example plans, and evaluate performance as a function of domain and formula size. Overall, we find that reasonable-size target formulas can often be learned effectively.
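To make the evaluation setup concrete, the following is a minimal sketch (not the paper's implementation) of how a hidden target formula can emulate a user's preference: plan traces are labeled good/bad according to whether they satisfy a simple LTL-style formula interpreted over finite traces. The tuple-based mini-syntax and the propositions `goal_a` and `hazard` are illustrative assumptions, not from the paper.

```python
# Illustrative sketch: label plan traces against a hidden target formula,
# as a stand-in for user annotations. The formula syntax here is a
# hypothetical mini-encoding, not the paper's representation.

def holds(formula, trace, i=0):
    """Evaluate a simple finite-trace LTL formula at position i.
    Supported forms:
      ('atom', p)   - proposition p holds in state trace[i]
      ('not', f)    - negation
      ('and', f, g) - conjunction
      ('F', f)      - 'eventually f' on the remaining trace
      ('G', f)      - 'always f' on the remaining trace
    """
    kind = formula[0]
    if kind == 'atom':
        return formula[1] in trace[i]
    if kind == 'not':
        return not holds(formula[1], trace, i)
    if kind == 'and':
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if kind == 'F':
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if kind == 'G':
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    raise ValueError(f"unknown operator {kind}")

# Hidden target preference: eventually reach 'goal_a' and never enter 'hazard'.
target = ('and', ('F', ('atom', 'goal_a')), ('G', ('not', ('atom', 'hazard'))))

# Each plan is a sequence of states; each state is the set of true propositions.
plans = [
    [set(), {'goal_a'}, set()],  # satisfies the target formula
    [{'hazard'}, {'goal_a'}],    # violates the safety part
    [set(), set()],              # never reaches the goal
]

labels = ['good' if holds(target, p) else 'bad' for p in plans]
print(labels)  # -> ['good', 'bad', 'bad']
```

An LTL learner would then be given these labeled traces and asked to recover a formula consistent with the annotations; the study measures how often the hidden target (or an equivalent formula) is found.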
Keywords: Planning and Scheduling: Planning and Scheduling