Explanatory Capabilities of Large Language Models in Prescriptive Process Monitoring (Extended Abstract)
Kateryna Kubrak, Lana Botchorishvili, Fredrik Milani, Alexander Nolte, Marlon Dumas
Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence
Sister Conferences Best Papers. Pages 10901-10905.
https://doi.org/10.24963/ijcai.2025/1213
Prescriptive Process Monitoring (PrPM) systems recommend interventions in ongoing business process cases to improve performance. However, performance gains only materialize if users follow the recommendations. Prior research has shown that users are more likely to follow recommendations when they understand them. In this paper, we explore the use of Large Language Models (LLMs) to generate explanations for PrPM recommendations. We developed a prompting method based on typical user questions and integrated it into an existing PrPM system. Our evaluation indicates that LLMs can help users of PrPM systems to better understand the recommendations, and that the generated explanations have sufficient detail and fulfill user expectations. However, the explanations fall short in addressing the underlying "why" and do not always support users in assessing the trustworthiness of the recommendations.
Keywords:
Sister Conferences Best Papers: Multidisciplinary Topics and Applications
Sister Conferences Best Papers: Humans and AI
