Authors:
Ana Maita (1); Marcelo Fantinato (1); Sarajane Peres (1) and Fabrizio Maggi (2)
Affiliations:
(1) School of Arts, Sciences and Humanities, University of São Paulo, Rua Arlindo Bettio 1000, São Paulo, Brazil
(2) Faculty of Computer Science, Free University of Bozen-Bolzano, Bozen-Bolzano, Italy
Keyword(s):
Process Mining, Event Logs, Explainable Machine Learning, XAI, Interpretable Machine Learning, Predictive Process Mining.
Abstract:
The majority of state-of-the-art predictive process monitoring approaches are based on machine learning techniques. However, many machine learning techniques are not inherently explainable, leaving business process analysts unable to interpret the predictions made about the outcome of a process case or to understand the rationale behind such predictions. In this paper, we introduce a business-oriented approach to visually support the interpretability of the results in predictive process monitoring. We take as input the results produced by the SP-LIME interpreter and project them onto a process model. The resulting enriched model shows which features contribute, and to what degree, to the predicted result. We exemplify the proposed approach by visually interpreting the results of a classifier that predicts the outcome of a claim management process, whose claims can be accepted or rejected.
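The abstract builds on SP-LIME, which selects a small, representative set of per-case explanations that together cover the globally important features. The following is a minimal, self-contained sketch of that submodular-pick idea (not the authors' implementation, and independent of the `lime` library): explanations are toy {feature: weight} dicts with hypothetical feature names, global feature importance is the square root of summed absolute weights, and cases are picked greedily to maximize coverage of important features.

```python
import math

def submodular_pick(explanations, budget):
    """Greedily choose up to `budget` explanations that together cover
    the globally important features (the SP-LIME pick objective).
    `explanations` is a list of {feature: weight} dicts, one per case."""
    features = sorted({f for e in explanations for f in e})
    # Global importance I_j = sqrt(sum_i |W_ij|), as in SP-LIME.
    importance = {f: math.sqrt(sum(abs(e.get(f, 0.0)) for e in explanations))
                  for f in features}

    def coverage(selected):
        # Total importance of every feature touched by the selected cases.
        covered = {f for i in selected
                   for f, w in explanations[i].items() if w != 0.0}
        return sum(importance[f] for f in covered)

    selected = []
    for _ in range(min(budget, len(explanations))):
        # Pick the case that adds the most coverage (greedy step).
        best = max((i for i in range(len(explanations)) if i not in selected),
                   key=lambda i: coverage(selected + [i]))
        selected.append(best)
    return selected

# Toy per-case explanations (hypothetical claim-process feature names).
exps = [
    {"activity=Contact Hospital": 0.4, "amount>5000": -0.2},
    {"activity=Contact Hospital": 0.3},
    {"claimant_age<25": 0.5, "amount>5000": -0.1},
]
picked = submodular_pick(exps, budget=2)
print(picked)  # cases 0 and 2 jointly cover all three features
```

The selected cases' feature weights are what the paper's approach would then aggregate and project onto the process model, coloring each activity or decision point by its contribution to the predicted outcome.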