
involving stages of discovery to deeply understand the
problem space.
During the discover phase, we found that explainability is not a binary property: something can be explainable to one group of users while not necessarily being explainable to another. As a result, adding good explanations requires studying the target users in terms of their needs and goals when interacting with the entire XAI system. In the define phase of the problem space, we found that the main goals of users of DP applications are to 1) make better decisions and 2) trust the predictions they get from the system. More specifically, they want to know why certain predictions are made, and what happens to a prediction if certain features change.
In conclusion, this paper contributes to the evolving field of explainable AI in supply chain management by providing a user-focused description of explainability and identifying the specific needs of demand planning application users. This discovery lays a foundation for implementing explainable AI solutions that can enhance user trust, satisfaction, and decision-making in demand planning processes.