Authors:
Inti Mendoza¹; Vedran Sabol¹,² and Johannes Hoffer³
Affiliations:
¹ Know-Center GmbH, Sandgasse 36, Graz, Austria
² Graz University of Technology - Institute of Interactive Systems and Data Science, Sandgasse 36, Graz, Austria
³ voestalpine BÖHLER Aerospace GmbH & Co KG, Mariazellerstraße 25, Kapfenberg, Austria
Keyword(s):
eXplainable AI, human-AI Interface Design, Explanations, Personalization, Process Engineering.
Abstract:
Advanced machine learning models now see use in sensitive fields where incorrect predictions have serious consequences. Unfortunately, as models grow in accuracy and complexity, humans can no longer verify or validate their predictions. This opacity foments distrust and reduces model usage. eXplainable AI (XAI) provides insights into AI models' predictions. Nevertheless, scholarly opinion on XAI ranges from "absolutely necessary" to "useless, use white-box models instead". In modern Industry 5.0 environments, AI sees use in production process engineering and optimisation. However, XAI currently targets the needs of AI experts, not those of domain experts or process operators. Our Position is: XAI tailored to user roles and following social science's guidelines on explanations is crucial in AI-supported production scenarios and for employee acceptance and trust. Our industry partners allow us to analyse user requirements for three identified user archetypes - the Machine Operator, Field Expert, and AI Expert - and to experiment with actual use cases. To test our Position, we designed an (X)AI-based visual UI through multiple review cycles with industry partners. Looking ahead, we can test and evaluate the impact of personalised XAI in Industry 5.0 scenarios, quantify its benefits, and identify research opportunities.