Authors:
Jasper van der Waa; Jurriaan van Diggelen; Mark Neerincx and Stephan Raaijmakers
Affiliation:
TNO, Netherlands
Keyword(s):
Machine learning, Trust, Certainty, Uncertainty, Explainable Artificial Intelligence.
Related Ontology Subjects/Areas/Topics:
Artificial Intelligence; Computational Intelligence; Evolutionary Computing; Industrial Applications of AI; Knowledge Discovery and Information Retrieval; Knowledge-Based Systems; Machine Learning; Soft Computing; Symbolic Systems; Uncertainty in AI
Abstract:
End-users of machine learning-based systems benefit from measures that quantify the trustworthiness of the
underlying models. Measures like accuracy provide a general sense of model performance, but offer no
detailed information on specific model outputs. Probabilistic outputs, on the other hand, express such details,
but they are not available for all types of machine learning, and can be heavily influenced by bias and lack of
representative training data. Further, they are often difficult for non-experts to understand. This study proposes
an intuitive certainty measure (ICM) that produces an accurate estimate of how certain a machine learning
model is for a specific output, based on errors it made in the past. It is designed to be easily explainable to
non-experts and to act in a predictable, reproducible way. ICM was tested on four synthetic tasks solved by
support vector machines, and a real-world task solved by a deep neural network. Our results show that ICM is
both more accurate and intuitive than related approaches. Moreover, ICM is neutral with respect to the chosen
machine learning model, making it widely applicable.
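The abstract only sketches the idea of deriving a per-output certainty from a model's past errors; the exact ICM definition is given in the paper itself. As a rough illustration of that general idea (not the authors' method), the following minimal Python sketch estimates certainty for a new input as one minus the error rate the model showed on similar held-out examples. The classifier, neighbourhood size k, and the helper name certainty are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import SVC

# Train a classifier and record where it erred on held-out data.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = SVC().fit(X_train, y_train)
errors = (model.predict(X_val) != y_val).astype(float)  # 1.0 marks a past mistake

# Index the validation inputs so we can look up the most similar past cases.
nn = NearestNeighbors(n_neighbors=25).fit(X_val)

def certainty(x, k=25):
    """Illustrative 'past-error' certainty: 1 - local error rate among
    the k validation points most similar to x."""
    _, idx = nn.kneighbors(x.reshape(1, -1), n_neighbors=k)
    return 1.0 - errors[idx[0]].mean()

x_new = X_val[0]
print(f"prediction: {model.predict(x_new.reshape(1, -1))[0]}, "
      f"certainty: {certainty(x_new):.2f}")
```

Because such an estimate only queries recorded errors of an already-trained model, it is independent of the model family, which is the model-neutrality property the abstract claims for ICM.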