Authors: Olurotimi Seton 1; Pegah Haghighi 1; Mohammed Alshammari 2 and Olfa Nasraoui 1
Affiliations: 1 Knowledge Discovery and Web Mining Lab, Computer Science and Engineering Dept., University of Louisville, U.S.A.; 2 Department of Computer Science, Faculty of Computing and Information Technology, Northern Border University, Arar, Saudi Arabia
Keyword(s): Matrix Factorization, Model Explainability, User-Generated Tags.
Abstract: Black-box AI models tend to be more accurate but less transparent and scrutable than white-box models. This poses a limitation for recommender systems that rely on black-box models such as Matrix Factorization (MF). Explainable Matrix Factorization (EMF) models are “explainable” extensions of MF, a state-of-the-art technique widely used for its accuracy and its flexibility in learning from sparse data. EMF can incorporate explanations derived, by design, from user or item neighborhood graphs, among others, into the model training process, thereby making its recommendations explainable. So far, an EMF model can learn to produce only one explanation style, which in turn limits the number of recommendations with computable explanation scores. In this paper, we propose a framework for EMFs with multiple styles of explanation, based on ratings and tags, by incorporating EMF algorithms that use scores derived from tag-centric graphs, thus connecting rating-neighborhood-based EMF techniques to tag-based explanations. We used pre-calculated explainability scores that had previously been validated in user studies evaluating user satisfaction with each style individually. Our evaluation experiments show that our proposed methods provide multiple explanation styles without sacrificing recommendation accuracy.
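To make the abstract's central mechanism concrete, the sketch below illustrates how an explainability score can be folded into MF training. It is a minimal, illustrative Python implementation assuming the commonly used EMF objective (squared rating error, L2 regularization, and a term that pulls a user's and an item's latent factors together in proportion to a pre-computed score E[u, i]); the function name emf_sgd, the hyperparameter values, and the single matrix E are illustrative assumptions, not the paper's exact formulation.

import numpy as np

# Minimal EMF sketch with stochastic gradient descent.
# Assumed per-rating objective, for each observed rating r_ui:
#   loss = (r_ui - p_u . q_i)^2
#        + beta * (||p_u||^2 + ||q_i||^2)      # standard L2 regularization
#        + lam  * E[u, i] * ||p_u - q_i||^2    # explainability term
# E[u, i] is a pre-computed explainability score (e.g., derived from a
# rating-neighborhood or tag-centric graph); all names here are illustrative.

def emf_sgd(ratings, E, n_users, n_items, k=20, lr=0.01,
            beta=0.02, lam=0.1, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
    Q = rng.normal(scale=0.1, size=(n_items, k))   # item latent factors
    for _ in range(epochs):
        for u, i, r in ratings:                    # observed (user, item, rating) triples
            err = r - P[u] @ Q[i]
            # Gradients: prediction error, L2 shrinkage, and a pull that
            # draws p_u and q_i together when (u, i) is highly explainable.
            grad_pu = -2 * err * Q[i] + 2 * beta * P[u] + 2 * lam * E[u, i] * (P[u] - Q[i])
            grad_qi = -2 * err * P[u] + 2 * beta * Q[i] - 2 * lam * E[u, i] * (P[u] - Q[i])
            P[u] -= lr * grad_pu
            Q[i] -= lr * grad_qi
    return P, Q

In this sketch, E would hold the pre-calculated, user-study-validated explainability scores mentioned in the abstract; the multi-style framework proposed in the paper would draw such scores from both rating-neighborhood and tag-centric graphs, with E standing in here for whichever style is in use.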