recommendation explanation styles, such as Neighbor Style Explanation (NSE), Influence Style Explanation (ISE), and Keyword Style Explanation (KSE), have also been explored by (Symeonidis et al., 2008). Herlocker et al. (Herlocker et al., 2000) argued that explanations are needed to enhance the performance of CF recommender systems. In their work, they explored 21 explanation interfaces in which the recommended items were removed and only the explanations were kept for users to choose from. They found that, from a promotion point of view, the interface that users voted for most was a histogram-like explanation interface. Other interfaces included past performance, a table of neighbors' ratings, and similarity to other movies the user had rated. Later, (Vig et al., 2009) used community tags to explain recommendations. The researchers categorized explanations into three types: item-based, where an explanation is created from other similar items; user-based, where the system relies on other similar users to explain its recommendation; and feature-based, where features, such as genre, are used to justify the output. It is worth mentioning that this work used the KSE explanation style. An example explanation could read: "This movie is being recommended to you because it is tagged with mystery, which is present in the tags of movies you liked before." Another study that used KSE as the explanation style is (McCarthy et al., 2004), in which the researchers designed a Content-based Filtering model for recommending digital cameras. This system explains recommendations by converting cameras' components, such as memory size and resolution, into sentences. Then, users can choose which set of the explained features meets their requirements.
In (Zhang et al., 2014), the authors built a CF recommender system that relies on the Latent Factor Models technique to produce accurate recommendations with attached explanations, which are generated using sentiment analysis of users' reviews. Moreover, a solution was proposed in (Abdollahi and Nasraoui, 2016) and (Abdollahi and Nasraoui, 2016b) for black-box MF that uses the ratings in a user's neighborhood to generate explanations. An explanation is generated based on how the neighbors rated the recommended item, so the explanation style is NSE.
3 PROPOSED METHOD
Semantic data represents a rich source of knowledge about both users and items. For instance, it is possible to identify users who clearly show an interest in movies where certain actors play leading roles. Such knowledge can be used to generate meaningful explanations for recommended movies. However, to maintain transparency, these explanations should be consistent with the actual MF model that is built from the rating data. In other words, we would like to build an MF model that takes into account not only user preference ratings but also potentially meaningful explanations for these ratings. For this purpose, we propose including the available semantic knowledge, which can later be used for explanations, in the process of learning a low-dimensional latent space representation of users and items. This process needs to incorporate information from two different domains, namely the domain of semantic knowledge for the explanations and the domain of ratings for the recommendations. One approach to accomplishing this multi-domain task is Asymmetric MF (BenAbdallah et al., 2010; Abdollahi and Nasraoui, 2014), which is a two-step, multi-domain process. In the first step, a semantic latent space model is built using the explanation semantics of either or both users and items. Then, the semantic latent space vectors from the first step are transferred to the second MF step, where users' explicit preferences, such as ratings, are used to update the final recommendation model. In this way, the final latent space vectors will strive to reconstruct the ratings used as input data in the second step, while being anchored in the semantic explanation data used in the first step of the factorization.
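As an illustration of this two-step process, the minimal Python/NumPy sketch below first factorizes the semantic matrices (made precise in Eqs. (1) and (2) below) to obtain semantic latent vectors, then warm-starts a second factorization on the ratings from those vectors. The SGD factorizer, the toy shapes, and all hyperparameters are illustrative assumptions rather than the exact formulation of the cited work.

import numpy as np

def factorize(M, k, steps=100, lr=0.01, reg=0.02, P=None, Q=None):
    # Plain SGD matrix factorization over the observed (nonzero)
    # entries of M, so that M is approximated by P @ Q.T.
    rng = np.random.default_rng(0)
    n, m = M.shape
    P = rng.normal(scale=0.1, size=(n, k)) if P is None else P.copy()
    Q = rng.normal(scale=0.1, size=(m, k)) if Q is None else Q.copy()
    rows, cols = M.nonzero()
    for _ in range(steps):
        for a, b in zip(rows, cols):
            pa = P[a].copy()
            err = M[a, b] - pa @ Q[b]
            P[a] += lr * (err * Q[b] - reg * pa)
            Q[b] += lr * (err * pa - reg * Q[b])
    return P, Q

# Step 1 (semantic domain): learn user and item latent vectors from
# the semantic matrices; random stand-ins replace Eqs. (1) and (2).
rng = np.random.default_rng(1)
S_U = rng.integers(0, 3, size=(100, 20)).astype(float)  # users x features (Eq. (2), transposed)
S_I = rng.integers(0, 2, size=(50, 20)).astype(float)   # items x features (Eq. (1), transposed)
P_sem, _ = factorize(S_U, k=10)  # user vectors anchored in semantics
Q_sem, _ = factorize(S_I, k=10)  # item vectors anchored in semantics

# Step 2 (rating domain): update the model on explicit ratings,
# starting from the semantic vectors so that the final factors
# remain anchored in the explanation semantics of step 1.
R = rng.integers(0, 6, size=(100, 50)).astype(float)    # toy rating matrix
P, Q = factorize(R, k=10, P=P_sem, Q=Q_sem)
pred = P @ Q.T  # reconstructed ratings used for recommendation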
The flowchart of the proposed method, namely Asymmetric Semantic Explainable MF with a User-Item-based (ASEMF UIB) semantic explainability graph, is shown in Figure 1. The method consists of two phases: the knowledge foundation phase and the model-building phase. In the first phase (Knowledge Foundation), both the semantic explainability graph and the known ratings are prepared to be used by the model-building algorithm in the second phase, which is devoted to learning the MF model using these semantics. The first semantic explainability graph, relating all users to all items, is constructed based on a specific semantic feature (such as the actors of movie items).
First, an item-by-semantic-feature matrix is built as follows:

S^{I}_{f,i} =
\begin{cases}
1 & \text{if } f \text{ is possessed by } i,\\
0 & \text{otherwise,}
\end{cases}
\tag{1}
where f represents a semantic feature, such as an actor; i denotes an item (in this paper, a movie); and I is the set of all items. We then compute a second matrix for each user and semantic feature as follows:
S^{U}_{f,u} =
\begin{cases}
N & \text{if } f \text{ is possessed by items liked by } u,\\
0 & \text{otherwise.}
\end{cases}
\tag{2}
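As a concrete illustration of Eqs. (1) and (2), the short sketch below builds both matrices from a hypothetical movie-actor mapping and hypothetical sets of liked items. It reads N in Eq. (2) as the number of items liked by user u that possess feature f, which is one plausible interpretation of the notation; the movie and user names are invented for the example.

import numpy as np

# Hypothetical item-to-feature data: each movie mapped to its actors.
item_features = {
    "Se7en":      {"Brad Pitt", "Morgan Freeman"},
    "Fight Club": {"Brad Pitt", "Edward Norton"},
    "Unforgiven": {"Morgan Freeman", "Clint Eastwood"},
}
items = sorted(item_features)
features = sorted(set().union(*item_features.values()))
f_idx = {f: r for r, f in enumerate(features)}

# Eq. (1): S_I[f, i] = 1 if feature f is possessed by item i, else 0.
S_I = np.zeros((len(features), len(items)), dtype=int)
for c, it in enumerate(items):
    for f in item_features[it]:
        S_I[f_idx[f], c] = 1

# Hypothetical "liked" sets: the items each user rated highly.
user_likes = {"alice": {"Se7en", "Fight Club"}, "bob": {"Unforgiven"}}
users = sorted(user_likes)

# Eq. (2): S_U[f, u] = N, read here as the number of items liked by
# user u that possess feature f (entries stay 0 when no liked item has f).
S_U = np.zeros((len(features), len(users)), dtype=int)
for c, u in enumerate(users):
    for it in user_likes[u]:
        for f in item_features[it]:
            S_U[f_idx[f], c] += 1
# e.g., the "Brad Pitt" row for alice equals 2, since both of her
# liked movies feature him.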