and unexpectedness, and it outperformed other collaborative and content-based recommendation algorithms. An interesting characteristic of their study is that surprise was measured actively, by analysing the users' facial expressions with Noldus FaceReader™. In this way, implicit feedback about the users' reactions to the recommendations they receive is gathered (de Gemmis et al., 2015).
In their model for news recommendation, Jenders et al. (2015) propose and compare several ranking algorithms and models. Their serendipitous ranking uses a boosting algorithm to re-rank articles that were previously ranked by an unexpectedness model and by a model based on the cosine similarity between candidate items and a source article. This ranking system achieved the highest mean surprise rating per participant.
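While the unexpectedness model itself is not detailed here, the cosine-similarity component of such a ranking can be sketched as follows (the function name and the sparse term-weight representation, e.g. TF-IDF vectors, are illustrative assumptions, not the implementation of Jenders et al.):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two sparse term-weight vectors,
    represented as dicts mapping term -> weight (e.g. TF-IDF)."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    norm_a = math.sqrt(sum(w * w for w in a.values()))
    norm_b = math.sqrt(sum(w * w for w in b.values()))
    # Two empty or zero vectors are treated as completely dissimilar.
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

Articles close to the source article score near 1, while articles sharing no terms with it score 0; a re-ranker can then combine this score with an unexpectedness score.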
In his study, Reviglio states that serendipity cannot be created on demand (Reviglio, n.d.). Instead, it should be cultivated by creating opportunities for it, and such opportunities arise in learning environments that can be physical or digital. He elaborates on this concept through social media, arguing that by pushing users to burst out of their bubble we give people the power to discover, and that in doing so we strike a balance between freedom and mystery. Along the same lines, Sun et al. observed that microblogging communities provide a suitable context in which to study the presence and effect of serendipity (Sun et al., n.d.). In fact, their experiment revealed a high ratio of serendipity due to retweeting, and they remarked that this serendipitous diffusion of information positively affects users' activity and engagement.
Some practitioners are trying to create systems whose design enhances serendipity; two examples are Google's theoretical serendipity engine and eBay's experiment with serendipitous shopping (Sun et al., n.d.). Another recommender framework
that tries to introduce serendipity is Auralist (Zhang
et al., 2012). This system attempts not only to
balance between accuracy, diversity, novelty and
serendipity in the recommendation of music, but
also to improve them simultaneously. Observation of the system shows that users willingly sacrifice some accuracy in order to improve all the rest.
In order to better expect the unexpected, Adamopoulos and Tuzhilin proposed a method that generates unexpected recommendations while maintaining accuracy (Adamopoulos and Tuzhilin, n.d.). Since we used their algorithm in our study, we explain it in more detail later.
3 IMPLEMENTATION
ENVIRONMENT
In this section, we present the algorithm used, followed by the dataset.
3.1 Strategies
In order to determine the optimal number of serendipitous recommendations within an accurate recommendation list, we started by choosing an algorithm for each of our two strategies: the base strategy and the serendipity strategy. For the base strategy, which is supposed to generate accurate recommendations, we picked a non-personalized single-heuristic approach based on popularity: items are selected in descending order of popularity (i.e. number of ratings).
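As an illustration, such a popularity baseline can be sketched as follows (the data layout and function name are our own assumptions, not the paper's code):

```python
from collections import Counter

def popularity_recommendations(ratings, k=10):
    """Recommend the k items with the most ratings.

    `ratings` is an iterable of (user_id, item_id, rating) triples;
    only the number of ratings per item matters, not their values.
    """
    counts = Counter(item for _, item, _ in ratings)
    # most_common returns (item, count) pairs in descending count order.
    return [item for item, _ in counts.most_common(k)]
```

Because the ranking ignores the user entirely, every user receives the same list, which is what makes this strategy non-personalized.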
As for the serendipity strategy, which is personalized, it takes three factors into consideration when selecting an item for the recommendation list: quality, unexpectedness and utility. Restrictions and boundaries are imposed to test whether an item's quality is above a certain lower limit, and whether it is far enough from the user's expectations (but not too far).
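A minimal sketch of this selection rule follows; the scoring functions, threshold values and names are illustrative placeholders, not the actual definitions used in our implementation:

```python
def select_serendipitous(candidates, quality, unexpectedness, utility,
                         q_min=3.5, u_low=0.3, u_high=0.8, k=2):
    """Pick up to k items whose quality exceeds a lower bound and whose
    unexpectedness lies inside a band: far enough from the user's
    expectations, but not too far. All thresholds are placeholders.
    """
    eligible = [i for i in candidates
                if quality(i) >= q_min
                and u_low <= unexpectedness(i) <= u_high]
    # Among the eligible items, prefer those with the highest utility.
    return sorted(eligible, key=utility, reverse=True)[:k]
```

The band on unexpectedness is what rules out both obvious items (too close to the user's profile) and irrelevant ones (too far from it).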
Six cases were subject to our testing. In each case, we varied the number of recommendations generated by each of the two strategies described above. Starting from case one, where all items are generated by the base strategy, through to the last case, where all items are serendipitous, we changed the numbers of items as follows:
• Case 1: Strategy_10B_0S
10 recommendations from the base strategy
No recommendation from the serendipity strategy
• Case 2: Strategy_8B_2S
8 recommendations from the base strategy
2 recommendations from the serendipity strategy
• Case 3: Strategy_6B_4S
6 recommendations from the base strategy
4 recommendations from the serendipity strategy
• Case 4: Strategy_4B_6S
4 recommendations from the base strategy
6 recommendations from the serendipity strategy
• Case 5: Strategy_2B_8S