alyzing correlations of the behaviour within groups
of trustees, which makes it robust against malicious,
noisy, or inaccurate third-party information. The
MET (Jiang et al., 2013) model resists unfair rating
attacks by using evolutionary operators to generate a
trust network over time. The ACT model (Yu et al.,
2014) uses reinforcement learning to cope with biased
testimonies by automatically adjusting critical param-
eters.
However, classical models still have many draw-
backs. For example, BRS and iCLUB are vul-
nerable to Sybil attacks; TRAVOS and HABIT are
vulnerable to Camouflage attacks; and the personalized
model (Zhang and Cohen, 2008) is susceptible to
Sybil Whitewashing attacks. Therefore, we propose
a Ranking-based Partner Selection (RPS) model to
solve the challenging problem of unfair rating attacks.
RPS has three advantages: (1) it introduces the ranking of trustees as a supplement to ratings, which improves the accuracy of partner selection, especially in environments with a high ratio of dishonest advisors; (2) it introduces an online learning method, which updates model parameters based on feedback in real time; (3) it introduces behaviour monitoring, which helps to cope with dynamically changing attacks such as camouflage.
The rest of this paper is organized as follows. Sec-
tion 2 introduces related work. Section 3 describes
the problem and presents the formal definitions. Sec-
tion 4 describes the principle of the model and gives
the detailed design of the Partner Selection and Parameter Adjustment modules. Section 5 presents the experimental settings and results. Section 6 concludes the paper and outlines future work.
2 RELATED WORK
In recent years, some researchers have used information theory to cope with unfair rating attacks. For example, the ITC model (Wang et al., 2015) uses two information-theoretic measures of recommendation quality: the true observations (true interaction history) of the advisor about the seller, and the true integrity (trustworthiness) of the seller. In addition, ITC considers two types of worst-case unfair rating attacks performed by advisors. Experiments show that recommendations may still carry information even under worst-case unfair rating attacks. Therefore, ITC outperforms TRAVOS (Teacy et al., 2006), BLADE (Regan et al., 2006), and MET (Jiang et al., 2013), which cannot provide accurate trust evaluation under worst-case unfair rating attacks.
Wang et al. (Wang et al., 2019) propose a probabilistic model to solve the problem of unfair rating
attacks, which applies information theory to measure
the impact of attacks. In particular, the model identi-
fies the attack with the worst impact. The paper con-
sists of two parts. First, attacks brought by honest
and objective advisors are studied, and a probabilistic
model and an information-leakage method are used to
study the unfair rating attacks. Then, the worst-case
attack strategies are found. Second, attacks brought
by honest but subjective advisors are investigated, and
the results are compared with the earlier ones. Experiments show that subjectivity makes it easier for attackers to hide the truth completely, and that more subjective ratings make a system less robust against unfair rating attacks.
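The information-theoretic intuition above — that ratings may still leak information about a seller even under a worst-case attack — can be illustrated with a small mutual-information computation. This is a sketch with hypothetical numbers, not ITC's actual measure: a rating that is *always* flipped is just as informative as a truthful one, whereas a rating independent of the truth carries nothing.

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits from a joint distribution given as {(x, y): p}."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    mi = 0.0
    for (x, y), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (px[x] * py[y]))
    return mi

def joint_dist(report_accuracy):
    """Seller integrity X in {good, bad} with a uniform prior;
    the advisor's rating Y matches X with the given probability."""
    joint = {}
    for x in ("good", "bad"):
        for y in ("good", "bad"):
            p_y = report_accuracy if x == y else 1 - report_accuracy
            joint[(x, y)] = 0.5 * p_y
    return joint

print(mutual_information(joint_dist(0.9)))  # mostly honest advisor: ~0.53 bits
print(mutual_information(joint_dist(0.5)))  # rating independent of truth: 0 bits
print(mutual_information(joint_dist(0.0)))  # always-flipping attacker: 1 bit
```

The always-flipping attacker leaks full information, because a trustor who knows the flipping strategy can simply invert the ratings; the least informative advisor is the one whose ratings are uncorrelated with the truth.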
In addition, some researchers construct robust models that are simple to implement to solve unfair rating problems. For example, the ITEA model (Parhizkar et al., 2019; Parhizkar et al., 2020) aims to cope with deceptive agents: the learner aggregates predictions made by a group of experts (advisors) in a weighted average, and the weights are updated based on the most recent forecasts. ITEA can neglect the individual losses incurred by advisors in previous interactions because the weights already reflect the past performance of advisors cumulatively. Therefore, ITEA is simpler, more efficient, and more robust than TRAVOS, MET, and ACT. Because the ITEA model is simple to implement and performs better than current models, we use it as a comparison model.
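The aggregate-then-reweight loop described above can be sketched as a multiplicative-weights forecaster. The squared-error loss and the learning rate `eta` below are our own assumptions for illustration (ITEA's exact update rule may differ); the point is that each weight encodes an advisor's cumulative performance without storing its loss history.

```python
import math

def aggregate(weights, forecasts):
    """Weighted-average prediction from the advisors' forecasts."""
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, forecasts)) / total

def update(weights, forecasts, outcome, eta=0.5):
    """Each weight decays exponentially with the advisor's most recent
    squared-error loss, so past losses need not be stored individually."""
    return [w * math.exp(-eta * (f - outcome) ** 2)
            for w, f in zip(weights, forecasts)]

weights = [1.0, 1.0, 1.0]       # honest, noisy, and deceptive advisor
forecasts = [0.9, 0.6, 0.1]     # their (fixed) predictions each round
for _ in range(20):             # the true outcome is 1.0 every round
    weights = update(weights, forecasts, 1.0)
print(aggregate(weights, forecasts))  # close to the honest advisor's 0.9
```

After a few rounds the deceptive advisor's weight is driven toward zero, so its forecasts stop influencing the aggregate — the robustness property the related work attributes to this family of models.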
3 PROBLEM DESCRIPTION AND
DEFINITIONS
3.1 Definitions
We use a Multi-Agent System (MAS) to represent
the partner selection environments composed of three
types of agents: trustor, trustee and advisor. The
formal definitions are presented below.
Definition 1. Trustees represent agents willing to offer services to perform tasks, defined as S = {s_j | j = 1, ..., m}. Each trustee s_j has a reliability rb_j ∈ [0, 1], representing the probability that s_j provides qualified services.
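As an illustration, Definition 1 can be simulated by treating each interaction with a trustee as a Bernoulli trial with success probability rb_j (the class and field names below are our own, not part of the model):

```python
import random
from dataclasses import dataclass

@dataclass
class Trustee:
    """A service provider s_j with reliability rb_j in [0, 1]."""
    name: str
    reliability: float  # rb_j: probability of delivering a qualified service

    def provide_service(self, rng):
        # Each interaction succeeds with probability rb_j (a Bernoulli trial).
        return rng.random() < self.reliability

rng = random.Random(42)
s1 = Trustee("s_1", reliability=0.8)
outcomes = [s1.provide_service(rng) for _ in range(1000)]
print(sum(outcomes) / 1000)  # empirical success rate, close to rb_1 = 0.8
```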
Definition 2. Trustors represent agents seeking services to perform tasks, defined as B = {b_i | i = 1, ..., x}.
Definition 3. Advisors represent agents having direct interactions with trustees and willing to share information with trustors, defined as A = {a_k | k = 1, ..., n}. Each advisor has a label c ∈ {0, 1, ..., y}, where c = 0
ICAART 2023 - 15th International Conference on Agents and Artificial Intelligence