resources are stochastically distributed among all the
agents with the same probability. In this way, malicious
agents also have new resources to share, and they send
out inauthentic files only for those resources they do
not own. In the idealized world modelled in this
simulation, since the agents are 50 malicious and 50
loyal, and since agents with a higher reputation are
preferred when a file is requested, malicious agents'
reputations soar unchecked, and a high percentage of the
files in the system are inauthentic (about 63%).
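The baseline setup described above can be sketched as a minimal agent-based loop. Everything below (the `Agent` class, the population and resource-pool sizes, the reward of one reputation point per upload, and the proportional preference rule) is an illustrative assumption, not the paper's actual model code:

```python
import random

rng = random.Random(42)

N_AGENTS = 100      # assumed: 50 malicious + 50 loyal, as in the text
N_RESOURCES = 200   # assumed size of the resource pool
STEPS = 2000

class Agent:
    def __init__(self, malicious):
        self.malicious = malicious
        self.reputation = 1.0
        self.owned = set()

agents = [Agent(malicious=(i < 50)) for i in range(N_AGENTS)]

# Resources are stochastically distributed: every agent receives each
# resource with the same probability, so malicious agents also hold
# authentic files they can legitimately share.
for r in range(N_RESOURCES):
    for a in agents:
        if rng.random() < 0.1:
            a.owned.add(r)

inauthentic = 0
for _ in range(STEPS):
    requester = rng.choice(agents)
    wanted = rng.randrange(N_RESOURCES)
    # Agents with a higher reputation are preferred as uploaders
    # (proportional selection is an assumed reading of "preferred").
    others = [a for a in agents if a is not requester]
    uploader = rng.choices(others, weights=[a.reputation for a in others])[0]
    # Malicious agents send an inauthentic file for resources they lack.
    if uploader.malicious and wanted not in uploader.owned:
        inauthentic += 1
    uploader.reputation += 1  # every upload earns credit; no punishment

print(f"inauthentic share: {inauthentic / STEPS:.0%}")
```

The exact share depends on the parameters chosen here; the paper's own runs report about 63% in this no-punishment baseline.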
Experiment 1 shows that a simple RMS, with quite
a light punishing factor (3), is already sufficient to
lower the percentage of inauthentic files in the
network over time. We can see a positive trend,
reaching about 28% after 2000 time steps, an
improvement of more than 100% over the situation
in which there was no punishment for inauthentic
files. In this experiment the verification percentage
is set at 30%. This is quite low, since it means that
70% of the files remain unchecked forever
(downloaded, but never used). In order to show how
much the human factor can influence the way in
which an RMS works, in experiment 2 the
verification percentage has been increased to 40%,
leaving the negative payoff at 3. The result is
surprisingly good: the inauthentic/total ratio drops
dramatically after a few turns (below 10% after
200), reaching less than 1% after 2000
steps. Since 40% of files checked is quite a realistic
percentage for a P2P user, this empirically proves
that even the simple RMS proposed here
dramatically helps in reducing the number of
inauthentic files. In order to assign a quantitative
weight to the human factor, in experiment 3, the
negative payoff is moved from 3 to 4, while bringing
back the verification percentage to 30%. Even with a
higher punishing factor, the ratio is worse than in
experiment 2, meaning that a higher verification rate
is preferable to a higher negative payoff.
Experiment 6 shows the opposite
trend: the negative payoff is lighter (2), but the
verification rate is again at 40%, as in experiment 2.
The trend is very similar to that of experiment 3,
just a bit worse. In particular, the ratio of
inauthentic files after 2000 turns is about 16%. At
this point, it becomes quite interesting to find the
break-even point between the punishing factor and the
verification rate. After some empirical simulations,
it turns out that, compared with 40% verification and
a negative payoff of 3, if verification drops to just
30% the negative payoff must be raised all the way to
8 in order to obtain a comparable trend in the ratio.
This is done in experiment 4: after 2000 turns, the
share of inauthentic files is 1% with a negative
payoff of 3 and a verification percentage of 40%, and
about 0.7% with a payoff of 8 and a verification
percentage of 30%. This clearly indicates that the
human factor (file verification) is crucial for an
RMS to work correctly and give the desired aggregate
result (few inauthentic files on a P2P network). In
particular, a slightly higher verification rate (from
30% to 40%) carries about the same weight as a heavy
increase of the punishing factor (from 3 to 8).
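A back-of-the-envelope check makes this asymmetry concrete. Assuming, purely as an illustration, that each upload earns one reputation credit and that an inauthentic file is punished only when it happens to be verified, the expected per-upload reputation change for an always-inauthentic uploader is `1 - p_verify * payoff`:

```python
# Naive expected per-upload reputation change for an uploader that always
# serves inauthentic files, under the assumed reward/punishment scheme.
def expected_change(p_verify, payoff):
    return 1.0 - p_verify * payoff

# Linear expectation alone would equate 30% verification with payoff 4
# to 40% verification with payoff 3:
print(round(expected_change(0.30, 4), 2))  # -0.2
print(round(expected_change(0.40, 3), 2))  # -0.2

# Yet the simulations only match the two settings at payoff 8, i.e. the
# dynamics amplify the verification rate well beyond what the
# per-upload expectation predicts:
print(round(expected_change(0.30, 8), 2))  # -1.4
```

This is consistent with experiment 3 above: 30% verification with payoff 4 matches 40%/3 in naive expectation, yet performs visibly worse in the simulation.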
Besides considering the ratio of inauthentic files
moving on a P2P network, it is also crucial to verify
that the proposed RMS algorithm can punish the
agents that maliciously share inauthentic files
without involving too many unwilling accomplices,
i.e. loyal users who unknowingly spread the files
created by the malicious ones. In the agent-based
simulation, this can be assessed by looking at the
average reputation of the agents at the end of the
2000 time steps. In the worst-case scenario, the
malicious agents, who are not punished for
producing inauthentic files, always upload the file
they are asked for (be it authentic or not). In this
way, they soon gain credits, overtaking the loyal ones.
Since in the model the users with a higher reputation
are preferred when asking for files, this phenomenon
soon triggers an explosive effect: loyal agents are
marginalized and never get asked for files. This
results in a very low average reputation for loyal
agents (around 70 after 2000 turns) and, at the same
time, a very high average value for malicious agents
(more than 2800). In experiment 1, the basic RMS
presented here changes this result: even with a low
negative payoff (3), the average reputations after
2000 turns are clear-cut, about 700 for loyal
agents and slightly more than 200 for malicious
ones. The algorithm preserves loyal agents, while
punishing malicious ones.
punishing malicious ones. In experiment 2, with a
higher verification percentage (human factor), we
see a tremendous improvement for the effectiveness
of the RMS algorithm. The average reputation for
loyal agents, after 2000 steps, reaches almost 1400,
while all the malicious agents go under the lower
threshold (they can’t either download or share
resources), with an average reputation of less than 9
points. Experiment 3 explores the scenario in which
the users just check 30% of the files they download,
but the negative payoff is raised from 3 to 4. The
final figure about average reputations is again very
good. Loyal agents, after 2000 steps, averagely
reach a reputation of over 1200, while malicious
ones stay down at about 40. This again proves the
proposed RMS system to be quite effective, though,
with a low verification rate, not all the malicious
WEBIST 2009 - 5th International Conference on Web Information Systems and Technologies