Figure 7: Successful interactions in the fourth run.
The second group achieves very good results, maintaining a success rate above 96%, while the first group shows a drop towards the end of the simulation. Because the buyers in the first group completely ignore a seller's behaviour in roles other than the required one, they calculate trust from limited data. As a result they have a higher rate of false positives, which leads to the drop in success rate.
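The difference between the two groups can be sketched as follows. This is a minimal, hypothetical illustration (the `Interaction` and `trust` names are assumptions, not the paper's actual model): trust is estimated as the fraction of positive outcomes, computed either from role-specific history only (first group) or from a seller's full interaction history (second group).

```python
# Hypothetical sketch, not the paper's actual trust model: trust as the
# fraction of positive outcomes, optionally restricted to one role.
from dataclasses import dataclass

@dataclass
class Interaction:
    role: str       # role the seller played, e.g. "seller" or "recommender"
    positive: bool  # outcome of the interaction

def trust(history, role=None):
    """Estimate trust as the share of positive outcomes.

    With role=None all interactions count; with a role, only that role's
    interactions count, so the estimate rests on less data.
    """
    relevant = [i for i in history if role is None or i.role == role]
    if not relevant:
        return 0.5  # no data: neutral prior (an assumption of this sketch)
    return sum(i.positive for i in relevant) / len(relevant)

history = [Interaction("seller", True), Interaction("seller", True),
           Interaction("recommender", False), Interaction("recommender", False)]

print(trust(history, role="seller"))  # role-scoped view: 1.0
print(trust(history))                 # full-history view: 0.5
```

The role-scoped estimate here rests on two observations instead of four, which is exactly why it is noisier and more prone to false positives.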
4 CONCLUSIONS
This paper presented a simulation framework for the
evaluation of trust policies based on direct experience.
The simulation is based on formal models for the rep-
resentation of trust values and trust policies and can
be used to compare trust policies within clear scenar-
ios or to evaluate how one policy adapts to different
scenarios. Several parameters are monitored: the local interaction histories of principals, their calculated trust values, and their behavior during the simulation. These can be used to analyze different properties of the policies. Future work will extend the framework
to policies that take both direct experience and recom-
mendations from others into consideration.
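The monitored quantities named above could be organized roughly as follows. This is an illustrative structure only, not the framework's actual API; all names are assumptions.

```python
# Illustrative sketch (not the framework's actual API) of the monitored
# quantities: each principal's local interaction history, calculated trust
# values, and behavior over the simulation.
from dataclasses import dataclass, field

@dataclass
class PrincipalRecord:
    history: list = field(default_factory=list)       # (partner, outcome) pairs
    trust_values: dict = field(default_factory=dict)  # partner -> trust value
    behavior_log: list = field(default_factory=list)  # behavior states over time

records = {"alice": PrincipalRecord(), "bob": PrincipalRecord()}
records["alice"].history.append(("bob", "positive"))
records["alice"].trust_values["bob"] = 0.8
records["alice"].behavior_log.append("honest")
print(records["alice"].trust_values)  # {'bob': 0.8}
```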
The notion of time from the extended SECURE
model is supported by the simulation framework, and
allows principals to interact asynchronously. Sessions
can start at any time and there can be an unlimited
number of sessions active for one principal at any
moment. By changing simulation parameters, aspects
like the Interaction Frequency or Encounter Factor of
a CS can be evaluated. Future work will consider additional aspects of CSs, such as the technical trustworthiness of the principals' interactions.
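The asynchronous session support described above can be sketched with a simple event queue (all names and timings here are illustrative assumptions, not the framework's implementation): sessions start at arbitrary simulated times and a principal may have any number of sessions open at once.

```python
# Minimal sketch (illustrative, not the framework's implementation) of
# asynchronous sessions driven by a time-ordered event queue.
import heapq

events = []  # priority queue of (time, principal, action, session_id)
for ev in [(0.0, "A", "start", 1), (0.5, "B", "start", 2),
           (1.0, "A", "start", 3),  # second concurrent session for A
           (2.0, "A", "end", 1), (2.5, "B", "end", 2), (3.0, "A", "end", 3)]:
    heapq.heappush(events, ev)

open_sessions = {}   # principal -> set of currently open session ids
max_concurrent = {}  # principal -> peak number of simultaneous sessions
while events:
    t, principal, action, sid = heapq.heappop(events)
    sessions = open_sessions.setdefault(principal, set())
    if action == "start":
        sessions.add(sid)
    else:
        sessions.discard(sid)
    max_concurrent[principal] = max(max_concurrent.get(principal, 0),
                                    len(sessions))

print(max_concurrent["A"])  # → 2: A held two sessions at once
```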
While several principal behaviors can be represented by the model used in the simulation, the requirement that future states depend only on the current one may not hold for complex malicious behaviors that change state based on an analysis of other principals.
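The memoryless assumption can be made concrete with a small sketch. The states and probabilities below are illustrative assumptions, not values from the paper: the next state is drawn from a distribution that depends only on the current state (the Markov property), which an adaptive attacker reacting to other principals would violate.

```python
# Sketch of a memoryless (Markov) behavior model with illustrative states
# and transition probabilities; not the paper's actual parameters.
import random

transitions = {
    "honest":    {"honest": 0.9, "malicious": 0.1},
    "malicious": {"honest": 0.3, "malicious": 0.7},
}

def step(state, rng):
    """Draw the next state using only the current state (Markov property)."""
    probs = transitions[state]
    return rng.choices(list(probs), weights=list(probs.values()))[0]

rng = random.Random(0)
trace = ["honest"]
for _ in range(10):
    trace.append(step(trace[-1], rng))
print(trace)
```

An adaptive adversary would instead need `step` to inspect other principals' histories, breaking the dependence on the current state alone.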
Many communication spaces use fully connected network topologies where any two principals can communicate directly, so the simulation currently offers only this topology. However, other topologies such as peer-to-peer may prove interesting. To simulate P2P networks, events would most likely require parameters, e.g. share(what), since having a separate event for each shareable resource would quickly become unmaintainable.
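The parameterized-event idea can be sketched as follows; the `Event` and `share` names are illustrative assumptions. A single share event carries the shared resource as a parameter, instead of one event type per shareable resource.

```python
# Illustrative sketch of parameterized events for a P2P scenario
# (the Event/share names are assumptions, not the framework's API).
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    name: str
    params: tuple = ()

def share(resource):
    """Build a share(what) event for any resource."""
    return Event("share", (resource,))

# The same constructor covers every shareable resource:
for r in ["song.mp3", "paper.pdf", "photo.png"]:
    print(share(r))
```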
EVALUATION OF TRUST POLICIES BY SIMULATION