reputation. Enforcing this norm remains a challenge, since detecting replication in a distributed system is not trivial. We therefore changed Norm 4 in Table 2 to impose a small penalty on agents that replicate jobs.
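Read as a conditional rule, the revised norm could be sketched as follows. This is a minimal Python sketch, assuming a condition/sanction representation of norms; the names AgentAction and NORM_4 and the penalty value 0.05 are illustrative, not the representation used in our implementation.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class AgentAction:
        agent_id: str
        replicated_job: bool  # True if the agent handed the same job to several workers

    @dataclass
    class Norm:
        # A conditional norm: if the condition holds for an observed action,
        # the sanction (here a reputation penalty) is applied.
        name: str
        condition: Callable[[AgentAction], bool]
        sanction: float

    # Norm 4 (revised): replicating a job incurs a small reputation penalty.
    NORM_4 = Norm(name="no-replication",
                  condition=lambda a: a.replicated_job,
                  sanction=0.05)  # illustrative value, small relative to a job's reward

    def apply_norm(norm: Norm, action: AgentAction, reputation: dict) -> None:
        # Assumes the agent already has a reputation entry.
        if norm.condition(action):
            reputation[action.agent_id] -= norm.sanction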
6 RELATED AND FUTURE WORK
This work is part of wider research in the area of norms in multi-agent systems. However, our focus is more on improving system performance by using norms than on researching the characteristics of norms themselves (Singh, 1999). We use the same widely acknowledged conditional norm structure as described in (Balke et al., 2013). Most of our norms can be characterised as "prescriptions" in the sense of (von Wright, 1963), because they regulate actions. Our norms are generated by a central, elected component representing all agents, which classifies them as "r-norms" according to (Tuomela and Bonnevier-Tuomela, 1995).
Assuming we can detect such extreme situations, we want to improve the system behaviour by changing the agents' decision-making at runtime using norms. This would allow us to motivate agents to cooperate in case of a trust breakdown (Castelfranchi and Falcone, 2010) by giving them a larger incentive to do so. We could also encourage agents to work with established peers and to temporarily ignore newcomers by lowering the incentive to work with the latter. However, we do not want to restrict our agents too much, so that they retain their autonomy.
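As an illustration, such runtime adjustment could look as follows. The incentive parameters, event names and factors are hypothetical; the sketch only indicates how the norm-issuing component might react to the situations described above.

    class NormManager:
        # Sketch of the central elected component adjusting incentives at
        # runtime; the factors below are illustrative, not measured values.
        def __init__(self):
            self.cooperation_incentive = 1.0  # reward factor for accepting jobs
            self.newcomer_incentive = 1.0     # reward factor for jobs from newcomers

        def on_trust_breakdown(self):
            # Motivate agents to keep cooperating despite low average trust.
            self.cooperation_incentive *= 1.5

        def on_newcomer_flood(self):
            # Temporarily favour established peers over (possibly colluding)
            # newcomers, without forbidding the interaction outright.
            self.newcomer_incentive *= 0.5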
To improve fairness in the Trusted Desktop Grid, it may be useful to maintain a monetary component in addition to the reputation of every agent (Huberman and Clearwater, 1995). Agents would receive a monetary reward for every finished job and would have to pay other agents for the computation of their own jobs. Trust would then serve to prevent malicious behaviour and to enable reliable monetary exchange.
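A minimal sketch of this combined scheme, assuming each agent holds a balance and a trust value; the price and the trust threshold are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class Agent:
        balance: float  # monetary account
        trust: float    # reputation value in [0, 1]

    def submit_job(submitter: Agent, worker: Agent, price: float,
                   min_trust: float = 0.5) -> bool:
        # Pay a worker to compute a job, but only if it is trusted enough.
        if worker.trust < min_trust:
            return False  # trust check prevents trading with malicious agents
        if submitter.balance < price:
            return False  # the submitter must be able to pay for the work
        submitter.balance -= price
        worker.balance += price  # monetary incentive for the finished job
        return True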
7 CONCLUSIONS
Making norms explicit helped us to understand the behaviour needed for our system to perform well. It allowed us to detect and fix potential loopholes that could be exploited by attackers. Additionally, it gives us the ability to change the expected behaviour at runtime in order to react to collusion attacks. We plan to experiment with different incentives to adjust the norms to the system goals.
REFERENCES
Balke, T., Pereira, C. d. C., Dignum, F., Lorini, E., Rotolo, A., Vasconcelos, W., and Villata, S. (2013). Norms in MAS: Definitions and Related Concepts. In Normative Multi-Agent Systems, volume 4 of Dagstuhl Follow-Ups, pages 1–31. Schloss Dagstuhl–Leibniz-Zentrum für Informatik.
Bernard, Y., Klejnowski, L., Bluhm, D., Hähner, J., and Müller-Schloer, C. (2012). An Evolutionary Approach to Grid Computing Agents. In Italian Workshop on Artificial Life and Evolutionary Computation.
Bernard, Y., Klejnowski, L., Cakar, E., Hähner, J., and Müller-Schloer, C. (2011). Efficiency and Robustness Using Trusted Communities in a Trusted Desktop Grid. In Self-Adaptive and Self-Organizing Systems Workshops (SASOW), 2011 Fifth IEEE Conference on.
Bernard, Y., Klejnowski, L., Hähner, J., and Müller-Schloer, C. (2010). Towards Trust in Desktop Grid Systems. Cluster Computing and the Grid, IEEE Int. Symposium on, 0:637–642.
Cakar, E. and Müller-Schloer, C. (2009). Self-Organising Interaction Patterns of Homogeneous and Heterogeneous Multi-Agent Populations. In Self-Adaptive and Self-Organizing Systems, 2009. SASO '09. Third IEEE Int. Conference on, pages 165–174.
Castelfranchi, C. and Falcone, R. (2010). Trust Theory: A
Socio-Cognitive and Computational Model. Wiley.
Huberman, B. A. and Clearwater, S. H. (1995). A multi-
agent system for controlling building environments. In
Proceedings of the First International Conference on
Multiagent Systems, pages 171–176.
Kantert, J., Bernard, Y., Klejnowski, L., and Müller-Schloer, C. (2013). Estimation of reward and decision making for trust-adaptive agents in normative environments. Accepted at ARCS 2014.
Müller-Schloer, C. and Schmeck, H. (2011). Organic Computing - Quo Vadis? In Organic Computing - A Paradigm Shift for Complex Systems, chapter 6.2, to appear. Birkhäuser Verlag.
Singh, M. P. (1999). An ontology for commitments in
multiagent systems. Artificial Intelligence and Law,
7(1):97–113.
Tuomela, R. and Bonnevier-Tuomela, M. (1995). Norms
and agreements. European Journal of Law, Philoso-
phy and Computer Science, 5:41–46.
von Wright, G. H. (1963). Norms and action: a logical
enquiry. Routledge & Kegan Paul.