In addition, agents' knowledge bases shrank on average by a factor of about 2.5 (measured at step 500). Agents therefore have much less knowledge from which to create opinions, but that knowledge is of higher quality (clear profiling toward the good agent). The smaller knowledge base also significantly improved the speed and efficiency of the agents' activity.
5 CONCLUSIONS
This paper presents a multi-agent model simulating the behavior of a heterogeneous group of (people-like) entities in the process of creating their opinion about the world on the basis of information acquired through communication with other agents. The model was constructed to investigate the dynamics of trust in an environment with limited possibilities to verify the transmitted information. Trust is established using both local metadata from previous contacts with the partner and the agent's knowledge base. Techniques for evaluating this information and using it to set trust were presented as well. The model is highly adjustable through global and agent-specific parameters.
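The trust mechanism summarized above can be illustrated by a minimal sketch. This is not the paper's exact formulation: the update rule, the neutral prior of 0.5, the acceptance threshold, and all names below are assumptions chosen only to show how per-partner contact metadata and the agent's own knowledge base can jointly drive trust and opinion formation.

```python
# Hedged sketch: trust in a partner is estimated from local metadata about
# previous contacts, combined with how consistent the partner's messages
# are with the agent's own knowledge base. Weights and names are assumptions.

def update_trust(trust, consistent, weight=0.1):
    """Move trust toward 1 after a consistent message, toward 0 otherwise."""
    target = 1.0 if consistent else 0.0
    return (1 - weight) * trust + weight * target

class Agent:
    def __init__(self):
        self.trust = {}          # partner id -> trust level in [0, 1]
        self.knowledge = set()   # clauses believed so far

    def receive(self, partner, clause, negated_clause):
        """Accept a clause only if updated trust in the partner is high enough."""
        consistent = negated_clause not in self.knowledge
        t = self.trust.get(partner, 0.5)   # neutral prior for unknown partners
        t = update_trust(t, consistent)
        self.trust[partner] = t
        if t > 0.5 and consistent:
            self.knowledge.add(clause)
```

For example, a message consistent with the knowledge base raises trust in its sender slightly above the neutral prior and is stored, while a later contradicting message from another partner lowers that partner's trust and is rejected.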
The experiments focused on scenarios in which the limited possibilities to verify information meant that an agent's knowledge base could be built on completely incorrect information. The results also showed that the later availability of verified information changes the knowledge base, and hence the attitudes of individuals, only very slowly. It was further shown that forgetting older clauses from the knowledge base leads to quicker trust profiling of simple agents and to accepting knowledge primarily from the verified source.
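The forgetting mechanism can be sketched as age-based pruning of the knowledge base. The data layout (clause, step-learned pairs) and the horizon value are assumptions for illustration, not the paper's implementation; the point is that dropping old clauses removes early, possibly incorrect information first, so recent verified information dominates.

```python
# Hedged sketch of forgetting: each stored clause carries the simulation
# step at which it was learned; clauses older than a fixed horizon are
# dropped. Horizon and layout are illustrative assumptions.

def forget(knowledge, current_step, horizon=100):
    """Keep only clauses learned within the last `horizon` steps."""
    return [(clause, step) for clause, step in knowledge
            if current_step - step <= horizon]

kb = [("sky_is_green", 10),          # early, unverified clause
      ("source_X_reliable", 450),    # recent clause
      ("p_holds", 480)]              # recent clause
kb = forget(kb, current_step=500)    # the step-10 clause is pruned
```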
The model will be further developed and tested, especially on the basis of ideas from the papers cited in Section 2. Future work will concentrate on analyzing the dynamics of the community structure. Special attention will be paid to model validation, where it will be necessary (due to the specific model parameters) to collect real-world data for deeper validation.
This model could see increasing use in the future as individuals and companies gradually discover the possibilities of manipulating information. Typical application examples are social systems in which individuals can spread unverified or false information, and systems with a limited ability to verify information.
REFERENCES
Cerf, V. (2017). A brittle and fragile future.
Communications of the ACM, 60(7), p. 7.
Deffuant, G., Neau, D., Amblard, F., & Weisbuch, G.
(2000). Mixing beliefs among interacting
agents. Advances in Complex Systems, 3(01n04), pp.
87-98.
Falcone, R., & Singh, M. P. (2013). Introduction to special
section on trust in multiagent systems. ACM Trans. on
Intelligent Systems and Technology (TIST), 4(2), p. 23.
Fredheim, R., Moore, A., & Naughton, J. (2015).
Anonymity and Online Commenting: The Broken
Windows Effect and the End of Drive-by
Commenting. Proceedings of the ACM Web Science
Conference (p. 11). ACM.
Granatyr, J., Botelho, V., Lessing, O. R., Scalabrin, E. E.,
Barthès, J. P., & Enembreck, F. (2015). Trust and
reputation models for multiagent systems. ACM
Computing Surveys (CSUR), 48(2), p. 27.
Hegselmann, R., & Krause, U. (2002). Opinion dynamics
and bounded confidence models, analysis, and
simulation. Journal of artificial societies and social
simulation, 5(3).
Horio, B. M., & Shedd, J. R. (2016). Agent-based
exploration of the political influence of community
leaders on population opinion dynamics.
In Proceedings of the 2016 Winter Simulation
Conference (pp. 3488-3499). IEEE Press.
Huang, J., & Fox, M. S. (2006). An ontology of trust:
formal semantics and transitivity. In Proceedings of
the 8th int. conference on Electronic commerce (pp.
259-270). ACM.
Jelínek, J. (2014). Information Dissemination in Social
Networks. In Proceedings of the 6th Int. Conference
on Agents and Artificial Intelligence-Volume 2 (pp.
267-271). SCITEPRESS.
Lorenz, J. (2007). Continuous opinion dynamics under
bounded confidence: A survey. International Journal
of Modern Physics C, 18(12), pp. 1819-1838.
Pinyol, I., & Sabater-Mir, J. (2013). Computational trust
and reputation models for open multi-agent systems: a
review. Artificial Intelligence Review, 40(1), pp. 1-25.
Ramchurn, S. D., Huynh, D., & Jennings, N. R. (2004).
Trust in multi-agent systems. The Knowledge
Engineering Review, 19(1), pp. 1-25.
Sen, S. (2013). A comprehensive approach to trust
management. In Proceedings of the 2013 int.
conference on Autonomous agents and multi-agent
systems (pp. 797-800). International Foundation for
Autonomous Agents and Multiagent Systems.
Varadharajan, V. (2009). Evolution and challenges in trust
and security in information system infrastructures.
In Proceedings of the 2nd int. conference on Security
of information and networks (pp. 1-2). ACM.
Yolum, P., & Singh, M. P. (2003). Ladders of success: an
empirical approach to trust. In Proceedings of the
second int. joint conference on Autonomous agents
and multiagent systems (pp. 1168-1169). ACM.
Role of Trust in Creating Opinions in Social Networks