Authors:
Sina Honari, Brigitte Jaumard and Jamal Bentahar
Affiliation:
Concordia University, Canada
Keyword(s):
Possibility theory, Multi-agent systems, Uncertainty in AI.
Related Ontology Subjects/Areas/Topics:
Agents; Artificial Intelligence; Artificial Intelligence and Decision Support Systems; Distributed and Mobile Software Systems; Enterprise Information Systems; Knowledge Engineering and Ontology Development; Knowledge-Based Systems; Multi-Agent Systems; Software Engineering; Symbolic Systems; Uncertainty in AI
Abstract:
In social networks, estimating the degree of trustworthiness of a target agent from information acquired from a group of advisor agents, who have had direct interactions with the target agent, is challenging. The estimation becomes more difficult when, in addition, there is uncertainty in the trust of both the advisor and target agents. Such uncertainty arises when (1) the advisor agents are self-interested and may provide misleading accounts of their past experiences with the target agents, and (2) the outcome of each interaction between agents is multi-valued. In this paper, we propose a model for this evaluation in which possibility theory is used to address the uncertainty of an agent's trust. The trust model of a target agent is then obtained by iteratively merging the possibility distributions of (1) the trust of the estimator agent in its advisors and (2) the trust of the advisor agents in the target agent. Extensive experiments validate the proposed model.
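The abstract describes merging an estimator's trust in its advisors with the advisors' reported trust in the target. As an illustration only (the paper's exact operators are not given here), the sketch below uses two standard possibility-theory operations: discounting an advisor's report by the estimator's trust in that advisor, and a conjunctive (min-based) merge of the discounted distributions. All function names and numeric values are hypothetical.

```python
# Illustrative sketch, not the paper's actual method: possibilistic
# discounting and conjunctive merging of advisor reports about a target.
# Each distribution maps discrete trust levels to possibility degrees in [0, 1].

def discount(report, trust_in_advisor):
    """Weaken an advisor's reported distribution by the estimator's trust t
    in that advisor: pi'(x) = max(pi(x), 1 - t). With t = 1 the report is
    kept as-is; with t = 0 it becomes vacuous (all levels fully possible)."""
    return [max(p, 1.0 - trust_in_advisor) for p in report]

def merge(dists):
    """Conjunctive (min-based) merge of several possibility distributions,
    renormalized so the largest possibility degree is 1."""
    merged = [min(vals) for vals in zip(*dists)]
    peak = max(merged)
    return [m / peak for m in merged] if peak > 0 else merged

# Hypothetical domain: trust levels (low, medium, high); two advisors' reports.
reports = [
    [0.2, 0.7, 1.0],  # advisor A: target is most possibly "high"
    [0.1, 1.0, 0.6],  # advisor B: target is most possibly "medium"
]
trust = [0.9, 0.5]    # estimator's trust in advisors A and B

merged = merge([discount(r, t) for r, t in zip(reports, trust)])
print(merged)  # advisor A's report dominates, since A is trusted more
```

Here the less-trusted advisor B's report is weakened before merging, so the merged distribution leans toward advisor A's assessment; the paper's iterative scheme would repeat such a merge over the network of advisors.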