so-called AGM approach (Alchourrón et al., 1985).
In the AGM approach, the beliefs of an agent are represented by a set of formulas Ψ. An AGM revision function ∗ takes the initial belief state Ψ and a formula φ as input. This pair is mapped to a new belief state Ψ ∗ φ, subject to a set of rationality postulates. It is well known that every AGM revision operator ∗ can be defined in terms of minimization with respect to a total pre-order over states (Katsuno and Mendelzon, 1992).
An alternative model for reasoning about beliefs is to use a ranking function over states to represent the beliefs of an agent (Spohn, 1988). A ranking function κ maps every state to a natural number, with the constraint that κ(s) = 0 for at least one state. Informally, if κ(s) ≤ κ(t), we interpret this to mean that the agent considers s to be at least as likely as t to be the actual state of the world. A ranking function representing beliefs is sometimes called an epistemic state, as it includes the current beliefs as well as information about how these beliefs will be changed when new information is obtained.
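As a small illustration (a hypothetical Python sketch, not code from the software described below), states can be represented as tuples of truth values and a ranking function as a dictionary; the agent's current beliefs are then characterized by the rank-0 states:

    from itertools import product

    # States over the vocabulary {S, H}: each state assigns a truth
    # value to each variable.
    V = ('S', 'H')
    states = list(product([True, False], repeat=len(V)))

    # A ranking function maps every state to a natural number, with
    # kappa(s) = 0 for at least one state.  Here the sunny states are
    # taken to be most plausible.
    kappa = {s: 0 if s[0] else 1 for s in states}

    # The current beliefs are characterized by the minimal (rank-0) states.
    beliefs = [s for s in states if kappa[s] == 0]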
In our software, we internally use the ranking function approach for modeling epistemic states. However, we will see that our software also permits users to enter beliefs as a set of formulas, which is then extended to an epistemic state using a default ranking.
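One natural default ranking, sketched below under the assumption that formulas are encoded as predicates over states (the software's actual input syntax and default scheme may differ): models of the belief set receive rank 0 and all other states receive rank 1.

    from itertools import product

    states = list(product([True, False], repeat=2))  # states over {S, H}

    def default_ranking(states, formulas):
        """Extend a set of formulas to an epistemic state: states
        satisfying every formula get rank 0, all others get rank 1."""
        return {s: 0 if all(f(s) for f in formulas) else 1 for s in states}

    # e.g. the single belief "S" (sunny), encoded as a predicate
    kappa = default_ranking(states, [lambda s: s[0]])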
2.2 Trust Graphs
A trust graph is a representation of the trust that an
agent holds in an information source.
Definition 1 ((Hunter, 2021)). Let S be the set of states over V. A trust graph over S is a pair ⟨S, w⟩, where w : S × S → N.
Hence, a trust graph is just a weighted graph where the nodes represent states, and the weights are distances between states. Informally, the distance between two states represents how strongly we trust the reporting agent to be able to distinguish them.
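As an illustration of Definition 1 (a hypothetical encoding, not the actual API of our software), a trust graph can be stored as a symmetric table of natural-number weights indexed by unordered pairs of states:

    class TrustGraph:
        """A weighted graph over states: weight(s, t) records how strongly
        we trust the source to distinguish state s from state t."""

        def __init__(self, states):
            self.states = list(states)
            self.w = {}  # maps frozenset({s, t}) to a natural number

        def set_weight(self, s, t, value):
            self.w[frozenset((s, t))] = value

        def weight(self, s, t):
            return self.w.get(frozenset((s, t)), 0)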
Example. Suppose that an agent gets information about the weather from the radio. They strongly trust the announcer to be able to tell if it is sunny outside (S). On sunny days, they also have moderate trust that the announcer can tell if it is humid (H); however, they have no trust at all in the announcer's ability to know the humidity when it is not sunny. This can be captured by the trust graph in Figure 1.
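Using the TrustGraph sketch above, and reading the weights off Figure 1 as we understand it (4 between sunny and non-sunny states, 2 between the two sunny states, 0 between the two non-sunny states), the example graph can be built as follows:

    from itertools import product

    states = list(product([True, False], repeat=2))  # (S, H) pairs
    g = TrustGraph(states)
    for s, t in product(states, repeat=2):
        if s == t:
            continue
        if s[0] != t[0]:   # one state sunny, the other not: strong trust
            g.set_weight(s, t, 4)
        elif s[0]:         # both sunny, differing on H: moderate trust
            g.set_weight(s, t, 2)
        else:              # both non-sunny: no trust regarding H
            g.set_weight(s, t, 0)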
2.3 Update Rules
A trust graph is not static; it should be updated based on the accuracy of reports provided by a particular agent.
Figure 1: A Trust Graph for Weather Reports. (The nodes are the four states over {S, H}; the two sunny states are at distance 2, the two non-sunny states at distance 0, and each sunny/non-sunny pair is at distance 4.)
We can define a report to be a pair (φ, m), where m is either 0 or 1. If m = 0 (resp. m = 1), this is interpreted to mean that φ was falsely (resp. correctly) reported in the past.
Suppose that an agent reports φ, and we subsequently learn that φ is false. In this case, we should now have less trust in the reporting agent's ability to know if φ is true. This means that, following a false report, we should decrease the distance between states where φ is true and states where φ is false. Similarly, if an agent provides an accurate report of φ, then we should increase the distance between such pairs of states.
There are many different ways to update the distances on a trust graph. As an illustration, we consider the following simple additive update rules.
Update Rule 1. Given an initial trust graph over S and a report (φ, 0), update the graph as follows:
• For each pair of states s₁, s₂ such that s₁ ⊨ φ and s₂ ⊭ φ, decrease the value w(s₁, s₂) to w(s₁, s₂) − 1.
Update Rule 2. Given an initial trust graph over S and a report (φ, 1), update the graph as follows:
• For each pair of states s₁, s₂ such that s₁ ⊨ φ and s₂ ⊭ φ, increase the value w(s₁, s₂) to w(s₁, s₂) + 1.
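Continuing the hypothetical encoding above, both rules amount to shifting every weight that crosses the φ/¬φ boundary by one, with φ given as a predicate over states:

    from itertools import product

    def apply_report(graph, phi, m):
        """Apply Update Rule 1 (m = 0) or Update Rule 2 (m = 1) for a
        report (phi, m)."""
        delta = 1 if m == 1 else -1
        for s, t in product(graph.states, repeat=2):
            # Each unordered pair {s, t} with s satisfying phi and t not
            # satisfying it is visited exactly once in this orientation.
            if phi(s) and not phi(t):
                # Note: a decrease can drive a weight below zero; see the
                # remark at the end of this section.
                graph.set_weight(s, t, graph.weight(s, t) + delta)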
According to the first rule, a false report of φ makes an agent have less trust in the reporting agent's ability to distinguish φ-states from ¬φ-states. According to the second rule, a true report of φ makes an agent have more trust in that distinction.
Example. Consider the weather reporting example. Suppose that the announcer says it is sunny outside, but then we go outside and find that it is not sunny. This report is formally represented as (S, 0). According to Update Rule 1, we need to decrease the distances between states where S is true and those where S is false. The new trust graph is given in Figure 2.
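Applied to the example graph built earlier, the false report (S, 0) would lower every sunny/non-sunny distance by one (from 4 to 3 on our reading of Figure 1), leaving the remaining distances unchanged; this is the graph shown in Figure 2:

    apply_report(g, lambda s: s[0], 0)  # the false report (S, 0)
    # Each distance between a sunny and a non-sunny state drops from 4
    # to 3; the distances 2 (between the sunny states) and 0 (between
    # the non-sunny states) are unchanged.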
Note that Update Rules 1 and 2 are simply intended to provide an example of the process; we do not intend to assert that these are the most appropriate update rules in practice. In fact, there is clearly a problem with Update Rule 1 in that it can actually drive a distance below zero, which Definition 1 does not permit, since w maps into the natural numbers.