Formal Reasoning About Trusted Third Party Protocols
Aaron Hunter
BC Institute of Technology, 3700 Willingdon Avenue, Burnaby, Canada
Keywords:
Trust, Belief Revision, Formal Verification.
Abstract:
A trusted third party (TTP) is an entity that facilitates communication between agents by acting as an inter-
mediary. Typical roles for a trusted third party include the establishment of session keys or the validation of
commitment schemes. In a formal setting, this requires a model that provides some mechanism for represent-
ing trust and reasoning about dynamic beliefs. In this paper, we demonstrate how this can be captured using a
combined modal logic of trust and belief. Our formalism uses plausibility models and model transformations
to capture belief revision in a protocol run. It is novel in that it uses the modal accessibility relations in the
logic to define a notion of trust, without requiring any additional formal machinery. We define the formal
semantics of the logic, sketch the axiomatization, and demonstrate the basic verification methodology. Chal-
lenges are discussed, as well as issues related to practical deployment.
1 INTRODUCTION
In network communication, a trusted third party
(TTP) is an agent that participates in a protocol to en-
sure that the information exchanged is correct (Zissis
et al., 2011). In principle, there is no way to guar-
antee that a TTP acts in the interest of either party in
the protocol; they must be trusted to act in a manner
that is satisfactory to the other participants. Having a
TTP participate in a protocol is one way to give the
other participants confidence in the information when
they are not able to trust each other. In practice, this
kind of protocol can be implemented through mecha-
nisms such as the so-called Web of Trust (Ulrich et al.,
2011). However, provable guarantees of security are
difficult to achieve in the general case.
In this paper, we argue that TTP protocols can be
effectively analyzed and verified using dynamic log-
ics of trust and belief. We note that formal logics
of belief have a long history in protocol verification.
However, formal logics of belief have rarely incorpo-
rated a precise notion of trust. Obviously this is an
essential aspect to consider when modelling and rea-
soning about TTP protocols. So our approach in this
paper is to define a new modal logic of knowledge
and belief that captures the trust that each agent holds
in the others. This logic is based on comparatively
recent formal work on trust-sensitive belief revision,
where the extent to which an agent can be trusted is
an explicit component of the logic. We propose that
this new logic can be used to prove precisely when an
honest TTP can be a useful intermediary.
This is a preliminary paper introducing a novel
approach to reasoning about trusted third parties in
communication protocols. As such, the focus is on
developing the basic logic and we leave the details
of deployment on practical protocols for future work.
Nevertheless, this work makes several contributions
to the literature. First, the paper shows how trust can
be captured in a standard dynamic epistemic logic,
without introducing any new formal machinery. Sec-
ond, the proposed logic explicitly captures the trust
held in a TTP with respect to the beliefs of the agents
participating in the protocol. This allows the logic to
be used for protocol verification by simply checking
a modal entailment. We remark also that, while our
focus here is on communication protocols, the logic
is sufficiently general to capture mutual trust in other
settings, such as social network communication.
2 PRELIMINARIES
2.1 Motivating Example
To facilitate the discussion, we describe a simple pro-
tocol. The protocol involves the exchange of mes-
sages between three parties: A, B and T . In this pro-
tocol, T is acting as a TTP to allow A and B to es-
tablish a session key. We use the standard notation
established in (Burrows et al., 1990) to describe the
protocol:
Simple Key Agreement
1. A → B : N_A, A
2. B → T : N_A, N_B, A, B
3. T → A : {K}_{K_A}
4. T → B : {K}_{K_B}
In this notation, A → B : M means that A sends the message M to the agent B. An expression of the form {M}_K denotes the message M encrypted with the key K. Messages of the form N_A are nonces, which are random numbers generated at the time of protocol execution. In this protocol, T is a trusted party that is responsible for distributing session keys for communication between agents. We assume that T shares a secret key with A which is denoted by K_A, as well as a secret key with B which is denoted by K_B. The goal of this protocol is to give A and B a new key that they can use for secure communication. This protocol is a simplification of a protocol previously presented in (Perrig et al., 2001).
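For readers who prefer a concrete data representation, the following minimal Python sketch shows one way to encode the message notation above. The class names and the term encoding are our own illustrative choices, not part of the protocol specification.

    from dataclasses import dataclass

    # Hypothetical term constructors for the notation above: Nonce("A")
    # plays the role of N_A, and Enc models {M}_K.
    @dataclass(frozen=True)
    class Nonce:
        owner: str

    @dataclass(frozen=True)
    class Enc:
        body: tuple   # the plaintext terms M
        key: str      # the name of the encrypting key, e.g. "K_A"

    # The four steps of the Simple Key Agreement protocol as
    # (sender, receiver, payload) triples.
    SIMPLE_KEY_AGREEMENT = [
        ("A", "B", (Nonce("A"), "A")),
        ("B", "T", (Nonce("A"), Nonce("B"), "A", "B")),
        ("T", "A", Enc(("K",), "K_A")),
        ("T", "B", Enc(("K",), "K_B")),
    ]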
Proving that this kind of protocol actually works
can be difficult. There are at least two challenges. The
first is a question of honesty: A needs to trust that T is
going to give them a key that is secure and not avail-
able to other parties. As noted in the introduction, we
are not going to be concerned with this issue; we are
going to assume that T has no malicious intent. But
the second problem is a problem of knowledge. Why
should A and B believe that T has a suitable collection
of keys available, which are each secure? This prob-
lem requires an analysis of the beliefs of A and B, and
how they change when information is exchanged on
the network.
2.2 Logics for Protocol Verification
Logics of knowledge and belief have a long history
as tools for proving the security of protocols. This
approach was introduced in the pioneering work on
BAN logic, in which a simple model of knowledge
is used for the analysis of authentication protocols
(Burrows et al., 1990). This basic logic has been expanded and modified to address different kinds of protocols. Essentially, all variations of BAN logic allow us to express both the information exchanged in a protocol and the goal of the protocol as logical formulas. In this manner, we are able to precisely prove when the goal of a protocol is true following a successful run. We generally need to use a logic of belief, because authentication protocols are fundamentally concerned with the beliefs of the participating agents.
While the original BAN logic was never sophis-
ticated enough to address real protocols, the logi-
cal tradition in protocol analysis continues. Log-
ics of knowledge and belief have been used in recent years for the analysis of IoT protocols (Hofer-Schmitz and Stojanović, 2020), for smart-home protocols (Fakroon et al., 2020), and for health-record protocols (Kim et al., 2020).
have also been employed for the verification of smart
contracts (Tolmach et al., 2021). For a recent survey
on the use of formal methods for protocol verification,
we refer the reader to (Erata et al., 2023).
When logical methods are used for protocol verifi-
cation, we generally start with some established logic
from the AI community and then we modify it suit-
ably to capture all aspects of some class of protocols.
However, we will see in the next two sections that log-
ical models of belief dynamics are generally focused
entirely on how to model the beliefs of an agent when
new information is received as some kind of infalli-
ble announcement. When a protocol involves agents that have different levels of trust in each other, we need a more complex logic that captures this fact.
2.3 Dynamic Epistemic Logic
The problem of reasoning about nested beliefs can be
addressed in Dynamic Epistemic Logic (DEL). We
briefly introduce DEL in this section. However, we
assume the reader is familiar with basic modal logic,
as described in (Chellas, 1980).
We start with an underlying propositional signature P, which is just a set of atomic sentence symbols representing properties of the world that may be true or false. A propositional formula is defined in the usual manner, by using the connectives ∧ (and), ∨ (or), and ¬ (not).
Standard modal logics of belief use a static modal operator to represent the beliefs of an agent. In other words, they use formulas like B_i ϕ to mean “agent i believes ϕ is true.” DEL extends standard epistemic logic by adding dynamic operators of the form [ϕ]ψ; this means roughly that ψ is true following the announcement of ϕ. There are many variations on this logic for different kinds of announcements, and we refer the reader to (van Benthem, 2014) for a full discussion.
We are concerned with reasoning about belief dynamics in DEL. This requires a notion of plausibility. For this reason, belief revision in DEL is typically captured semantically through a plausibility model (van Benthem, 2007).
Definition 1. A plausibility model M = ⟨W, {≤_i}_{i∈I}, V⟩ consists of a set of worlds W, a well-ordering ≤_i over W for each i ∈ I, and a valuation V.
Informally, the ordering ≤_i is a plausibility ordering for the agent i; the minimal elements of the ordering are considered the most plausible. Note that {≤_i}_{i∈I} defines an accessibility relation on the set of worlds for each agent i. In particular, write w ∼_i v as a shorthand for the statement that w is comparable to v: either w ≤_i v or v ≤_i w. Then ∼_i defines a KD45 accessibility relation, familiar from standard doxastic logic.
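As a concrete, purely illustrative sketch, the following Python fragment implements a one-agent plausibility model over the signature {p, q}, with the ordering encoded as a rank function (rank 0 being most plausible). The rank-function encoding and all names are our own simplifying assumptions.

    # A toy plausibility model: one world per interpretation of {p, q}.
    worlds = ["w_pq", "w_p", "w_q", "w_0"]
    valuation = {                                  # V: atoms true at each world
        "w_pq": {"p", "q"}, "w_p": {"p"}, "w_q": {"q"}, "w_0": set(),
    }
    # The ordering <=_i for a single agent i, encoded as a rank function;
    # lower rank means more plausible.
    rank_i = {"w_pq": 0, "w_p": 1, "w_q": 1, "w_0": 2}

    def believes(rank, atom):
        """B_i atom holds iff atom is true at all minimal (most plausible) worlds."""
        best = min(rank.values())
        return all(atom in valuation[w] for w in worlds if rank[w] == best)

    print(believes(rank_i, "p"))   # True: the unique rank-0 world satisfies p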
2.4 Belief Revision in Dynamic
Epistemic Logic
Belief revision is the process that occurs when an agent receives some new information about the world. The new information is understood to be more accurate, so the agent would like to believe the new information while giving up as little as possible from their initial beliefs. This process has been studied extensively. The most widely studied formal approach for single-shot belief revision is the so-called AGM approach (Alchourrón et al., 1985). For iterated belief change, the Darwiche-Pearl approach (Darwiche and Pearl, 1997) is the natural generalization. However, in both of those cases, we can not consider nested beliefs or beliefs about trust relationships; these are both important concepts for reasoning about trusted third parties. For this reason, we focus here on belief revision in the context of DEL.
Belief revision can be captured in DEL through transformations on models. In the simplest case, we revise by a formula ϕ. For each formula ϕ and each model M, we define a new model M′ where the plausibility ordering is modified according to some reasonable revision policy. Each revision policy can then be defined in terms of a dynamic epistemic modality. We illustrate with a simple example.
Example 1. Suppose that the propositional signature is {p, q}, there is just a single agent, and that there is one possible world for each interpretation. The modality will be defined such that [p ∧ ¬q]ψ is true at any state where ψ would be true following revision by p ∧ ¬q. This is checked by evaluating the truth of ψ in the model M′ that is defined by performing lexicographic update of the plausibility ordering by p ∧ ¬q.
One important iterated revision operator is the so-
called lexicographic revision operator (Nayak et al.,
2003). In lexicographic revision by ϕ, the ordering is
shifted so that every ϕ-state precedes every ¬ϕ-state.
Within those regions, the ordering on states is left un-
changed.
Given a plausibility model M and a formula ϕ, let M ∗ ϕ denote the model obtained from M by simply modifying all of the orderings according to the lexicographic revision policy. We can then introduce a lexicographic update modality [∗ϕ] into the vocabulary such that

M, s |= [∗ϕ]ψ ⟺ M ∗ ϕ, s |= ψ.
The logic includes several modalities:
• K_i ϕ: Agent i knows ϕ.
• B_i^ϕ ψ: Agent i would believe ψ if they were given ϕ.
• [∗ϕ]: The dynamic modality for update by ϕ.
This logic can be axiomatized completely for lexicographic update, as well as other well-known revision policies (van Benthem, 2014).
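To make the lexicographic policy concrete, here is a small Python sketch, continuing the rank-function encoding used above. The function name lex_revise and the encoding of ϕ as a set of satisfying worlds are illustrative assumptions on our part.

    def lex_revise(rank, phi_worlds):
        """Lexicographic revision by phi: every phi-world is promoted above
        every non-phi-world, while ties and the relative order inside each
        block are preserved."""
        keys = {w: (w not in phi_worlds, rank[w]) for w in rank}
        levels = sorted(set(keys.values()))
        return {w: levels.index(keys[w]) for w in rank}

    # [*phi]psi can then be checked by evaluating psi in the revised model:
    rank = {"w_pq": 0, "w_p": 1, "w_q": 1, "w_0": 2}
    print(lex_revise(rank, {"w_q", "w_0"}))   # the ¬p-worlds now rank first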
3 LOGICAL FRAMEWORK
3.1 Motivation
We have seen that there are well-established logics for
reasoning about belief revision. However, these logics
simply show how an agent can revise by a formula.
We need a logic that allows an agent to revise by a
report, which is a formula that has been received from
some other agent.
We follow the basic approach outlined in (Hunter
and Booth, 2019). That is, we start with the logic
of belief revision introduced in the previous section,
and we extend it with additional modal operators for
trust. Our goal is to be able to express statements of
the following form:
Agent i would believe agent j if they said the for-
mula ϕ.
While the details in the next sections are quite formal,
the end result is just a natural logic that allows us to
express this sentiment.
3.2 Trust-Sensitive Plausibility Models
For each pair of agents i and j, we introduce a new modal operator TR_i^j with the following property: s |= TR_i^j ϕ if and only if agent i would trust agent j if they reported ϕ in the state s; that is, agent i would believe agent j if they said the formula ϕ.
This operator can be defined by introducing some new binary relations on the set of states. However, we instead define the relation with respect to the relations ≤_i and ≤_j that are already in the logic. We informally say that i trusts j to be able to distinguish the states w and v just in case i believes that j can distinguish them. This assertion can be made using the comparability relations ∼_i. For each pair of agents i, j, define T_i^j as follows:

T_i^j wv ⟺ there exists x such that w ∼_i x and x ∼_j v.   (1)
We remark that this is clearly an equivalence relation. As such, we can use T_i^j to define a notion of trust-sensitive revision by following the approach in (Booth and Hunter, 2018).
Let ∗ be the order transformation that defines lexicographic revision with respect to a particular plausibility ordering. As stated previously, ∗ defines a transformation on plausibility models in which the plausibility orderings are re-arranged according to the lexicographic revision policy. We need to modify this order transformation to produce a new order transformation ∗_T^j for each agent j. The agent j here is understood to represent the reporting agent; this is the agent that has provided some new piece of information. In the parametrized transformation, each agent will revise their beliefs so that they only believe the formulas over which the reporting agent is trusted.
Definition 2. Let M = ⟨W, {≤_i}_{i∈I}, V⟩ be a plausibility model, let ϕ be a formula, and let j ∈ I. Define M ∗_T^j ϕ to be the model with the same set of worlds and valuations, but where the orderings ≤_i^ϕ are defined as follows:

≤_i^ϕ = ≤_i ∗ {v | M, w |= ϕ and T_i^j wv for some w}

where ∗ is the lexicographic order transformation.
We now have a new mapping M ↦ M ∗_T^j ϕ, parametrized by an agent j, in which each ordering is revised differently. The transformation associated with ∗_T^j ϕ captures the way that each agent i revises their plausibility ordering when the formula ϕ is reported by the agent j. As such, rather than simply revising by ϕ, each agent now revises by the set of states that j can not distinguish from models of ϕ according to the trust relation T_i^j.
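To make Definition 2 concrete, the following Python sketch computes the trust-sensitive update on a finite model. It assumes, purely for illustration, that each comparability relation ∼_i is given as a map class_i from worlds to their comparability classes, and it repeats the lex_revise helper from the earlier sketch; all names are our own.

    def lex_revise(rank, sat_worlds):
        """Lexicographic revision: sat_worlds first, ties preserved."""
        keys = {w: (w not in sat_worlds, rank[w]) for w in rank}
        levels = sorted(set(keys.values()))
        return {w: levels.index(keys[w]) for w in rank}

    def trusted_image(phi_worlds, class_i, class_j):
        """The set {v | T_i^j wv for some phi-world w}: following Equation (1),
        v is included when some x satisfies w ~_i x and x ~_j v."""
        image = set()
        for w in phi_worlds:
            for x in class_i[w]:          # all x with w ~_i x
                image |= class_j[x]       # all v with x ~_j v
        return image

    def trust_revise(rank_i, phi_worlds, class_i, class_j):
        """Agent i's ordering after agent j reports phi (Definition 2):
        revise lexicographically by the T_i^j-image of the phi-worlds."""
        return lex_revise(rank_i, trusted_image(phi_worlds, class_i, class_j))

On this encoding, when both comparability relations are the identity, trust_revise collapses to plain lexicographic revision by ϕ.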
3.3 A Logic for Trust-Sensitive Revision
In the previous section, we introduced a special order
transformation on plausibility models that takes trust
into consideration.
A complete axiomatization of the logic L(∗) is given in (van Benthem, 2014). The axiomatization includes axioms for modelling static conditional beliefs, along with the following five axioms for [∗ϕ], where K̃_i denotes the dual of K_i:
• [∗ϕ]q ↔ q, for all atomic propositional q
• [∗ϕ]¬ψ ↔ ¬[∗ϕ]ψ
• [∗ϕ](ψ ∧ α) ↔ [∗ϕ]ψ ∧ [∗ϕ]α
• [∗ϕ]K_i ψ ↔ K_i [∗ϕ]ψ
• [∗ϕ]B_i^α ψ ↔ (K̃_i(ϕ ∧ [∗ϕ]α) ∧ B_i^{ϕ∧[∗ϕ]α} [∗ϕ]ψ) ∨ (¬K̃_i(ϕ ∧ [∗ϕ]α) ∧ B_i^{[∗ϕ]α} [∗ϕ]ψ)
We can define an extension L(∗)^T for trust-sensitive revision, by introducing the following modalities:
• K_i ϕ: Agent i knows ϕ.
• B_i^ϕ ψ: Agent i would believe ψ if they were given ϕ.
• TR_i^j ψ: Agent i trusts agent j when reporting ψ.
• [∗_j ϕ]: The dynamic modality parametrized by j.
The axioms of L(∗)^T include:
1. The standard S5 axiomatization for static knowledge for each modality K_i.
2. The standard S5 axiomatization for static knowledge for each TR_i^j.
3. An axiomatization of static conditional belief for each modality B_i^ϕ, as given in (van Benthem, 2014).
As described in (Booth and Hunter, 2018), we can modify the five axioms for [∗ϕ] to define each [∗_j ϕ]. Moreover, the following condition is guaranteed:
M, s |= [∗_j ϕ]ψ ⟺ M ∗_T^j ϕ, s |= ψ.

Hence [∗_j ϕ]ψ is true at a state s just in case ψ would be true if each agent's plausibility ordering is re-ordered by lexicographic update, with respect to the set of states that j can not distinguish from models of ϕ. It is straightforward to specify the axiomatization.
3.4 Lying
As an illustration, we describe how the logic in ques-
tion deals with agents who are lying. The notion of
lying is discussed in detail in the context of DEL in
(van Ditmarsch, 2014). However, our approach to ly-
ing here is simpler, as we do not introduce any new
formal machinery to describe lying announcements.
We acknowledge first that an agent in our framework will not be able to determine when a trusted agent is being deceptive. If an agent i trusts j on the formula ϕ, then they will certainly believe ϕ when it is reported by j. So if j falsely reports ϕ when they are trusted as an authority on ϕ, we have a problem. In order to address this, we would need to add some mechanism for modelling trust change. Such a mechanism is introduced in the context of belief revision in (Hunter, 2024). However, trust change is beyond the scope of our current discussion.
There is, however, a case of lying that our frame-
work can address. Consider an agent i that receives
a report ϕ from another agent when i actually knows
that ϕ is false. Clearly, in this case, we would like to
be certain that i will not believe the new information.
We state a relevant result with respect to knowledge.
Proposition 1. Let M be a plausibility model. For any agents i, j, if M, s |= K_j ¬ϕ and M, s |= K_i K_j ¬ϕ, then

M, s ̸|= [∗_j ϕ] ¬K_i ¬ϕ.
Hence, a deceptive report will not be incorporated when we define trust with respect to perceived knowledge. In fact, the plausibility ordering for each agent that knows j is being deceptive will be unchanged. The important point here is that the framework automatically ensures that deceptive reports will not be believed.
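As a toy sanity check of this behaviour, the Python sketch below treats knowledge as truth in all comparable worlds, under the encoding assumptions of the earlier sketches. Since the trust-sensitive update only re-ranks worlds and never alters the comparability classes, an agent who knows ¬ϕ still knows ¬ϕ after the false report.

    # Agent i's comparability class contains no phi-world, so K_i ¬phi holds.
    phi_at = {"w1": False, "w2": False}
    class_i = {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}}

    def knows_not_phi(state):
        """K_i ¬phi: ¬phi is true at every world comparable to the state."""
        return all(not phi_at[w] for w in class_i[state])

    print(knows_not_phi("w1"))   # True before j's report of phi ...
    # ... and still True afterwards: the update [*_j phi] permutes plausibility
    # ranks inside class_i but leaves the class, and hence K_i, unchanged.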
4 USING TRUSTED THIRD
PARTIES
4.1 Establishing Trust
For the moment, we put aside the manner in which a TTP might be established. In practice, this might be done through extra-logical means. In other words, it might be the case that a certain agent is created under the joint supervision of all participating parties; that agent is then trusted, but the justification can not be established in the logic.
Another possibility would be to establish the TTP
within the logic. This would require a logical method-
ology that not only models trust, but also models
trust change. As noted previously, trust change is not
something we consider in the present paper; we leave
this extension for future work.
4.2 Definition
Although we are not concerned with showing how
an agent becomes trusted, we can still define what it
means for an agent to be a TTP.
Definition 3. Let P be a propositional signature, and let L ⊆ P. We say that an agent W is a trusted third party for A, C over L for the set of models M if the following conditions hold for all M ∈ M and all p ∈ L:
1. M |= TR_A^W p ∧ B_A TR_C^W p.
2. M |= TR_C^W p ∧ B_C TR_A^W p.
We let TTP denote the conjunction of these two formulas. Hence a trusted third party for L is someone that is trusted when they assert a formula of L is true, and both parties believe the other feels the same. We remark that this definition is quite simple to state, but it would be quite difficult to guarantee that these conditions are true. But combining a modal logic with trust allows this kind of condition to be stated very compactly.
4.3 Protocol Verification
To prove that a TTP protocol is correct, we simply need to encode the protocol as a set of logical formulas. Consider the Simple Key Agreement protocol from Section 2.1. In order to show that this protocol is correct, one would need to perform the following steps.
• Formalize the protocol as a sequence of announcements P_1, . . . , P_n, made respectively by agents a_1, . . . , a_n.
• Formalize the goal of the protocol as another formula G.
• Prove that TTP |= [∗_{a_1} P_1] · · · [∗_{a_n} P_n] G.
This is an established method for verifying simple authentication protocols, which was pioneered in (Burrows et al., 1990). For a more recent discussion of symbolic approaches to proving protocol correctness, we refer the reader to (Delaune and Hirschi, 2017). The novel aspect of our work here is that the protocol steps are encoded as the application of dynamic modalities. The logic is a more sophisticated modal logic than those typically used for protocol verification, and it permits the representation of nested beliefs, explicit trust, and belief revision.
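The three-step recipe can also be phrased operationally, as in the Python sketch below. It runs on the toy rank-function models used throughout; for brevity it assumes the degenerate case where every agent fully trusts every reporter, so each report reduces to plain lexicographic revision, and all names here are illustrative.

    def lex_revise(rank, sat_worlds):
        keys = {w: (w not in sat_worlds, rank[w]) for w in rank}
        levels = sorted(set(keys.values()))
        return {w: levels.index(keys[w]) for w in rank}

    def believes(rank, atom):
        best = min(rank.values())
        return all(atom in w for w in rank if rank[w] == best)

    # Worlds are frozensets of true atoms; each agent starts with a flat ordering.
    worlds = [frozenset(), frozenset({"safe"})]
    model = {i: {w: 0 for w in worlds} for i in ("a", "b", "t")}

    # Announcements P_1, ..., P_n as (reporting agent, worlds satisfying P_k):
    announcements = [("t", {w for w in worlds if "safe" in w})]

    # Apply [*_{a_1} P_1] ... [*_{a_n} P_n] in sequence, then check the goal G.
    for reporter, sat in announcements:
        model = {i: lex_revise(rank, sat) for i, rank in model.items()}
    print(believes(model["a"], "safe") and believes(model["b"], "safe"))  # True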
We demonstrate by revisiting our example proto-
col.
Example 2. In order to formalize the protocol at a high level, we assume the propositional vocabulary includes atomic formulas of the form init(x) for x ∈ {A, B}. These are formulas that are true when A (resp. B) wants to initialize a communication session. We then assume that we have a finite set K of keys. For each key k ∈ K and each pair of agents x, y we have an atomic formula of the form safe(k, x, y). Such a formula is true when k is a safe key for communication between x and y.

We will use lower case a, b, t for the agents below, in order to avoid ambiguity with the belief modality. The Simple Key Agreement protocol can be represented as the following announcements:
1. M_1 = B_b(init(a))
2. M_2 = B_t(init(b))
3. M_3 = B_a(safe(k_0, a, b))
4. M_4 = B_b(safe(k_0, a, b))
Note that our approach is to encode messages as belief-changing announcements. The goal is the following:

G = B_a(safe(k_0, a, b)) ∧ B_b(safe(k_0, a, b)).
In this case, the agent T is a trusted third party over safe(k_0, a, b). With TTP defined over this single formula, in order to prove the protocol is correct, we would need to prove the following:

TTP |= [∗_a M_1][∗_b M_2][∗_t M_3][∗_t M_4] G.
It turns out that this can not actually be proved. The problem is that there is no connection between the first two messages sent and the last two messages sent; so there is no guarantee that agent a will believe that the key they are given was sent in the current run of the protocol.
It is worth noting that the protocol proposed in (Perrig et al., 2001) actually includes message authentication codes in the last three messages of the protocol in order to avoid this problem. For our simplified version of the protocol, however, our logic can not provide a guarantee of security.
5 DISCUSSION
5.1 Establishing Trusted Third Parties
Note that our approach to protocol analysis does not
address the problem of establishing a trusted third
party. Instead, we simply start by defining what it
means to be a trusted third party. Specifically, for an
agent A, we basically say T is a trusted third party
over some formula if they are trusted on that formula
and if A believes that B also trusts them on that for-
mula.
For simple domains where the formulas of interest make statements about keys or signatures, it is actually possible to set up real-world scenarios where this is true. For example, in most practical situations, a certificate authority would satisfy this condition for the certificates that they control.
However, if we consider cases where the potential
trusted third party may have affiliations or indepen-
dent goals, then this condition is significantly harder
to satisfy. Of course, this is the case in real scenarios
as well; it is essentially impossible to guarantee that a
trusted third party protocol works in the general case.
However, in our setting, the limitation is clear. By
restricting trusted third parties to a specific set of for-
mulas, we may be able to actually prove that an agent
can be a trusted third party by exchanging a suitable
sequence of messages.
5.2 Future Work
This has been a largely speculative paper. The aim has simply been to show how a standard dynamic epistemic logic could define a notion of trust that was suitable for formal reasoning about trusted third parties.
However, there are several directions for future work.
First of all, in order for this logic to be useful,
it must be applied to real protocols. In this prelimi-
nary work, we have focused on defining the formal-
ism and applying it to a toy protocol for illustrative
purposes. However, in future work, we will apply the
same model to more complex protocols. At present,
the best candidate protocols are those used by certifi-
cate authorities.
The second issue that must be addressed is the fact that the current methodology is not straightforward to automate. In the past, a variety of authentication logics have been defined. However, proving correctness by hand is simply not feasible for real protocols; formal verification of protocols requires powerful solvers like Scyther (Cremers, 2008), as well as precise methodologies for computer-aided design and analysis of protocols (Barbosa et al., 2021). Hence, we need to address how our new logic can be implemented efficiently to quickly analyze protocols. This is not a trivial task, because there really are not many existing implementations of modal logic. When we move to dynamic epistemic logic, the problem is worse. At a theoretical level, even basic reasoning tasks in DEL can be intractable (Charrier et al., 2019) or even undecidable (French and van Ditmarsch, 2008).
Fortunately, we do not need to implement a fully general system for reasoning in DEL. Instead, we need only worry about an efficient solver that works with the trace-based message exchange sequences that are familiar in protocol analysis. We believe that, for this restricted class of problems, a suitable implementation will indeed be feasible.
6 CONCLUSION
In this paper, we have introduced a logic for reasoning about trusted third parties. The logic is a variation of dynamic epistemic logic, where agents can reason about knowledge, belief, and announcements. We have extended the basic framework by showing how the existing plausibility orderings in DEL can be used to model knowledge-based trust. This simple extension allows us to then reason not only about trust, but also about our beliefs about trust.

The logic proposed here is flexible enough to model and perform “by hand” analysis and verification of simple, toy TTP protocols. However, in future work, we will look to implement the system to develop a tool that can automatically find holes in real protocols.
REFERENCES
Alchourrón, C. E., Gärdenfors, P., and Makinson, D. (1985). On the logic of theory change: Partial meet functions for contraction and revision. Journal of Symbolic Logic, 50(2):510–530.
Barbosa, M., Barthe, G., Bhargavan, K., Blanchet, B., Cremers, C., Liao, K., and Parno, B. (2021). SoK: Computer-aided cryptography. In IEEE Symposium on Security and Privacy, pages 777–795.
Booth, R. and Hunter, A. (2018). Trust as a precursor to
belief revision. Journal of Artificial Intelligence Re-
search, 61:699–722.
Burrows, M., Abadi, M., and Needham, R. (1990). A logic
of authentication. ACM Transactions on Computer
Systems, 8(1):18–36.
Charrier, T., Pinchinat, S., and Schwarzentruber, F. (2019).
Symbolic model checking of public announcement
protocols. Journal of Logic and Computation,
29(8):1211–1249.
Chellas, B. (1980). Modal Logic: An Introduction. Cam-
bridge University Press.
Cremers, C. (2008). The scyther tool: Verification, falsifi-
cation, and analysis of security protocols. In Proceed-
ings of the 20th International Conference on Com-
puter Aided Verification.
Darwiche, A. and Pearl, J. (1997). On the logic of iterated
belief revision. Artificial Intelligence, 89(1-2):1–29.
Delaune, S. and Hirschi, L. (2017). A survey of symbolic
methods for establishing equivalence-based properties
in cryptographic protocols. Journal of Logical and
Algebraic Methods in Programming, 87:127–144.
Erata, F., Deng, S., Zaghloul, F., Xiong, W., Demir, O.,
and Szefer, J. (2023). Survey of approaches and tech-
niques for security verification of computer systems.
Journal on Emerging Technologies in Computing Sys-
tems, 19(1).
Fakroon, M., Alshahrani, M., Gebali, F., and Traore, I.
(2020). Secure remote anonymous user authentica-
tion scheme for smart home environment. Internet of
Things, 9:100158.
French, T. and van Ditmarsch, H. (2008). Undecidability
for arbitrary public announcement logic. In Advances
in Modal Logic, pages 23–42.
Hofer-Schmitz, K. and Stojanović, B. (2020). Towards formal verification of IoT protocols: A review. Computer Networks, 174:107233.
Hunter, A. (2024). Combined change operators for trust and
belief. In Australasian Joint Conference on Artificial
Intelligence.
Hunter, A. and Booth, R. (2019). Implicit and explicit
trust in dynamic epistemic logic. In 21st International
Workshop on Trust in Agent Societies.
Kim, M., Yu, S., Lee, J., Park, Y., and Park, Y. (2020).
Design of secure protocol for cloud-assisted elec-
tronic health record system using blockchain. Sensors,
20(10).
Nayak, A., Pagnucco, M., and Peppas, P. (2003). Dy-
namic belief change operators. Artificial Intelligence,
146:193–228.
Perrig, A., Szewczyk, R., Wen, V., Culler, D., and Tygar, J.
(2001). Spins: security protocols for sensor networks.
In Proceedings of the 7th Annual International Con-
ference on Mobile Computing and Networking, pages
189–199.
Tolmach, P., Li, Y., Lin, S.-W., Liu, Y., and Li, Z. (2021).
A survey of smart contract formal specification and
verification. ACM Comput. Surv., 54(7).
Ulrich, A., Holz, R., Hauck, P., and Carle, G. (2011). Investigating the OpenPGP web of trust. In Computer Security - ESORICS, Lecture Notes in Computer Science, pages 489–507.
van Benthem, J. (2007). Dynamic logic for belief revision. Journal of Applied Non-Classical Logics, 17(2):129–155.
van Benthem, J. (2014). Logical Dynamics of Information and Interaction. Cambridge University Press.
van Ditmarsch, H. (2014). Dynamics of lying. Synthese,
191(5):745–777.
Zissis, D., Lekkas, D., and Koutsabasis, P. (2011). Cryptographic dysfunctionality - a survey on user perceptions of digital certificates. In Global Security, Safety and Sustainability and E-Democracy.