As explained in Section 7.2.1, at the first step of her recursive decision-making process for task selection, yuyu reasons on the set of agents $A_{y,1} = \{xenia, yuyu, zoe\}$ and on the set of tasks $T_{y,1} = \{\tau_1, \tau_2\}$. yuyu computes the success-outcome utilities of the tasks in $T_{y,1}$. She goes through the task-utility computation process to compute $U_y(o_1^+)$ as described in Section 7.2.2: she can do $\tau_1$ since she has the required abilities, and she wants to do $\tau_1$ since $\tau_1$ contributes to her goal $\gamma_1$. She computes her general ability value for $\tau_1$ and obtains $a_{y,\tau_1} = 0.5$. She finally computes $U_y(o_1^+)$ as described in Equation 4 and obtains $U_y(o_1^+) \approx 0.54$. She applies the same process to $o_2^+$ and obtains $U_y(o_2^+) = f^{Imp}_y(\gamma_2) = 0.375$.
She then uses her theory-of-mind capability to compute what she thinks xenia's utilities are, applying the same process as for herself. She obtains $U^y_x(o_1^+) = 0.375$ and $U^y_x(o_2^+) = 0.75$. She does the same for zoe and obtains $U^y_z(o_1^+) \approx 0.82$ and $U^y_z(o_2^+) = 0$.
yuyu then generates task distributions as explained in Section 7.2.3. We do not list them here, but the maximal task-distribution utility is obtained when yuyu and zoe are assigned to $\tau_1$ and xenia is assigned to $\tau_2$. Hence yuyu selects the task $\tau_1$.
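To make this step concrete, here is a minimal Python sketch of the distribution choice. It assumes, as a simplification rather than the exact definition of Section 7.2.3, that a task distribution's utility is the sum of the believed success-outcome utilities of the agents on their assigned tasks; the numbers are the utilities computed above.

```python
from itertools import product

# yuyu's believed success-outcome utilities from the example:
# U_y(o1+) ~ 0.54, U_y(o2+) = 0.375, and her theory-of-mind
# estimates for xenia and zoe.
utilities = {
    "yuyu":  {"tau1": 0.54,  "tau2": 0.375},
    "xenia": {"tau1": 0.375, "tau2": 0.75},
    "zoe":   {"tau1": 0.82,  "tau2": 0.0},
}
agents = list(utilities)
tasks = ["tau1", "tau2"]

def distribution_utility(assignment):
    # Simplifying assumption: the distribution's utility is the sum of
    # each agent's believed utility on the task it is assigned to.
    return sum(utilities[a][t] for a, t in assignment.items())

# Enumerate every assignment of agents to tasks and keep the best one.
best = max(
    (dict(zip(agents, choice)) for choice in product(tasks, repeat=len(agents))),
    key=distribution_utility,
)
print(best)  # {'yuyu': 'tau1', 'xenia': 'tau2', 'zoe': 'tau1'}
```

With these values, the best distribution is indeed yuyu and zoe on $\tau_1$ and xenia on $\tau_2$, matching the choice described above.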
Because $\tau_1$ is an abstract task, she recursively starts the task-selection process again, as described in Section 7.2.1, to choose one of $\tau_1$'s subtasks: this is step 2 of her recursive task-selection process. At this step, she reasons on the set of tasks $T_{y,2} = T_1 = \{\tau_{11}, \tau_{12}\}$ and on the set of agents $A_{y,2} = \{yuyu, zoe\}$, since she thinks zoe will also choose $\tau_1$. This second step of recursive decision-making is similar to the first one, so we do not develop it here. At the end of this step, yuyu chooses the task $\tau_{12}$ (because both $\tau_{11}$ and $\tau_{12}$ contribute to her goal $\gamma_1$, she is skilled at $\tau_{12}$, and she thinks zoe will also choose $\tau_{12}$). This task is a leaf task that corresponds to an action, hence the recursion stops here, and yuyu will try to execute the action that corresponds to $\tau_{12}$.
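The recursion itself can be sketched as follows. This is an illustrative outline under simplifying assumptions, not the paper's implementation: each agent greedily picks its believed best task in place of the full distribution-utility maximization of Section 7.2.3, and the subtask utilities for $\tau_{11}$ and $\tau_{12}$ are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    utility: dict                 # believed utility per agent (illustrative)
    subtasks: list = field(default_factory=list)

def select_action(agent, tasks, agents):
    """Recursive task selection: pick a task, and if it is abstract,
    recurse on its subtasks with the agents believed to have chosen
    the same task; stop at a leaf task, which corresponds to an action."""
    # Simplification: each agent picks the task with its highest utility.
    choices = {a: max(tasks, key=lambda t: t.utility[a]) for a in agents}
    chosen = choices[agent]
    if not chosen.subtasks:       # leaf task: the recursion stops here
        return chosen
    co_selectors = [a for a in agents if choices[a] is chosen]
    return select_action(agent, chosen.subtasks, co_selectors)

# Task tree from the example: tau1 is abstract with subtasks tau11, tau12.
tau11 = Task("tau11", {"yuyu": 0.4, "zoe": 0.3})
tau12 = Task("tau12", {"yuyu": 0.6, "zoe": 0.7})
tau1 = Task("tau1", {"yuyu": 0.54, "xenia": 0.375, "zoe": 0.82}, [tau11, tau12])
tau2 = Task("tau2", {"yuyu": 0.375, "xenia": 0.75, "zoe": 0.0})

print(select_action("yuyu", [tau1, tau2], ["yuyu", "xenia", "zoe"]).name)  # tau12
```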
9 CONCLUSIONS
We proposed in this paper decision-making mechanisms for generating agent behavior in collective activities. We proposed an augmentation of ACTIVITY-DL that supports collective-activity description. We defined activity instances that represent agents' progress in the collective activity and on which agents can directly reason to select their actions. We proposed an agent model based on the trust model of (Mayer et al., 1995) and a trust-based decision-making system that allow agents to reason on activity instances and to take their teammates into account when selecting an action. We gave an example of the functioning of the activity-treatment module and of the decision-making system. Further work perspectives include testing the decision-making system when agents have false beliefs about others, and evaluating the credibility of the produced behaviors. The model could also be extended so that agents could act purposely to harm the team or their teammates, which is not currently possible since agents can only decide not to help the team.
ACKNOWLEDGEMENTS
This work was carried out in the framework of the VICTEAMS project (ANR-14-CE24-0027, funded by the National Agency for Research) and funded by both the Direction Générale de l'Armement (DGA) and the Labex MS2T, which is supported by the French Government through the program "Investments for the Future" managed by the National Agency for Research (Reference ANR-11-IDEX-0004-02).
REFERENCES
Barot, C., Lourdeaux, D., Burkhardt, J.-M., Amokrane, K., and Lenne, D. (2013). V3S: A Virtual Environment for Risk-Management Training Based on Human-Activity Models. Presence: Teleoperators and Virtual Environments, 22(1):1–19.

Carruthers, P. and Smith, P. K. (1996). Theories of Theories of Mind. Cambridge University Press, Cambridge.

Castelfranchi, C. and Falcone, R. (2010). Trust Theory: A Socio-Cognitive and Computational Model, volume 18 of Wiley Series in Agent Technology. John Wiley & Sons.

Chevaillier, P., Trinh, T.-H., Barange, M., De Loor, P., Devillers, F., Soler, J., and Querrec, R. (2012). Semantic modeling of Virtual Environments using MASCARET. pages 1–8. IEEE.

Gerbaud, S., Mollet, N., and Arnaldi, B. (2007). Virtual environments for training: from individual learning to collaboration with humanoids. In Technologies for E-Learning and Digital Entertainment, pages 116–127. Springer.

Lochbaum, K. E., Grosz, B. J., and Sidner, C. L. (1990). Models of Plans to Support Communication: An Initial Report.

Marsh, S. and Briggs, P. (2009). Examining trust, forgiveness and regret as computational concepts. In Computing with Social Trust, pages 9–43. Springer.

Mayer, R. C., Davis, J. H., and Schoorman, F. D. (1995). An Integrative Model of Organizational Trust. The Academy of Management Review, 20(3):709.