Figure 1: The TangiSense table equipped for the RISK
game with tangible objects and virtual images displaying
the ground map and tangible object moves.
The RISK game is a strategic board game where players fight to
win territories. Upon start, each player is given an
army (cannons, soldiers, cavalrymen) and a set of ter-
ritories from a political map of the Earth. Each player
attacks the other players in turn. To this end, they must
first designate two territories: one of their own, which
supports the attacking armies, and one belonging to an
opponent, which is attacked. The attacking and attacked
players then throw the dice to determine who loses
and who wins the round. A sample view of the game,
as played on the TangiSense table, is provided in Fig-
ure 1. This game leaves some autonomy to the play-
ers (which army to select, which territories to attack).
However, they have to remember and follow the rules
governing each move and proceed according to well-
defined gameplay (in this case, a turn-taking proto-
col). Support for the follow-up of these rules will be
provided by the collaborative support system that we
describe in the following sections.
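As a concrete illustration of the rules the system must monitor, the dice resolution of an attack round can be sketched as follows. This is a minimal sketch assuming the standard RISK convention that sorted dice are compared pairwise and ties favor the defender; the function name is illustrative and not part of the TangiSense system:

```python
def resolve_attack(attacker_dice, defender_dice):
    """Resolve one RISK battle round by comparing sorted dice pairwise.

    Returns (attacker_losses, defender_losses). Ties favor the
    defender, per the standard RISK rules assumed here.
    """
    a = sorted(attacker_dice, reverse=True)
    d = sorted(defender_dice, reverse=True)
    attacker_losses = defender_losses = 0
    # Only as many comparisons as the shorter dice list allows.
    for a_die, d_die in zip(a, d):
        if a_die > d_die:
            defender_losses += 1
        else:
            attacker_losses += 1
    return attacker_losses, defender_losses

# Example: attacker rolls (6, 3, 2), defender rolls (6, 2).
# 6 vs 6 is a tie, so the attacker loses one army;
# 3 vs 2 means the defender loses one army.
```

A collaborative support system can apply such a rule automatically once the dice values are read from the table, leaving the players free to focus on strategy.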
2 STATE OF THE ART
2.1 Collaborative Support Systems
One major challenge when designing collaborative
support systems is to preserve the spontaneity and flu-
idity of human activity while ensuring the consistency
and proper coordination of action (Pape and Graham,
2010). Informal and opportunistic working styles
should indeed be promoted (Gutwin et al., 2008); at
the same time, the role of the system is to support
the building of a common vision or so-called "mutual
awareness" (Kraut et al., 2003). Physical co-presence
provides multiple resources for awareness and con-
versational grounding. This has to be complemented
in the case of distant communication. Tangible inter-
action occupies a specific niche in this respect, since
tangible objects may be seen as full resources to sit-
uate action (Shaer and Hornecker, 2010). Commu-
nication is then grounded in the objects of the work-
ing space, and some of them may be designed to sup-
port action coordination and elicitation. Visual infor-
mation then becomes a conversational resource that
allows maintaining mutual awareness (Kraut et al.,
2003). Beyond conversation, perceiving the other’s
activity may be approached from the viewpoint of
the other’s social embodiment, that is considering
the constraints and rules that shape individual activ-
ity (Erickson and Kellogg, 2003). These issues were
discussed in more depth in a previous paper (Garbay
et al., 2012). In particular, we proposed the introduction
of tangigets, tangible objects aimed at supporting
distant coordination, and "norms", declarative rules
aimed at representing social laws and conventions and
governing the processing of tangible object moves.
Managing human activities in distributed environments
requires the adoption of complex, emergent and
adaptive system design, where
flexibility, re-configurability and responsiveness play
crucial roles (Millot and Mandiau, 1995). Various
architecture models have been proposed in this re-
spect. As noted by Kolski et al. (2009), these
models have been largely inspired by interactive sys-
tems architectures. Among these, CoPAC, PAC* or
CLOVER (Laurillau and Nigay, 2002) propose a dis-
tinction between production, communication and co-
ordination spaces. Such distinction is of interest to
our work, since there is a need to (i) track the state of
distant objects, (ii) track the state of the collaboration,
and (iii) provide feedback about object moves.
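This three-space distinction can be reflected directly in a system's architecture. The following sketch separates the three concerns so that object moves, collaboration state, and feedback are handled by distinct components; the class and function names are illustrative assumptions, not taken from CoPAC, PAC* or CLOVER themselves:

```python
class ProductionSpace:
    """Tracks the domain objects (here: tangible objects and positions)."""
    def __init__(self):
        self.positions = {}

    def move(self, obj, position):
        self.positions[obj] = position


class CoordinationSpace:
    """Tracks the state of collaboration, e.g. a turn-taking protocol."""
    def __init__(self, players):
        self.players = players
        self.turn = 0

    def current_player(self):
        return self.players[self.turn % len(self.players)]

    def next_turn(self):
        self.turn += 1


class CommunicationSpace:
    """Delivers feedback about moves to all participants."""
    def __init__(self):
        self.messages = []

    def notify(self, text):
        self.messages.append(text)


def play_move(prod, coord, comm, player, obj, position):
    """A move is checked against the protocol, recorded, and broadcast."""
    if player != coord.current_player():
        comm.notify(f"{player}: not your turn")
        return False
    prod.move(obj, position)
    comm.notify(f"{player} moved {obj} to {position}")
    coord.next_turn()
    return True
```

The point of the separation is that the turn-taking check lives only in the coordination space, so the production space stays a plain record of object positions and the feedback channel can be replaced (e.g. by table-top visual cues) without touching either.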
2.2 Normative Multiagent Systems
The goal in normative multiagent systems is to model
cooperation and coordination under a social per-
spective. In such systems, norms drive agents to-
ward "proper and acceptable behavior" and define
"a principle of right action binding upon the mem-
bers of a group" (Boella et al., 2007). These norms
are usually represented as production rules of the
form: "whenever ⟨context⟩ if ⟨state⟩ then ⟨agent⟩
is ⟨deontic operator⟩ to do ⟨action⟩" (Boella et al.,
2007). Specific to this style of programming is the
fact that agents autonomously commit to obey the
norms, in a way specified by the deontic operator.
Any agent may, however, decide not to follow a
norm; this may result in penalties. The implementation
of normative agent architectures is very often
based on the belief, desire, and intention (BDI)
paradigm, with norms seen as external incentives for
action (Dignum et al., 2002). Norms are triggered by
a dedicated engine and result in agent notifications.
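The production-rule reading of a norm can be made concrete with a minimal engine. In this sketch, each norm pairs a context and state condition with an agent, a deontic operator, and an action, following the quoted schema; triggering results in agent notifications. The engine itself and all names are illustrative assumptions, not a description of any cited system:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Norm:
    """'whenever <context> if <state> then <agent> is <deontic> to do <action>'"""
    context: Callable[[dict], bool]  # whenever <context>
    state: Callable[[dict], bool]    # if <state>
    agent: str                       # then <agent>
    deontic: str                     # is <deontic operator>, e.g. 'obliged'
    action: str                      # to do <action>

@dataclass
class NormEngine:
    norms: list = field(default_factory=list)
    notifications: list = field(default_factory=list)

    def trigger(self, world: dict):
        """Evaluate every norm against the world state; notify agents."""
        for n in self.norms:
            if n.context(world) and n.state(world):
                self.notifications.append((n.agent, n.deontic, n.action))

# Example norm: whenever it is the attack phase, if no defending
# territory has been chosen yet, the player is obliged to designate one.
norm = Norm(
    context=lambda w: w["phase"] == "attack",
    state=lambda w: w["defending_territory"] is None,
    agent="player1",
    deontic="obliged",
    action="designate a defending territory",
)
engine = NormEngine(norms=[norm])
engine.trigger({"phase": "attack", "defending_territory": None})
```

Note that the notification is an incentive rather than a forced action: whether the agent complies remains its own decision, in line with the penalty mechanism described above.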
Another specific feature of this modeling is that
norms may evolve along the course of action. This
ICAART 2014 - International Conference on Agents and Artificial Intelligence