
Section 7 contains a graphical visualization of the results of this experiment; the eighth and last section generalizes the results of the project and discusses directions for future development.
2 BACKGROUND  
Software agents that know the interests, habits and priorities of the user will be able to actively assist the user with work and information and, by personalizing themselves, to take part in the user's leisure activities (Picard, 1998).
Users trust virtual fitness instructors (Ruttkay et al., 2006) to help them recover from traumas or simply to exercise, listen to virtual reporters (Nischt et al., 2006), and rely on IVA teachers (Sansonnet et al., 2006) to clarify difficult parts of school subjects. Some users prefer IVA medical assistants to explain the results of patients' medical checks (Bickmore et al.), etc.
The more serious the agent's role in the application and the more useful it is to the user, the better it is perceived (Budakova et al., 2010).
Many researchers model IVA behaviour with the aim of establishing a trust-based relationship between the user and the IVA (de Melo et al., 2009; Gratch et al., 2007; Bickmore et al., 2007; Niewiadomski et al., 2008). Thus IVAs are modelled with the capability to express so-called moral emotions (pity, gladness, sympathy, remorse) (de Melo et al., 2009), and the influence that the frequency and timing of positive feedback from the user to the IVA have on the trust between them is investigated (Gratch et al., 2007). The agent's behaviour is also modelled so that it follows the user's behaviour (Gratch et al., 2007).
A hypothesis has been derived (Budakova et al., 2010) that agents with subjective behaviour could be well accepted by users, provided this behaviour is well grounded and fair. Only in this case will it lead to user reactions such as sharpened attention, increased trust in the agent and a more natural perception of the IVA. The user also has the option of trying to meet the requirements of the IVA and gain its approval.
It is assumed that an intelligent virtual agent (IVA) capable of detecting a critical situation, analyzing it and choosing the best possible option to take care of all individuals concerned would easily gain trust. Such a behavioural model is presented in this paper with the help of the PRE-ThINK architecture.
The IVA presented in this paper is supposed to take care of both the desired and the health-related features of the environment in a family house. These two goals can conflict when a family member sets environment features that are not healthy. This evokes mixed, conflicting and social thoughts, as well as emotions, in the agent: it has to choose whether, and for how long, to continue maintaining the pre-set features, or to change them into more appropriate ones.
The IVA is not only able to follow the user's behaviour and desires; after preliminary consideration (PRE-ThINK) it can also choose the best possible action. The purpose of the agent is to take the best possible care of the family and the inhabitants of the house, even if the action undertaken does not precisely correspond to their will. It is assumed that this type of subjective behaviour would help to establish trust between the IVA and the family members.
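As a rough illustration of this kind of trade-off (a minimal sketch, not the PRE-ThINK architecture itself), an agent can score each candidate setting by how far it deviates from the user's wish and how far it falls outside a healthy range, then pick the candidate with the lowest combined cost. The healthy range, the weight and the temperature example are all assumed values for illustration only:

```python
# Illustrative sketch: an agent weighs a user-requested temperature
# against an assumed healthy range and either honours the request
# or clamps it, whichever scores better overall.

def choose_temperature(user_setpoint, healthy_range=(19.0, 24.0), health_weight=0.7):
    """Return the temperature the agent decides to maintain.

    healthy_range and health_weight are hypothetical example values.
    """
    low, high = healthy_range
    # Candidate actions: follow the user's will exactly, or clamp
    # the setpoint into the healthy range.
    candidates = [user_setpoint, min(max(user_setpoint, low), high)]

    def cost(t):
        wish_cost = abs(t - user_setpoint)          # deviation from the user's will
        health_cost = max(0.0, low - t, t - high)   # distance outside the healthy range
        return health_weight * health_cost + (1 - health_weight) * wish_cost

    return min(candidates, key=cost)
```

With these example weights the agent overrides an unhealthy request (e.g. 30 °C is clamped to 24 °C) but leaves a healthy one untouched, mirroring the behaviour described above: following the user's desire when possible, and departing from it only when health requires it.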
3 AGENT ARCHITECTURES
To demonstrate and examine the principles of intelligent behaviour, a number of models have recently been introduced that include a virtual world and the emotional software agents inhabiting it (Franklin, 2000; Wright and Sloman, 1996; Reilly, 1996; Budakova and Dakovski, 2005). Some models show how emotions are used as primary drives and as a means of learning (Gadanho, 2003). In others, emotions are defined as an evaluation system that works automatically at the perceptive and cognitive levels by measuring importance and usefulness (McCauley and Franklin, 1998).
In the architectures of intelligent agents with a clearly expressed emotional element, the components are grouped as follows: behavioural system, motive system, inner stimuli and emotion generator (Velásquez, 1997); meta-management subsystem, consultative subsystem and action subsystem (Wright and Sloman, 1996); synthesis of natural-language phrases, understanding of natural-language phrases, sensations and conceptions, inductive conclusions, memory, emotions, social behaviour and knowledge, physical state and facial expression, and a generator of actions (Reilly, 1996).
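The first of these groupings can be sketched as a pipeline in which inner stimuli are appraised into emotions, emotions select motives, and motives drive behaviour. All class and method names below are hypothetical illustrations of the grouping, not taken from the cited architectures:

```python
# Hypothetical skeleton of an emotional-agent architecture with the
# component grouping: emotion generator, motive system, behavioural system,
# driven by inner stimuli.

class EmotionGenerator:
    def appraise(self, stimulus):
        # Trivial appraisal: non-negative stimuli yield "gladness", negative "pity".
        return "gladness" if stimulus >= 0 else "pity"

class MotiveSystem:
    def select_motive(self, emotion):
        # Map the current emotion to a motive.
        return "approach" if emotion == "gladness" else "assist"

class BehaviouralSystem:
    def act(self, motive):
        # Turn the selected motive into an action label.
        return f"perform-{motive}"

class EmotionalAgent:
    def __init__(self):
        self.emotions = EmotionGenerator()
        self.motives = MotiveSystem()
        self.behaviour = BehaviouralSystem()

    def step(self, inner_stimulus):
        # One cycle: inner stimulus -> emotion -> motive -> behaviour.
        emotion = self.emotions.appraise(inner_stimulus)
        motive = self.motives.select_motive(emotion)
        return self.behaviour.act(motive)
```

The point of the sketch is only the flow of control between the grouped components; real architectures such as Velásquez's replace each trivial method with a substantial subsystem.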
The cognitive cycle of the IDA architecture (Franklin, 2000, 2001, 2004) comprises nine
ICAART 2011 - 3rd International Conference on Agents and Artificial Intelligence