USING MBIUI LIFE-CYCLE FRAMEWORK FOR AN
AFFECTIVE BI-MODAL USER INTERFACE
Katerina Kabassi
Department of Ecology and the Environment, Technological Educational Institute of the Ionian Islands
2 Kalvou Sq., 29100 Zakynthos, Greece
Maria Virvou, Efthymios Alepis
Department of Informatics, University of Piraeus
80 Karaoli & Dimitriou St., 18534, Piraeus, Greece
Abstract: Decision making theories seem very promising
for improving human-computer interaction. However, the
actual process of incorporating multi-criteria analysis into an intelligent user interface involves several
development steps that are not trivial. Therefore, we have employed and tested the effectiveness of a
unifying life-cycle framework that may be used for the application of many different multi-criteria decision
making theories. The life-cycle framework is called MBIUI and in this paper we show how we have used it
for employing a multi-criteria decision making theory, called Simple Additive Weighting, in an affective bi-
modal educational system. More specifically, we describe the experimental studies for designing,
implementing and testing the decision making theory. The decision making theory has been adapted in the
user interface for combining evidence from two different modes and providing affective interaction.
1 INTRODUCTION
Decision making theories are rather promising for knowledge-based software. However, multi-criteria analysis has not been used adequately in intelligent user interfaces, even though user-computer interaction is, by nature, multi-criteria-based. The actual process of incorporating multi-criteria analysis into an intelligent user interface is neither clearly defined nor adequately described in the literature. Furthermore, Hull et al. (2002) point out that as systems become more complex, their development and maintenance become a major challenge. This is particularly the case for software that incorporates intelligence. Indeed, intelligent systems are quite complex, and they have to be developed based on software engineering approaches that are quite generic and do not specialise in the particular difficulties of the intelligent approach that is to be used.
For this purpose, a life-cycle framework has been developed for the incorporation of a multi-criteria theory in an Intelligent User Interface (IUI). This framework is called the MBIUI (Multi-criteria Based Intelligent User Interface) life-cycle framework (Kabassi & Virvou 2006) and involves the description of a software life-cycle that gives detailed information and guidelines about the experiments that need to be conducted, the design of the software, the selection of the right decision making theory and the evaluation of the IUI that incorporates a decision making theory.
The MBIUI life-cycle framework is based on the Rational Unified Process (RUP) (Jacobson et al. 1999). RUP is clearly documented and, due to its clarity, easy to use. For this reason, RUP seems quite suitable for the development of knowledge-based systems. Indeed, RUP's rationale has also been adopted in other methodologies for knowledge-based systems, e.g. ADELFE (Bernon et al. 2003).
In this paper, we test the effectiveness of this framework by applying it to the development of an affective bi-modal user interface. The particular interface is called Edu-Affe-Mikey and is an affective educational user interface targeted to first-year medical students. Emphasis has been given to the application of the MBIUI life-cycle framework for employing a multi-criteria decision making method that combines evidence from two different modes in order to identify the users' emotions. More specifically, Simple Additive Weighting (SAW) (Fishburn 1967, Hwang & Yoon 1981) has been applied in the educational user interface for evaluating the different emotions, taking into account the input of the two different modes, and selecting the one that seems most likely to have been felt by the user. In this respect, emotion recognition is based on several criteria that a human tutor would have used in order to perform emotion recognition of his/her students during the teaching course.
A main difference of the proposed approach from other systems that employ decision making theories (Naumann 1998; Schütz & Schäfer 2001; Bohnenberger et al. 2005; Kudenko et al. 2003) is that the values of the weights of the criteria are not static. More specifically, the values of the weights used in the proposed approach are acquired from user stereotypes and differ for the different categories of users. Stereotypes constitute a common user modelling technique for drawing assumptions about users belonging to different groups (Rich 1989; 1999). In our case, user stereotypes have been constructed with respect to the different emotional states of users that are likely to occur in typical situations during the educational process and their interaction with the educational software, and they provide the weights of the criteria.
The main body of this paper is organized as
follows: In section 2 we present and discuss the
MBIUI life-cycle framework. In sections 3 and 4 we
present briefly the experimental studies for
requirements capture and analysis. Section 5
describes the design of the affective bi-modal
educational application and section 6 its main
characteristics. In section 7 we present and discuss
the results of the evaluation of the multi-criteria
model. Finally, in section 8 we give the conclusions
drawn from this work.
2 MBIUI LIFE-CYCLE FRAMEWORK
The MBIUI life-cycle framework is based on RUP, which provides a framework of a software life-cycle that is based on iterations, and maintains its phases and procedural steps. However, RUP specifies neither what sort of requirements analysis has to take place nor what kind of prototype has to be produced during each phase or procedural step. Such specifications are provided by our MBIUI framework concerning IUIs that are based on multi-criteria theories.
According to MBIUI framework, during the
inception phase, the requirements capture is
conducted. During requirements capture, a prototype
is developed and the main requirements of the user
interface are specified. At this point the multi-
criteria decision making theory that seems most
promising for the particular application has to be
selected. This decision may be revised in the
procedural step of requirements capture in the phase
of construction.
According to MBIUI, in the inception phase,
during analysis, two different experiments are
conducted in order to select the criteria that are used
in the reasoning process of the human advisors as
well as their weights of importance. The
experiments should be carefully designed, since the
kind of participants as well as the methods selected
could eventually affect the whole design of the IUI.
Both experiments involve human experts in the domain under review.
The information collected during the two
experiments of the empirical study is further used
during the design phase of the system, where the
decision making theory that has been selected is
applied to the user interface. More specifically, in
the elaboration phase, during design, the user
modelling component of the system is designed and
the decision making model is adapted for the
purposes of the particular domain. Kass and Finin
(1989) define the user model as the knowledge
source of a system that contains hypotheses
concerning the user that may be important in terms
of the interactive behaviour of the system.
In the elaboration phase, during implementation,
the user modelling component of the system as well
as the basic decision making mechanisms are
developed. As a result, a new version of the IUI is developed, which fully incorporates the multi-criteria decision making theory.
In the construction phase, during testing, the IUI
that incorporates the multi-criteria decision making
theory is evaluated. The evaluation of IUIs is very
important for their accuracy, efficiency and
usefulness. In MBIUI, evaluation is considered
important for two reasons: 1) the effectiveness of the particular decision making theory that has been used has to be evaluated; and 2) the effectiveness of the IUI in general has to be evaluated. In case the version of
the IUI that incorporates a particular decision
making theory does not render satisfactory
evaluation results with respect to real users and
human experts, then the designers have to return to
requirements capture, select an alternative decision
making model and a new iteration of the life cycle
takes place. In the transition phase, during testing, the decision making model that has been finally selected is evaluated, and possible refinements of that model may take place if this is considered necessary.
3 REQUIREMENTS CAPTURE
In the inception phase, during the procedural step of requirements capture, the basic requirements of the system are specified. For this purpose, we conducted an empirical study. Because an affective bi-modal user interface differs from common user interfaces, the main aim of the particular experiment was to find out how users express their emotions through a bi-modal interface that combines voice recognition and input from the keyboard.
50 users (male and female), in the age range 17-19 and at the novice level of computer experience, participated in the experiment. The particular users were selected because such a profile describes the majority of first-year medical students in the Greek university that the educational application is targeted to. They are usually between the ages of 17 and 19 and usually have only limited computing experience, since the background knowledge required for medical studies does not include advanced computer skills.
These users were given questionnaires concerning their emotional reactions to several situations of computer use, in terms of their actions using the keyboard and what they say. Participants were asked to determine what their possible reactions might be when they are at certain emotional states during their interaction. Our aim was to recognise the possible changes in the users' behaviour and then to associate these changes with emotional states like anger, happiness, boredom, etc. After collecting and processing the information of the empirical study, we came up with results that led to the design of the affective module of the educational application. For this purpose, some common positive and negative feelings were identified.
The results of the empirical study were also used for designing the user stereotypes. In our study, user stereotypes were built first by categorizing users by their age, their educational level and their computer knowledge level. The underlying reasoning for this is that people's behaviour while doing something may be affected by several factors concerning their personality, age, experience, etc. For example, experienced computer users may be less frustrated than novice users, or older people may have different approaches to interacting with computers compared with younger people. Younger computer users are usually more expressive than older users while interacting with an animated agent, and we may expect to acquire more data from the audio mode than from the keyboard. The same holds when a user is less experienced in using a computer than a user with a high computer knowledge level. In all these cases, stereotypes were used to indicate which specific characteristics of a user's behaviour should be taken into account in order to make more accurate assumptions about the user's emotional state.
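To make this concrete, the following sketch shows one possible representation of such stereotypes. This is our own illustration rather than code from Edu-Affe-Mikey; the trigger attributes follow the categorisation above, but the identifiers and the sample default assumptions are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class StereotypeTrigger:
    """Conditions that activate a stereotype (categories from Section 3)."""
    age_band: str          # e.g. "17-19"
    education: str         # e.g. "first-year medical student"
    computer_skill: str    # e.g. "novice"

# Hypothetical stereotype bodies; the real bodies hold the criterion
# weights presented in Section 4.2.
STEREOTYPE_BODIES = {
    StereotypeTrigger("17-19", "first-year medical student", "novice"):
        {"dominant_mode": "audio", "expressiveness": "high"},
    StereotypeTrigger("25+", "postgraduate", "experienced"):
        {"dominant_mode": "keyboard", "expressiveness": "low"},
}

def activate(age_band: str, education: str, computer_skill: str) -> dict:
    """Return the default assumptions for the matching user category."""
    return STEREOTYPE_BODIES.get(
        StereotypeTrigger(age_band, education, computer_skill), {})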
4 ANALYSIS
According to MBIUI, during analysis, two different
experiments are conducted. The first experiment
aims at determining the criteria that are used in the
reasoning process of the human advisors and the
second aims at calculating their weights of
importance.
4.1 Determining Multiple Criteria
Decision making theories provide precise
mathematical methods for combining criteria in
order to make decisions but do not define the
criteria. Therefore, in order to locate the criteria that
human experts take into account while providing
individualised advice, we conducted an empirical
study.
The empirical study should involve a satisfactory number of human experts, who act as the human decision makers and are interviewed about the criteria that they take into account when providing individualised advice. Therefore, in the experiment conducted for the application of the multi-criteria theory in the e-learning system, 16 human experts were selected to participate in the empirical study. All the human experts possessed a first and/or higher degree in Computer Science.
The participants of the empirical study were asked which input actions from the keyboard and the microphone would help them find out what the emotions of the users were. From the input actions that appeared in the experiment, only those proposed by the majority of the human experts were selected. In particular, considering the keyboard we have 6 cases: a) user types normally, b) user types quickly (speed higher than the usual speed of the particular user), c) user types slowly (speed lower than the usual speed of the particular user), d) user uses the backspace key often, e) user hits unrelated keys on the keyboard, f) user does not use the keyboard.
Considering the users' basic input actions through the microphone we have 7 cases: a) user speaks using strong language, b) user uses exclamations, c) user speaks with a high voice volume (higher than the average recorded level), d) user speaks with a low voice volume (lower than the average recorded level), e) user speaks in a normal voice volume, f) user speaks words from a specific list of words showing an emotion, g) user does not say anything.
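To illustrate, these thirteen input actions can be encoded as enumerations, as in the sketch below (our own paraphrase of the lists above; the names are not from the paper):

from enum import Enum

class KeyboardAction(Enum):
    K1_TYPES_NORMALLY = 1
    K2_TYPES_QUICKLY = 2        # faster than the particular user's usual speed
    K3_TYPES_SLOWLY = 3         # slower than the particular user's usual speed
    K4_BACKSPACE_OFTEN = 4
    K5_UNRELATED_KEYS = 5
    K6_NO_KEYBOARD_USE = 6

class MicrophoneAction(Enum):
    M1_STRONG_LANGUAGE = 1
    M2_EXCLAMATIONS = 2
    M3_HIGH_VOICE_VOLUME = 3    # above the average recorded level
    M4_LOW_VOICE_VOLUME = 4     # below the average recorded level
    M5_NORMAL_VOICE_VOLUME = 5
    M6_WORD_FROM_EMOTION_LIST = 6
    M7_SAYS_NOTHING = 7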
4.2 Determining the Weights of Importance of the Criteria
During requirements capture, the main categories of users and emotions were identified. As a result, the main stereotypes were designed. For the design of the body of the stereotypes we used the results of the empirical study described in section 4.1, in which we categorized users' input actions in terms of the two modes of the bi-modal system. These actions would indicate possible changes in a user's emotional state while s/he interacted with the system. However, in order to identify the weights of the criteria (input actions), another experimental study was conducted.
More specifically, 50 medical students were asked to use Edu-Affe-Mikey, which incorporated a user modelling component. The user modelling component recorded all users' actions, acting as a filter between the user and the main educational application. Then these actions were classified according to the six and seven basic input actions regarding the keyboard and the microphone respectively.
Table 1: Values for the stereotypic weights for the emotions of happiness and anger concerning input from the keyboard.

Using the keyboard
input action   Emotion of happiness   Emotion of anger
k1             0.4                    0.11
k2             0.4                    0.14
k3             0.1                    0.18
k4             0.05                   0.2
k5             0.05                   0.25
k6             0                      0.12
The results of the empirical study were collected and analyzed. The analysis revealed how important each input action is in identifying each emotion. Therefore, the weights of the criteria (input actions) for all emotions were identified and the default assumptions of the stereotypes were designed.
Tables 1 and 2 illustrate the values of the weights for two opposite (positive/negative) emotions, namely the emotion of happiness and the emotion of anger. Variables k1 to k6 refer to the weights of the six basic input actions from the keyboard, while variables m1 to m7 refer to the weights of the seven possible input cases concerning interaction through the microphone. We may also note that for each emotion and for each mode the values of the weights sum to 1.
Table 2: Values for the stereotypic weights for the emotions of happiness and anger concerning input from the microphone.

Using the microphone
input action   Emotion of happiness   Emotion of anger
m1             0.06                   0.19
m2             0.18                   0.09
m3             0.15                   0.12
m4             0.02                   0.05
m5             0.14                   0.12
m6             0.3                    0.27
m7             0.15                   0.16
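In code, the stereotype bodies reduce to weight vectors per emotion and mode. The sketch below (identifiers are ours; the values are copied from Tables 1 and 2) also verifies the property just noted, that each vector sums to 1:

# Stereotypic weights for k1..k6 and m1..m7 (Tables 1 and 2).
WEIGHTS = {
    ("happiness", "keyboard"):   [0.4, 0.4, 0.1, 0.05, 0.05, 0.0],
    ("anger", "keyboard"):       [0.11, 0.14, 0.18, 0.2, 0.25, 0.12],
    ("happiness", "microphone"): [0.06, 0.18, 0.15, 0.02, 0.14, 0.3, 0.15],
    ("anger", "microphone"):     [0.19, 0.09, 0.12, 0.05, 0.12, 0.27, 0.16],
}

# Each weight vector sums to 1 for its emotion/mode pair.
for key, ws in WEIGHTS.items():
    assert abs(sum(ws) - 1.0) < 1e-9, key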
5 DESIGN
In MBIUI, the design of the running application is mainly concerned with the design of the user model and is divided into two major parts with respect to the application of a multi-criteria decision making theory: 1) design decisions about how the values of the criteria are estimated based on the information of the user model, and 2) the design of the embedding of the actual multi-criteria theory that has been selected into the system.
The input actions that were identified by the human experts during the first experimental study of the analysis provided information about the actions that affect the emotional states that may occur while a user interacts with an educational system. These input actions are considered as criteria for evaluating all the different emotions and selecting the one that seems most prevalent. More specifically, each emotion is evaluated first using only the criteria (input actions) from the keyboard and then using only the criteria (input actions) from the microphone. For the evaluation of each alternative emotion the system uses SAW for a particular category of users.
According to SAW, the multi-criteria utility function for each emotion in each mode is estimated as a linear combination of the values of the criteria that correspond to that mode.
In view of the above, the evaluation of each emotion $e_i$ taking into account the information provided by the keyboard is done using Formula 1:

$em_1^{e_i} = w_{k_1}^{e_i} k_1 + w_{k_2}^{e_i} k_2 + w_{k_3}^{e_i} k_3 + w_{k_4}^{e_i} k_4 + w_{k_5}^{e_i} k_5 + w_{k_6}^{e_i} k_6$  (1)

Similarly, the evaluation of each emotion taking into account the information provided by the other mode (microphone) is done using Formula 2:

$em_2^{e_i} = w_{m_1}^{e_i} m_1 + w_{m_2}^{e_i} m_2 + w_{m_3}^{e_i} m_3 + w_{m_4}^{e_i} m_4 + w_{m_5}^{e_i} m_5 + w_{m_6}^{e_i} m_6 + w_{m_7}^{e_i} m_7$  (2)

$em_1^{e_i}$ is the probability that emotion $e_i$ has occurred based on the keyboard actions and $em_2^{e_i}$ is the probability that refers to the same emotional state using the user's input from the microphone. Both $em_1^{e_i}$ and $em_2^{e_i}$ take their values in [0,1].
In Formula 1 the k's from k1 to k6 refer to the six basic input actions that correspond to the keyboard. In Formula 2 the m's from m1 to m7 refer to the seven basic input actions that correspond to the microphone. These variables are Boolean. At each moment the system takes data from the bi-modal interface and translates them in terms of keyboard and microphone actions. If an action has occurred, the corresponding criterion takes the value 1, otherwise its value is set to 0. The w's represent the weights. These weights correspond to a specific emotion and to a specific input action and are acquired from the stereotype database. More specifically, the weights are acquired from the stereotypes about the emotions.
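Under these definitions, the SAW evaluation of one emotion in one mode reduces to a dot product of the Boolean criterion vector with the stereotypic weight vector. A minimal sketch of ours:

def saw_score(weights: list[float], fired: list[bool]) -> float:
    """Formulae 1 and 2: sum of w_i * c_i, where c_i is 1 if the
    corresponding input action occurred in the mode and 0 otherwise."""
    assert len(weights) == len(fired)
    return sum(w * int(c) for w, c in zip(weights, fired))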
In cases where both modes (keyboard and microphone) indicate the same emotion, the probability that this emotion has occurred increases significantly. Otherwise, the mean of the values obtained from the evaluation of each emotion using Formulae 1 and 2 is calculated. The system then compares the values of all the different emotions and determines whether an emotion is taking effect during the interaction.
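A sketch of this combination step follows. The paper gives the mean for the general case but does not quantify how much the probability increases when the two modes agree, so the boost factor below is purely our assumption:

def combine(keyboard_value: float, mic_value: float,
            modes_agree: bool, boost: float = 1.5) -> float:
    """Combine the two per-mode values for one emotion."""
    mean = (keyboard_value + mic_value) / 2
    if modes_agree:
        # Hypothetical increase; the paper only says "significantly".
        return min(1.0, boost * mean)
    return mean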
As an example, we give the two formulae with their weights for the two modes of interaction that correspond to the emotion of happiness ($e_1$) when a user (under the age of 19 and novice with respect to his/her computer skills) gives the correct answer in a test of our educational application. Considering the keyboard we have:

$em_1^{e_1} = 0.4 k_1 + 0.4 k_2 + 0.1 k_3 + 0.05 k_4 + 0.05 k_5 + 0 \cdot k_6$
In this formula, which corresponds to the emotion of happiness, we can observe that the highest weight values correspond to normal and quick typing. Slow typing, 'often use of the backspace key' and 'use of unrelated keys' are actions with lower stereotypic weight values. Absence of typing is unlikely to take place.
Concerning the second mode (microphone) we have:

$em_2^{e_1} = 0.06 m_1 + 0.18 m_2 + 0.15 m_3 + 0.02 m_4 + 0.14 m_5 + 0.3 m_6 + 0.15 m_7$
In the second formula, which also corresponds to the emotion of happiness, we can see that the highest weight corresponds to m6, which refers to the 'speaking of a word from a specific list of words showing an emotion' action. The empirical study gave us strong evidence for a specific list of words. In the case of words that express happiness, these words are most likely to occur in a situation where a novice young user gives a correct answer to the system. The weights for variables m2 and m3, which correspond to the use of exclamations and to a raised voice volume, are also quite high. In our example the user may react orally, by using the keyboard, or by a combination of the two modes. The absence or presence of an action in each mode gives the Boolean values to the variables k1...k6 and m1...m7.
A possible situation where a user would use both the keyboard and the microphone could be the following: The specific user knows the correct answer and types at a speed higher than his/her normal writing speed. The system confirms that the answer is correct and the user says a word like 'bravo' that is included in the system's specific list of words for the emotion of happiness. The user also speaks in a higher voice volume. In that case the variables k1, m3 and m6 take the value 1 and all the others are zeroed. The above formulae then give us
$em_1^{e_1} = 0.4 \cdot 1 = 0.4$ and $em_2^{e_1} = 0.15 \cdot 1 + 0.3 \cdot 1 = 0.45$.
In the same way the system then calculates the corresponding values for all the other emotions using their own formulae. For each basic action in the educational application and for each emotion, the corresponding formula has different weights deriving from the stereotypical analysis of the empirical study. In our example, in the final comparison of the values for the six basic emotions, the system will accept the emotion of happiness as the most probable to have occurred, its combined value being $\frac{em_1^{e_1} + em_2^{e_1}}{2} = \frac{0.4 + 0.45}{2} = 0.425$.
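Running the sketches above on this example reproduces these numbers (the vectors are the happiness weights of Tables 1 and 2; only k1, m3 and m6 fire):

happiness_kbd = [0.4, 0.4, 0.1, 0.05, 0.05, 0.0]             # k1..k6
happiness_mic = [0.06, 0.18, 0.15, 0.02, 0.14, 0.3, 0.15]    # m1..m7
fired_kbd = [True, False, False, False, False, False]        # k1 fired
fired_mic = [False, False, True, False, False, True, False]  # m3 and m6 fired

em1 = sum(w * int(c) for w, c in zip(happiness_kbd, fired_kbd))
em2 = sum(w * int(c) for w, c in zip(happiness_mic, fired_mic))
print(round(em1, 3), round(em2, 3), round((em1 + em2) / 2, 3))  # 0.4 0.45 0.425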
6 IMPLEMENTATION
In the elaboration phase, during implementation, the overall functionality and emotion recognition features of Edu-Affe-Mikey are implemented. The architecture of Edu-Affe-Mikey consists of the main educational application with the presentation of theory and tests, a programmable human-like animated agent, a monitoring user modeling component and a database.
Figure 1: A screen-shot of theory presentation in Edu-
Affe-Mikey educational application.
While using the educational application from a desktop computer, students are taught a particular medical course. The information is given in text form while at the same time the animated agent reads it out loud using a speech engine. The student can choose a specific part of the human body and all the available information is retrieved from the system's database. In particular, the main application is installed either on a public computer where all students have access, or alternatively each student may have a copy on his/her own personal computer. An example of using the main application is illustrated in figure 1. The animated agent is present in these modes to make the interaction more human-like.
While the users interact with the main educational application, and for the needs of emotion recognition, a monitoring component records the actions of users from the keyboard and the microphone. These actions are then processed in conjunction with the multi-criteria model and interpreted in terms of emotions. The basic function of the monitoring component is to capture all the data inserted by the user, either orally or by using the keyboard and the mouse of the computer. The data are recorded in a database and the results are returned to the basic application the user interacts with. Figure 2 illustrates the "monitoring" component that records the user's input and the exact time of each event.
Figure 2: Snapshot of operation of the user modeling
component.
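The basic logging behaviour described above could be sketched as follows (our own illustration; the names are hypothetical, and the real component additionally classifies the log into the k1..k6 and m1..m7 actions of Section 4.1):

import time
from dataclasses import dataclass, field

@dataclass
class MonitoredEvent:
    mode: str         # "keyboard", "mouse" or "microphone"
    raw_input: str    # key pressed, mouse action or recognised utterance
    timestamp: float  # exact time of the event

@dataclass
class MonitoringComponent:
    events: list = field(default_factory=list)

    def record(self, mode: str, raw_input: str) -> None:
        """Store every user event together with its timestamp."""
        self.events.append(MonitoredEvent(mode, raw_input, time.time()))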
Instructors also have the ability to manipulate the agent's behaviour with regard to the agent's on-screen movements and gestures, as well as speech attributes such as speed, volume and pitch. Instructors may programmatically interfere with the agent's behaviour and the agent's reactions regarding approval or disapproval of a user's specific actions. This adaptation aims at enhancing the "affectiveness" of the whole interaction. Therefore, the system is enriched with an agent capable of expressing emotions and, as a result, encourages the user to interact with more noticeable evidence in his/her behaviour.
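As an illustration only (the paper does not describe Edu-Affe-Mikey's actual agent programming interface, so every name here is hypothetical), the instructor-adjustable settings could be grouped as follows:

from dataclasses import dataclass, field

@dataclass
class AgentProfile:
    speech_speed: float = 1.0    # relative speaking rate
    speech_volume: float = 0.8   # 0..1
    speech_pitch: float = 1.0    # relative pitch
    # Mapping from a user's action to the agent's reaction/gesture.
    reactions: dict = field(default_factory=dict)

instructor_profile = AgentProfile(
    speech_speed=0.9,
    reactions={"correct_answer": "applaud", "wrong_answer": "encourage"},
)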
7 TESTING
In the construction phase, during the procedural step of testing, the final version of the system is evaluated.
When a user interface incorporates a decision
making theory, the evaluation phase plays an
important role for showing whether the particular
theory is effective or not. In MBIUI life-cycle
framework it is considered important to conduct the
evaluation of a decision making model by
comparing the IUI’s reasoning with that of real
users. Therefore, in this experiment it is important to
evaluate how successful the application of the
decision making model is in selecting the alternative
action that the human experts would propose in the
case of a user’s error. For this reason, it has to be
checked whether the alternative actions that are
proposed by the human experts are also highly
ranked by the application of the decision making
model. In case this comparison reveals that the
decision making model is not adequate, another
iteration of the life-cycle has to take place and
another decision model should be selected. This
iteration continues until the evaluation phase gives
satisfactory results.
In view of the above, an evaluation study was
conducted. Therefore, the 50 medical students that
were involved in the empirical study during
requirements capture were also involved in the
evaluation of the multi-criteria emotion recognition
system. More specifically, they were asked to
interact with the educational software and the whole
interaction was video recorded. The protocols
collected were presented to the same users in order
to perform emotion recognition for themselves with
regard to the six emotional states, namely happiness,
sadness, surprise, anger, disgust and the neutral
emotional state.
The students, as observers, were asked to justify the recognition of an emotion by indicating the criteria that they had used in terms of the audio mode and keyboard actions. Whenever a participant recognized an emotional state, the emotion was marked and stored as data in the system's database. Finally, after the completion of the empirical study, the data were compared with the system's corresponding hypothesis in each case where an emotion was detected.
was detected. Table 3 illustrates the percentages of
successful emotion recognition of each mode after
the incorporation of stereotypic weights and the
combination through the multi-criteria approach.
Table 3: Recognition of emotions using stereotypes and SAW theory.

Using Stereotypes and SAW
Emotion     Audio mode    Recognition        Multi-criteria
            recognition   through keyboard   bi-modal recognition
Neutral     17%           32%                46%
Happiness   52%           39%                64%
Sadness     65%           34%                70%
Surprise    44%           8%                 45%
Anger       68%           42%                70%
Disgust     61%           12%                58%
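The percentages in Table 3 are per-emotion success rates, i.e. the fraction of episodes in which a hypothesis matched the emotion the user labelled for himself/herself. A sketch of this computation (ours; the episode data shown is illustrative, not the study's):

from collections import defaultdict

def success_rates(episodes):
    """episodes: iterable of (user_labelled_emotion, system_hypothesis)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for labelled, hypothesis in episodes:
        totals[labelled] += 1
        hits[labelled] += int(hypothesis == labelled)
    return {emotion: hits[emotion] / totals[emotion] for emotion in totals}

print(success_rates([("anger", "anger"), ("anger", "sadness"),
                     ("happiness", "happiness")]))
# {'anger': 0.5, 'happiness': 1.0}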
Given the correct emotions for each situation, identified by the users themselves, we were able to come to conclusions about the efficiency of our system's emotion recognition ability. However, one may notice that there are a few cases where the proposed approach performed slightly worse in recognizing an emotional state (e.g. the neutral emotional state). A possible reason is the fact that some emotional states, e.g. the neutral emotional state, give little evidence to certain modes (e.g. the keyboard mode). For the same emotions, other modalities (e.g. the visual mode) may give us significantly better evidence. However, the combination of the two modes in the multi-criteria model increases significantly the accuracy of the proposed approach.
Although the success rates may at first glance look lower than expected, we should underline the fact that emotion recognition is a very difficult task even for humans, whose success rates are also quite low. Therefore, the results of the evaluation study offer evidence for the adequacy of the multi-criteria multi-modal model for emotion recognition.
8 CONCLUSIONS
In this paper, we have used a general framework,
which provides detailed guidelines for the
application of a multi-criteria decision making
theory in an affective bi-modal Intelligent User
Interface. This framework is called MBIUI life-cycle
framework and was initially designed for
incorporating a decision making theory in a user
interface that helps users during their interaction
with a file-store system.
In this paper we aimed at checking the
effectiveness of this framework in order to apply a
simple multi-criteria decision making theory in an
adaptive bi-modal user interface. Indeed, the MBIUI
life-cycle framework facilitates the application of
the multi-criteria decision making theory by
providing detailed guidelines for all experimental
studies during requirements capture and analysis as
well as testing.
The user interface presented in this paper is used
for providing medical education to first-year medical
students and is called Edu-Affe-Mikey. In this
system, the multi-criteria decision making theory,
SAW, is used for combining evidence from two
different modes in order to identify the users’
emotions. SAW is used for evaluating different
emotions, taking into account the input of the two
different modes and selecting the one that seems
more likely to have been felt by the user. The particular user interface offers affective bi-modal interaction and for this reason differs from other user interfaces. The fact that the particular framework can be used in interfaces that differ in many ways strengthens MBIUI's generality.
The affective bi-modal user interface has been evaluated and the results demonstrate the effectiveness of the multi-criteria decision making theory for combining evidence from two different modes and performing emotion recognition.
ACKNOWLEDGEMENTS
Support for this work was provided by the General
Secretariat of Research and Technology, Greece,
under the auspices of the PENED-2003 program.
REFERENCES
Bernon, C., Gleizes, M.P., Peyruqueou, S., Picard, G.,
2003. ADELFE: A methodology for adaptive multi-
agent systems engineering, Engineering Societies in
the Agents World III, Lecture Notes in Artificial
Intelligence, Vol. 2577, pp. 156-169.
Bohnenberger, T., Jacobs, O., Jameson, A., Aslan, I., 2005.
Decision-Theoretic Planning Meets User
Requirements: Enhancements and Studies of an
Intelligent Shopping Guide, in H. Gellersen, R. Want,
& A. Schmidt (Eds.), Pervasive computing: Third
international conference, Berlin: Springer, 279–296.
Fishburn, P.C. 1967. Additive Utilities with Incomplete
Product Set: Applications to Priorities and
Assignments, Operations Research.
Hull, M.E.C., Taylor, P.S., Hanna, J.R.P., Millar, R.J., 2002. Software development processes - an assessment, Information and Software Technology,
vol. 44, pp. 1-12.
Hwang C.L., Yoon, K., 1981. Multiple Attribute Decision
Making: Methods and Applications, Lecture Notes in
Economics and Mathematical Systems, vol. 186.
Jacobson, I., Booch, G., Rumbaugh, J., 1999. The Unified Software Development Process, Addison-Wesley, Reading, MA.
Kabassi, K., Virvou, M., 2006. A Knowledge-based
Software Life-Cycle Framework for the incorporation
of Multi-Criteria Analysis in Intelligent User
Interfaces. IEEE Transactions on Knowledge and
Data Engineering, vol. 18, No. 9, pp. 1-13.
Kass, R., Finin, T. 1989. The role of User Models in
Cooperative Interactive Systems, International
Journal of Intelligent Systems, vol. 4, pp. 81-112.
Kudenko, D., Bauer, M., Dengler, D., 2003. Group Decision Making Through Mediated Discussions, Proceedings of the 9th International Conference on User Modelling.
Naumann, F., 1998. Data Fusion and Data Quality,
Proceedings of the New Techniques and Technologies
for Statistics.
Rich, E., 1989. Stereotypes and User Modeling. In: Kobsa,
A., Wahlster, W., (Eds.) User Models in Dialog
Systems, pp. 199-214.
Rich, E., 1999. Users are individuals: individualizing user
models. International Journal of Human-Computer
Studies vol. 51, pp. 323-338.
Schütz, W., Schäfer, R., 2001. Bayesian networks for
estimating the user's interests in the context of a
configuration task, in R. Schäfer, M. E. Müller, and S.
A. Macskassy (eds.) Proceedings of the UM2001
Workshop on Machine Learning for User Modeling,
pp. 23-36.