A METHODOLOGY FOR CREATING INTELLIGENT
WHEELCHAIR USERS’ PROFILES
Brígida Mónica Faria 1,2,5, Sérgio Vasconcelos 3,5, Luís Paulo Reis 4,5 and Nuno Lau 2
1 Escola Superior de Tecnologia da Saúde do Porto, Instituto Politécnico do Porto, Vila Nova de Gaia, Portugal
2 Dep. Elect., Telecomunicações e Informática (DETI/UA), Inst. Eng. Electrónica e Telemática de Aveiro, Universidade de Aveiro, Aveiro, Portugal
3 Dep. Eng. Informática, Faculdade de Engenharia, Universidade do Porto, Porto, Portugal
4 Dep. Sistemas de Informação, Escola de Engenharia, Universidade do Minho, Guimarães, Portugal
5 Laboratório de Inteligência Artificial e Ciência de Computadores, Universidade do Porto, Porto, Portugal
Keywords: Intelligent wheelchair, Users' profiles, Adaptive interface.
Abstract: The Intelligent Wheelchair (IW) is a new concept aimed at giving greater autonomy to people with reduced mobility, such as disabled or elderly individuals. Some of the more recent IWs have a multimodal interface, enabling multiple command modes such as joystick, voice commands, head movements, or even facial expressions. In these IWs it may be very useful to provide the user with the best way of driving it through an adaptive interface. This paper describes the foundations of a simple methodology for extracting user profiles, which can be used to adequately select the best IW command mode for each user. The methodology is based on an interactive wizard composed of a flexible set of simple tasks presented to the user, and on a method for extracting and analyzing the user's execution of those tasks. The results achieved show that it is possible to extract simple user profiles using the proposed method. Thus, the approach may be further used to extract more complete user profiles, simply by extending the set of tasks used, enabling the adaptation of the IW interface to each user's characteristics.
1 INTRODUCTION
The fraction of the population with physical disabilities has gained relevance and has attracted the attention of international health care organizations, universities and companies interested in developing and adapting new products. The current trend reflects the demand for more health and rehabilitation services, so that elderly and handicapped individuals can become increasingly independent in performing everyday tasks.
Regardless of age, mobility is a fundamental characteristic for every human being. Children with disabilities are very often deprived of important opportunities and face serious disadvantages compared to other children. Adults who lose their independent means of locomotion become less self-sufficient, which can foster negative attitudes towards them. The loss of mobility creates obstacles that limit personal and vocational objectives (Simpson, 2005). Therefore, it is necessary to develop technologies that can aid this population group, ensuring the comfort and independence of elderly and handicapped people. Wheelchairs are important locomotion devices for these individuals.
There is a growing demand for safer and more
comfortable wheelchairs, and therefore, a new
Intelligent Wheelchair (IW) concept was introduced.
However, most of the Intelligent Wheelchairs developed by different research laboratories (Simpson, 2005) have hardware and software architectures that are highly specific to the wheelchair model or project in question, and they are typically very difficult to configure before the user can start using them.
The rest of the paper is organized as follows. Section 2 presents the state of the art on intelligent wheelchairs. Section 3 contains a description of the user interfaces already developed and how the interface is integrated in our work. Section 4 presents the implementation and methodology for creating an intelligent wheelchair user interface. The experiments and the results achieved are presented in Section 5. Finally, conclusions and future work are described in the last section.
2 INTELLIGENT WHEELCHAIRS
In recent years, several prototypes of Intelligent Wheelchairs (Figure 1) have been developed and much scientific work has been published in this area (Braga et al., 2009) (Reis et al., 2010). Simpson (Simpson, 2005) provides a comprehensive review of IW projects, with descriptions of several intelligent wheelchairs. The main characteristics of an IW are (Braga et al., 2009) (Jia et al., 2007): interaction with the user through distinct types of devices such as joysticks, voice interaction, vision and other sensor-based controls like pressure sensors; autonomous navigation with safety, flexibility and obstacle avoidance capabilities; and communication with other devices, such as automatic doors and other wheelchairs.
The first project of an autonomous wheelchair for the physically handicapped was proposed by Madarasz in 1986 (Madarasz et al., 1986). It was planned as a wheelchair with a microcomputer, a digital camera and an ultrasound scanner, with the objective of developing a vehicle that could move around in populated environments without human intervention. Hoyer and Hölper (Hoyer and Hölper, 1993) presented a modular control architecture for an omni-directional wheelchair. The characteristics of the NavChair (1996), such as the ability to follow walls and avoid obstacles, are described in (Simpson, 1998) (Bell et al., 1994) (Levine et al., 1999). Miller and Slack (Miller and Slack, 1995) (Miller, 1998) proposed the Tin Man I system with three operation modes: an individual driving the wheelchair with automatic obstacle avoidance; moving along a track; and moving to a point (x,y). This chair evolved into Tin Man II, which included advanced features such as storing travel information, returning to the starting point, following walls, passing through doors and recharging the battery. Wellman (Wellman et al., 1994) proposed a
hybrid wheelchair equipped with two extra legs in
addition to its four wheels, to allow stair climbing
and movement on rough terrain. FRIEND is a robot
for rehabilitation which consists of a motorized
wheelchair and a MANUS manipulator (Borgerding
et al., 1999) (Volosyak et al., 2005). In this case,
both the vehicle and the manipulator are controlled
by voice commands. Some projects present solutions
for quadriplegic individuals, in which facial expression recognition is used to control the wheelchair (Jia et al., 2007) (Ng and De Silva, 2001) (Adachi et al., 1998). In 2002, Pruski presented VAHM, a user-adapted intelligent wheelchair (Pruski et al., 2002).
Figure 1: Several prototypes of Intelligent Wheelchairs.
Satoh and Sakaue (Satoh and Sakaue, 2007)
presented an omni-directional stereo vision-based
IW which detects both the potential hazards in a
moving environment and the postures and gestures
of a user, using a stereo omni-directional system,
which is capable of acquiring omni-directional color
image sequences and range data simultaneously in
real time. In 2008, Spletzer and colleagues studied the performance of LIDAR-based localization for docking an IW system (Gao et al., 2008), and in 2009 Horn and Kreutner (Horn and Kreutner, 2009) showed how odometric, ultrasound, and vision sensors can be used in a complementary way to locate the wheelchair in a known environment. In fact, research on IWs has seen many developments in the last few years. Some IW prototypes are controlled by "thought". This type of technology uses sensors that pick up the electromagnetic activity of the brain (Hamagami and Hirata, 2004) (Lakany, 2005) (Rebsamen et al., 2006).
2.1 IntellWheels Project
This section presents a brief overview of the
Intelligent Wheelchair project that is being
developed at the Faculty of Engineering of the
University of Porto (FEUP) in collaboration with
INESC-P and the University of Aveiro. The main results of this project that have already been published are also presented. The main objective of
the IntellWheels Project is to develop an intelligent
wheelchair platform that may be easily adapted to
any commercial wheelchair and aid any person with
special mobility needs. Initially, an evaluation of
distinct motorized commercial wheelchair platforms
was carried out and a first prototype was developed
in order to test the concept. The first prototype was
focused on the development of the modules that
provide the interface with the motorized wheelchair
electronics using a portable computer and other
sensors.
Figure 2: Generic gamepad, headset with microphone and
Nintendo Wii Remote.
Several different modules have been developed
in order to allow different ways of conveying
commands to the IW. These include, for example,
joystick control with USB, voice commands, control
with head movements and gestures, and facial
expression recognition (Faria et al., 2007). Figure 2 shows the three command devices already available in the IntellWheels IW. The project research team took into account the difficulty that some patients have in controlling a wheelchair with traditional commands such as the conventional joystick.
Therefore, new ways of interaction between the
wheelchair and the user have been integrated,
creating a system of multiple entries based on a
multimodal interface. The system allows users to
choose which type of command best fits their needs,
increasing the level of comfort and safety.
Another possibility enabled by a system of multiple inputs is software for intelligent input control. This application has the task of determining the confidence level of each input, or even cancelling inputs if it detects conflicts or noise in the surrounding environment. For example, in a very dark or very bright room, where the patient's face is not fully recognized, the intelligent input control would decrease the degree of confidence of the commands sent by facial expression recognition and would give greater importance to the joystick, voice and/or head movement based commands.
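As an illustration, the following sketch (in Python, with names of our own choosing rather than those of the IntellWheels code) shows how such an input controller could down-weight or cancel a modality whose recognition confidence drops:

from dataclasses import dataclass

@dataclass
class ModalityReading:
    name: str          # e.g. "joystick", "voice", "head", "face"
    command: str       # command proposed by this modality
    confidence: float  # recognition confidence in [0, 1]

def arbitrate(readings, min_confidence=0.5):
    """Return the command of the most trusted modality, or None when every
    modality falls below the minimum confidence (conflict/noise case)."""
    usable = [r for r in readings if r.confidence >= min_confidence]
    if not usable:
        return None  # cancel the inputs: no modality is trustworthy enough
    return max(usable, key=lambda r: r.confidence).command

# Example: face recognition degraded by poor lighting, so the joystick wins.
print(arbitrate([
    ModalityReading("face", "turn left", 0.30),
    ModalityReading("joystick", "go forward", 0.95),
]))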
3 USER PROFILES
In addition to the wide variety of disabilities that
cause different types of limited mobility, each
person has specific characteristics, which may be
related to physical and cognitive factors. Thus,
individuals with similar symptoms may have
significant differences. It is fair to say that characteristics such as the ability to move the head, to pronounce words, or to move the hand and fingers can vary substantially from individual to individual. Similarly, learning time and proficiency in using assistive devices may also vary greatly.
3.1 User Interfaces
The interface between a human and a computer is
called a user interface and it is a very important part
of any computerized system. Moreover, an adaptive
user interface (Langley, 1999) is a software entity
that improves its ability to interact with a user by
constructing a user model based on past experience
with that user. The emerging area of adaptive and
intelligent user interfaces has been exploring
applications in which these paradigms are useful and facilitate human-machine communication (Ross,
2000). In fact, if an intelligent user interface has a
model of the user, this user model can be used to
automatically adapt the interface. Additionally,
adaptive user interfaces may use machine learning
techniques to improve the interaction with individuals, so that users reach their goals more easily, faster and with a higher level of satisfaction. It is also essential for an adaptive
interface to obtain knowledge included in four
distinct domains: knowledge of the user; knowledge
of the interaction (modalities of interaction and
dialogue management); knowledge of the
task/domain; and knowledge of the system
characteristics (Norcio and Stanley, 1989).
3.2 Adaptive Interfaces
Ross (Ross, 2000) presented a comprehensive
classification of adaptive/intelligent interfaces. His
classification contains three main classes. The first
class involves the addition of adaptation to an
existing direct manipulation interface. Examples of
this class are adding extra interface objects in order
to hold the predicted future commands or designing
an interface with multiple commands. The second class is composed of interfaces acting as an intermediary between the user and the direct manipulation interface, by filtering information or generating suggested data values. The third class is composed of agent interfaces, in which autonomous agents (Maes, 1996) (Wooldridge, 2002) can provide pro-active support to the user, typically by making suggestions and giving advice.
It is also mentioned that many intelligent interfaces
can be viewed as adaptive user interfaces, because
they change their behaviour to adapt to an individual
or task (Ross, 2000). Another taxonomy, proposed by Langley (Langley, 1997) for adaptive user interfaces (AUI), is based on separating them into two groups: Informative Interfaces and Generative Interfaces. The first class selects information for the user and presents the items he will find interesting or practical. The second class tries to generate a useful knowledge structure, as in spreadsheets, document preparation or drawing packages.
Also, in the literature, another class of adaptive
interfaces is presented and studied. This class is
designated as Programming by Demonstration
(Cypher and Halbert, 1994) (Ross, 2000). This class is distinct from the previous one, since generative interfaces produce data values whereas programming by demonstration systems produce commands with arguments.
3.3 Intellwheels User Profile
Tracing a user diagnostic can be very useful for adjusting certain settings, allowing an optimized configuration and improved interaction between the user and the multimodal interface.
Accordingly, the Intellwheels Multimodal Interface should contain a module capable of performing a series of training sessions, composed of small tests for each input modality. These tests may consist, for example, of asking the user to press a certain sequence of buttons on the gamepad, or to move one of the gamepad's joysticks to a certain position. Another test may consist of asking the user to pronounce a set of voice commands, or to perform a specific head movement. Figure 3 shows where the user needs to click to start the User Profiler module.
Figure 3: Starting the user's profile module.
The tests should be performed sequentially and should have increasing difficulty. Additionally, the tests should be reconfigurable and extensible. Finally, the test sets and their results should be saved in a database accessible to the Intellwheels Multimodal Interface. The following user characteristics should therefore be extracted. These characteristics are separated into two different types:
quantitative and qualitative. The quantitative measures consist of: the time taken to perform a full button sequence; the average time between pressing two buttons; the average time to place a gamepad analog stick in a certain position; the average time to place the head in a certain position; the trust level of speech recognition; the maximum amplitude achieved with the gamepad analog sticks in different directions; the maximum amplitude achieved with the head in different directions; and the number of errors made using the gamepad. Using the quantitative measures, the following qualitative measures should be estimated: the user's ability to use the gamepad buttons; the user's ability to perform head movements; and the user's ability to pronounce voice commands.
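As an illustration, these measures could be grouped into a structure similar to the following sketch (the field names and thresholds are our own assumptions, not the IntellWheels schema):

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class UserProfile:
    # quantitative measures
    full_sequence_time: float        # s, time to perform a full button sequence
    avg_time_between_buttons: float  # s
    avg_time_stick_position: float   # s, to place the analog stick on a target
    avg_time_head_position: float    # s, to place the head on a target
    speech_trust_level: float        # %, reported by the recognition engine
    max_stick_amplitude: Dict[str, float] = field(default_factory=dict)  # per direction
    max_head_amplitude: Dict[str, float] = field(default_factory=dict)   # per direction
    gamepad_errors: int = 0

    # qualitative estimates derived from the quantitative measures
    def abilities(self, max_errors=2, max_head_time=3.0, min_trust=90.0):
        """Very rough qualitative labels; the thresholds are illustrative."""
        return {
            "gamepad_buttons": "good" if self.gamepad_errors <= max_errors else "weak",
            "head_movements": "good" if self.avg_time_head_position <= max_head_time else "weak",
            "voice_commands": "good" if self.speech_trust_level >= min_trust else "weak",
        }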
At the end of the training session, the tracked user information should be saved to an external database containing the users' profiles. The user profile can be used to improve security by defining, for each user, a global trust level for each input modality. The trust level can be used to advise the user on which modality to use when creating a new association. Also, it could be useful to activate confirmation events whenever a user requests a certain output action using an input modality with a low trust level.
4 IMPLEMENTATION
This section presents the implementation of the proposed User Profile feature. Firstly, it explains the approach followed to specify which test sets are to be loaded by the module responsible for tracking the user's profile. Secondly, we show the simple profiling methods that were implemented to
support a future user classification. Next, it is explained how the extracted information is used to adjust certain settings of the interface. Finally, it demonstrates how the profile is stored for future use.
4.1 Definition of the Sets
To perform the measures described in the previous section, a simple XML grammar was defined. It defines four distinct configurable test types: sequences of gamepad buttons; voice commands; positions for the gamepad joysticks; and positions for the head.
Example of XML containing user profile test set:
<INTELLWHEELS_PROFILER>
  <BINARY_JOYSTICK>
    <item>
      <sequence>joystick.1 joystick.2</sequence>
      <difficulty>easy</difficulty>
    </item>
  </BINARY_JOYSTICK>
  <ANALOG_JOYSTICK>
    (…)
  <ANALOG_WIIMOTE>
    <item>
      <x>100</x>
      <y>0</y>
    </item>
  </ANALOG_WIIMOTE>
  <SPEECH>
    <item>go forward</item>
    <item>turn right</item>
    <item>create new sequence</item>
    <item>stop</item>
  </SPEECH>
</INTELLWHEELS_PROFILER>
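As an illustration, a test set written in this grammar could be loaded as in the following sketch (which assumes only the element names shown in the example above):

import xml.etree.ElementTree as ET

def load_test_set(path):
    root = ET.parse(path).getroot()  # <INTELLWHEELS_PROFILER>
    tests = {"button_sequences": [], "speech": [], "head_targets": []}

    for item in root.findall("./BINARY_JOYSTICK/item"):
        tests["button_sequences"].append({
            "sequence": item.findtext("sequence", "").split(),
            "difficulty": item.findtext("difficulty", "easy"),
        })

    for item in root.findall("./SPEECH/item"):
        tests["speech"].append(item.text)

    for item in root.findall("./ANALOG_WIIMOTE/item"):
        tests["head_targets"].append(
            (float(item.findtext("x", "0")), float(item.findtext("y", "0"))))

    return tests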
The proposed XML grammar makes it possible
for an external operator to configure a test set that
they may find appropriate for a specific context or
user. When a user starts the training session, the four
different types of tests are iterated. In order to attain
a consistent classification of the user, the defined
grammar should be sufficiently extensive. The test
set present in the XML file is iteratively shown to the user. It starts by asking the user to perform the gamepad button sequences, as can be observed in Figure 4.
When the user ends the first component of the user profiler module, the navigation assistant asks the user to pronounce the voice commands stored in the XML. Also, the quantitative results for the gamepad buttons test are presented.
Figure 4: User profiler gamepad and voice tests.
The last part of the user profiler test is shown in Figure 5. The user is invited to place the gamepad's joystick into certain positions. A similar approach is used for the head movements test.
Figure 5: User profiler joystick test.
To define the user's proficiency in using the gamepad buttons, a very simple method was implemented. Each sequence defined in the grammar should have an associated difficulty level (easy, medium or hard). The difficulty of a sequence may be related to its length and to the physical distance between the buttons on the gamepad. Since the layout of a generic gamepad may change depending on the model, deciding whether a sequence is of easy, medium or hard difficulty is left to the operator.
When the user completes the gamepad sequences training part, an error rate is calculated for each of the difficulty levels. If these rates are higher than a configurable maximum acceptable value, the user classification in this item is immediately defined.
This classification is then used to turn on the
security feature, which is characterized by a
confirmation event performed by the navigation
assistant. For a grammar with 5 sequences of
difficulty type easy, the maximum number of
accepted errors would be 1. If the user fails more
than one sequence, the confirmation event is
triggered for any input sequence, of any difficulty
type, and the gamepad training session is terminated.
If the error rate for the easy type does not exceed 20% (=1/5), the training with the sub-set composed of the medium-difficulty sequences is initiated. At the
end, a similar method is applied. If the error rate for
the medium level is higher than 30%, the
confirmation is triggered for the medium and hard
levels of difficulty, and the training session is
terminated. Finally, if the user makes it to the last
level of difficulty, the training for the hard
sequences sub-set is started. If the error rate is
higher than 50%, the confirmation event is triggered
only for sequences with a hard difficulty level. The
best scenario takes place when the user stays within the maximum accepted error rates for all the difficulty levels. In this situation, the confirmation event is turned off, and an output request is immediately triggered for any kind of input sequence composed only of gamepad buttons.
Defining the ideal maximum acceptable error
rates is not easy. With this in mind, we made it
possible to also configure these values in the XML
grammar.
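The escalation rule described above can be summarised in the following sketch (the function name is ours; the thresholds are the values given in the text and remain configurable in the XML grammar):

LEVELS = ["easy", "medium", "hard"]
MAX_ERROR_RATE = {"easy": 0.20, "medium": 0.30, "hard": 0.50}

def gamepad_training(errors, totals, max_error_rate=MAX_ERROR_RATE):
    """Return the set of difficulty levels for which confirmation is required.
    errors / totals: dicts mapping level -> failed / total sequences."""
    for i, level in enumerate(LEVELS):
        rate = errors[level] / totals[level]
        if rate > max_error_rate[level]:
            # confirmation is triggered for this level and all harder ones,
            # and the gamepad training session terminates here
            return set(LEVELS[i:])
    return set()  # best case: confirmation turned off for gamepad sequences

# Example: 2 failures out of 5 easy sequences (40% > 20%) -> confirm everything.
print(gamepad_training({"easy": 2, "medium": 0, "hard": 0},
                       {"easy": 5, "medium": 5, "hard": 5}))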
The joystick phase of the training session can be
used to calculate the maximum amplitude achieved
by the user. This value can then be used to
parameterize the maximum speed value. For example, for a user who can only push the joystick to 50% of its maximum amplitude, the speed can be calculated by multiplying the axis value by two. This feature has not yet been implemented; however, all the background preparation needed to implement it has been made, and it is left for future work.
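A possible sketch of this speed parameterization (the names are illustrative; as stated above, the feature itself is left for future work):

def scaled_speed(axis_value, max_amplitude, max_speed=1.0):
    """Map a raw axis value to a speed command, given the maximum amplitude
    the user reached during training (both as fractions of the full range)."""
    if max_amplitude <= 0:
        return 0.0
    gain = 1.0 / max_amplitude  # e.g. 2.0 for a user reaching only 50%
    return max(-max_speed, min(max_speed, axis_value * gain * max_speed))

print(scaled_speed(0.5, 0.5))  # user at their personal maximum -> full speed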
The speech component of the training session
was used to define the recognition trust level for
each of the voice commands. The trust level is a
percentage value retrieved by the speech recognition
engine. This value is used to set the minimum
recognition level for the recognition module.
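A possible sketch of mapping the measured trust levels to minimum recognition levels (the margin and floor values are illustrative assumptions, not project parameters):

def minimum_recognition_levels(trust_levels, margin=5.0, floor=50.0):
    """trust_levels: dict command -> trust level (%) measured during training.
    The threshold is set slightly below the measured level, never under a floor."""
    return {cmd: max(floor, level - margin) for cmd, level in trust_levels.items()}

print(minimum_recognition_levels({"go forward": 95.4, "stop": 92.7}))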
Finally, the head movement phase of the training
session has a similar purpose to the joystick's phase.
Additionally, the maximum amplitude for each
direction can be used to determine the range that will
trigger each one of the leaning inputs of the head
gestures recognition.
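A possible sketch of deriving the lean trigger ranges from the maximum head amplitudes measured per direction (the fraction used is an illustrative assumption):

def lean_thresholds(max_head_amplitude, fraction=0.6):
    """max_head_amplitude: dict direction -> maximum amplitude reached during
    training. Returns the amplitude beyond which each lean input triggers."""
    return {direction: fraction * amplitude
            for direction, amplitude in max_head_amplitude.items()}

print(lean_thresholds({"left": 30.0, "right": 25.0, "forward": 20.0}))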
5 EXPERIMENTS AND RESULTS
The main objective of the experimental work was to make a preliminary study of the tasks that can be implemented and of the individuals' responses, in order to get information for user profiling. The experiments involved 33 volunteers, with a mean age of 24 (standard deviation 4.2) and without any movement restrictions.
The first experiment consisted in performing the
sequence tasks with several levels of difficulty. In
the first sequence the users needed to push the
gamepad buttons GP1 - GP2 (easy level of
difficulty); the second sequence was GP3 - GP8
(easy level of difficulty); the third sequence was
GP5 - GP8 - GP9 (medium level of difficulty) and
the last sequence was GP6 - GP1 - GP7 - GP4 - GP2
(hard level of difficulty). For the experiments with
voice commands the individuals had to pronounce
the sentences: “Go forward”; “Go back”; “Turn
right”; “Turn left”; “Right spin”; “Left spin” and “Stop”, in order to obtain the recognition trust level for each voice command.
Figure 6: User profiler joystick tests.
The last two experiments involved the precision of the gamepad's joystick and of the head movements. The volunteers had to move the small dot into the bigger one with the gamepad's joystick and with the Wiimote controller. Figure 6 shows some of the tasks requested. The target positions were right; up; down; northeast; northwest; southeast; and a sequence northeast - northwest - southeast without going back to the initial position in the center of the target.
In general, the achieved results show the good
performance of the individuals using gamepad and
voice commands. The behaviour with head movements shows more asymmetric and heterogeneous results, since several moderate and severe outliers exist in the time results. The time
consumed to perform the sequences confirmed the
complexity of the tasks as can be seen in Figure 7. In
terms of the average time between buttons (Figure 8), it is interesting to notice the results for the last sequence. Although it is more complex and longer, it has a positively skewed distribution. This probably reveals that training may improve the user's performance.
Figure 7: Time to perform the sequences.
Figure 8: Average time between gamepad buttons.
In terms of errors, the third sequence presents the highest number of trials with at least one failure. The last sequence presented a case in which 12 errors were committed.
Table 1: Contingency table with the errors of sequences.
                  Number of Errors
Seq    0    1    2    3    4    5    6    12
 1    30    1    2    0    0    0    0     0
 2    31    2    0    0    0    0    0     0
 3    20    7    3    1    1    0    1     0
 4    27    1    1    1    0    2    0     1
Table 2 presents several descriptive statistics,
such as central tendency (mean, median) and
dispersion (standard deviation, minimum and
maximum), for the trust level of speech recognition.
Table 2: Descriptive statistics for the trust level of speech recognition.

Sentence           Mean    Median   S. Dev   Min     Max
“Go Forward”       95.36   95.50    0.51     93.9    95.9
“Go Back”          94.37   95.00    2.44     82.2    95.9
“Turn Right”       95.31   95.40    0.42     94.4    95.9
“Turn Left”        94.76   95.20    1.42     88.4    95.8
“Left Spin”        93.69   94.90    2.88     83.1    95.8
“Right Spin”       94.82   95.00    1.25     89.7    97.2
“Stop”             92.67   94.30    3.85     82.2    95.8
Total Sentences    94.43   94.99    1.08     92.24   95.93
The speech recognition achieved very good results. In fact, the lowest minimum was 82.2, for the sentences “Go Back” and “Stop”. The expression “Go Forward” has the highest mean and median. The sentence “Stop” is the most heterogeneous, since it has the highest standard deviation (3.85).
The paired samples t-test was applied with a significance level of 0.05 to compare the mean times obtained with the joystick and with head movements. The null hypothesis was that the mean times to perform the target tasks with the joystick and with head movements are equal; the alternative hypothesis is that they are different. The achieved power was 0.80, with an effect size of 0.5. Table 3 contains the p values of the paired sample t-tests and the 95% confidence intervals of the difference. Observing the results for the positions Down and Northwest, there is statistical evidence that the mean time with the joystick differs from the mean time with head movements. This reveals a different performance when using the joystick and the head movements in the same experiment.
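For reference, the test reported above corresponds to the following sketch using SciPy (the data arrays are placeholders, not the measured times of the 33 volunteers):

import numpy as np
from scipy.stats import ttest_rel

# placeholder data; in the study these are the per-subject times for one target
joystick_times = np.array([2.1, 1.8, 2.5, 3.0, 2.2])
head_times = np.array([3.4, 2.9, 4.1, 5.2, 3.8])

t_stat, p_value = ttest_rel(joystick_times, head_times)
if p_value < 0.05:
    print("reject H0: the mean times with joystick and head movements differ")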
Cluster analysis is a technique that can be used to obtain information about similar groups. In the future, this can be used to extract characteristics for classification and for users' profiling. The results obtained by hierarchical clustering, using the nearest neighbour method and the squared Euclidean distance, show a similar performance for all subjects except one individual.
Table 3: Confidence intervals of the difference and p
values.
95% Confidence Interval of
the difference
Move the red dot to:
Lower Upper P
value
Right -2.29 0.67 0.273
Up -1.38 0.08 0.080
Down -9.67 -1.87 0.005
*
Northeast -2.89 0.66 0.211
Northwest -2.74 -0.17 0.028
*
Southeast -6.26 1.00 0.150
Northeast - Northwest -
Southeast
-5.32 0.37 0.085
In this case, using the R-square criterion, the number of clusters necessary to retain 80% of the total variability is 12. Since the sample of volunteers was drawn from the same population, this kind of result is natural. The next step will therefore consist in obtaining information about handicapped people. In fact, if clusters of subjects can be defined, it will be interesting to work with supervised classification in which the best command mode is the class.
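For reference, the clustering step corresponds to the following sketch using SciPy (the feature matrix is random placeholder data standing in for the profiling measures):

import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
features = rng.normal(size=(33, 8))           # 33 subjects x 8 profiling measures

dist = pdist(features, metric="sqeuclidean")  # squared Euclidean distances
tree = linkage(dist, method="single")         # single linkage = nearest neighbour

labels = fcluster(tree, t=12, criterion="maxclust")  # e.g. cut into 12 clusters
print(labels)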
6 CONCLUSIONS AND FUTURE WORK
Although many Intelligent Wheelchair prototypes
are being developed in several research projects
around the world, the adaptation of user interfaces to
each specific patient is an often neglected research
topic. Typically, the interfaces are very rigid and
adapted to a single user or user group. The Intellwheels project aims to develop a new concept of Intelligent Wheelchair controlled by high-level commands processed through a multimodal interface. However, in order to fully control the wheelchair, users must have a wheelchair interface adapted to their characteristics. In order to collect the characteristics of individuals, it is important to have variables that can produce a user profile. The first stage must be a statistical analysis to extract knowledge about the user and the surroundings. The second stage must be a supervised classification, using Machine Learning algorithms to construct a model for the automatic classification of new cases. This paper mainly proposes a set of tasks for extracting the information required to generate user profiles. A preliminary study was carried out with several volunteers, enabling the proposed methodology to be tested before going into the field and acquiring information from disabled individuals.
In fact, this will be the next step for future work. The
test set presented in this paper will be tested by a
group of disabled individuals, and the results of both
experiments will be compared to check if the
performances of both populations are similar. Also,
in order to collect feedback regarding the system
usability, disabled users will be invited to drive the
wheelchair in a number of real and simulated
scenarios.
ACKNOWLEDGEMENTS
The authors would like to thank FCT – the Portuguese Science and Technology Foundation – for funding the INTELLWHEELS project (RIPD/ADA/109636/2009) and the PhD Scholarship FCT/SFRH/BD/44541/2008, as well as LIACC – Laboratório de Inteligência Artificial e de Computadores, DETI/UA – Dep. Electrónica, Telecomunicações e Informática, and ESTSP/IPP – Escola Superior de Tecnologia da Saúde do Porto, Instituto Politécnico do Porto.
REFERENCES
Adachi, Y., Kuno, Y., Shimada, N., Shirai, N., 1998.
Intelligent wheelchair using visual information on
human faces. In International Conference in Intelligent
Robots and Systems, vol. 1, pp. 354-359.
Bell, D. A., Borenstein, J., Levine, S. P., Koren, Y.; Jaros,
J., 1994. An assistive navigation system for
wheelchairs based upon mobile robot obstacle
avoidance. In IEEE Conf. on Robotics and
Automation, pp. 2018-2022.
Borgerding, B., Ivlev, O., Martens, C., Ruchel, N., Gräser,
A., 1999. FRIEND: Functional robot arm with user
friendly interface for disabled people. In 5th European
Conf. for the Advancement of Assistive Technology.
Braga, R., Petry, M., Moreira, A.P., Reis, L.P., 2009.
Concept and Design of the Intellwheels Platform for
Developing Intelligent Wheelchairs. In LNEE/
Informatics in Control, Automation and Robotics, vol.
37, pp. 191-203.
Cypher, A., Halbert, D. C., 1994. Watch what I do:
programming by demonstration. A. Cypher and D. C.
Halbert, Eds. Massachusetts, USA: Library of
Congress.
Faria, P.M., Braga, R., Valgôde, E., Reis, L.P., 2007.
Interface framework to drive an intelligent wheelchair
using facial expressions. In IEEE International
Symposium on Industrial Electronics, Vigo, pp. 1791-
1796.
Gao, C., Hoffman, I., Miller, T., Panzarella, T., Spletzer,
J., 2008. Performance Characterization of LIDAR
Based Localization for Docking a Smart Wheelchair
System. In International Conference on Intelligent
Robots and Systems, San Diego.
Hamagami, T., Hirata, H., 2004. Development of
Intelligent Wheelchair acquiring autonomous,
cooperative and collaborative behaviour. In IEEE
International Conference on Systems Man and
Cybernetics, vol. 4, pp. 3525-3530.
Horn, O., Kreutner, M., 2009. Smart wheelchair
perception using odometry, ultrasound sensors and
camera. Robotica, vol. 27, no. 2, pp. 303-310, March.
Hoyer, H., Hölper, R., 1993. Open control architecture for
an intelligent omnidirectional wheelchair. In Proc. 1st
TIDE Congress, Brussels, pp. 93-97.
Jia, P., Hu, H., Lu, T., Yuan, K., 2007. Head Gesture
Recognition for Hands-free Control of an Intelligent
Wheelchair. Journal of Industrial Robot, vol. 34, no. 1,
pp. 60-68.
Lakany, H., 2005. Steering a wheelchair by thought. IEE
Digest, vol. 2005, no. 11059, pp. 199-202, The IEE
International Workshop on Intelligent Environments.
Langley, P., 1997. Machine learning for adaptive user
interfaces. In Proceedings of the 21st German Annual Conference on Artificial Intelligence, Freiburg, pp. 53–62.
Langley, P., 1999. User modeling in adaptive interface. In
Proceedings of the seventh international conference on
User modeling, Banff, pp. 357-370.
Levine, S. P., Bell, D. A., Jaros, L. A., Simpson, R. C.,
Koren, Y., 1999. The NavChair assistive wheelchair
navigation system. In IEEE Transactions on
Rehabilitation Engineering, vol. 7, pp. 443-451.
Madarasz, R. L., Heiny, L. C., Cromp, R. F., Mazur, N.
M., 1986. The design of an autonomous vehicle for the
disabled. IEEE Journal of Robotics and Automation,
vol. 2, no. 3, pp. 117-126, September.
Maes, P., 1996. Intelligent Software: Programs That Can
Act Independently Will Ease the Burdens that
Computers Put on People. IEEE Expert Systems, vol.
11, no. 6, pp. 62-63, February.
Miller D., Slack, M., 1995. Design and testing of a low-
cost robotic wheelchair. In Autonomous Robots, vol.
2, pp. 77-88.
Miller, D.P., 1998. Assistive Robotics: An Overview. In
Assistive Technology and AI, pp. 126-136.
Ng, P. C., De Silva, L. C., 2001. Head gestures
recognition. In Proceedings International Conference
on Image Processing, pp. 266-269.
Norcio, A. F., Stanley, J., 1989. Adaptive Human-
Computer Interfaces: A Literature Survey and
Perspective. IEEE Transactions on Systems, Man and
Cybernetics, vol. 19, no. 2, pp. 399-408, March.
Pruski, A., Ennaji, M., Morere, Y., 2002. VAHM: A user
adapted intelligent wheelchair. In Proceedings of the
2002 IEEE International Conference on Control
Applications, Glasgow, pp. 784-789.
Rebsamen, B., Burdet, E., Guan, C., Zhang, H., Teo, C. L., Zeng, Q., Ang, M., Laugier, C., 2006. A Brain-Controlled Wheelchair Based on P300 and Path Guidance. In IEEE/RAS-EMBS International Conference, vol. 20, pp. 1101-1106.
Reis, L. P., Braga, R., Sousa, M., Moreira, A. P., 2010.
Intellwheels MMI: A Flexible Interface for an
Intelligent Wheelchair. RoboCup 2009: Robot Soccer
World Cup XIII, Springer Berlin/Heidelberg, LNCS,
vol. 5949, Graz, pp. 296-307.
Ross, E., 2000. Intelligent User Interfaces: Survey and
Research Directions. University of Bristol, Bristol,
Technical Report: CSTR-00-004.
Satoh Y., Sakaue, K., 2007. An Omnidirectional Stereo
Vision-Based Smart Wheelchair. EURASIP Journal on
Image and Video, p. 11.
Simpson, R., 1998. NavChair: An Assistive Wheelchair
Navigation System with Automatic Adaptation. In
Assistive Technology and Artificial Intelligence.
Berlin: Springer-Verlag Berlin Heidelberg, p. 235.
Simpson, R. C., 2005. Smart wheelchairs: A literature
review. Journal of Rehabilitation Research and
Development, vol. 42 (4), pp. 423–436.
Volosyak, I., Ivlev, O., Graser, A., 2005. Rehabilitation
robot FRIEND II - the general concept and current
implementation. In ICORR 2005 - 9th International
Conference on Rehabilitation Robotics, Chicago, pp.
540-544.
Wellman, P., Krovi, V., Kumar, V., 1994. An adaptive
mobility system for the disabled. In Proc. IEEE Int.
Conf. on Robotics and Automation.
Wooldridge, M., 2002. An Introduction to Multi-Agent
Systems. John Wiley & Sons.