Assessment of Ambient Assisted Living Services in a Living Lab Approach: A Methodology based on ICF

Ana Isabel Martins 1, Alexandra Queirós 2, Margarida Cerqueira 2, Joaquim Alvarelhão 2, António Teixeira 3 and Nelson Rocha 4

1 Institute of Electronics and Telematics Engineering, Campus Universitário, 3810 Aveiro, Portugal
2 Health Sciences School, University of Aveiro, Campus Universitário, 3810 Aveiro, Portugal
3 Dep. Electronics Telecommunications and Informatics, IEETA, University of Aveiro, 3810 Aveiro, Portugal
4 Health Sciences Department, University of Aveiro, Campus Universitário, 3810 Aveiro, Portugal
Abstract. A major problem inherent to the development of Ambient Assisted Living (AAL) products and services is their assessment and validation. It is crucial to involve users in the validation and development process. In a Living Lab approach, the validation of AAL products and services is focused on users' needs and preferences, integrating their daily lives and social roles. The International Classification of Functioning, Disability and Health (ICF) arises as a conceptual framework for developing instruments for the evaluation of AAL products and services. In this sense, the purpose of this paper is to describe a methodology, based on the ICF, for evaluating AAL products and services in a Living Lab approach.
1 Introduction
Ambient Assisted Living (AAL) refers to concepts, products or services that combine new technologies and social systems in order to promote the quality of life of all people during all stages of their lives [1]. AAL emerged from a European Union initiative that aims to meet the growing needs of the elderly population, which nowadays represents a major concern in terms of sustainability [2]. AAL enables the use of products and technologies and the provision of distance services, including health care, helping people achieve autonomy, independence and dignity [3]. These characteristics make AAL well suited to fulfil those needs; moreover, the paradigm is inclusive and universal, benefiting not only older people but everyone, whether or not they have a limitation [3, 4].
A major problem inherent in the development of AAL products and services is their validation. Only through validation is it possible to assess the adequacy of products or services to their users, identifying problems and developing guidelines for
users and developers [5]. The evaluation of these products or services is usually conducted according to a conventional approach: specialists are responsible for the idea, design and development of the technologies, deciding which functions and services to integrate and how users interact with them. Users' needs, experiences and mental models are, in general, not considered from the beginning, but only at a later development stage.
The Living Lab concept arose in this context and represents an innovative approach in which all parties interested in a service, product or application are involved in the development process [6, 7]. The main difference between conventional research programs and the Living Lab approach is the involvement of users, in the context of their daily lives, embracing and integrating all their social roles [6]. This method allows a more realistic, user-centred validation.
The use of AAL products and services aims to improve people's participation and performance in carrying out activities, i.e., to improve individual functioning. The International Classification of Functioning, Disability and Health (ICF), developed by the World Health Organization (WHO), defines functioning as the result of a complex interaction between health conditions and contextual factors, including environmental factors. Because AAL products and services intend to change the individual's surrounding environment in an "invisible" way, in order to improve users' participation, they should be considered environmental factors in an ICF approach. The ICF defines environmental factors as the physical, social and attitudinal environment in which people live and conduct their lives [8].
In the ICF, an environmental factor is classified as a facilitator if it contributes to increasing users' performance and participation. On the other hand, if an environmental factor restricts users' performance and participation, it is classified as a barrier. Thus, different environments may have a distinct impact on the same individual with a particular health condition. If an individual is surrounded by services and products tailored to his or her characteristics, he or she will be able to reach a higher level of functioning. Accordingly, the ICF may serve as a conceptual model for the holistic development of a methodology to evaluate environmental factors and, consequently, AAL products and services.
The existence of a conceptual framework based on standard concepts provides a common language between designers, technicians, stakeholders, service providers and users [9]. Using the ICF as a framework to develop instruments for the evaluation of AAL services allows the terminology, concepts and coded information to be aggregated with the available information, and also to be used as a comprehensive model to characterize users and their contexts, activities and participation. Therefore, the ICF can be used to specify, develop and characterize AAL products and services, as well as to develop appropriate tools to assess them and their impact on users' daily lives [9]. Since the current development and evaluation of AAL products and services is still very technology-oriented and functioning has not been addressed adequately, we propose to develop assessment tools that address individuals' functioning, assessing environmental factors according to an ICF approach.
In addition to this introduction, this paper comprises four sections: 2) Methodology Description, defining the conceptual validation and prototype test and discussing the assessment tools developed; 3) Results, presenting the results obtained from applying the prototype test phase of the methodology; 4) Discussion, reflecting on the results and on the assessment tools developed, based on the ICF; 5) Conclusions, with final considerations and suggestions for future work.
2 Methodology Description
The development of this methodology involves the description of all methodological steps, allowing a standardization of procedures that can easily be adapted to different products or services without compromising the validity of the methodology.
2.1 Phases
The evaluation methodology comprises three reference phases: conceptual validation, followed by a prototype test and, finally, a pilot test. These phases are not isolated; they follow a spiral approach that accompanies the development progress from the beginning (Fig. 1).
The first phase, conceptual validation, aims to determine whether the idea for a product or service is viable in terms of interface and functions. The second phase, the prototype test, is intended to collect information regarding usability and user satisfaction; at this stage there is already a physical implementation of the product or service prototype to be tested by users, in a controlled environment. Finally, the third phase, the pilot test, evaluates, in addition to usability and satisfaction, the meaning that a product or service has in users' lives. For this reason, this last phase differs from the prototype test in the context in which it takes place: the product or service should be installed in users' homes and integrated into their daily routines.
In this paper we address the first and second phases of the evaluation methodology, since, so far, our work has focused mainly on these two.
Fig. 1. Phases of reference of the Living Lab methodology.
This methodology is based on the concepts of scene, task and scenario. A scene is a period during which the properties that constitute the context (such as light and noise) remain unchanged. A task is the action to be executed, and a scenario is the context in which scenes and tasks take place.
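As an illustration only (the methodology does not prescribe any data representation), these three concepts could be captured in a simple structure when planning or logging test sessions; all type and field names below are assumptions introduced for the example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    """An action the participant is asked to execute."""
    description: str

@dataclass
class Scene:
    """A period during which the contextual properties stay unchanged."""
    light: str                                   # e.g. "bright", "dim"
    noise: str                                   # e.g. "quiet", "noisy"
    tasks: List[Task] = field(default_factory=list)

@dataclass
class Scenario:
    """The context in which scenes and tasks take place."""
    name: str
    scenes: List[Scene] = field(default_factory=list)

# Hypothetical example: one scene of a supervised remote exercise session
scenario = Scenario(
    name="Supervised remote exercise session",
    scenes=[Scene(light="bright", noise="quiet",
                  tasks=[Task("Log in to the service"),
                         Task("Follow the proposed exercise")])])
```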
Outlining a generic methodology requires a set of conceptual and logistical aspects to be defined for each evaluation phase, including: goals and objectives, evaluation moments, sample selection, instruments and materials, facilities, session description and data collection.
The conceptual validation and prototype test phases are described below.
2.1.1 Phase 1 – Conceptual Validation
The main objective of this phase is to verify the viability of an idea regarding the user interface and functions of the product or service. The phase is divided into two subphases: designing the idea and testing with mock-ups.
One of the most frequently used methods to validate and develop an idea is brainstorming, a technique that explores the creative potential of a group. A brainstorming session involves a small group of people (12 participants) who produce a set of ideas in a short period of time [10]. The session should be conducted by a moderator and is composed of four parts: introduction, warm-up, generating ideas, and analysis and summary. The main ideas of the session should be listed and registered.
After the idea is validated, it is necessary to develop a mock-up that allows, through focus group and simulation techniques, the opinions of potential users about the product or service to be determined. Mock-ups are low-fidelity prototypes (not implemented) used to collect preliminary data on user-system interaction [11]. The mock-up test can be done using two methods: focus groups and simulation.
The focus group methodology is a qualitative data collection technique that involves a small number of people (6-8 participants) in an informal discussion focused on a specific subject [12]. The session should be conducted by a moderator and is composed of five parts: introduction, warm-up, visualization of the mock-up, discussion, and summary. The main ideas of the session should be listed and registered.
A simulation is an imitation of the functioning of real operations. This technique uses a simulator, understood as a total or partial representation of an object or a task to be repeated [13]. For an adequate evaluation, simulation should be used at an early stage of testing, before the service reaches a stable state of development [13]. A simulation session should have eight participants and consists of five parts: introduction, warm-up, testing, filling in the questionnaire, and summary. The instrument used to collect data is the conceptual validation assessment questionnaire. Questionnaires should be analyzed (content analysis) and the results registered and used in a critical review meeting, where the research team decides whether to move on to the next methodology phase or to repeat the conceptual validation.
2.1.2 Phase 2 – Prototype Test
The prototype emerges from the conceptual validation phase, in which all functions and aspects related to the layout of the product or service were validated. The main objective of the prototype test phase is to evaluate the product or service in terms of usability and satisfaction. Data collection is divided into three evaluation moments: i) pre-test, ii) evaluation during the test session, and iii) post-test.
The pre-test is the first of the three evaluation moments that constitute the prototype test and happens immediately before the test session. The pre-test session consists of two parts: introduction and filling in the pre-test assessment questionnaire. Questionnaires should be analyzed (content analysis) and the results registered.
The second evaluation moment occurs during the execution of the prototype test itself. It consists of real-time, in loco evaluation of the interaction between the user and the product or service, and comprises three parts: introduction, preparation for the tasks, and execution of the scenes. It should be held in the Living Lab, which should contain all the necessary equipment related to the product or service being evaluated and be equipped with the necessary infrastructure to condition the environment in order to create the different scenes. Data collection is done through observation and the registration of field notes. The session is filmed and the video is analyzed using two methods: filling in an observation grid and registering critical incidents.
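The paper does not specify a format for the observation grid or the critical incident register; purely as a sketch, a session record could be organized as follows (all names are assumptions introduced for illustration).

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CriticalIncident:
    """A notable event identified while analyzing the video of the test session."""
    video_time_s: float        # moment in the recording (seconds)
    scene: str                 # scene being executed when it occurred
    description: str           # what happened
    severity: str              # e.g. "minor", "major"

@dataclass
class SessionRecord:
    """Aggregates the data collected during one prototype test session."""
    participant_id: str
    field_notes: List[str] = field(default_factory=list)
    observation_grid: Dict[str, str] = field(default_factory=dict)  # item -> observed value
    critical_incidents: List[CriticalIncident] = field(default_factory=list)

# Hypothetical example of registering a critical incident
record = SessionRecord(participant_id="P01")
record.critical_incidents.append(
    CriticalIncident(video_time_s=312.0, scene="Exercise execution",
                     description="Participant could not find the closing button",
                     severity="major"))
```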
The post-test is the last of the three evaluation moments that constitute the prototype test. It consists of filling in the post-test assessment questionnaire, which assesses users' usability and satisfaction. Questionnaires should be analyzed (content analysis) and the results registered. This material, together with the records from the analysis of the other assessment methods of all evaluation moments, should be used in a review meeting where the research team decides whether to move on to the next methodology phase, to repeat the prototype test, or to go back to conceptual validation.
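The outcome of each review meeting is a team decision rather than something computed; the following minimal sketch merely shows the three possible outcomes as they could be recorded (names are assumptions).

```python
from enum import Enum

class ReviewDecision(Enum):
    """Possible outcomes of the review meeting that closes each evaluation phase."""
    MOVE_ON = "advance to the next methodology phase"
    REPEAT_PHASE = "repeat the current phase"
    BACK_TO_CONCEPT = "return to conceptual validation"

# Hypothetical example: decision taken after a prototype test review meeting
decision = ReviewDecision.REPEAT_PHASE
print(f"Review meeting outcome: {decision.value}")
```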
2.2 Assessment Tools
The development of the methodology described in the previous subsection resulted in a set of assessment tools. These tools are an essential part of the evaluation, so it is important to ensure their quality.
Since the aim was to verify whether the ICF could be a suitable framework to evaluate AAL products and services in a Living Lab approach, the research team created the final assessment tools for the different phases as a result of several brainstorming meetings.
The final assessment tools are the conceptual validation assessment questionnaire and the post-test assessment questionnaire, for the first and second phases respectively. These tools allow each component feature of the product or service to be classified as a barrier or a facilitator, according to the ICF concept. The answer key of the questionnaires was adapted from the first qualifier of the ICF environmental factors. Since the user must take a positive or negative position for each item, the neutral qualifier was removed from the answer key (see example in Fig. 2).
  1. Rate in relation to the characteristics of layout
                                                             Barrier        Facilitator
  The login was a                                           -3  -2  -1      1   2   3    NA
  The different components' graphic disposition was a       -3  -2  -1      1   2   3    NA

Fig. 2. Excerpt of the final assessment tool.
The application of the final assessment tools showed that they enable component features to be rated as facilitators or barriers. The instrument was also able to reflect what actually happened during the test session, thanks to the "not applicable" (NA) option. However, there were difficulties with the response scale, namely in grading the barrier or facilitator. For example, when evaluating some components, participants were not able to identify whether a component feature was a small or a medium facilitator.
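Purely as an illustration of how a response on the adapted answer key could be coded (the grade labels below, and the mapping itself, are assumptions rather than part of the published instrument):

```python
def classify_response(score):
    """Map a response on the adapted answer key (-3..-1 or 1..3, None for NA)
    to a barrier/facilitator classification; grade labels are illustrative."""
    if score is None:
        return "not applicable"
    if score not in (-3, -2, -1, 1, 2, 3):
        raise ValueError("score must be -3..-1, 1..3 or None (NA)")
    grade = {1: "small", 2: "medium", 3: "large"}[abs(score)]
    kind = "barrier" if score < 0 else "facilitator"
    return f"{grade} {kind}"

# Hypothetical examples
print(classify_response(2))     # medium facilitator
print(classify_response(-3))    # large barrier
print(classify_response(None))  # not applicable
```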
3 Results from a Case Study
The evaluation methodology described above was first used to evaluate the TeleRehabilitation service under development in the LUL project [14]. This section presents the prototype test results as an example of the methodology's application. In general terms, the service allows supervised remote exercise sessions as a way to maintain health and prevent illness [14]. Table 1 lists the facilitators and barriers mentioned by the clients, and Table 2 those mentioned by the service provider.
Table 1. Aspects mentioned by the clients.

Facilitators:
- Sequence of the session
- Graphic layout of the components
- The written information and images
- Touch screen
- Visualization of self-video
- Receptiveness to using this service
- The importance of this kind of services available to older people

Barriers:
- Self-video image mirrored
- Absence of voice interface
- Font size
- Closing button too small
- Commands not significant
- Monitoring area display too large
- Small emphasis on the information provided by the health professionals
Table 2. Aspects mentioned by the service provider.

Facilitators:
- Monitor vital signs at distance
- Service stable throughout all session
- Existence of a repository of available exercises

Barriers:
- Inability to see the same as clients
- Video window too small
- Inability to zoom client's image
- Absence of voice interface
4 Discussion
The success of the Living Lab methodology depends on the assessment tools used. A major concern in the development of this methodology was therefore to verify the quality of the instruments, particularly those based on the new ICF conceptual framework. In general terms, the results of the TeleRehabilitation case study were positive and will be very useful in guiding the next phases and further work. Most of the criticisms mentioned by clients and service providers, such as the lack of a voice interface and the non-adaptation of text size, correspond to aspects that were not yet functioning due to the very early stage of the TeleRehabilitation prototype implementation; since they were already part of the future work planned for the project, this is itself a good indicator of the service specification. When the implementation is complete, it is intended to replicate the prototype test with a larger number of users, integrating the results of this test session and collecting more significant data about usability and satisfaction with the TeleRehabilitation service.
Although the instrument was not sensitive in recognizing why a particular component or feature acts as a facilitator or a barrier, the results suggest that it is a good measure for discriminating between facilitators and barriers. Users had difficulty, with the answer key, in grading the facilitator or barrier level of an environmental factor, which is one of the major flaws of the instrument. This may be because the methodology represents an effort to update the concepts behind the development of assessment tools in this particular area. Nevertheless, this grading is required because it is intended to quantify the impact that a certain product or service can have on users' daily lives.
In a Living Lab approach, ongoing tests to identify the reasons why a product is a barrier are essential, because that is the only way to understand what changes should be made to the product in order to better adapt it to users' needs. Likewise, it is crucial to understand the facilitating aspects of particular components of a product or service in order to define good practices: if a component has a good level of facilitation, it can be replicated in similar products or services.
5 Conclusions
We proposed the ICF as a framework for the development of an evaluation scheme for AAL products and services and developed an evaluation methodology based on three reference phases (the first two are described in this paper). This methodology was then applied in a case study, the TeleRehabilitation service.
Despite the operational difficulties in evaluating AAL products and services using the ICF conceptual framework, it still adds value because it focuses the assessment on the user's functioning. The ICF seems to be useful as a first level of screening, discriminating facilitators from barriers. However, for a more accurate assessment, and to identify the reasons why an environmental factor is considered a facilitator or a barrier, it should be combined with other assessment tools.
As future work in LUL and follow-up projects (already started), we suggest improving the instrument in order to address the critical aspects identified, including the grading of the answer key, and increasing its sensitivity in recognizing why a particular component or feature acts as a facilitator or a barrier. More work is also needed to develop new assessment tools that allow the specification of AAL products and services based on the ICF concepts, which would standardize the language between the different stakeholders interested in the development of AAL products or services. It is also planned to continue the implementation and testing of this methodology, namely through the pilot test phase and the establishment of a continuous improvement plan to ensure the quality of the evaluation methodology, including a set of control mechanisms to continually assess the evaluation tools created.
Acknowledgements
This work is part of the Living Usability Lab for Next Generation Networks
(www.livinglab.pt) project, a QREN project, co-funded by COMPETE and FEDER.
References
1. Storf, H., Becker, M. and Riedl, M., “Rule-based Activity Recognition Framework:
Challenges, Technique and Learning”, in Pervasive Computing Technologies for
Healthcare, London, United Kingdom, 2009, pp.1-7.
2. Sánchez-Pi, N. and Molina, J. M., “A Centralized Approach to an Ambient Assisted
Living Application: An Intelligent Home”, 10th International Work-Conference on
Artificial Neural Networks (IWANN), Salamanca, Spain, 2009, pp.706-709.
3. Wojciechowski, M., Ristok, H., Brandes, W., Lange, B. and Baumgarten, B., “Architecture
of the ‘Daily Care Journal’ for the Support of Health Care Networks” in Wichert, R. &
Eberhard, B., (Eds) Ambient Assisted Living: 4, AAL-Kongress 2011. Germany: Springer,
2011.
4. European Commission, “The Build-for-All Reference Manual”, Luxembourg: Info-Handicap and the “Build-for-All” project, 2006.
5. Hoppe, A., “Technological Stress: Mental Strain of Younger and Older Users If
Technology Fails”, in Ambient Assisted Living, R. Wichert and B. Eberhardt (Editors),
Germany: Springer, 2011, pp. 17-30.
6. Moumtzi, V. and Wills, C., “Utilizing Living Labs approach for the Validation of Services
for the Assisting Living of Elderly People”, in 3rd IEEE International Conference on
Digital Ecosystems and Technologies, Istanbul, Turkey, 2009, pp. 552-557.
7. Feurstein, K., Hesmer, A., Hribernik, K., Troben, K. and Schumacher, J., “Living Labs: A new development strategy”, in Schumacher, J. and Niitamo, V.-P. (Eds.), European Living Labs: A New Approach for Human Centric Regional Innovation, Berlin: Wissenschaftlicher Verlag, 2008.
8. WHO, “International Classification of Functioning, Disability and Health (ICF)”, Geneva,
2011.
9. Queirós, A., Alvarelhão, J., Silva, A., Amaro, A., Teixeira, A. and Rocha, N., “The International Classification of Functioning, Disability and Health as a Conceptual Framework for the Design, Development and Evaluation of AAL Services for Older Adults”, in Workshop AAL, Rome, Italy, 2011, pp. 46-59.
10. InteliMap, “Brainstorming”, 2010 [cited 29 July 2011]; available from: http://www.intelimap.com.br/papers/brainstorm.pdf.
11. Bernsen, N. and Dybkjaer, L., “Multimodal Usability”. Denmark: Springer, 2009.
12. Wilkinson, S., “Focus Groups” in J. Smith. (Ed.), Qualitative Psychology: A Practical
Guide to Research Methods, London: Sage Publications, 2003, pp. 184-204.
13. Ai, H. and Weng, F., “User Simulation as Testing for Spoken Dialog Systems”, Proceedings
of the 9th SIGdial Workshop on Discourse and Dialogue, Ohio, 2008, pp. 164–171.
14. Teixeira, A., Pereira, C., Silva, M., Alvarelhão, J., Silva, A., Martins, A., Cerqueira, M.,
Pacheco, O., Almeida, N., Oliveira, C., Costa, R. and Neves, A., “New Telerehabilitation
Services for the Elderly”, in I. Miranda & M. Cruz-Cunha (Eds.), Handbook of Research
on ICTs for Healthcare and Social Services, IGI Global, 2012 (submitted).