Design of a New Digital Cognitive Screening Tool
on Tablet: AlzVR Project
Florian Maronnat 1,a, Guillaume Loup 1,b, Jonathan Degand 2, Frédéric Davesne 1,c and Samir Otmane 1,d
1 Université Paris-Saclay, Univ. Évry, IBISC, 91020, Évry-Courcouronnes, France
2 Independent Researcher
a https://orcid.org/0000-0003-3766-5789
b https://orcid.org/0000-0003-3476-583X
c https://orcid.org/0000-0001-9100-7109
d https://orcid.org/0000-0003-2221-4264
Keywords: Digital Tablet, Screening, Alzheimer's Disease, Usability.
Abstract: Alzheimer's disease is the leading cause of dementia worldwide, and no curative treatment currently exists. It represents a public health challenge with an increasing prevalence and associated costs. Usual diagnostic methods rely on extended interviews and paper tests administered by an external examiner. We aim to create a novel, quick cognitive-screening tool on a digital tablet. This program, built and edited with Unity®, runs on Android® on the Samsung Galaxy Tab S7 FE®. Composed of seven tasks inspired by the Mini-Mental Status Examination and the Montréal Cognitive Assessment, it covers several cognitive functions. The application is designed to run fully offline, to guarantee the uniqueness of records merged from multiple sites, and to safeguard the confidentiality of patient information in the healthcare domain. In addition, each site manager can access and review the datasets of their own site, supporting their operational work and decision-making. We performed a preliminary usability assessment among 24 healthy participants, with a final F-SUS score rated "excellent". Participants perceived the tool as simple to use and completed the test in a mean time of 142 seconds.
1 INTRODUCTION
Alzheimer's disease (AD) is the leading cause of neurodegenerative cognitive decline, affecting millions of people worldwide at a considerable cost for countries (International et al., 2020). In AD, patients progressively lose their cognitive abilities, and behavioral troubles can occur. In the absence of an effective treatment, loss of memory and autonomy becomes a heavy burden for caregivers and families. AD management is a global public health challenge for health systems facing a constantly increasing prevalence in aging populations.
Early screening of cognitive decline leads to better and earlier support for patients and their families. Unfortunately, general practitioners (GPs) do not always have enough time to perform initial cognitive
explorations. They refer their patients to specialized centers, where waiting times for an appointment can be long, delaying diagnosis as well as symptomatic and social measures. These labeled memory consultations are primarily available in hospitals, and the diagnostic process still relies on paper tests involving an external examiner.
We have developed the AlzVR project, which
aims to propose a multimodal digital program for
cognitive screening. The first published program was
an autonomous immersive assessment composed of
thirteen tasks inspired by MMSE and MoCA and
displayed on Oculus Quest® (Maronnat et al., 2020,
2022). This assessment has not yet been tested among
an older population, but Clay et al. showed that
immersive environments could be efficient in
cognitive screening (Clay et al., 2020). However,
using virtual reality in primary care can be challenging to implement. Thus, we developed a new modality of AlzVR that could be easier to use as a non-immersive environment. This self-administered questionnaire runs on a digital tablet with questions inspired by the MMSE and MoCA, as in the immersive version. The assessment should cover several cognitive functions in a short time.
2 RELATED WORK
Numerous tests exist to assess cognition (for global or specific evaluation), but the Mini-Mental Status
Examination (MMSE) (Folstein et al., 1975) and the
Montréal Cognitive Assessment (MoCA)
(Nasreddine et al., 2005) are widely used in primary
screening, and most professionals know them. Both
tests share several questions and explore
approximately the same cognitive functions, even
though MoCA evaluates frontal deficits more
precisely. They both cover several cognitive functions quickly and can be repeated over the medical follow-up of patients. More recently, the MoCA has shown higher sensitivity (Se) than the MMSE in differentiating healthy subjects from demented patients, whereas the MMSE retains higher specificity (Sp) (Ciesielska et al., 2016).
Nevertheless, there are good correlations between the
two tests (Bergeron et al., 2017; Chua et al., 2019).
Besides these classical evaluations, numerous
authors have developed new screening tools on digital
tablets that show good correlations with usual tests.
Although several recent systematic reviews have been published (Amanzadeh et al., 2022; Chan et al., 2021; Tsoy et al., 2021), only one meta-analysis, with good results, has focused on digital drawing tasks (Chan
et al., 2022). In all these studies, the digital tests could run either on digital tablets or on simple computer touch screens and were self-administered or administered by an examiner. Unfortunately, few of these programs were available in French (Liu et al., 2021; Rai et al., 2020; Wu et al., 2017), limiting their use with francophone patients.
Finally, most of these tools have not moved beyond the experimental stage and are not used in daily practice by health practitioners, even though many of these applications are already available on commercial platforms such as Apple iTunes or the Google Play Store (Thabtah et al., 2020).
However, the usability of digital tablets has been broadly demonstrated among large populations (Kortum & Sorber, 2015), and access to new technologies keeps improving, with most patients owning a tablet or a smartphone.
Physicians would benefit from using these innovative tools to perform early cognitive assessments in primary care before referring their patients to specialized consultations. A digital tablet assessment should be short, reliable, and understandable for patients, with cognitive tasks reproducing classical questions from the usual paper tests.
Given this lack of available digital assessments in French, we aim to create a new self-administered questionnaire as a touchscreen-based application inspired by the MMSE and MoCA tests.
3 MATERIALS AND METHODS
3.1 Experience Architecture
We constructed our program using Unity®
(v.2021.3.11) for Android® (tablet).
The game consists of three main scenes:
1. The "Menu" scene includes the main menu,
medical questionnaires, and results consultation;
2. The "InGame" scene contains the tutorial and
all the user's tasks;
3. The "Survey" scene collects user feedback,
which the administrator can only consult.
The main module of the "InGame" scene, the
GameManager, references the list of nine tasks to be
performed. Although each task has a different
objective, each has textual and audio instruction and
then proposes none, one or several answers in the
form of images or text. So, the "JExperience" parent
class groups all the attributes and methods common to
all the tasks. However, the specific features of each
task have necessitated the creation of new classes
("JExpMonoChoice", "JExpChoiceTown", "JExp
Images") inherited from "JExperience" (Figure 1).
3.2 Welcome Menu
Three scenes compose AlzVR: the welcome menu ("Menu"), the playing scene ("InGame"), and the F-SUS questionnaire ("Survey").
When launching the application, there are three
possibilities (Figure 2):
1. Supervised experience: medical questionnaire
and cognitive assessment;
2. Quick experience: cognitive assessment only;
3. Results: results visualization.
3.3 Medical Questionnaire
The supervised experience includes a preliminary medical questionnaire to collect socio-demographic items (name, age, type of residence) and medical background (diagnosis, previous cognitive tests, treatments, and sensory loss).
3.4 Anonymization
After the final validation of the medical questionnaire, the program automatically generates an anonymized number composed of the date and time down to the second, without integrating the initials of the name. A typical anonymous number looks like YYYYMMDDHHMMSS. This process safeguards the confidentiality of patients' personal information and allows a further blinded analysis. In the "quick experience", an "A" precedes all anonymized numbers, as in A-YYYYMMDDHHMMSS. In the "supervised experience", a letter identifying the medical center, if any, can be automatically added before the number.
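A minimal sketch of this ID scheme, assuming the timestamp format and prefixes described above (the class and method names are hypothetical):

```csharp
using System;

// Sketch of the anonymization scheme: a timestamp-based number, an "A-" prefix
// for the quick experience, and an optional site letter in supervised mode.
public static class AnonymousId
{
    public static string Generate(bool quickExperience, char? siteLetter = null)
    {
        // Date and time down to the second: YYYYMMDDHHMMSS.
        string stamp = DateTime.Now.ToString("yyyyMMddHHmmss");
        if (quickExperience) return "A-" + stamp;               // e.g. A-20221012093045
        return siteLetter.HasValue ? $"{siteLetter}-{stamp}" : stamp;
    }
}
```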
3.5 Playing Scene
3.5.1 General Aspect
The visual design is kept simple to avoid any cognitive overload. All scenes appear on a uniformly coloured background.
In all experiences, the user selects answers by touching one or several buttons. These buttons are large, making them easy to touch. At most eight buttons appear on the screen, ensuring good visibility. All the pictures implemented in the scenes (cognitive tasks) are royalty-free.
Figure 2: Welcome menu view.
3.5.2 Answer Modality
Once the user has selected an answer, a confirmation screen appears with "Yes" and "No" buttons. This step avoids inattentive answers and validates the choice (Figure 3). "Yes" leads to the next question, while "No" allows a new chance to answer. Each exercise lasts 30 seconds at most; the next question automatically appears if the user does not answer in time (counted as a "Timeout"). Choosing "No" at the confirmation step resets the timer, but only three attempts are allowed. In every
case (success or failure), a message "Well done!"
congratulates the user. This message provides a
cheerful ambiance and can reduce further false results
of stress or fear.
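The answer flow can be summarized by the following sketch, a Unity coroutine assuming the JExperience class sketched in Section 3.1; all method names are hypothetical:

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the answer flow described above (assumed implementation):
// 30 s per exercise, a Yes/No confirmation step, up to three attempts, else "Timeout".
public class AnswerController : MonoBehaviour
{
    const float TimeLimit = 30f;
    const int MaxAttempts = 3;

    // Hypothetical hooks; the paper does not name these methods.
    bool TryGetSelection(out int index) { index = -1; return false; }
    bool UserConfirmedYes() { return true; }
    void Record(string result) { Debug.Log(result); }

    public IEnumerator RunExercise(JExperience task)
    {
        for (int attempt = 1; attempt <= MaxAttempts; attempt++)
        {
            float elapsed = 0f;
            int selected = -1;
            // Wait for a touch, up to the 30-second limit.
            while (elapsed < TimeLimit && !TryGetSelection(out selected))
            {
                elapsed += Time.deltaTime;
                yield return null;
            }
            if (selected < 0) { Record("Timeout"); break; }   // "?" in the results menu
            if (UserConfirmedYes())                           // confirmation screen
            {
                Record(task.CheckAnswer(selected) ? "Success" : "Failure");
                break;
            }
            // "No" resets the timer and grants another attempt (three at most).
        }
        // In every case, a "Well done!" message congratulates the user.
    }
}
```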
3.5.3 Preliminary Training Task
A training session occurs before the cognitive questionnaire to ensure a good understanding of how the tablet works. The user must touch shapes on the screen following an oral instruction delivered by the program, such as "Please touch the heart shape" (Figure 4). A failure in the training tasks stops the assessment, and the test cannot continue.
Figure 4: Training task.
3.5.4 Cognitive Questionnaire
If the training tasks are successful, the cognitive assessment begins; it comprises seven tasks derived from the MMSE and the MoCA. We wanted a varied assessment, so we selected questions from multiple cognitive fields, presented in Table 1.
Table 1: Numerical cognitive tasks.

| Paper test | Cognitive function explored | Numerical cognitive task |
|---|---|---|
| MoCA, MMSE | Auditory memory and attention | Three words task (immediate recall) |
| MoCA | Memory and attention | Clock recognition |
| MoCA, MMSE | Auditory memory and attention | Three words task (delayed recall) |
| MoCA, MMSE | Spatial orientation | Flags; Town |
| MoCA, MMSE | Temporal orientation | Season; Year |
| MoCA | Abstraction | Abstraction |
The first task is the "three words" test. In the MMSE or MoCA, the examiner orally delivers the three words, and the patient must repeat them (immediately and with a delayed recall). To obtain a self-administered questionnaire, we kept the oral delivery by the program (sound only) but replaced the oral repetition with a choice of three images among eight. There is still an immediate and a delayed recall. The three words belong to different semantic fields (animal, vehicle, and vegetable).
The clock recognition task is inspired by the clock drawing test (Sunderland et al., 1989), in which the patient draws a circle, the numbers, and the hands indicating a precise hour (11h10, for example). We created a novel task proposing three different clocks: the correct one (10h30), the symmetric clock (05h50), and a false clock. The oral instruction gives the hour to choose ("select the clock indicating..."), and the patient selects it on the screen. There are two series of clocks, followed by the three-words delayed recall.
To explore spatial and temporal orientation, we selected a simple format with an oral question ("What is the current season?", "Select the flag of the country where we are") and several pictures as answers. The country is represented by a flag, limiting written instructions for spatial orientation. Names of towns are presented as classical French road signs. The expected answer for the town can be changed depending on the assessment site.
Temporal orientation tasks are relatively similar: the user must select the current season, presented by its name and a typical image (Figure 5). Considering the varying dates of season changes, we allow a 48-hour margin around each change, as sketched below.
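A minimal sketch of this 48-hour tolerance, assuming approximate northern-hemisphere boundary dates (the paper only states that a 48-hour margin is allowed around season changes):

```csharp
using System;
using System.Collections.Generic;

// Sketch of the season task's answer check with a 48-hour margin (assumed implementation).
public static class SeasonCheck
{
    // Approximate northern-hemisphere season-start dates for a given year.
    static List<(DateTime start, string name)> Boundaries(int year) =>
        new List<(DateTime start, string name)>
        {
            (new DateTime(year, 3, 20), "spring"),
            (new DateTime(year, 6, 21), "summer"),
            (new DateTime(year, 9, 22), "autumn"),
            (new DateTime(year, 12, 21), "winter"),
        };

    public static string CurrentSeason(DateTime now)
    {
        string season = "winter"; // before 20 March, it is still winter
        foreach (var (start, name) in Boundaries(now.Year))
            if (now >= start) season = name;
        return season;
    }

    // The current season is always accepted; within 48 h of a boundary,
    // the season on the other side of the boundary is accepted too.
    public static bool IsAccepted(string answer, DateTime now)
    {
        if (answer == CurrentSeason(now)) return true;
        foreach (var (start, _) in Boundaries(now.Year))
            if (Math.Abs((now - start).TotalHours) <= 48 &&
                answer == CurrentSeason(start.AddDays(now < start ? 1 : -1)))
                return true;
        return false;
    }
}
```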
In the year test, we introduced several confusing dates (minus one year, minus one century). All dates end with the same number as the current year.
In the MoCA, abstraction is tested on the similarity between two words (for example, an orange and a banana are both fruits). In our abstraction task, the user must complete a fruit series with a third picture (Figure 6). A confusing element is among the four choices (a picture from the three-words test).
Figure 5: Season test.
3.6 Results Menu
The results menu allows a simple visualization of the patient's score after the cognitive questionnaire (Figure 7). A password protects this section, which displays only the anonymized number (patient ID). Three outcomes are possible: "X" (failure), "V" (success), and "?" (timeout).
Figure 6: Abstraction test.
Figure 7: Results menu view.
3.7 F-SUS Questionnaire
After the cognitive tasks, we implemented the French translation of the System Usability Scale (Brooke, 1996), the F-SUS questionnaire (Gronier & Baudet, 2021). It evaluates global satisfaction through ten questions, each with five degrees of response from 1 (strongly disagree) to 5 (strongly agree). F-SUS results do not appear in the results menu and are stored directly.
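For reference, the standard SUS scoring rule, which the F-SUS retains (Brooke, 1996; Gronier & Baudet, 2021), can be computed as in the following minimal sketch:

```csharp
// Standard SUS scoring (Brooke, 1996): odd items contribute (score - 1),
// even items contribute (5 - score); the sum is multiplied by 2.5
// to give a score out of 100.
public static class SusScore
{
    public static float Compute(int[] answers) // ten answers, each in 1..5
    {
        float sum = 0f;
        for (int i = 0; i < 10; i++)
            sum += (i % 2 == 0) ? answers[i] - 1   // items 1, 3, 5, 7, 9
                                : 5 - answers[i];  // items 2, 4, 6, 8, 10
        return sum * 2.5f;
    }
}
```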
3.8 Data Storage
All data are stored in the tablet's internal memory in CSV format. This type of file can be easily exported and processed. Personal information (first name, last name, date of birth) is stored separately from the other results (experiences and F-SUS), in different files. All results are presented using only the anonymized identification number. Thus, a blinded analysis is possible using only anonymized data (Figure 8). The correspondence with identifying data is restricted to the investigators.
Figure 8: Process of data storage and anonymization.
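A minimal sketch of this two-file scheme; file names and method signatures are assumptions, as the paper only specifies CSV files in internal memory with personal data kept apart:

```csharp
using System.IO;
using UnityEngine;

// Sketch of the two-file storage scheme: personal information and anonymized
// results are kept in separate CSV files in the tablet's internal memory.
public static class DataStore
{
    static string Dir => Application.persistentDataPath; // tablet internal memory

    // Restricted file linking the anonymized ID to identifying information.
    public static void SaveIdentity(string id, string firstName,
                                    string lastName, string birthDate)
    {
        File.AppendAllText(Path.Combine(Dir, "identities.csv"),   // hypothetical name
                           $"{id};{firstName};{lastName};{birthDate}\n");
    }

    // Results file containing only the anonymized ID, enabling blinded analysis.
    public static void SaveResult(string id, string task, string outcome, long timeMs)
    {
        File.AppendAllText(Path.Combine(Dir, "results.csv"),      // hypothetical name
                           $"{id};{task};{outcome};{timeMs}\n");
    }
}
```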
3.9 Preliminary Usability Assessment
3.9.1 Study Population
We carried out an experimental, qualitative study at the IBISC Laboratory (University of Évry-Paris-Saclay, Department of Sciences and Technologies) among volunteers (staff and students) to assess preliminary usability according to the ISO 9241-11 standard (International Organization for Standardization, 2018) and the Nielsen method (Valentin & Lemarchand, 2010). The tablet was a Samsung Galaxy Tab S7 FE® (315.0 mm screen, 2560x1600) running Android 11 (One UI 3.1 user interface).
The exclusion criteria were age under 18 years, no understanding of the French language, and uncorrected visual or hearing loss.
Participants were recruited through university mailing lists and advertisements posted on the premises.
3.9.2 Ethical Statement
This work was carried out in accordance with the Declaration of Helsinki of the World Medical Association, as revised in 2013 for experiments involving humans. Data exploitation was anonymous, using an automatic participation number generated from the date and time of completion. The local Université Paris-Saclay ethics committee approved all documents and protocols on 2022/07/07 (file 433). Informed consent was obtained from all subjects involved in the study. Participation was free, with no remuneration.
3.9.3 Stages
Participants successively and anonymously completed the following stages:
1. Pre-questionnaire: fill in an online questionnaire
to collect socio-demographic data (age,
profession, sex) and numerical habits
(smartphone and tablets);
2. Quick experience;
3. F-SUS questionnaire;
4. Post-questionnaire: online questionnaire to collect
free comments about the program.
3.9.4 Data Collection and Analysis
During the tests, we collected the following
parameters: answer (success, fail), number of trials,
and response time (ms).
We chose the total F-SUS score, calculated following the authors' recommendations (Brooke, 1996; Gronier & Baudet, 2021), as the primary endpoint to assess usability, with a goal of 85.5%, a level considered "excellent".
All data were collected and analyzed blind, using the participants' anonymized numbers.
4 RESULTS
4.1 Population
We included 24 participants between 2022/09/27 and
2022/10/12. Their socio-demographics are presented
in Table 2, and their numerical habits are in Table 3.
Table 2: Socio-demographic characteristics of the population.

| Characteristic | Population (n = 24) |
|---|---|
| Gender (F/M) | 10/14 |
| Age (years), m (sd) [min-max] | 41.88 (13.11) [23-66] |
| Profession: Student | 2 |
| Profession: Engineer | 2 |
| Profession: Doctoral student | 3 |
| Profession: Technician | 3 |
| Profession: Researcher | 5 |
| Profession: Administrative | 9 |

m = mean; sd = standard deviation; min = minimum; max = maximum
4.2 Success Rate
All participants (100%) completed the cognitive tasks. The overall success rate across questions was 97.4% (187 correct answers out of 192). The two tasks that produced failures were the clock task (2 failures) and the season task (3 failures).
Table 3: Numerical habits of the population.

| Question | Result |
|---|---|
| Have you ever used a smartphone? (%) | Yes: 100 |
| If yes, for how many years? m (sd) | 12.37 (4.8) |
| If yes, during 2022, how often? (%) | Everyday: 100 |
| Have you ever used a digital tablet? (%) | Yes: 96; No: 4 |
| If yes, for how many years? m (sd) | 8.25 (3.13) |
| If yes, during 2022, how often? (%) | Everyday: 21.7; Once/week: 13.1; Once/month: 17.4; Once/year: 47.8 |

m = mean; sd = standard deviation
4.3 Time of Completion
The average test administration time (excluding training tasks) was 141.47 (18.77) seconds; details of task completion times are presented in Table 4.
4.4 F-SUS Questionnaire
Ninety-six percent of participants completed the F-SUS questionnaire (one person left the application before completing it); the results for each question are presented in Table 5. The overall F-SUS score was 89.24%, considered "excellent".
4.5 General Remarks
In the post-questionnaire, we collected general opinions about the program. Users overwhelmingly found it easy to use. The negative remarks concerned the lack of fluidity of the oral instructions and tests judged too simple. User reviews are shown in Figure 9.
5 DISCUSSION
Numerous paper tests exist to assess cognition, either for global screening or for a specific function (De Roeck et al., 2019). At the same time, several authors have studied the possibility of using digital tablets for
evaluating cognitive decline and performing training
tasks in healthy and cognitively impaired patients
(Koo & Vizer, 2019; Wilson et al., 2022). Despite
these numerous and efficient digital tests (Chan et al.,
2021), cognitive evaluations still rely on paper tests
and need an exterior examiner. Facing an increasing
prevalence of patients in the future decades
(International et al., 2020) with a more and more
precise diagnostic (biological, functional) (Dubois et
al., 2021), there is a need to get simple, quick and
performing tools to help practitioners in cognitive
decline screening. During our conception, we chose
to create a new tool in the French language inspired
by two primary used and recommended tests (Janssen
et al., 2017; Pinto et al., 2019): MMSE, MoCA, and
the clock drawing test (CDT) (integrated into the
MoCA).
In the usual tests, the patient answers most of the questions orally to the examiner. When conceiving our tasks, we chose not to use speech recognition because of its current limitations (Basak et al., 2023): incorrect speech interpretation would have led to false results. However, excluding oral answers does not allow a global language evaluation as in the MMSE or MoCA.
Table 4: Task completion times (seconds).

| Cognitive task | Completion time, m (sd) [min-max] |
|---|---|
| Three words task (immediate recall) | 31.98 (2.62) [25.16-38.60] |
| Clock recognition (2 series) | 36.43 (5.49) [25.86-51.05] |
| Three words task (delayed recall) | 17.41 (2.83) [11.04-22.65] |
| Flags | 12.23 (1.87) [8.66-16.29] |
| Town | 10.91 (2.39) [6.95-16.45] |
| Year | 10.41 (2.12) [6.38-14.68] |
| Season | 11.73 (3.97) [6.75-23.68] |
| Abstraction | 10.39 (2.67) [6.23-15.19] |
| Total | 141.47 (18.77) [97.64-183.58] |

m = mean; sd = standard deviation; min = minimum; max = maximum
The CDT is widely used in daily practice and is part of quick screening tools such as Codex (Belmin et al., 2007) and Mini-Cog (Borson et al., 2003). Müller et al. have proposed a digital clock drawing task using a stylus (Müller et al., 2019), showing good correlations with paper tests. This transposition still requires external human validation or automatic image analysis, as proposed by Park et al. (Park & Lee, 2021). We wanted a simple and short task with no external analysis, so we switched from a drawing task to a picture-recognition task. Drawing a clock and placing the hands requires visuospatial abilities and executive functions. Nevertheless, there were technical constraints on producing a self-administered questionnaire with few written instructions, no external validation, and simple orders. These limitations may bias the cognitive evaluation by underestimating executive functions.
Figure 9: Word cloud of user reviews.
Finally, our assessment does not evaluate writing abilities, because we did not want to use a stylus or further human validation, even though dysgraphia is a known symptom of AD (Onofri et al., 2016). Despite exploring several questions and different cognitive fields, our new assessment has limitations that call for monitoring and potential upgrades in future versions.
Before evaluating our digital tool in an elderly population, we performed a short usability assessment in a healthy population (without cognitive decline) among university users. Completion time should be short, and the mean time observed in our study (142 seconds) is a good result. Moreover, usability reached a global score of 89.24%, surpassing the initial objective of 85.5% and approaching 90.9% ("best imaginable").
Participants globally perceived the test as easy to use, in line with the F-SUS scores (questions 3, 5, 7, and 8). This is a positive evaluation, because participants did not know about cognitive tests and thus discovered them for the first time. These results are satisfying preliminary data, but there is a considerable limitation regarding the population. Indeed, our participants were young (41.88 years old on average), healthy, and used to touch screens. This mean age is well below the age of AD patients (> 60 years)
(National Institute on Aging, n.d.), which can explain the good results observed. They may not be transposable to an elderly population with cognitive decline and little tablet experience. That said, our participants were infrequent tablet users, with almost half of them using one only once a year (Table 3). Some questions were perceived negatively as "too simple" or "too slow", likely due to the young age of our participants. AlzVR should be tested in an older patient population to assess usability and the accuracy of discrimination between healthy and demented subjects.
Although the participants were healthy, we noted errors in the clock recognition task, probably due to the shape of the hands, as signaled in the free comments (Figure 9). However, recent reports show that students have more and more difficulty reading traditional clocks (BBC News, 2018), and our two failed users were 24 years old. These difficulties also appear in the task completion times (Table 4): the clock task shows the most significant difference between the minimal and maximal completion times. Season errors may be explained by the recent season change (summer/autumn) shortly before the beginning of the study (September 27); here too, we found an extensive range of completion times.
When extracting the results from the tablet, we found no errors in the CSV files. Data were easily exploitable and properly anonymized.
Table 5: F-SUS questionnaire results.

| Question | Result, m (sd) |
|---|---|
| 1. I think that I would like to use this system frequently. | 2.70 (1.46) |
| 2. I found the system unnecessarily complex. | 1.52 (0.71) |
| 3. I thought the system was easy to use. | 4.91 (0.28) |
| 4. I think that I would need the support of a technical person to be able to use this system. | 1 (0) |
| 5. I found the various functions in this system were well integrated. | 4.65 (0.87) |
| 6. I thought there was too much inconsistency in this system. | 1.43 (0.97) |
| 7. I would imagine that most people would learn to use this system very quickly. | 4.96 (0.20) |
| 8. I found the system very cumbersome to use. | 1.65 (1.34) |
| 9. I felt very confident using the system. | 4.83 (0.38) |
| 10. I needed to learn a lot of things before I could get going with this system. | 1.74 (1.42) |

m = mean; sd = standard deviation
6 CONCLUSIONS
We have developed a new digital cognitive screening tool with good preliminary feedback from a young and healthy population. The application could also be ported to smartphones to enhance its diffusion and utilization. This preliminary study belongs to the global COGNUM-AlzVR study, which aims to evaluate the efficiency and relevance of two digital programs on tablets for cognitive assessment in AD patients. The Committee for the Protection of Persons of Île-de-France approved the multicentric project in 2022, and the study began in April 2023 (NCT06032611).
ACKNOWLEDGEMENTS
The authors thank all participants and the Génopole
(Evry-Courcouronnes, France) for their partnership
with IBISC Laboratory.
CONFLICTS OF INTEREST
The authors declare no conflict of interest and have
no known competing financial or personal
relationships that could be viewed as influencing the
work reported in this paper. This work did not receive
any grant from funding agencies in the public,
commercial, or not-for-profit sectors.
REFERENCES
Amanzadeh, M., Hamedan, M., Mahdavi, A., &
Mohammadnia, A. (2022). Digital Cognitive Tests for
Dementia Screening: A Systematic Review.
https://doi.org/10.21203/rs.3.rs-2275675/v1
Basak, S., Agrawal, H., Jena, S., Gite, S., Bachute, M.,
Pradhan, B., & Assiri, M. (2023). Challenges and
Limitations in Speech Recognition Technology: A
Critical Review of Speech Signal Processing
Algorithms, Tools and Systems. CMES-Computer
Modeling in Engineering & Sciences, 135(2). https://
cdn.techscience.cn/ueditor/files/cmes/135-2/TSP_CM
ES_21755/TSP_CMES_21755.pdf
BBC News. (2018, April 24). Young can 'only read digital
clocks'. BBC News. https://www.bbc.com/news/e
ducation-43882847
Belmin, J., Pariel-Madjlessi, S., Surun, P., Bentot, C.,
Feteanu, D., Lefebvre des Noettes, V., Onen, F.,
Drunat, O., Trivalle, C., Chassagne, P., & Golmard, J.-
L. (2007). The cognitive disorders examination
(Codex) is a reliable 3-minute test for detection of
dementia in the elderly (validation study on 323
subjects). Presse Medicale (Paris, France: 1983), 36(9
Pt 1), 1183–1190. https://doi.org/10.1016/j.lpm.2007.03.
016
Bergeron, D., Flynn, K., Verret, L., Poulin, S., Bouchard, R. W., Bocti, C., Fülöp, T., Lacombe, G., Gauthier, S.,
Nasreddine, Z., & Laforce, R. J. (2017). Multicenter
Validation of an MMSE-MoCA Conversion Table.
Journal of the American Geriatrics Society, 65(5),
1067–1072. https://doi.org/10.1111/jgs.14779
Borson, S., Scanlan, J. M., Chen, P., & Ganguli, M. (2003).
The Mini-Cog as a screen for dementia: Validation in a
population-based sample. Journal of the American
Geriatrics Society, 51(10), 1451–1454. https://doi.org/
10.1046/j.1532-5415.2003.51465.x
Brooke, J. (1996). SUS: A 'Quick and Dirty' Usability
Scale. In Usability Evaluation In Industry (pp. 189–
194). CRC Press.
Chan, J. Y. C., Bat, B. K. K., Wong, A., Chan, T. K., Huo,
Z., Yip, B. H. K., Kowk, T. C. Y., & Tsoi, K. K. F.
(2022). Evaluation of Digital Drawing Tests and Paper-
and-Pencil Drawing Tests for the Screening of Mild
Cognitive Impairment and Dementia: A Systematic
Review and Meta-analysis of Diagnostic Studies.
Neuropsychology Review, 32(3), 566–576. https://doi.
org/10.1007/s11065-021-09523-2
Chan, J. Y. C., Yau, S. T. Y., Kwok, T. C. Y., & Tsoi, K.
K. F. (2021). Diagnostic performance of digital
cognitive tests for the identification of MCI and
dementia: A systematic review. Ageing Research
Reviews, 72, 101506. https://doi.org/10.1016/j.arr.
2021.101506
Chua, S. I. L., Tan, N. C., Wong, W. T., Allen Jr, J. C.,
Quah, J. H. M., Malhotra, R., & Østbye, T. (2019).
Virtual Reality for Screening of Cognitive Function in
Older Persons: Comparative Study. Journal of Medical
Internet Research, 21(8), e14821. https://doi.org/10.
2196/14821
Ciesielska, N., Sokołowski, R., Mazur, E., Podhorecka, M.,
Polak-Szabela, A., & Kędziora-Kornatowska, K.
(2016). Is the Montreal Cognitive Assessment (MoCA)
test better suited than the Mini-Mental State
Examination (MMSE) in mild Cognitive Impairment
(MCI) detection among people aged over 60? Meta-
analysis. Psychiatria Polska, 50(5), 1039–1052.
https://doi.org/10.12740/PP/45368
Clay, F., Howett, D., FitzGerald, J., Fletcher, P., Chan, D.,
& Price, A. (2020). Use of Immersive Virtual Reality in
the Assessment and Treatment of Alzheimer's Disease:
A Systematic Review. Journal of Alzheimer's Disease:
JAD, 75(1), 23–43. https://doi.org/10.3233/JAD-191218
De Roeck, E. E., De Deyn, P. P., Dierckx, E., &
Engelborghs, S. (2019). Brief cognitive screening
instruments for early detection of Alzheimer's disease:
A systematic review. Alzheimer's Research & Therapy,
11, 21. https://doi.org/10.1186/s13195-019-0474-3
Dubois, B., Villain, N., Frisoni, G. B., Rabinovici, G. D.,
Sabbagh, M., Cappa, S., Bejanin, A., Bombois, S.,
Epelbaum, S., Teichmann, M., Habert, M.-O.,
Nordberg, A., Blennow, K., Galasko, D., Stern, Y.,
Rowe, C. C., Salloway, S., Schneider, L. S., Cummings,
J. L., & Feldman, H. H. (2021). Clinical diagnosis of
Alzheimer's disease: Recommendations of the
International Working Group. The Lancet. Neurology,
20(6), 484–496. https://doi.org/10.1016/S1474-4422
(21)00066-1
Folstein, M. F., Folstein, S. E., & McHugh, P. R. (1975).
'Mini-mental state'. A practical method for grading the
cognitive state of patients for the clinician. Journal of
Psychiatric Research, 12(3), 189–198.
Gronier, G., & Baudet, A. (2021). Psychometric Evaluation
of the F-SUS: Creation and Validation of the French
Version of the System Usability Scale. International
Journal of Human–Computer Interaction, 37(16), 1571–
1582. https://doi.org/10.1080/10447318.2021.1898828
International, A. D., Guerchet, M., Prince, M., & Prina, M.
(2020). Numbers of people with dementia worldwide: An
update to the estimates in the World Alzheimer Report
2015. https://www.alzint.org/resource/numbers-of-people
-with-dementia-worldwide/
International Organization for Standardization. (2018). ISO 9241-11:2018. https://www.iso.org/standard/63500.html
Janssen, J., Koekkoek, P. S., Moll van Charante, E. P., Jaap
Kappelle, L., Biessels, G. J., & Rutten, G. E. H. M.
(2017). How to choose the most appropriate cognitive
test to evaluate cognitive complaints in primary care.
BMC Family Practice, 18, 101. https://doi.org/
10.1186/s12875-017-0675-4
Koo, B. M., & Vizer, L. M. (2019). Mobile Technology for
Cognitive Assessment of Older Adults: A Scoping
Review. Innovation in Aging, 3(1), igy038. https://
doi.org/10.1093/geroni/igy038
Kortum, P., & Sorber, M. (2015). Measuring the Usability
of Mobile Applications for Phones and Tablets.
International Journal of Human–Computer
Interaction, 31(8), 518–529. https://doi.org/10.10
80/10447318.2015.1064658
Liu, X., Chen, X., Zhou, X., Shang, Y., Xu, F., Zhang, J.,
He, J., Zhao, F., Du, B., Wang, X., Zhang, Q., Zhang,
W., Bergeron, M. F., Ding, T., Ashford, J. W., &
Zhong, L. (2021). Validity of the MemTrax Memory
Test Compared to the Montreal Cognitive Assessment
in the Detection of Mild Cognitive Impairment and
Dementia due to Alzheimer's Disease in a Chinese
Cohort. Journal of Alzheimer's Disease: JAD, 80(3),
1257–1267. https://doi.org/10.3233/JAD-200936
Maronnat, F., Davesne, F., & Otmane, S. (2022). Cognitive
assessment in virtual environments: How to choose the
Natural User Interfaces? Laval Virtual VRIC
ConVRgence Proceedings 2022, 1(1). https://doi.org/
10.20870/IJVR.2022.1.1.5503
Maronnat, F., Seguin, M., & Djemal, K. (2020). Cognitive
tasks modelization and description in VR environment
for Alzheimer's disease state identification. 2020 Tenth
International Conference on Image Processing Theory,
Tools and Applications (IPTA), 1–7. https://doi.
org/10.1109/IPTA50016.2020.9286627
Müller, S., Herde, L., Preische, O., Zeller, A., Heymann, P.,
Robens, S., Elbing, U., & Laske, C. (2019). Diagnostic
value of digital clock drawing test in comparison with
CERAD neuropsychological battery total score for
discrimination of patients in the early course of
Alzheimer's disease from healthy individuals. Scientific
Reports, 9(1), 3543. https://doi.org/10.1038/s41598-
019-40010-0
Nasreddine, Z. S., Phillips, N. A., Bédirian, V.,
Charbonneau, S., Whitehead, V., Collin, I., Cummings,
J. L., & Chertkow, H. (2005). The Montreal Cognitive
Assessment, MoCA: A brief screening tool for mild
cognitive impairment. Journal of the American
Geriatrics Society, 53(4), 695–699. https://doi.org/1
0.1111/j.1532-5415.2005.53221.x
National Institute on Aging. (n.d.). What Are the Signs of
Alzheimer's Disease? Retrieved March 6, 2023, from
https://www.nia.nih.gov/health/what-are-signs-
alzheimers-disease
Onofri, E., Mercuri, M., Archer, T., Rapp-Ricciardi, M., &
Ricci, S. (2016). Legal medical consideration of
Alzheimer's disease patients' dysgraphia and cognitive
dysfunction: A 6 month follow up. Clinical
Interventions in Aging, 11, 279–284. https://doi.org/10
.2147/CIA.S94750
Park, I., & Lee, U. (2021). Automatic, Qualitative Scoring
of the Clock Drawing Test (CDT) Based on U-Net,
CNN and Mobile Sensor Data. Sensors (Basel,
Switzerland), 21(15), 5239. https://doi.org/10.3390/s
21155239
Pinto, T. C. C., Machado, L., Bulgacov, T. M., Rodrigues-
Júnior, A. L., Costa, M. L. G., Ximenes, R. C. C., &
Sougey, E. B. (2019). Is the Montreal Cognitive
Assessment (MoCA) screening superior to the Mini-
Mental State Examination (MMSE) in the detection of
mild cognitive impairment (MCI) and Alzheimer's
Disease (AD) in the elderly? International
Psychogeriatrics, 31(4), 491–504. https://doi.org/10.
1017/S1041610218001370
Rai, L., Boyle, R., Brosnan, L., Rice, H., Farina, F.,
Tarnanas, I., & Whelan, R. (2020). Digital Biomarkers
Based Individualized Prognosis for People at Risk of
Dementia: The AltoidaML Multi-site External
Validation Study. Advances in Experimental Medicine
and Biology, 1194, 157–171. https://doi.org/10.1007/978-3-030-32622-7_14
Sunderland, T., Hill, J. L., Mellow, A. M., Lawlor, B. A.,
Gundersheimer, J., Newhouse, P. A., & Grafman, J. H.
(1989). Clock drawing in Alzheimer's disease. A novel
measure of dementia severity. Journal of the American
Geriatrics Society, 37(8), 725–729. https://doi.org/10
.1111/j.1532-5415.1989.tb02233.x
Thabtah, F., Peebles, D., Retzler, J., & Hathurusingha, C.
(2020). Dementia medical screening using mobile
applications: A systematic review with a new mapping
model. Journal of Biomedical Informatics, 111.
https://doi.org/10.1016/j.jbi.2020.103573
Tsoy, E., Zygouris, S., & Possin, K. L. (2021). Current State
of Self-Administered Brief Computerized Cognitive
Assessments for Detection of Cognitive Disorders in
Older Adults: A Systematic Review. The Journal of
Prevention of Alzheimer's Disease, 8(3), 267–276.
https://doi.org/10.14283/jpad.2021.11
Valentin, A., & Lemarchand, C. (2010). La construction des
échantillons dans la conception ergonomique de
produits logiciels pour le grand public. Quel quantitatif
pour les études qualitatives ? Le travail humain, 73(3),
261–290. https://doi.org/10.3917/th.733.0261
Wilson, S. A., Byrne, P., Rodgers, S. E., & Maden, M.
(2022). A Systematic Review of Smartphone and
Tablet Use by Older Adults with and Without Cognitive
Impairment. Innovation in Aging, 6(2), igac002.
https://doi.org/10.1093/geroni/igac002
Wu, Y.-H., Vidal, J.-S., De Rotrou, J., Sikkes, S. A. M.,
Rigaud, A.-S., & Plichart, M. (2017). Can a tablet-
based cancellation test identify cognitive impairment in
older adults? PLoS ONE, 12(7), e0181809. https://doi.org/10.1371/journal.pone.0181809