Focus groups are best suited for sensitive topics such as
ethical points of view (Hermanowicz, 2002, p. 480).
Respondents (aged 18-67 years, most between 36 and
55) were recruited at four nursing homes in the
Netherlands and at two extramural organisations
providing home care. Each focus group comprised six
to ten caregivers and was held in their own working
environment (Reed and Payton, 1997). After a
general introduction, participants
were shown six brief video clips portraying
prototypes of assistive (e.g., the Riba II Care
Support Robot for lifting patients), monitoring
(e.g., Mobiserv), or companion care robots (e.g.,
the AIST Paro robot baby seal), and were encouraged
to reflect on their possible objections to, or
perceived benefits of, each type of care technology.
Questions first addressed general thoughts on
the need for care technology in the near future,
gradually narrowing to more specific topics of interest.
We considered it important that respondents could
express their opinions and possible concerns without
too much interference on our part. Participants
were reassured that their opinions would remain
confidential and that their answers would be
processed anonymously.
Participants provided consent for video recordings
for coding purposes. Three coders analysed
the videotapes independently and, after training,
coded each opinion according to the moral
categories of autonomy, beneficence, non-maleficence,
and justice (Beauchamp, 2009), or as an opinion
expressing a possible (non-)utility of care robots,
each in relation to the three types of care robots. All
coders used the software ATLAS.ti and coded directly
from the video footage. A Cohen's kappa of 0.71
indicated sufficient inter-rater reliability among the
decisions of the coders.
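For illustration, agreement between coders of this kind can be computed as in the following minimal Python sketch. The labels below are hypothetical stand-ins for the coded categories, not the study's actual data; with three coders, one would typically average the pairwise kappas (or use Fleiss' kappa).

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical labels: the category each of two coders assigned to the
# same five utterances (placeholders; not the study's actual data).
coder_a = ["maleficence", "beneficence", "utility", "maleficence", "autonomy"]
coder_b = ["maleficence", "beneficence", "utility", "beneficence", "autonomy"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")
```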
Results showed that moral concerns regarding
justice and autonomy were rarely mentioned. Most
moral concerns among professional caregivers were
raised in terms of maleficence (i.e., the risk of being
harmful), most often for assistive healthcare robots,
followed by monitoring robots, and least often for
companion robots. Of 93 utterances coded as moral,
40 related to maleficence, 25 of which concerned
assistive robots (pairwise chi-square tests showed
significant differences: 10.80 < χ² < 17.25,
all p < .05). Caregivers reported fears that the
assistive technology might fail, drop a patient,
squeeze too hard, or otherwise cause physical
harm. Relatedly,
they also mentioned that their patients, especially
those suffering from dementia, might be afraid of
healthcare robots. Comments like the following
express such moral concerns of maleficence: “What
if the robot scares my patients and is not capable of
reassuring specific needs? I would never leave a
patient alone with a robot.”; “I mean, you never
know, certain patients tend to react unpredictably.
How can a robot understand what they want or
need?”; and “If there is no human around, who can
explain what is going on when my patients are
delusional?” In summary, when talking about
assistive healthcare robots, moral concerns of
potential harmfulness were expressed most often,
while beneficence and utility were perceived to a
much lesser extent.
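For illustration, pairwise chi-square comparisons of this kind can be computed as in the following minimal Python sketch. Only the overall totals (93 moral utterances, 40 coded as maleficence) and the assistive count (25) are reported above; the split of the remaining cells across monitoring and companion robots is a hypothetical placeholder.

```python
from scipy.stats import chi2_contingency

# Counts of maleficence vs. other moral codes per robot type. The assistive
# maleficence count (25) and the overall totals come from the text; all
# other cell values are HYPOTHETICAL placeholders for illustration only.
codes = {
    "assistive":  [25, 10],  # [maleficence, other moral codes]
    "monitoring": [10, 20],  # hypothetical split
    "companion":  [5, 23],   # hypothetical split
}

for a, b in [("assistive", "monitoring"),
             ("assistive", "companion"),
             ("monitoring", "companion")]:
    # 2x2 contingency table: robot type x (maleficence vs. other codes)
    chi2, p, dof, _ = chi2_contingency([codes[a], codes[b]])
    print(f"{a} vs {b}: chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```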
In response to monitoring robots, maleficence
concerns were mostly expressed in terms of
decreased human contact between caregivers and
care receivers, which is generally considered
undesirable in healthcare relationships. In contrast,
moral concerns about privacy were hardly considered
important by the participating caregivers. While
most caregivers acknowledged that monitoring
technology could decrease their workload and
enable the elderly to live independently for longer,
most concerns were about loneliness and diminished
human contact, which were considered potentially
harmful to the wellbeing of the elderly. Some quotes
expressing this concern
are: “Often, I am the only one they see throughout
the whole day.”; “She (the old woman) is dearly
waiting for me to show up, so she could have a
conversation.”; “My patient has no relatives and
cannot go outside on his own anymore. If a robot
would replace my task, he would not see anyone
throughout the day.” In sum, when talking about
monitoring healthcare robots, moral concerns of
diminished human contact and loneliness were
expressed, yet monitoring robots were perceived as
having the highest utility of the three robot types.
The caregivers perceived the highest beneficence
and the lowest maleficence concerns for companion
robots for the elderly. Caregivers expressed feelings
or thoughts about the possible reassuring or
soothing effect of a companion robot on a patient.
Most of them were already acquainted with the
Paro seal, which is known for having positive
effects, especially on elderly people with dementia,
and is in use in a number of nursing homes in the
Netherlands. Accordingly, most quotes express a
positive attitude towards the companion robot Paro:
“Look how happy she is, I could look at it all day. If
something makes you that happy it doesn’t matter
anymore that it is not alive.”; “Oh, they are very
cute. I want one of my own, when my time comes”;
“I don’t see any harm in it. I mean, they (her