Progressive Co-adaptation in Human-Machine Interaction
Paolo Gallina¹, Nicola Bellotto² and Massimiliano Di Luca³
¹Department of Architecture and Engineering, University of Trieste, Via A. Valerio, 10, 34127, Trieste, Italy
²Lincoln Centre for Autonomous Systems, School of Computer Science, University of Lincoln, Lincoln, U.K.
³Centre for Computational Neuroscience and Cognitive Robotics & School of Psychology, University of Birmingham, Birmingham, U.K.
Keywords: Human-in-the-Loop, Usability, Teleoperation, Active Vision, Assistive Technology, Cyber-physical Systems.
Abstract: In this paper we discuss the concept of co-adaptation between a human operator and a machine interface, and we summarize its application with emphasis on two different domains, teleoperation and assistive technology. The analysis of the literature reveals that only in a few cases has the possibility of a temporal evolution of the co-adaptation parameters been considered. In particular, the role of time-related indexes that capture changes in the motor and cognitive abilities of the human operator has been overlooked. We argue that, for a more effective long-term co-adaptation process, the interface should be able to predict and adjust its parameters according to the evolution of human skills and performance. We thus propose a novel approach termed progressive co-adaptation, whereby human performance is continuously monitored and the system makes inferences about changes in the users' cognitive and motor skills. We illustrate the features of progressive co-adaptation in two possible applications, robotic telemanipulation and active vision for the visually impaired.
1 INTRODUCTION
In the field of Human-Computer Interaction (HCI)
and, more generally, in the field of Human-Machine
Interaction (HMI), the term co-adaptation refers to
the process of adjustment of both the machine and
the human operator during prolonged interaction. In
other words, “both the human user and the machine
should be able to adapt to the other through ex-
periencing the interaction occurring between them" (Sawaragi, 2005). Designers of HMI applications can adopt several strategies for implementing adaptive changes in the HMI system with goals akin to co-adaptation: these approaches are referred to as human-centered and goal-oriented.
The aim of human-centered (or user-centered) co-
adaptation is to create a pleasant interface (Dixon,
2012) that maximizes usability. The design needs are
shaped around the user's skills and expectations. Ac-
cording to ISO 9241-210:2010 (Jokela et al., 2003),
human-centered design is an “approach to systems
design and development that aims to make interactive
systems more usable by focusing on the use of the
system and applying human factors/ergonomics and
usability knowledge and techniques” (see Fig. 1).
Figure 1: Performance and usability goals according to different HMI approaches. The separation between progressive co-adaptation and human-centered design should happen once performance is approaching a plateau.

Goal-oriented co-adaptation focuses instead on
designing a user interface that can be exploited at
its maximum potential. This approach makes the as-
sumption that the user is very skilled in performing
the task. The drawback is that only well trained users
can benefit from the adaptability of the system. A
typical example of this approach is the use of joy-
sticks to operate an excavator. Joysticks are widely
used as human-machine interfaces in many applica-
tions of excavators, cranes, forklifts, electric-powered
wheelchairs, and telemanipulated robots due to their
reliability, ergonomics, and low cost. For some of these
applications (e.g. excavators) the operator is required to undergo extensive training, as the mapping between the master device (joysticks) and the slave end-effector (shovel) can be counterintuitive. Another example of the goal-oriented approach in a different domain
is The vOICe, a mobile assistive device for the visu-
ally impaired that converts visual images to auditory
signals (Ward and Meijer, 2010). In this case, given
the richness of the information, the system requires
long periods of training and user adaptation to the in-
herent cognitive load.
Performance and usability during interaction evolve in different ways depending on the selected approach, with human-centered design prioritizing usability gains and goal-oriented design giving more importance to performance increments (Fig. 1). The co-adaptation design approach aims to combine the advantages of human-centered design with the final performance achievable using goal-oriented design by creating an interface that can adapt to the skills of the user as assessed over time. It is adopted in all those applications where long-term training is required to make the interaction efficient and reliable.
Examples of co-adaptation design can be found in
speech recognition software, gesture classifiers, and
multimodal interfaces coupling human training and
learning algorithms (Christoudias et al., 2006). Some
form of co-adaptation is embedded in collaborative
control strategies for robotic wheelchairs to determine
the user’s intention (e.g. desired destination) and ad-
just the control signals accordingly, adapting the level
of assistance based on the affordances of the sur-
roundings (Carlson and Demiris, 2012).
There are cases where co-adaptation is a require-
ment of the system. An example comes from the
field of Brain-Computer Interfaces (BCI) applied to
teleoperation applications. Electroencephalography
(EEG) signals coming from the brain are collected,
filtered, and coded in order to control a remote robot
(Bi et al., 2013). For these types of applications, co-adaptation is necessary because the system needs to learn how to associate desired commands with brain signal patterns while the participant is adapting to the novel task. The interface needs to be adaptive so as to account for changes in cortical plasticity, which modify brain connectivity leading to different neural responses. This can be obtained in several ways. Bryan et al., for example, combined reinforcement learning and Bayesian inference (Bryan et al., 2013). Gürel and Mehring, instead, implemented an unsupervised-learning decoder with a cost function derived from neuronal recordings that allows on-line adjustment of its parameters (Gürel and Mehring, 2012).
In all these cases the interface should be able to dynamically modify its architecture at each time point, so as to optimise both long-term usability and performance. However, existing co-adaptation approaches are usually biased towards goal-oriented or human-centered interface improvements, which unavoidably leads to plateaus similar to those typical of non-adaptive solutions.
The main contribution of this paper is the defi-
nition and formalization of a new paradigm in HMI,
named progressive co-adaptation. A progressive co-
adaptive interface is one that incorporates two func-
tions: self-adaptation of the interface to the actual
user skills, and simultaneous training of the user in or-
der to increase long-term performance. The concept
is discussed for two potential case scenarios, namely
a robotic telemanipulation task and an active vision
system for the visually impaired, which could both
benefit from this novel approach.
2 CO-ADAPTATION IN TASKS
INVOLVING HUMAN
MOTOR-LEARNING SKILLS
In master-slave teleoperation implementations, the
adaptation of the interface can take place for several
reasons and at multiple levels. The control needs to
account for various parameters, including time-delay
(to increase performance and keep the system stable)
(Chen et al., 2014), operator model uncertainties (re-
lated to human body impedance, not to cognitive or
skill aspects) (Chan et al., 2014), and environment
force uncertainties (Passenberg et al., 2010). Potential
control problems are usually avoided by using classic
control approaches (Hokayem and Spong, 2006) or
with more advanced control techniques such as adap-
tive admittance control (Love and Book, 2004) or
impact stabilization controllers (McAree and Daniel,
2000). In all these applications, the interface adapts
immediately to the human and environment condi-
tions to optimize the quality of the performed action
in terms of transparency, stability, and reliability. Hu-
man adaptation, of course, still takes place, but it is not taken into account as a direct key parameter to promote a modification of the interface.
If we indicate with χ the set of parameters that
characterize the interface and that can be adjusted in
order to adapt the interface to the human, then we can
write the following:
χ = f (Σ) (1)
where Σ represents the group of measurable parame-
ters related to the aforementioned time-delay during
ProgressiveCo-adaptationinHuman-MachineInteraction
363
telemanipulation, operator model uncertainties, envi-
ronment force uncertainties, stability parameters, etc.
To elucidate this equation, we consider the mouse
of a computer as an example. Such a control interface maps the two-dimensional motion of the device on a surface into commands for a pointer on a display. For this interface,
χ represents the speed parameter, the ratio between
displacement of the cursor on the screen and displace-
ment of the mouse on the surface.
Interface adaptation can take place at different
levels and it can be prompted by parameters related
to the operator skills. In master-slave teleopera-
tion scenarios, the term Human Adaptive Mechatron-
ics (HAM) indicates those interfaces that “are aimed
to assisting the human according to his or her skill
level by changing their own functions" (Harashima
and Suzuki, 2010). For example, Furuta et al. pro-
posed a haptic interface for operating a pendulum-
like juggling slave mechanism (Furuta et al., 2011;
Furuta, 2003) where a dedicated module provides a
correction force controller to assist the human oper-
ator. Igarashi et al. proposed a graphical user inter-
face, provided with effective alert functions, to reduce
the operator's misrecognition in a teleoperation task of a
quadruped robot (Igarashi et al., 2005). Alert infor-
mation is modulated depending on the human sensi-
tivity to the features of the graphical user interface.
In this kind of application it is necessary to perform a direct measurement of the human's skills (Suzuki et al., 2013) in terms of social abilities, planning, cognitive functions, dexterity, sensory-motor performance, or their combination. For this, models of human behavior are required (Cui and Hua, 2013). Mavridis et al., for example, introduced a metric to evaluate the operator's skills involved in the teleoperation of a robot controlled by two joysticks. The metric considers kinematic parameters and is correlated with facial expression analysis (Mavridis et al., 2015b; Mavridis et al., 2015a). Suzuki et al. monitored the learning process involved in a bimanual teleoperation task (the tracking of two markers on a screen) by evaluating the tracking errors (Suzuki et al., 2008).
Human skills can also be monitored by assistive tech-
nologies. In Hoey et al. (Hoey et al., 2010), for exam-
ple, a vision-based system for automated handwash-
ing assistance monitored the psychological state of a
person with dementia to adapt accordingly. An alter-
native, indirect estimate of activity can be obtained
by monitoring brain activity through EEG or Near-
infrared spectroscopy (Ishikuro et al., 2014).
To emphasize the key role of quantitative indexes
in the interface adaptation process related to human
skills, eq. 1 can be modified as follows:
χ = f (Σ, Φ) (2)
where Φ represents the set of measurable parame-
ters related to human skills. Note that in the human-
centered design approach, the interface adjusts its pa-
rameters on-the-fly, as expressed by eq. 2. On the
contrary, eq. 1 refers to the goal-oriented approach,
where the interface does not account for the user
skills. Back to the mouse example, Φ could be the
average of the inverse of the error between the desired
and the actual pointer position over a training session.
Following eq. 2, it is therefore necessary to introduce
a metric that explicitly measures human skills over
time.
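As a minimal illustration of such a metric, the sketch below (Python, with hypothetical function and variable names) computes Φ for the mouse example as the average inverse pointing error over one training session, as suggested by eq. 2.

```python
import numpy as np

def skill_metric(desired_positions, actual_positions, eps=1e-6):
    """Phi for the mouse example: average of the inverse pointing error
    over one training session (cf. eq. 2). `eps` avoids division by zero."""
    desired = np.asarray(desired_positions, dtype=float)
    actual = np.asarray(actual_positions, dtype=float)
    errors = np.linalg.norm(desired - actual, axis=1)   # per-sample pointing error
    return float(np.mean(1.0 / (errors + eps)))         # larger Phi = more skilled

# Hypothetical usage: desired vs. actual cursor positions logged during a session
phi = skill_metric([[100, 50], [200, 80]], [[103, 52], [198, 85]])
```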
3 PROGRESSIVE
CO-ADAPTATION
All previously described adaptive techniques rely on
the assumption that: a) there exists a relationship be-
tween the actual interface architecture (χ) and the hu-
man skills (Φ), and b) that the mapping between the
two aspects is static and known a priori. In other
words, adaptive techniques assume that there is a
proper interface for each skill level of the user. How-
ever, as the inventor of the mouse Douglas Engelbart
wisely stated: “if ease of use was the only valid cri-
terion, people would stick to tricycles and never try
bicycles”. Indeed other factors, like the maximization
of the overall performance, should be considered.
To achieve this, the relationship between the in-
terface parameters χ and the human skills Φ can be
defined as being dynamic rather than static:
χ = f (Σ, Φ, t)    (3)

where t is the time, which becomes a key parameter in the design of the interface. For example, the mouse speed (i.e. the χ parameter) could be linearly increased with time (χ = constant × t). This simple strategy is meant to train non-skilled users over a long time period. Note that, in the context of eq. 3, the adaptation rate of the interface dχ/dt has to be properly calibrated, since it affects the stability of the system (Merel et al., 2013) as well as the final achievable performance of the human-machine interaction.
Another aspect that needs to be considered in a dynamic mapping interface is when the user is not able to improve the task performance (e.g. because of the limitations of the current interface parameters). This can be monitored by observing the term dΦ/dt, which captures changes in the human skills Φ. The interface can change to accommodate a lack of performance improvement, for example by linking the interface adaptation rate to dΦ/dt. Therefore, the co-adaptation interface can be described as follows:

χ = f (Σ, Φ, dΦ/dt, t)    (4)
This equation is the general representation of the requirements of interfaces based on the notion of progressive co-adaptation, characterized by gradual but steady improvements. In practice, the separation between progressive co-adaptation and human-centered design should happen once the performance is approaching a plateau and dΦ/dt in eq. 4 decreases. In the following sections we will analyze two scenarios where the framework captured by this equation can be applied.
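To make the framework of eq. 4 concrete, the following sketch (a minimal Python illustration; all names, increments and thresholds are hypothetical) updates the interface parameters χ only when the skill trend dΦ/dt falls below a threshold ε, i.e. when the user's improvement is levelling off.

```python
from collections import deque

class ProgressiveCoAdapter:
    """Minimal sketch of eq. 4: chi = f(Sigma, Phi, dPhi/dt, t).
    Interface parameters are nudged when skill improvement stalls."""

    def __init__(self, chi, step, epsilon, window=5):
        self.chi = dict(chi)                 # adjustable interface parameters
        self.step = dict(step)               # increment applied to each parameter
        self.epsilon = epsilon               # threshold on dPhi/dt
        self.history = deque(maxlen=window)  # recent (t, Phi) samples

    def update(self, t, phi):
        """Record a new skill measurement Phi and adapt chi if needed."""
        self.history.append((t, phi))
        if len(self.history) < 2:
            return self.chi
        (t0, phi0), (t1, phi1) = self.history[0], self.history[-1]
        dphi_dt = (phi1 - phi0) / (t1 - t0)  # finite-difference estimate of dPhi/dt
        if dphi_dt < self.epsilon:           # skill has plateaued: adjust the interface
            for name in self.chi:
                self.chi[name] += self.step[name]
        return self.chi

# Hypothetical usage: one adjustable parameter (e.g. mouse speed)
adapter = ProgressiveCoAdapter(chi={"speed": 1.0}, step={"speed": 0.1}, epsilon=0.01)
```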
4 APPLICATION #1:
TELEMANIPULATION
An interface for teleoperation is made up of several
“layers”, each one with parameters that need to be
modified over time for the co-adaptation to take place,
even at the hardware level (Jin et al., 2014).
Operation performance depends not only on the
mapping between slave DoFs and operator’s DoFs
(DoFs of the hands and/or arms involved in the re-
mote operation), but also on several other geometric
and control parameters, such as control gain (Huys-
mans et al., 2006) and stiffness (Oliver et al., 2006).
To become operators of heavy equipment, trainees require extensive training because the mapping is counterintuitive and cognitively demanding. Training
simulators are often employed to reduce cost and
time requirements. We believe that the parameters
involved in teleoperation training can be modified
over time according to the progressive co-adaptation
paradigm in order to improve the effectiveness of the
training procedures.
This approach can be applied to the domain of multi-robot teleoperation and, in particular, to the case of three independent n-DoF serial robots teleoperated by a single operator. To address this problem we propose a control strategy based on a mixed scheme combining Direct Rate Control and Resolved Position Control.
In Direct Rate Control there is a
direct correspondence between each DoF of the mas-
ter and each manipulator joint velocity. In this way,
the master position is interpreted as a velocity com-
mand for the manipulator joint. Therefore the veloc-
ity can vary linearly with respect to the master po-
sition. Typically, this approach is used in excavators
and cranes because the joystick position directly com-
mands the hydraulic valve opening. In fact, there ex-
ists a linear relationship between the manipulator joint
speed and the valve opening. In the Resolved Position
Control, instead, the mapping is simply between each
master DoF and the spatial DoFs of the manipulator.
Figure 2: DoFs mapping between the 3-robot slave system and the two operator's hand poses.
The two approaches can be combined as shown in Fig. 2. A vision-based tracking device or a two-glove-based master measures the positions of the operator's fingers. The two thumb poses h_1 and h_2 are used to control the end-effectors of two robots, s_1 and s_2, according to a Resolved Position Control approach.

The third robot is controlled in a direct rate control mode by the other fingers. Let q̇_i be the command speed of each joint of the robot and l_i the distance of a fingertip from the origin of the frame h_1 (or h_2). The control rule of each robot's joint is then given by the set of relations:

q̇_i = k_i (l_i − l_{i0})    (5)

where k_i and l_{i0} are parameters to be defined which characterize the human-robot interface. Therefore, in this case, the vector of the interface parameters is χ = {k_1, ..., k_n, l_{10}, ..., l_{n0}}. The latter can be adjusted by adopting the proposed progressive co-adaptation approach.
For this application, we assume that the data trans-
fer time-delays between the master device and the
slave are negligible. Moreover, we assume that the
slave system operates in a structured environment.
Under these conditions, the parameter Σ in eq. 4 is
not relevant to the adjusting process and can be omit-
ted. Instead, we need to define a metric to measure
the operator's motor skills Φ over time. Let ē be the error defined as the sum of the distances of the three end-effector poses s_1, s_2 and s_3 from the target poses imposed by the specific task, s_1^task, s_2^task and s_3^task. The skills parameter Φ can then be defined as the average of the inverse error 1/ē during each training session.

One possible progressive co-adaptation function could be implemented by increasing the parameters k_1, ..., k_n by a constant value every time dΦ/dt drops below a given threshold ε. In this way, as the user's skills tend to reach a plateau over time (additional training will not produce effects on motor skills), the interface parameters are adjusted in order to increase the overall performance of the human-robot interaction.
The co-adaptation process ends when no further improvements in the operator's skill are registered. This strategy would guarantee a stepwise training phase.
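A minimal sketch of this strategy is shown below (Python, hypothetical names and values), implementing the direct-rate control rule of eq. 5 and bumping the gains k_i by a constant whenever the estimated dΦ/dt drops below ε.

```python
import numpy as np

def joint_speed_commands(l, l0, k):
    """Eq. 5: joint speed command for each joint, q_dot_i = k_i * (l_i - l_i0),
    from fingertip distances l_i measured relative to the rest distances l_i0."""
    return k * (np.asarray(l) - np.asarray(l0))

def adapt_gains(k, dphi_dt, epsilon=0.01, delta=0.05):
    """Increase all gains by a constant value when skill improvement stalls
    (dPhi/dt below the threshold epsilon)."""
    if dphi_dt < epsilon:
        return k + delta
    return k

# Hypothetical usage: three joints, rest distances l0, current fingertip distances l
k = np.array([1.0, 1.0, 1.0])
q_dot = joint_speed_commands(l=[0.12, 0.10, 0.15], l0=[0.10, 0.10, 0.10], k=k)
k = adapt_gains(k, dphi_dt=0.005)   # gains grow because improvement has stalled
```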
5 APPLICATION #2: ACTIVE
VISION
Another domain where co-adaptation can be applied
is with mobile cyber-physical systems, in particular
those addressing the fundamental problem of active
vision with human-in-the-loop. The preliminary work
in (Bellotto, 2013), for example, proposes a multi-
modal interface for handheld cameras devised to im-
prove the navigation experience of the visually im-
paired. Such an interface enables the user to point a smartphone's camera towards features of the environment, facilitating navigation tasks like obstacle detection or place localization. In this scenario,
co-adaptation between user and mobile device is par-
ticularly challenging due to the unpredictability of hu-
man motion and sensor uncertainty.
Active vision with human-in-the-loop is related to
the classic problem of active perception, where the
goal is to find models and control strategies that can
facilitate the execution of a task (Bajcsy, 1988). In
particular, the processes involved can be represented
by a closed-loop system, in which the feedback from
the mobile device and the vision algorithms are con-
verted into “control signals” for the user to execute.
The goal is to orient the camera towards particular ob-
jects or features in the environment, whose locations
are used as reference for the system. Fig. 3 illustrates
the concept. The input r is the reference provided,
for example, by some obstacle detection or a place lo-
calization algorithm, giving the direction of a visual
target the camera should be pointing at. The error e
between the reference and the actual orientation y of
the camera is used by the controller C to generate the
control signal u.
Typical active vision systems are concerned with
the optimal control of some electro-mechanical de-
vice that regulates the internal and/or external cam-
era’s parameters, like position, orientation, focal
length, etc. (Rivlin and Rotstein, 2000). Active vision
with human-in-the-loop, instead, tries to control the
output of the whole human-camera subsystem, illus-
trated in Fig. 3 by block H and P respectively. Within
C H P
ur e u
*
y
Figure 3: Feedback configuration with human-in-the-loop.
The error e is the difference between the input reference r
and system’s output y. The control signal u is generated by
the controller C to act on the human H. The latter moves
the smartphone camera P through another control signal u
.
this subsystem, the control signal u
corresponds to
the torque applied by the human to the handheld cam-
era to change its direction and orientation.
In (Bellotto, 2013), a possible multimodal inter-
face was proposed. The system’s goal is to convey
information that needs to be transmitted from the con-
trol algorithm C to the user, i.e. to define a “signal”
u that, in most cases, can be interpreted
correctly by the person within a reasonable time. An
important aspect of this multimodal interface was the
combination of vibrations, 3D sounds and vocal mes-
sages (on bone-conduction headphones) to instruct
the user while pointing the smartphone’s camera to-
wards a visual target. The most efficient combina-
tion of the three modalities, as well as their individual
tuning, can be achieved only by taking into account
the long-term co-adaptation of the human-smartphone
system.
The problem can be framed in the progressive co-
adaptation paradigm of eq. 4 by first defining the pa-
rameters of the smartphone's interface χ. These include the position p_s of the 3D sound source corresponding to the visual target, as well as the frequencies {f_v, f_s, f_m} and the amplitudes {a_v, a_s, a_m} of the vibrations, 3D sounds and vocal messages respectively. For example, f_m would be the average number of vocal messages generated by the system within a fixed time interval, while a_m would be the volume they are played at. Together, these parameters define the whole interface χ = {p_s, f_v, f_s, f_m, a_v, a_s, a_m}.
Leaving aside the parameters Σ related to time-delays etc., the next important element is the user "pointing" skill Φ. One way to measure this is to compute the average error ē of the control loop in Fig. 3 over a fixed time interval, then take its inverse (i.e. the smaller the error, the more skilled the user is).

As in the previous application, the interface parameters could be adjusted by increasing the value of one or more of them whenever dΦ/dt goes below a given threshold ε. For example, if during training the user does not improve at pointing the camera in the correct direction (i.e. dΦ/dt < ε), the frequency f_m of the vocal messages could be increased to better assist him/her in the
task. Note that in general there is not a linear relation
between interface and skills parameters, and the map
from one to the other can be much more complex than
the examples provided here.
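As a sketch of how this adjustment might look (Python; the parameter names follow the χ defined above, while thresholds and increments are illustrative assumptions), the multimodal interface could be updated as follows:

```python
def adapt_multimodal_interface(chi, dphi_dt, epsilon=0.01):
    """If the pointing skill has stopped improving (dPhi/dt < epsilon),
    increase the vocal message frequency f_m to give the user more guidance."""
    if dphi_dt < epsilon:
        chi["f_m"] += 0.5   # e.g. half a message more per fixed time interval
    return chi

# Hypothetical interface parameters chi = {p_s, f_v, f_s, f_m, a_v, a_s, a_m}
chi = {"p_s": (0.0, 0.0, 1.0), "f_v": 2.0, "f_s": 1.0, "f_m": 1.0,
       "a_v": 0.5, "a_s": 0.5, "a_m": 0.7}
chi = adapt_multimodal_interface(chi, dphi_dt=0.002)
```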
6 CONCLUSIONS
In this paper we proposed a general framework
for the description of interface design for human-
machine interaction tasks involving motor-learning
skills. In particular, two well-established approaches,
the goal-oriented approach and human-centered ap-
proach, have been reviewed. Their advantages and
drawbacks have been analyzed in terms of usabil-
ity and performance, considering also their temporal
evolution.
During the training phase of every long-term
human-machine interaction, a process of neuroplas-
ticity occurs: the user adapts to the interface and
his/her motor skills improve. If, at the same time,
the interface adjusts its characteristic parameters, the
whole process can be referred to as a process of co-
adaptation. The analysis of time-adjusting interfaces
coming from the literature reveals that the temporal
evolution of the parameters related to co-adaptation
(and in particular to the user’s performance) has not
been fully exploited. In particular, the role of time-
related indexes that capture changes in motor and cog-
nitive abilities of the user has been overlooked.
A novel approach named progressive co-
adaptation is proposed to fill the gap. In this
framework, human performance is continuously
monitored and the system makes inferences about
changes in the users’ cognitive and motor skills. As
argued in the paper, progressive co-adaptation at-
tempts to combine the advantages derived from both
the goal-oriented and human-centered approaches,
while it mitigates their drawbacks. Indeed, for a
more effective long-term co-adaptation process, the
interface should be able to predict and adjust its
parameters according to the evolution of human skills
and performance (and not only to their current values).
To validate the proposed approach we plan to im-
plement it for two practical applications of teleoper-
ation and assistive technology, namely robotic tele-
manipulation and active vision for the visually im-
paired. For both cases, initial challenges have been
described and potential solutions based on progres-
sive co-adaptation have been discussed. We are rea-
sonably confident that future research in this direc-
tion will highlight the advantages of the proposed ap-
proach.
ACKNOWLEDGEMENTS
We thank the EPSRC Network on Visual Image Inter-
pretation in Humans and Machines for fostering this
collaboration.
REFERENCES
Bajcsy, R. (1988). Active perception. Proc. of the IEEE,
76(8):966–1005.
Bellotto, N. (2013). A multimodal smartphone interface for
active perception by visually impaired. In IEEE SMC
Int. Workshop on Human-Machine Systems, Cyborgs
and Enhancing Devices (HUMASCEND).
Bi, L., Fan, X.-A., and Liu, Y. (2013). EEG-based brain-
controlled mobile robots: A survey. IEEE Transac-
tions on Human-Machine Systems, 43(2):161–176.
Bryan, M., Martin, S., Cheung, W., and Rao, R. (2013).
Probabilistic co-adaptive brain-computer interfacing.
Journal of Neural Engineering, 10(6).
Carlson, T. and Demiris, Y. (2012). Collaborative control
for a robotic wheelchair: Evaluation of performance,
attention, and workload. IEEE Trans. on Systems,
Man, and Cybernetics – Part B, 42(3):876–888.
Chan, L., Naghdy, F., and Stirling, D. (2014). Applica-
tion of adaptive controllers in teleoperation systems:
A survey. IEEE Transactions on Human-Machine Sys-
tems, 44(3):337–352.
Chen, Z., Liang, B., Zhang, T., Zhang, B., and Song, H.
(2014). An adaptive force reflection scheme for bilat-
eral teleoperation. Robotica.
Christoudias, C., Saenko, K., Morency, L.-P., and Darrell,
T. (2006). Co-adaptation of audio-visual speech and
gesture classifiers. In ICMI’06: 8th International
Conference on Multimodal Interfaces, pages 84–91.
Cui, Y. and Hua, J. (2013). Human behavior characteristics
analysis in teleoperation system. Applied Mechanics
and Materials, 373-375:163–166.
Dixon, J. (2012). Human Factors in Reliable Design, pages
137–155. John Wiley & Sons, Inc.
Furuta, K. (2003). Control of pendulum: From super
mechano-system to human adaptive mechatronics. In
Proceedings of the IEEE Conference on Decision and
Control, volume 2, pages 1498–1507.
Furuta, K., Kado, Y., Shiratori, S., and Suzuki, S. (2011).
Assisting control for pendulum-like juggling in hu-
man adaptive mechatronics. Proceedings of the In-
stitution of Mechanical Engineers. Part I: Journal of
Systems and Control Engineering, 225(6):709–720.
Gürel, T. and Mehring, C. (2012). Unsupervised adapta-
tion of brain-machine interface decoders. Frontiers in
Neuroscience, 6(16).
Harashima, F. and Suzuki, S. (2010). State-of-the-art in-
telligent mechatronics in human-machine interaction.
IEEE Industrial Electronics Magazine, 4(2):9–13.
Hoey, J., Poupart, P., von Bertoldi, A., Craig, T., Boutilier,
C., and Mihailidis, A. (2010). Automated hand-
washing assistance for persons with dementia using
ProgressiveCo-adaptationinHuman-MachineInteraction
367
video and a partially observable Markov decision pro-
cess. Computer Vision and Image Understanding,
114(5):503–519.
Hokayem, P. and Spong, M. (2006). Bilateral teleoperation:
An historical survey. Automatica, 42(12):2035–2057.
Huysmans, M., de Looze, M., Hoozemans, M., van der
Beek, A., and van Dieen, J. (2006). The effect of joy-
stick handle size and gain at two levels of required
precision on performance and physical load on crane
operators. Ergonomics, 49(11):1021–1035.
Igarashi, H., Takeya, A., Kubo, Y., Suzuki, S., Harashima,
F., and Kakikura, M. (2005). Human adaptive GUI
design for teleoperation system. In IECON Proceed-
ings (Industrial Electronics Conference), pages 1973–
1978.
Ishikuro, K., Urakawa, S., Takamoto, K., Ishikawa, A.,
Ono, T., and Nishijo, H. (2014). Cerebral functional
imaging using near-infrared spectroscopy during re-
peated performances of motor rehabilitation tasks
tested on healthy subjects. Frontiers in Human Neu-
roscience, 8(MAY).
Jin, X., Zhang, J., and Liu, Y. (2014). The ergonomics re-
search of the joystick in excavator cab. Applied Me-
chanics and Materials, 494-495:128–131.
Jokela, T., Iivari, N., Matero, J., and Karukka, M. (2003).
The standard of user-centered design and the standard
definition of usability: Analyzing iso 13407 against
iso 9241-11. In Proceedings of the Latin American
Conference on Human-computer Interaction, CLIHC
’03, pages 53–60.
Love, L. and Book, W. (2004). Force reflecting teleopera-
tion with adaptive impedance control. IEEE Transac-
tions on Systems, Man, and Cybernetics, Part B: Cy-
bernetics, 34(1):159–165.
Mavridis, N., Pierris, G., Gallina, P., Papamitsiou, Z., As-
taras, A., and Moustakas, N. (2015a). On the sub-
jective difficulty of joystick-based robot arm teleop-
eration with auditory feedback. In Proc. of 8th IEEE
GCC Conference and Exhibition.
Mavridis, N., Pierris, G., Gallina, P., Papamitsiou, Z., As-
taras, A., and Moustakas, N. (2015b). Subjective diffi-
culty and indicators of performance of joystick-based
robot arm teleoperation with auditory feedback. In
Proc. of IEEE International Conference on Robotics
and Automation (ICRA).
McAree, P. and Daniel, R. (2000). Stabilizing impacts
in force-reflecting teleoperation using distance-to-
impact estimates. International Journal of Robotics
Research, 19(4):349–364.
Merel, J., Fox, R., Jebara, T., and Paninski, L. (2013). A
multi-agent control framework for co-adaptation in
brain-computer interfaces. In Advances in Neural In-
formation Processing Systems.
Oliver, M., Rogers, R., Rickards, J., Tingley, M., and Biden,
E. (2006). Effect of stiffness and movement speed on
selected dynamic torque characteristics of hydraulic-
actuation joystick controls for heavy vehicles. Er-
gonomics, 49(3):249–268.
Passenberg, C., Peer, A., and Buss, M. (2010). A survey of
environment-, operator-, and task-adapted controllers
for teleoperation systems. Mechatronics, 20(7):787–
801.
Rivlin, E. and Rotstein, H. (2000). Control of a camera
for active vision: Foveal vision, smooth tracking and
saccade. International Journal of Computer Vision,
39(2):81–96.
Sawaragi, T. (2005). Dynamical and complex behaviors in
human-machine co-adaptive systems. In IFAC Pro-
ceedings Volumes (IFAC-PapersOnline), volume 16,
pages 94–99.
Suzuki, S., Igarashi, H., Kobayashi, H., Yasuda, T., and
Harashima, F. (2013). Human adaptive mechatronics
and human-system modelling. International Journal
of Advanced Robotic Systems, 10.
Suzuki, Y., Takase, H., Pan, Y., Ishikawa, J., and Furuta,
K. (2008). Learning process of bimanual coordina-
tion. In 2008 International Conference on Control,
Automation and Systems, ICCAS 2008, pages 2830–
2835.
Ward, J. and Meijer, P. (2010). Visual experiences in the
blind induced by an auditory sensory substitution de-
vice. Consciousness and Cognition, 19(1):492–500.
ICINCO2015-12thInternationalConferenceonInformaticsinControl,AutomationandRobotics
368