2012) that BCI systems can be used to directly control robotic arms. A similar system has been used for controlling a patient's limb (Ajiboye et al., 2017). These advances are, however, made with expensive and usually invasive BCI devices. Consumer-grade BCI devices still have very limited capabilities (Maskeliunas et al., 2016).
Systems that do not provide direct limb or robotic arm control require a UI to operate, and HCI efficiency directly affects system usability. A standard way of presenting a UI to users is on a conventional display. This approach is inefficient when the user is communicating or interacting with the environment. Augmented and virtual reality have been used successfully in assistive technologies and rehabilitation (Hondori et al., 2013). Augmented reality is better suited for information presentation, because a virtual reality headset would further separate a locked-in patient from the environment. Projection mapping is one of the most practical ways to display an augmented reality UI.
Projection mapping can be either manual or automatic. Manual projection mapping is mostly used in the entertainment and art industries, where the scene is static. The system presented in this paper uses automatic projection mapping, because the environment is dynamic and objects can change their positions. Automatic projection mapping can be performed when the transformation between the depth camera and projector optical frames is known. This transformation is obtained by calibrating the camera-projector system. One way to calibrate a camera-projector system is to use structured light (Moreno and Taubin, 2012). Alternatively, the method proposed in (Kimura et al., 2007) can be used when the camera is already calibrated. This paper utilizes the practical calibration method proposed in (Yang et al., 2016).
The system presented in this paper is similar to (Benko et al., 2012), but does not account for deformations caused by physical objects. More advanced dynamic projection mapping methods have been developed in recent years (Sueishi et al., 2015), but such systems require a more expensive hardware setup. Figure 1 shows the experimental setup of the presented system.
The remainder of the paper is structured as follows. Section 2 describes the proposed projection mapping based system. The results and discussion are presented in Section 3. Section 4 concludes the paper.
2 MATERIALS AND METHODS
The augmented reality UI is constructed and presented using a camera-projector system. The workflow for setting up and calibrating the projection mapping system is as follows:
1. The depth camera and projector are set up in front of the scene. The projector and camera should be fixed sturdily to each other; ideally, they should be mounted on a common metal frame or integrated into one housing. This is necessary so that the extrinsic parameters calculated during calibration do not change while the system is operating. If the camera-projector system were integrated into a single device, the calibration process could be performed only once, i.e., as a factory calibration.
2. Calibration of the projector-camera system is performed by placing a board with a black circular dot pattern in front of the projector and camera. The projector is used to show a similar white dot pattern on the same board. The camera image of the board is recorded, and the positions of the projected and real dots are estimated. To correctly estimate the system parameters, several images with varying board positions and orientations have to be captured. The estimates are used to calculate the intrinsic and extrinsic parameters of the projector-camera system.
In the camera-projector system, the projector is treated as a virtual camera device. We use a pinhole camera model to describe both the camera and the projector (i.e., a virtual camera). The pinhole camera intrinsic parameters consist of a 3×3 camera matrix C and a 1×5 distortion coefficient matrix D. The camera matrix, combined with the extrinsic parameters, forms a 3×4 camera projection matrix P (Hartley and Zisserman, 2003). Matrix P can be used to project 3D world points, expressed in homogeneous coordinates, into the image. During calibration we obtain two camera projection matrices: P_c for the camera and P_p for the projector.
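The projection step can be sketched as follows: a 3D point is written in homogeneous coordinates, multiplied by the 3×4 matrix P, and dehomogenized to pixel coordinates. This is a minimal illustration of the pinhole model; the numeric intrinsics below are made-up placeholders, not calibration results from this paper.

```python
def project(P, X):
    """Project a 3D point X = (x, y, z) to pixel coordinates
    using a 3x4 projection matrix P and homogeneous coordinates."""
    Xh = list(X) + [1.0]  # homogeneous coordinates (x, y, z, 1)
    # Matrix-vector product u = P * Xh (3x4 times 4x1).
    u = [sum(P[r][c] * Xh[c] for c in range(4)) for r in range(3)]
    # Dehomogenize: divide by the third component.
    return (u[0] / u[2], u[1] / u[2])

# Example P = C [I | 0]: identity rotation, zero translation,
# with illustrative intrinsics fx = fy = 600, cx = 320, cy = 240.
P = [[600.0,   0.0, 320.0, 0.0],
     [  0.0, 600.0, 240.0, 0.0],
     [  0.0,   0.0,   1.0, 0.0]]

print(project(P, (0.5, -0.5, 2.0)))  # point 2 m in front of the camera
```

The same routine applies to both P_c and P_p, since the projector is modeled as a virtual pinhole camera.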
The camera-projector system also has extrinsic parameters, which consist of a translation vector T and a rotation matrix R. In our case, T and R define the translation and rotation of the projector optical origin in the camera origin coordinate system. After calibration, the intrinsic and extrinsic parameters of the camera-projector system are obtained. These parameters are used to perform automatic projection mapping during system operation.
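The core of the automatic projection mapping step can be sketched as: a 3D point measured in the camera optical frame is moved into the projector frame using the extrinsics, then projected with the projector intrinsics to find which projector pixel illuminates it. This sketch assumes the convention X_p = R·X_c + T maps camera-frame points into the projector frame (the exact convention depends on how R and T are defined during calibration); the numeric R, T, and intrinsics are illustrative placeholders, not the paper's calibrated values.

```python
def transform(R, T, Xc):
    """Move a camera-frame point into the projector frame: Xp = R*Xc + T."""
    return [sum(R[r][c] * Xc[c] for c in range(3)) + T[r] for r in range(3)]

def project(C, Xp):
    """Pinhole projection of a projector-frame point to projector pixels."""
    u = [sum(C[r][c] * Xp[c] for c in range(3)) for r in range(3)]
    return (u[0] / u[2], u[1] / u[2])

R = [[1.0, 0.0, 0.0],   # identity rotation, for the sketch only
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
T = [0.25, 0.0, 0.0]    # projector 25 cm to the side of the camera
C = [[800.0,   0.0, 512.0],   # illustrative projector intrinsics
     [  0.0, 800.0, 384.0],
     [  0.0,   0.0,   1.0]]

Xc = [0.0, 0.0, 2.0]    # depth-camera measurement, 2 m in front
print(project(C, transform(R, T, Xc)))  # projector pixel lighting this point
```

In the real system this mapping is applied to every point of the depth image, which is why the extrinsics must stay fixed after calibration.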
During system operation, the depth sensor acquires depth images of the scene in the camera optical frame coordinate system. These images have to be transformed into the projector optical frame coordinate system. This transformation consists of the following
Augmented Reality Object Selection User Interface for People with Severe Disabilities