is the rope pull on which the movements are carried
out. The patient stands in front of a depth sensor,
from whose data the skeleton is extracted and a
motion analysis is performed. The results are visualised
in real time on a display belonging to the assistance
system. Both the coloured animation mesh and the
textual hints associated with the movement are shown.
Figure 7: The design of the assistance system for the study
consists of a wire rope hoist, depth sensor and a display
monitor (J. Richter, 2017).
The C++ framework Qt was used for the realisation
of the assistance system. This framework is
characterised by great flexibility in GUI creation. Qt
provides the declarative programming language QML,
with which individual GUIs can be created using a
JSON-like markup syntax.
A particular challenge in the implementation of the
visual feedback was achieving real-time performance.
Since patients have to be informed immediately about
incorrect movements, so that the error can already be
avoided in the next movement execution, a GLSL shader
implementation was unavoidable. With it, even meshes
with more than 100,000 individual vertices could be
displayed in real time. Animation with different
individual models, as well as switching between these
models, was also realised; this increased patient
motivation, since patients preferred different animation
models. Because OpenGL was already used for the
animation, the coloration of the bone segments could
be implemented in an additional GLSL shader step on
the fragment side. This implementation meets the
necessary real-time requirement of a frame rate of
25 frames per second and rounds off the desired
assistance system in terms of information density.
For the application's standard humanoid mesh, which
requires 146,754 vertices to be rendered per
visualisation step, the frame rate is 60 frames per
second on an AMD Radeon RX Vega M GL graphics card.
A user acceptance study was conducted to test the
system in clinical practice. The aim was to investi-
gate how the visual feedback system encouraged and
motivated patients to perform the movements to be
trained properly and how the visual design was per-
ceived. For this purpose, the assistance system was
tested in a clinical facility with patients who had
recently received a hip implant. Fifteen subjects
completed a three-week rehabilitation program and
trained with the assistance system under therapeutic
supervision. Afterwards, the patients were asked to
complete a standardised questionnaire with twelve
questions, each offering five response options.
In the survey, the system was assessed predominantly
positively. The movement feedback, which was central
to the research question of this paper, and the hints
for self-correction were mostly rated as understandable
and helpful (Nitzsche, 2018).
In addition, the error frequencies were evaluated
over the duration of the therapy. The aim was to
determine whether the patient’s movements could be
improved during the training period with the help of
the assistance system. The results show that the error
rate monitored by the assistance system decreased by
more than 50% (Lösch, 2019).
6 CONCLUSION AND FUTURE
WORK
A visual feedback system was created for movement
therapy, in which an individual avatar can be selected
for the display. The movements were analysed and
evaluated, and errors were highlighted on the previously
calculated bone segments in traffic-light colours
according to their severity. This allows patients to see
immediately where an error occurred. In order to
evaluate the assisted therapy system, a user study was
carried out with 15 subjects. The results indicate a
high user acceptance. The independent evaluation of the
recorded training results showed a noticeable gain in
health-promoting movements.
In the future, the integration of realistic, individually
scanned characters would promote immersion. Of course,
the creation of these models would have to be feasible
alongside the clinical routine and without excessive
time expenditure.
Motion monitoring should also be extended to other body
regions; both the hands and the face could be integrated.
In addition, an automatic system for recording reference
movements has to be implemented, so that therapists
could store a new motion sequence at any time.
The integration of the complete system into a virtual
reality environment would further increase immersion.
The limitation associated with a fixed-location feedback
system would be removed: it would no longer be necessary
to realign the display device for exercises that involve
body rotation.
VISAPP 2020 - 15th International Conference on Computer Vision Theory and Applications