a refined transformation, this time using not only
three points but all matching points.
This approach is similar to the RANSAC algorithm. However, considering the small size of the point sets, an exhaustive search can be performed without any significant performance drawback.
The transformations are smoothed with a Kalman filter: the head is considered a rigid body, and its current position and rotation (using a versor representation) are both 3D vectors x, r ∈ R³. The Kalman state vector includes two derivatives of each and is therefore (x, r, ẋ, ṙ, ẍ, r̈) ∈ R¹⁸.
Then, the transformations are passed on to the MRI scanner via Ethernet.
3 RESULTS
The system described in section 2 has been implemented in C++. All image processing was done on the graphics board using the CUDA API. An overall frame rate of approximately 40 fps has been achieved (which is sufficient to analyze all frames coming from the cameras), with a latency of 0.05 s. The following experiments were made to illustrate the system's performance with MR compatible cameras, but under better lighting conditions than are usually found inside the MR scanner.
3.1 Accuracy
One major problem in feature detection is that projective transformations map circles to ellipse-like shapes. In most cases, the circle center is not projected onto the center of gravity of the ellipse. A blue circular marker with a small black dot in its center was used to measure this displacement. Under extreme viewing angles, the displacement can be as much as 10% of the circle radius.
To evaluate the accuracy of the center of gravity,
a static scene has been constructed containing only
a single circular marker. The experiment shows that
under good lighting conditions, the center of gravity
of a marker is detected with an accuracy of roughly
0.5 pixels.
To evaluate the accuracy of the triangulated 3D
position of a single marker, the same setup has been
used. Thus, stereo matching always works correctly
and the triangulated 3D position should be exact. The
reconstructed 3D positions have a standard deviation
of 0.015 mm for the x- and y-axis, and 0.08 mm for
the z-axis.
4 CONCLUSIONS
A method to build an optical tracking system that can be used for head motion compensation in MRI has been presented. To achieve MRI compatibility, certain drawbacks had to be accepted: MR compatible cameras are used that provide images with standard TV resolution. As a result, markerless tracking approaches cannot be used. Instead, an approach that tracks circular blue markers attached to the patient's forehead has been chosen.
Because of the image resolution provided by the cameras, the tracking accuracy of this approach is not as good as it would have been with standard industrial cameras offering a much higher resolution and frame rate. Furthermore, the stereo feature matching and the model tracking algorithms had to be able to cope with noisy feature positions.
Thus, an MR compatible tracking system has been built that can be used with non-cooperative patients. In contrast to other approaches, the system is completely MR compatible and uses markers that do not require the patient's cooperation.
VISAPP 2010 - International Conference on Computer Vision Theory and Applications