CAN ANISOTROPIC IMAGES BE UPSAMPLED?
Mads F. Hansen, Thomas H. Mosbech, Hildur Ólafsdóttir, Michael S. Hansen and Rasmus Larsen
DTU Informatics, Technical University of Denmark, Richard Petersens Plads, Kgs. Lyngby, Denmark
Keywords:
Image reconstruction, Image registration, Riemannian elasticity, Super resolution penalization prior.
Abstract:
This paper presents a novel method for upsampling anisotropic medical gray-scale images. The resolution
is increased by fitting an image function, modeled by cubic B-splines, to the slices. The method simulates
the observed slices with an image function and iteratively updates the function by comparing the simulated
slices with observed slices. The approach handles partial voluming by modeling the thickness of the slices.
The formulation is ill-posed, and thus a prior needs to be included. Correspondences between adjacent slices
are established using a symmetric registration method with a free-form deformation model. The correspon-
dences are then converted into a prior that penalizes gradients along lines of correspondence. Tests on the
Shepp-Logan phantom show promising results, and the approach performs better than methods such as cubic
interpolation and one-way registration-based interpolation.
1 INTRODUCTION
Image interpolation plays an important role in many
medical image analysis applications by closing the
gap between the true continuous nature of an image
and the practical discrete representation of an image.
Uniform tensor splines (UTS), e.g. tricubic interpolation, are the method of choice for most applications
due to the regular sampling of discrete images. A po-
tential problem with this approach is the inherent as-
sumption of a smooth transition between neighboring
voxels.
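In practice, uniform tensor-spline upsampling can be done with off-the-shelf tools. The sketch below uses SciPy's `map_coordinates`, which fits a separable cubic B-spline along each axis; the volume size and the 4x through-plane refinement are made-up example values, not figures from the paper.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Made-up anisotropic volume: fine in-plane, coarse along the slice axis.
rng = np.random.default_rng(0)
vol = rng.random((32, 32, 8))

# Tricubic interpolation: order=3 fits a uniform tensor-product cubic
# B-spline to the samples and evaluates it at the requested coordinates.
z_fine = np.linspace(0.0, vol.shape[2] - 1, 29)  # 4x denser slice axis
ii, jj, kk = np.meshgrid(np.arange(32.0), np.arange(32.0), z_fine,
                         indexing="ij")
upsampled = map_coordinates(vol, [ii, jj, kk], order=3, mode="nearest")
print(upsampled.shape)  # (32, 32, 29)
```

Because the spline interpolates the samples, the original slices are reproduced exactly at integer slice positions; only the new in-between slices rely on the smooth-transition assumption criticized above.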
The idea of registration-based interpolation was
introduced as a solution to the problem (Penney et al.,
2004). Here, correspondences between neighboring
slices are determined by one-way registrations in 2D.
The interpolation is then performed along these lines
of correspondence to achieve a smooth transition,
rather than using the usual lateral neighborhood.
The method was extended by utilizing both a forward and a backward registration (a weighted sum of two non-symmetric displacement fields) for the interpolation (Frakes et al., 2008).
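A minimal sketch of interpolation along lines of correspondence, assuming the 2-D registration step has already produced a per-pixel displacement field `disp`; the function name and the simplification of writing the result on the first slice's grid are illustrative choices, not the exact scheme of (Penney et al., 2004).

```python
import numpy as np
from scipy.ndimage import map_coordinates

def interp_along_correspondences(slice_a, slice_b, disp, w):
    """Estimate a slice at fractional position w in [0, 1] between two
    adjacent slices, blending intensities along lines of correspondence.

    disp is a (2, ny, nx) displacement field mapping each pixel of
    slice_a to its corresponding point in slice_b; it is assumed given
    by a prior 2-D registration. For simplicity the result is written
    on slice_a's pixel grid rather than warped to the intermediate
    geometry.
    """
    ny, nx = slice_a.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    # Value each correspondence line carries at its slice_b endpoint.
    b_at_corr = map_coordinates(slice_b, [yy + disp[0], xx + disp[1]],
                                order=1, mode="nearest")
    # Blend along the correspondence line instead of laterally.
    return (1.0 - w) * slice_a + w * b_at_corr
```

With a zero displacement field this degenerates to ordinary linear inter-slice blending, which makes the role of the registration explicit: all improvement over lateral interpolation comes from `disp`.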
Recently, (Ólafsdóttir et al., 2010) proposed a further improvement to the method. The paper presents an interpolation based on weighting both
intensity and deformation by the inter-slice distance
of the interpolation point. The method combines a forward and a backward interpolation into an algorithm that is less time-consuming than (Penney et al., 2004), using an approximation to the inverse deformation while still reporting sufficiently accurate results.
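The combined forward/backward idea can be sketched as follows; the first-order pull-back used as an approximate inverse deformation and the distance weighting are illustrative assumptions about the scheme, not the authors' exact formulas.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def two_way_interp(slice_a, slice_b, disp_ab, disp_ba, w):
    """Distance-weighted two-way interpolation at fractional position w.

    disp_ab maps slice_a onto slice_b and disp_ba maps slice_b onto
    slice_a (both (2, ny, nx), assumed given by two registrations).
    Sampling slice_a at x - w * disp_ab(x) serves as a cheap first-order
    approximation to the inverse deformation.
    """
    ny, nx = slice_a.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    from_a = map_coordinates(slice_a, [yy - w * disp_ab[0],
                                       xx - w * disp_ab[1]],
                             order=1, mode="nearest")
    from_b = map_coordinates(slice_b, [yy - (1.0 - w) * disp_ba[0],
                                       xx - (1.0 - w) * disp_ba[1]],
                             order=1, mode="nearest")
    # Weight each estimate by its inter-slice distance to the new slice.
    return (1.0 - w) * from_a + w * from_b
```

The two warped estimates need only one cheap resampling each, which is where the reported time savings over repeated forward registrations would come from.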
The quality of registration-based interpolation is
highly dependent on the quality of the correspondences obtained. This is problematic, as a one-to-one mapping between adjacent slices may not exist. That is, structures may disappear between two slices, and in such situations one must rely on the well-behaved regularization of the chosen registration scheme. Another, previously untouched, problem in registration-based interpolation is the partial volume effect: image artifacts arising from image digitization, where images are formed as slices of thick volumetric blocks rather than infinitely thin 2D planes.
As an alternative to the common registration-based interpolation, we propose fitting a parametric function to the thick slices, accounting for the partial volume effect by incorporating the thickness of a slice. Furthermore, we use symmetric registration between adjacent slices to form a prior that stabilizes the ill-posed problem.
As the idea behind the method is to identify
the underlying image rather than interpolating the
thick slice voxels, we say the method upsamples the
anisotropic image under reasonable constraints.
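The forward model behind this fitting can be sketched as follows; `image_fn` stands in for the cubic B-spline image function, and the sub-sample count `n_sub` is an illustrative discretization of the through-slice average, not a value from the paper.

```python
import numpy as np

def simulate_thick_slices(image_fn, z_centers, thickness, n_sub=5):
    """Simulate thick-slice observations of a continuous image function.

    image_fn(z) returns the in-plane image at continuous slice position
    z (in the paper this would be the cubic B-spline model). Each
    observed slice is modeled as the average of image_fn over the slice
    thickness, which is how partial voluming enters the model.
    """
    slices = []
    for zc in z_centers:
        zs = np.linspace(zc - 0.5 * thickness, zc + 0.5 * thickness, n_sub)
        slices.append(np.mean([image_fn(z) for z in zs], axis=0))
    return np.asarray(slices)
```

A fitting loop would then compare such simulated slices with the observed ones and update the spline coefficients to reduce the discrepancy, subject to the correspondence prior.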
F. Hansen M., H. Mosbech T., Ólafsdóttir H., S. Hansen M. and Larsen R. (2010).
CAN ANISOTROPIC IMAGES BE UPSAMPLED?.
In Proceedings of the International Conference on Computer Vision Theory and Applications, pages 76-81
DOI: 10.5220/0002846500760081
Copyright © SciTePress