Figure 6: Synthetic MR comparisons with MI NRR. (a) Ground truth T1 and T2 images, (b) moving image, (c) result using MI NRR, (d) result using our approach, (e) error plot (blue: before NRR, cyan: MI NRR, red: our approach).
We comment that the metric used in [Bruhn et al.] is a linear approximation to the patch-based SSD metric and is suitable for recovering small deformations only. Despite the advantages that patch-based methods offer, they are restrictive in recovering local non-rigid deformations and are computationally expensive. Our method retains the advantages of patch-based methods without these drawbacks. In the two examples shown in Fig. 5, the fixed image (a) is non-rigidly deformed to generate the moving image (b); (c) is the overlay showing the mismatch. In (d), we see that patch SSD recovers the deformations quite well under noise; however, artifacts appear in the registered image since the metric is too restrictive to recover local non-rigid deformations. (e) shows the results using our approach.
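For concreteness, the following is a minimal sketch (in Python/NumPy, not the paper's MATLAB implementation) of a patch-based SSD data term of the kind discussed above; the patch radius, the box-filter averaging, and the bilinear warping are illustrative assumptions, not the exact formulation used here or in [Bruhn et al.].

```python
import numpy as np
from scipy.ndimage import uniform_filter, map_coordinates

def patch_ssd(fixed, moving, flow, radius=2):
    """Patch-based SSD between `fixed` and `moving` warped by `flow`.

    flow has shape (2, H, W) and holds the per-pixel displacement
    (dy, dx). The squared intensity difference is averaged over a
    (2*radius+1)^2 neighbourhood, one common way to realise a
    patch-based data term.
    """
    H, W = fixed.shape
    gy, gx = np.mgrid[0:H, 0:W].astype(float)
    warped = map_coordinates(moving, [gy + flow[0], gx + flow[1]],
                             order=1, mode='nearest')
    sq_diff = (fixed - warped) ** 2
    # Averaging the squared differences over local patches gives the
    # patch SSD value at every pixel; summing gives the global metric.
    patch_err = uniform_filter(sq_diff, size=2 * radius + 1)
    return patch_err.sum()
```

Linearizing such a term around the current flow estimate yields the small-deformation approximation referred to above.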
A note on the timing (Table 1) for the T1 NRR experiment (250x250 images) in Fig. 5: all runs were in MATLAB on a 2.6 GHz laptop. The timing for our approach with intensity-based weights is comparable to that with Gaussian weights due to the approximation scheme. The main cost of our approach over SSD NRR is the non-local regularization with a Gaussian kernel of width 25x25. Patch SSD is seen to be expensive since its Euler-Lagrange (EL) equations do not reduce to convolutions, and it also takes more iterations to converge under local deformations.
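The regularization step referred to above can be sketched as follows; this is only an assumed illustration of why a Gaussian-weighted (or approximated intensity-weighted) update reduces to a convolution and is therefore cheap per iteration, not the paper's actual code. The choice of sigma for a 25x25 support is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nonlocal_regularize(flow, kernel_size=25):
    """Smooth each flow component with a Gaussian kernel whose
    support is truncated to roughly kernel_size x kernel_size."""
    sigma = kernel_size / 6.0                      # assumed spread
    truncate = (kernel_size - 1) / (2.0 * sigma)   # limits the support
    return np.stack([gaussian_filter(f, sigma=sigma, truncate=truncate)
                     for f in flow])
```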
b) Mutual Information [MI] based NRR. There has been recent interest in extending MI-based metrics to dense NRR. In (Lu et al., 2010), an MI-based extension of the diffeomorphic Demons algorithm is implemented in a multi-resolution framework [MI NRR]. The main challenge for MI-based methods in dense NRR is that the number of samples available to construct the joint histogram at each point can be small, resulting in interpolation artifacts. This amounts to optimizing a non-convex metric, leading to sub-optimal solutions and to unrealistic deformations due to an over-emphasis on regularization.
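To make the sampling issue concrete, here is a small illustrative sketch (ours, not the implementation of (Lu et al., 2010)) of an MI estimate built from a joint histogram over a local window: with a 21x21 window there are only 441 samples for, say, 32x32 histogram bins, so most cells remain empty and the point-wise estimate becomes noisy.

```python
import numpy as np

def local_mi(fixed, moving, center, half_win=10, bins=32):
    """Mutual information from a joint histogram over a small window
    around `center`; few samples per bin make the estimate unstable."""
    y, x = center
    f = fixed[y - half_win:y + half_win + 1, x - half_win:x + half_win + 1]
    m = moving[y - half_win:y + half_win + 1, x - half_win:x + half_win + 1]
    hist, _, _ = np.histogram2d(f.ravel(), m.ravel(), bins=bins)
    p = hist / hist.sum()
    pf, pm = p.sum(axis=1), p.sum(axis=0)   # marginals
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / (pf[:, None] * pm[None, :])[nz]))
```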
We now show quantitative comparisons with MI NRR on synthetic MR data and on gated PET-CT data. In Fig. 6, we consider two experiments: MR T1-T1 NRR and MR T1-T2 NRR. The first row shows the ground truth T1 and T2 images, which are well aligned. We then generated 10 T1 (T2) images by applying increasing ranges of random non-rigid motion to the ground truth T1 (T2) image. The 10 T1 (T2) images were then non-rigidly registered to the ground truth T1 image using MI NRR and our approach. A representative example is shown in (a)-(c): (a) is the moving image, (b) is the result using MI NRR, and (c) is the result using the proposed approach. Plot (d) shows the maximum absolute difference error between the ground truth T1 (T2) image and the registered images; the blue curve is the error before NRR, the cyan curve is the error using MI NRR, and the red curve is the error using the proposed approach. As clearly indicated by the error plots and the examples, both the motion recovery and the quality of the registered images are better with the proposed approach.
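For reference, a sketch of how the error curves in Fig. 6 can be computed; the variable names (`deformed`, `registered_mi`, `registered_ours`) are hypothetical placeholders for the 10 images before and after each method.

```python
import numpy as np

def max_abs_error(ground_truth, image):
    """Maximum absolute intensity difference, as plotted in Fig. 6."""
    return np.abs(ground_truth.astype(float) - image.astype(float)).max()

# Hypothetical driver producing the three curves (blue, cyan, red).
def error_curves(ground_truth, deformed, registered_mi, registered_ours):
    before = [max_abs_error(ground_truth, im) for im in deformed]
    mi_nrr = [max_abs_error(ground_truth, im) for im in registered_mi]
    ours = [max_abs_error(ground_truth, im) for im in registered_ours]
    return before, mi_nrr, ours
```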
Next, we look at NRR for gated PET-CT. In gated PET-CT, PET and Cine CT are synchronized across breathing phases using, e.g., an external tracking device. The data consisted of 6 cases (3 clinical and 3 synthetic PET phantoms). Each case consisted of 6 PET gates (128x128x50) and 6 corresponding in-phase CT gates (512x512x72). We picked the central coronal slices for our 2D experiment and brought the CT images to PET resolution. For each of the 6 data sets, a PET image and its corresponding in-phase CT image were picked as the reference. The other 5 CT images were then non-rigidly registered to the reference PET image, resulting in 30 NRR computations over the 6 data cases. For validation, the reference CT image is compared with the registered CT images for each of the data cases.
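A hedged sketch of the 2D evaluation protocol described above: resampling a CT slice to the PET grid and comparing the registered CT gates against the reference CT. The function `register` is a hypothetical stand-in for either MI NRR or the proposed approach, and the mean absolute difference is only one possible summary of the comparison.

```python
import numpy as np
from scipy.ndimage import zoom

def ct_to_pet_resolution(ct_slice, pet_shape=(128, 128)):
    """Resample a 512x512 CT slice to the 128x128 PET grid."""
    fy = pet_shape[0] / ct_slice.shape[0]
    fx = pet_shape[1] / ct_slice.shape[1]
    return zoom(ct_slice, (fy, fx), order=1)

def evaluate_case(pet_ref, ct_ref, ct_gates, register):
    """Register the 5 non-reference CT gates to the reference PET and
    compare each result with the reference CT."""
    errors = []
    for ct in ct_gates:
        ct_small = ct_to_pet_resolution(ct)
        ct_reg = register(fixed=pet_ref, moving=ct_small)
        errors.append(np.abs(ct_ref - ct_reg).mean())
    return errors
```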
We illustrate with a clinical case in Fig. 7. Column (a) shows the reference PET image and the corresponding in-phase CT image (the ground truth for comparison). Column (b) is the registered CT image, column (c) is the overlay of the CT image in (b) with the reference PET image, and column (d) is the difference image of (b) with the reference CT image. The first, second, and third rows show the outputs before NRR, using MI NRR, and using the proposed approach, respectively. For quantitative comparison (Fig. 8), we look at the absolute difference errors between the registered image and the reference CT image around 10 key landmark