for a threshold $T$, e.g. $T = \frac{1}{2}$. In the case of three or more input images, one again just selects one image as the iconic one and the generalisation is straightforward.
4 MORE RESULTS
Figure 5 shows results for a real-life surveillance example (note in particular the poles).
The result at the middle left is based on the standard fusion rule (selection); the result at the middle right is based on the iconic fusion rule (6) (selection).
In this section, for comparison, we point to additional
results for the smooth variant of the said iconic fusion
(bottom right) and for the rule of Burt & Kolczynski
(bottom left). The latter rule implies that where similarity is low ($m^k_{IA}(\cdot|p) < T$) the maximum-selection rule is applied, whereas where similarity is high ($m^k_{IA}(\cdot|p) \geq T$) the rule is given by $\omega^k_I(\cdot) = \frac{1}{2} - \frac{1}{2}\,\frac{1 - m^k_{IA}(\cdot|p)}{1 - T}$ and $\omega^k_A(\cdot) = 1 - \omega^k_I(\cdot)$, which comes close to averaging
mode. An important difference from the iconic fusion rule is that the outcome is symmetric with respect to interchanging the input images. This is in contrast with the new iconic fusion rule, which, roughly speaking, tries to convert the infrared contrasts into visual-light contrasts; interchanging the input images would then imply converting visual-light contrasts into infrared contrasts instead.
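For concreteness, the Burt & Kolczynski combination step at one pyramid level can be sketched as follows. This is a minimal sketch, not the authors' implementation: the function and variable names are our own, and the weight $\omega^k_I$ is assigned to the first input exactly as the formula is stated above (in the original rule the weights are distributed according to salience).

```python
import numpy as np

def bk_fuse(c_I, c_A, m, T=0.5):
    """Combine two detail-coefficient arrays per the Burt & Kolczynski rule.

    c_I, c_A : detail coefficients of the two inputs at one pyramid level
    m        : per-pixel match (similarity) measure m^k_IA(.|p)
    T        : similarity threshold
    """
    c_I, c_A, m = map(np.asarray, (c_I, c_A, m))
    # low similarity (m < T): select the coefficient of largest magnitude
    select = np.where(np.abs(c_I) >= np.abs(c_A), c_I, c_A)
    # high similarity (m >= T): weighted average with
    #   w_I = 1/2 - 1/2 * (1 - m) / (1 - T),   w_A = 1 - w_I
    w_I = 0.5 - 0.5 * (1.0 - m) / (1.0 - T)
    average = w_I * c_I + (1.0 - w_I) * c_A
    return np.where(m < T, select, average)
```

Note that at perfect similarity ($m = 1$) both weights equal $\frac{1}{2}$, i.e. plain averaging, while at $m = T$ the rule hands all weight to one input, connecting smoothly to the selection regime.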
4.1 Fusion Metrics
Due to the lack of a ground truth, especially in the context of multimodality, quantitative assessment of fusion is quite a challenge and still appears to be an open problem. Many different metrics have been proposed, but they rate algorithms differently (Liu et al., 2012). A rather general metric such as the mutual information fusion metric persistently favours fusion by simply averaging the input images (Cvejic et al., 2006), and does not seem well suited for our new method. The choice of a metric is driven by the requirements of the application (Liu et al., 2012). In future research
we plan to apply the 12 metrics used by the latter, and
possibly devise an additional one of our own making,
to make an objective assessment of our new method.
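For concreteness, a mutual information fusion metric of the kind mentioned above can be sketched as follows. This is a minimal sketch under our own assumptions (64-bin joint histograms and, as one common formulation, the sum of the mutual information of the fused image with each input as the score); the function names are hypothetical.

```python
import numpy as np

def mutual_information(x, y, bins=64):
    """Mutual information (in bits) of two images, estimated
    from their joint grey-value histogram."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal distribution of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal distribution of y
    nz = pxy > 0                          # skip empty bins to avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def mi_fusion_metric(fused, a, b, bins=64):
    """Score a fused image by its mutual information with both inputs;
    a higher score is taken to mean better fusion."""
    return mutual_information(fused, a, bins) + mutual_information(fused, b, bins)
```

Because averaging keeps every pixel of the fused image in a simple monotone relation with both inputs, it tends to score well under such a metric, which illustrates why it is ill-matched to a rule designed to flip contrasts.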
5 CONCLUDING REMARKS
Within the context of multiresolution schemes a new fusion rule has been proposed, coined the iconic fusion rule, so as to deal with the opposite contrasts that may occur in a set of multimodal images. The rule is a biased one, with the bias towards the contrasts observed (if any) in an image with a favoured spectrum, the so-called iconic image. Qualitative evidence for the soundness of the rule has been given by means of a few examples. A survey with quantitative assessment over several test problems, applying a variety of quality measures, is part of future research. Given the intent of the new method, quite likely a new quality measure needs to be devised so as to deal with images with opposite contrast.
ACKNOWLEDGEMENTS
The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7-ENV-2009-1) under grant agreement no. FP7-ENV-244088 "FIRESENSE - Fire Detection and Management through a Multi-Sensor Network for the Protection of Cultural Heritage Areas from the Risk of Fire and Extreme Weather". We gratefully used images provided to us by Xenics (Leuven, Belgium).
REFERENCES
Burt, P. and Adelson, E. (1983). The Laplacian pyramid as a compact image code. IEEE Transactions on Communications, 31(4):532–540.
Burt, P. J. and Kolczynski, R. J. (1993). Enhanced image
capture through fusion. In Proceedings Fourth Inter-
national Conference on Computer Vision, pages 173–
182, Los Alamitos, California. IEEE Computer Soci-
ety Press.
Cvejic, N., Canagarajah, C. N., and Bull, D. R. (2006). Image fusion metric based on mutual information and Tsallis entropy. Electronics Letters, 42(11):626–627.
De Zeeuw, P. M. (2005). A multigrid approach to image
processing. In Kimmel, R., Sochen, N., and We-
ickert, J., editors, Scale Space and PDE Methods in
Computer Vision, volume 3459 of Lecture Notes in
Computer Science, pages 396–407. Springer-Verlag,
Berlin Heidelberg.
De Zeeuw, P. M. (2007). The multigrid image transform. In Tai, X.-C., Lie, K. A., Chan, T. F., and Osher, S., editors, Image Processing Based on Partial Differential Equations, Mathematics and Visualization, pages 309–324. Springer, Berlin Heidelberg.
De Zeeuw, P. M., Piella, G., and Heijmans, H. J. A. M. (2004). A MATLAB toolbox for image fusion (MATIFUS). CWI Report PNA-E0424, Centrum Wiskunde & Informatica, Amsterdam.
VISAPP 2012 - International Conference on Computer Vision Theory and Applications