gated the matching results with other available matching
algorithms in order to assess how it would perform
in a real ontology matching system. We used the
LexiVis algorithm, as it was shown to perform better
than the original Visual Similarity algorithm (Figure 6).
The other algorithms used are the ISub and Similarity
Flooding matchers, in addition to the baseline NameEq
matcher. These were chosen so that the combination
of matchers exploits different features, i.e. string,
structural and visual ones. The matchers were combined
using an adaptive weighting approach similar to
(Cruz et al., 2009).
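The combination step can be illustrated with a minimal sketch. This is not the exact adaptive scheme of Cruz et al. (2009); the function name, the score-map representation and the weight values are all illustrative assumptions, showing only how per-matcher similarity scores for candidate entity pairs can be merged via a weighted average.

```python
# Illustrative sketch of combining matcher scores with fixed weights.
# The names (combine_scores) and values below are hypothetical, not
# taken from the paper or from Cruz et al. (2009).

def combine_scores(score_maps, weights):
    """Weighted average of per-matcher similarity scores.

    score_maps: list of dicts {(src_entity, tgt_entity): similarity}
    weights:    list of floats, one per matcher, summing to 1.0
    """
    combined = {}
    for scores, w in zip(score_maps, weights):
        for pair, sim in scores.items():
            combined[pair] = combined.get(pair, 0.0) + w * sim
    return combined

# Toy example: string, structural and visual matchers scoring one pair.
string_scores = {("Paper", "Article"): 0.2}
struct_scores = {("Paper", "Article"): 0.6}
visual_scores = {("Paper", "Article"): 0.8}

combined = combine_scores(
    [string_scores, struct_scores, visual_scores],
    weights=[0.3, 0.3, 0.4],
)
print(round(combined[("Paper", "Article")], 2))  # 0.56
```

In an adaptive variant, the weights would be derived from each matcher's estimated reliability rather than fixed in advance.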
this test we again used the conference track bench-
mark dataset of OAEI 2014. For this dataset, results
regarding the performance of the participating match-
ing systems are published in OAEI’s website and in
(Dragisic et al., 2014). It can be seen from Table
1, in the line denoted with italic font, that the inclu-
sion of the LexiVis ontology matching algorithm in
the matching system results in better overall perfor-
mance than running the system without it. The added
value of 0.01 in F1 results in an overall F1 value of
0.60 which brings our matching system in the top 5
performances. The rather small added value of 0.01
is mainly due to the fact that the benchmark is quite
challenging as can be seen from the results of Table 1.
For example, the XMap system, which is ranked 4th,
managed to score only 0.07 more in F1 than the baseline
NameEq matcher, which simply compares strings and
produces a valid pair if the names are equal. Even this
small increase in F1, obtained just by including the
LexiVis algorithm, shows that it can improve results in
such a challenging benchmark, demonstrating its benefit.
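The F1 values discussed above are the harmonic mean of precision and recall. As a minimal sketch with illustrative numbers (not taken from Table 1):

```python
# F1 is the harmonic mean of precision and recall.
# The precision/recall values below are illustrative only.

def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.75, 0.5), 2))  # 0.6
```

This makes clear why even a 0.01 gain in F1 can be meaningful: it requires improving precision and/or recall without degrading the other.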
6 CONCLUSIONS
In this paper a novel ontology matching algorithm
based on visual features has been presented. The
algorithm exploits ImageNet's structure, which is based
on WordNet, in order to associate image sets with
ontological entities, and employs state-of-the-art visual
processing involving visual feature descriptor extraction,
codebook-based feature representation, dimensionality
reduction and indexing. The visual-based similarity
value is computed as a modified version of the Jaccard
set similarity. A new matcher is also proposed which
combines visual and lexical features in order to
determine entity similarity. The proposed algorithms
have been evaluated using the established OAEI benchmark
and have been shown to outperform WordNet-based
approaches.
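For reference, the plain Jaccard similarity underlying the visual-based score can be sketched as below. The paper uses a modified variant whose exact form is not reproduced here; the set contents are hypothetical identifiers standing in for the images (or visual words) associated with each entity.

```python
# Plain Jaccard set similarity; the paper's algorithm uses a modified
# version of this measure. The sets below are illustrative only.

def jaccard(a, b):
    """|A intersection B| / |A union B| for two sets."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Toy example: identifiers of visual elements for two entities.
words_conference = {1, 2, 3, 5}
words_meeting = {2, 3, 5, 8}
print(jaccard(words_conference, words_meeting))  # 0.6
```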
A limitation of the proposed visual-based matching
algorithm is that, since it relies on visual depictions
of entities, its performance will be poor in cases where
entity names are not words, e.g. alphanumeric codes,
as no images can be associated with them. A way to
tackle this is to extend the approach to include other
data, such as rdfs:label, which are more descriptive.
Another limitation of this approach is the mapping of
concepts that are visually hard to express, e.g. "Idea"
or "Freedom"; however, this is partly alleviated by
employing web-based search, which is likely to retrieve
relevant images for almost any concept.
The current version of the algorithm only uses entity
names. Future work will focus on optimizing the
processing pipeline in order to produce visual similarity
results in a more timely manner, using processing
optimizations as well as other approaches, such as word
sense disambiguation, to reduce the image sets that
correspond to each entity.
ACKNOWLEDGEMENTS
This work was supported by MULTISENSOR (con-
tract no. FP7-610411) and KRISTINA (contract no.
H2020-645012) projects, partially funded by the Eu-
ropean Commission.
REFERENCES
Bay, H., Ess, A., Tuytelaars, T., and Van Gool, L. (2008).
Speeded-up robust features (SURF). Computer vision
and image understanding, 110(3):346–359.
Chatfield, K. and Zisserman, A. (2013). VISOR: Towards
on-the-fly large-scale object category retrieval. In
Asian Conference on Computer Vision – ACCV 2012,
pages 432–446. Springer Berlin Heidelberg.
Chen, X., Xia, W., Jiménez-Ruiz, E., and Cross, V. (2014).
Extending an ontology alignment system with BioPortal:
a preliminary analysis. In Poster at Intl. Sem. Web
Conf. (ISWC).
Cruz, I. F., Palandri Antonelli, F., and Stroe, C. (2009). Ef-
ficient selection of mappings and automatic quality-
driven combination of matching methods. In ISWC
International Workshop on Ontology Matching (OM)
CEUR Workshop Proceedings, volume 551, pages 49–
60. Citeseer.
Dragisic, Z., Eckert, K., Euzenat, J., Faria, D., Ferrara, A.,
Granada, R., Ivanova, V., Jimenez-Ruiz, E., Kempf,
A., Lambrix, P., et al. (2014). Results of the ontology
alignment evaluation initiative 2014. In International
Workshop on Ontology Matching, pages 61–104.
Euzenat, J. (2004). An API for ontology alignment. In The
Semantic Web–ISWC 2004, pages 698–712. Springer.
Faria, D., Pesquita, C., Santos, E., Cruz, I. F., and Couto,
F. M. (2014). Automatic background knowledge se-